How to clear database before seeding in Rails Is there a simple line of code I can include at the top of my seed file to clear out each table before inserting new seed data, without running any rake commands to roll back tables or databases? I was thinking something like: ``` [Foo, Bar].each{ |klass| klass.destroy_all } ``` The point is that I want each new insertion to start from id: 1. What I want to avoid is clearing a table with 100 rows and then having the new data start from id 101.
**Updated Answer** Install the [Database Cleaner](https://github.com/DatabaseCleaner/database_cleaner) gem *(i.e. add `gem 'database_cleaner'` to your Gemfile and run `bundle install`)*; it cleans your database without affecting your database schema. To clean your database every time you run `rake db:seed`, paste ``` require 'database_cleaner' DatabaseCleaner.clean_with(:truncation) ``` at the top of your seed file. ***It will clear your database and start the id count from `1` again.*** ***Disclaimer:** this updated answer is tested and working on my system.* --- **Previous Untested Answer** Yes, you can do that, but it depends on which database you're using. Below are solutions for some popular databases. In **MySQL**, `TRUNCATE table;` *deletes all rows and resets the auto-increment counter*. In **PostgreSQL**, *it does not do this automatically*; use `TRUNCATE TABLE table RESTART IDENTITY;` instead. In **SQLite**, *there is no **TRUNCATE** statement*; instead, it's ``` DELETE FROM table; DELETE FROM sqlite_sequence WHERE name='table'; ``` **You can also try this** ``` ActiveRecord::Base.connection.tables.each do |table| ActiveRecord::Base.connection.execute("TRUNCATE #{table}") end ``` Pick whichever solution fits your setup and add it to your seed file. *I hope this helps.* *Disclaimer: this part of the answer was shared from memory and was not tested.*
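For context, a minimal sketch of how this could sit in `db/seeds.rb` — the `Foo`/`Bar` models and their `name` attribute are placeholders borrowed from the question, not part of the original answer:

```ruby
# db/seeds.rb
require 'database_cleaner'

# Truncate every table so the auto-increment ids start over from 1.
DatabaseCleaner.clean_with(:truncation)

Foo.create!(name: 'first foo')  # gets id 1
Bar.create!(name: 'first bar')  # gets id 1 in its own table
```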
How to calculate Excel PMT using PHP I have been trying to implement Excel's PMT function in PHP. I have the formula, but the calculation gives an incorrect result. It's a 6% interest rate for a period of 30 years, with a final value of 833333; the right answer should be 10,541. Payments are due at the end of the period, so the type is zero, and the present value is zero. ``` $pv = 0; $fv = 833333; $i = 0.06/12; $n = 360; $pmt = (($pv - $fv) * $i )/ (1 - pow((1 + $i), (-$n))); echo $pmt; ``` [Using this link as a reference for the formula](https://stackoverflow.com/questions/5353511/pmt-in-javascript)
The formula that I use in PHPExcel to reflect MS Excel's PMT formula is: ``` $PMT = (-$fv - $pv * pow(1 + $rate, $nper)) / (1 + $rate * $type) / ((pow(1 + $rate, $nper) - 1) / $rate); ``` where `$rate` is the interest rate per period, `$nper` the number of periods, `$fv` the future value, `$pv` the present value, and `$type` the payment type (0 = due at the end of the period, 1 = due at the beginning). It returns the same result as MS Excel when I use ``` =PMT(6%/12, 360, 0, 833333, 0) ``` and it returns -10540.755358736 (again the same as MS Excel) when I use ``` =PMT(0.06,30,0,833333,0) ```
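For completeness, here is a quick sketch of the question's figures plugged into that formula, using yearly periods since that is what produces the expected ~10,541:

```php
<?php
// 6% per year, 30 yearly periods, no present value, future value 833333,
// payments due at the end of each period (type = 0).
$rate = 0.06;
$nper = 30;
$pv   = 0;
$fv   = 833333;
$type = 0;

$pmt = (-$fv - $pv * pow(1 + $rate, $nper))
     / (1 + $rate * $type)
     / ((pow(1 + $rate, $nper) - 1) / $rate);

echo $pmt; // roughly -10540.76, matching =PMT(0.06,30,0,833333,0)
```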
How to create named code blocks in Intellij? When Intellij generates code for UI forms you can collapse the code block and it is named: "UI Designer generated code". It is possible to collapse a code selection but its collapsed representation is displayed as "...". Is it possible to give it a custom name? It would look similar to a collapsed #region code block in C#.
They have implemented region-support now! From <http://youtrack.jetbrains.com/issue/IDEA-80636> Currently Intellij IDEA supports two basic types of custom folding comments: NetBeans-like: ``` //<editor-fold desc="..."> ... code ... //</editor-fold> ``` And VisualStudio-like: ``` //region <...> ... code ... //endregion ``` Note that line commenting symbols for "region...endregion" can be changed from "//" to "#" if the '#' character is supported by a language. But by default custom folding comments use the same comment characters as normally used for language line comments. Instead of typing the comments manually, in many cases you can use Ctrl+Alt+T (surround with). Please submit separate issues if you find that something doesn't work as expected.
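For example, in a Java file the VisualStudio-style markers could be used like this (the class and its members are made up purely for illustration); everything between the two comments collapses into a single fold labelled "Accessors":

```java
public class Example {
    //region Accessors
    private int count;

    public int getCount() {
        return count;
    }

    public void setCount(int count) {
        this.count = count;
    }
    //endregion
}
```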
AuthenticationManager when updating to Spring-security-3.2.0.RC2 I have updated recently to spring-security-3.2.0.RC2 from RC1, and according to the blog post the QUIESCENT\_POST\_PROCESSOR have been removed. Before I used to create an AuthenticationManager bean like this below: ``` @Bean(name = {"defaultAuthenticationManager", "authenticationManager"}) public AuthenticationManager defaultAuthenticationManager() throws Exception { return new AuthenticationManagerBuilder(null).userDetailsService(context.getBean(MyUserDetailsService.class)).passwordEncoder(new Md5PasswordEncoder()).and().build(); } ``` so I've changed it to: ``` @Autowired public void configureGlobal(AuthenticationManagerBuilder auth) throws BeansException, Exception { auth.userDetailsService(context.getBean(MyUserDetailsService.class)).passwordEncoder(new Md5PasswordEncoder()); } ``` but unfortunately I can't get hold of the AuthenticationManager any more. I'm also creating RememberMeAuthenticationFilter like this: ``` @Bean(name = { "defaultRememberMeAuthenticationFilter", "rememberMeAuthenticationFilter" }) protected RememberMeAuthenticationFilter defaultRememberMeAuthenticationFilter() throws Exception { return new RememberMeAuthenticationFilter(defaultAuthenticationManager(), context.getBean(DefaultRememberMeServices.class)); } ``` so as you can see I need to get hold of AuthenticationManager, but I don't know how???
You really shouldn't need to get a hold of the AuthenticationManager. From [the javadoc of HttpSecurity](http://docs.spring.io/autorepo/docs/spring-security/3.2.x/apidocs/org/springframework/security/config/annotation/web/builders/HttpSecurity.html#rememberMe%28%29) the following should work just fine: ``` @Configuration @EnableWebSecurity public class RememberMeSecurityConfig extends WebSecurityConfigurerAdapter { @Override protected void configure(AuthenticationManagerBuilder auth) throws Exception { auth .inMemoryAuthentication() .withUser("user").password("password").roles("USER"); } @Override protected void configure(HttpSecurity http) throws Exception { http .authorizeRequests() .antMatchers("/**").hasRole("USER") .and() .formLogin() .permitAll() .and() // Example Remember Me Configuration .rememberMe(); } } ``` Of course if you are using global AuthenticationManager, this will work too: ``` @Configuration @EnableWebSecurity public class RememberMeSecurityConfig extends WebSecurityConfigurerAdapter { @Autowired public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception { auth .inMemoryAuthentication() .withUser("user").password("password").roles("USER"); } @Override protected void configure(HttpSecurity http) throws Exception { http .authorizeRequests() .antMatchers("/**").hasRole("USER") .and() .formLogin() .permitAll() .and() // Example Remember Me Configuration .rememberMe(); } } ``` The only difference is the first example isolates the AuthenticationManger to the HttpSecurity where as the second example will allow the AuthenticationManager to be used by global method security or another HttpSecurity (WebSecurityConfigurerAdapter). The reason this works is the .rememberMe() will automatically find the AuthenticationManager, UserDetailsService and use that when creating the RememberMeAuthenticationFilter. It also creates the appropriate RememberMeServices so there is no need to do that. Of course there are additional options on .rememberMe() if you want to customize it, so refer to the [RememberMeConfigurer javadoc](http://docs.spring.io/autorepo/docs/spring-security/3.2.x/apidocs/org/springframework/security/config/annotation/web/configurers/RememberMeConfigurer.html) for additional options. 
If you REALLY need a reference to the AuthenticationManager instance you can do the following: ``` @Configuration @EnableWebSecurity public class RememberMeSecurityConfig extends WebSecurityConfigurerAdapter { @Autowired private AuthenticationManagerBuilder auth; @Autowired public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception { auth .inMemoryAuthentication() .withUser("user").password("password").roles("USER"); } @Bean public AuthenticationManager authenticationManager() { return auth.build(); } @Override protected void configure(HttpSecurity http) throws Exception { http .authorizeRequests() .antMatchers("/**").hasRole("USER") .and() .formLogin() .permitAll() .and() // Example Remember Me Configuration .rememberMe(); } } ``` If you want to have multiple AuthenticationManager instances, you can do the following: ``` @Autowired private ObjectPostProcessor<Object> opp; public AuthenticationManager authenticationManager() throws Exception { return new AuthenticationManagerBuilder(opp) .inMemoryAuthentication() .withUser("user").password("password").roles("USER").and() .and() .build(); } public AuthenticationManager authenticationManager2() throws Exception { return new AuthenticationManagerBuilder(opp) .inMemoryAuthentication() .withUser("admin").password("password").roles("ADMIN").and() .and() .build(); } ``` NOTE This is almost the same as you had things before hand except instead of using the QUIESENT\_POST\_PROCESSOR you are using a real ObjectPostProcessor using the @Autowired annotation PS: Thanks for giving RC2 a try!
ReaderWriterLockSlim LockRecursionPolicy.SupportsRecursion DeadLock I have a queue of actions for database writing which is managed by one dedicated thread, and I have lots of threads that read from the database whenever they want. I am using ReaderWriterLockSlim for READ/WRITE access control. My question is: why is LockRecursionPolicy.SupportsRecursion not recommended? The MSDN docs say: > > The use of recursion is not recommended for new development, because > it introduces unnecessary complications and makes your code more prone > to deadlocks. > > > How can a deadlock occur here? For example, if I try to call EnterReadLock when the write lock is already acquired (and I am under the SupportsRecursion policy) I get an exception...
Lock recursion refers to taking the same lock multiple times on the same thread without leaving the original lock(s). The main issue with this is that to get in that situation in the first place, you probably have serious issues with who handles the necessary synchronization - your locks may be too granular, or too global. Multi-threading is hard, and making it even harder is utter tomfoolery. The second big deal is that locks are tied to threads. However, if you're writing asynchronous code, your code may be jumping between different threads willy-nilly, which may mean that code that *appears* to be taking a recursive lock isn't - the outer lock ends up being owned by a different thread than the inner lock, and you're forever deadlocked with Thread A waiting for Thread B to finish, while B is waiting for A to release the outer lock. You mentioned that `ReaderWriterLockSlim` throws a lot of recursing exceptions even when recursion is enabled. And yes, this means that using recursive locks is a tiny bit safer than when dealing with e.g. `ReaderWriterLock` or `Monitor`. The rules are clearly outlined in MSDN: > > For a ReaderWriterLockSlim that allows recursion, the following can be said about the modes a thread can enter: > > > - A thread in read mode can enter read mode recursively, but cannot enter write mode or upgradeable mode. If it tries to do this, a LockRecursionException is thrown. Entering read mode and then entering write mode or upgradeable mode is a pattern with a strong probability of deadlocks, so it is not allowed. As discussed earlier, upgradeable mode is provided for cases where it is necessary to upgrade a lock. > - A thread in upgradeable mode can enter write mode and/or read mode, and can enter any of the three modes recursively. However, an attempt to enter write mode blocks if there are other threads in read mode. > - A thread in write mode can enter read mode and/or upgradeable mode, and can enter any of the three modes recursively. > - A thread that has not entered the lock can enter any mode. This attempt can block for the same reasons as an attempt to enter a non-recursive lock. > > > A thread can exit the modes it has entered in any order, as long as it exits each mode exactly as many times as it entered that mode. If a thread tries to exit a mode too many times, or to exit a mode it has not entered, a SynchronizationLockException is thrown. > > > They did their best to outright disallow recursion that is pretty much guaranteed to cause deadlocks. However, that doesn't mean that there still aren't deadlocks that go unnoticed (after all, you don't need recursion to cause deadlocks - it just gives you plenty of hard to find opportunities for deadlocks). Not to mention that it's pretty hard to do any consistency guarantees in code that routinely recurses its locks - it may mean that some operations are (semi-)atomic when called from an outer lock, but cease to be when they're invoked directly. Multi-threading is hard enough as is. Don't make it even harder just because your object design is broken :) A great introduction to multi-threading (in general and in .NET in particular) is Joe Albahari's "Threading in C#", available on the internet for free (thanks, Joe!). `ReaderWriterLockSlim` in particular is handled in <http://www.albahari.com/threading/part4.aspx#_Reader_Writer_Locks>
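To make the quoted rules concrete, here is a small stand-alone sketch (my own example, not taken from the question's code) of what `SupportsRecursion` does and does not allow:

```csharp
using System;
using System.Threading;

class RecursionDemo
{
    static readonly ReaderWriterLockSlim Lock =
        new ReaderWriterLockSlim(LockRecursionPolicy.SupportsRecursion);

    static void Main()
    {
        Lock.EnterReadLock();
        Lock.EnterReadLock();          // fine: read-within-read is allowed

        try
        {
            Lock.EnterWriteLock();     // read -> write upgrade is never allowed
        }
        catch (LockRecursionException e)
        {
            Console.WriteLine(e.Message);
        }

        Lock.ExitReadLock();
        Lock.ExitReadLock();           // exit each mode as many times as it was entered
    }
}
```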
Multithreading java optimization In my program I try tro grasp how to use ExecutorService to optimize my program. For some reason, It gets stuck a little on two Urls. The <http://sjsu.edu/> and <https://paypal.com>. When it sits on these two, it does not continue executing other URLS. Should the other 3 threads available not continue even though the two domains aren't responsing fast enough? How is this fixed in the best possible manner? ``` public class SequentialPinger { public static void main(String args[]) throws Exception { String[] hostList = {"http://crunchify.com", "http://yahoo.com", "http://www.ebay.com", "http://google.com", "http://www.example.co", "https://paypal.com", "http://bing.com/", "http://techcrunch.com/", "http://mashable.com/", "http://thenextweb.com/", "http://wordpress.com/", "http://cphbusiness.dk/", "http://example.com/", "http://sjsu.edu/", "http://ebay.co.uk/", "http://google.co.uk/", "http://www.wikipedia.org/", "http://dr.dk", "http://pol.dk", "https://www.google.dk", "http://phoronix.com", "http://www.webupd8.org/", "https://studypoint-plaul.rhcloud.com/", "http://stackoverflow.com", "http://docs.oracle.com", "https://fronter.com", "http://imgur.com/", "http://www.imagemagick.org" }; List<CallableImpl> callList = new ArrayList(); ExecutorService es = Executors.newFixedThreadPool(4); for (String url : hostList) { CallableImpl callable = new CallableImpl(url); callList.add(callable); } for (CallableImpl callableImpl : callList) { System.out.println("Trying to connect to: " + callableImpl.getUrl()); Future<String> lol = es.submit(callableImpl); System.out.println("status: " + lol.get()); } es.shutdown(); } } ``` My Callable implementation ``` public class CallableImpl implements Callable<String> { private final String url; public CallableImpl(String url) { this.url = url; } public String getUrl() { return url; } @Override public String call() { String result = "Error"; try { URL siteURL = new URL(url); HttpURLConnection connection = (HttpURLConnection) siteURL .openConnection(); connection.setRequestMethod("GET"); connection.connect(); int code = connection.getResponseCode(); if (code == 200) { result = "Green"; } if (code == 301) { result = "Redirect"; } } catch (IOException e) { result = "->Red<-"; } return result; } } ```
In your code you submit each `Callable` to the `ExecutorService` one by one and immediately call `Future.get()`, which blocks until the result is ready (or an exception is thrown at runtime). You'd better wrap the `ExecutorService` in a `CompletionService`, which provides results as soon as they are ready, and split the for-loop into two loops: one to submit all the `Callable`s and a second one to check the results. ``` ExecutorService es = Executors.newFixedThreadPool(4); ExecutorCompletionService<String> completionService = new ExecutorCompletionService<>(es); for (CallableImpl callableImpl : callList) { System.out.println("Trying to connect to: " + callableImpl.getUrl()); completionService.submit(callableImpl); } for (int i = 0; i < callList.size(); ++i) { completionService.take().get(); //fetch next finished Future and check its result } ```
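Separately, hosts that respond slowly will still tie up a worker thread until the connection gives up, so it is also worth setting timeouts on the `HttpURLConnection` inside `CallableImpl` (the 5-second values below are arbitrary):

```java
// Inside CallableImpl.call(), before connection.connect():
connection.setConnectTimeout(5000); // give up connecting after 5 seconds
connection.setReadTimeout(5000);    // give up waiting for the response after 5 seconds
```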
React Jest Testing onSubmit I am new to React and Jest. I have been looking everywhere for testing examples but I cannot find anything that is helpful. This is partially because I am so new to it that I haven't a clue where to start, so bear with me, please. I have an add-to-cart file which renders a form with a button inside it. The button is another component, so I'm not looking to test it; I have to test the onSubmit function for the form. Any thoughts? References? Here is my code so far for the test: ``` describe('AddToCart', () => { const React = require('react'); const BaseRenderer = require('react/lib/ReactTestUtils'); const Renderer = BaseRenderer.createRenderer(); const ReactTestUtils = require('react-addons-test-utils'); const AddToCart = require('../index.js').BaseAddToCart; it('Will Submit', () => { formInstance = ReactTestUtils.renderIntoDocument(<AddToCart product="" quantity=""/>); expect(ReactTestUtils.Simulate.onSubmit(formInstance)).toBeCalled(); }); }); ``` I'm getting this error: ``` Invariant Violation: Element type is invalid: expected a string (for built-in components) or a class/function (for composite components) but got: undefined. ```
Consider using [Jest](https://facebook.github.io/jest/docs/tutorial-react.html) with [Enzyme](http://airbnb.io/enzyme/docs/api/). I think it's a good stack for unit testing in React. Also, here is a sample test I made that tests the onSubmit function in a LogIn component. ``` import React from 'react'; import {shallow} from 'enzyme'; import LogIn from './LogIn'; describe('<LogIn />', () => { const testValues = { username: 'FOO', password: 'BAZ', handleSubmit: jest.fn(), }; it('Submit works', () => { const component = shallow( <LogIn {...testValues} /> ); component.find('#submitButton').simulate('click'); expect(testValues.handleSubmit).toHaveBeenCalledTimes(1); expect(testValues.handleSubmit).toBeCalledWith({username: testValues.username, password: testValues.password}); }); }); ```
How to setup jsdom when working with jest I'm trying to migrate from AVA to Jest. In AVA you can set `ava.setup`, in which you set the `jsdom` environment. For example, creating the DOM structure and doing necessary polyfills (localStorage). How do I accomplish that in Jest? Currently, I'm using `beforeEach` in each test suite, which doesn't feel like the best solution. Thanks in advance!
Great question. Jest actually ships with `jsdom` and the environment already configured. You can override it with the `testEnvironment` [setting](https://facebook.github.io/jest/docs/en/configuration.html#testenvironment-string). If you need to set up more aspects of the environment though, you can use the `setupTestFrameworkScriptFile` [setting](https://facebook.github.io/jest/docs/en/configuration.html#setuptestframeworkscriptfile-string) to point to a file that executes before all of your tests run. For example, if you need `window.yourVar` to be available on the window for all your tests, you would add this to your `package.json`: ``` "jest": { "setupTestFrameworkScriptFile": "tests/setup.js" } ``` And in tests/setup.js: ``` Object.defineProperty(window, 'yourVar', { value: 'yourValue' }); ```
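Since the question mentions a localStorage polyfill specifically, the same setup file could also provide a rough stub like the one below — this is only a sketch sufficient for simple reads and writes, and a published mock package may serve you better:

```js
// tests/setup.js — a very small localStorage stand-in
const storage = {};

Object.defineProperty(window, 'localStorage', {
  value: {
    getItem: key => (key in storage ? storage[key] : null),
    setItem: (key, value) => { storage[key] = String(value); },
    removeItem: key => { delete storage[key]; },
    clear: () => { Object.keys(storage).forEach(key => delete storage[key]); },
  },
});
```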
Android, detect local IP and subnet mask for WiFi, both while tethering and connected to access point I need to detect the local IP address **and subnet mask** on the WiFi network, on an Android device (in order to properly calculate the UDP broadcast address strictly for the local subnet). When the device is connected to an Access Point, the following works properly: ``` // Only works when NOT tethering WifiManager wifi = (WifiManager) context.getSystemService(Context.WIFI_SERVICE); DhcpInfo dhcp = wifi.getDhcpInfo(); if (dhcp == null) throw new IOException("No DHCPInfo on WiFi side."); foo(dhcp.ipAddress, dhcp.netmask); ``` But it doesn't work when the Android device is the one providing an Access Point through tethering: DhcpInfo seems to contain info set by the DHCP server when the Android device is a client of it, not when it's the Android device itself providing the DHCP service. When tethering, the most promising solution I could find is: ``` // No way to get subnet mask WifiManager wifi = (WifiManager) context.getSystemService(Context.WIFI_SERVICE); WifiInfo info = wifi.getConnectionInfo(); if (info == null) throw new IOException("No connection info on WiFi side."); foo(info.getIpAddress(), info.??? /* netmask*/ ); ``` **EDIT**: WRONG, in my tests even this only works when NOT tethering; while tethering the IP is always 0. But there's nothing like `WifiInfo.getNetMask()`, so how can I get the subnet mask in that case? (This absence strikes me as really strange, since there's a plethora of other info there. Am I missing something obvious?) Also, ideally I'd like a solution that doesn't need to discriminate whether the Android device is tethering, and just gets the local IP address and subnet mask on the WiFi network in any case, both when the Android device is providing an Access Point and when it is a client of one. Even standard Java (i.e. not Android-specific) `NetworkInterface.getNetworkInterfaces()` doesn't seem to have a way to get the subnet mask (apart from not allowing you to discriminate which interface corresponds to the WiFi). What am I missing?
**Best solution I found at the moment:** It baffles me how info about tethering is so cumbersome and hidden to get, and yet not taken into consideration when you get info from `WifiManager` or `ConnectivityManager` for the WiFi type: those only work when NOT tethering. I've given up on that branch of investigation. The best solution I've found so far is to use standard Java `NetworkInterface.getNetworkInterfaces()` instead of any Android API. **Experimentally, Android seems smart enough to return a null broadcast address for the interfaces facing the external mobile network.** That actually makes a lot of sense, since Android silently drops UDP broadcasts involving the external mobile network. ``` // This works both in tethering and when connected to an Access Point Enumeration<NetworkInterface> interfaces = NetworkInterface.getNetworkInterfaces(); while (interfaces.hasMoreElements()) { NetworkInterface networkInterface = interfaces.nextElement(); if (networkInterface.isLoopback()) continue; // Don't want to broadcast to the loopback interface for (InterfaceAddress interfaceAddress : networkInterface.getInterfaceAddresses()) { InetAddress broadcast = interfaceAddress.getBroadcast(); // InetAddress ip = interfaceAddress.getAddress(); // interfaceAddress.getNetworkPrefixLength() is another way to express the subnet mask // Android seems smart enough to return a null broadcast for interfaces // to the external mobile network. It makes sense since Android silently // drops UDP broadcasts involving the external mobile network. if (broadcast == null) continue; ... // Use the broadcast } } ``` As for the subnet mask, the result of `getNetworkPrefixLength()` can be coerced into a subnet mask; I used `getBroadcast()` directly since that was my ultimate goal. No special permissions seem to be needed for this code (no `ACCESS_WIFI_STATE` nor `NETWORK`, just `INTERNET`). Primary reference for the code snippet: <http://enigma2eureka.blogspot.it/2009/08/finding-your-ip-v4-broadcast-address.html>
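And since the question is ultimately about the subnet mask, here is a small helper of my own (not part of the original snippet) that turns `getNetworkPrefixLength()` into a dotted IPv4 mask:

```java
// Convert an IPv4 prefix length (e.g. 24) into a subnet mask string (e.g. "255.255.255.0").
static String prefixLengthToMask(short prefixLength) {
    int mask = prefixLength == 0 ? 0 : 0xFFFFFFFF << (32 - prefixLength);
    return String.format("%d.%d.%d.%d",
            (mask >>> 24) & 0xFF,
            (mask >>> 16) & 0xFF,
            (mask >>> 8) & 0xFF,
            mask & 0xFF);
}
```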
Finding the Emacs site-lisp directory I am trying to make my Emacs configuration file written for OS X work on Ubuntu. I have this line: ``` (add-to-list 'load-path "/usr/local/Cellar/emacs/23.3/share/emacs/site-lisp/w3m") ``` It is used to load emacs-w3m. On OS X I installed Emacs using Homebrew, thus it is in /usr/local/Cellar/. The site-lisp directory on Ubuntu is in a different place. How can I write this line in a way that will work on both operating systems? Is there an Emacs Lisp function to retrieve the site-lisp directory?
No, there's no way. The site-lisp directory is a convention and only its existence, not its path, is agreed on. Either you set a symbolic link on your Mac/Ubuntu or you use a system switch: ``` (defconst my-lisp-dir (cond ((equal system-type 'gnu/linux) "/usr/share/emacs/site-lisp/") ((equal system-type 'darwin) (concat "/usr/local/Cellar/emacs/" (number-to-string emacs-major-version) "." (number-to-string emacs-minor-version) "/share/emacs/site-lisp/")) (t "/usr/local/emacs/site-lisp/"))) ``` and then ``` (add-to-list 'load-path (concat my-lisp-dir "w3m")) ```
How to find the last branch checked out in git We can check out the last branch using `git checkout -`, but is there a way to just find out what the last branch was, without checking it out? EDIT: I already found that I could use: ``` git reflog | grep -i "checkout: moving"|head -1|cut -d' ' -f6 ``` But I wanted to know if there is a more direct, simpler command. I am updating the question to reflect this need. Sorry about not being clear enough.
Your sample output (as produced by `git reflog | ...`) makes it sufficiently clear. The `git rev-parse` command can be combined with the reference lookup syntax to do this in one go: ``` $ git rev-parse --symbolic-full-name @{-1} refs/heads/stash-exp $ git rev-parse --abbrev-ref @{-1} stash-exp ``` Note that [the `gitrevisions` documentation](https://www.kernel.org/pub/software/scm/git/docs/gitrevisions.html) describes the `@{-*N*}` syntax. Note as well that if there is no N'th previous branch, `rev-parse` silently prints nothing at all: ``` $ git rev-parse --abbrev-ref @{-2} && echo ok || echo fail master ok $ git rev-parse --abbrev-ref @{-3} && echo ok || echo fail ok ``` And, of course, in most places where you might need the name, you can just use the `@{-1}` syntax directly.
Does compass-rails support Ruby on Rails 4.0? I have clean new Rails 4 app with Gemfile: ``` #default gems gem 'compass-rails' gem 'zurb-foundation' gem 'thin' ``` with style.scss: ``` @import "compass"; @import "foundation/variables"; $red: rgb(255,0,1); $green: rgb(51,153,50); $body-bg: #F4F4F4; $body-font-color: #7B7B7B; $primary-color: #999; $secondary-color: #0CC; $dark-color: #393939; $block-container-border-color: rgb(218,218,218); $block-container-shadow-color: rgb(208,208,208); // main background html{ background:image-url('bckg.jpg'); } body{ width:1000px; margin:0 auto; @include box-shadow(0px 0px 32px -5px #000); } ``` And I have this error: ``` Showing /Users/quatermain/Projects/rails40/app/views/layouts/application.html.erb where line #18 raised: File to import not found or unreadable: compass. Load paths: /Users/quatermain/Projects/rails40/app/assets/images /Users/quatermain/Projects/rails40/app/assets/javascripts /Users/quatermain/Projects/rails40/app/assets/stylesheets /Users/quatermain/Projects/rails40/vendor/assets/javascripts /Users/quatermain/Projects/rails40/vendor/assets/stylesheets /usr/local/rvm/gems/ruby-1.9.3-p392/gems/turbolinks-1.2.0/lib/assets/javascripts /usr/local/rvm/gems/ruby-1.9.3-p392/gems/jquery-rails-3.0.1/vendor/assets/javascripts /usr/local/rvm/gems/ruby-1.9.3-p392/gems/coffee-rails-4.0.0/lib/assets/javascripts /usr/local/rvm/gems/ruby-1.9.3-p392/gems/zurb-foundation-4.2.3/scss /usr/local/rvm/gems/ruby-1.9.3-p392/gems/zurb-foundation-4.2.3/js (in /Users/quatermain/Projects/rails40/app/assets/stylesheets/style.scss:5) ``` Is Rails 4 not currently supported by `compass-rails`?
Compass needs to have some key parts rewritten in order to support Rails 4. There is a temporary branch you can use that has hacked together some support: `gem "compass-rails", github: "milgner/compass-rails", ref: "1749c06f15dc4b058427e7969810457213647fb8"` You can follow <https://github.com/Compass/compass-rails/pull/59> for the latest updates. Update: There is now a version in alpha. `gem "compass-rails", "~> 2.0.alpha.0"` Update 2: compass-rails is no longer in alpha. Add the following to your Gemfile and run `bundle install`. ``` gem "compass-rails", "~> 1.1.2" ```
How to get all commits in a Git tag through the GitHub API I need to fetch all the new commits that became part of a tag when that tag was created on a Git repo. This needs to be done through the GitHub API. For example, the Git UI says Tagging Tag1 and has a sha associated with it... let's say the sha is SHA1. Now how do I get all the commits which happened as part of Tag1 through the GitHub API? I want to store all these commits and perform some analysis on them.
Based on the clarification on your comment: > > I want to get all commits between this newly created tag and previous tag > > > **1. Get all the tags in a given repo, so you can get the current and the previous tag names** ``` curl -X "GET" "https://api.github.com/repos/:owner/:repo/tags" \ -H "Authorization: token YOUR_GITHUB_ACCESS_TOKEN" ``` [![tags](https://i.stack.imgur.com/oZ65b.png)](https://i.stack.imgur.com/oZ65b.png) **2. Get all the commits between the latest 2 tags** ``` curl -X "GET" "https://api.github.com/repos/:owner/:repo/compare/:tag_1...:tag_2" \ -H "Authorization: token YOUR_GITHUB_ACCESS_TOKEN" ``` [![enter image description here](https://i.stack.imgur.com/VsXjv.png)](https://i.stack.imgur.com/VsXjv.png) Doc links: - <https://developer.github.com/v3/repos/#list-tags> - <https://developer.github.com/v3/repos/commits/#compare-two-commits>
Why don't developers make installation automatic on windows? *This is the **inverse** to "**[Why don't developers make installation wizards on linux?](https://softwareengineering.stackexchange.com/questions/256833/why-dont-developers-make-installation-wizards-on-linux)**", which is interesting, but made me think "Automatic installation is the natural way. Why do they use wizards?". So here is the inverse question:* I'm sure it's not about laziness, or anything like that, but I fail to understand why developers, of even mainly consumer facing apps, don't make a fully automatic sort of installation where you are not bothered at all. The same apps usually have automatic installation on Linux, so why not Windows and Mac OS? Is there any technical reason for this trend, or is it just convention?
### Informed Consent Users should be able to decide, first of all, whether they even want the program to be installed on their computer or not. It may seem self-evident to you that people are obviously choosing to install a program, but the prime characteristic of a malicious program is that it can be installed without the computer user knowing about it. Informed consent is made even more explicit through [UAC](http://en.wikipedia.org/wiki/User_Account_Control). ### License Agreement Most modern software follows a "click-through" model for licensing; that is, the user agrees to the terms of the license during the installation process as a condition of installing the program. That users seldom read these agreements doesn't mean they're not bound by them, especially if they have clicked the checkbox labeled "I agree to these terms." ### Configuring Options Many software packages have options that allow you to change the way the software is installed in certain ways. The most trivial of these lets you decide whether or not you want an icon on the desktop, but in larger applications you can decide which features you want installed. ### Installation Progress While programs in the Windows ecosystem are getting better at being less intrusive during the installation process (e.g. registry-free installation), installation is still often a non-trivial operation. Progress bars and other visual aids give an indication that something is actually happening. The final page in the wizard tells you whether or not the installation succeeded. ### Getting Started Finally, the best software packages tell you what to do next. What are the first steps, how to get started, how to get help. Most software, when installed, leaves you with a startup icon, and that's it. Never overestimate the level of expertise of your users; as incredible as it may seem to you, there are still folks that don't know how to find and start software programs they just installed.
DocumentViewer to RichTextBox Binding Error I have an application with a RichTextBox and a DocumentViewer (placed in a TabControl), and I want to make something like a "hot preview". I've bound the `DocumentViewer.Document` property to `RichTextBox.Document`. Binding: `<DocumentViewer Document="{Binding Document, Converter={StaticResource FlowDocumentToPaginatorConverter}, ElementName=mainRTB, Mode=OneWay}" />` And this is the converter code: ``` public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { FlowDocument d = value as FlowDocument; DocumentPaginator pagin = ((IDocumentPaginatorSource)d).DocumentPaginator; FixedDocumentSequence result = null; Size s = new Size(793.700787402, 1122.519685039); pagin.PageSize = s; using (MemoryStream ms = new MemoryStream()) { TextRange tr = new TextRange(d.ContentStart, d.ContentEnd); tr.Save(ms, DataFormats.XamlPackage); Package p = Package.Open(ms, FileMode.Create, FileAccess.ReadWrite); Uri uri = new Uri(@"memorystream://doc.xps"); PackageStore.AddPackage(uri, p); XpsDocument xpsDoc = new XpsDocument(p); xpsDoc.Uri = uri; XpsDocument.CreateXpsDocumentWriter(xpsDoc).Write(pagin); result = xpsDoc.GetFixedDocumentSequence(); } return result; } ``` When I start the application everything is OK until I switch to the tab with the DocumentViewer. The application crashes and I get this exception: > > Cannot perform a read operation in write-only mode. > > > What am I doing wrong? Is it possible to make this binding work?
The error message is indeed confusing and reason not immediately obvious. Basically you are closing the `MemoryStream` that holds `XpsDocument` too early and when the `DocumentViewer` attempts to read the document it cannot as it is write-only mode (because the stream was closed). The solution is to not immediately close the `MemoryStream` until **after** you have finished viewing the document. To achieve this I wrote an `XpsDocumentConverter` that returns `XpsReference`. Also, as you never been able to convert and display a single `XpsDocument` you won't have yet encountered the next issue of having multiple packages in the `PackageStore` with the same `Uri`. I have taken care of this in my implementation below. ``` public static XpsDocumentReference CreateXpsDocument(FlowDocument document) { // Do not close the memory stream as it still being used, it will be closed // later when the XpsDocumentReference is Disposed. MemoryStream ms = new MemoryStream(); // We store the package in the PackageStore Uri uri = new Uri(String.Format("pack://temp_{0}.xps/", Guid.NewGuid().ToString("N"))); Package pkg = Package.Open(ms, FileMode.Create, FileAccess.ReadWrite); PackageStore.AddPackage(uri, pkg); XpsDocument xpsDocument = new XpsDocument(pkg, CompressionOption.Normal, uri.AbsoluteUri); // Need to force render the FlowDocument before pagination. // HACK: This is done by *briefly* showing the document. DocumentHelper.ForceRenderFlowDocument(document); XpsSerializationManager rsm = new XpsSerializationManager(new XpsPackagingPolicy(xpsDocument), false); DocumentPaginator paginator = new FixedDocumentPaginator(document, A4PageDefinition.Default); rsm.SaveAsXaml(paginator); return new XpsDocumentReference(ms, xpsDocument); } public class XpsDocumentReference : IDisposable { private MemoryStream MemoryStream; public XpsDocument XpsDocument { get; private set; } public FixedDocument FixedDocument { get; private set; } public XpsDocumentReference(MemoryStream ms, XpsDocument xpsDocument) { MemoryStream = ms; XpsDocument = xpsDocument; DocumentReference reference = xpsDocument.GetFixedDocumentSequence().References.FirstOrDefault(); if (reference != null) FixedDocument = reference.GetDocument(false); } public void Dispose() { Package pkg = PackageStore.GetPackage(XpsDocument.Uri); if (pkg != null) { pkg.Close(); PackageStore.RemovePackage(XpsDocument.Uri); } if (MemoryStream != null) { MemoryStream.Dispose(); MemoryStream = null; } } } ``` `XpsReference` implements `IDisposable` so remember to call `Dispose()` on it. Also, once you resolve the above error the next problem you are likely to encounter will be content not rendering as you would expect. This is caused by the fact you need to clone `FlowDocument` and it has not undergone a full measure and arrange layout pass. Read [Printing BlockUIContainer to XpsDocument/FixedDocument](https://stackoverflow.com/questions/9447338/printing-blockuicontainer-to-xpsdocument-fixeddocument) on how to solve this.
How to open rabbitmq in browser using docker container? This was probably asked already, but so far I can't find any detailed explanation at all, and the existing documentation seems as if it was written for some kind of psychic who is supposed to know everything. As per this [manual](https://hub.docker.com/_/rabbitmq/), I added the container ``` docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:latest ``` Then I inspected it to get the container IP ``` docker inspect some-rabbit ``` checked the ports with ``` docker ps ``` and tried to connect in the browser with this formula ``` https://{container-ip}:{port} ``` It didn't work. Am I doing something wrong, or am I supposed to add something additional, like a container for Apache or other stuff? **EDIT** As I understand it, after creating the some-rabbit container, I now need to run a Dockerfile to create an image? (This whole thing is confusing to me.) How am I supposed to do that? I mean, I saw the command `docker build -f /path/to/a/Dockerfile`, but if, for example, I placed the Dockerfile in the path `D:\Docker\rabbitmq`, how am I supposed to get there? (The path doesn't seem to be recognized.)
You are using the wrong image which doesn't have the rabbitmq\_management plugin enabled. Change `rabbitmq:latest` to `rabbitmq:management`. On [dockerhub](https://hub.docker.com/_/rabbitmq/) they are using the command: ``` docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3-management ``` If you want to go to the UI on `localhost:15672` make sure to expose the port by adding `-p 15672:15672` to the above command. The management image is just the rabbitmq latest image with the management plugin enabled. Here is the dockerfile for `rabbitmq:management` ``` FROM rabbitmq RUN rabbitmq-plugins enable --offline rabbitmq_management EXPOSE 15671 15672 ```
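Putting the two pieces together (the hostname and container name are just the ones from the example above):

```
docker run -d --hostname my-rabbit --name some-rabbit -p 15672:15672 rabbitmq:3-management
```

The management UI should then answer at `http://localhost:15672` (plain `http`, not `https`), and the broker's default login is `guest` / `guest`.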
Suppressed exceptions in Java Consider the example below. ``` public static void main(String[] args) { try{ throw new NullPointerException(); } finally{ throw new ClassCastException(); } } ``` This throws only a ClassCastException, and the NullPointerException is suppressed. And this is possible in Java SE 6 too. So why are suppressed exceptions emphasized only in Java SE 7? What is it that I am missing here?
> > How is it different from this? Isn't it just 2 exceptions thrown with the latest one suppressing the previous one. > > > What you are actually doing is better described as "replacing" one exception with another. (Or more precisely, discarding the first exception and throwing the second exception unconditionally!) Exception suppression (as the JLS uses the term) is different in the following respects: 1. Normal suppression happens (automatically) only to exceptions that are thrown *in* a `try-with-resources`, during resource cleanup *by* the `try-with-resources`. What you are doing in your example happens to any / all exceptions, irrespective of their origin. 2. With normal suppression, the secondary exception is suppressed. In this case the primary exception is being "suppressed". 3. With normal suppression, the suppressed exception is recorded in the original exception. In your example, it is discarded. --- Now, admittedly, you *could* use the `Throwable.addSuppressed(...)` method by hand in other ways. I'm describing how it is *intended* to be used.
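For contrast, here is a minimal, self-contained illustration (my own example, not from the question) of the Java 7 behaviour the term actually refers to — the failure in `close()` is attached to the primary exception instead of replacing it:

```java
public class SuppressionDemo {

    static class Resource implements AutoCloseable {
        @Override
        public void close() {
            throw new IllegalStateException("failure in close()");
        }
    }

    public static void main(String[] args) {
        try (Resource r = new Resource()) {
            throw new NullPointerException("primary failure");
        } catch (Exception e) {
            System.out.println(e);                       // java.lang.NullPointerException: primary failure
            for (Throwable s : e.getSuppressed()) {
                System.out.println("suppressed: " + s);  // the IllegalStateException from close()
            }
        }
    }
}
```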
Fastest way to convert string like id=1&type=2 into array in PHP? I need to change it into : ``` $arr['id']=1; $arr['type']=2; ```
Use: [parse\_str()](http://us.php.net/manual/en/function.parse-str.php). ``` void parse_str(string $str [, array &$arr]) ``` > > Parses str as if it were the query string passed via a URL and sets variables in the current scope. > > > Example: ``` <?php $str = "first=value&arr[]=foo+bar&arr[]=baz"; parse_str($str); echo $first; // value echo $arr[0]; // foo bar echo $arr[1]; // baz parse_str($str, $output); echo $output['first']; // value echo $output['arr'][0]; // foo bar echo $output['arr'][1]; // baz ?> ```
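Applied to the exact string from the question:

```php
<?php
parse_str("id=1&type=2", $arr);

echo $arr['id'];   // 1
echo $arr['type']; // 2
?>
```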
Is coding style in organizations an optional thing? [This programming style document](http://geosoft.no/development/cppstyle.html#Recommendation) has a general rule, that says : > > The rules can be violated if there are strong personal objections > against them. > > > This collides with the way I am thinking, and there are many articles saying that coding style is actually important. For example [this](http://paul-m-jones.com/archives/34) says: > > A coding standards document tells developers how they must write their > code. Instead of each developer coding in their own preferred style, > they will write all code to the standards outlined in the document. > This makes sure that a large project is coded in a consistent style — > parts are not written differently by different programmers. Not only > does this solution make the code easier to understand, it also ensures > that any developer who looks at the code will know what to expect > throughout the entire application. > > > So, am I misunderstanding something from [this document](http://geosoft.no/development/cppstyle.html#Recommendation) and the quote at the top of this question? Can people really just ignore coding style? --- Maybe I wasn't clear enough, so with this edit, I am going to clarify a bit. I am writing the coding style document for our team, and I want to check the style using some static analyzers. If it fails, Jenkins will send emails. And I want to fail the code review, if the style doesn't match. This clearly collides with the first quote. But then, if the quote is right, what is the use of the coding style document, if anyone can do whatever they want?
As far as I can tell, the statement that confused you is a pragmatic compromise made in order for the guidelines to serve as wide an audience as possible. Depending on your specific context (more on that below) you may have an option to adjust it and make more efficient use of the guidelines. You see, guidelines refer to "strong personal objections" as a means to justify violation. Such objections are not something to ignore lightly, especially if these are coming from experienced developers. These objections *may* be wrong, mind you, but (and this is a very very BIG BUT) they may also indicate that a particular rule is wrong - either generally or in the specific project's context (one example of rule misfit is a requirement to provide detailed logging in performance critical code). I think that any sensible style guide should take the above into account and try to accommodate a possible need to adjust itself. Now, if the guide that confused you was targeted *only* to mature teams with efficient and smooth processes and environment, it could be stated much less ambiguously, for example like this: > > The rules should be followed strictly, unless a challenge is raised against them - in which case challenged rule should stay ignored until this is resolved - either by rejecting the challenge or by accepting it and adjusting the rules to fit. > > > You might like the above better and you may wish it to be that way everywhere, for everyone, but look closer into that "challenge is raised / stay ignored / adjust" part and ask yourself how it can be implemented. Ask yourself *how long it may take* depending on the project and team. If it takes an hour, is that acceptable? What if it takes a day, or a week, or... a month? You see, that challenge-and-ignore-until-resolved approach could open a wide door for abuse if it was presented as a guide for any project. *"Yeah yeah we hear you, let's do it how the guide says. First, fill out this challenge form and get CEO / CFO / CTO approvals; expect this to take a week or two. After that, wait until we update our code checks; that may take another week or two. Meanwhile, please make sure that your performance critical code vomits properly formatted logging statements about every register move."* I can't read the guide authors' minds but it looks reasonable to assume that they wanted to avoid using it to justify a mess as described above. From this perspective it is simply safer to clearly state that the guide does not assume any enforcement - this way, however clumsy, still allows it to be usable for an arbitrarily wide range of teams and projects. There is probably an expectation that such a wide allowance leaves more mature and efficient teams the opportunity to reasonably narrow it down without damaging developer productivity. --- Applied to your specific case, writing the coding style document for your team and failing the code review if the style doesn't match - I think you need to figure how long it might take for developers to challenge a particular rule, get it ignored, resolved, and have it either changed or recovered depending on resolution. If you figure a way to make this process work without introducing many obstacles into your development workflow, then a formalized and easy to track challenge / resolution approach is indeed worth considering instead of the chaotic "violate if you cry loud enough". 
--- As a side note, I would like to address what you wrote in [another comment](https://softwareengineering.stackexchange.com/questions/318404/is-coding-style-in-organizations-an-optional-thing#comment675204_318407), "Assume that the coding style is ideal, and if that is not the case etc." This is a dangerous assumption, really. I broke my nose on it (twice! in a single project! where I had vast experience and imagined that I knew everything about it, go figure) and I strongly recommend you drop it. It is safer to assume that the style guide may have mistakes and put an effort into thinking about what to do in case such mistakes are discovered.
Sublime Text-like method browser in Vim Sublime Text has a shortcut Super+R which opens a method browser listing all methods in the current class (Ruby). Is there a plugin to get similar functionality in Vim? "/def " or "m]" work only if you're familiar with the class and know what method you want to go to, whereas Super+R works for just exploring a class.
The [TagList](http://www.vim.org/scripts/script.php?script_id=273) plugin is another (very popular) option. There are a bunch of [others](http://www.vim.org/scripts/script_search_results.php?keywords=tags&script_type=&order_by=creation_date&direction=descending&search=search). FYI, `/def` and `m]` are not the equivalent of Sublime Text's `Ctrl`+`R`. That would be `:tag foo` which you can tab-complete if you don't know all the names of your methods. While we are at it, the [CtrlP](http://www.vim.org/scripts/script.php?script_id=3736) plugin has a feature very similar to Sublime Text's `Ctrl`+`R`: `:CtrlpBufTag` that I use hundreds of times a day. All of these methods depend on the presence of a `tags` file generated by [Exuberant Ctags](http://ctags.sourceforge.net/) or some compatible program. Actually, tags are quite an important part of the Vim experience.
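For reference, all of these rely on a `tags` file that you generate yourself; from the project root, something as simple as the following is enough (re-run it, or automate it, whenever the code changes):

```
ctags -R .
```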
MongoDB $elemMatch projection on Nested Arrays I have a collection (summary) like this. ``` { "id":"summaryid", "locations": [ { "id": "loc1", "datacenters": [ { "id": "dc1.1", "clusters": [ { "id": "cl1.1", "servers": [ { "id": "srvr1.1", "services": [ { "id": "srvc1.1" } ] } ] } ] }, { "id": "dc1.2", "clusters": [ { "id": "cl1.2", "servers": [ { "id": "srvr1.2", "services": [ { "id": "srvc1.2" } ] } ] } ] } ] }, { "id": "loc2", "datacenters": [ { "id": "dc2.1", "clusters": [ { "id": "cl2.1", "servers": [ { "id": "srvr2.1", "services": [ { "id": "srvc2.1" } ] } ] } ] }, { "id": "dc2.2", "clusters": [ { "id": "cl2.2", "servers": [ { "id": "srvr2.2", "services": [ { "id": "srvc2.2" } ] } ] } ] } ] } ] } ``` Now I want only the clusters that are for datacenter with id dc1.1. I would like to exclude servers for the clusters. I have tried using find query with $elemMatch and projections as below. ``` db.summary.find({}, {"locations": { $elemMatch: { "datacenters._id" : "dc1.1" } }, "locations.datacenters.clusters":0, "locations.datacenters.servers":0, "locations.datacentercount" : 0, "locations.clustercount" : 0, "locations.servercount" : 0}).pretty() ``` I am still getting all the datacenters instead of just 1 that matches the id. I am not sure if I am doing this right. Thank you!
It is not possible to project a nested array element with `$elemMatch`. You can try the aggregation below on a 3.4 server. Use `$unwind` a couple of times to reach the nested array and apply `$match` to pick the nested array element. ``` db.summary.aggregate([ { "$match": { "locations.datacenters._id": "dc1.1" } }, { "$unwind": "$locations" }, { "$unwind": "$locations.datacenters" }, { "$match": { "locations.datacenters._id": "dc1.1" } }, { "$project": { "locations.datacenters.clusters.servers": 0 } } ]) ``` `{"$project": {"locations.datacenters.clusters.servers": 0}}` will remove the `servers` field while keeping all the other fields in the final output. From the docs: > > If you specify the exclusion of a field other than \_id, you cannot > employ any other $project specification forms: i.e. if you exclude > fields, you cannot also specify the inclusion of fields, reset the > value of existing fields, or add new fields. > > > Reference: <https://docs.mongodb.com/manual/reference/operator/aggregation/project/#exclude-fields>
AWT XOR SWING together in one application? I am looking into developing a Java GUI for a biological analysis tool. My question is: can I use both the AWT and Swing libraries under the same Model-View-Controller design pattern? Or are they two libraries controlled in distinct ways? I would like to know if I can bring them in under the same roof for the purpose of reusing existing code written with both libraries. Thank you very much for your time.
Swing is built on top of AWT; you can mix Swing and AWT and it will technically work, but with some limitations: - heavyweight vs lightweight components: components in AWT are heavyweight - they correspond to a native OS window. This means that all AWT components appear above sibling Swing components. (E.g. put an AWT List and a Swing JList in the same container, and the List will always appear above the JList.) - Look and Feel: the AWT components look and feel native, since they are native components. The Swing components have a pluggable look and feel, which defines their appearance and behavior. If you choose the L&F to match the native platform, these components are only "imitations" of the native look and feel, and can behave differently from their AWT counterparts. (E.g. JButton vs the AWT Button.) For these reasons, it might be wise to use just one UI library, presumably Swing. EDIT: JDK 6 (Update 12) offers seamless integration of heavyweight and lightweight components, so mixing will work seamlessly. So the first point is no longer true - but having an inconsistent look and feel between the two UI toolkits still stands.
How to use AttentionMechanism with MultiRNNCell and dynamic\_decode? I want to create a multi-layered dynamic RNN-based decoder that uses an attention mechanism. To do this, I first create an attention mechanism: ``` attention_mechanism = BahdanauAttention(num_units=ATTENTION_UNITS, memory=encoder_outputs, normalize=True) ``` Then I use the `AttentionWrapper` to wrap a LSTM cell with the attention mechanism: ``` attention_wrapper = AttentionWrapper(cell=self._create_lstm_cell(DECODER_SIZE), attention_mechanism=attention_mechanism, output_attention=False, alignment_history=True, attention_layer_size=ATTENTION_LAYER_SIZE) ``` where `self._create_lstm_cell` is defined as follows: ``` @staticmethod def _create_lstm_cell(cell_size): return BasicLSTMCell(cell_size) ``` I then do some bookkeeping (e.g. creating my `MultiRNNCell`, creating an initial state, creating a `TrainingHelper`, etc.) ``` attention_zero = attention_wrapper.zero_state(batch_size=tf.flags.FLAGS.batch_size, dtype=tf.float32) # define initial state initial_state = attention_zero.clone(cell_state=encoder_final_states[0]) training_helper = TrainingHelper(inputs=self.y, # feed in ground truth sequence_length=self.y_lengths) # feed in sequence lengths layered_cell = MultiRNNCell( [attention_wrapper] + [ResidualWrapper(self._create_lstm_cell(cell_size=DECODER_SIZE)) for _ in range(NUMBER_OF_DECODER_LAYERS - 1)]) decoder = BasicDecoder(cell=layered_cell, helper=training_helper, initial_state=initial_state) decoder_outputs, decoder_final_state, decoder_final_sequence_lengths = dynamic_decode(decoder=decoder, maximum_iterations=tf.flags.FLAGS.max_number_of_scans // 12, impute_finished=True) ``` But I receive the following error: `AttributeError: 'LSTMStateTuple' object has no attribute 'attention'`. What is the correct way to add an attention mechanism to a MultiRNNCell dynamic decoder?
Have you tried using the [attention wrapper](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/AttentionCellWrapper) provided by tf.contrib? Here is an example using both an attention wrapper and dropout: ``` cells = [] for i in range(n_layers): cell = tf.contrib.rnn.LSTMCell(n_hidden, state_is_tuple=True) cell = tf.contrib.rnn.AttentionCellWrapper( cell, attn_length=40, state_is_tuple=True) cell = tf.contrib.rnn.DropoutWrapper(cell,output_keep_prob=0.5) cells.append(cell) cell = tf.contrib.rnn.MultiRNNCell(cells, state_is_tuple=True) init_state = cell.zero_state(batch_size, tf.float32) ```
Visual Studio 6 Windows Common Controls 6.0 (sp6) Windows 7, 64 bit I was asked to help work on a legacy vb6 application for someone, so I decided to toss Visual Studio 6 on my Windows 7 x64 laptop following [this guide](http://www.fortypoundhead.com/showcontent.asp?artid=20502). It installed fine, and from I can see everything is working except for Microsoft Windows Common Controls 6.0 (sp6). Microsoft Windows Common Controls-2 6.0 (sp6), Microsoft Windows Common Controls 5.0 (sp2), etc. all work just fine. However, when I try to add the Microsoft Windows Common Controls 6.0 (sp6) component I get the error: ``` '' could not be loaded ``` I could not add an image, so I have uploaded a screen shot to here for anyone wanting to see it exactly: ![http://i.imgur.com/7aym4xB.png](https://i.stack.imgur.com/HEcDn.png) I have tried registering/un-registering/re-registering MXCOMCTL.OCX, MSCOMCT2.OCX, msdatsrc.tlb all with no success. I have also uninstalled a known security update that may have caused issues with Windows Common Controls. I have removed IE11, even though reports state IE11 will not cause issues with MSCOMCTL.OCX the way IE10 did. I run VB6.EXE in Windows XP SP3 Compatibility mode. With Disable visual themes, Disable desktop composition, and Disable display scaling on high DPI settings. As well as run it as administrator. I have also attempted to open a previous project that utilizes this control, and it fails during load with the same "blank" error as in the screen shot with the following in the generated error log: Line 35: Class MSComctlLib.ListView of control lvData was not a loaded control class. Line 223: Class MSComctlLib.StatusBar of control Stat was not a loaded control class. So that tells me it isn't a version issue within the .vbp like some suggest (as it happens with new, blank projects as well). I am at wits end. Apparently people have gotten both VB6 and Windows 7 x64 to work properly (with Windows Common Controls) but NOTHING I have tried alleviates my problem. I am hoping someone here may have run into this, or have some ideas as to what is going on.
While waiting for a reply for ideas here, I had decided to try something. I ran regedit as administrator, navigated to the HKEY\_CLASSES\_ROOT\TypeLib Key and then did a search for "MSCOMCTL.OCX"... I deleted EVERY key that referenced this .ocx file. After searching the entire registry, deleting what I found, I ran command prompt as administrator. I then navigated to C:\Windows\SysWOW64 and typed the following commands: ``` regsvr32 MSCOMCTL.OCX regtlib msdatsrc.tlb ``` Upon registering these two files again, everything is WORKING! I scoured the web for HOURS looking for this solution to no avail. It just so happens I fixed it myself after posting a question here :( Even though Visual Studio 6 is outdated, hopefully this may still help others!
Is "associates to the right" equivalent to being left or right associative? If I say an operation is left-associative, is that equivalent to saying it "associates from the left" and "associates to the right"? My confusion comes from an example in my functional programming Haskell textbook. It states: Function application associates to the left. e.g. `mult x y z` means `((mult x)y)z`. i.e. `mult` takes an integer `x`, returns a function `mult x`, which takes an integer `y`, and returns a function `mult x y`, which takes an integer `z` and returns the result of `x*y*z`. But if I say it "associates to the left", I think of it being right-associative, i.e. evaluation starts from the right and to the left. However, since evaluation of `mult` starts *from* the left and *to* the right, is this left-associative? Should the author have said function application "associates to the right"? Or am I missing something and the author is correct?
You just need to stop thinking about evaluation order. Bracketing is actually about expression structure, that is, which of these we mean when we say `mult x y`, not about how we may later decide to evaluate it. ``` $ $ / \ / \ $ y mult $ / \ / \ mult x x y ``` Yes, we were taught in school that brackets are about the order you do things in. That's because we learned it in the context of arithmetic operators. Since these are all strict, there's less freedom to decide how to evaluate a given expression and the bracketing mostly determines an order. Plus we probably never thought much about expressions as abstract things distinct from the way they are written down. In the more general context of Haskell we can't conflate parsing and evaluation. When we say something "associates left" or "to the left" we're only talking about how it's parsed. It tells you that the `x` belongs in a subexpression with the `mult` on its left and not with the `y` on its right. (I haven't seen anyone use the phrase "associates from" and it doesn't really make sense unless maybe you read it as "associates away from".)
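If it helps to see it in code, here is a small Haskell example you can load into GHCi; the two definitions below are literally the same parse, one written with the parentheses that left association implies and one without:

```haskell
mult :: Int -> Int -> Int -> Int
mult x y z = x * y * z

-- Application associates to the left, so these are the same expression:
a :: Int
a = mult 2 3 4          -- parsed as ((mult 2) 3) 4

b :: Int
b = ((mult 2) 3) 4

-- a == b == 24
```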
How can I remove this little space between first TabItem and edge of Window? How can I remove the space between the `TabItem` and edge of `Window`. There also seems to be a border around the tab content box as well that is not needed. How can I remove that as well? [![enter image description here](https://i.stack.imgur.com/SsvZp.png)](https://i.stack.imgur.com/SsvZp.png) Here's my XAML: ``` <Grid> <TabControl Margin="0" ItemsSource="{Binding TabItems}" SelectedIndex="0"> <TabControl.ItemContainerStyle> <Style TargetType="TabItem"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="TabItem"> <Grid Name="Panel"> <Border Name="Border" Margin="0,0,-4,0"> </Border> <ContentPresenter x:Name="ContentSite" VerticalAlignment="Center" HorizontalAlignment="Center" ContentSource="Header" Margin="10,2"/> </Grid> <ControlTemplate.Triggers> <Trigger Property="IsSelected" Value="True"> <Setter TargetName="Panel" Property="Background" Value="Orange" /> </Trigger> <Trigger Property="IsSelected" Value="False"> <Setter TargetName="Panel" Property="Background" Value="LightGray" /> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> <Setter Property="Header" Value="{Binding Header}"/> <Setter Property="Content" Value="{Binding Content}"/> </Style> </TabControl.ItemContainerStyle> </TabControl> </Grid> ``` I tried adding a border and setting it to -4 margin, but doesn't seem to be working. Any help will be appreciated. Thanks!
Set the `TabControl`'s `BorderThickness` property to 0: ``` <TabControl Margin="0" ItemsSource="{Binding TabItems}" SelectedIndex="0" BorderThickness="0"> <!--The rest of your code here--> </TabControl> ``` # Update - Adjusting the tab headers This one is a bit trickier - this will require updating the `TabControl`'s template. You can do this by hand but the `TabControl`'s template is quite large so I recommend using Blend to get started. Open your project in Blend, open the 'Objects and Timeline' window, right click your `TabControl`, click edit template, and then 'Edit a copy'. This will create a copy of the default `TabControl`'s template for you to start working with. [![enter image description here](https://i.stack.imgur.com/gODl7.png)](https://i.stack.imgur.com/gODl7.png) This is going to create a *lot* of XAML for you. You will end up with a style resource that looks something like this: ``` <Style x:Key="TabControlStyle1" TargetType="{x:Type TabControl}"> <Setter Property="Padding" Value="2" /> <Setter Property="HorizontalContentAlignment" Value="Center" /> <Setter Property="VerticalContentAlignment" Value="Center" /> <Setter Property="Background" Value="{StaticResource TabItem.Selected.Background}" /> <Setter Property="BorderBrush" Value="{StaticResource TabItem.Selected.Border}" /> <Setter Property="BorderThickness" Value="1" /> <Setter Property="Foreground" Value="{DynamicResource {x:Static SystemColors.ControlTextBrushKey}}" /> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type TabControl}"> <Grid x:Name="templateRoot" ClipToBounds="true" SnapsToDevicePixels="true" KeyboardNavigation.TabNavigation="Local"> <Grid.ColumnDefinitions> <ColumnDefinition x:Name="ColumnDefinition0" /> <ColumnDefinition x:Name="ColumnDefinition1" Width="0" /> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition x:Name="RowDefinition0" Height="Auto" /> <RowDefinition x:Name="RowDefinition1" Height="*" /> </Grid.RowDefinitions> <TabPanel x:Name="headerPanel" Background="Transparent" Grid.Column="0" IsItemsHost="true" Margin="2,2,2,0" Grid.Row="0" KeyboardNavigation.TabIndex="1" Panel.ZIndex="1" /> <Border x:Name="contentPanel" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="{TemplateBinding BorderThickness}" Background="{TemplateBinding Background}" Grid.Column="0" KeyboardNavigation.DirectionalNavigation="Contained" Grid.Row="1" KeyboardNavigation.TabIndex="2" KeyboardNavigation.TabNavigation="Local"> <ContentPresenter x:Name="PART_SelectedContentHost" ContentSource="SelectedContent" Margin="{TemplateBinding Padding}" SnapsToDevicePixels="{TemplateBinding SnapsToDevicePixels}" /> </Border> </Grid> <ControlTemplate.Triggers> <Trigger Property="TabStripPlacement" Value="Bottom"> <Setter Property="Grid.Row" TargetName="headerPanel" Value="1" /> <Setter Property="Grid.Row" TargetName="contentPanel" Value="0" /> <Setter Property="Height" TargetName="RowDefinition0" Value="*" /> <Setter Property="Height" TargetName="RowDefinition1" Value="Auto" /> <Setter Property="Margin" TargetName="headerPanel" Value="2,0,2,2" /> </Trigger> <Trigger Property="TabStripPlacement" Value="Left"> <Setter Property="Grid.Row" TargetName="headerPanel" Value="0" /> <Setter Property="Grid.Row" TargetName="contentPanel" Value="0" /> <Setter Property="Grid.Column" TargetName="headerPanel" Value="0" /> <Setter Property="Grid.Column" TargetName="contentPanel" Value="1" /> <Setter Property="Width" TargetName="ColumnDefinition0" Value="Auto" /> <Setter Property="Width" 
TargetName="ColumnDefinition1" Value="*" /> <Setter Property="Height" TargetName="RowDefinition0" Value="*" /> <Setter Property="Height" TargetName="RowDefinition1" Value="0" /> <Setter Property="Margin" TargetName="headerPanel" Value="2,2,0,2" /> </Trigger> <Trigger Property="TabStripPlacement" Value="Right"> <Setter Property="Grid.Row" TargetName="headerPanel" Value="0" /> <Setter Property="Grid.Row" TargetName="contentPanel" Value="0" /> <Setter Property="Grid.Column" TargetName="headerPanel" Value="1" /> <Setter Property="Grid.Column" TargetName="contentPanel" Value="0" /> <Setter Property="Width" TargetName="ColumnDefinition0" Value="*" /> <Setter Property="Width" TargetName="ColumnDefinition1" Value="Auto" /> <Setter Property="Height" TargetName="RowDefinition0" Value="*" /> <Setter Property="Height" TargetName="RowDefinition1" Value="0" /> <Setter Property="Margin" TargetName="headerPanel" Value="0,2,2,2" /> </Trigger> <Trigger Property="IsEnabled" Value="false"> <Setter Property="TextElement.Foreground" TargetName="templateRoot" Value="{DynamicResource {x:Static SystemColors.GrayTextBrushKey}}" /> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Style> ``` Find the `TabPanel` with the name 'headerPanel' and set its left margin to 0. One last thing, if you used Blend it should have set your `TabControl`'s style to use your new style but if not you need make sure you set the style yourself: ``` Style="{StaticResource TabControlStyle1}" ```
Oracle RAC and sequences I have various database applications that use sequences, and I'm migrating these applications to Oracle RAC, from 10g without RAC to 11g with RAC. I need ordered sequences, and gaps are tolerated. I'm thinking of cached sequences with ORDER, but I don't know what the effect on performance would be. Do you think this is a good option? What is your experience with sequences and RAC? Thanks,
Exactly what do you mean by "ordered" in this context? By default, each node in the cluster has a separate cache of sequence numbers. So node 1 may be handing out values 1-100 while node 2 is handing out values 101-200. The values returned from a single node are sequential, but session A on node 1 may get a value of 15 while session B on node 2 gets a value of 107, so the values returned across sessions appear out of order. If you specify that the sequence has to be ordered, you're basically defeating the purpose of the sequence cache because Oracle now has to communicate among nodes every time you request a new sequence value. That has the potential to create a decent amount of performance overhead. If you're using the sequence as a sort of timestamp, that overhead may be necessary, but it's not generally desirable. The overhead difference in practical terms is going to be highly application-dependent -- it will be unmeasurably small for some applications and a significant problem for others. The number of RAC nodes, the speed of the interconnect, and how much interconnect traffic there is will also contribute. And since this is primarily a scalability issue, the practical effect is going to limit how well your application scales up, which is inherently non-linear. Doubling the transaction volume your application handles is going to far more than double the overhead. If you specify NOCACHE, the choice of ORDER or NOORDER is basically irrelevant. If you specify ORDER, the choice of CACHE or NOCACHE is basically irrelevant. So CACHE NOORDER is by far the most efficient; the other three are relatively interchangeable. They are all going to involve inter-node coordination and network traffic every time you request a sequence value, which is, obviously, a potential bottleneck. It would generally be preferable to add a TIMESTAMP column to the table to store the actual timestamp rather than relying on the sequence to provide a timestamp order.
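For reference, a rough sketch of the two kinds of sequence definition being compared (object names are illustrative, not from your schema):

```sql
-- Scales best on RAC: each node caches its own range of values,
-- so values handed out across nodes can appear out of order.
CREATE SEQUENCE seq_fast CACHE 1000 NOORDER;

-- Guarantees ordering across nodes, at the cost of inter-node
-- coordination on every NEXTVAL.
CREATE SEQUENCE seq_ordered ORDER;

-- Often the better approach: keep the fast sequence for keys and
-- record the actual time in a separate column, e.g.
-- ALTER TABLE orders ADD (created_at TIMESTAMP DEFAULT SYSTIMESTAMP);
```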
LiquidHaskell: failing DeMorgan's law I am having troubles proving the following law with LiquidHaskell: [![DeMorgan's law](https://i.stack.imgur.com/LHqzn.png)](https://i.stack.imgur.com/LHqzn.png) It is known as (one of) DeMorgan's law, and simply states that the negation of `or`ing two values must be the same as `and`ing the negation of each. It's been proven for a long time, and is an example in LiquidHaskell's [tutorial](https://ucsd-progsys.github.io/liquidhaskell-tutorial/02-logic.html). I am following along in the tutorial, but fail to get the following code to pass: ``` -- Test.hs module Main where main :: IO () main = return () (==>) :: Bool -> Bool -> Bool False ==> False = True False ==> True = True True ==> True = True True ==> False = False (<=>) :: Bool -> Bool -> Bool False <=> False = True False <=> True = False True <=> True = True True <=> False = False {-@ type TRUE = {v:Bool | Prop v} @-} {-@ type FALSE = {v:Bool | not (Prop v)} @-} {-@ deMorgan :: Bool -> Bool -> TRUE @-} deMorgan :: Bool -> Bool -> Bool deMorgan a b = not (a || b) <=> (not a && not b) ``` When running `liquid Test.hs`, I get the following output: ``` LiquidHaskell Copyright 2009-15 Regents of the University of California. All Rights Reserved. **** DONE: Parsed All Specifications ****************************************** **** DONE: Loaded Targets ***************************************************** **** DONE: Extracted Core using GHC ******************************************* Working 0% [.................................................................] Done solving. **** DONE: solve ************************************************************** **** DONE: annotate *********************************************************** **** RESULT: UNSAFE ************************************************************ Test.hs:23:16-48: Error: Liquid Type Mismatch 23 | deMorgan a b = not (a || b) <=> (not a && not b) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Inferred type VV : Bool not a subtype of Required type VV : {VV : Bool | Prop VV} In Context ``` Now I'm by no means a LiquidHaskell expert, but I'm pretty sure something must be wrong. I have convinced myself that the identity holds a few years ago, but to make sure I called the function with every possible input, and eventually ran ``` λ: :l Test.hs λ: import Test.QuickCheck λ: quickCheck deMorgan >>> +++ OK, passed 100 tests. ``` So I don't seem to have a typo in the Haskell code, the error must lie in the LiquidHaskell specification. It seems that LiquidHaskell cannot infer that the resulting `Bool` is strictly `TRUE`: ``` Inferred type VV : Bool not a subtype of Required type VV : {VV : Bool | Prop VV} ``` What is my mistake here? Any help is appreciated! PS: I'm using the `z3` solver, and running GHC 7.10.3. LiquidHaskell version is `2009-15`.
LiquidHaskell cannot prove your program safe because it does not have a strong enough type for `(<=>)`. We do infer types for functions, but the inference is based on the other type signatures in the program. Specifically, we need to figure out that ``` {-@ (<=>) :: p:Bool -> q:Bool -> {v:Bool | Prop v <=> (Prop p <=> Prop q)} @-} ``` (The `Prop` syntax is how we lift a Haskell `Bool` to an SMT boolean.) In order for LiquidHaskell to infer this type, it would need to see a predicate `Prop v <=> (Prop p <=> Prop q)` somewhere in another type signature (for some `v`, `p`, and `q`). This fragment doesn't appear anywhere, so we need to provide the signature explicitly. It's an unfortunate limitation of LiquidHaskell, but is crucial for retaining decidability. PS: Here's a link to a working version of your example. <http://goto.ucsd.edu:8090/index.html#?demo=permalink%2F1461434240_7574.hs>
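For illustration, here is roughly how that explicit signature slots into the code from the question (a sketch; the permalink just above has a verified version):

```haskell
{-@ (<=>) :: p:Bool -> q:Bool -> {v:Bool | Prop v <=> (Prop p <=> Prop q)} @-}
(<=>) :: Bool -> Bool -> Bool
False <=> False = True
False <=> True  = False
True  <=> True  = True
True  <=> False = False

{-@ deMorgan :: Bool -> Bool -> TRUE @-}
deMorgan :: Bool -> Bool -> Bool
deMorgan a b = not (a || b) <=> (not a && not b)
```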
Why can I compare sbyte to all the other numeric types \*except\* ulong? You can do >, <, ==, etc. comparisons between sbyte and byte, int, uint, short, ushort, long, double, and float. But not ulong. My brain is exploding. Can anyone explain why sbyte can be compared to uint but **not** ulong? ``` public bool sbyte_ulong_compare(sbyte x, ulong y) { return x < y; // compiler error CS0019 } ``` Also, using `unchecked` doesn't make things work any better. Brain melting. Another edit. This works: ``` public bool sbyte_ulong_compare(sbyte x, ulong y) { // // returns x < y // if (x < 0) return true; if (y > 127) return true; return ((long)x < (long)y); } ```
dthorpe and Jon's answers are close but not quite correct. The correct reasoning is as follows. The specification states: > > For an operation of the form x op y, > where op is a comparison operator, > overload resolution is > applied to select a specific operator > implementation. > > > OK, what are the operator implementations that overload resolution has to work with? They are: ``` bool operator <(int x, int y); bool operator <(uint x, uint y); bool operator <(long x, long y); bool operator <(ulong x, ulong y); bool operator <(float x, float y); bool operator <(double x, double y); bool operator <(decimal x, decimal y); ``` Plus the enum less-than operator for all enumerated types, plus the lifted-to-nullable versions of each of the foregoing. Overload resolution must first eliminate *inapplicable* operators, and then from the remaining set of applicable operators, determine the *best* operator. The int, uint, long and enum operators (and their lifted forms) are all eliminated because ulong does not implicitly convert to those types. The uint and ulong operators (and their lifted forms) are all eliminated because sbyte does not implicitly convert to those types. That leaves ``` bool operator <(float x, float y); bool operator <(double x, double y); bool operator <(decimal x, decimal y); ``` and their lifted forms. We must now determine the *best* operator from those six. What do we mean by "best"? When comparing two operators, the one with the *more specific* operand types is the better one. By "more specific" I mean that "Tiger" is more specific than "Animal" because all Tigers are convertible to Animal but not all Animals are convertible to Tiger. Clearly the unlifted forms are better than all of the corresponding lifted forms. A non-nullable type is more specific than its corresponding nullable type because a non-nullable type is always convertible to its nullable type, but not vice-versa. We can eliminate the lifted forms. That leaves three. Which of those three is the best? float is more specific than double. Every float is convertible to double, but not every double is convertible to float. Therefore double is eliminated. That leaves two. ``` bool operator <(float x, float y); bool operator <(decimal x, decimal y); ``` Which of these is the best? There is no implicit conversion from float to decimal. There is no implicit conversion from decimal to float. Therefore neither is better than the other. Therefore no best operator can be determined. Overload resolution fails. We have decided to report a generic error message that simply says that there is no such operator that does what you want, rather than giving the seemingly bizarre and confusing error message "operator overload resolution failed because float is neither better nor worse than decimal". I think that is a reasonable design choice.
getpid and getppid return two different values When I run the code below ``` #include <stdio.h> #include <sys/types.h> //int i=0; int main(){ int id ; id = fork() ; printf("id value : %d\n",id); if ( id == 0 ) { printf ( "Child : Hello I am the child process\n"); printf ( "Child : Child’s PID: %d\n", getpid()); printf ( "Child : Parent’s PID: %d\n", getppid()); } else { printf ( "Parent : Hello I am the parent process\n" ) ; printf ( "Parent : Parent’s PID: %d\n", getpid()); printf ( "Parent : Child’s PID: %d\n", id); } } ``` My output is ``` id value : 20173 Parent : Hello I am the parent process Parent : Parent’s PID: 20172 Parent : Child’s PID: 20173 id value : 0 Child : Hello I am the child process Child : Child’s PID: 20173 Child : Parent’s PID: 1 ``` How can the parent's PID(20172) differ from the child's parent's ID (1)? Shouldn't those two be equal?
What's happening is that the parent is terminating before the child runs. This leaves the child as an orphan and it gets adopted by the root process with a PID of 1. If you put a delay or read data from stdin rather than letting the parent terminate, you'll see the result you expect. > > Process **ID 1 is usually the init process** primarily responsible for starting and shutting down the system. The init (short for initialization) is a daemon process that is the direct or indirect ancestor of all other processes. [wiki link for init](http://en.wikipedia.org/wiki/Init) > > As user314104 points out, the wait() and waitpid() functions are designed to allow a parent process to suspend itself until the state of a child process changes. So a call to wait() in the parent branch of your if statement would cause the parent to wait for the child to terminate.
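A minimal sketch of that fix, reusing the structure of the code in the question (the parent blocks in wait() so it is still alive when the child calls getppid()):

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t id = fork();
    if (id == 0) {
        printf("Child : Child's PID: %d\n", getpid());
        printf("Child : Parent's PID: %d\n", getppid()); /* now reports the real parent */
    } else {
        wait(NULL); /* parent suspends here until the child terminates */
        printf("Parent: Parent's PID: %d\n", getpid());
        printf("Parent: Child's PID: %d\n", id);
    }
    return 0;
}
```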
Fail a test with Chai.js In JUnit you can fail a test by doing: ``` fail("Exception not thrown"); ``` What's the best way to achieve the same using Chai.js?
There are many ways to fake a failure – like the `assert.fail()` mentioned by @DmytroShevchenko –, but usually, it is possible to avoid these crutches and express the intent of the test in a better way, which will lead to more meaningful messages if the tests fail. For instance, if you expect an exception to be thrown, why not say so directly: ``` expect( function () { // do stuff here which you expect to throw an exception } ).to.throw( Error ); ``` As you can see, when testing exceptions, you have to wrap your code in an anonymous function. Of course, you can refine the test by checking for a more specific error type, expected error message, etc. See `.throw` in the [Chai docs](http://chaijs.com/api/bdd/) for more.
Invalid argument specification: Positional argument after varargs I'm trying to pass 2 scalar and 2 list variable into user keyword but I got "Invalid argument specification: Positional argument after varargs." Isn't it possible to pass more than one list variable into user keyword? What I'm trying to: ``` *** Test Cases *** Sample Case Personal Details Page Fill Form ${firstName} ${surname} @{dateofbirth} @{nextsalarydate} *** Keywords *** Personal Details Page Fill Form [Arguments] ${firstName} ${surname} @{dateofbirth} @{nextsalarydate} Input Text id = firstName ${firstName} Input Text id = lastName ${surname} Personal Details Page Select Date of Birth ${dateofbirth[0]} ${dateofbirth[1]} ${dateofbirth[2]} Personal Details Page Select Next Salary Date ${nextsalarydate[0]} ${nextsalarydate[1]} ${nextsalarydate[2]} ```
When you pass more than one list variable into user keyword, you should use '$' for list instead of '@'. Try as below: ``` *** Test Cases *** Sample Case Personal Details Page Fill Form ${firstName} ${surname} ${dateofbirth} ${nextsalarydate} *** Keywords *** Personal Details Page Fill Form [Arguments] ${firstName} ${surname} ${dateofbirth} ${nextsalarydate} Input Text id = firstName ${firstName} Input Text id = lastName ${surname} Personal Details Page Select Date of Birth ${dateofbirth[0]} ${dateofbirth[1]} ${dateofbirth[2]} Personal Details Page Select Next Salary Date ${nextsalarydate[0]} ${nextsalarydate[1]} ${nextsalarydate[2]} ```
Dynamically creating DIVs from Database in ASP.NET This is a beginner question. I want to create divs dynamically from a database (**MSSQL**). For example, I want to show comments below an entry. The **Comment** table is connected to the **Entry** table by **EntryID**. My aspx code is like: ``` <div class="commentBody" runat="server"> <asp:Label ID="commentSender" runat="server" Text=""></asp:Label> <asp:Label ID="commentDate" runat="server" Text=""></asp:Label> <asp:Label ID="commentText" runat="server" Text=""></asp:Label> </div> ``` This will be repeated for all comments, and I am working entirely in code-behind (without evals). My c# code: ``` protected void YorumlariGetir() { string selectComments = "SELECT * FROM Comment WHERE Comment.EntryID = @EntryID"; SqlConnection conn = new SqlConnection(constr); SqlCommand cmd = new SqlCommand(selectComments, conn); cmd.Parameters.AddWithValue("@EntryID", Session["EntryID"].ToString()); try { conn.Open(); // HERE I WANT TO CALL A LOOP FOR COMMENTS } catch (Exception ex) { Response.Write("Hata: " + ex.Message); } finally { conn.Close(); } } ``` I can use repeaters or a foreach loop, but I don't know how and need an example at this point. Thanks for the help.
EDIT: Answer fully revised. In your question you ask for a way to do looping and add comments on each loop iteration. You can do this but there are far better ways using built in ASP.NET controls. I will first show you a basic example that simply iterates the SqlDataReader object and manually creates HTML. Then I will show a much better solution. I do not recommend the first option if you are able to implement the second one. On both solutions, I strongly suggest to specifically name your fields in the select query instead of using an asterisk to select all fields. Using `SELECT *` can cause issues if the table structure changes. Also, you may be selecting data columns you do not need which wastes resources. First, here is very simple example using the [SqlDataReader](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqldatareader%28v=vs.110%29.aspx) class. This will work, but remember there is a better way. ``` try { conn.Open(); // HERE I WANT TO CALL A LOOP FOR COMMENTS SqlDataReader reader = cmd.ExecuteReader(); while (reader.Read()) { // create the div to wrap around the comment HtmlGenericControl div = new HtmlGenericControl("div"); div.Attributes.Add("style", "commentBody"); // create three labels and add them to the div // 1,2,3 are the ordinal positions of the column names, this may need corrected since I have no idea what your table looks like. div.Controls.Add(new Label() { Text = reader.GetString(1) }); div.Controls.Add(new Label() { Text = reader.GetString(2) }); div.Controls.Add(new Label() { Text = reader.GetString(3) }); // add the div to the page somehow, these can be added to any HTML control that can act as a container. I would suggest a plain old div. MyMainDiv.Controls.Add(div); } } ``` Now, the above method will work, but it is a clumsy, old-fashioned, way to handle displaying data. Modern .NET applications should use better solutions where available. A better solution would be to use [Data Binding](http://support.microsoft.com/kb/307860). There are many articles and tutorials on this across the Internet, so if this is a new idea, you can do some tutorials to learn the finer points of Data Binding. To use the Repeater class, first add a Repeater control to your ASPX page: ``` <asp:Repeater id="Repeater1" runat="server"> <ItemTemplate> <div class="commentBody"> <span class="commentSender"><%# DataBinder.Eval(Container.DataItem,"aucommentSenderid") %></span> <span class="commentDate"><%# DataBinder.Eval(Container.DataItem,"aucommentDateid") %></span> <span class="commentText"><%# DataBinder.Eval(Container.DataItem,"aucommentTextid") %></span> </div> </ItemTemplate> </asp:Repeater> ``` Next, add some code-behind to create a datasource and attach this to the Repeater control: ``` SqlDataAdapter da = new SqlDataAdapter(cmd); // use your existing SqlCommand here (don't use select *) DataSet ds = new DataSet(); // create a DataSet object to hold you table(s)... this can contain more than 1 table da.Fill(ds, "Comment"); // fill this dataset with everything from the Comment Table Repeater1.DataSource = ds.Tables["Comment"]; // attach the data table to the control Repeater1.DataBind(); // This causes the HTML to be automatically rendered when the page loads. ```
Using Inline::CPP vs SWIG - when? In [this](https://stackoverflow.com/q/11606694/632407) question I saw different answers on how to directly call functions written in C++: 1. [Inline::CPP](https://metacpan.org/module/Inline::CPP) (and there are more, like Inline::C, Inline::Lua, etc..) 2. [SWIG](http://www.swig.org/) 3. Handmade (as daxim said - the majority of modules are handwritten) I just browsed nearly all questions on SO tagged [perl][swig] looking for answers to the following questions: - What are the main differences when choosing between SWIG, Inline::CPP, or handwritten bindings? - When is it good practice (recommended) to use Inline::CPP (or Inline::C), and when is it recommended to use SWIG or handwritten bindings? As I think about it, using SWIG is more universal for other uses, as asked [in this question](https://stackoverflow.com/q/25268/734304), while Inline::CPP is Perl-specific. But, *from Perl's point of view,* is there some (any) significant difference?
I haven't used SWIG, so I cannot speak directly to it. But I'm [pretty familiar](https://github.com/daoswald/Inline-CPP.git) with [Inline::CPP](https://metacpan.org/module/Inline::CPP). If you would like to compose C++ code that gets compiled and becomes callable from within Perl, Inline::CPP facilitates this. So long as the C++ code doesn't change, it should only compile once. If you base a module on Inline::CPP, the code will be compiled at module install time, so another user never really sees the first time compilation lag; it happens at install time, just before the testing phase. Inline::CPP is not 100% free of portability issues. The target user must have a C++ compiler that is of similar flavor to the C compiler used to build Perl, and the C++ standard libraries should be of versions that produce binary-compatible code with Perl. Inline::CPP has about a 94% success rate with the CPAN testers. And those last 6% almost always boil down to issues of the installation process not correctly deciphering what C++ compiler and libraries to use. ...and of those, it usually comes down to the libraries. Let's assume you as a module author find yourself in that 94% who have no problem getting Inline::CPP installed. If you know that your target audience will fall into that same category, then producing a module based on Inline::CPP is simple. You basically have to add a couple of directives (VERSION and NAME), and swap out your Makefile.PL's ExtUtils::MakeMaker call to Inline::MakeMaker (it will invoke ExtUtils::MakeMaker). You might also want a CONFIGURE\_REQUIRES directive to specify a current version of ExtUtils::MakeMaker when you create your distribution; this ensures that your users have a cleaner install experience. Now if you're creating the module for general consumption and have no idea whether your target user will fit that 94% majority who can use Inline::CPP, you might be better off removing the Inline::CPP dependency. You might want to do this just to minimize the dependency chain anyway; it's nicer for your users. In that case, compose your code to work with Inline::CPP, and then use InlineX::CPP2XS to convert it to a plain old XS module. Your user will now be able to install without the process pulling Inline::CPP in first. C++ is a large language, and Inline::CPP handles a large subset of it. Pay attention to the typemap file to determine what sorts of parameters can be passed (and converted) automatically, and what sorts are better dealt with using "guts and API" calls. One feature I wouldn't recommend using is automatic string conversion, as it would produce Unicode-unfriendly conversions. Better to handle strings explicitly through API calls. The portion of C++ that isn't handled gracefully by Inline::CPP is template metaprogramming. You're free to use templates in your code, and free to use the STL. However, you cannot simply pass STL type parameters and hope that Inline::CPP will know how to convert them. It deals with POD (basic data types), not STL stuff. Furthermore, if you compose a template-based function or object method, the C++ compiler won't know what context Perl plans to call the function in, so it won't know what type to apply to the template at compile time. Consequently, the functions and object methods exposed directly to Inline::CPP need to be plain functions or methods; not template functions or classes. These limitations in practice aren't hard to deal with as long as you know what to expect.
If you want to expose a template class directly to Inline::CPP, just write a wrapper class that either inherits or composes itself of the template class, but gives it a concrete type for Inline::CPP to work with. Inline::CPP is also useful in automatically generating function wrappers for existing C++ libraries. The documentation explains how to do that. One of the advantages to Inline::CPP over Swig is that if you already have some experience with [perlguts](http://perldoc.perl.org/perlguts.html), [perlapi](http://perldoc.perl.org/perlapi.html), and [perlcall](http://perldoc.perl.org/perlcall.html), you will feel right at home already. With Swig, you'll have to learn the Swig way of doing things first, and then figure out how to apply that to Perl, and possibly, how to do it in a way that is CPAN-distributable. Another advantage of using Inline::CPP is that it is a somewhat familiar tool in the Perl community. You are going to find a lot more people who understand Perl XS, Inline::C, and to some extent Inline::CPP than you will find people who have used Swig with Perl. Although XS can be messy, it's a road more heavily travelled than using Perl with Swig. Inline::CPP is also a common topic on the [inline@perl.org](http://lists.perl.org/list/inline.html) mailing list. In addition to myself, the maintainer of Inline::C and several other Inline-family maintainers frequent the list, and do our best to assist people who need a hand getting going with the Inline family of modules. You might also find [my Perl Mongers talk on Inline::CPP](https://github.com/daoswald/Inline-C-Perl-Mongers-Talk.git) useful in exploring how it might work for you. Additionally, [Math::Prime::FastSieve](https://metacpan.org/module/Math::Prime::FastSieve) stands as a proof-of-concept for basing a module on Inline::CPP (with an Inline::CPP dependency). Furthermore, Rob (sisyphus), the current Inline maintainer, and author of InlineX::CPP2XS has actually included an example in the [InlineX::CPP2XS](https://metacpan.org/module/InlineX::CPP2XS) distribution that takes my Math::Prime::FastSieve and converts it to plain XS code using his InlineX::CPP2XS.
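To give a feel for the workflow, a minimal one-off Inline::CPP script looks roughly like this (a sketch, not taken from either distribution mentioned above):

```perl
use strict;
use warnings;

use Inline CPP => <<'END_CPP';
int add(int a, int b) {
    return a + b;
}
END_CPP

# The C++ function is now callable as an ordinary Perl sub;
# it is compiled the first time the script runs, then cached.
print add(2, 3), "\n";   # prints 5
```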
Android Development: Efficient Syntax Highlighting Tips? I've developed my own syntax highlighting library for Android, and it works great, but the problem is it slows down the typing. I've tried using an AsyncTask to perform the regular expressions in the background and then apply the necessary colours, but it still slowed down the typing process. Currently, it reads the whole EditText. I thought of instead getting the line the text cursor is on, getting that line's CharSequence, and then performing the regular expressions on that line instead of the whole document, but I really don't know how I could get the line the user is working on :(.
Unless you're only doing single-line regex matching/highlighting, your proposed strategy may not work. For example, you probably can't tell if you're in a multi-line comment without, well, scanning multiple lines. :-) If you have not done so already, use Traceview to identify where the slowdowns are specifically. It may be that you can optimize enough other things. For example, maybe you're compiling all your `Pattern` objects on the fly rather than defining them statically. Beyond that, I think a typical pattern is to only apply syntax highlighting when the user pauses. One possible way of implementing that would be: Step #1: On every text change (which you have presumably already hooked into), `postDelayed()` a `Runnable` and save the timestamp retrieved from `SystemClock.uptimeMillis()` in a data member of your `EditText` subclass (or wherever you have the syntax coloring logic). For the purposes of this answer, I'll refer to the delay period that you use with `postDelayed()` as `DELAY`. Step #2: The `Runnable` compares the current time from `SystemClock.uptimeMillis()` with the time of the last text change. If the time difference is less than `DELAY`, you know the user typed something in between when this `Runnable` was scheduled and now, so you just do nothing. If the time difference >= `DELAY`, though, you run through your syntax coloring logic. This way, you skip applying the syntax coloring until the user pauses, thereby not interrupting their typing. You can tweak `DELAY`, or perhaps make it configurable. BTW, you *are* planning on releasing this as an open source library, right? :-)
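A rough sketch of those two steps (assuming a custom `EditText` subclass; `applySyntaxColoring()` is a placeholder for your existing highlighting logic):

```java
import android.content.Context;
import android.os.SystemClock;
import android.text.Editable;
import android.text.TextWatcher;
import android.util.AttributeSet;
import android.widget.EditText;

public class HighlightingEditText extends EditText {
    private static final long DELAY = 500; // ms between last keystroke and recoloring; tune as needed

    private long lastEditTime = 0;

    private final Runnable highlightRunnable = new Runnable() {
        @Override
        public void run() {
            // Step #2: only recolor if the user has not typed since this run was scheduled
            if (SystemClock.uptimeMillis() - lastEditTime >= DELAY) {
                applySyntaxColoring();
            }
        }
    };

    public HighlightingEditText(Context context, AttributeSet attrs) {
        super(context, attrs);
        addTextChangedListener(new TextWatcher() {
            @Override public void beforeTextChanged(CharSequence s, int start, int count, int after) { }
            @Override public void onTextChanged(CharSequence s, int start, int before, int count) { }
            @Override public void afterTextChanged(Editable s) {
                // Step #1: note the edit time and schedule a (possibly skipped) highlighting pass
                lastEditTime = SystemClock.uptimeMillis();
                postDelayed(highlightRunnable, DELAY);
            }
        });
    }

    private void applySyntaxColoring() {
        // hypothetical: run your regex passes over getText() and apply spans
    }
}
```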
Objective C: Unsigned int compare So I ran into a huge issue at work because I had something like this in my code: ``` int foo = -1; NSArray *bar = [[NSArray alloc] initWithObjects:@"1",@"2",@"3", nil]; if (foo > [bar count]){ NSLog(@"Wow, that's messed up."); } else { NSLog(@"Rock on!"); } ``` As you probably already know by me posting this, the output is: ***"Wow, that's messed up."*** From what I gather, objective C is converting my negative number to a "signed" int and thus, killing my compare. I saw other posts about this and they all stated what the problem was but none of them suggested any simple solutions to get this comparison to actually work. Also, I'm shocked that there are no compiler warnings, as these are causing serious issues for me.
**Try this** ``` - (IBAction)btnDoSomething:(id)sender { int foo = -1; NSArray *bar = [[NSArray alloc] initWithObjects:@"1",@"2",@"3", nil]; if ( foo > (signed)[bar count] ) { NSLog(@"Wow, that's messed up."); } else { NSLog(@"Rock on!"); } } ``` --- **How it works** If you compare two variables of different types, both are implicitly converted to the "higher" of the two types. In this example, foo is a *signed int* and the array count is an *unsigned int*, so foo is converted to *unsigned int*; its value then becomes a very large number, which is greater than the array count of 3. That is why you need to cast the array count down to *signed int* here. --- **Issues** When your array count exceeds the maximum value of a signed int, the cast wraps around (roughly [ -max limit -> 0 -> +max limit ]), which gives an unexpected result. **Solution** - Avoid the cast if you are not sure about the maximum array length. - Use the cast if you are sure of the limit (i.e. the maximum array length will not exceed the *signed int* maximum). --- For more details check this <http://visualcplus.blogspot.in/2006/02/lesson-4-casting-data-types.html>
laravel eager loading using with() vs load() after creating the parent model I am creating a **Reply** model and then trying to return the object with its **owner** relation. Here is the code that returns an empty object: ``` //file: Thread.php //this returns an empty object !!?? public function addReply($reply) { $new_reply = $this->replies()->create($reply); return $new_reply->with('owner'); } ``` However, if I swap the *with()* method for the *load()* method to load the **owner** relation, I get the expected result. That is, the reply object is returned with its associated **owner** relation: ``` //this works { $new_reply = $this->replies()->create($reply); return $new_reply->load('owner'); } ``` I don't understand why. Looking for clarifications. Thanks, Yeasir
This is because you should use `with` when you don't have the object yet (you are building a query), and when you already have the object you should use `load`. Examples: **Collection of users**: ``` $users = User::with('profile')->get(); ``` or: ``` $users = User::all(); $users->load('profile'); ``` **Single user**: ``` $user = User::with('profile')->where('email','sample@example.com')->first(); ``` or ``` $user = User::where('email','sample@example.com')->first(); $user->load('profile'); ``` **Method implementations in Laravel** You can also look at the `with` method implementation: ``` public static function with($relations) { return (new static)->newQuery()->with( is_string($relations) ? func_get_args() : $relations ); } ``` It starts a new query, which in fact won't execute until you use `get`, `first`, and so on, whereas the `load` implementation is like this: ``` public function load($relations) { $query = $this->newQuery()->with( is_string($relations) ? func_get_args() : $relations ); $query->eagerLoadRelations([$this]); return $this; } ``` It returns the same object, but loads the relationship for that object.
I can't bind events to elements ``` const color = ['red', 'orange', 'yellow', 'green', 'blue', 'navy', 'purple']; let a = []; for (let i = 0; i < color.length; i++) { a[i] = document.createElement("input"); a[i].type = 'button'; a[i].id = 'b' + (i + 1); a[i].value = color[i]; a[i].addEventListener('click', function() { alert('color'); }) document.body.appendChild(a[i]); document.body.innerHTML += "<br>" console.log(a[0].innerHTML); } ``` It seems that the listener is not getting bound despite the `addEventListener`. What is the problem?
The problem is that, when concatenating with the `innerHTML` of a container (for example, with your `document.body.innerHTML += "<br>"`), the container will be *emptied* and then re-parsed with the new HTML string. If you previously attached a listener to an element in the container, that listener will not be in the HTML string, so it will not transfer over to the *new* element in the same position. ``` const div1 = document.querySelector('#somediv'); document.body.innerHTML += ''; const div2 = document.querySelector('#somediv'); console.log(div1 === div2); // False, the container's contents were re-parsed, the new div is different! ``` ``` <div id="somediv"></div> ``` Either append your `br` using the same `appendChild` method you're using for the `a[i]`: ``` const color = ['red', 'orange', 'yellow', 'green', 'blue', 'navy', 'purple']; let a = []; for (let i = 0; i < color.length; i++) { a[i] = document.createElement("input"); a[i].type = 'button'; a[i].id = 'b' + (i + 1); a[i].value = color[i]; a[i].addEventListener('click', function() { alert('color'); }) document.body.appendChild(a[i]); document.body.appendChild(document.createElement('br')); } ``` Or use `insertAdjacentHTML` instead, which can act similarly to `.innerHTML +=`, but unlike `.innerHTML +=`, does *not* re-create all elements in the container: ``` const color = ['red', 'orange', 'yellow', 'green', 'blue', 'navy', 'purple']; let a = []; for (let i = 0; i < color.length; i++) { a[i] = document.createElement("input"); a[i].type = 'button'; a[i].id = 'b' + (i + 1); a[i].value = color[i]; a[i].addEventListener('click', function() { alert('color'); }) document.body.appendChild(a[i]); document.body.insertAdjacentHTML('beforeend', '<br>'); } ```
Batch script get html site and parse content (without wget, curl or other external app) I need to work with windows cmd functionality only. I need two vars/strings from a website to use in the batch script to validate actions with it. To make it less simple, this website also needs authentication. I found this somewhere: ``` @set @x=0 /* :: ChkHTTP.cmd @echo off setlocal set "URL=http://www.google.com" cscript /nologo /e:jscript "%~f0" %URL% | find "200" > nul if %ErrorLevel% EQU 0 ( echo Web server ok % Put your code here % ) else ( echo Web server error reported ) goto :EOF JScript */ var x=new ActiveXObject("Microsoft.XMLHTTP"); x.open("GET",WSH.Arguments(0));x.send(); while (x.ReadyState!=4) {WSH.Sleep(50)}; WSH.Echo(x.status) ``` But I'm not sure if it's possible to get the site content this way too, instead of the status answer, and moreover I don't know how to implement website authentication with this. The above code does not work correctly for me as it always reports an error because of the pipe, but it seemed closest to my need of parsing the content.
I've only ever used wget to fetch web content from a Windows batch script. Using an XHR via JScript was a fantastic idea! But the script you're trying to plunder appears to be intended for checking whether a web server is responding, not for fetching content. With some modifications, you can use it to fetch a web page and do whatever processing you need. ``` @if (@a==@b) @end /* :: fetch.bat <url> :: fetch a web page @echo off setlocal if "%~1"=="" goto usage echo "%~1" | findstr /i "https*://" >NUL || goto usage set "URL=%~1" for /f "delims=" %%I in ('cscript /nologo /e:jscript "%~f0" "%URL%"') do ( rem process the HTML line-by-line echo(%%I ) goto :EOF :usage echo Usage: %~nx0 URL echo for example: %~nx0 http://www.google.com/ echo; echo The URL must be fully qualified, including the http:// or https:// goto :EOF JScript */ var x=new ActiveXObject("Microsoft.XMLHTTP"); x.open("GET",WSH.Arguments(0),true); x.setRequestHeader('User-Agent','XMLHTTP/1.0'); x.send(''); while (x.readyState!=4) {WSH.Sleep(50)}; WSH.Echo(x.responseText); ```
Smalltalk - iterate over a dict array with conditions I'm working on a small Smalltalk method. I want this method to iterate over a dictionary array and to return True or False depending on the conditions. The dictionary array is an instance variable, named dictArray. It looks like: `[{'name': toto, 'age': 12}, {'name': tata, 'age': 25}]` So I want to iterate over dictArray and verify the name and the age for each item. If one matches I return true, otherwise false at the end of the iteration. In Python it would look like: ``` for item in dictArray: if item['name'] == aName and item['age'] == aAge: return True return False ``` I can't find documentation for this specific case (array iteration + condition + return). Hope someone can help me!
To test whether a Collection contains an element that matches a condition, use `anySatisfy:`. It answers true iff there is a matching element. ``` dictArray anySatisfy: [:each | (each at: 'name') = aName and: [(each at: 'age') = anAge]] ``` Reference: <https://www.gnu.org/software/smalltalk/manual-base/html_node/Iterable_002denumeration.html> The way described above is the preferred way to write it. The following is only for explanation how it relates to your Python code example. `anySatisfy:` can be implemented in terms of `do:` ``` anySatisfy: aBlock self do: [:each | (aBlock value: each) ifTrue: [^ true]]. ^ false ``` Or spelled out with your condition: ``` dictArray do: [:each | ((each at: 'name') = aName and: [(each at: 'age') = anAge]) ifTrue: [^ true]]. ^ false ``` This is the equivalent of your Python code.
Handling of closures in data.table I am using the data.table package to return a list of function closures in a `j` expression as output by the `approxfun` function from the stats package. Basically, on each Date, I would like a closure that allows me to calculate an arbitrary yval based on an arbitrary xval as determined by `approxfun`. However, `approxfun` is only valid when there are at least two unique values of x passed to the function. In the case where there is only one unique value of x, I would like to return a function that returns the one unique value of y. In the code below, I perform this step by checking the `.N` value and returning a different function depending on whether or not `.N` is `> 1`. ``` library(data.table) set.seed(10) N <- 3 x <- data.table(Date = Sys.Date() + rep(1:N, each = 3), xval = c(0, 30, 90), yval = rnorm(N * 3)) x <- x[-c(2:3), ] ##interpolation happens correctly x2 <- x[order(Date, xval), { if(.N > 1){ afun <- approxfun(xval, yval, rule = 1) }else{ afun <- function(v) yval } print(afun(30)) list(Date, afun = list(afun)) }, by = Date] ##evaluation does NOT happen correctly, the val used is the last... sapply(x2[, afun], do.call, args = list(v = 30)) ``` When evaluating the function 'afun' in the context of the `j` expression, the correct value of 'yval' is printed. However, when I go back after the fact to evaluate the first function, the yval returned by the function that was not created by `approxfun` is the *last* yval in the group created by the 'by' grouping (all the closures created by `approxfun` work as expected). My suspicion is that this has to do with something I am missing with lazy evaluation. I tried the additional code below using the 'force' function but was unsuccessful. ``` x3 <- x[order(Date, xval), { if(.N > 1){ afun <- approxfun(xval, yval, rule = 1) }else{ fn <- function(x){ force(x) function(v) x } afun <- fn(yval) } print(afun(30)) list(Date, afun = list(afun)) }, by = Date] sapply(x3[, afun], do.call, args = list(v = 30)) ``` Has anyone else encountered this issue? Is it something I am missing with base R or something I am missing with data.table? Thanks in advance for the help
Yes, typical data.table reference vs copy FAQ. This works as expected: ``` x2 <- x[order(Date, xval), { if(.N > 1){ afun <- approxfun(xval, yval, rule = 1) }else{ fn <- function(){ #ensure the value is copied x <- copy(yval) function(v) x } afun <- fn() } print(afun(30)) list(Date, afun = list(afun)) }, by = Date] #[1] 0.01874617 #[1] 0.2945451 #[1] -0.363676 sapply(x2[, afun], do.call, args = list(v = 30)) #[1] 0.01874617 0.29454513 -0.36367602 ```
Wider than 16:9 I would like more horizontal width on my desktop. To date, the only wider than 16:9 monitor I can find is by [AlienWare and costs $7000](http://tech.yahoo.com/blog/hughes/22260) and has a rubbish resolution. I don't have enough room for two (or more) monitors on the tiny desk I have and I can't move the computer anywhere else. However I want more width than the 1920x1080 that I currently have. Are there any monitors that are 2:1 (or more) and won't mean selling a kidney to own?
Doubtfully. The monitor resolutions are determined by certain standards that have to do with timings and such. 16:9 is pretty much as close to 2:1 as you're going to get. Your only *affordable* bet is going to be getting more monitors and making room. I'd try to find a triple monitor setup because then you don't have the gap down the middle of your field of view. It's going to take an insane setup to drive that high of a resolution though. A good overview of standard resolutions can be found [on Wikipedia](http://en.wikipedia.org/wiki/File:Vector_Video_Standards2.svg): [![alt text](https://i.stack.imgur.com/JRLQ7.png)](http://en.wikipedia.org/wiki/File:Vector_Video_Standards2.svg)
PySpark: Randomize rows in dataframe I have a dataframe and I want to randomize rows in the dataframe. I tried sampling the data by giving a fraction of 1, which didn't work (interestingly this works in Pandas).
It works in Pandas because taking a sample on a local system is typically solved by shuffling the data. Spark, on the other hand, avoids shuffling by performing linear scans over the data. This means that sampling in Spark only randomizes the members of the sample, not their order. You can order the `DataFrame` by a column of random numbers: ``` from pyspark.sql.functions import rand df = sc.parallelize(range(20)).map(lambda x: (x, )).toDF(["x"]) df.orderBy(rand()).show(3) ## +---+ ## | x| ## +---+ ## | 2| ## | 7| ## | 14| ## +---+ ## only showing top 3 rows ``` but it is: - expensive - because it requires a full shuffle, which is something you typically want to avoid. - suspicious - because the order of values in a `DataFrame` is not something you can really depend on in non-trivial cases, and since a `DataFrame` doesn't support indexing, it is relatively useless without collecting.
How to provide a client certificate to http-client-tls? I am using [http-client-tls](http://hackage.haskell.org/package/http-client-tls-0.2.1.2) to connect to a TLS-enabled server that requires a client certificate. I suspect I need to tweak [TLSSettings](http://hackage.haskell.org/package/connection-0.2.1/docs/Network-Connection.html#t:TLSSettings) with a loaded certificate and correct cypher-suites parameters but it is definitely not clear how to do this. Does anybody have some example code that uses client-side certificates?
Thanks to Moritz Agerman for sharing his code. Here is a full Haskell module that can use `crt.pem` and `key.pem` files to provide a client-side certificate as requested by the server: ``` {-# LANGUAGE OverloadedStrings #-} module TLS where import Data.Default import Network.Connection import Network.HTTP.Client import Network.HTTP.Client.TLS import Network.TLS import Network.TLS.Extra.Cipher import Servant.Client makeClientManager :: String -> Scheme -> IO Manager makeClientManager hostname Https = mkMngr hostname "crt.pem" "key.pem" makeClientManager _ Http = newManager defaultManagerSettings mkMngr :: String -> FilePath -> FilePath -> IO Manager mkMngr hostName crtFile keyFile = do creds <- either error Just `fmap` credentialLoadX509 crtFile keyFile let hooks = def { onCertificateRequest = \_ -> return creds , onServerCertificate = \_ _ _ _ -> return [] } clientParams = (defaultParamsClient hostName "") { clientHooks = hooks , clientSupported = def { supportedCiphers = ciphersuite_all } } tlsSettings = TLSSettings clientParams newManager $ mkManagerSettings tlsSettings Nothing ``` I'm not sure whether this bypasses server certificate validation or not, as the `onServerCertificate` hook is a constant `[]`.
How to make git log cut long comments? I have a git log alias that prints each commit as a single line. Since some people write far too long one-liners in the commit log, many commits wrap to a new line. How can I format the git log output to cut the comment after 50 characters? I found this in the git-log man page but it will only pad short comments, not cut long ones. ``` %<(<N>[,trunc|ltrunc|mtrunc]): make the next placeholder take at least N columns, padding spaces on the right if necessary. Optionally truncate at the beginning (ltrunc), the middle (mtrunc) or the end (trunc) if the output is longer than N columns. Note that truncating only works correctly with N >= 2. ```
It is not that clear in the documentation just which characters are needed but the following example cuts the subject line to 50 characters: ``` git log --oneline --format="%h %<(50,trunc)%s" ``` The format specification is `%<` and the arguments for that need to be in parentheses. In this case, 50 chars and truncate the excess. For instance, performing this on the msysGit repository yields: ``` C:\src\msysgit>git log -n 5 --format="%h [%<(12,trunc)%aN] [%<(12,trunc)%cN] %<(50,trunc)%s" 218ed04 [Sebastian ..] [Sebastian ..] Merge pull request #154 from csware/tortoisegitp.. 8a920b9 [Sven Stric..] [Sven Stric..] Installer: Detect TortoiseGitPlink from Tortoise.. 448e125 [dscho ] [dscho ] Merge pull request #152 from csware/syscommand db8d1bf [Sven Stric..] [Sven Stric..] Perl readline creates empty sys$command files if.. 753d3d6 [Johannes S..] [Johannes S..] Git for Windows 1.8.5.2-preview20131230 ```
Can't display content in infowindow, I'm using google maps API v3 Thanks in advance. I'm stuck at the moment trying to figure out why my code is not working properly: I have been trying to display content using an infowindow for the last few hours, and I can't find why the content is not displaying inside the box. At the moment I am able to display multiple markers on the map in different locations depending on the coordinates; now I just need to display the infowindow with some relevant content inside when clicking on the marker. I can display the infowindow, but it is blank; it is not showing the string. Here I paste my function. Thanks again. ``` function initialize() { var mapOptions = { mapTypeId: google.maps.MapTypeId.ROADMAP, mapTypeControl: false }; var map = new google.maps.Map(document.getElementById("map_canvas"),mapOptions); var infowindow = new google.maps.InfoWindow(); var marker, i; var bounds = new google.maps.LatLngBounds(); var image = 'http://193.168.3.5/Location&Tracking/images/taxiv7_trans_small.png'; for (i = 0; i < markers.length; i++) { var pos = new google.maps.LatLng(markers[i][1], markers[i][2]); bounds.extend(pos); marker = new google.maps.Marker({ position: pos, map: map, icon: image }); function placeMarker(location) { var marker = new google.maps.Marker({ position: location, map: map }); var infowindow = new google.maps.InfoWindow({ content: 'Latitude: ' + location.lat() + '<br>Longitude: ' + location.lng() }); infowindow.open(map,marker); } var infowindow = new google.maps.InfoWindow({ content:"Hello World!" }); google.maps.event.addListener(marker, 'click', (function(marker, i) { return function() { infowindow.setContent(markers[i][0]); infowindow.open(map, marker); } })(marker, i)); } map.fitBounds(bounds); } ```
Finally I found the solution to my own problem: The text inside the infowindow was displayed, but the text colour was white, the same as the background, which made it invisible. I changed the text colour to blue by applying HTML to the content. I was also setting the content: value and calling infowindow.open twice, which made no sense since the later call overwrote the previous one. Here I paste the corrected code; it works for me now. Thanks anyway. ``` function initialize() { //Create an object variable containing set of properties, to pass to the map var mapOptions = { mapTypeId: google.maps.MapTypeId.ROADMAP, mapTypeControl: false }; // Intializing map by calling constructor var map = new google.maps.Map(document.getElementById("map_canvas"),mapOptions); var infowindow = new google.maps.InfoWindow(); var marker, i; var bounds = new google.maps.LatLngBounds(); //set image to the icon marker var image = 'http://193.168.3.5/Location&Tracking/images/taxiv7_trans_small.png'; for (i = 0; i < markers.length; i++) { var pos = new google.maps.LatLng(markers[i][1], markers[i][2]); bounds.extend(pos); marker = new google.maps.Marker({ position: pos, map: map, icon: image }); google.maps.event.addListener(marker, 'click', (function(marker, i) { return function() { var infowindow = new google.maps.InfoWindow({ content:'<p style="color:blue;"> This is marker No.'+markers[i][0]+'</p>'}); infowindow.open(map, marker); } })(marker, i)); } map.fitBounds(bounds); } ```
What do I do in these situations to follow FP? I'm reading about FP and I have two basic questions: 1. FP says a function should take one input and give a single output. So what should I do with `void` methods? They don't return anything, right? 2. FP says a function should have a single responsibility, so how do we handle `log` statements inside a method? Doesn't that violate the rule? I'd like to know how these things are handled in Scala and Haskell. Thanks in advance.
I'm assuming you're reading a book called "Functional Programming", although it would help to know who the author is as well. In any case, these questions are relatively easy to answer and I'll give my answers with respect to Haskell because I don't know Scala. --- > > So what should I do with void methods? It doesn't return anything right? > > > There are no `void` methods in a pure functional language like Haskell. A pure function has no side effects, so a pure function without a return value is meaningless, something like ``` f :: Int -> () f x = let y = x * x + 3 in () ``` won't do any computation, `y` is never calculated and all inputs you give will return the same value. However, if you have an impure function, such as one that writes a file or prints something to the screen then it must exist in a *monadic context*. If you don't understand monads yet, don't worry. They take a bit to get used to, but they're a very powerful and useful abstraction that can make a lot of problems easier. A monad is something like `IO`, and in Haskell this takes a type parameter to indicate the value that can be stored inside this context. So you can have something like ``` putStrLn :: String -> IO () ``` Or ``` -- FYI: FilePath is an alias for String writeFile :: FilePath -> String -> IO () ``` these have side effects, denoted by the return value of `IO something`, and the `()` something means that there is no meaningful result from that operation. In Python 3, for example, the `print` function returns `None` because there isn't anything meaningful to return after printing a value to the screen. The `()` can also mean that a monadic context has a meaningful value, such as in `readFile` or `getLine`: ``` getLine :: IO String readFile :: FilePath -> IO String ``` When writing your `main` function, you could do something like ``` main = do putStrLn "Enter a filename:" fname <- getLine -- fname has type String writeFile fname "This text will be in a file" contents <- readFile fname putStrLn "I wrote the following text to the file:" putStrLn contents ``` --- > > FP says function should have single resresponsibility, then how do we handle log statements inside the method? That doesn't violate the rule? > > > Most functions don't need logging inside them. I know that sounds weird, but it's true. In Haskell and most other functional languages, you'll write a lot of small, easily testable functions that each do one step. It's very common to have lots of 1 or 2 line functions in your application. When you actually do need to do logging, say you're building a web server, there are a couple different approaches you can take. There is actually a monad out there called `Writer` that lets you aggregate values as you perform operations. These operations don't have to be impure and do IO, they can be entirely pure. However, a true logging framework that one might use for a web server or large application would likely come with its own framework. This is so that you can set up logging to the screen, to files, network locations, email, and more. This monad will wrap the `IO` monad so that it can perform these side effects. A more advanced one would probably use some more advanced libraries like monad transformers or extensible effects. These let you "combine" different monads together so you can use utilities for both at the same time. 
You might see code like ``` type MyApp a = LogT IO a -- log :: Monad m => LogLevel -> String -> LogT m () getConnection :: Socket -> MyApp Connection getConnection sock = do log DEBUG "Waiting for next connection" conn <- liftIO $ acceptConnection sock log INFO $ "Accepted connection from IP: " ++ show (connectionIP conn) return conn ``` I'm not expecting you to understand this code fully, but I hope you can see that it has logging and network operations mixed together. The `liftIO` function is a common one with monad transformers that "transforms" an IO operation into a new monad that wraps IO. This may sound pretty confusing, and it can be at first if you're used to Python-, Java-, or C++-like languages. I certainly was! But once I got used to thinking about problems in this different way, it made me wish I had these constructs in OOP languages all the time.
Do screenreaders ever access the content of an SVG? The code I'm working on needs to pass the tests from Tenon.io, and it's flagging issues that occur within SVGs, specifically Test ID 75 (This 'id' is being used more than once). This is failing because the SVGs have identical ids for similar elements because they were generated by the same program (Illustrator I believe) and more than one appears on a page. I would think that any content inside an SVG would be irrelevant and shouldn't be flagged or even traversed by either a screenreader or Tenon.io's crawler. I've tested the page where this issue appears in VoiceOver and it is ignored. Do other screenreaders do the same? Is there risk of any screenreader traversing an SVG DOM?
Yes, screen readers can read an SVG as long as that screen reader has been coded to read an SVG. SVGs have a number of accessibility tags which can be used and read by screen readers to describe what the SVG is representing or is meant to show. The main accessibility tags are - `<title>` which is used to title an SVG - `<desc>` which is used to give a description of what the SVG is showing - `<text>` which is text already in an SVG that the screen reader can access, instead of using vectors to simulate characters There are also normal attributes you can use which screen readers rely on to identify an object, such as `role` to specify what the SVG is used for (like an `img`). An alternative is just to create a fallback which is accessible to all screen readers, as some do better than others at reading certain things. Here are some good docs you can read up on which may be able to help: - [SitePoint's SVG Accessibility Guide](https://www.sitepoint.com/tips-accessible-svg/) - [CSS-Tricks SVG Accessibility Guide](https://css-tricks.com/accessible-svgs/) - [W3C Doc on SVG Accessibility API Mappings](https://www.w3.org/TR/svg-aam-1.0/) - [W3C Accessibility Features of SVG](https://www.w3.org/TR/SVG-access/)
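As a rough illustration of how these pieces fit together (the ids and wording here are just placeholders), an accessible inline SVG might look like:

```
<svg role="img" aria-labelledby="chart-title chart-desc" viewBox="0 0 100 100">
  <title id="chart-title">Monthly sales</title>
  <desc id="chart-desc">A bar chart showing sales rising from January to June.</desc>
  <!-- shapes/paths go here -->
</svg>
```

Pointing `aria-labelledby` at the `<title>`/`<desc>` ids is the pattern the guides above recommend, because some screen readers don't reliably announce `<title>` on its own.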
Polygon triangulation for globe Is it possible to fill all the polygons? [Codepen](https://codepen.io/St1myL/pen/JxWQBO?editors=0010). As I understand it, `ThreeGeoJSON` cannot fill polygons, only draw outlines. I've also tried [Earcut](https://github.com/mapbox/earcut) for triangulation. `drawThreeGeo(data, radius, 'sphere', {color: 'yellow' // I want to edit fill color of lands, not outline color})`
I suggest you to use better map: [countries.geojson](https://github.com/datasets/geo-countries/blob/master/data/countries.geojson) The solution consists of following steps, for each shape: 1. Put vertices inside of shape, so that when triangulated, it could bend around the globe, 2. Run <https://github.com/mapbox/delaunator> to build triangulated mesh, 3. Step 2 will create triangles *outside* the shape too, we need to remove them by looking into each triangle, and deciding if it belongs to shape or not, 4. Bend the triangulated mesh with `convertCoordinates` You can test my jsfiddle: <http://jsfiddle.net/mmalex/pg5a4132/> **Warning**: it is quite slow because of high level of detail of input. [![triangulate planet three.js with delaunay triangulation](https://i.stack.imgur.com/HjoQN.png)](https://i.stack.imgur.com/HjoQN.png) [![triangulate planet three.js wireframe](https://i.stack.imgur.com/zeuEQ.png)](https://i.stack.imgur.com/zeuEQ.png) [![hover triangulated shape with three.js](https://i.stack.imgur.com/aCFr5.png)](https://i.stack.imgur.com/aCFr5.png) The complete solution: ``` /* Draw GeoJSON Iterates through the latitude and longitude values, converts the values to XYZ coordinates, and draws the geoJSON geometries. */ let TRIANGULATION_DENSITY = 5; // make it smaller for more dense mesh function verts2array(coords) { let flat = []; for (let k = 0; k < coords.length; k++) { flat.push(coords[k][0], coords[k][1]); } return flat; } function array2verts(arr) { let coords = []; for (let k = 0; k < arr.length; k += 2) { coords.push([arr[k], arr[k + 1]]); } return coords; } function findBBox(points) { let min = { x: 1e99, y: 1e99 }; let max = { x: -1e99, y: -1e99 }; for (var point_num = 0; point_num < points.length; point_num++) { if (points[point_num][0] < min.x) { min.x = points[point_num][0]; } if (points[point_num][0] > max.x) { max.x = points[point_num][0]; } if (points[point_num][1] < min.y) { min.y = points[point_num][1]; } if (points[point_num][1] > max.y) { max.y = points[point_num][1]; } } return { min: min, max: max }; } function isInside(point, vs) { // ray-casting algorithm based on // http://www.ecse.rpi.edu/Homepages/wrf/Research/Short_Notes/pnpoly.html var x = point[0], y = point[1]; var inside = false; for (var i = 0, j = vs.length - 1; i < vs.length; j = i++) { var xi = vs[i][0], yi = vs[i][1]; var xj = vs[j][0], yj = vs[j][1]; var intersect = ((yi > y) != (yj > y)) && (x < (xj - xi) * (y - yi) / (yj - yi) + xi); if (intersect) inside = !inside; } return inside; } function genInnerVerts(points) { let res = []; for (let k = 0; k < points.length; k++) { res.push(points[k]); } let bbox = findBBox(points); let step = TRIANGULATION_DENSITY; let k = 0; for (let x = bbox.min.x + step / 2; x < bbox.max.x; x += step) { for (let y = bbox.min.y + step / 2; y < bbox.max.y; y += step) { let newp = [x, y]; if (isInside(newp, points)) { res.push(newp); } k++; } } return res; } function removeOuterTriangles(delaunator, points) { let newTriangles = []; for (let k = 0; k < delaunator.triangles.length; k += 3) { let t0 = delaunator.triangles[k]; let t1 = delaunator.triangles[k + 1]; let t2 = delaunator.triangles[k + 2]; let x0 = delaunator.coords[2 * t0]; let y0 = delaunator.coords[2 * t0 + 1]; let x1 = delaunator.coords[2 * t1]; let y1 = delaunator.coords[2 * t1 + 1]; let x2 = delaunator.coords[2 * t2]; let y2 = delaunator.coords[2 * t2 + 1]; let midx = (x0 + x1 + x2) / 3; let midy = (y0 + y1 + y2) / 3; let midp = [midx, midy]; if (isInside(midp, points)) { newTriangles.push(t0, t1, t2); } 
} delaunator.triangles = newTriangles; } var x_values = []; var y_values = []; var z_values = []; var progressEl = $("#progress"); var clickableObjects = []; var someColors = [0x909090, 0x808080, 0xa0a0a0, 0x929292, 0x858585, 0xa9a9a9]; function drawThreeGeo(json, radius, shape, options) { var json_geom = createGeometryArray(json); var convertCoordinates = getConversionFunctionName(shape); for (var geom_num = 0; geom_num < json_geom.length; geom_num++) { console.log("Processing " + geom_num + " of " + json_geom.length + " shapes"); // if (geom_num !== 17) continue; // if (geom_num > 10) break; if (json_geom[geom_num].type == 'Point') { convertCoordinates(json_geom[geom_num].coordinates, radius); drawParticle(y_values[0], z_values[0], x_values[0], options); } else if (json_geom[geom_num].type == 'MultiPoint') { for (let point_num = 0; point_num < json_geom[geom_num].coordinates.length; point_num++) { convertCoordinates(json_geom[geom_num].coordinates[point_num], radius); drawParticle(y_values[0], z_values[0], x_values[0], options); } } else if (json_geom[geom_num].type == 'LineString') { for (let point_num = 0; point_num < json_geom[geom_num].coordinates.length; point_num++) { convertCoordinates(json_geom[geom_num].coordinates[point_num], radius); } drawLine(y_values, z_values, x_values, options); } else if (json_geom[geom_num].type == 'Polygon') { let group = createGroup(geom_num); let randomColor = someColors[Math.floor(someColors.length * Math.random())]; for (let segment_num = 0; segment_num < json_geom[geom_num].coordinates.length; segment_num++) { let coords = json_geom[geom_num].coordinates[segment_num]; let refined = genInnerVerts(coords); let flat = verts2array(refined); let d = new Delaunator(flat); removeOuterTriangles(d, coords); let delaunayVerts = array2verts(d.coords); for (let point_num = 0; point_num < delaunayVerts.length; point_num++) { // convertCoordinates(refined[point_num], radius); convertCoordinates(delaunayVerts[point_num], radius); } // drawLine(y_values, z_values, x_values, options); drawMesh(group, y_values, z_values, x_values, d.triangles, randomColor); } } else if (json_geom[geom_num].type == 'MultiLineString') { for (let segment_num = 0; segment_num < json_geom[geom_num].coordinates.length; segment_num++) { let coords = json_geom[geom_num].coordinates[segment_num]; for (let point_num = 0; point_num < coords.length; point_num++) { convertCoordinates(json_geom[geom_num].coordinates[segment_num][point_num], radius); } drawLine(y_values, z_values, x_values); } } else if (json_geom[geom_num].type == 'MultiPolygon') { let group = createGroup(geom_num); let randomColor = someColors[Math.floor(someColors.length * Math.random())]; for (let polygon_num = 0; polygon_num < json_geom[geom_num].coordinates.length; polygon_num++) { for (let segment_num = 0; segment_num < json_geom[geom_num].coordinates[polygon_num].length; segment_num++) { let coords = json_geom[geom_num].coordinates[polygon_num][segment_num]; let refined = genInnerVerts(coords); let flat = verts2array(refined); let d = new Delaunator(flat); removeOuterTriangles(d, coords); let delaunayVerts = array2verts(d.coords); for (let point_num = 0; point_num < delaunayVerts.length; point_num++) { // convertCoordinates(refined[point_num], radius); convertCoordinates(delaunayVerts[point_num], radius); } // drawLine(y_values, z_values, x_values, options); drawMesh(group, y_values, z_values, x_values, d.triangles, randomColor) } } } else { throw new Error('The geoJSON is not valid.'); } } progressEl.text("Complete!"); } 
function createGeometryArray(json) { var geometry_array = []; if (json.type == 'Feature') { geometry_array.push(json.geometry); } else if (json.type == 'FeatureCollection') { for (var feature_num = 0; feature_num < json.features.length; feature_num++) { geometry_array.push(json.features[feature_num].geometry); } } else if (json.type == 'GeometryCollection') { for (var geom_num = 0; geom_num < json.geometries.length; geom_num++) { geometry_array.push(json.geometries[geom_num]); } } else { throw new Error('The geoJSON is not valid.'); } //alert(geometry_array.length); return geometry_array; } function getConversionFunctionName(shape) { var conversionFunctionName; if (shape == 'sphere') { conversionFunctionName = convertToSphereCoords; } else if (shape == 'plane') { conversionFunctionName = convertToPlaneCoords; } else { throw new Error('The shape that you specified is not valid.'); } return conversionFunctionName; } function convertToSphereCoords(coordinates_array, sphere_radius) { var lon = coordinates_array[0]; var lat = coordinates_array[1]; x_values.push(Math.cos(lat * Math.PI / 180) * Math.cos(lon * Math.PI / 180) * sphere_radius); y_values.push(Math.cos(lat * Math.PI / 180) * Math.sin(lon * Math.PI / 180) * sphere_radius); z_values.push(Math.sin(lat * Math.PI / 180) * sphere_radius); } function convertToPlaneCoords(coordinates_array, radius) { var lon = coordinates_array[0]; var lat = coordinates_array[1]; var plane_offset = radius / 2; z_values.push((lat / 180) * radius); y_values.push((lon / 180) * radius); } function drawParticle(x, y, z, options) { var particle_geom = new THREE.Geometry(); particle_geom.vertices.push(new THREE.Vector3(x, y, z)); var particle_material = new THREE.ParticleSystemMaterial(options); var particle = new THREE.ParticleSystem(particle_geom, particle_material); scene.add(particle); clearArrays(); } function drawLine(x_values, y_values, z_values, options) { var line_geom = new THREE.Geometry(); createVertexForEachPoint(line_geom, x_values, y_values, z_values); var line_material = new THREE.LineBasicMaterial(options); var line = new THREE.Line(line_geom, line_material); scene.add(line); clearArrays(); } function createGroup(idx) { var group = new THREE.Group(); group.userData.userText = "_" + idx; scene.add(group); return group; } function drawMesh(group, x_values, y_values, z_values, triangles, color) { var geometry = new THREE.Geometry(); for (let k = 0; k < x_values.length; k++) { geometry.vertices.push( new THREE.Vector3(x_values[k], y_values[k], z_values[k]) ); } for (let k = 0; k < triangles.length; k += 3) { geometry.faces.push(new THREE.Face3(triangles[k], triangles[k + 1], triangles[k + 2])); } geometry.computeVertexNormals() var mesh = new THREE.Mesh(geometry, new THREE.MeshLambertMaterial({ side: THREE.DoubleSide, color: color, wireframe: true })); clickableObjects.push(mesh); group.add(mesh); clearArrays(); } function createVertexForEachPoint(object_geometry, values_axis1, values_axis2, values_axis3) { for (var i = 0; i < values_axis1.length; i++) { object_geometry.vertices.push(new THREE.Vector3(values_axis1[i], values_axis2[i], values_axis3[i])); } } function clearArrays() { x_values.length = 0; y_values.length = 0; z_values.length = 0; } var scene = new THREE.Scene(); var raycaster = new THREE.Raycaster(); var camera = new THREE.PerspectiveCamera(32, window.innerWidth / window.innerHeight, 0.5, 1000); var radius = 200; camera.position.x = 140.7744005681177; camera.position.y = 160.30950538100814; camera.position.z = 131.8637122564268; var 
renderer = new THREE.WebGLRenderer(); renderer.setSize(window.innerWidth, window.innerHeight); renderer.setPixelRatio(window.devicePixelRatio); document.body.appendChild(renderer.domElement); var light = new THREE.HemisphereLight(0xffffbb, 0x080820, 1); scene.add(light); var light = new THREE.AmbientLight(0x505050); // soft white light scene.add(light); var geometry = new THREE.SphereGeometry(radius, 32, 32); var material = new THREE.MeshPhongMaterial({ color: 0x1e90ff }); var sphere = new THREE.Mesh(geometry, material); scene.add(sphere); var test_json = $.getJSON("https://raw.githubusercontent.com/datasets/geo-countries/master/data/countries.geojson", function(data) { drawThreeGeo(data, radius + 1, 'sphere', { color: 'yellow' }) }); var controls = new THREE.TrackballControls(camera); controls.rotateSpeed *= 0.5; controls.zoomSpeed *= 0.5; controls.panSpeed *= 0.5; controls.minDistance = 10; controls.maxDistance = 5000; function render() { controls.update(); requestAnimationFrame(render); renderer.setClearColor(0x1e90ff, 1); renderer.render(scene, camera); } render() function convert_lat_lng(lat, lng, radius) { var phi = (90 - lat) * Math.PI / 180, theta = (180 - lng) * Math.PI / 180, position = new THREE.Vector3(); position.x = radius * Math.sin(phi) * Math.cos(theta); position.y = radius * Math.cos(phi); position.z = radius * Math.sin(phi) * Math.sin(theta); return position; } // this will be 2D coordinates of the current mouse position, [0,0] is middle of the screen. var mouse = new THREE.Vector2(); var hoveredObj; // this objects is hovered at the moment // Following two functions will convert mouse coordinates // from screen to three.js system (where [0,0] is in the middle of the screen) function updateMouseCoords(event, coordsObj) { coordsObj.x = ((event.clientX - renderer.domElement.offsetLeft + 0.5) / window.innerWidth) * 2 - 1; coordsObj.y = -((event.clientY - renderer.domElement.offsetTop + 0.5) / window.innerHeight) * 2 + 1; } function onMouseMove(event) { updateMouseCoords(event, mouse); latestMouseProjection = undefined; clickedObj = undefined; raycaster.setFromCamera(mouse, camera); { var intersects = raycaster.intersectObjects(clickableObjects); let setGroupColor = function(group, colorHex) { for (let i = 0; i < group.children.length; i++) { if (!group.children[i].userData.color) { group.children[i].userData.color = hoveredObj.parent.children[i].material.color.clone(); group.children[i].material.color.set(colorHex); group.children[i].material.needsUpdate = true; } } } let resetGroupColor = function(group) { // set all shapes of the group to initial color for (let i = 0; i < group.children.length; i++) { if (group.children[i].userData.color) { group.children[i].material.color = group.children[i].userData.color; delete group.children[i].userData.color; group.children[i].material.needsUpdate = true; } } } if (intersects.length > 0) { latestMouseProjection = intersects[0].point; // reset colors for previously hovered group if (hoveredObj) { resetGroupColor(hoveredObj.parent); } hoveredObj = intersects[0].object; if (!hoveredObj.parent) return; // set colors for hovered group setGroupColor(hoveredObj.parent, 0xff0000); } else { if (!hoveredObj || !hoveredObj.parent) return; // nothing is hovered => just reset colors on the last group resetGroupColor(hoveredObj.parent); hoveredObj = undefined; console.log("<deselected>"); } } } window.addEventListener('mousemove', onMouseMove, false); ```
Floating point resolution at a given number I would like to know the epsilon of a floating point number around a given value. `std::numeric_limits<floating_point_type>::epsilon()` provides that only for the number 1.0, while I would like a function to work on any number. Is there any standard library solution to this? If not - how should I implement the function?
Well, the easiest solution to find the epsilon immediately above the value (that is, the distance from that value to the next representable value) would just be ``` std::nextafter(x, std::numeric_limits<floating_point_type>::infinity()) - x ``` Similarly to find the epsilon below the value, you could do ``` x - std::nextafter(x, -std::numeric_limits<floating_point_type>::infinity()) ``` Note that those two won't be the same if x is an exact power of two. Now, there is one *slight* caveat there: the calculated epsilon above `FLT_MAX` will be infinity (arguably that's *kind of* the correct answer, but it doesn't quite match IEEE-754's rounding rules) and the epsilon above infinity will be NaN (which, well, I don't know how I feel about that). In all other cases, the result will be exact.
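If you need this in more than one place, you could wrap it in a tiny helper — an untested sketch (the function names are mine):

```
#include <cmath>
#include <limits>

// Distance from x to the next representable value towards +infinity.
template <typename T>
T eps_above(T x) {
    return std::nextafter(x, std::numeric_limits<T>::infinity()) - x;
}

// Distance from x to the next representable value towards -infinity.
template <typename T>
T eps_below(T x) {
    return x - std::nextafter(x, -std::numeric_limits<T>::infinity());
}
```

As a sanity check, `eps_above(1.0)` should give you back exactly `std::numeric_limits<double>::epsilon()`.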
ANSI C do you have to use malloc() when creating a struct? Let's say I have this `struct` in ANSI C: ``` typedef struct _point { float x; float y; } Point; ``` and this function to create this `struct`: ``` Point createpoint(float x, float y) { Point p; p.x = x; p.y = y; return p; } ``` This allows me to create a `struct` with this function i.e: ``` int main() { Point pointOne = createpoint(5, 6); Point pointTwo = createpoint(10, 4); float distance = calculatedistancefunc(pointOne, pointTwo); /* ...other stuff */ return 0; } ``` Someone told me that this code is not valid, because the `struct` does not get `malloc`'d in the `createpoint(float x, float y)` function before it gets returned, and that the `struct` will be deleted. However, when I use my `struct` like this, it doesn't seem to get deleted. So my question is: do I have to `malloc` this `struct`, and why? / why not?
Whatever you are doing is entirely correct. The statement - ``` return p; ``` in the function returns a **copy** of the local variable `p`. But if you want the same object that was created in the function, then you need to `malloc` it. However, you need to `free` it later. ``` Point createpoint(float x, float y) { Point p; p.x = x; p.y = y; return p; } // p is no longer valid from this point. So, what you are returning is a copy of it. ``` But - ``` Point* createpoint(float x, float y) { Point *p = malloc(sizeof(Point)); p->x = x; p->y = y; return p; }// Now you return the object that p is pointing to. ```
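For completeness, this is how a caller would use the two variants — a small sketch where I've renamed the pointer-returning version `createpoint_heap` just to keep the two apart (the code above reuses the name `createpoint`):

```
#include <stdlib.h>

int main(void) {
    /* Value version: the struct is copied on return, nothing to free. */
    Point p1 = createpoint(5.0f, 6.0f);

    /* Heap version: the caller owns the memory and must free it. */
    Point *p2 = createpoint_heap(10.0f, 4.0f);

    /* ... use p1 and p2 ... */

    free(p2);
    return 0;
}
```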
Why are Reason Arrays Mutable? I suppose I am asking about the rationale behind this design decision. Reason arrays being mutable sticks out as an aberration amongst its other data structures (list, record, hashmap, set) that are immutable by default. Is there a reason for this? Is there an immutable alternative?
There's really no such thing as "Reason arrays". Reason is an alternative syntax to OCaml, and OCaml has mutable arrays. Reason is, however, typically used with the BuckleScript back-end that compiles to JavaScript, which also has mutable arrays, but the reasons why are slightly different. - In OCaml proper, arrays are used when you want the characteristics of an array, typically for its performance profile, but you might also want to use them for their memory layout, which is very straightforward and easy to interact with from other languages, but also necessary to be able to communicate with hardware through the access and mutation of shared address spaces. - With BuckleScript, arrays map straight to JavaScript arrays, which in addition to being mutable are also dynamically sized and so effectively array-lists. Here too, you'd typically use them for performance and interoperation with JavaScript, but because the implementation is slightly different, the performance characteristics are subtly different as well. In both cases, if you want something array-like but immutable you'd typically use a list, but there are other options as well, such as [Immutable.re's Vector](https://facebookincubator.github.io/immutable-re/api.html#-vector). It might be better to ask why OCaml doesn't also include an immutable array data type in its standard library. I'm not sure anyone can give a definite answer to that, but it might just be because it's not been requested enough, perhaps because list does the immutable array-like thing well enough.
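To make the mutable/immutable contrast concrete, here is a rough Reason-syntax sketch (the variable names are just illustrative):

```
/* Arrays are mutable and fixed-length */
let arr = [|1, 2, 3|];
arr[0] = 10; /* in-place update is allowed */

/* Lists are immutable; "updating" builds a new list */
let xs = [1, 2, 3];
let ys = [0, ...xs]; /* xs is untouched */
```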
Positioning an element inside the Canvas by its center (instead of the top left corner) using only XAML in WPF The common question about positioning an element inside a `Canvas` is "How to position the center of the element (instead of the top left corner)". - [WPF: Resizing a circle, keeping the center point instead of TopLeft?](https://stackoverflow.com/questions/3423834/wpf-resizing-a-circle-keeping-the-center-point-instead-of-topleft) - [WPF Center Ellipse at X, Y](https://stackoverflow.com/questions/5422897/wpf-center-ellipse-at-x-y) - [WPF element positioning on a Canvas](https://stackoverflow.com/questions/3682111/wpf-element-positioning-on-a-canvas) Several solutions are presented, but they all have drawbacks. The easiest solution is to accommodate the element size while setting the `Canvas.Left` and `Canvas.Top` properties programmatically. This works, but only once. This solution doesn't support bindings and it will break when the element size is changed. You also cannot set the `Canvas.Left` or `Canvas.Top` this way using only XAML. Another set of solutions involves translate transformations utilizing either `RenderTransform` or `Margin`. These solutions require binding some property to `-0.5 * Width` or `-0.5 * Height`. Such a binding requires creating a custom `ValueConverter` and is impossible to create using only XAML. **So, is there a simple way to position an element inside canvas so that its `Canvas.Left` and `Canvas.Top` correspond to the element's center and both size and position properties can be bound to some other properties?**
*XAML and bindings seem very powerful, but sometimes there are simple problems that require very complex solutions. In my bindings library creating such a binding would be as easy as writing `element.Center = position` or `element.TopLeft = position - element.Size / 2`, but don't let me get carried away.* I've found a very simple solution which uses only XAML and supports binding both the size and position properties of the element. It seems that when a WPF control with alignment set to `Stretch` or `Center` is placed inside the canvas, the element "gravitates" towards being centered at the `(Canvas.Left, Canvas.Top)` point (the state that we desire), but is stopped by the "angle plate" placed at the same `(Canvas.Left, Canvas.Top)` point. How do I know about this "gravitation"? It's evident when you ease the block by setting the `Margin` of the element to a negative value. Setting a negative margin allows the element to move towards its center goal. The element moves until the `Margin` reaches `(-Height / 2, -Width / 2)` so that the element becomes perfectly centered at the `(Canvas.Left, Canvas.Top)` point. Further changes don't cause any movement since the element is already perfectly positioned. **Solution: set `Margin` to a huge negative value, e.g. `Margin="-1000000"`.** So in the following code the ellipses are both centered at the (200, 200) point. The first ellipse has `Left` and `Top` properties corresponding to the ellipse center, allowing them to be easily bound to some other objects' properties. ``` <Canvas> <Ellipse Width="100" Height="100" Canvas.Left="200" Canvas.Top="200" Opacity="0.5" Fill="Red" Margin="-100000" /> <Ellipse Width="100" Height="100" Canvas.Left="150" Canvas.Top="150" Opacity="0.5" Fill="Blue" /> </Canvas> ``` The bad thing is this solution only works in WPF. Silverlight and WinRT don't have the described behavior.
Zoom background image only I want to zoom the background image only ``` .prod_img:hover { webkit-transform: scale(1.04); -moz-transform: scale(1.04); -o-transform: scale(1.04); -ms-transform: scale(1.04); transform: scale(1.04); -webkit-transition: all 2s ease-in-out; -moz-transition: all 2s ease-in-out; -o-transition: all 2s ease-in-out; -ms-transition: all 2s ease-in-out; transition: all 2s ease-in-out; } .prod_img { content: ' '; -webkit-transition: all 2s ease-in-out; -moz-transition: all 2s ease-in-out; -o-transition: all 2s ease-in-out; -ms-transition: all 2s ease-in-out; transition: all 2s ease-in-out; height: 580px; width: 300px; } .protransparentbg { position: absolute; background: rgba(51, 51, 51, .8); } ``` ``` <div id="prod_main"> <div id="product_content"> <li class="prod_img prod_img1" id="prod_img1" style="background-image: url(http://images.all-free-download.com/images/graphicthumb/beautiful_landscape_picture_02_hd_pictures_166284.jpg);background-size: cover;background-position: center center;"> <div class="protransparentbg"> <h4 class="">FIBER FLOOR MAT</h4> </div> </li> </div> </div> ``` This is my code, but the zoom effect applies to the whole element, so it also affects the content inside the image. How do I apply the effect only to the background image?
You have to change the CSS and use a `pseudo-element` to get this effect. Remove the `background-image` from the HTML and set it on the CSS `pseudo-element` instead: ``` .prod_img { -webkit-transition: all 2s ease-in-out; -moz-transition: all 2s ease-in-out; -o-transition: all 2s ease-in-out; -ms-transition: all 2s ease-in-out; transition: all 2s ease-in-out; height: 580px; width: 300px; position: relative; } .prod_img:before { content: ' '; position: absolute; top: 0px; left: 0px; width: 100%; height: 100%; background-image: url(http://images.all-free-download.com/images/graphicthumb/beautiful_landscape_picture_02_hd_pictures_166284.jpg); background-size: cover; background-position: center center; } .protransparentbg { position: absolute; left: 20px; background: rgba(51, 51, 51, .8); } .prod_img:hover:before { -webkit-transform: scale(1.04); -moz-transform: scale(1.04); -o-transform: scale(1.04); -ms-transform: scale(1.04); transform: scale(1.04); -webkit-transition: all 2s ease-in-out; -moz-transition: all 2s ease-in-out; -o-transition: all 2s ease-in-out; -ms-transition: all 2s ease-in-out; transition: all 2s ease-in-out; } ``` ``` <div id="prod_main"> <div id="product_content"> <li class="prod_img prod_img1" id="prod_img1"> <div class="protransparentbg"> <h4 class="">FIBER FLOOR MAT</h4> </div> </li> </div> </div> ```
A better query to find the list of Employees who have the max salary in their department? I have a table: `[tblEmp]` ``` EmpId | EmpName | DeptId ``` and a table `[tblSalary]` ``` EmpId | Salary ``` and I need to find the list of employees who have the max salary in their department. I could achieve this by: ``` SELECT * FROM tblEmp JOIN tblSal ON tblSal.EmpId = tblEmp.EmpId WHERE LTRIM(STR(deptid)) + LTRIM(STR(salary)) IN ( SELECT LTRIM(STR(deptid)) + LTRIM(STR(MAX(salary))) FROM tblSal JOIN tblEmp ON tblSal.EmpId = tblEmp.EmpId GROUP BY DeptId ) ``` Is there a better way to get this list?
You could try using [ROW\_NUMBER](http://technet.microsoft.com/en-us/library/ms186734.aspx). > > Returns the sequential number of a row within a partition of a result > set, starting at 1 for the first row in each partition. > > > Something like ``` ;WITH Employees AS ( SELECT e.*, ROW_NUMBER() OVER(PARTITION BY e.DeptId ORDER BY s.salary DESC) RowID FROM [tblEmp] e INNER JOIN [tblSalary] s ON e.EmpId = s.EmpId ) SELECT * FROM Employees WHERE RowID = 1 ``` This will, however, not return all employees that share the same top salary in the same department. For that you might want to look at [RANK (Transact-SQL)](http://technet.microsoft.com/en-us/library/ms176102.aspx) or [DENSE\_RANK (Transact-SQL)](http://technet.microsoft.com/en-us/library/ms173825.aspx) instead of ROW\_NUMBER (see the sketch after the quotes below). > > **Rank** : Returns the rank of each row within the partition of a result set. The > rank of a row is one plus the number of ranks that come before the row > in question. > > > If two or more rows tie for a rank, each tied row receives the same > rank. > > > **Dense\_Rank** : Returns the rank of rows within the partition of a result > set, without any gaps in the ranking. The rank of a row is one plus > the number of distinct ranks that come before the row in question. > > >
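As a rough, untested sketch of the tie-friendly version (same query shape, just a different ranking function):

```
;WITH Employees AS
(
    SELECT  e.*,
            DENSE_RANK() OVER (PARTITION BY e.DeptId ORDER BY s.Salary DESC) SalaryRank
    FROM    [tblEmp] e
            INNER JOIN [tblSalary] s ON e.EmpId = s.EmpId
)
SELECT *
FROM Employees
WHERE SalaryRank = 1
```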
How to protect individual cell using phpspreadsheet I want to protect particular cell content from being amended. When I tried to protect whole sheet, no problem. ``` $sheet->getProtection()->setSheet(true)->setDeleteRows(true); ``` But, could not set protection for individual cell. I tried the following codes. 1 ``` $sheet->protectCellsByColumnAndRow(0, 1, 100, 100, 'asdf'); ``` 2 ``` $sheet->protectCells('A1','password',false); ``` Thanks in advance.
Here is the solution. First, enable the worksheet protection. Then, unlock all the cells by changing the default protection style of the spreadsheet. After that, lock the cells you want by specifying their coordinates. The worksheet protection applies to locked cells only, so the cells that you locked can no longer be edited when you open that worksheet. ``` $spreadsheet->getActiveSheet()->getProtection()->setSheet(true); $spreadsheet->getDefaultStyle()->getProtection()->setLocked(false); $sheet->getStyle('A1')->getProtection()->setLocked(\PhpOffice\PhpSpreadsheet\Style\Protection::PROTECTION_PROTECTED); ``` [link-unlock all cells](https://stackoverflow.com/questions/48376624/unlock-all-cells-on-phpexcel) and [link-lock individual cell](https://stackoverflow.com/questions/17046207/phpexcel-lock-particular-cell) helped me.
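If you also want Excel to ask for a password before the sheet can be unprotected, the worksheet protection object should accept one as well — a small, untested sketch (the password string is just a placeholder):

```
$spreadsheet->getActiveSheet()
    ->getProtection()
    ->setPassword('change-me'); // placeholder password, pick your own
```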
LINQ expression to optimize syntax? ``` foreach (var item in mainCanvas.Children) { if (item is Button) { (item as Button).Content = "this is a button"; } } ``` Can I use LINQ or other feature of .NET 4 to be more concise (maybe performant)?
You can use [**`Enumerable.OfType`**](http://msdn.microsoft.com/en-us/library/bb360913.aspx): ``` foreach (var button in mainCanvas.Children.OfType<Button>()) { button.Content = "this is a button"; } ``` --- **Performance Measurements** **Method 1**: OPs original suggestion ``` foreach (var item in mainCanvas.Children) { if (item is Button) { (item as Button).Content = "this is a button"; } } ``` **Method 2**: OfType ``` foreach (var button in mainCanvas.Children.OfType<Button>()) { button.Content = "this is a button"; } ``` **Method 3**: Only cast once ``` foreach (var item in mainCanvas.Children) { Button button = item as Button; if (button != null) { button.Content = "this is a button"; } } ``` **Method 4**: for loop: ``` List<object> children = mainCanvas.Children; for (int i = 0; i < children.Count; ++i) { object item = children[i]; if (item is Button) { (item as Button).Content = "this is a button"; } } ``` **Results** ``` Iterations per second Method 1: 18539180 Method 2: 7376857 Method 3: 19280965 Method 4: 20739241 ``` **Conclusion** - The biggest improvement can be gained by using a simple `for` loop instead of `foreach`. - It is also possible to improve performance slightly by casting only once. - Using `OfType` is considerably slower. But remember to optimize readability first, and only optimize performance if you have performance profiled and found that this specific code is the performance bottleneck.
Converting list to dataframe in R So I have a list, say: ``` L1 <- list(1:10, 5:14, 10:19) ``` Now I am trying to get the output of the list as dataframe such that my output looks: ``` 1. 1 2 3 4 5 6 7 8 9 10 2. 5 6 7 8 9 10 11 12 13 14 3. 10 11 12 13 14 15 16 17 18 19 ``` I am using ``` as.data.frame(L1, row.names = TRUE) ``` and ``` list_vect2df(L1) ``` But none of them are giving the required output
You can `unlist` and use `matrix`, then convert to `data.frame`. It seems to be faster for this case. ``` as.data.frame(matrix(unlist(L1),nrow=length(L1),byrow=TRUE)) microbenchmark::microbenchmark( a= map_dfr(L1, ~as.data.frame(t(.x))), b= do.call(rbind, lapply(L1, function(x) as.data.frame(t(x)))), c= as.data.frame(t(as.data.frame(L1))), d= data.table::transpose(as.data.frame(L1)), e= as.data.frame(matrix(unlist(L1),nrow=length(L1),byrow=TRUE)), times = 100,unit = "relative") # Unit: relative # expr min lq mean median uq max neval # a 9.146545 8.548656 8.859087 8.859051 9.449237 7.265274 100 # b 13.879833 11.523000 11.433790 10.924726 10.797251 24.012107 100 # c 12.719835 10.635809 10.442108 10.229913 10.259789 7.020377 100 # d 10.439881 9.143530 9.205734 8.859026 9.176125 6.624454 100 # e 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 100 ```
How to set Notepad++ as TortoiseSVN default editor? Sometimes I browse a repository and want to look at some files there, without checking them out (let's say I'm looking for a particular file). I've got three choices - double clicking the file opens the default editor. For .cs files that would be the big and slow Visual Studio. And, naturally, it has to open a fresh copy. - Right-clicking also offers the option to "Open With..." and then I can select what program I want - Right clicking also offers the option to "Edit" which then opens Notepad. I'd like to use Notepad++ as the default editor. Or it can be the "Edit" entry on the context menu that opens it, but not the full and lengthy "Open with..." dialog. Can I do that? I haven't yet found such a setting, but maybe there's some hidden option or something.
Like Mand Beckett said, and this is the full registry path: ``` HKEY_LOCAL_MACHINE\SOFTWARE\Classes\SystemFileAssociations\text\shell\edit\command ``` Change the (Default) value in this key. Remember to leave %1 there. The problem with the solution above is that it will only handle files that are perceived as text (that have the value PerceivedType=text in the registry). Most text files are already perceived as such in the registry, but you can't open, for instance, a .lib file in this manner. To be able to open really all files from the repository browser with Notepad++, edit the (Default) value of the following key (add the key if it's not there): ``` HKEY_LOCAL_MACHINE\SOFTWARE\Classes\*\shell\edit\command ``` NOTE: In the registry editor the new key will have a (Default) value of type REG\_SZ, not REG\_EXPAND\_SZ. If you want REG\_EXPAND\_SZ you must create the key with a .reg file (see the example at the end of this answer). Another option is to set all relevant file types to open with Notepad++ by default instead of Visual Studio. Start Notepad++ as administrator and in Settings > Preferences > File Association just add those file associations that you use.
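For reference, a .reg file for the wildcard key could look roughly like this — the Notepad++ install path is an assumption, so adjust it to wherever notepad++.exe lives on your machine:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\*\shell\edit\command]
@="\"C:\\Program Files\\Notepad++\\notepad++.exe\" \"%1\""
```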
Does Scala have guards? I started learning scala a few days ago and when learning it, I am comparing it with other [functional programming](http://en.wikipedia.org/wiki/Functional_programming) languages like ([Haskell](http://en.wikipedia.org/wiki/Haskell_%28programming_language%29), [Erlang](http://en.wikipedia.org/wiki/Erlang_%28programming_language%29)) which I had some familiarity with. Does Scala has [guard](http://en.wikipedia.org/wiki/Guard_%28computer_science%29) sequences available? I went through pattern matching in Scala, but is there any concept equivalent to guards with `otherwise` and all?
Yes, it uses the keyword `if`. From the [Case Classes](http://www.scala-lang.org/node/107) section of A Tour of Scala, near the bottom: ``` def isIdentityFun(term: Term): Boolean = term match { case Fun(x, Var(y)) if x == y => true case _ => false } ``` (This isn't mentioned on the [Pattern Matching](http://www.scala-lang.org/node/120) page, maybe because the Tour is such a quick overview.) --- In Haskell, `otherwise` is actually just a variable bound to `True`. So it doesn't add any power to the concept of pattern matching. You can get it just by repeating your initial pattern without the guard: ``` // if this is your guarded match case Fun(x, Var(y)) if x == y => true // and this is your 'otherwise' match case Fun(x, Var(y)) if true => false // you could just write this: case Fun(x, Var(y)) => false ```
How to use Perl's Text::Aspell to spellcheck a text? I want to add spell checking to my Perl program. Looks like [Text::Aspell](https://metacpan.org/pod/Text::Aspell) should do what I need, but it only offers a function to check single words. ``` use strict; use warnings; use Text::Aspell; my $input = "This doesn't look too bad. Me&you. with/without. 1..2..3..go!"; my $aspell = Text::Aspell->new(); $aspell->set_option('lang', 'en'); print "$input: ", $aspell->check($input), "\n"; ``` This prints: ``` This doesn't look too bad. Me&you. with/without. 1..2..3..go!: 0 ``` So clearly it does only take single words, then how do I separate a text into words? A simple `split` at white space: ``` foreach my $word (split /\s/, $input) { next unless($word =~ /\w/); print "$word: ", $aspell->check($word), "\n"; } ``` This gets problems with punctuation marks that don't have white space: ``` This: 1 doesn't: 1 look: 1 too: 1 bad.: 0 Me&you.: 0 with/without.: 0 1..2..3..go!: 0 ``` I guess I could mention the punctuation characters: ``` foreach my $word (split qr{[,.;!:\s#"\?&%@\(\)\[\]/\d]}, $input) { next unless($word =~ /\w/); print "$word: ", $aspell->check($word), "\n"; } ``` This gets reasonable output: ``` This: 1 doesn't: 1 look: 1 too: 1 bad: 1 Me: 1 you: 1 with: 1 without: 1 go: 1 ``` but seems clumsy and I'm wondering if there is an easier (less code for me to write, less brittle) way. How do I spell check a text?
`Text::Aspell` has no options to check a whole string, and instead only checks single words. Instead of splitting the string by yourself, I would suggest to use a module that already does that for you, such as [`Text::SpellChecker`](https://metacpan.org/pod/Text::SpellChecker). For instance: ``` use strict; use warnings; use Text::SpellChecker; use feature 'say'; my $input = "This doesn't look too bad. Me&you. with/without. 1..2..3..go!"; my $checker = Text::SpellChecker->new(text => $input); $checker->set_options(aspell => { 'lang' => 'en' }); while (my $word = $checker->next_word) { say "Invalid word: $word"; } ``` Or, ``` my $checker = Text::SpellChecker->new(text => $input); $checker->set_options(aspell => { 'lang' => 'en' }); if ($checker->next_word) { say "The string is not valid."; } else { say "The string is valid."; } ``` The [documentation](https://metacpan.org/pod/Text::SpellChecker#SYNOPSIS) of the module shows how you could interactively replace erroneous words: ``` while (my $word = $checker->next_word) { print $checker->highlighted_text, "\n", "$word : ", (join "\t", @{$checker->suggestions}), "\nChoose a new word : "; chomp (my $new_word = <STDIN>); $checker->replace(new_word => $new_word) if $new_word; } ``` If you want to check each word of the input string individually yourself, you could have a look at how `Text::SpellCheck` splits the string into words (this is done by the [`next_word`](https://metacpan.org/dist/Text-SpellChecker/source/lib/Text/SpellChecker.pm#L278-295) function). It uses the following regex: ``` while ($self->{text} =~ m/\b(\p{L}+(?:'\p{L}+)?)/g) { ... } ```
Sequelize: How to compare the year of a date in a query I'm trying to make this query: ``` SELECT * FROM TABLEA AS A WHERE YEAR(A.dateField)='2016' ``` How can I perform the query above in Sequelize style? ``` TABLEA.findAll({ where:{}//???? } ``` Thanks!
``` TABLEA.findAll({ where: sequelize.where(sequelize.fn('YEAR', sequelize.col('dateField')), 2016) }); ``` You have to use `.where` here, because the lefthand side of the expression (the key) is an object, so it cannot be used in the regular POJO style as an object key. If you want to combine it with other conditions you could do: ``` TABLEA.findAll({ where: { $and: [ sequelize.where(sequelize.fn('YEAR', sequelize.col('dateField')), 2016), { foo: 'bar' } ] } }); ``` <https://sequelize.org/v3/docs/querying/#operators>
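Note that `$and` is the Sequelize v3 operator style from the linked docs; on newer Sequelize versions (v5+), where operator aliases are disabled by default, the equivalent would look something like this (a sketch, adjust to your setup):

```
const { Op, fn, col, where } = require('sequelize');

TABLEA.findAll({
  where: {
    [Op.and]: [
      where(fn('YEAR', col('dateField')), 2016),
      { foo: 'bar' }
    ]
  }
});
```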
Find the nearest/closest value in a sorted List I was wondering if it is possible to find the closest element in a sorted `List` for a element **that is not** in the list. For example if we had the values [1,3,6,7] and we are looking for the element closest to 4, it should return 3, because 3 is the biggest number in the array, that is smaller than 4. I hope it makes sense, because English is not my native language.
Because the collection is sorted, you can do a modified binary search in `O( log n )` : ``` public static int search(int value, int[] a) { if(value < a[0]) { return a[0]; } if(value > a[a.length-1]) { return a[a.length-1]; } int lo = 0; int hi = a.length - 1; while (lo <= hi) { int mid = (hi + lo) / 2; if (value < a[mid]) { hi = mid - 1; } else if (value > a[mid]) { lo = mid + 1; } else { return a[mid]; } } // lo == hi + 1 return (a[lo] - value) < (value - a[hi]) ? a[lo] : a[hi]; } ``` Since most of the code above is binary search, you can leverage the [`binarySearch(...)`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/Arrays.html#binarySearch(byte%5B%5D,byte)) provided in the std library and examine the value of the `insertion point`: ``` public static int usingBinarySearch(int value, int[] a) { if (value <= a[0]) { return a[0]; } if (value >= a[a.length - 1]) { return a[a.length - 1]; } int result = Arrays.binarySearch(a, value); if (result >= 0) { return a[result]; } int insertionPoint = -result - 1; return (a[insertionPoint] - value) < (value - a[insertionPoint - 1]) ? a[insertionPoint] : a[insertionPoint - 1]; } ```
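A quick usage check against the example from the question (works with either variant above; note that both return the nearest value in either direction — if you strictly want the largest element below the target, you could return `a[hi]` at the end of the first method instead):

```
public static void main(String[] args) {
    int[] a = {1, 3, 6, 7};
    System.out.println(search(4, a)); // 3  (3 is closer to 4 than 6 is)
    System.out.println(search(5, a)); // 6  (6 is closer to 5 than 3 is)
    System.out.println(search(9, a)); // 7  (clamped to the last element)
}
```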
Calculate "energy" of columns with pandas I try to calculate the signal energy of my `pandas.DataFrame` following this [formula for discrete-time signal](https://en.wikipedia.org/wiki/Energy_(signal_processing)). I tried with [`apply`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html#pandas.DataFrame.apply) and `applymap`, also with reduce, as suggested here: [How do I columnwise reduce a pandas dataframe?](https://stackoverflow.com/questions/31872693/how-do-i-columnwise-reduce-a-pandas-dataframe) . But all I tried resulted doing the operation for each element, not for the whole column. This not a signal processing specific question, it's just an example how to apply a "summarize" (*I don't know the right term for this*) function to columns. My workaround, was to get the raw `numpy.array` data and do my calculations. But I am pretty sure there is a pandatic way to do this (and surly a more numpyic way). ``` import pandas as pd import numpy as np d = np.array([[2, 2, 2, 2, 2, 2, 2, 2, 2, 2], [0, -1, 2, -3, 4, -5, 6, -7, 8, -9], [0, 1, -2, 3, -4, 5, -6, 7, -8, 9]]).transpose() df = pd.DataFrame(d) energies = [] # a same as d a = df.as_matrix() assert(np.array_equal(a, d)) for column in range(a.shape[1]): energies.append(sum(a[:,column] ** 2)) print(energies) # [40, 285, 285] ``` Thanks in advance!
You could do the following for dataframe output - ``` (df**2).sum(axis=0) # Or (df**2).sum(0) ``` For performance, we could work with array extracted from the dataframe - ``` (df.values**2).sum(axis=0) # Or (df.values**2).sum(0) ``` For further performance boost, there's `np.einsum` - ``` a = df.values out = np.einsum('ij,ij->j',a,a) ``` Runtime test - ``` In [31]: df = pd.DataFrame(np.random.randint(0,9,(1000,30))) In [32]: %timeit (df**2).sum(0) 1000 loops, best of 3: 518 µs per loop In [33]: %timeit (df.values**2).sum(0) 10000 loops, best of 3: 40.2 µs per loop In [34]: def einsum_based(a): ...: a = df.values ...: return np.einsum('ij,ij->j',a,a) ...: In [35]: %timeit einsum_based(a) 10000 loops, best of 3: 32.2 µs per loop ```
Vue with jest - Test with asynchronous call How to make my test wait for the result of my api? I'm using vue and jest to test my components. I want to test the method that writes a client to my database. In my component I have the following method: ``` methods: { onSubmitClient(){ axios.post(`urlApi`, this.dados).then(res => { return res; }) } } ``` **in my test** ``` describe('login.vue', () => { let wrapper beforeAll(()=>{ wrapper = mount(client, { stubs: ['router-link'], store, data() { return { dados: { name: 'tes name', city: 'test city' }, }; } }) }) it('store client', () => { res = wrapper.vm.onSubmitLogin() console.log(res); }) }) ``` My test does not wait for the API call to complete. I need to wait for the API call to know if the test worked. How can I make my test wait for API return?
There are several issues in your code. First, you cannot `return` [from an async call](https://stackoverflow.com/questions/14220321/how-do-i-return-the-response-from-an-asynchronous-call). Instead, you should probably be setting up some data in your `onSubmitClient`, and returning the whole `axios` call, which is a Promise. For instance: ``` onSubmitClient(){ return axios.post(`urlApi`, this.dados).then(res => { this.result = res; return res; }) } ``` I assume the method here is storing a `result` from the server. Maybe you don't want that; it is just an example. I'll come back to it later. Ok, so now, you could call `onSubmitClient` in your wrapper and see if `this.result` is already set. As you already know, this does not work out of the box. In order for a jest test to wait for asynchronous code, [you need either to provide a `done` callback function or return a promise](https://jestjs.io/docs/en/asynchronous.html). I'll show an example with the former: ``` it('store client', (done) => { wrapper.vm.onSubmitClient().then((res) => { expect(wrapper.vm.dados).toEqual(res); done(); }) }); ``` Now this code should just work, but still there is an issue with it, as @jonsharpe says in a comment. You usually don't want to perform **real network requests** in unit tests because they are slow and unreliable. Also, unit tests are meant to test components in isolation, and here we are testing not only that our component **sets this.result properly** when the request is made. We are also testing that there is a webserver up and running that is actually working. So, what I would do in this scenario to test **that single piece of functionality**, is to extract the request to another method, mock it with `vue-test-utils` and `jest.fn`, and then assert that `onSubmitClient` does its work: The component: ``` export default { data() { return { http: axios, ... } }, methods: { onSubmitClient(){ return this.http.post(`urlApi`, this.dados).then(res => { this.result = res; }) } } } ``` The test: ``` it('store client', (done) => { const fakeResponse = {foo: 'bar'}; var post = jest.fn(() => Promise.resolve(fakeResponse)); var http = { post }; var wrapper = mount(client, { stubs: ['router-link'], store, data() { return { dados: { name: 'tes name', city: 'test city' }, http, //now, the component under test will use a mock to perform the http post request. } } }); wrapper.vm.onSubmitClient().then( () => { expect(post).toHaveBeenCalled(); expect(wrapper.vm.result).toEqual(fakeResponse); done(); }) }); ``` Now, your test asserts two things: 1. `post` gets called. 2. `this.result` is set as it should be. If you don't want to store anything in your component from the server, just drop the second assertion and the `this.result = res` line in the method. So basically this covers why your test is not waiting for the async request and some issues in your code. There are still some things to consider (f.i. I think a global `wrapper` is a bad idea, and I would always prefer `shallowMount` over `mount` when testing component behavior), but this answer should help you a lot. PS: I didn't test the code, so maybe I messed up something. If it just doesn't work, look for syntax errors or similar issues.
How to bind a Polymorphic Properties of a Models in .NET core I have an ASP.NET Core Web API end point which takes (FromBody) The Search object defined below ``` public class Search { public int PageSize {get;set;} public Expression Query{get;set;} } public class Expression { public string Type {get;set;} } public class AndExpression { public IList<Expression> Expressions {get;set;} } public class MatchesExpression { public string FieldId {get;set;} public string Value {get;set;} public string Operator {get;set;} } ``` So... if I post the following JSON to my endpoint { "pageSize":10, "query": { "fieldId": "body", "value": "cake", "operator": "matches" } } I successfully get a Search Object, but the Query property is of type Expression, not MatchesExpression. This is clearly a polymorphic issue. This article (towards the end) gives a good example of a how to deal with this issue when your entire model is polymorphic. <https://learn.microsoft.com/en-us/aspnet/core/mvc/advanced/custom-model-binding?view=aspnetcore-5.0> In my case, the property of my Model "Query" is polymorphic, so Im unsure how to build a ModelBinder for my Search object that will allow me to handle the Query Property I Imagine, I need to write a model binder to construct the search object and then follow the pattern described for the property, however I cannot locate any examples of how to implement a model binder that isnt utterly trivial. Any suggestions on how to achieve this? Good sources of information?
So.. I gave up with ModelBInders (because Im using the FromBody attribute which isnt compatible with my aims). Instead I wrote a System.Text.Json JsonConvertor to handle the polymorphism (see shonky code below) ``` using Searchy.Models; using System; using System.Collections.Generic; using System.Globalization; using System.Linq; using System.Text.Json; using System.Text.Json.Serialization; using System.Threading.Tasks; namespace Searchy { public class ExpressionJsonConverter : JsonConverter<Expression> { public override Expression Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options) { Utf8JsonReader readerClone = reader; using (var jsonDocument = JsonDocument.ParseValue(ref readerClone)) { if (!jsonDocument.RootElement.TryGetProperty("type", out var typeProperty)) { throw new JsonException(); } switch (typeProperty.GetString()) { case "comparison": return JsonSerializer.Deserialize<Comparison>(ref reader, options); case "and": return JsonSerializer.Deserialize<And>(ref reader, options); } } return null; } public override void Write( Utf8JsonWriter writer, Expression expression, JsonSerializerOptions options) { } } } ``` My Expression class also had the following attribue ``` [JsonConverter(typeof(ExpressionJsonConverter))] ```
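One more note: besides the `[JsonConverter(typeof(ExpressionJsonConverter))]` attribute on the `Expression` class, the converter can also be registered globally so it applies to every request body deserialized with System.Text.Json — roughly like this in `Startup.ConfigureServices` (a sketch):

```
services.AddControllers()
    .AddJsonOptions(options =>
    {
        // register the polymorphic converter for all controllers
        options.JsonSerializerOptions.Converters.Add(new ExpressionJsonConverter());
    });
```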
iBatis to MyBatis migration efforts? I am using iBatis-2.3.4.726 in my production application. I want to migrate my production application to use MyBatis. What points i need to consider while migration process? Is there any configuration changes or MyBatis supports iBatis configuration as deprecated commands?
Before using [migration guide mentioned by Satish](http://code.google.com/p/mybatis/wiki/DocUpgrade3) ([new repo link](https://github.com/mybatis/ibatis2mybatis) / [wiki](https://github.com/mybatis/ibatis2mybatis/wiki)), make sure that you've read **all the comments**, especially the last one that list which changes have to be done **manually** after using converter: > > - `<procedure>` is deprecated in mybatis. Converter is changing this to `<update>`. This will create problems where we need result set from procedure call. So manually updated with `<select>`. > > > - Dynamic query part mentioned inside `<dynamic>` tag are not converted by tool - Both `#` and `$` can be escaped by doubling in iBatis. This is not required in mybatis. - `typeAlias` should be defined in `sql-map-config` instead of mapper itself. - When result map with `groupBy` changed into mybatis style using `collection`, `id` property is not set properly by the converter. - `jdbcType="INT"` is not recognized in mybatis. Updated to `"INTEGER"` - `nullValue` in `resultMap` deprecated, we need to update query with `ISNULL` expression. What I'd like to add is that converter seems to **drop `timeout` parameter** that could be present in `<procedure>` tag in iBatis. Make sure to copy all occurences to the generated XML.
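To make the `<procedure>`/`<select>` bullet above concrete, the manual rewrite usually ends up looking roughly like this (a sketch — the statement id, result map and parameter names are made up, and you'll need the right `mode=` for your own parameters):

```
<!-- iBatis 2.x -->
<procedure id="getAccounts" parameterMap="accountParams">
  {call get_accounts(?)}
</procedure>

<!-- MyBatis 3.x -->
<select id="getAccounts" statementType="CALLABLE" resultMap="accountResult">
  {call get_accounts(#{accountId, mode=IN})}
</select>
```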
LLVM IR: efficiently summing a vector I'm writing a compiler that's generating LLVM IR instructions. I'm working extensively with vectors. I would like to be able to sum all the elements in a vector. Right now I'm just extracting each element individually and adding them up manually, but it strikes me that this is precisely the sort of thing that the hardware should be able to help with (as it sounds like a pretty common operation). But there doesn't seem to be an intrinsic to do it. What's the best way to do this? I'm using LLVM 3.2.
First of all, even without using intrinsics, you can generate `log(n)` vector additions (with n being the vector length) instead of `n` scalar additions; here's an example with vector size 8:

```
define i32 @sum(<8 x i32> %a) {
  %v1 = shufflevector <8 x i32> %a, <8 x i32> undef, <4 x i32> <i32 0, i32 1, i32 2, i32 3>
  %v2 = shufflevector <8 x i32> %a, <8 x i32> undef, <4 x i32> <i32 4, i32 5, i32 6, i32 7>
  %sum1 = add <4 x i32> %v1, %v2
  %v3 = shufflevector <4 x i32> %sum1, <4 x i32> undef, <2 x i32> <i32 0, i32 1>
  %v4 = shufflevector <4 x i32> %sum1, <4 x i32> undef, <2 x i32> <i32 2, i32 3>
  %sum2 = add <2 x i32> %v3, %v4
  %v5 = extractelement <2 x i32> %sum2, i32 0
  %v6 = extractelement <2 x i32> %sum2, i32 1
  %sum3 = add i32 %v5, %v6
  ret i32 %sum3
}
```

If your target has support for these vector additions then it seems highly likely the above will be lowered to use those instructions, giving you performance.

Regarding intrinsics, there are no target-independent intrinsics to handle this. If you're compiling to x86, though, you do have access to the `hadd` intrinsics (e.g. `llvm.x86.int_x86_ssse3_phadd_sw_128` to add two `<4 x i32>` vectors together). You'll still have to do something similar to the above, only the `add` instructions could be replaced.

For more information about this you can search for "horizontal sum" or "horizontal vector sum"; for instance, here are some relevant stackoverflow questions for a horizontal sum on x86:

- [horizontal sum of 8 packed 32bit floats](https://stackoverflow.com/questions/13879609/horizontal-sum-of-8-packed-32bit-floats)
- [Fastest way to do horizontal vector sum with AVX instructions](https://stackoverflow.com/questions/9775538/fastest-way-to-do-horizontal-vector-sum-with-avx-instructions)
- [Fastest way to do horizontal float vector sum on x86](https://stackoverflow.com/questions/6996764/fastest-way-to-do-horizontal-float-vector-sum-on-x86)
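Since the question is about a compiler that emits IR, here is a minimal sketch of how the same log2(n) shuffle/add reduction could be generated programmatically with LLVM's C++ `IRBuilder`. It is only an illustration: the helper name is my own, the `CreateShuffleVector` overload taking an integer mask array is the newer-style API (on LLVM 3.2 you would build a constant mask vector instead), and for floating-point vectors `CreateAdd` would have to be `CreateFAdd`.

```cpp
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/IRBuilder.h"

// Emits a horizontal sum of an integer vector with NumElts lanes
// (NumElts is assumed to be a power of two). Returns the scalar result.
llvm::Value *emitHorizontalSum(llvm::IRBuilder<> &B, llvm::Value *Vec,
                               unsigned NumElts) {
  llvm::Value *V = Vec;
  for (unsigned Width = NumElts; Width > 2; Width /= 2) {
    llvm::Value *Undef = llvm::UndefValue::get(V->getType());
    llvm::SmallVector<int, 8> LoMask, HiMask;
    for (unsigned i = 0; i < Width / 2; ++i) {
      LoMask.push_back(int(i));              // lanes 0 .. Width/2-1
      HiMask.push_back(int(Width / 2 + i));  // lanes Width/2 .. Width-1
    }
    llvm::Value *Lo = B.CreateShuffleVector(V, Undef, LoMask);
    llvm::Value *Hi = B.CreateShuffleVector(V, Undef, HiMask);
    V = B.CreateAdd(Lo, Hi);                 // halves the vector width each step
  }
  // Two lanes left: extract them and add as scalars.
  llvm::Value *E0 = B.CreateExtractElement(V, B.getInt32(0));
  llvm::Value *E1 = B.CreateExtractElement(V, B.getInt32(1));
  return B.CreateAdd(E0, E1);
}
```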
Extracting everything between two symbols in a string I have a vector containing some names. I want to extract the title on every row, basically everything between the ", " (including the white space) and the "."

```
> head(combi$Name)
[1] "Braund, Mr. Owen Harris"
[2] "Cumings, Mrs. John Bradley (Florence Briggs Thayer)"
[3] "Heikkinen, Miss. Laina"
[4] "Futrelle, Mrs. Jacques Heath (Lily May Peel)"
[5] "Allen, Mr. William Henry"
[6] "Moran, Mr. James"
```

I suppose `gsub` might be useful, but I am having difficulty finding the right regular expression to accomplish this.
**1) sub** With `sub`

```
> sub(".*, ([^.]*)\\..*", "\\1", Name)
[1] "Mr"   "Mrs"  "Miss" "Mrs"  "Mr"   "Mr"
```

**1a) sub variation** This variation, also using `sub`, works too:

```
> sub(".*, |\\..*", "", Name)
[1] "Mr"   "Mrs"  "Miss" "Mrs"  "Mr"   "Mr"
```

**2) strapplyc** Or, using `strapplyc` in the gsubfn package, it can be done with a simpler regular expression:

```
> library(gsubfn)
>
> strapplyc(Name, ", ([^.]*)\\.", simplify = TRUE)
[1] "Mr"   "Mrs"  "Miss" "Mrs"  "Mr"   "Mr"
```

**2a) strapplyc variation** This one seems to have the simplest regular expression of them all.

```
> library(gsubfn)
>
> sapply(strapplyc(Name, "\\w+"), "[", 2)
[1] "Mr"   "Mrs"  "Miss" "Mrs"  "Mr"   "Mr"
```

**3) strsplit** A third way is using `strsplit`

```
> sapply(strsplit(Name, ", |\\."), "[", 2)
[1] "Mr"   "Mrs"  "Miss" "Mrs"  "Mr"   "Mr"
```

*Added* additional solutions. Changed `gsub` to `sub` (although `gsub` works too).
Scheduling in Rx .NET I expected everything to be executed on the main thread of a .NET Core 2.0 console app, so that the output would be blocked for 10 seconds:

```
static void Main(string[] args)
{
    WriteLine($"We are on {Thread.CurrentThread.ManagedThreadId}");

    var subject = new Subject<long>();
    var subscription = subject.Subscribe(
        i => WriteLine($"tick on {Thread.CurrentThread.ManagedThreadId}"));

    var timer = Observable.Interval(TimeSpan.FromSeconds(1))
        .SubscribeOn(Scheduler.CurrentThread)
        .Subscribe(i => subject.OnNext(i));

    Thread.Sleep(10000);
}
```

That is not the case though – a new line comes to the console every second, dispatched by random threads:

> 
> 
> ```
> We are on 1
> tick on 4
> tick on 5
> tick on 4
> tick on 4
> tick on 4
> tick on 4
> tick on 4
> tick on 4
> tick on 5
> 
> ```
> 
> 

What did I do wrong?
The `Scheduler.CurrentThread` / `CurrentThreadScheduler` will queue items on the same thread that made the call to schedule, which will be the thread that the timer happens to run on. Calling `Scheduler.CurrentThread` does not pin the execution of items scheduled via it to the thread on which you obtained `Scheduler.CurrentThread`, but rather to the thread that calls `.Schedule()`.

Also, you call `SubscribeOn()`, which only affects the thread where the `.Subscribe()` call is going to be made. If you want to control where the item processing executes, you want to call `.ObserveOn()` instead.

If you want everything to run on the main thread, I suggest running the timer on the main thread, by specifying a scheduler on the interval observable:

```
Observable.Interval(TimeSpan.FromSeconds(1), Scheduler.CurrentThread)
```
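If the goal is just a stable thread id in the output, a minimal sketch (my own addition, not part of the answer above) is to leave the timer on its default scheduler and force all `OnNext` processing onto one dedicated thread with `ObserveOn` and an `EventLoopScheduler`:

```csharp
using System;
using System.Reactive.Concurrency;
using System.Reactive.Linq;
using System.Threading;
using static System.Console;

class Program
{
    static void Main()
    {
        WriteLine($"We are on {Thread.CurrentThread.ManagedThreadId}");

        // EventLoopScheduler owns a single background thread; every observed
        // item is pushed onto that thread, so the printed id stays constant.
        using (var loop = new EventLoopScheduler())
        using (Observable.Interval(TimeSpan.FromSeconds(1))
                         .ObserveOn(loop)
                         .Subscribe(i => WriteLine($"tick on {Thread.CurrentThread.ManagedThreadId}")))
        {
            Thread.Sleep(10000);
        }
    }
}
```

The ticks are still produced by a timer thread, but every `WriteLine` now runs on the one thread owned by the `EventLoopScheduler` (which is still not the main thread; pinning work to a console app's main thread would require pumping it yourself).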
Inno Setup Folders I want to make an Inno Setup script that installs an .exe file, several .dll files, a zip folder, and a regular folder. I'm fairly certain that I go about the .exe and .dll files like any ordinary file. However, how do I go about the two folders? My script as it stands: ``` [Setup] AppName=My Program AppVersion=1.5 DefaultDirName={pf}\My Program DefaultGroupName=My Program Compression=lzma2 SolidCompression=yes OutputDir=userdocs:Inno Setup Examples Output [Files] Source: "MyProg.exe"; DestDir: "{app}" ```
If you are creating empty directories, you can add a [Dirs] section to your script. If you want to put a directory of files into the setup and install them, you can use the recursesubdirs flag on your files section. Here's an example of both. ``` ; Script generated by the Inno Setup Script Wizard. ; SEE THE DOCUMENTATION FOR DETAILS ON CREATING INNO SETUP SCRIPT FILES! [Setup] ; NOTE: The value of AppId uniquely identifies this application. ; Do not use the same AppId value in installers for other applications. ; (To generate a new GUID, click Tools | Generate GUID inside the IDE.) AppId={{2CC00BF8-CC76-41A1-92AB-CD40FFC9C6E1} AppName=My Program AppVersion=1.5 ;AppVerName=My Program 1.5 AppPublisher=My Company, Inc. AppPublisherURL=http://www.example.com/ AppSupportURL=http://www.example.com/ AppUpdatesURL=http://www.example.com/ DefaultDirName={pf}\My Program DefaultGroupName=My Program OutputBaseFilename=setup Compression=lzma SolidCompression=yes [Dirs] Name: "Examples" [Languages] Name: "english"; MessagesFile: "compiler:Default.isl" [Tasks] Name: "desktopicon"; Description: "{cm:CreateDesktopIcon}"; GroupDescription: "{cm:AdditionalIcons}"; Flags: unchecked [Files] Source: "C:\source\MyProg.exe"; DestDir: "{app}"; Flags: ignoreversion Source: "c:\source\examples\*.*"; DestDir: "{app}\examples"; Flags: recursesubdirs ; NOTE: Don't use "Flags: ignoreversion" on any shared system files [Icons] Name: "{group}\My Program"; Filename: "{app}\MyProg.exe" Name: "{commondesktop}\My Program"; Filename: "{app}\MyProg.exe"; Tasks: desktopicon [Run] Filename: "{app}\MyProg.exe"; Description: "{cm:LaunchProgram,My Program}"; Flags: nowait postinstall skipifsilent ```
Must I provide my project source code if I use a library licensed under Apache 2.0? I have a private (i.e. no chance of sharing the source) and commercial application, now I would like to use a library which is under the [Apache 2.0 license](http://www.apache.org/licenses/LICENSE-2.0.html). I've read the Apache license and FAQ section, but I am not clear about this. Is it the same as GPL3 which forces the application to provide the source code?
The Apache 2.0 license is very different from the GPL license, in at least two aspects: 1. Under the Apache 2.0 license, you are allowed to distribute binaries without providing the source code with it. (Under the GPL, you must always provide the source code) 2. The GPL license carries over to the entire application. The Apache 2.0 license does not and applies only to those parts that explicitly state they fall under the Apache 2.0 license. This means that if you use a library with Apache 2.0 license in your project, the permissions/rights/obligations from the Apache 2.0 license *do not* suddenly carry over to your code. To distribute a (binary or unmodified) copy of an Apache 2.0 licensed library with your application, you must meet two requirements: - The users of your application must receive a copy of the Apache 2.0 license. To avoid confusion, you should also state which parts of the distribution the license applies to. - The users of your application must receive a copy of the NOTICES file that came with the library, if there is such a file.
Ruby: Alter class static method in a code block Given the Thread class with its current method. Now inside a test, I want to do this:

```
def test_alter_current_thread
  Thread.current = a_stubbed_method
  # do something that involves the work of Thread.current
  Thread.current = default_thread_current
end
```

Basically, I want to alter the method of a class inside a test method and restore it after that. I know it sounds complex in other languages, like Java & C# (in Java, only a powerful mock framework can do it). But this is Ruby, and I hope such *nasty* stuff is available
You might want to take a look at a Ruby mocking framework like [Mocha](http://mocha.rubyforge.org/), but in terms of using plain Ruby it can be done using `alias_method` (documentation [here](http://ruby-doc.org/core/classes/Module.html#M001653)) e.g. beforehand:

```
class Thread
  class << self
    alias_method :old_current, :current
  end
end
```

then define your new method

```
class Thread
  def self.current
    # implementation here
  end
end
```

then afterwards restore the old method:

```
class Thread
  class << self
    alias_method :current, :old_current
  end
end
```

**Update to illustrate doing this from within a test**

If you want to do this from within a test you could define some helper methods as follows:

```
def replace_class_method(cls, meth, new_impl)
  cls.class_eval("class << self; alias_method :old_#{meth}, :#{meth}; end")
  cls.class_eval(new_impl)
end

def restore_class_method(cls, meth)
  cls.class_eval("class << self; alias_method :#{meth}, :old_#{meth}; end")
end
```

`replace_class_method` is expecting a class constant, the name of a class method and the new method definition as a string. `restore_class_method` takes the class and the method name and then aliases the original method back in place.

Your test would then be along the lines of:

```
def test
  new_impl = <<EOM
def self.current
  "replaced!"
end
EOM

  replace_class_method(Thread, 'current', new_impl)
  puts "Replaced method call: #{Thread.current}"
  restore_class_method(Thread, 'current')
  puts "Restored method call: #{Thread.current}"
end
```

You could also write a little wrapper method which would replace a method, yield to a block and then ensure that the original method was reinstated afterwards e.g.

```
def with_replaced_method(cls, meth, new_impl)
  replace_class_method(cls, meth, new_impl)
  begin
    result = yield
  ensure
    restore_class_method(cls, meth)
  end

  return result
end
```

Inside your test method this could then be used as:

```
with_replaced_method(Thread, 'current', new_impl) do
  # test code using the replaced method goes here
end

# after this point the original method definition is restored
```

As mentioned in the original answer, you can probably find a framework to do this for you but hopefully the above code is interesting and useful anyway.
Bootstrap 3 collapse accordion: collapse all works but then cannot expand all while maintaining data-parent I'm using Bootstrap 3 and trying to setup the following accordion/collapse structure: 1. Onload: Each accordion panel in a group is fully collapsed and functions as documented/expected. 2. Button click: Each accordion panel expands and clicking the toggles has no effect (including URL anchor effects). 3. Another button click: All panels return to onload state; all collapsed and clickable as normal. I've made it to step 2, but when I click the button again at step 3 it has no effect. I also see no console errors reported in Chrome Dev Tools or by running the code through my local JSHint. I'd like this cycle to be repeatable each time the button is clicked. I've setup my code here <http://bootply.com/98140> and here <http://jsfiddle.net/A9vCx/> I'd love to know what I'm doing wrong and I appreciate suggestions. Thank you! My HTML: ``` <button class="collapse-init">Click to disable accordion behavior</button> <br><br> <div class="panel-group" id="accordion"> <div class="panel panel-default"> <div class="panel-heading"> <h4 class="panel-title"> <a data-toggle="collapse" data-parent="#accordion" href="#collapseOne"> Collapsible Group Item #1 </a> </h4> </div> <div id="collapseOne" class="panel-collapse collapse"> <div class="panel-body"> Anim pariatur cliche reprehenderit, enim eiusmod high life accusamus terry richardson ad squid. </div> </div> </div> <div class="panel panel-default"> <div class="panel-heading"> <h4 class="panel-title"> <a data-toggle="collapse" data-parent="#accordion" href="#collapseTwo"> Collapsible Group Item #2 </a> </h4> </div> <div id="collapseTwo" class="panel-collapse collapse"> <div class="panel-body"> Anim pariatur cliche reprehenderit, enim eiusmod high life accusamus terry richardson ad squid. </div> </div> </div> <div class="panel panel-default"> <div class="panel-heading"> <h4 class="panel-title"> <a data-toggle="collapse" data-parent="#accordion" href="#collapseThree"> Collapsible Group Item #3 </a> </h4> </div> <div id="collapseThree" class="panel-collapse collapse"> <div class="panel-body"> Anim pariatur cliche reprehenderit, enim eiusmod high life accusamus terry richardson ad squid. </div> </div> </div> </div> ``` My JS: ``` $(function() { var $active = true; $('.panel-title > a').click(function(e) { e.preventDefault(); }); $('.collapse-init').on('click', function() { if(!$active) { $active = true; $('.panel-title > a').attr('data-toggle', 'collapse'); $('.panel-collapse').collapse({'toggle': true, 'parent': '#accordion'}); $(this).html('Click to disable accordion behavior'); } else { $active = false; $('.panel-collapse').collapse({'toggle': true, 'parent': '#accordion'}); $('.panel-title > a').removeAttr('data-toggle'); $(this).html('Click to enable accordion behavior'); } }); }); ```
# Updated Answer

Trying to open multiple panels of a collapse control that is set up as an accordion, i.e. with the `data-parent` attribute set, can prove quite problematic and buggy (see this question on [multiple panels open after programmatically opening a panel](https://stackoverflow.com/a/19158601/1366033))

Instead, the best approach would be to:

1. Allow each panel to toggle individually
2. Then, enforce the accordion behavior manually where appropriate.

---

**To allow each panel to toggle individually**, on the `data-toggle="collapse"` element, set the `data-target` attribute to the `.collapse` panel ID selector (instead of setting the `data-parent` attribute to the parent control). You can read more about this in the question [Modify Twitter Bootstrap collapse plugin to keep accordions open](https://stackoverflow.com/a/11658976/1366033).

Roughly, each panel should look like this:

```
<div class="panel panel-default">
    <div class="panel-heading">
        <h4 class="panel-title" data-toggle="collapse" data-target="#collapseOne">
            Collapsible Group Item #1
        </h4>
    </div>
    <div id="collapseOne" class="panel-collapse collapse">
        <div class="panel-body"></div>
    </div>
</div>
```

---

**To manually enforce the accordion behavior**, you can create a handler for the collapse show event which occurs just before any panels are displayed. Use this to ensure any other open panels are closed before the selected one is shown (see this [answer to multiple panels open](https://stackoverflow.com/a/19158601/1366033)). You'll also only want the code to execute when the panels are active. To do all that, add the following code:

```
$('#accordion').on('show.bs.collapse', function () {
    if (active) $('#accordion .in').collapse('hide');
});
```

---

Then use `show` and `hide` to toggle the visibility of each of the panels and `data-toggle` to enable and disable the controls.

```
$('#collapse-init').click(function () {
    if (active) {
        active = false;
        $('.panel-collapse').collapse('show');
        $('.panel-title').attr('data-toggle', '');
        $(this).text('Enable accordion behavior');
    } else {
        active = true;
        $('.panel-collapse').collapse('hide');
        $('.panel-title').attr('data-toggle', 'collapse');
        $(this).text('Disable accordion behavior');
    }
});
```

# [Working demo in jsFiddle](http://jsfiddle.net/KyleMit/f8ypa/)
How to make IntelliJ IDEA recognise code created by macros? ### Background I have an sbt-managed Scala project that uses the [usual sbt project layout](http://www.scala-sbt.org/0.13.0/docs/Detailed-Topics/Macro-Projects.html) for Scala projects with macros, i.e., a subproject that contains the macros and a main project that is the actual application and depends on the macro subproject. The macros are [macro annotations](http://docs.scala-lang.org/overviews/macros/annotations.html) which, in essence, generate companion objects for regular classes. The generated companion objects declare, amongst other members, apply/unapply methods. I used the sbt-idea plugin to generate a corresponding IntelliJ IDEA project, and I use the sbt console from IDEA's sbt-plugin to compile and run my Scala application. Everything works more or less fine, except that the generated companion objects, and more importantly, their members such as apply/unapply, are not recognised by IDEA. Thus, I get a squiggly line everywhere I use, e.g., an apply method. My setup is IntelliJ IDEA CE 133.471 with the plugins SBT 1.5.1 and Scala 0.28.363 on Windows 7 x64. ### Questions How do I get IntelliJ IDEA to recognise code (classes, objects, methods, ...) that has been generated by Scala macros (macro annotations, to be precise)? Are other IDEs, e.g., Eclipse, known to work better in such a setting? ### Related [This question](https://stackoverflow.com/questions/21696660/) (which is less detailed) essentially asks the same, but has not gotten a reply yet (2014-02-26). [According to a JetBrains developer](http://devnet.jetbrains.com/message/5510815) the feature I requested is on their long-term to-do list, but won't be implemented any time soon (2014-03-05).
With the latest Scala plugin build, there is an API which can be used to write your own plugin to support your macros: <http://blog.jetbrains.com/scala/2015/10/14/intellij-api-to-build-scala-macros-support/> > > Now, everyone can use this API to make their macros more friendly to their favorite IDE. To do that, you have to implement [SyntheticMembersInjector](https://github.com/JetBrains/intellij-scala/blob/idea15.x/src/org/jetbrains/plugins/scala/lang/psi/impl/toplevel/typedef/SyntheticMembersInjector.scala), and register it in the plugin.xml file: > > > > ``` > <extensions defaultExtensionNs="org.intellij.scala"> > <syntheticMemberInjector implementation="org.jetbrains.example.injector.Injector"/> > </extensions> > > ``` > >
fill\_between with matplotlib and a where condition of two lists I am trying to shade the area before the point of intersection of the two curves produced by this example code:

```
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(0,100,10)
y1 = [0,2,4,6,8,5,4,3,2,1]
y2 = [0,1,3,5,6,8,9,12,13,14]

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x,y1,linestyle='-')
ax.plot(x,y2,linestyle='--')

plt.show()
```

Simply using:

```
ax.fill_between(x,y1,y2,where=y1>=y2,color='grey',alpha='0.5')
```

does not work and gives the following error: "ValueError: Argument dimensions are incompatible"

I tried to convert the lists into arrays:

```
z1 = np.array(y1)
z2 = np.array(y2)
```

Then:

```
ax.fill_between(x,y1,y2,where=z1>=z2,color='grey',alpha='0.5')
```

However, not the entire area was shaded. I know I have to find the point of intersection between the two curves by interpolating, but I have not seen a simple way to do it.
You are completely right, you need to interpolate. And that is ludicrously complicated, as you need to add the `interpolate=True` keyword argument to the call to `fill_between`. ``` ax.fill_between(x,y1,y2,where=z1>=z2,color='grey', interpolate=True) ``` [![enter image description here](https://i.stack.imgur.com/Hvb6p.png)](https://i.stack.imgur.com/Hvb6p.png) Complete code to reproduce: ``` import numpy as np import matplotlib.pyplot as plt x = np.arange(0,100,10) y1 = [0,2,4,6,8,5,4,3,2,1] y2 = [0,1,3,5,6,8,9,12,13,14] fig = plt.figure() ax = fig.add_subplot(111) ax.plot(x,y1,linestyle='-') ax.plot(x,y2,linestyle='--') z1 = np.array(y1) z2 = np.array(y2) ax.fill_between(x,y1,y2,where=z1>=z2,color='grey',alpha=0.5, interpolate=True) plt.show() ```
Join tables by date range I am looking for a simple method to join two tables by date range. One table contains an exact date, the other table contains two variables identifying the beginning and ending of a time period. I need to join the tables if the date in the first table is within the range from the second table.

```
data1 <- data.table(date = c('2010-01-21', '2010-01-25', '2010-02-02', '2010-02-09'),
                    name = c('id1','id2','id3','id4'))

data2 <- data.table(beginning = c('2010-01-15', '2010-01-23', '2010-01-30', '2010-02-05'),
                    ending = c('2010-01-22','2010-01-29','2010-02-04','2010-02-13'),
                    class = c(1,2,3,4))

result <- data.table(date = c('2010-01-21', '2010-01-25', '2010-02-02', '2010-02-09'),
                     beginning = c('2010-01-15', '2010-01-23', '2010-01-30', '2010-02-05'),
                     ending = c('2010-01-22','2010-01-29','2010-02-04','2010-02-13'),
                     name = c('id1','id2','id3','id4'),
                     class = c(1,2,3,4))
```

Any help please? I found a few difficult examples, but they don't even work on my data because of the formats. I need something like:

```
select * from data1
left join select * from data2
where data2.beginning <= data1.date <= data2.ending
```

Thanks
I know the following looks horrible in base, but here's what I came up with. It's better to use the 'sqldf' package (see below). ``` library(data.table) data1 <- data.table(date = c('2010-01-21', '2010-01-25', '2010-02-02', '2010-02-09'), name = c('id1','id2','id3','id4')) data2 <- data.table(beginning=c('2010-01-15', '2010-01-23', '2010-01-30', '2010-02-05'), ending = c('2010-01-22','2010-01-29','2010-02-04','2010-02-13'), class = c(1,2,3,4)) result <- cbind(data1,"beginning"=sapply(1:nrow(data2),function(x) data2$beginning[data2$beginning[x]<data1$date & data2$ending[x]>data1$date]), "ending"=sapply(1:nrow(data2),function(x) data2$ending[data2$beginning[x]<data1$date & data2$ending[x]>data1$date]), "class"=sapply(1:nrow(data2),function(x) data2$class[data2$beginning[x]<data1$date & data2$ending[x]>data1$date])) ``` Using the package sqldf: ``` library(sqldf) result = sqldf("select * from data1 left join data2 on data1.date between data2.beginning and data2.ending") ``` Using data.table this is simply ``` data1[data2, on = .(date >= beginning, date <= ending)] ```
Perl check if a line starts with a word from an array and return the matched value to a variable I checked the following topic, [Perl check a line contains at list one word of an array](https://stackoverflow.com/questions/43658293/perl-check-a-line-contains-at-list-one-word-of-an-array), but I am still confused about how to make it more efficient for my case. I use the example from the topic above. I have an array, called @exampleWords:

```
my @exampleWords = ("balloon", "space", "monkey", "fruit" );
```

and I have a line containing a sentence, for example:

```
my $line = "monkey space always unlimited";
```

How can I check if $line starts with a word from the array, and return the matched word into a variable? In the example above, the matched word is "monkey". The current solution in my mind is to loop over each word in the array and check whether $line starts with that $word.

```
my $matchWord = "";
foreach my $word(@exampleWords) {
    if ($line =~ /^$word/) {
        $matchWord = $word;
        last;
    }
}
```

I am still looking for a more efficient solution. Thank you.
In principle, you have to iterate over possible words to match. However, you can also construct an alternation regex pattern with them so that the regex engine starts once, unlike with the loop where it is started for *every* iteration. Also, now the iteration goes by highly optimized C code. How do these compare? Let's benchmark them, using the core module [Benchmark](http://perldoc.perl.org/Benchmark.html). **For a tiny array, matching around its middle** (your example) ``` use warnings; use strict; use Benchmark qw( cmpthese ); my @ary = ("balloon", "space", "monkey", "fruit"); my $line = "monkey space always unlimited"; sub regex { my ($line, @ary) = @_; my $match; my $re = join '|', map { quotemeta } @ary; if ($line =~ /^($re)/) { $match = $1; } return $match; } sub loop { my ($line, @ary) = @_; my $match; foreach my $word (@ary) { if ($line =~ /^$word/) { # see note at end $match = $word; last; } } return $match; } cmpthese(-10, { regex => sub { regex ($line, @ary) }, loop => sub { loop ($line, @ary) }, }); ``` This produces, on both a very good machine with v5.16 and on an older one with v5.10 ``` Rate loop regex loop 222791/s -- -70% regex 742962/s 233% -- ``` Thus regex is way more efficient. **For a 40 times larger array, matching around the middle** I build this array by `@ary = qw(...) x 20`, then add a word (`'AHA'`), then repeat 20 more times. I prepend that very word to the string, so that's what gets matched. I make the string much larger, too, even though this shouldn't matter for matching. In this case the regex is even more convincing ``` Rate loop regex loop 9300/s -- -82% regex 50873/s 447% -- ``` and yet more so with v5.10 on the older machine, with `574%`. On v5.27.2 the regex is faster by `1188%`, so by a clean order of magnitude. But it is the rate of the loop that drops to only `6723/s`, against the above `9330/s`. So this only shows that the regex "startup" is more expensive in newer Perls, thus the loop falls further behind. **For the same large array, with the match word near its beginning** I move the match-word `AHA` in the array right past the original 4-word list ``` Rate loop regex loop 36710/s -- -3% regex 37666/s 3% -- ``` So the match needs to happen very, very early so that the loop catches up with the regex. While this can happen often in specific use cases it cannot be expected in general, of course. Note that the regex had far less work to do as well. Thus it's clear that the loop's problem is that it starts the regex engine anew for every iteration. Here it only had to do it a few times and the regex's advantage all but evaporated, even though it also matched much sooner. --- As for programmer's efficiency, take your pick. There are yet other ways using higher level libraries so that you don't have to write the loop. For instance, using core [List::Util](http://perldoc.perl.org/List/Util.html) ``` use List::Util qw(first); my $match = first { $line =~ /^$_/ } @ary; ``` This benchmarks between the same and around 10% *slower* than your loop when added. --- A note on regex used in the question. If the first word in `$line` is `puppy` the regex `/^$word/` will match it with `pup`. This may or may not be intended (but think of `flu` for `fluent` instead), but if it isn't it can be fixed by adding the *word boundary* anchor `\b`, ``` $line =~ /^$word\b/ ``` The same can be used with the alternation pattern, which was written so to mimic the code in the question. So add the word boundary anchor, for `/^($re)\b/`. 
Another way is to sort the list by the length of the words, `sort { length $b <=> length $a } @ary`, per [Borodin](https://stackoverflow.com/users/622310/borodin)'s comment. This may affect the problem in more complex ways, so consider it carefully.
Python list + list vs. list.append() Today I spent about 20 minutes trying to figure out why this worked as expected: ``` users_stories_dict[a] = s + [b] ``` but this would have a `None` value: ``` users_stories_dict[a] = s.append(b) ``` Anyone know why the append function does not return the new list? I'm looking for some sort of sensible reason this decision was made; it looks like a Python novice gotcha to me right now.
`append` works by actually modifying a list, and so all the magic is in side-effects. Accordingly, the result returned by `append` is None. In other words, what one wants is: `s.append(b)` and then: `users_stories_dict[a] = s` But, you've already figured that much out. As to why it was done this way, while I don't really know, my guess is that it might have something to do with a `0` (or `false`) exit value indicating that an operation proceeded normally, and by returning `None` for functions whose role is to modify their arguments in-place you report that the modification succeeded. But I agree that it would be nice if it returned the modified list back. At least, Python's behavior is consistent across all such functions.
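A quick sketch of the behaviour described above (plain Python; the variable names are just for illustration):

```python
s = [1, 2]

result = s.append(3)   # append mutates s in place ...
print(result)          # ... and returns None
print(s)               # [1, 2, 3]

# To get "list plus one element" as a *new* list without mutating s:
t = s + [4]
print(s)               # [1, 2, 3]
print(t)               # [1, 2, 3, 4]
```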
Exceptions that can't be caught by try-catch block in application code MSDN states that `StackOverflowException` [can't be caught by try-catch block](http://msdn.microsoft.com/en-en/library/system.stackoverflowexception.aspx) starting with .NET Framework 2. > > Starting with the .NET Framework version 2.0, a StackOverflowException object cannot be caught by a try-catch block and the corresponding process is terminated by default. > > > Are there any other exceptions with the same behavior?
Yes, there are some others:

- The ThreadAbortException is special. It will always be re-raised when caught unless the catch block calls ResetAbort(). It is entirely uncatchable when the CLR performs a rude abort of the thread. Done when the AppDomain gets unloaded for example, typically at program exit.
- Any native exceptions thrown by unmanaged code in a thread that got started by native code are uncatchable. The common scenario here is COM components that start their own threads. The CLR is powerless to trap such exceptions, it doesn't know about the thread and can't inject a catch block. If the native code doesn't catch the exception then Windows terminates the process.
- Any exceptions thrown by finalizers, unless they are critical finalizers. They'll abort the finalizer thread which terminates the process.
- Starting with .NET 4.0, an ExecutionEngineException is uncatchable. It is thrown by the CLR when it detects that its internal data structures are compromised. Most typically by an AccessViolationException that's raised while the garbage collector is busy. Continuing to execute managed code when the GC heap is compromised is a risky proposition, and exploitable, so .NET 4 pulled the plug on it entirely.
- Starting with the .NET 4.0 version of the CLR, but possibly also present in unmanaged code that you interop with in earlier versions, Microsoft's secure CRT can terminate a program instantly when a security problem is detected. This is not actually an exception under the hood, the process is instantly terminated since the code considers the process compromised and not capable of safely processing exceptions. A common case is where the stack frame of a native function is smashed, a common problem in native code and used by viral code to tinker with the return address to run arbitrary code. An attack scenario called "stack buffer overflow". There were a few false alarms in CLR code early after the .NET 4.0 release, but I haven't seen any in quite a while. You can trigger such an abort yourself by writing beyond the bounds of a *stackalloc*.
- Quite infamously, exceptions thrown by Windows message handlers when you run code in 32-bit mode in the WOW64 emulation layer on a 64-bit operating system and you have a debugger attached. Best known for the troublesome Load event in Winforms but also present for other messages and in other runtime environments. The ugly details are in [this answer](https://stackoverflow.com/questions/4933958/vs2010-does-not-show-unhandled-exception-message-in-a-winforms-application-on-a/4934010#4934010).
- Starting with .NET 4.5, exceptions that Microsoft classifies as *Corrupted State Exceptions* (CSEs). They *can* be caught, but that should only ever be done by a top-level exception handler that doesn't do anything but generate a diagnostic for the user's benefit and terminates the app unconditionally. Backgrounder is available in [this magazine article](https://msdn.microsoft.com/en-us/magazine/dd419661.aspx).
- Any exception that is thrown by the jitter *before* your code can start running cannot be caught or reported. Failure to compile your Main() method is the common case, typically a FileNotFoundException.
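For the corrupted-state-exception case, a minimal, hedged sketch of such a top-level handler on .NET 4.x could look like the following; `RunApp` is a hypothetical placeholder for the application entry point, and the `[HandleProcessCorruptedStateExceptions]` attribute is what opts the method back into seeing CSEs:

```csharp
using System;
using System.Runtime.ExceptionServices;
using System.Security;

static class Program
{
    [HandleProcessCorruptedStateExceptions, SecurityCritical]
    static int Main()
    {
        try
        {
            RunApp();                          // hypothetical application entry point
            return 0;
        }
        catch (Exception ex)                   // now also sees e.g. AccessViolationException
        {
            Console.Error.WriteLine(ex);       // diagnostic for the user's benefit only
            Environment.FailFast("Corrupted state detected", ex);
            return 1;                          // never reached
        }
    }

    static void RunApp() { /* ... */ }
}
```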
Will repeatedly normalizing an IEEE floating point vector mutate it? If I take a (nonzero) floating point vector (an `(x, y, z)` vector), and normalize it to unit length, is normalizing it a second time guaranteed to return the same result?
I am not aware of a relevant result from the literature. A quick test demonstrates that normalizing a 3D-vector twice frequently leads to small differences between the normalized vector and the re-normalized one, even when care is taken to perform the normalization accurately, for example, by performing it in higher-precision arithmetic. I used the ISO-C99 program below for this quick test, compiling it with my compiler's "strict" floating-point settings (`icl /fp:strict`) for an x64 platform. ``` #include <stdio.h> #include <stdlib.h> #include <stdint.h> #include <string.h> #include <math.h> #define USE_HYPOT (1) // Fixes via: Greg Rose, KISS: A Bit Too Simple. http://eprint.iacr.org/2011/007 static uint32_t z=362436069,w=521288629,jsr=362436069,jcong=123456789; #define znew (z=36969*(z&0xffff)+(z>>16)) #define wnew (w=18000*(w&0xffff)+(w>>16)) #define MWC ((znew<<16)+wnew) #define SHR3 (jsr^=(jsr<<13),jsr^=(jsr>>17),jsr^=(jsr<<5)) /* 2^32-1 */ #define CONG (jcong=69069*jcong+13579) /* 2^32 */ #define KISS ((MWC^CONG)+SHR3) float uint32_as_float (uint32_t a) { float r; memcpy (&r, &a, sizeof(r)); return r; } void normalize_3d (float *a, float *b, float *c) { #if USE_HYPOT double l = hypot (hypot ((double)a[0], (double)b[0]), (double)c[0]); #else // USE_HYPOT double l = sqrt ((double)a[0]*a[0] + (double)b[0]*b[0] + (double)c[0]*c[0]); #endif // USE_HYPOT *a = (float)((double)a[0] / l); *b = (float)((double)b[0] / l); *c = (float)((double)c[0] / l); } int main (void) { float a, aa, aaa, b, bb, bbb, c, cc, ccc; do { /* generate random vector */ do { a = uint32_as_float (KISS & ~0x80000000u); } while (isnanf (a) || (a > 0x1.0p126f) || (a < 0x1.0p-126f)); do { b = uint32_as_float (KISS & ~0x80000000u); } while (isnanf (b) || (b > 0x1.0p126f) || (b < 0x1.0p-126f)); do { c = uint32_as_float (KISS & ~0x80000000u); } while (isnanf (c) || (c > 0x1.0p126f) || (c < 0x1.0p-126f)); /* normalize vector once */ aa = a; bb = b; cc = c; normalize_3d (&aa, &bb, &cc); /* re-normalize normalized vector */ aaa = aa; bbb = bb; ccc = cc; normalize_3d (&aaa, &bbb, &ccc); /* check whether normalized vector is equal to re-normalized one */ if ((aa != aaa) || (bb != bbb) || (cc != ccc)) { printf ("norm = (%15.6a, %15.6a, %15.6a) re-norm = (%15.6a, %15.6a, %15.6a)\n", aa, bb, cc, aaa, bbb, ccc); } } while (1); return EXIT_SUCCESS; } ```
Difference of Headless browsers for automation The main difference is execution on a GUI basis versus a non-GUI basis (headless). I am looking for the differences between the various headless browsers, but unfortunately I didn't find any comparison. I went through them one by one, which only added to the confusion. It would be great if someone could share a short summary of the differences that makes things clear.
## Browser

A [**Browser**](https://searchwindevelopment.techtarget.com/definition/browser) is an application program that provides a way to look at and interact with all the information on the World Wide Web. Technically a [**Browser**](https://www.computerhope.com/jargon/b/browser.htm), alternatively referred to as a **Web Browser** or **Internet Browser**, is a client program that uses HTTP (Hypertext Transfer Protocol) to make requests of Web servers throughout the Internet on behalf of the Browser User.

---

## Headless Browser

A **Headless Browser** is also a *Web Browser*, but without a graphical user interface (GUI); it can be controlled programmatically, which makes it extensively useful for *automation*, *testing*, and other purposes.

---

## Why use Headless Browsers?

There are a lot of advantages and disadvantages in using Headless Browsers. Using a headless browser might not be very helpful for browsing the Web, but for *automating* tasks and tests it's **awesome**.

---

## Advantages of Headless Browsers

There are a lot of advantages in using *Headless Browsers*. Some of them are as follows:

- A definite advantage of using *Headless Browsers* is that they are typically faster than real browsers. The reason they are faster is that we are not starting up a *Browser GUI* and can bypass all the time a real browser takes to load **CSS** and **JavaScript** and to open and render the [HTML DOM](https://www.w3schools.com/js/js_htmldom.asp).
- Performance-wise you can typically see a **2x** to **15x** faster performance when using a headless browser.
- While [Scraping Websites](https://rads.stackoverflow.com/amzn/click/com/1491910291) you don't necessarily want to have to manually start up a browser. You can access the website headlessly and just scrape the HTML; you don't need to render a *full browser* to do that.
- A lot of developers use a *Headless Browser* for **unit testing** *code changes* for their websites and mobile apps. Being able to do all this from a command line without having to manually refresh or start a browser saves them lots of time and effort.

---

## When You Might NOT Want to Use a Headless Browser

There can be a number of reasons why you may opt to use a *Real Browser* instead of a *Headless Browser*. A couple of instances:

- You need to mimic real users.
- You need to visually see the test run.
- If you need to do lots of debugging, headless debugging can be difficult.

---

## Which headless browsers are better?

As you rightly pointed out that *...the main difference is in the execution on a GUI basis and a non-GUI basis (headless)...*, from a *testing perspective* a lot will depend on the [Browser Engine](https://github.com/dhamaniasad/HeadlessBrowsers#browser-engines) implemented under the hood by any particular browser. For example, here are some of the *Browser Engines* which fully render web pages or run JavaScript in a virtual DOM.

- [Chromium Embedded Framework](https://bitbucket.org/chromiumembedded/cef): **CEF** is an open source project based on the Google Chromium project with **JavaScript** support and a **BSD** license.
- [Erik](https://github.com/phimage/Erik): **Erik** is a *Headless Browser* on top of Kanna and WebKit with **Swift** support and an **MIT** license.
- [jBrowserDriver](https://github.com/machinepublishers/jbrowserdriver): **jBrowserDriver** is a *Selenium-compatible Headless Browser* which is *WebKit-based* and works with *Selenium Server* through **Java** bindings, under an **Apache License v2.0** license.
- [PhantomJS](http://phantomjs.org/): PhantomJS is a headless WebKit scriptable with a JavaScript API. It has fast and native support for various web standards: DOM handling, CSS selectors, JSON, Canvas, and SVG, with **JavaScript**, **Python**, **Ruby**, **Java**, **C#**, **Haskell**, **Objective-C**, **Perl**, **PHP** and **R** (via [Selenium](https://docs.seleniumhq.org/about/platforms.jsp#programming-languages)) support and a **BSD 3-Clause** license.
- [Splash](https://github.com/scrapinghub/splash): Splash is a JavaScript rendering service with an HTTP API. It's a lightweight browser, implemented in Python using Twisted and Qt, with bindings for almost all languages and a **BSD 3-Clause** license.

You can find a related discussion in [Which drivers support “no-browser”/“headless” testing?](https://stackoverflow.com/questions/48368002/which-drivers-support-no-browser-headless-testing)
How to serve cloudstorage files using app engine SDK In app engine I can serve cloudstorage files like a pdf using the default bucket of my application: ``` http://storage.googleapis.com/<appid>.appspot.com/<file_name> ``` But how can I serve local cloudstorage files in the SDK, without making use of a blob\_key? I write to the default bucket like this: ``` gcs_file_name = '/%s/%s' % (app_identity.get_default_gcs_bucket_name(), file_name) with gcs.open(gcs_file_name, 'w') as f: f.write(data) ``` The name of the default bucket in the SDK = 'app\_default\_bucket' In the SDK datastore I have a Kind: **GsFileInfo** showing: filename: /app\_default\_bucket/example.pdf **Update and workaround: You can get a serving url for NON image files like css, js and pdf.** ``` gs_file = '/gs/%s/%s/%s' % (app_identity.get_default_gcs_bucket_name(), folder, filename) serving_url = images.get_serving_url(blobstore.create_gs_key(gs_file)) ```
**UPDATE: I found this feature to serve cloudstorage files using the SDK:** [This feature has not been documented yet.](https://gist.github.com/voscausa/9541133)

```
http://localhost:8080/_ah/gcs/app_default_bucket/filename
```

This means we do not need the image serving URL to serve NON image files, as shown below!!!

To create a serving URL for cloudstorage files like images, css, js and pdf's in the default\_bucket, I use this code for testing (SDK) and GAE production:

**IMPORTANT: images.get\_serving\_url() also works for NON images in the SDK!!**

In the SDK you still need the blobstore to read a blob and create a serving URL for a cloudstorage object. I also added the code to read, write and upload cloudstorage blobs in the SDK and GAE production. The code can be found [here](https://gist.github.com/voscausa/9541133).
for loop inside jquery function I am trying to repeat something inside a jQuery function. I tried a for loop, but it seems it doesn't like the syntax. For instance, I have the variable

```
var number = 2;
```

and now I have

```
$('tr').html('<td id="'+number+'"></td>');
```

What I want to do is loop from 0 to number (0,1,2) so that in the end I end up with 3 `<td>` cells. Thanks
There is *probably* a better way, but this should work. See also the combined sketch after this answer.

```
var loops = [1,2,3];

$.each(loops, function(index, val) {
  $('tr').html('<td id="myCell' + index + '"></td>');
});
```

This should also work (regular JS):

```
var i;
for(i=0; i<3; i++) {
   $('tr').html('<td id="myCell' + i + '"></td>');
}
```

Note how I prefixed the id with the word 'myCell', to ensure XHTML compliance. (thanks to @Peter Ajtai for pointing that out).

**EDIT**

I just noticed another problem - you're using the **.html** function to add the cells. But **.html** replaces the entire html of the matched element. So you'll only ever end up with the last cell. :) You're probably looking for the **.append** function:

```
$('tr').append('<td id="myCell' + i + '"></td>');
```

EDIT 2 -- moved the double quote before myCell rather than after.
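Putting the two fixes together (the loop plus `.append`), a complete sketch could look like this; it assumes the `<tr>` already exists in the page:

```javascript
var number = 2;

// append one cell per iteration instead of replacing the row's contents
for (var i = 0; i <= number; i++) {
    $('tr').append('<td id="myCell' + i + '"></td>');
}
```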
How to get 1D column array and 1D row array from 2D array? (C# .NET) I have `double[,] Array;`. Is it possible to get something like `double[] ColumnArray0 = Array[0,].toArray()` and `double[] RowArray1 = Array[,1].toArray()` without making a copy of every element (using a for loop)? Thanks.
Arrays are a memory area where all entries are stored in a consecutive way. Depending on the data layout in memory, this is only possible for either rows or columns. Instead of the 2D array type `double[,]`, it is in your case better to use an array of arrays, `double[][]`:

```
double[][] Array2d = new double[10][];

// each row has to be allocated separately
for (int i = 0; i < Array2d.Length; i++)
    Array2d[i] = new double[10];

// a whole row can then be taken without copying any elements:
double[] RowArray0 = Array2d[0];
```

Depending on how you put the data in your array, you can also treat the `Array2d` as a column array. But to have both at the same time is not possible. Also have a look here: [Multidimensional Array [][] vs [,]](https://stackoverflow.com/questions/12567329/multidimensional-array-vs)
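For the other dimension there is no way around copying. As a small sketch (the column index 3 is just an example), extracting one column from the jagged array above still means copying one element per row:

```csharp
// Extracting a single column requires a copy.
double[] ColumnArray3 = new double[Array2d.Length];
for (int i = 0; i < Array2d.Length; i++)
    ColumnArray3[i] = Array2d[i][3];
```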
Find and delete latter duplicate records I have a table that has approximately 0.5 to 1 million records. The table has duplicate records identified by a varchar field. Something like

```
ID int(11)
TITLE varchar(100)
```

There are many records that have duplicate TITLE values. I need to remove the duplicate records. The first step is to find them. I suppose there are two methods:

1. Use `GROUP BY TITLE HAVING COUNT(*) > 1`
2. Go record by record using `SELECT ID, TITLE FROM MY_TABLE` and then for each record use a `SELECT ID FROM MY_TABLE WHERE TITLE='".$row['TITLE']."' AND ID <> '".$row['ID']."'`

The question is: which method is more efficient? Or is there another, more efficient method? I am using PHP as the base language.
**In order to delete all duplicate records (having the same title) you can use this query:**

```
DELETE B FROM your_table A INNER JOIN your_table B ON A.TITLE = B.TITLE AND A.ID < B.ID
```

This query will delete all the latter duplicate records while keeping the first one.

---

**In order to keep the latest while deleting all the previous duplicate records:**

```
DELETE A FROM your_table A INNER JOIN your_table B ON A.TITLE = B.TITLE AND A.ID < B.ID
```

---

**EDIT:**

If you want to see which latter (duplicate) records will be deleted before executing the delete operation, run the following query:

```
SELECT B.* FROM your_table A INNER JOIN your_table B ON A.TITLE = B.TITLE AND A.ID < B.ID
```

---

> This assumes that all NEW ID's will be greater than prior ID's. A
> pretty safe assumption but caution is warranted

Thanks @xQbert for this addition.
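If TITLE is in fact supposed to stay unique, one follow-up (my own suggestion, not part of the answer above; the index name is arbitrary) is to add a unique index once the duplicates are gone, so they cannot reappear:

```sql
-- run after the duplicate rows have been deleted
ALTER TABLE your_table ADD UNIQUE INDEX uniq_title (TITLE);
```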
How can I include WordPress posts in a custom PHP file? This is what I want to do: I want `/summary.php` to include the 5 latest posts (only the excerpt) from my blog, which lives in `/wp`. Is there any way to include WordPress in `/summary.php` and only print the HTML for these posts? (Maybe I should parse the RSS?)
Take a look to [Integrating WordPress with your Website](http://codex.wordpress.org/Integrating_WordPress_with_Your_Website) This is an example from that page, that shows the first ten posts in alphabetical order: ``` <?php require('/the/path/to/your/wp-blog-header.php'); ?> <?php $posts = get_posts('numberposts=10&order=ASC&orderby=post_title'); foreach ($posts as $post) : start_wp(); ?> <?php the_date(); echo "<br />"; ?> <?php the_title(); ?> <?php the_excerpt(); ?> <?php endforeach; ?> ``` Use `$posts = get_posts('numberposts=10');` if you want the 10 latest posts.
How to get the current response length from a http.ResponseWriter Given a `http.ResponseWriter` that has had some number of `.Write()`s done to it, can the current accumulated length of the response be obtained directly? I'm pretty sure this could be done with a `Hijack` conversion, but I'd like to know if it can be done directly.
Even if you knew what you get underlying the `http.ResponseWriter` interface, the chances are low that there is something usable. If you [look closer](http://golang.org/src/pkg/net/http/server.go#L110) at the struct you get when using the standard `ServeHTTP` of the http package, you'll see that there's no way to get to the length of the buffer other than hijacking it.

What you can do instead is shadow the writer:

```
type ResponseWriterWithLength struct {
    http.ResponseWriter
    length int
}

func (w *ResponseWriterWithLength) Write(b []byte) (n int, err error) {
    n, err = w.ResponseWriter.Write(b)
    w.length += n

    return
}

func (w *ResponseWriterWithLength) Length() int {
    return w.length
}

func MyServant(w http.ResponseWriter, r *http.Request) {
    // wrap the original writer and do all writes through the wrapper
    lengthWriter := &ResponseWriterWithLength{w, 0}
    fmt.Fprint(lengthWriter, "Hello!")
    log.Printf("response length so far: %d", lengthWriter.Length())
}
```
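If you want the length for every handler, a small sketch of wrapping this up as middleware (my own addition; the name `withLength` and the log format are made up, and it reuses `ResponseWriterWithLength` and `MyServant` from above):

```go
package main

import (
    "log"
    "net/http"
)

// withLength wraps any handler so the accumulated response length gets logged.
func withLength(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        lw := &ResponseWriterWithLength{ResponseWriter: w}
        next.ServeHTTP(lw, r)
        log.Printf("%s %s -> %d bytes", r.Method, r.URL.Path, lw.Length())
    })
}

func main() {
    http.Handle("/", withLength(http.HandlerFunc(MyServant)))
    log.Fatal(http.ListenAndServe(":8080", nil))
}
```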
OpenGL Sutherland-Hodgman polygon clipping algorithm in homogeneous coordinates (4D, CCS) I have two questions. (I marked them 1 and 2 below.) In OpenGL, the clipping is done by Sutherland-Hodgman. However, I wonder how the Sutherland-Hodgman algorithm works in the homogeneous system (4D). I made up a situation. In VCS, there is a line with end points R = (0, 3, -2, 1) and S = (0, 0, 1, 1), and the frustum is right = 1, left = -1, near = 1, far = 3, top = 4, bottom = -4. Therefore, the projection matrix P is

```
1  0    0   0
0  1/4  0   0
0  0   -2  -3
0  0   -1   0
```

If we transform the line with P, then the end points become

```
R' = (0, 3/4, 1, 2), S' = (0, 0, -5, -1)
```

I know that the perspective division should not be done yet, because if we do the perspective division, the clipping result is not correct.

1. Here is what I am curious about: what makes the clipping correct, given that we have not yet done the perspective division? What mathematical properties are at work here?
2. How do I calculate the clipping result in the above situation? (The fact that two intersections occur in the w-y coordinate system confuses me. I thought the result is one line, not two divided parts.)
I'm not quite sure whether you understood the Sutherland-Hodgman algorithm correctly (or at least I didn't get your example). Thus I will prove here that it doesn't make any difference whether clipping happens before or after the perspective divide. The proof is only shown for one plane (clipping has to be done against all 6 planes), since applying multiple such clipping operations after each other makes no difference here.

Let's assume we have two points (as you described) R' and S' in clip space. And we have a clipping plane P given in Hessian normal form [n, p] (if we take the left plane this is [1,0,0,1]).

If we were calculating in pure 3d space (R^3), then checking whether a line crosses this plane would be done by calculating the signed distance of both points to the plane and checking if the sign is different. The signed distance for a point X = [x/w,y/w,z/w] is given by

```
D = dot(n, X) + p
```

Let's write down the actual equation we have (including the perspective divide):

```
d = n_x * x/w + n_y * y/w + n_z * z/w + p
```

In order to find the exact intersection point, we would, again in R^3 space, calculate for both points (A = R'/R'w, B = S'/S'w) the distance to the plane (da, db) and perform a linear interpolation (I will only write the equations for the x-coordinate here since y and z work the same way):

```
x = A_x * (1 - da/(da - db)) + B_x * (da/(da-db))
x = R'x/R'w * (1 - da/(da - db)) + S'x/S'w * (da/(da-db))
```

And w = 1 (since we interpolate between two points both having w = 1)

Now we already know from [the previous discussion](https://stackoverflow.com/questions/41952225/why-clipping-should-be-done-in-ccs-not-ndcs) that clipping has to happen before the perspective divide, thus we have to adapt this equation. This means that, for each point, the clipping cube has a different scaling w. Let's see what happens when we try to perform the same operations in P^3 (before the perspective divide):

First, we "revert" the perspective divide to get to X=[x,y,z,w] for which the distance to the plane is given by

```
d = n_x * x/w + n_y * y/w + n_z * z/w + p
d = (n_x * x + n_y * y + n_z * z) / w + p
d * w = n_x * x + n_y * y + n_z * z + p * w
d * w = dot([n, p], [x,y,z,w])
d * w = dot(P, X)
```

Since we are only interested in the sign of the whole calculation, which we haven't changed by our operations, we can compare the `D*w`s and get the same inside/outside result as in R^3.

For the two points R' and S', the calculated distances in P^3 are dr = da \* R'w and ds = db \* S'w. When we now use the same interpolation equation as above but for R' and S' we get for x:

```
x' = R'x * (1 - (da * R'w)/(da * R'w - db * S'w)) + S'x * (da * R'w)/(da * R'w - db * S'w)
```

At first view this looks rather different from the result we got in R^3, but since we are still in P^3 (thus x'), we still have to do the perspective divide on the result (this is allowed here, since the interpolated point will always be at the border of the view-frustum and thus dividing by w will not introduce any problems).

The interpolated w component is given as:

```
w' = R'w * (1 - (da * R'w)/(da * R'w - db * S'w)) + S'w * (da * R'w)/(da * R'w - db * S'w)
```

And when calculating x/w we get

```
x = x' / w';
x = R'x/R'w * (1 - da/(da - db)) + S'x/S'w * (da/(da-db))
```

which is exactly the same result as when calculating everything in R^3.
**Conclusion:** The interpolation gives the same result, no matter if we perform the perspective divide first and interpolation afterwards or interpolating first and dividing then. But with the second variant we avoid the problem with points flipping from behind the viewer to the front since we are only dividing points that are guaranteed to be inside (or on the border) of the viewing frustum.
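A quick numeric sanity check of the algebra above (my own sketch; the clip-space points and the plane are made up, both points with positive w so the sign test is directly comparable as well):

```python
import numpy as np

# hypothetical clip-space points (x, y, z, w), both with w > 0
Rp = np.array([0.0, 0.75, 1.0, 2.0])   # inside the near plane (z/w >= -1)
Sp = np.array([1.0, 0.0, -3.0, 2.0])   # outside the near plane

# near plane in Hessian normal form [n, p]: z/w + 1 >= 0
n = np.array([0.0, 0.0, 1.0])
p = 1.0

# route 1: perspective divide first, then interpolate in R^3
A, B = Rp[:3] / Rp[3], Sp[:3] / Sp[3]
da, db = n @ A + p, n @ B + p
t = da / (da - db)
x_r3 = A * (1 - t) + B * t

# route 2: interpolate in P^3 using the d*w distances, divide afterwards
dr, ds = n @ Rp[:3] + p * Rp[3], n @ Sp[:3] + p * Sp[3]
u = dr / (dr - ds)
Xp = Rp * (1 - u) + Sp * u
x_p3 = Xp[:3] / Xp[3]

print(x_r3)   # [ 0.375    0.09375 -1.     ]
print(x_p3)   # identical: both routes land on the same point of the plane
```

With a point behind the viewer (negative w) the interpolation itself still agrees, but the plain R^3 sign test breaks down, which is exactly the caveat stated in the conclusion.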
How to get HTTP StatusCodes in ember-data When I invoke

```
App.store.createRecord(App.User, { name: this.get("name") });
App.store.commit();
```

how do I know if it's successful, and how do I wait for the async message?
Very limited error handling was [recently added](https://github.com/emberjs/data/commit/36a70c9839c9807248856ce2e25959bf80d6b941) to DS.RESTAdapter in ember-data master.

When creating or updating records (with bulk commit disabled) and a status code between 400 and 599 is returned, the following will happen:

- A `422 Unprocessable Entity` will transition the record to the "invalid" state and will add any errors returned from the server to the record's `errors` property. The adapter assumes the server will respond with JSON in the following format:

```
{
  errors: {
    name: ["can't be blank"],
    password: ["must be at least 8 characters", "must contain a number"]
  }
}
```

(The error messages themselves could be arrays of strings or just strings. ember-data doesn't currently care which.) To detect this state:

```
record.get('isValid') === false
```

- All other status codes will transition the record to the "error" state. To detect this state, use:

```
record.get('isError') === true
```

More cases may eventually be handled by ember-data out of the box, but for the moment if you need something specific, you'll have to extend `DS.RESTAdapter`, customizing its `didError` function to add it in yourself.
Iteration over const collections in D In C++ (STL), we define const and non-const methods and two kinds of iterators for iteration over a collection:

```
class Container {
public:
    iterator begin();
    const_iterator begin() const;
};
```

How can we extend this technique to D? My first attempt:

```
class Container(T) {
    class Range {
        ref T front();
        // implementation
    }

    class ConstRange {
        T front() const;
        // implementation
    }

    Range all() {
        return new Range(/**/);
    }

    ConstRange all() const {
        return new ConstRange(/**/);
    }
}

unittest {
    alias list = List!int;
    const list L = new list;
    writeln(L.all());
}
```

But it fails with an error: `Error: nested type List.List!int.List.Range should have the same or weaker constancy as enclosing type const(List!int)`

What's wrong?
The solution is to make your range exist outside the container, but still reference it. If the range is inside the container, it is subject to the transitive const rule, but if it is outside you can keep a const reference while being a mutable range. If you define them in the same file, the range can still see private members of the container. Observe: ``` class Container(T) { private T[] contents; this(T[] contents) { this.contents = contents; } RangeOver!(Container!T, T) getRange() { return RangeOver!(Container!T, T)(this); } RangeOver!(const(Container!T), const(T)) getRange() const { return RangeOver!(const(Container!T), const(T))(this); } } struct RangeOver(Container, T) { Container container; size_t iterationPosition; this(Container container) { this.container = container; this.iterationPosition = 0; } ref T front() { return container.contents[iterationPosition]; } bool empty() { return iterationPosition == container.contents.length; } void popFront() { iterationPosition++; } } void main() { import std.stdio; // mutable iteration { writeln("about to mutate..."); auto container = new Container!int([1,2,3]); foreach(ref item; container.getRange()) { writeln(item); item += 5; } writeln("mutation done"); // changes seen foreach(item; container.getRange()) writeln(item); } // const iteration { writeln("consting it up y0"); const container = new Container!int([1,2,3]); // allowed foreach(item; container.getRange()) writeln(item); } } ```
In ES5 Javascript, how do I add an item to an array and return the new array immediately, without using concat? I often find myself in the situation where I want to, in a single (atomic) operation, add an item to an array and return that new array.

```
['a', 'b'].push('c');
```

won't work as it returns the new length. I know the following code works

```
['a', 'b'].concat(['c']);
```

But I find it ugly code (combining two arrays just to add a single item to the end of the first array). I can't use `Array.splice()` as it modifies the original array (and returns the removed items). `Array.slice()` does return a shallow copy, but you can't add new items.

**ES6**

I'm aware that in `es6` you can use

```
[...['a', 'b'], 'c']
```

But I'm looking for an `es5` solution

**Lodash**

I'm okay with using `lodash`

**Just to be clear**

I'm aware that this can be achieved in several different ways (like the `Array.concat()` method above), but I'm looking for an intuitive, simple piece of code which doesn't "misuse" other operators
> I know the following code works `['a', 'b'].concat(['c']);` But I find it ugly code (*combining two arrays* just to add a single item to the end of the first array).

The `concat()` method can be given a single value (or multiple values) without needing to wrap the value(s) in an array first, for example:

```
['a', 'b'].concat('c'); // instead of .concat(['c']);
```

From [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/concat) (my emphasis):

> Arrays and/or **values** to concatenate into a new array.

Beyond that, there are limited options without extending Array or using the existing methods. Here is an example of how to extend Array (note that this returns the current, mutated array, though):

```
Array.prototype.append = function(item) {
  this.push(item);
  return this
};

var a = [1, 2, 3];
console.log(a.append(4))
```

Optionally, create a simple function as @torazaburo suggests, which takes the array and the item as arguments:

```
function append(arr, item) {
  arr.push(item);
  return arr;
}
```

or using `concat()`:

```
function append(arr, item) {
  return arr.concat(item)
}
```
How should I specify the path for an sqlite db in an iPhone project? After adding it as a resource, the database file itself is in the project root. I've only been able to open it by specifying the full path as OS X sees it, i.e., "/Users/Louis/Documents/Test Project/test.db". But of course there is no such path on an iPhone. I think I should define the path as "application root/test.db" but I don't know how, or if that would even work anywhere else besides my development machine. Thanks for any ideas.
To get the path of the file you've added in Xcode you would use pathForResource:ofType: on the main bundle.

```
NSString *path = [[NSBundle mainBundle] pathForResource:@"yourDb" ofType:@"sqlite"];
```

But you can't change files in the main bundle. So you have to copy it to another location, for example to the Library directory of your app. You could do it like this:

```
NSString *libraryPath = [NSSearchPathForDirectoriesInDomains(NSLibraryDirectory, NSUserDomainMask, YES) lastObject];
NSString *targetPath = [libraryPath stringByAppendingPathComponent:@"yourDB.sqlite"];

if (![[NSFileManager defaultManager] fileExistsAtPath:targetPath]) {
    // database doesn't exist in your library path... copy it from the bundle
    NSString *sourcePath = [[NSBundle mainBundle] pathForResource:@"yourDb" ofType:@"sqlite"];
    NSError *error = nil;
    if (![[NSFileManager defaultManager] copyItemAtPath:sourcePath toPath:targetPath error:&error]) {
        NSLog(@"Error: %@", error);
    }
}
```
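Once the copy exists in a writable location, you open the database with that copied path rather than the bundle path. A rough sketch using the plain C sqlite3 API (link against libsqlite3 and import its header); `targetPath` is the variable from the snippet above:

```
#import <sqlite3.h>

sqlite3 *database = NULL;
if (sqlite3_open([targetPath UTF8String], &database) == SQLITE_OK) {
    // ... run your queries here ...
    sqlite3_close(database);
} else {
    NSLog(@"Could not open database at %@", targetPath);
}
```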
Input/character validation on an input-element I took some time to create a function that validates which keys you can input on an element. So far it has two uses:

1. You call `.validate()` on the object, but make sure it has a `data-match` attribute with the regex in it
2. You call `.validate("REGEX")` on the object, but replace `REGEX` with your actual regular expression

Here is an example:

```
document.getElementById("input").validate("[0-9A-Fa-f]");
```

This will *only* allow characters 0-9, A-F, and a-f. Nothing else. I want to know if I can improve this code. So far I have it the way I want it to be, but I would surely love someone else's feedback on it as well. It is really lightweight and can easily be implemented. Thanks a lot!

```
Object.prototype.validate = function(myRegex) {
  var regex = this.getAttribute("data-match") || myRegex;
  var allowed = [8, 37, 38, 39, 40, 46];
  this.addEventListener("keydown", function(e) {
    var c = (e.metaKey ? '⌘-' : '') + String.fromCharCode(e.keyCode);
    if (!e.shiftKey && !e.metaKey) c = c.toLowerCase();
    if ((e.ctrlKey && (e.keyCode == 65 || e.keyCode == 97)) || c == '⌘-A') return true;
    if (c.match(regex) || allowed.indexOf(e.which) >= 0) {
      //Do nothing
    } else {
      e.preventDefault();
    }
  });
}

document.getElementById("input").validate("[0-9A-Fa-f]");
```

```
<input type="text" id="input" />
```
# Whatever you do DON'T MODIFY `Object.prototype`

Modifying `Object.prototype` is one of the worst things you can do in JavaScript.

Quoting [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/proto):

> **Warning**: Changing the `[[Prototype]]` of an object is, by the nature of how modern JavaScript engines optimize property accesses, a very slow operation, in ***every*** browser and JavaScript engine. The effects on performance of altering inheritance are subtle and far-flung, and are not limited to simply the time spent in `obj.__proto__ = ... statement`, but may extend to ***any*** code that has access to ***any*** object whose `[[Prototype]]` has been altered. If you care about performance you should avoid setting the `[[Prototype]]` of an object. Instead, create a new object with the desired `[[Prototype]]` using `Object.create()`.

---

### What should I do?

Preferably create a function:

```
function Validate(element, regex) {
  // code
}
```

but if you want to extend an element, use `Element.prototype`, preferably with [`Object.defineProperty`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/defineProperty):

```
Object.defineProperty(Element.prototype, 'validate', {
  value: function(regex) {
    // code
  }
});
```

A bonus of using `Object.defineProperty` is that you can also define getters and setters, which are pretty cool too.

# Avoid empty `ifs`

Here you have:

```
if (c.match(regex) || allowed.indexOf(e.which) >= 0) {
  //Do nothing
} else {
  e.preventDefault();
}
```

instead, use a NOT (`!`):

```
if (!(c.match(regex) || allowed.indexOf(e.which) >= 0)) e.preventDefault();
```

the best way to write this specific condition would be:

```
if (!c.match(regex) && allowed.indexOf(e.which) < 0)
```

(note the `&&`: negating the `||` turns it into `&&`; with `||` the check would also block allowed control keys such as backspace).

# Use clear variable names

As much as I love code-golf (writing short code), this is not it. Make sure your variable names are clear. I don't know what the `c` variable holds, so I can't suggest an alternative name; that also makes the workings of your code harder to follow.

# Use Regex literals, not strings

Your example shows:

```
"[0-9A-Fa-f]"
```

That's a string, not a regex. Use a regex literal (which also avoids double escaping):

```
/[0-9A-Fa-f]/
```
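Putting those points together, a minimal sketch of the function-based version — names like `validateInput` are just placeholders, and the key/regex handling is kept exactly as in your original:

```
function validateInput(element, regex) {
  var allowed = [8, 37, 38, 39, 40, 46]; // backspace, arrows, delete

  element.addEventListener("keydown", function(e) {
    var character = (e.metaKey ? '⌘-' : '') + String.fromCharCode(e.keyCode);
    if (!e.shiftKey && !e.metaKey) character = character.toLowerCase();

    // let Ctrl-A / ⌘-A through for "select all"
    if ((e.ctrlKey && (e.keyCode == 65 || e.keyCode == 97)) || character == '⌘-A') return;

    // block the key only if it neither matches the pattern nor is an allowed control key
    if (!character.match(regex) && allowed.indexOf(e.which) < 0) e.preventDefault();
  });
}

validateInput(document.getElementById("input"), /[0-9A-Fa-f]/);
```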
Best approach for managing strings in react native I am new to React Native. I've been dealing with a big project that contains a lot of strings that can be reused in many places. So I created a `strings.js` file, as with Android's `strings.xml`, to store all reusable strings in one file like this,

```
export const SOME_STRING = 'Some value';
export const ANOTHER_STRING = 'Another value';
...
```

and import them wherever I need them. **So these are my questions...** 1) Is this a good approach? 2) Is there any alternative to this?
You don't need to export each value. A better way I know of is to export them all at once:

```
const SOME_STRING = 'Some value';
const ANOTHER_STRING = 'Another value';

module.exports = {
  SOME_STRING: SOME_STRING,
  ANOTHER_STRING: ANOTHER_STRING
}
```

Or you may prefer to wrap all of them in one constant object:

```
const APPLICATION_CONSTANTS = {
  SOME_STRING : 'Some string',
  ANOTHER_STRING : 'Another string'
}
export default APPLICATION_CONSTANTS;
```

Usage:

```
import APPLICATION_CONSTANTS from './strings';

APPLICATION_CONSTANTS.SOME_STRING
```
Can/how do you host a full VB6 Form in a C# WPF app? I am currently exploring the option of porting an older VB6 application to WPF with C#. The plan, in phase one, is to port several key forms and not the whole application. The theoretical goal is to open the VB6 form in a container of some sort within WPF via an ActiveX dll. Is this even possible? I've tried looking at the Interop and can't seem to find a solid example of how to get it to work with anything but Win32 controls, not a full form. I have full access to the old VB6 code and can modify it in any way needed. The following screenshot of the main WPF app would serve as the wrapper/container: <http://www.evocommand.com/junk_delete_me/main_menu_mockup.png> The current VB6 maintenance screen would be loaded in the "white space" section on the right side of the previous screen.
I was able to accomplish the task with the following steps:

1. Created a new VB6 ActiveX Control project. Copied and pasted the entire contents of the VB6 form's controls and code-behind into the new control. There are several elements that have to be handled in switching to a control:
   1. You lose the ability to display the caption of the form in the previous manner. You can work around it with alternate controls (label/borderless textbox, etc.) that accomplish the same functionality if needed. This wasn't a priority since each screen was being hosted in a browser-like tab system in our new .Net project.
   2. All mousepointer references have to be changed from Me.Mousepointer to Screen.mousepointer.
   3. You cannot use Me.Hide and have to use alternate events to hide the .Net container.
   4. Any and all references to Me.[anything] have to be removed or replaced with UserControl.[anything] if they are applicable.
   5. If you use any functions that reference [yourcontrol].Container.Property on a form, they will need to be altered to loop through the UserControl.Controls collection instead, as "Container" is invalid for VB6 ActiveX controls.
   6. All non-modal forms/dialog boxes must be removed from the project, as there is now no Hwnd to handle in WPF. You get an error of 'Non-modal forms cannot be displayed in this host application from an ActiveX DLL, ActiveX Control, or Property page'. In our case we had a simple splash screen that would display while certain long processes/reports ran, to let the user know what was running.
2. I was unable to directly add the VB6 control via the interop to a WPF project. As such, a new .Net "Windows Forms Control Library" project was created. A reference to the VB6 OCX was added to the project. The VB6 controls were then added to the .Net toolbox by "right click" -> "Add Item" and pointing a COM reference to the VB6 control OCX. The .Net control was then used to host/serve the VB6 control.
3. To display a hosted VB6 form and get it to fire the necessary initialization functionality, the VB6 OCX controls default to Visible = False, so they are initially added to the .Net OCX as invisible controls. When needed, the VB6 control is set to Visible = True, which fires the UserControl_Show() event. All code formerly in Form_Load() was moved to this event. The Show event was the easiest place to hook the old Form_Load behaviour. MSDN: "The control does not receive Show events if the form is hidden and then shown again, or if the form is minimized and then restored. The control's window remains on the form during these operations, and its Visible property doesn't change."
4. Wrapping the VB6 controls within a .Net WinForms control resolved the issue of Radio/Option buttons being rendered as black, as outlined elsewhere in one of my responses to this question, without having to convert the frames to Picture boxes as suggested.
5. In the WPF app, as a menu choice is selected, XAML code is dynamically created and displayed via a wrapper with a WindowsFormsHost tag. A dynamically created control object from the .Net WinForms project is then pushed into the WindowsFormsHost tag in the XAML, and the control is made visible in the .Net project, which fires the VB6 UserControl_Show event and then loads and displays the VB6 form.
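For the WPF hosting side described in step 5, the wiring boils down to putting the WinForms wrapper control into a `WindowsFormsHost`. A rough C# sketch — `MaintenanceScreenWrapper` and `contentRegion` are hypothetical names for the WinForms wrapper control and the right-hand panel of the WPF window:

```
// requires project references to WindowsFormsIntegration and System.Windows.Forms
private void ShowMaintenanceScreen()
{
    var host = new System.Windows.Forms.Integration.WindowsFormsHost();

    // the .Net WinForms control that wraps the VB6 OCX (hypothetical name)
    var wrapper = new MaintenanceScreenWrapper();
    host.Child = wrapper;

    // contentRegion: the Grid/Panel on the right-hand side of the main window (hypothetical name)
    contentRegion.Children.Add(host);
}
```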
Reason for uncommon OOP in Python? Instead of using common OOP, like Java and C# do with their base class `Object` or `object`, Python uses special methods for basic behaviour of objects. Python uses `__str__` which is used when the object is passed to `print`: ``` >>> class Demo: >>> def __str__(self): >>> return "representation" >>> d = Demo() >>> print(d) representation ``` The same with `len`: ``` >>> class Ruler: >>> def __len__(self): >>> return 42 >>> r = Ruler() >>> len(r) 42 ``` What I would expect is something like this: ``` >>> class Ruler: >>> def len(self): >>> return 42 >>> r = Ruler() >>> r.len() 42 ``` What is the reason for using special methods indirectly instead of calling usual methods directly?
The reason for this is explained well in the Python documentation here: <http://docs.python.org/faq/design.html#why-does-python-use-methods-for-some-functionality-e-g-list-index-but-functions-for-other-e-g-len-list> > > The major reason is history. Functions > were used for those operations that > were generic for a group of types and > which were intended to work even for > objects that didn’t have methods at > all (e.g. tuples). It is also > convenient to have a function that can > readily be applied to an amorphous > collection of objects when you use the > functional features of Python (map(), > apply() et al). > > > In fact, implementing len(), max(), > min() as a built-in function is > actually less code than implementing > them as methods for each type. One can > quibble about individual cases but > it’s a part of Python, and it’s too > late to make such fundamental changes > now. The functions have to remain to > avoid massive code breakage. > > > (This was answered in the comments, but needs to be shown as a real answer for the sake of future readers.)
Copy a range to a virtual range Is it possible to copy a range to a virtual range, or does it require me to sloppily paste it in another range in the workbook?

```
dim x as range
x = copy of Range("A1:A4")
```

Obviously I usually use the following code

```
dim x as range
set x = Range("A1:A4")
```

but in the above example it only makes x a "shortcut" to that range rather than a copy of the range object itself. That is usually what I want, but lately I have been finding it would be quite useful to totally save a range and all its properties in memory rather than in the workbook somewhere.
> Is it possible to copy a range to a virtual range?

No, it is not possible. A Range always represents some existing instance(s) of cells on a worksheet in a workbook.

> Does it require me to sloppily paste it in another range in the workbook?

It depends on what you want to do. You can paste everything from one range to another, or you can paste only certain things, e.g. just the formulas.

```
dim x as range
set x = Range("A1:A4")
```

> But in the above example it only makes x a "shortcut" to that range rather than a copy of the range object itself.

The variable `x` holds a reference to that specific range. It is not possible to make any standalone copy of a range. It is possible to create references to a range and to copy everything / something from one range to another range.

> Lately I have been finding it would be quite useful to totally save a range and all its properties in memory rather than in the workbook somewhere.

Again, it is not possible to save all range properties to some virtual, standalone copy of a specific Range, because a Range always represents an existing, concrete set of cells. What you could do is create your own class with some properties of a Range, or even all of them, but that will be some extra work to do (a minimal sketch of such a class follows after the example below).

Here are some examples of how to use a range as a parameter and how to copy it to another range. HTH.

```
Option Explicit

Sub Main()

    Dim primaryRange As Range
    Set primaryRange = Worksheets(1).Range("A1:D3")

    CopyRangeAll someRange:=primaryRange
    CopyRangeFormat someRange:=primaryRange

    ' Value property of a range represents a 2D array of values.
    ' So it is useful if only the values are important and all the other properties do not matter.
    Dim primaryRangeValues As Variant
    primaryRangeValues = primaryRange.value

    Debug.Print "primaryRangeValues (" & _
        LBound(primaryRangeValues, 1) & " To " & UBound(primaryRangeValues, 1) & ", " & _
        LBound(primaryRangeValues, 2) & " To " & UBound(primaryRangeValues, 2) & ")"
    ' Prints primaryRangeValues (1 To 3, 1 To 4)

    Dim value As Variant
    For Each value In primaryRangeValues
        ' This loop through the values is much quicker than iterating through primaryRange.Cells itself.
        ' Use it to iterate through a range when all properties except the value do not matter.
        Debug.Print value
    Next value

End Sub

Private Sub CopyRangeAll(ByVal someRange As Range)
    ' Here all properties of someRange which can be copied are copied to another range.
    ' So the function gets a reference to a specific range and uses all its properties for another range.
    Dim secondaryRange As Range
    Set secondaryRange = Worksheets(2).Range("D4:G6")
    someRange.Copy secondaryRange
End Sub

Private Sub CopyRangeFormat(ByVal someRange As Range)
    ' Here only the formats are copied.
    ' The function receives a reference to a specific range but uses only one special property of it in that other range.
    Dim secondaryRange As Range
    Set secondaryRange = Worksheets(3).Range("G7:J9")
    someRange.Copy
    secondaryRange.PasteSpecial xlPasteFormats ' and many more, e.g. xlPasteFormulas, xlPasteValues etc.
End Sub
```

![Sheet1](https://i.stack.imgur.com/0P780.jpg) ![Sheet2](https://i.stack.imgur.com/lb3tq.jpg) ![Sheet3](https://i.stack.imgur.com/QCIdD.jpg) ![Range values array](https://i.stack.imgur.com/kBpFy.jpg)
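As a rough illustration of that "create your own class" idea, here is a tiny snapshot class that keeps the values and the address of a range in memory, independent of the worksheet. `RangeSnapshot` is just a hypothetical name; put the first part in its own class module:

```
' --- class module: RangeSnapshot (hypothetical name) ---
Private pValues As Variant
Private pAddress As String

Public Sub Capture(ByVal source As Range)
    ' Value returns a 2D Variant array for a multi-cell range
    pValues = source.Value
    pAddress = source.Address
End Sub

Public Property Get Values() As Variant
    Values = pValues
End Property

Public Property Get SourceAddress() As String
    SourceAddress = pAddress
End Property
```

Usage from a normal module:

```
Dim snap As RangeSnapshot
Set snap = New RangeSnapshot
snap.Capture Worksheets(1).Range("A1:D3")

Debug.Print snap.SourceAddress   ' $A$1:$D$3
Debug.Print snap.Values(1, 1)    ' value of A1 at capture time
```

Extend it with whatever other properties (formats, formulas, ...) you need; this is only a sketch of the idea, not a full copy of a Range.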
Site is accessible by domain in all browsers but Chrome I'm trying to set up a website using an Ubuntu machine running nginx. For some reason, I'm able to access the site by the domain name in Safari and Firefox, but in Chrome it's unable to access the server. However, I'm able to use curl, Postman, etc. and I get the index.html back as I'd like to. I found that in Chrome I'm able to access the site using the IP address, and I'm totally lost on where to check next. Here's my configuration file: ``` server { listen 80 default_server; listen [::]:80 default_server; root /var/www/html; index index.html index.htm index.nginx-debian.html; server_name _; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ =404; } } ``` I've changed the response code just to make sure that this is the configuration that I'm actually hitting. Any help would be appreciated! Edit: The domain is gwilliam.dev
Your problem is because you are using a `.dev` domain. The entire `.dev` top-level domain (TLD) [is on the HSTS preload list](https://ma.ttias.be/chrome-force-dev-domains-https-via-preloaded-hsts/) and that means you **must** access it using HTTPS. According to your nginx config snippet, you are only providing HTTP bindings, not HTTPS. In fact, I am surprised that you are able to access the domain using Firefox, since Firefox has been forcing `.dev` to HTTPS since at least mid-2018. You might be using a *very* old version, in which case you should upgrade immediately. The easiest way to get HTTPS support on your site is [LetsEncrypt](https://letsencrypt.org). Once you have that set up, your site should work in Chrome.
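Concretely, after obtaining a certificate (for example with certbot's nginx plugin, which can also rewrite the config for you), the server blocks end up looking roughly like this — the certificate paths below assume certbot's default layout:

```
server {
    listen 80;
    listen [::]:80;
    server_name gwilliam.dev;

    # .dev is HSTS-preloaded, so just send everything to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name gwilliam.dev;

    # paths assume certbot's default layout
    ssl_certificate     /etc/letsencrypt/live/gwilliam.dev/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/gwilliam.dev/privkey.pem;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```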
Hibernate. ClassicQueryTranslatorFactory vs ASTQueryTranslatorFactory What's the difference between these query translators (I mean the differences for me, as a Hibernate user)? Some blogs on the internet say that the ANTLR-based translator is faster. But I figure that if one of them were clearly better, the Hibernate developers would remove the other one. So... what's the difference, and why do we have both of them? In what situations should I choose the first or the second? In what situations shouldn't I choose one of the translators?
It is an **internal Hibernate setting**, introduced when Hibernate was upgraded to version 3. You **should not worry about changing it unless there is a strong reason to**. With the latest versions I don't think you need to change its default value at all. But if you want, you can test it for a performance improvement as described below.

From the **[Hibernate Core Migration Guide : 3.0](https://community.jboss.org/wiki/HibernateCoreMigrationGuide30)**:

> **Query Language Changes**
>
> New Parser - Hibernate3 comes with a brand-new, ANTLR-based HQL/SQL query translator. However, the Hibernate 2.1 query parser is still available. The query parser may be selected by setting the Hibernate property hibernate.query.factory_class. The possible values are org.hibernate.hql.ast.ASTQueryTranslatorFactory, for the new query parser, and org.hibernate.hql.classic.ClassicQueryTranslatorFactory, for the old parser. We are working hard to make the new query parser support all queries allowed by Hibernate 2.1.
>
> However, we expect that many existing applications will need to use the Hibernate 2.1 parser during the migration phase. The Hibernate 1.x syntax "from f in class bar.Foo" is no longer supported, use "from bar.Foo as f" or "from bar.Foo f". Don't use dots in named HQL parameter names. Note: there is a known bug affecting dialects with theta-style outer joins (eg. OracleDialect for Oracle 8i, TimesTen dialect, Sybase11Dialect). Try to use a dialect which supports ANSI-style joins (eg. Oracle9Dialect), or fall back to the old query parser if you experience problems.

Here are a forum [post](https://forum.hibernate.org/viewtopic.php?f=1&t=991832&view=previous) and a blog [post](http://www.zorched.net/2006/10/21/hibernate-query-translators/) regarding this issue.

Now to your questions:

> what's the difference, and why do we have both of them?

As the change log says, Hibernate 3 replaces `ClassicQueryTranslatorFactory` with `ASTQueryTranslatorFactory` as the default. It is an internal change, and users need not be worried about it unless the change breaks the application.

> In what situations should I choose the first or the second? In what situations shouldn't I choose one of the translators?

By default `ASTQueryTranslatorFactory` is enabled; you should consider changing it only if any of your queries break while upgrading to version 3. Once again, it is a story of the past (2006 or so); the latest version of Hibernate is 4.1 and the query translator must be stable by now. So 99% of the time you do not have to change anything.
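If you ever did have to fall back to the classic translator, it is a single configuration entry — for example in `hibernate.cfg.xml` (the same `hibernate.query.factory_class` key can also go into a `hibernate.properties` file or the programmatic configuration):

```
<!-- hibernate.cfg.xml: switch back to the Hibernate 2.1 query parser -->
<property name="hibernate.query.factory_class">org.hibernate.hql.classic.ClassicQueryTranslatorFactory</property>
```

Leaving the property out (or setting it to `org.hibernate.hql.ast.ASTQueryTranslatorFactory`) keeps the default ANTLR-based parser.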