id (stringlengths 7–12) | sentence1 (stringlengths 5–1.44k) | sentence2 (stringlengths 6–2.06k) | label (stringclasses, 4 values) | domain (stringclasses, 5 values) |
---|---|---|---|---|
train_400 | As shown in Section 4, ARS V1 can train linear policies, which achieve the reward thresholds previously proposed in the literature, for five MuJoCo benchmarks. | aRS V1 requires a larger number of episodes, and it cannot train policies for the Humanoid-v1 task. | contrasting | NeurIPS |
train_401 | From only positive (P) and unlabeled (U) data, a binary classifier could be trained with PU learning, in which the state of the art is unbiased PU learning. | if its model is very flexible, empirical risks on training data will go negative, and we will suffer from serious overfitting. | contrasting | NeurIPS |
train_402 | [2014]), which also aim to aggregate information over time from multiple parties and make use of proper scoring rules to do it. | prediction markets provide incentives through payments, rather than influence, and lack the feedback mechanism to select among experts. | contrasting | NeurIPS |
train_403 | We choose the step size η to guarantee a descent on the objective. | the gradient of the approximated structured prediction program in Theorem 1 with respect to θ_r equals to ...; if either ... and/or c_α equal zero, then the beliefs b_{x,y,α}(ŷ_α) can be taken from the set of probability distributions over the support of the max-beliefs, namely b_{x,y,α}(ŷ*_α) > 0 only if ŷ*_α ∈ argmax_{ŷ_α} Σ_{r:α∈E_{r,x}} θ_r φ_{r,α}(x, ŷ_α) + Σ_{v∈N(α)} λ_{x,y,v→α}(ŷ_α). | contrasting | NeurIPS |
train_404 | The key insight when proving this theorem is bounding the rate of change of f. We can immediately see that f_π(θ) := θ log E_{s,π} e^{J/θ} is a convex function since it is the perspective transformation of a convex function, namely, the cumulant generating function of the total cost J. Additionally, Theorem 1 shows that f_π is lower bounded by E_{s,π}[J], assumed to be finite, which implies that f_π is non-increasing. | by directly differentiating the definition of f_π, we get that ∂_θ f_π(θ) ... Since we assumed that the costs, J, are upper bounded, there exists a maximum cost j_M such that J ≤ j_M almost surely for any starting state s, and any policy π. | contrasting | NeurIPS |
train_405 | Previous results in this setting have been limited to answering a single nonadaptive query repeatedly as the database grows [DNPR10, CSS11]. | we provide tools for richer and more adaptive analysis of growing databases. | contrasting | NeurIPS |
train_406 | in estimation of Lipschitz functions. | many functions of interest over graphical models are not Lipschitz with good Lipschitz constants. | contrasting | NeurIPS |
train_407 | First, at an 6 All the code is available at https://github.com/sanghack81/SCMMAB-NIPS2018 7 One may surmise that combinatorial bandit (CB) algorithms can be used to solve SCM-MAB instances by noting that an intervention can be encoded as a binary vector, where each dimension in the vector corresponds to intervening on a single variable with a specific value. | the two settings invoke a very different set of assumptions, which makes their solvers somewhat difficult to compare in some reasonably fair way. | contrasting | NeurIPS |
train_408 | Measures of linear dependence and feedback, based on Granger-Geweke causality (GC) [10] [11]), have been used to estimate instantaneous and lagged functional connectivity in recordings of brain activity made with electroencephalography (EEG, [6]), and electrocorticography (ECoG, [3]). | the application of GC measures to brain recordings made with functional magnetic resonance imaging (fMRI) remains controversial [22][20] [2]. | contrasting | NeurIPS |
train_409 | Note that ℓ1 MLE 1 stops prematurely after only 50 iterations, so that training computation time is sometimes comparable to closed-form estimator. | its statistical performance measured by ℓ2 is much inferior to other ℓ1 MLEs with more iterations as well as Elem-GLM estimator. | contrasting | NeurIPS |
train_410 | A straightforward version of this idea would be to perform shape completion in the 3D voxel grid: f 2.5D→3D = c 3D→3D • p 2.5D→3D . | shape completion in 3D is challenging, as the manifold of plausible shapes is sparser in 3D than in 2D, and empirically this fails to reconstruct shapes well. | contrasting | NeurIPS |
train_411 | It is clearly missing the structure of the model filter. | it is possible to obtain a good estimate of it by maximizing information directly, see panel (d). | contrasting | NeurIPS |
train_412 | Our choice of g is motivated by computational considerations: For general functions g, the computation of (14) would require O(m) time per function evaluation, where m is the number of documents. | the specific functional form in (14) allows O(1) time per function evaluation as ..., where the inner term Σ_{j≠i} S_k(x_i, x_j) in the RHS does not depend on w and can be precomputed. | contrasting | NeurIPS |
train_413 | This is to be expected, since mini-batching does not lead to large gains in a serial setting. | using mini-batching in a serial setting might still be beneficial for implementation reasons, resulting in constant-factor improvements in runtime (e.g. | contrasting | NeurIPS |
train_414 | The Bayesian student thus learns the linear features for N = 0 (d). | unlike the GP, it learns all of the remaining nonlinear features for N = O(dK). | contrasting | NeurIPS |
train_415 | [11] introduced parameterized models of indoor environments, constrained by rules inspired by blockworld to guarantee physical validity. | since this approach samples possible spatial layout hypothesis without clutter, it is prone to errors caused by the occlusion and tend to fit rooms in which the walls coincide with the object surfaces. | contrasting | NeurIPS |
train_416 | In classical VB, we optimize the KL divergence between q(•) and a posterior, KL(q(β, z) || p(β, z | x)); its objective is a function of a fixed data set x. | the objective in Eq. | contrasting | NeurIPS |
train_417 | Most of the approaches (22; 13) leverage a large set of videos to discriminate or build a foreground model. | we segment and localize the foreground separately on each video, making our approach much more scalable. | contrasting | NeurIPS |
train_418 | SO-DLT in particular is a good example of direct adaptation of standard batch deep learning methodology to online learning, as it uses SGD during tracking to fine-tune an ensemble of deep convolutional networks. | the online adaptation of the model comes at a big computational cost and affects the speed of the method, which runs at 5 frames-persecond (FPS) on a GPU. | contrasting | NeurIPS |
train_419 | The pushforward density p_l can then be approximated by the empirical measure given by the particles {x_i^l}_{i=1}^n, where ... The first term in (10) corresponds to a weighted average steepest descent direction of the log-target density π with respect to p_l. This term is responsible for transporting particles towards high-probability regions of π. | the second term can be viewed as a "repulsion force" that spreads the particles along the support of π, preventing them from collapsing around the mode of π. | contrasting | NeurIPS |
train_420 | Agarwal and Bottou [1] presented a lower bound of Ω(m + √m log(1/ε)). | their bound is valid only for deterministic algorithms (thus not including SDCA, SVRG, SAG, etc. | contrasting | NeurIPS |
train_421 | Interestingly, in the case of batch active learning for classification, the random batch selection strategy has been surprisingly effective and is often difficult to outperform with more sophisticated strategies [8]. | as our experiments will show, our approach will dominate random. | contrasting | NeurIPS |
train_422 | Learning uses EM-like algorithms to iteratively localize and refine discriminative image features. | hitherto, the detection accuracy has not been as good as the best methods. | contrasting | NeurIPS |
train_423 | Ideally, the empirical risk of the solution returned by STRSAGA is close to the empirical risk of the ERM over S i . | this is not possible in general. | contrasting | NeurIPS |
train_424 | This is why application of ICA to EEG recently has become popular, yielding new promising results (e.g., [6]). | compared with ICA, the sparse representation has two important advantages: 1) sources are not assumed to be mutually independent as in ICA, even be not stationary; 2) source number can be larger than the number of sensors. | contrasting | NeurIPS |
train_425 | In the case of CS, the ℓ0 norm is relaxed to the ℓ1 norm; for low-rank matrices, the rank operator is relaxed to the nuclear norm. | greedy algorithms [17,20] operate iteratively on the signal measurements, constructing a basis for the signal and attempting signal recovery restricted to that basis. | contrasting | NeurIPS |
train_426 | Today, computer chess programs, built essentially on search techniques and running on a simple PC, can rival or even surpass the best human players. | and in spite of several decades of significant research efforts and of progress in hardware speed, the best Go programs of today are easily defeated by an average human amateur. | contrasting | NeurIPS |
train_427 | They have been extensively analyzed and are the most commonly used methods. | aDP methods typically do not converge and they only provide weak guarantees of approximation quality. | contrasting | NeurIPS |
train_428 | The span of an MDP is the maximum difference in value of any two states under the optimal policy, (M*) := max ... The diameter of an MDP is the maximum number of expected timesteps to get between any two states, D(M*) = max_{s≠s′} min_µ T^µ_{s→s′}. PSRL's bounds are tighter since (M) ≤ CD(M*) and may be exponentially smaller. | uCRL-Factored has stronger probabilistic guarantees than PSRL since its bounds hold with high probability for any MDP M*, not just in expectation. | contrasting | NeurIPS |
train_429 | Particularly the performance obtained with very small numbers of labels is much better than previous published results which shows that the method is capable of making good use of unsupervised learning. | the same model also achieves state-of-the-art results and a significant improvement over the baseline model with full labels in permutation invariant MNIST classification which suggests that the unsupervised task does not disturb supervised learning. | contrasting | NeurIPS |
train_430 | Concrete dropout layer requires negligible additional compute compared with standard dropout layers with our implementation. | using conventional dropout requires considerable resources to manually tune dropout probabilities. | contrasting | NeurIPS |
train_431 | As expected, the naive method strongly under-estimated correlations in the non-simultaneously recorded blocks, as it can only model stimulus-correlations but not noise-correlations across neurons. | 1 our stitching method predicted correlations well, matching those of the fully observed model (correlation 0.84 for stitchLDS, 0.15 for naiveLDS, figure 3d). | contrasting | NeurIPS |
train_432 | This implies f t is difficult to minimize. | if S t is small, then φ i,t is potentially linear for many i. | contrasting | NeurIPS |
train_433 | Our proofs require novel techniques and do not follow from traditional proof progressions. | we first show how we can use these results to arrive at an error bound. | contrasting | NeurIPS |
train_434 | In the BXPCA, we interpret the matrix V as a lower dimensional embedding of the data which can be used for dimensionality reduction. | the corresponding matrix for the BPM model, whose values are restricted to [0,1], is the partial membership of each data point and represents the extent to which each data point belongs to each of the K clusters. | contrasting | NeurIPS |
train_435 | For models with a simple enough graph structure, these algorithms can compute marginal probabilities exponentially faster than direct summation. | these fast exact inference methods apply only to a relatively small class of models-those for which the basic operations of marginalization, conditioning, and multiplication of constituent factors can be done efficiently. | contrasting | NeurIPS |
train_436 | In the deterministic setting, if a term revealing a new v r appeared in half of the components, we could ensure that the algorithm must make m/2 queries to find it. | a randomized algorithm could find it in two queries in expectation, which would eliminate the linear dependence on m in the lower bound! | contrasting | NeurIPS |
train_437 | The regularizers are minimized simultaneously as the network is learned, and thus no pre-training is required. | they act on individual parameters. | contrasting | NeurIPS |
train_438 | We can capture such contextual information by using features from all the regions in the image, and then also train a specific classifier of each spatial location for each object category. | the dimensionality of the feature space would become quite large, 1 and training a classifier with limited training data would not be effective. | contrasting | NeurIPS |
train_439 | These models have become very popular and won the recent evaluations of handwriting recognition [9,34,37]. | current models still need segmented text lines, and full document processing pipelines should include automatic line segmentation algorithms. | contrasting | NeurIPS |
train_440 | In this general framework, the observations are assumed to follow an exponential family distribution, with natural parameter related to a conditionally Gaussian dynamic model [5], via a nonlinear transformation. | these model specifications may still be too restrictive in practice, for the following reasons: (i) Observations are usually discrete, non-negative and with a massive number of zero values and, unfortunately, far from any standard parametric distributions (e.g., multinomial, Poisson, negative binomial and even their zero-inflated variants). | contrasting | NeurIPS |
train_441 | Given sufficiently small / large constants c T , c H , and , it is easy to see that the linear convergence implied by Theorem 9 directly gives the recovery guarantee and bound on the number of iterations stated in Theorem 5 (see Appendix A.1). | in some cases it might not be possible to design approximation algorithms with constants c T and c H sufficiently close to 1 (in constrast, increasing the sample complexity by a constant factor in order to improve is usually a direct consequence of the RIP guarantee or similar statistical regularity assumptions). | contrasting | NeurIPS |
train_442 | But standard interlacing results can give [19]). | using Mercer's theorem, we have ... K are approximately the same, up to a unitary transformation. | contrasting | NeurIPS |
train_443 | The no-regret algorithm must ensure that the regret vanishes as n → ∞ regardless of the opponent's actions. | in our case, in addition to vanishing regret, we need to satisfy the cost constraints. | contrasting | NeurIPS |
train_444 | ICA is a data driven method which relaxes the strong characteristical frequency structure assumptions. | iCA algorithms perform best when the number of the observed They generally have more sharpened summits and longer tails than a Gaussian distribution, and would be classified as super-Gaussian. | contrasting | NeurIPS |
train_445 | Li and Li [21] build a cascade classifier where each classifier is implemented as a linear SVM acting on the PCA of inner convolutional layers of the classification network. | these methods all require a large amount of extra computational cost, and some of them also result in loss of accuracy on normal examples. | contrasting | NeurIPS |
train_446 | Unlike [18], we also emphasize on the ability of the methods in recovering the number of motions. | although the methods compared in [18] (except RANSAC) theoretically have the means to do so, their estimation of the number of motions is generally unreliable and the benchmark results in [18] were obtained by revealing the actual number of motions to the algorithms. | contrasting | NeurIPS |
train_447 | Neural spiking activity is usually analysed by averaging across multiple experimental trials, to obtain a smooth estimate of the underlying firing rates [2,3,4,5]. | even under carefully controlled experimental conditions, the animal's behavior may vary from trial-to-trial. | contrasting | NeurIPS |
train_448 | For this approach, we choose the recent spectral mixture kernels of Wilson and Adams [14], which can model a wide range of stationary covariances, and are intended to help automate kernel selection. | we note that our objective function can readily be applied to other parametric forms. | contrasting | NeurIPS |
train_449 | In the figure, as the number of sum nodes goes up, the accuracy of the standard sum-product based estimation (sum) gets better, whereas the accuracy of standard max-product based estimation (max) worsens. | our hybrid messagepassing algorithm (hybrid), on an average, results in the lowest loss compared to the other baselines, with running times similar to the sum/max product algorithms. | contrasting | NeurIPS |
train_450 | These methods have been shown to be very effective in practice. | they do not provide any guarantee on the quality of the results. | contrasting | NeurIPS |
train_451 | In previous work on sample-based tree search, indeed including POMCP [20], a complete sample state is drawn from the posterior at the root of the search tree. | this can be computationally very costly. | contrasting | NeurIPS |
train_452 | On the theoretical side, a different line of work focusing on general hypothesis classes [14] uses martingale-based sequential complexity measures to show that, information-theoretically, one can obtain oracle inequalities in the online setting at a level of generality comparable to that of the batch statistical learning. | this last result is not algorithmic. | contrasting | NeurIPS |
train_453 | We find our method generally performs the best, followed with the parallel SGLD, which is much better than its sequential counterpart; this comparison is of course in favor of parallel SGLD, since each iteration of it requires n = 100 times of likelihood evaluations compared with sequential SGLD. | by leveraging the matrix operation in MATLAB, we find that each iteration of parallel SGLD is only 3 times more expensive than sequential SGLD. | contrasting | NeurIPS |
train_454 | The standard approach for handling such a scenario is to first learn a single-output model and then produce M -Best Maximum a Posteriori (MAP) hypotheses from this model. | we learn to produce multiple outputs by formulating this task as a multiple-output structured-output prediction problem with a loss-function that effectively captures the setup of the problem. | contrasting | NeurIPS |
train_455 | The fraction S/N is critical to determining the scalability of QD-PageRank. | if every document contained vastly different words, S/N would be proportional to the number of search terms, m. this is not the case. | contrasting | NeurIPS |
train_456 | . . , h m (s) which determines the average firing rate of each neuron in response to the stimulus. | the encoding process is affected by neural noise. | contrasting | NeurIPS |
train_457 | We might address this by choosing a more flexible parametric base measure. | since the dimensionality of µ scales only linearly with the number of neurons, the empirical synchrony distribution (ESD), converges quickly even when the sample size is inadequate for estimating the full π. | contrasting | NeurIPS |
train_458 | As a result, even when the teacher network is shallow, the student network usually needs to be deeper, otherwise it will underfit. | both our theorem and our experiment show that if the shallow teacher network is in a pretty large region near identity (Figure 2), SGD always converges to the global minimum by initializing the weights I + W in this region, with equally shallow student network. | contrasting | NeurIPS |
train_459 | Theoretical understanding of the strength of depth starts from analyzing the depth efficiency, by proving the existence of deep neural networks that cannot be realized by any shallow network whose size is exponentially larger. | we argue that even for a comprehensive understanding of the depth itself, one needs to study the dual problem of width efficiency: Because, if we switch the role of depth and width in the depth efficiency theorems and the resulting statements remain true, then width would have the same power as depth for the expressiveness, at least in theory. | contrasting | NeurIPS |
train_460 | Linear and convex aggregation of densities, based on an L_2 criterion, are studied in [9], where the densities are based on a finite dictionary or an independent sample. | our proposed method allows data-adaptive kernels, and does not require an independent (holdout) sample. | contrasting | NeurIPS |
train_461 | If the second condition becomes strict, i.e., ∇²f(x) ≻ 0, then we recover the sufficient conditions for a local minimum. | to derive finite time convergence bounds for achieving an SOSP, these conditions should be relaxed. | contrasting | NeurIPS |
train_462 | Indeed, even for discrete problems simple and accurate estimators have proved to be elusive, and MCMC methods do not provide any simple way of computing the partition function. | sMC provides a straightforward estimator of the normalizing constant (i.e. | contrasting | NeurIPS |
train_463 | of ( 2) with respect to the variables w, σ, b. | in this work the focus is only on the data-dependent terms in (2), which include the empirical error term and the weighted norms of σ. | contrasting | NeurIPS |
train_464 | Moreover, the amazing RSs offered by giant e-tailers and e-marketing platforms, such as Amazon and Google, lie at the heart of online commerce and marketing on the web. | current significant challenges faced by personal assistants (e.g. | contrasting | NeurIPS |
train_465 | It requires multiple iterations through the training data, which can become impractical with large models and datasets. | our approach can be adapted to an online implementation. | contrasting | NeurIPS |
train_466 | More sophisticated variational techniques capture multiple modes using substructures or by leaving part of the original network intact and approximating the remainder . | although these methods increase the number of modes that are captured, they still exclude modes. | contrasting | NeurIPS |
train_467 | Thompson sampling balances exploration with exploitation because actions with large posterior means and actions with high variance are both more likely to appear as the optimal action in the sample r (s) . | the arg max presents difficulties in the reweighting required to perform Bayes empirical Bayes approaches. | contrasting | NeurIPS |
train_468 | The LHS of (10), by the assumption of the theorem, is at most ε 2 implying (9). | note that our analysis of the Fujishige-Wolfe algorithm is weaker than the best known method in terms of time complexity (IO method by [11]) on two counts: a) dependence on n, b) dependence on F. we found this algorithm significantly outperforming the IO algorithm empirically - we show two plots here. | contrasting | NeurIPS |
train_469 | For example, if we take n = 100000, L = 100, and µ = 0.01 then the basic FG method has a rate of ((L − µ)/(L + µ))^2 = 0.9996 and the 'optimal' AFG method has a faster rate of (1 − √(µ/L)) = 0.9900. | running n iterations of SAG has a much faster rate of (1 − 1/(8n))^n = 0.8825 using the same number of evaluations of f_i. | contrasting | NeurIPS |
train_470 | We note that when classifying a new point one does not have to project it to the subspace V, but rather assign a sign according to the classifying hyperplane in R^d. At first sight it seems that our result gives a weak generalization bound since the margin obtained in the original dimension is low. | the margin of the found solution in the reduced dimension (i.e., within V ) is almost optimal (i.e. | contrasting | NeurIPS |
train_471 | ditioned stimulus (US); the two stimuli are separated by a stimulus-free gap. | in delay conditioning, the CS remains on until presentation of the US. | contrasting | NeurIPS |
train_472 | . . . , β_k) and let θ*_β be the minimizer of ... is zero for even a single outlier (x, y), then L(θ*_β, ∞) will be infinite. | we can bound θ*_β under an alternative loss that is less sensitive to outliers: The key idea in the proof is that replacing S with exp(β ψ) in p_{θ,β} does not change the loss too much, in the sense that ... When β_min ≪ 1, ... Hence, the error increases roughly linearly with β_min^{-1}. | contrasting | NeurIPS |
train_473 | Without convexity or g-convexity, in general at best we might obtain local minima. | as alluded to previously, the set P d of hpd matrices possesses remarkable geometric structure that allows us to extend global optimisation to a rich class beyond just gc functions. | contrasting | NeurIPS |
train_474 | The ID-SP and LF-SP agents both experience two object play slightly more often than the ID-RP baseline, having achieved substantial one object play time. | only the ID-SP agent has discovered how to take advantage of the increased difficulty and therefore "interestingness" of two object configurations (compare blue with green horizontal line in Fig. | contrasting | NeurIPS |
train_475 | A weaker model is the oblivious adversarial model wherein the adversary generates a k-sparse vector in complete ignorance of X and w * (and ). | the adversary is still free to make arbitrary choices for the location and values of corruptions. | contrasting | NeurIPS |
train_476 | One prominent application is their use in modeling conditional independence between random variables via a graphical model. | when the number of random variables is large, and the underlying graph structure is complex, a number of computational issues need to be tackled in order to make inference feasible. | contrasting | NeurIPS |
train_477 | Our specific setup is as follows: for each (s, a) ∈ S×A an uncertainty set U(s, a) is given. | not all states are adversarial. | contrasting | NeurIPS |
train_478 | In agreement with theorem 2.1 the measure of these non-informative center points is zero. | if we use as a center point a point on one of the axes, the distribution of the distances will be very different. | contrasting | NeurIPS |
train_479 | The recent success in human action recognition with deep learning methods mostly adopt the supervised learning paradigm, which requires significant amount of manually labeled data to achieve good performance. | label collection is an expensive and time-consuming process. | contrasting | NeurIPS |
train_480 | For example the nearest neighbor classifier in the new distance is equivalent to the Lipschitz regularization (1) weighted with the density proposed in the last section. | implementing such a method requires to compute the geodesic distance in (R d , g), which is non trivial for arbitrary densities p. We suggest the following approximation which is similar in spirit to the approach in [11]. | contrasting | NeurIPS |
train_481 | On the one hand, the ultimate goal is to achieve the best possible prediction error. | budgeted computational resources need be factored in, while designing algorithms. | contrasting | NeurIPS |
train_482 | With this property, it is no longer true that all inner distances are smaller than outer distances, and therefore Theorem 6.1 does not apply. | [BBV08] prove the following lemma Lemma 6.3. | contrasting | NeurIPS |
train_483 | This is a Nyquist-type reconstruction formula. | for this theory to be applicable to a real-time setting, as in the case of BMI, we need a causal real-time decoder that estimates the signal at every time t, and an estimate of the time taken for the convergence of the reconstructed signal to the real signal. | contrasting | NeurIPS |
train_484 | Combined with a suitable likelihood function as specified in Equation 2, one can construct a regression or classification model that probabilistically accounts for uncertainties and control over-fitting through Bayesian smoothing. | if the likelihood is non-Gaussian, such as in the case of classification, inferring the posterior process is analytically intractable and requires approximations. | contrasting | NeurIPS |
train_485 | An overview of estimators of the entropy of continuous-valued distributions is given in [30]. | to our knowledge, the entropy bias of maximum entropy models in the presence of model misspecification has not been characterized or studied numerically. | contrasting | NeurIPS |
train_486 | To address this issue, one may be tempted to use the weak*-topology on ℓ², since in this topology the closed balls are both compact and metrizable, thus universal kernels do exist on them. | the Taylor kernels do not belong to them, because - basically - the inner product ⟨•, •⟩_{ℓ²} fails to be continuous with respect to the weak*-topology as the sequence of the standard orthonormal basis vectors show. | contrasting | NeurIPS |
train_487 | From this equation we obtain, ... Using lemma 1 we have an upper bound on the probability that sup_{t_1,...,t_m} |Î_ob − Î_un| > ε over the random selection of features, as a function of m′. | the upper bound we need is on the probability that sup_{t_1,...,t_m} |E(Î_ob) − E(Î_un)| > ε_1. | contrasting | NeurIPS |
train_488 | Experimentally, we found that the trust score better identifies correctly-classified points for low and medium-dimension feature spaces than the model itself. | high-dimensional feature spaces were more challenging, and we demonstrate that the trust score's utility depends on the vector space used to compute the trust score differences. | contrasting | NeurIPS |
train_489 | There is a "ground truth" image G on the bottom of the pool. | overhead, a stationary camera pointing downwards is recording a video stream V . In the absence of any distortion V (x, y, t) = G(x, y) at any time t. the water surface refracts in accordance with Snell's Law. | contrasting | NeurIPS |
train_490 | Its two nested sampling loops can be parallelized in a straightforward way since the variables are independent of each other. | in practice, we use a small number of random labels, and m n. Thus we only need to parallelize the sampling for the set of random transmission times {τ ji }. | contrasting | NeurIPS |
train_491 | The mismatch between this predicted efficiency and animals' actual behaviour has been attributed to the presence of information-limiting correlations between neurons [22,23]. | deviation from independence renders most analytical treatments infeasible, necessitating the use of numerical methods (Monte Carlo simulations) for quantifying the performance of such codes [7,15]. | contrasting | NeurIPS |
train_492 | [19] proposed a model which jointly infers the true labels and estimate of evaluator's quality by modeling decisions as functions of the expertise levels of decision makers and the difficulty levels of items. | this approach neither models the error properties of decision makers, nor provides any diagnostic insights into the process of decision making. | contrasting | NeurIPS |
train_493 | One of the key benefits of RNNs is their ability to make use of previous context. | for standard RNN architectures, the range of context that can in practice be accessed is limited. | contrasting | NeurIPS |
train_494 | As mentioned before, while Theorem 1 allows for going lower than the standard log n bound on the community size for exact recovery, it requires the number of very small communities to be relatively small. | theorem 2 provides us with the option of having many small communities but requires the smallest community to be of size O(log n) . | contrasting | NeurIPS |
train_495 | To conclude that the animal probably has wings, you might consult a mental representation similar to the graph at the top of Figure 1a that specifies a dependency relationship between flying and having wings. | you might reach the same conclusion by thinking about flying creatures that you have previously encountered (e.g. | contrasting | NeurIPS |
train_496 | It is easy to see that for suitably weighted kNN graphs this is the case: the original density can be estimated from the degrees in the graph. | it is completely unclear whether the same holds true for unweighted kNN graphs. | contrasting | NeurIPS |
train_497 | If an estimated value consists of a large portion of such transitions, then the likelihoods of overestimation and underestimation are both very low. | if the backward transition probability p_{j,j} (or any p_{j,k} with k ≤ j) is close to 1, then Var[D_j] increases dramatically, resulting in a noticeable skewness. | contrasting | NeurIPS |
train_498 | The rate of the code is proportional to the CS budget and we use the rate as a proxy for budget throughout this analysis. | since different types of query have different costs both financially (in crowdsourcing platforms) and from the perspective of time or effort it takes from the user to process it, one needs to be careful in comparing the results of different coding schemes. | contrasting | NeurIPS |
train_499 | Note that when H^2(f_+, f_-) is close to 1, e.g., when the side information is perfect, no queries are required. | that is not the case in practice, and we are interested in the region where f_+ and f_- are "close", that is . | contrasting | NeurIPS |
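
Every row above follows the same five-column schema: a string id, a sentence pair (sentence1, sentence2), a label drawn from 4 classes (all rows shown here are labeled contrasting), and a domain drawn from 5 values (all rows shown here are NeurIPS). The sketch below is a minimal illustration of filtering rows of this shape in plain Python; the two hard-coded example rows are truncated copies of rows from the table, and nothing about the dataset's actual storage format or repository name is assumed.

```python
from collections import Counter

# Two of the rows shown above, kept as plain dicts that mirror the
# five-column schema (id, sentence1, sentence2, label, domain).
# The sentences are truncated here purely for readability.
rows = [
    {
        "id": "train_400",
        "sentence1": "As shown in Section 4, ARS V1 can train linear policies ...",
        "sentence2": "aRS V1 requires a larger number of episodes ...",
        "label": "contrasting",
        "domain": "NeurIPS",
    },
    {
        "id": "train_405",
        "sentence1": "Previous results in this setting have been limited to ...",
        "sentence2": "we provide tools for richer and more adaptive analysis ...",
        "label": "contrasting",
        "domain": "NeurIPS",
    },
]

# The header says `label` has 4 classes and `domain` has 5; the slice above
# only exercises one value of each, so these counts are not exhaustive.
print(Counter(r["label"] for r in rows))
print(Counter(r["domain"] for r in rows))

# Select the kind of row listed above: contrasting sentence pairs from NeurIPS.
contrasting_neurips = [
    r for r in rows
    if r["label"] == "contrasting" and r["domain"] == "NeurIPS"
]
for r in contrasting_neurips:
    print(r["id"], "|", r["sentence1"][:50], "->", r["sentence2"][:50])
```

If the split is hosted as a Hugging Face dataset, the same selection could be expressed with datasets.load_dataset followed by Dataset.filter; the repository name is not given above, so it is left out here.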