blesspearl/math-stackexchange
tag | question_body | accepted_answer | second_answer
---|---|---|---|
linear-algebra | <p>A matrix is diagonalizable iff it has a basis of eigenvectors. Now, why is this satisfied in the case of a real symmetric matrix?</p>
| <p>Suppose the ground field is $\mathbb C$. It is immediate then that every square matrix can be triangulated. Now, symmetry certainly implies normality ($A$ is normal if $AA^t=A^tA$ in the real case, and $AA^*=A^*A$ in the complex case). Since normality is preserved by similarity, it follows that if $A$ is symmetric, then the triangular matrix that $A$ is similar to is normal. But obviously (compute!) the only normal triangular matrix is diagonal, so in fact $A$ is diagonalizable. </p>
<p>So it turns out that the criterion you mentioned for diagonalizability is not the most useful in this case. The one that is useful here is: A matrix is diagonalizable iff it is similar to a diagonal matrix. </p>
<p>Of course, the argument shows that every normal matrix is diagonalizable. Symmetric matrices are much more special than merely normal, and indeed the argument above does not prove the stronger result that symmetric matrices are orthogonally diagonalizable. </p>
<p>Comment: To triangulate the matrix, use induction on the order of the matrix. For $1\times 1$ it's trivial. For $n\times n$, first find an eigenvector $v_1$ (one such must exist over $\mathbb C$). Thinking of the matrix as a linear transformation on a vector space $V$ of dimension $n$, write $V$ as $V=V_1\oplus W$, where $V_1$ is the subspace spanned by $v_1$. Then $W$ is $(n-1)$-dimensional; apply the induction hypothesis to the map $w\mapsto \pi(Aw)$ on $W$, where $\pi$ is the projection onto $W$ along $V_1$ (note that $A$ need not map $W$ into itself), to obtain a basis $v_2,\ldots, v_n$ in which this map is triangular. It now follows that in the basis $v_1,\ldots, v_n$ the matrix of $A$ is triangular.</p>
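<p>For a quick numerical illustration of the result (a Python/NumPy sketch; <code>numpy.linalg.eigh</code> is NumPy's routine for symmetric input):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2                         # a random real symmetric matrix

eigenvalues, P = np.linalg.eigh(A)        # columns of P are eigenvectors
print(np.allclose(P.T @ P, np.eye(5)))    # True: P is orthogonal
print(np.allclose(P.T @ A @ P, np.diag(eigenvalues)))  # True: diagonalized
</code></pre>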
| <p>This question is about the spectral theorem for (finite dimensional) real Euclidean spaces, which says that in such a space any self-adjoint operator is diagonalisable (over the real numbers) with mutually orthogonal eigenspaces (so that orthonormal bases of eigenvectors exist). This is of course a classic result that should be proved in any course on the subject, so you can look this up in any textbook. However there are quite a few questions on this site that more or less touch on this matter, yet no really satisfactory answers, so I thought this is as good a place as any for me to state what seems a good proof to me (the one I teach in my course).</p>
<p>Before starting, I would like to note this is a subtle result, as witnessed by the fact that it becomes false if one replaces the real numbers either by the rational numbers or by the complex numbers (the latter case can be salvaged by throwing in some complex conjugation, but I want to avoid the suggestion that the validity of the stated result somehow depends on that). Nonetheless there are a number of relevant considerations that are independent of the base field, which easy stuff I will state as preliminaries before we get to the meat of the subject.</p>
<p>First off, the matrix formulation in the question is just a restatement, in terms of the matrix of the operator with respect to any orthonormal basis, of the result I mentioned: under such expression the adjoint operator gets the transpose matrix, so a self-adjoint operator gets represented by a symmetric matrix. Since the basis used for such an expression bears no particular relation to our operator or the problem at hand (trying to find a basis of eigenvectors) it will be easier to ignore the matrix and reason directly in terms of the operator. The translation by expression in terms of matrices is easy and I will leave this aside; for reference, the claim I am proving translates to the claim that for every real symmetric matrix<span class="math-container">$~A$</span>, there is an orthogonal matrix <span class="math-container">$P$</span> (whose columns describe an orthonormal basis of eigenvectors) such that <span class="math-container">$P^{-1}AP=P^tAP$</span> is a diagonal matrix. It is worth noting that the converse is true and obvious: if <span class="math-container">$D$</span> is diagonal and <span class="math-container">$P$</span> orthogonal, then <span class="math-container">$A=PDP^t$</span> is symmetric (since <span class="math-container">$D^t=D$</span>).</p>
<p>A basic fact about adjoints is that for any operator <span class="math-container">$\phi$</span> on a Euclidean vector space<span class="math-container">$~V$</span>, whenever a subspace <span class="math-container">$W$</span> is stable under<span class="math-container">$~\phi$</span>, its orthogonal complement <span class="math-container">$W^\perp$</span> is stable under its adjoint<span class="math-container">$~\phi^*$</span>. For if <span class="math-container">$v\in W^\perp$</span> and <span class="math-container">$w\in W$</span>, then <span class="math-container">$\langle w\mid \phi^*(v)\rangle=\langle \phi(w)\mid v\rangle=0$</span> since <span class="math-container">$\phi(w)\in W$</span> and <span class="math-container">$v\in W^\perp$</span>, so that <span class="math-container">$\phi^*(v)\in W^\perp$</span>. Then for a <em>self-adjoint</em> operator <span class="math-container">$\phi$</span> (so with <span class="math-container">$\phi^*=\phi$</span>), the orthogonal complement of any <span class="math-container">$\phi$</span>-stable subspace is again <span class="math-container">$\phi$</span>-stable.</p>
<p>Now our focus will be on proving the following fact.</p>
<p><strong>Lemma.</strong> <em>Any self-adjoint operator <span class="math-container">$\phi$</span> on a real Euclidean vector space<span class="math-container">$~V$</span> of finite nonzero dimension has an eigenvector.</em></p>
<p>Assuming this for the moment, one easily proves our result by induction on the dimension. In dimension<span class="math-container">$~0$</span> the unique operator is diagonalisable, so the base case is trivial. Now assuming <span class="math-container">$\dim V>0$</span>, get an eigenvector <span class="math-container">$v_1$</span> by applying the lemma. The subspace <span class="math-container">$W=\langle v_1\rangle$</span> it spans is <span class="math-container">$\phi$</span>-stable by the definition of an eigenvector, and so <span class="math-container">$W^\perp$</span> is <span class="math-container">$\phi$</span>-stable as well. We can then restrict <span class="math-container">$\phi$</span> to a linear operator on <span class="math-container">$W^\perp$</span>, which is clearly self-adjoint, so our induction hypothesis gives us an orthonormal basis of <span class="math-container">$W^\perp$</span> consisting of eigenvectors for that restriction; call them <span class="math-container">$(v_2,\ldots,v_n)$</span>. Viewed as elements of <span class="math-container">$V$</span>, the vectors <span class="math-container">$v_2,\ldots,v_n$</span> are eigenvectors of<span class="math-container">$~\phi$</span>, and clearly the family <span class="math-container">$(v_1,\ldots,v_n)$</span> is orthonormal. It is an orthonormal basis of eigenvectors of<span class="math-container">$~\phi$</span>, and we are done.</p>
<p>So that was the easy stuff, the harder stuff is proving the lemma. As said, we need to use that the base field is the real numbers. I give two proofs, one in purely algebraic style, but which is based on the fundamental theorem of algebra and therefore uses the complex numbers, albeit in an indirect way, while the other uses a bit of topology and differential calculus but avoids complex numbers.</p>
<p>My first proof of the lemma is based on the fact that the irreducible polynomials in <span class="math-container">$\def\R{\Bbb R}\R[X]$</span> all have degree at most <span class="math-container">$2$</span>. This comes from decomposing the polynomial<span class="math-container">$~P$</span> into a leading coefficient and monic linear factors over the complex numbers (as one can by the fundamental theorem of algebra); any monic factor not already in <span class="math-container">$\R[X]$</span> must be <span class="math-container">$X-z$</span> with <span class="math-container">$z$</span> a non-real complex number, but then <span class="math-container">$X-\overline z$</span> is relatively prime to it and also a divisor of<span class="math-container">$~P$</span>, as is their product <span class="math-container">$(X-z)(X-\overline z)=X^2-2\Re(z)X+|z|^2$</span>, which lies in <span class="math-container">$\R[X]$</span>.</p>
<p>Given this we first establish (without using self-adjointness) in the context of the lemma the existence of a <span class="math-container">$\phi$</span>-stable subspace of nonzero dimension at most<span class="math-container">$~2$</span>. Start taking any monic polynomial <span class="math-container">$P\in\R[X]$</span> annihilating<span class="math-container">$~\phi$</span>; the minimal or characteristic polynomial will do, but we really only need the existence of such a polynomial which is easy. Factor <span class="math-container">$P$</span> into irreducibles in <span class="math-container">$\R[X]$</span>, say <span class="math-container">$P=P_1P_2\ldots P_k$</span>. Since <span class="math-container">$0=P[\phi]=P_1[\phi]\circ\cdots\circ P_k[\phi]$</span>, at least one of the <span class="math-container">$P_i[\phi]$</span> has nonzero kernel (indeed the sum of the dimensions of their kernels is at least <span class="math-container">$\dim V>0$</span>); choose such an<span class="math-container">$~i$</span>. If <span class="math-container">$\deg P_i=1$</span> then the kernel is a nonzero eigenspace, and any eigenvector in it spans a <span class="math-container">$\phi$</span>-stable subspace of dimension<span class="math-container">$~1$</span> (and of course we also directly get the conclusion of the lemma here). So we are left with the case <span class="math-container">$\deg P_i=2$</span>, in which case for any nonzero vector <span class="math-container">$v$</span> of its kernel, <span class="math-container">$v$</span> and <span class="math-container">$\phi(v)$</span> span a <span class="math-container">$\phi$</span>-stable subspace of dimension<span class="math-container">$~2$</span> (the point is that <span class="math-container">$\phi^2(v)$</span> lies in the subspace because <span class="math-container">$P_i[\phi](v)=0$</span>).</p>
<p>Now that we have a <span class="math-container">$\phi$</span>-stable subspace<span class="math-container">$~W$</span> of nonzero dimension at most<span class="math-container">$~2$</span>, we may restrict <span class="math-container">$\phi$</span> to <span class="math-container">$W$</span> and search for an eigenvector there, which will suffice. But this means it suffices to prove the lemma with the additional hypothesis <span class="math-container">$\dim V\leq2$</span>. Since the case <span class="math-container">$\dim V=1$</span> is trivial, that can be done by showing that the characteristic polynomial of any symmetric real <span class="math-container">$2\times2$</span> matrix has a real root. That in turn follows because its discriminant is non-negative, which is left as an easy exercise (it is a sum of squares).</p>
<p>(Note that the final argument shows that for <span class="math-container">$\phi$</span> self-adjoint, the case <span class="math-container">$\deg P_i=2$</span> actually never occurs.)</p>
<hr />
<p>Here is a second proof of lemma, less in the algebraic style I used so far, and less self-contained, but which I find more intuitive. It also suggests a practical method to find an eigenvector in the context of the lemma. Consider the real function<span class="math-container">$~f:V\to\R$</span> defined by <span class="math-container">$f:x\mapsto \langle x\mid \phi(x)\rangle$</span>. It is a quadratic function, therefore in particular differentiable and continuous. For its gradient at <span class="math-container">$p\in V$</span> one computes for <span class="math-container">$v\in V$</span> and <span class="math-container">$h\in\R$</span>:
<span class="math-container">$$
f(p+hv)=\langle p+hv\mid \phi(p+hv)\rangle
=f(p)+h\bigl(\langle p\mid \phi(v)\rangle
+\langle v\mid \phi(p)\rangle\bigr)
+h^2f(v)
\\=f(p)+2h\langle v\mid \phi(p)\rangle+h^2f(v)
$$</span>
(the latter equality by self-adjointness), so that gradient is <span class="math-container">$2\phi(p)$</span>. Now the point I will not prove is that the restriction of <span class="math-container">$f$</span> to the unit sphere <span class="math-container">$S=\{\,v\in V\mid\langle v\mid v\rangle=1\,\}$</span> attains its maximum somewhere on that sphere. There are many ways in analysis of showing that this is true, where the essential points are that <span class="math-container">$f$</span> is continuous and that <span class="math-container">$S$</span> is compact. Now if <span class="math-container">$p\in S$</span> is such a maximum, then every tangent vector to<span class="math-container">$~S$</span> at<span class="math-container">$~p$</span> must be orthogonal to the gradient <span class="math-container">$2\phi(p)$</span> of <span class="math-container">$f$</span> at<span class="math-container">$~p$</span> (or else one could increase the value of <span class="math-container">$f(x)$</span> near <span class="math-container">$p$</span> by varying <span class="math-container">$x$</span> along <span class="math-container">$S$</span> in the direction of that tangent vector). The tangent space of<span class="math-container">$~S$</span> at <span class="math-container">$p$</span> is <span class="math-container">$p^\perp$</span>, so this statement means that <span class="math-container">$\phi(p)\in(p^\perp)^\perp$</span>. But we know that <span class="math-container">$(p^\perp)^\perp=\langle p\rangle$</span>, and <span class="math-container">$\phi(p)\in\langle p\rangle$</span> (with <span class="math-container">$p\neq 0$</span>) means precisely that <span class="math-container">$p$</span> is an eigenvector of<span class="math-container">$~\phi$</span>. (One may check it is one for the maximal eigenvalue of<span class="math-container">$~\phi$</span>.)</p>
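<p>The variational characterization in this second proof is easy to probe numerically; here is a small sketch (assuming NumPy) checking that the maximizer of <span class="math-container">$f$</span> on the unit sphere is an eigenvector for the top eigenvalue:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                          # a random self-adjoint operator

eigenvalues, vectors = np.linalg.eigh(A)
lam_max, p = eigenvalues[-1], vectors[:, -1]

x = rng.standard_normal((100_000, 4))      # random points on the unit sphere
x /= np.linalg.norm(x, axis=1, keepdims=True)
f = np.einsum('ij,jk,ik->i', x, A, x)      # f(x) = <x, Ax> for each sample

print(f.max() <= lam_max + 1e-9)           # True: f is bounded by the top eigenvalue
print(np.allclose(A @ p, lam_max * p))     # True: the maximizer is an eigenvector
</code></pre>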
|
geometry | <p>I'm thinking about a circle rolling along a parabola. Would this be a parametric representation?</p>
<p>$(t + A\sin (Bt) , Ct^2 + A\cos (Bt) )$</p>
<p>A gives us the radius of the circle, B changes the frequency of the rotations, C, of course, varies the parabola. Now, if I want the circle to "match up" with the parabola as if they were both made of non-stretchy rope, what should I choose for B?</p>
<p>My first guess is 1. But the arc length of a parabola from 0 to 1 is much less than the length from 1 to 2. And, as I examine the graphs, it seems like I might need to vary B in order to get the graph that I want. Take a look:</p>
<p><img src="https://i.sstatic.net/voj4f.jpg" alt="I played with the constants until it looked ALMOST like what I had in mind."></p>
<p>This makes me think that the graph my equation produces will always be wrong no matter what constants I choose. It should look like a cycloid:</p>
<p><img src="https://i.sstatic.net/k3u17.jpg" alt="Cycloid"></p>
<p>But bent to fit on a parabola. [I started this because I wanted to know if such a curve could be self-intersecting. (I think yes.) When I was a child my mom asked me to draw what would happen if a circle rolled along the tray of the blackboard with a point on the rim tracing a line ... like most young people, I drew self-intersecting loops and my young mind was amazed to see that they did not intersect!]</p>
<p>So, other than checking to see if this is even going in the right direction, I would like to know if there is a point where the curve shown (or any curve in the family I described) is most like a cycloid-- </p>
<p>Thanks.</p>
<p>"It would be really really hard to tell" is a totally acceptable answer, though it's my current answer, and I wonder if the folks here can make it a little better.</p>
| <p>(I had been meaning to blog about roulettes a while back, but since this question came up, I'll write about this topic here.)</p>
<p>I'll use the parametric representation</p>
<p>$$\begin{pmatrix}2at\\at^2\end{pmatrix}$$</p>
<p>for a parabola opening upwards, where $a$ is the focal length, or the length of the segment joining the parabola's vertex and focus. The arclength function corresponding to this parametrization is $s(t)=a(t\sqrt{1+t^2}+\mathrm{arsinh}(t))$.</p>
<p>user8268 gave a derivation for the "cycloidal" case, and Willie used unit-speed machinery, so I'll handle the generalization to the "trochoidal case", where the tracing point is not necessarily on the rolling circle's circumference.</p>
<p>Willie's comment shows how you should consider the notion of "rolling" in deriving the parametric equations: a rotation (about the wheel's center) followed by a rotation/translation. The first key is to consider that the amount of rotation needed for your "wheel" to roll should be equivalent to the arclength along the "base curve" (in your case, the parabola).</p>
<p>I'll start with a parametrization of a circle of radius $r$ tangent to the horizontal axis at the origin:</p>
<p>$$\begin{pmatrix}-r\sin\;u\\r-r\cos\;u\end{pmatrix}$$</p>
<p>This parametrization of the circle was designed such that a positive value of the parameter $u$ corresponds to a clockwise rotation of the wheel, and the origin corresponds to the parameter value $u=0$.</p>
<p>The arclength function for this circle is $ru$; for rolling this circle, we obtain the equivalence</p>
<p>$$ru=s(t)-s(c)$$</p>
<p>where $c$ is the parameter value corresponding to the point on the base curve where the rolling starts. Solving for $u$ and substituting the resulting expression into the circle equations yields</p>
<p>$$\begin{pmatrix}-r\sin\left(\frac{s(t)-s(c)}{r}\right)\\r-r\cos\left(\frac{s(t)-s(c)}{r}\right)\end{pmatrix}$$</p>
<p>So far, this is for the "cycloidal" case, where the tracing point is on the circumference. To obtain the "trochoidal" case, what is needed is to replace the $r$ multiplying the trigonometric functions with the quantity $hr$, the distance of the tracing point from the center of the rolling circle:</p>
<p>$$\begin{pmatrix}-hr\sin\left(\frac{s(t)-s(c)}{r}\right)\\r-hr\cos\left(\frac{s(t)-s(c)}{r}\right)\end{pmatrix}$$</p>
<p>At this point, I note that $r$ here can be a positive or a negative quantity. For your "parabolic trochoid", negative $r$ corresponds to the circle rolling outside the parabola and positive $r$ corresponds to rolling inside the parabola. $h=1$ is the "cycloidal" case; $h > 1$ is the "prolate" case (tracing point outside the rolling circle), and $0 < h < 1$ is the "curtate" case (tracing point within the rolling circle).</p>
<p>That only takes care of the rotation corresponding to "rolling"; to get the circle into the proper position, a further rotation and a translation has to be done. The further rotation needed is a rotation by the <a href="http://mathworld.wolfram.com/TangentialAngle.html">tangential angle</a> $\phi$, where for a parametrically-represented curve $(f(t)\quad g(t))^T$, $\tan\;\phi=\frac{g^\prime(t)}{f^\prime(t)}$. (In words: $\phi$ is the angle the tangent of the curve at a given $t$ value makes with the horizontal axis.)</p>
<p>We then substitute the expression for $\phi$ into the <em>anticlockwise</em> rotation matrix</p>
<p>$$\begin{pmatrix}\cos\;\phi&-\sin\;\phi\\\sin\;\phi&\cos\;\phi\end{pmatrix}$$</p>
<p>which yields</p>
<p>$$\begin{pmatrix}\frac{f^\prime(t)}{\sqrt{f^\prime(t)^2+g^\prime(t)^2}}&-\frac{g^\prime(t)}{\sqrt{f^\prime(t)^2+g^\prime(t)^2}}\\\frac{g^\prime(t)}{\sqrt{f^\prime(t)^2+g^\prime(t)^2}}&\frac{f^\prime(t)}{\sqrt{f^\prime(t)^2+g^\prime(t)^2}}\end{pmatrix}$$</p>
<p>For the parabola as I had parametrized it, the tangential angle rotation matrix is</p>
<p>$$\begin{pmatrix}\frac1{\sqrt{1+t^2}}&-\frac{t}{\sqrt{1+t^2}}\\\frac{t}{\sqrt{1+t^2}}&\frac1{\sqrt{1+t^2}}\end{pmatrix}$$</p>
<p>This rotation matrix can be multiplied with the "transformed circle" and then translated by the vector $(f(t)\quad g(t))^T$, finally resulting in the expression</p>
<p>$$\begin{pmatrix}f(t)\\g(t)\end{pmatrix}+\frac1{\sqrt{f^\prime(t)^2+g^\prime(t)^2}}\begin{pmatrix}f^\prime(t)&-g^\prime(t)\\g^\prime(t)&f^\prime(t)\end{pmatrix}\begin{pmatrix}-hr\sin\left(\frac{s(t)-s(c)}{r}\right)\\r-hr\cos\left(\frac{s(t)-s(c)}{r}\right)\end{pmatrix}$$</p>
<p>for a trochoidal curve. (What those last two transformations do, in words, is to rotate and shift the rolling circle appropriately such that the rolling circle touches an appropriate point on the base curve.)</p>
<p>Using this formula, the parametric equations for the "parabolic trochoid" (with starting point at the vertex, $c=0$) are</p>
<p>$$\begin{align*}x&=2at+\frac{r}{\sqrt{1+t^2}}\left(ht\cos\left(\frac{a}{r}\left(t\sqrt{1+t^2}+\mathrm{arsinh}(t)\right)\right)-t-h\sin\left(\frac{a}{r}\left(t\sqrt{1+t^2}+\mathrm{arsinh}(t)\right)\right)\right)\\y&=at^2-\frac{r}{\sqrt{1+t^2}}\left(h\cos\left(\frac{a}{r}\left(t\sqrt{1+t^2}+\mathrm{arsinh}(t)\right)\right)+ht\sin\left(\frac{a}{r}\left(t\sqrt{1+t^2}+\mathrm{arsinh}(t)\right)\right)-1\right)\end{align*}$$</p>
<p>A further generalization to a <em>space curve</em> can be made if the rolling circle is not coplanar to the parabola; I'll leave the derivation to the interested reader (hint: rotate the "transformed" rolling circle equation about the x-axis before applying the other transformations).</p>
<p>Now, for some plots:</p>
<p><img src="https://i.sstatic.net/ukYzm.png" alt="parabolic trochoids"></p>
<p>For this picture, I used a focal length $a=1$ and a radius $r=\frac34$ (negative for the "outer" ones and positive for the "inner" ones). The curtate, cycloidal, and prolate cases correspond to $h=\frac12,1,\frac32$.</p>
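<p>For readers who want to reproduce such pictures without <em>Mathematica</em>, here is a sketch in Python/NumPy of the parametric equations above (matplotlib is used only for display; parameters match the picture, $a=1$, $r=\frac34$):</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

def parabolic_trochoid(t, a=1.0, r=0.75, h=1.0):
    theta = (a / r) * (t * np.sqrt(1 + t**2) + np.arcsinh(t))   # rolled arclength / r
    w = np.sqrt(1 + t**2)
    x = 2*a*t + (r / w) * (h*t*np.cos(theta) - t - h*np.sin(theta))
    y = a*t**2 - (r / w) * (h*np.cos(theta) + h*t*np.sin(theta) - 1)
    return x, y

t = np.linspace(-4, 4, 2000)
for h, label in [(0.5, 'curtate'), (1.0, 'cycloidal'), (1.5, 'prolate')]:
    plt.plot(*parabolic_trochoid(t, h=h), label=label)
plt.legend(); plt.gca().set_aspect('equal'); plt.show()
</code></pre>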
<hr>
<p>(added 5/2/2011)</p>
<p>I did promise to include animations and code, so here's a bunch of GIFs I had previously made in <em>Mathematica</em> 5.2:</p>
<p>Inner parabolic cycloid, $a=1,\;r=\frac34,\;h=1$</p>
<p><img src="https://i.sstatic.net/RfcAB.gif" alt="inner parabolic cycloid"></p>
<p>Curtate inner parabolic trochoid, $a=1,\;r=\frac34,\;h=\frac12$</p>
<p><img src="https://i.sstatic.net/vWh6M.gif" alt="curtate inner parabolic trochoid"></p>
<p>Prolate inner parabolic trochoid, $a=1,\;r=\frac34,\;h=\frac32$</p>
<p><img src="https://i.sstatic.net/YjTZR.gif" alt="prolate inner parabolic trochoid"></p>
<p>Outer parabolic cycloid, $a=1,\;r=-\frac34,\;h=1$</p>
<p><img src="https://i.sstatic.net/DhioB.gif" alt="outer parabolic cycloid"></p>
<p>Curtate outer parabolic trochoid, $a=1,\;r=-\frac34,\;h=\frac12$</p>
<p><img src="https://i.sstatic.net/pJtsO.gif" alt="curtate outer parabolic trochoid"></p>
<p>Prolate outer parabolic trochoid, $a=1,\;r=-\frac34,\;h=\frac32$</p>
<p><img src="https://i.sstatic.net/u5zSM.gif" alt="prolate outer parabolic trochoid"></p>
<p>The <em>Mathematica</em> code (unoptimized, sorry) is a bit too long to reproduce; those who want to experiment with parabolic trochoids can obtain a notebook from me upon request.</p>
<p>As a final bonus, here is an animation of a <em>three-dimensional</em> generalization of the prolate parabolic trochoid:</p>
<p><img src="https://i.sstatic.net/uNQsd.gif" alt="3D prolate parabolic trochoid"></p>
| <p>If I understand the question correctly:</p>
<p>Your parabola is $p(t)=(t,Ct^2)$. Its velocity is $(1,2Ct)$; after normalization it is $v(t)=(1,2Ct)/\sqrt{1+(2Ct)^2}$, hence the unit normal vector is $n(t)=(-2Ct,1)/\sqrt{1+(2Ct)^2}$. The center of the circle is at $p(t)+An(t)$. The arc length of the parabola is $\int\sqrt{1+(2Ct)^2}\,dt= (2 C t \sqrt{4 C^2 t^2+1}+\sinh^{-1}(2 C t))/(4 C)=:a(t)$. The position of a marked point on the circle is $p(t)+An(t)+A\cos(a(t)-a(t_0))\,n(t)+A\sin(a(t)-a(t_0))\,v(t)$ - that's the (rather complicated) curve you're looking for.</p>
<p><strong>edit:</strong> corrected a mistake found by Willie Wong</p>
|
linear-algebra | <blockquote>
<p>Let $ A, B $ be two square matrices of order $n$. Do $ AB $ and $ BA $ have same minimal and characteristic polynomials?</p>
</blockquote>
<p>I have a proof only if $ A$ or $ B $ is invertible. Is it true for all cases?</p>
| <p>Before proving that $AB$ and $BA$ have the same characteristic polynomial, first show that if $A$ is $m\times n$ and $B$ is $n\times m$, then the characteristic polynomials of $AB$ and $BA$ satisfy the identity $$x^n|xI_m-AB|=x^m|xI_n-BA|;$$ from this one easily concludes that if $m=n$ then $AB$ and $BA$ have the same characteristic polynomial.</p>
<p>Define $$C = \begin{bmatrix} xI_m & A \\B & I_n \end{bmatrix},\ D = \begin{bmatrix} I_m & 0 \\-B & xI_n \end{bmatrix}.$$ We have
$$
\begin{align*}
\det CD &= x^n|xI_m-AB|,\\
\det DC &= x^m|xI_n-BA|.
\end{align*}
$$
Since $\det CD=\det DC$, the identity follows; in particular, if $m=n$, then $AB$ and $BA$ have the same characteristic polynomial.</p>
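<p>The identity is easy to check symbolically; here is a small SymPy sketch with a pair of rectangular matrices chosen for illustration:</p>
<pre><code>import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[1, 2, 0], [3, -1, 4]])     # 2 x 3
B = sp.Matrix([[2, 1], [0, -3], [5, 2]])   # 3 x 2
m, n = 2, 3

lhs = sp.expand(x**n * (x*sp.eye(m) - A*B).det())
rhs = sp.expand(x**m * (x*sp.eye(n) - B*A).det())
print(sp.simplify(lhs - rhs) == 0)         # True: x^n |xI_m - AB| = x^m |xI_n - BA|
</code></pre>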
| <p>If $A$ is invertible then $A^{-1}(AB)A= BA$, so $AB$ and $BA$ are similar, which implies (but is stronger than) $AB$ and $BA$ have the same minimal polynomial and the same characteristic polynomial.
The same goes if $B$ is invertible.</p>
<p>In general, from the above observation, it is not too difficult to show that $AB$ and $BA$ have the same characteristic polynomial, though the type of proof can depend on the field of coefficients of your matrices.
If the matrices are in $\mathcal{M}_n(\mathbb C)$, you can use the fact that $\operatorname{GL}_n(\mathbb C)$ is dense in $\mathcal{M}_n(\mathbb C)$ and the continuity of the function which maps a matrix to its characteristic polynomial. There are at least 5 other ways to proceed (especially for fields other than $\mathbb C$).</p>
<p>In general $AB$ and $BA$ do not have the same minimal polynomial. I'll let you search a bit for a counter example.</p>
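<p>(For readers who give up on the search: one standard counterexample, sketched in SymPy below; skip it if you want to find one yourself.)</p>
<pre><code>import sympy as sp

A = sp.Matrix([[0, 1], [0, 0]])
B = sp.Matrix([[0, 0], [0, 1]])
AB, BA = A*B, B*A

print(AB)   # Matrix([[0, 1], [0, 0]]): nonzero, but AB**2 == 0, so minimal polynomial x**2
print(BA)   # the zero matrix, so its minimal polynomial is x
x = sp.symbols('x')
print(AB.charpoly(x).as_expr(), BA.charpoly(x).as_expr())   # both x**2, as expected
</code></pre>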
|
geometry | <p>The volume of a $d$ dimensional hypersphere of radius $r$ is given by:</p>
<p>$$V(r,d)=\frac{(\pi r^2)^{d/2}}{\Gamma\left(\frac{d}{2}+1\right)}$$</p>
<p>What intrigues me about this is that $V\to 0$ as $d\to\infty$ for any fixed $r$. How can this be? For fixed $r$, I would have thought adding a dimension would make the volume bigger, but apparently it does not. Anyone got a good explanation?</p>
| <p>I suppose you could say that adding a dimension "makes the volume bigger" for the hypersphere, but it does so even more for the unit you measure the volume with, namely the unit <em>cube</em>. So the numerical value of the volume does go towards zero.</p>
<p>Really, of course, it is apples to oranges because volumes of different dimensions are not commensurable -- it makes no sense to compare the <em>area</em> of the unit disk with the <em>volume</em> of the unit sphere.</p>
<p>All we can say is that in higher dimensions, a hypersphere is a successively worse approximation to a hypercube (of side length twice the radius). They coincide in dimension one, and it goes downward from there.</p>
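<p>The numerical decay is easy to see directly (a short Python sketch using the formula from the question):</p>
<pre><code>from math import pi, gamma

def volume(r, d):
    return (pi * r**2) ** (d / 2) / gamma(d / 2 + 1)

for d in (1, 2, 3, 5, 10, 20, 50, 100):
    print(d, volume(1, d))
# The unit-ball volume peaks around d = 5 and then decays rapidly towards 0.
</code></pre>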
| <p>The reason is that the length of the diagonal of the cube goes to infinity.</p>
<p>The cube in some sense does exactly what we expect. If its side length is $1$, it will have the same volume in any dimension. So let's take a cube centered at the origin with side length $r$. Then what is the smallest sphere which contains this cube? It would need to have radius $\frac{r\sqrt{d}}{2}$, so the radius of the sphere required goes to infinity. </p>
<p>Perhaps this gives some intuition.</p>
|
geometry | <p>This is an idea I have had in my head for years and years and I would like to know the answer, and also I would like to know if it's somehow relevant to anything or useless.
I describe my thoughts with the following image:<br>
<img src="https://i.sstatic.net/UnAyt.png" alt="enter image description here"><br>
What would the area of the "red almost half circle" on top of the third square be, assuming you rotate the hypotenuse of a square around its center, limiting its movement so it cannot pass through the bottom of the square.<br>
My guess would be: </p>
<p>$$\frac{\pi(h/2)^2 - a^2}{2}$$</p>
<p>And also, does this have any meaning? Have I been wandering around thinking about complete nonsense for so many years?</p>
| <p>I found this problem interesting enough to make a little animation along the line of @Blue's diagram (but I didn't want to edit their answer without permission):</p>
<p><img src="https://i.sstatic.net/5le9i.gif" alt="enter image description here"></p>
<p><em>Mathematica</em> syntax for those who are interested:</p>
<pre><code>G[d_, t_] := {t - (d t)/Sqrt[1 + t^2], d /Sqrt[1 + t^2]} (* point at distance d from (t, 0) on the line toward the square's center (0, 1) *)
P[c_, m_] := Show[ParametricPlot[G[# Sqrt[8], t], {t, -4, 4},
PlotStyle -> {Dashed, Hue[#]}, PlotRange -> {{-1.025, 1.025}, {-.025,
2 Sqrt[2] + 0.025}}] & /@ (Range[m]/m),
ParametricPlot[G[Sqrt[8], t], {t, -1, 1}, PlotStyle -> {Red, Thick}], (* red: trace of the far end of the full diagonal *)
Graphics[{Black, Disk[{0, 1}, .025], Opacity[0.1], Rectangle[{-1, 0}, {1, 2}],
Opacity[1], Line[{{c, 0}, G[Sqrt[8], c]}], Disk[{c, 0}, .025],
{Hue[#], Disk[G[# Sqrt[8], c], .025]} & /@ (Range[m]/m)}],
Axes -> False]
Manipulate[P[c, m], {c, -1, 1}, {m, 1, 20, 1}] (* c slides the segment's foot along the bottom; m = number of intermediate points traced *)
</code></pre>
| <p><img src="https://i.sstatic.net/0Z1P6.jpg" alt=""></p>
<p>Let $O$ be the center of the square, and let $\ell(\theta)$ be the line through $O$ that makes an angle $\theta$ with the horizontal line.
The line $\ell(\theta)$ intersects with the lower side of the square at a point $M_\theta$, with
$OM_\theta=\dfrac{a}{2\sin \theta }$. So, if $N_\theta$ is the other end of our 'rotating' diagonal then we have
$$ON_\theta=\rho(\theta)=h-OM_\theta=a\sqrt{2}-\dfrac{a}{2\sin \theta }.$$
Now, the area traced by $ON_\theta$ as $\theta$ varies between $\pi/4$ and $3\pi/4$ is our desired area augmented by the area of the quarter of the square. So, the desired area is
$$\eqalign{
\mathcal{A}&=\frac{1}{2}\int_{\pi/4}^{3\pi/4}\rho^2(\theta)\,d\theta-\frac{a^2}{4}\cr
&=a^2\int_{\pi/4}^{\pi/2}\left(\sqrt{2}-\frac{1}{2\sin\theta}\right)^2\,d\theta-\frac{a^2}{4}\cr
&=a^2\left(\frac{\pi}{2}-\sqrt{2}\ln(1+\sqrt{2})\right)
}
$$
Therefore, the correct answer is about $13.6\%$ larger than the conjectured answer.</p>
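<p>A quick numerical check of this closed form (a sketch assuming SciPy, with $a=1$):</p>
<pre><code>import numpy as np
from scipy.integrate import quad

rho = lambda th: np.sqrt(2) - 1 / (2 * np.sin(th))          # a = 1
area, _ = quad(lambda th: 0.5 * rho(th)**2, np.pi/4, 3*np.pi/4)
area -= 0.25                                                # minus the quarter square

closed_form = np.pi/2 - np.sqrt(2) * np.log(1 + np.sqrt(2))
conjectured = (np.pi * (np.sqrt(2)/2)**2 - 1) / 2           # the guess, with h = sqrt(2)
print(area, closed_form)      # both ~0.32435
print(area / conjectured)     # ~1.136, i.e. about 13.6% larger
</code></pre>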
|
linear-algebra | <blockquote>
<p>Show that the determinant of a matrix $A$ is equal to the product of its eigenvalues $\lambda_i$.</p>
</blockquote>
<p>So I'm having a tough time figuring this one out. I know that I have to work with the characteristic polynomial of the matrix $\det(A-\lambda I)$. But, when considering an $n \times n$ matrix, I do not know how to work out the proof. Should I just use the determinant formula for any $n \times n$ matrix? I'm guessing not, because that is quite complicated. Any insights would be great.</p>
| <p>Suppose that <span class="math-container">$\lambda_1, \ldots, \lambda_n$</span> are the eigenvalues of <span class="math-container">$A$</span>. Then the <span class="math-container">$\lambda$</span>s are also the roots of the characteristic polynomial, i.e.</p>
<p><span class="math-container">$$\begin{array}{rcl} \det (A-\lambda I)=p(\lambda)&=&(-1)^n (\lambda - \lambda_1 )(\lambda - \lambda_2)\cdots (\lambda - \lambda_n) \\ &=&(-1) (\lambda - \lambda_1 )(-1)(\lambda - \lambda_2)\cdots (-1)(\lambda - \lambda_n) \\ &=&(\lambda_1 - \lambda )(\lambda_2 - \lambda)\cdots (\lambda_n - \lambda)
\end{array}$$</span></p>
<p>The first equality follows from the factorization of a polynomial given its roots; the leading (highest degree) coefficient <span class="math-container">$(-1)^n$</span> can be obtained by noting that the <span class="math-container">$\lambda^n$</span> term in the determinant arises only from the product of the diagonal entries of <span class="math-container">$A-\lambda I$</span>.</p>
<p>Now, by setting <span class="math-container">$\lambda$</span> to zero (simply because it is a variable) we get on the left side <span class="math-container">$\det(A)$</span>, and on the right side <span class="math-container">$\lambda_1 \lambda_2\cdots\lambda_n$</span>, that is, we indeed obtain the desired result</p>
<p><span class="math-container">$$ \det(A) = \lambda_1 \lambda_2\cdots\lambda_n$$</span></p>
<p>So the determinant of the matrix is equal to the product of its eigenvalues.</p>
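<p>A quick numerical illustration (a Python/NumPy sketch):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))

eigenvalues = np.linalg.eigvals(A)   # complex in general, in conjugate pairs
print(np.prod(eigenvalues))          # product of eigenvalues (imaginary part ~ 0)
print(np.linalg.det(A))              # agrees with the real part above
</code></pre>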
| <p>I am a beginning Linear Algebra learner and this is just my humble opinion. </p>
<p>One idea presented above is that </p>
<p>Suppose that $\lambda_1,\ldots,\lambda_n$ are the eigenvalues of $A$. </p>
<p>Then the $\lambda$s are also the roots of the characteristic polynomial, i.e.</p>
<p>$$\det(A−\lambda I)=(\lambda_1-\lambda)(\lambda_2−\lambda)\cdots(\lambda_n−\lambda)$$.</p>
<p>Now, by setting $\lambda$ to zero (simply because it is a variable) we get on the left side $\det(A)$, and on the right side $\lambda_1\lambda_2\ldots \lambda_n$, that is, we indeed obtain the desired result</p>
<p>$$\det(A)=\lambda_1\lambda_2\ldots \lambda_n$$.</p>
<p>I don't think that this works generally, but only for the case when $\det(A) = 0$. </p>
<p>Because, when we write down the characteristic equation, we use the relation $\det(A - \lambda I) = 0$. Following the same logic, the only case where $\det(A - \lambda I) = \det(A) = 0$ is when $\lambda = 0$.
The relationship $\det(A - \lambda I) = 0$ must be obeyed even in the special case $\lambda = 0$, which implies $\det(A) = 0$.</p>
<p><strong>UPDATED POST</strong></p>
<p>Here I propose a way to prove the theorem for the 2 by 2 case.
Let $A$ be a 2 by 2 matrix. </p>
<p>$$ A = \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\\\end{pmatrix}$$ </p>
<p>The idea is to use a certain property of determinants, </p>
<p>$$ \begin{vmatrix} a_{11} + b_{11} & a_{12} \\ a_{21} + b_{21} & a_{22}\\\end{vmatrix} = \begin{vmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\\\end{vmatrix} + \begin{vmatrix} b_{11} & a_{12}\\b_{21} & a_{22}\\\end{vmatrix}$$</p>
<p>Let $ \lambda_1$ and $\lambda_2$ be the 2 eigenvalues of the matrix $A$. (The eigenvalues can be distinct, or repeated, real or complex it doesn't matter.)</p>
<p>The two eigenvalues $\lambda_1$ and $\lambda_2$ must satisfy the following condition :</p>
<p>$$\det (A -I\lambda) = 0 $$
Where $\lambda$ is the eigenvalue of $A$.</p>
<p>Therefore,
$$\begin{vmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda\\\end{vmatrix} = 0 $$</p>
<p>Therefore, using the property of determinants provided above, I will try to <em>decompose</em> the determinant into parts. </p>
<p>$$\begin{vmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda\\\end{vmatrix} = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} - \lambda\\\end{vmatrix} - \begin{vmatrix} \lambda & 0 \\ a_{21} & a_{22} - \lambda\\\end{vmatrix}= \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}\\\end{vmatrix} - \begin{vmatrix} a_{11} & a_{12} \\ 0 & \lambda \\\end{vmatrix}-\begin{vmatrix} \lambda & 0 \\ a_{21} & a_{22} - \lambda\\\end{vmatrix}$$</p>
<p>The final determinant can be further reduced. </p>
<p>$$
\begin{vmatrix} \lambda & 0 \\ a_{21} & a_{22} - \lambda\\\end{vmatrix} = \begin{vmatrix} \lambda & 0 \\ a_{21} & a_{22} \\\end{vmatrix} - \begin{vmatrix} \lambda & 0\\ 0 & \lambda\\\end{vmatrix}
$$</p>
<p>Substituting the final determinant, we will have </p>
<p>$$
\begin{vmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda\\\end{vmatrix} = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}\\\end{vmatrix} - \begin{vmatrix} a_{11} & a_{12} \\ 0 & \lambda \\\end{vmatrix} - \begin{vmatrix} \lambda & 0 \\ a_{21} & a_{22} \\\end{vmatrix} + \begin{vmatrix} \lambda & 0\\ 0 & \lambda\\\end{vmatrix} = 0
$$</p>
<p>In a polynomial
$$ a_{n}\lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_{1}\lambda + a_{0}\lambda^0 = 0$$
the product of the roots equals $(-1)^n a_{0}/a_{n}$; in particular it is governed by the constant term, $a_{0}$.</p>
<p>From the decomposed determinant, the only term which doesn't involve $\lambda$ would be the first term </p>
<p>$$
\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\\end{vmatrix} = \det (A)
$$</p>
<p>Therefore, the product of roots aka product of eigenvalues of $A$ is equivalent to the determinant of $A$. </p>
<p>I am having difficulty generalizing this idea of proof to the $n$ by $n$ case though, as it is complex and time-consuming for me. </p>
|
logic | <p>Most of the systems mathematicians are interested in are consistent, which means, by Gödel's incompleteness theorems, that there must be unprovable statements.</p>
<p>I've seen a simple natural language statement here and elsewhere that's supposed to illustrate this: "I am not a provable statement." which leads to a paradox if false and logical disconnect if true (i.e. logic doesn't work to prove it by definition). Like this answer explains: <a href="https://math.stackexchange.com/a/453764/197692">https://math.stackexchange.com/a/453764/197692</a>.</p>
<p>The natural language statement is simple enough for people to get why there's a problem here. But Gödel's incompleteness theorems show that similar statements exist within mathematical systems.</p>
<p>My question then is, are there a <em>simple</em> unprovable statements, that would seem intuitively true to the layperson, or is intuitively unprovable, to illustrate the same concept in, say, integer arithmetic or algebra?</p>
<p>My understanding is that the continuum hypothesis is an example of an unprovable statement in Zermelo-Fraenkel set theory, but that's not really simple or intuitive.</p>
<p>Can someone give a good example you can point to and say "That's what Gödel's incompleteness theorems are talking about"? Or is this just something that is fundamentally hard to show mathematically?</p>
<p>Update:
There are some fantastic answers here that are certainly accessible. It will be difficult to pick a "right" one.</p>
<p>Originally I was hoping for something a high school student could understand, without having to explain axiomatic set theory, or Peano Arithmetic, or countable versus uncountable, or non-euclidean geometry. But the impression I am getting is that in a sufficiently well developed mathematical system, mathematicians have plumbed the depths of it to the point where potentially unprovable statements either remain as conjecture and are therefore hard to grasp by nature (because very smart people are stumped by them), or, once shown to be unprovable, become axiomatic in some new system or branch of systems.</p>
| <p>Here's a nice example that I think is easier to understand than the usual examples of Goodstein's theorem, Paris-Harrington, etc. Take a countably infinite paint box; this means that it has one color of paint for each positive integer; we can therefore call the colors <span class="math-container">$C_1, C_2, $</span> and so on. Take the set of real numbers, and imagine that each real number is painted with one of the colors of paint.</p>
<p>Now ask the question: Are there four real numbers <span class="math-container">$a,b,c,d$</span>, all painted the same color, and not all zero, such that <span class="math-container">$$a+b=c+d?$$</span></p>
<p>It seems reasonable to imagine that the answer depends on how exactly the numbers have been colored. For example, if you were to color every real number with color <span class="math-container">$C_1$</span>, then obviously there are <span class="math-container">$a,b,c,d$</span> satisfying the two desiderata. But one can at least entertain the possibility that if the real numbers were colored in a sufficiently complicated way, there would not be four numbers of the same color with <span class="math-container">$a+b=c+d$</span>; perhaps a sufficiently clever painter could arrange that for any four numbers with <span class="math-container">$a+b=c+d$</span> there would always be at least one of a different color than the rest.</p>
<p>So now you can ask the question: Must such <span class="math-container">$a,b,c,d$</span> exist <em>regardless</em> of how cleverly the numbers are actually colored?</p>
<p>And the answer, proved by Erdős in 1943 is: yes, <em>if and only if the continuum hypothesis is false</em>, and is therefore independent of the usual foundational axioms for mathematics.</p>
<hr>
<p>The result is mentioned in </p>
<ul>
<li>Fox, Jacob “<a href="http://math.mit.edu/~fox/paper-foxrado.pdf" rel="noreferrer">An infinite color analogue of Rado's theorem</a>”, Journal of Combinatorial Theory Series A <strong>114</strong> (2007), 1456–1469.</li>
</ul>
<p>Fox says that the result I described follows from a more general result of Erdős and Kakutani, that the continuum hypothesis is equivalent to there being a countable coloring of the reals such that each monochromatic subset is linearly independent over <span class="math-container">$\Bbb Q$</span>, which is proved in:</p>
<ul>
<li>Erdős, P and S. Kakutani “<a href="http://projecteuclid.org/euclid.bams/1183505209" rel="noreferrer">On non-denumerable graphs</a>”, Bull. Amer. Math. Soc. <strong>49</strong> (1943) 457–461.</li>
</ul>
<p>A proof for the <span class="math-container">$a+b=c+d$</span> situation, originally proved by Erdős, is given in:</p>
<ul>
<li>Davies, R.O. “<a href="http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=2073404" rel="noreferrer">Partitioning the plane into denumerably many sets without repeated distance</a>” Proc. Cambridge Philos. Soc. <strong>72</strong> (1972) 179–183.</li>
</ul>
| <p>Any statement which is not logically valid (read: always true) is unprovable. The statement $\exists x\exists y(x>y)$ is not provable from the theory of linear orders, since it is false in the singleton order. On the other hand, it is not disprovable since any other order type would satisfy it.</p>
<p>The statement $\exists x(x^2-2=0)$ is not provable from the axioms of fields, since $\Bbb Q$ thinks this is false, and $\Bbb C$ thinks it is true.</p>
<p>The statement "$G$ is an Abelian group" is not provable since given a group $G$ it can be Abelian and it could be non-Abelian.</p>
<p>The statement "$f\colon\Bbb{R\to R}$ is continuous/differentiable/continuously differentiable/smooth/analytic/a polynomial" and so on and so forth, are all unprovable, because just like that given an arbitrary function we don't know anything about it. Even if we know it is continuous we can't know if it is continuously differentiable, or smooth, or anything else. So these are all additional assumptions we have to make.</p>
<p>Of course, given a particular function, like $f(x)=e^x$ we can sit down and prove things about it, but the statement "$f$ is a continuous function" cannot be proved or disproved until further assumptions are added.</p>
<p>And that's the point that I am trying to make here. Every statement which cannot be always proved will be unprovable from some assumptions. But you ask for an intuitive statement, and that causes a problem.</p>
<p>The problem with "intuitive statement" is that the more you work in mathematics, the more your intuition is decomposed and reconstructed according to the topic you work with. The continuum hypothesis is perfectly intuitive and simple for me, it is true that understanding <strong>how</strong> it can be unprovable is difficult, but the statement itself is not very difficult once you cleared up the basic notions like cardinality and power sets.</p>
<p>Finally, let me just add that there are plenty of theories which are complete and consistent and we work with them. Some of them are even recursively enumerable. The incompleteness theorem gives us <em>three</em> conditions from which incompleteness follows; any two alone won't suffice. (1) Consistent, (2) Recursively enumerable, (3) Interprets arithmetic.</p>
<p>There are complete theories which satisfy the first two, and there are complete theories which are consistent and interpret arithmetic, and of course any inconsistent theory is complete.</p>
|
probability | <p>Could you kindly list here all the criteria you know which guarantee that a <em>continuous local martingale</em> is in fact a true martingale? Which of these are valid for a general local martingale (non necessarily continuous)? Possible references to the listed results would be appreciated.</p>
| <p>Here you are :</p>
<p>From Protter's book "Stochastic Integration and Differential Equations" Second Edition (page 73 and 74)</p>
<p>First :
Let $M$ be a local martingale. Then $M$ is a martingale with
$E(M_t^2) < \infty, \forall t > 0$, if and only if $E([M,M]_t) < \infty, \forall t > 0$. If $E([M,M]_t) < \infty$, then $E(M_t^2) = E([M,M]_t)$. </p>
<p>Second :</p>
<p>If $M$ is a local martingale and $E([M, M]_\infty) < \infty$, then $M$ is a
square integrable martingale (i.e. $\sup_{t>0} E(M_t^2) = E(M_\infty^2) < \infty$). Moreover $E(M_t^2) = E([M, M]_t), \forall t \in [0,\infty]$. </p>
<p>Third :</p>
<p>From George Lowther's Fantastic Blog, for positive Local Martingales that are (shall I say) weak-unique solution of some SDEs.</p>
<p>Take a look at it by yourself :
<a href="http://almostsure.wordpress.com/category/stochastic-processes/">http://almostsure.wordpress.com/category/stochastic-processes/</a></p>
<p>Fourth :</p>
<p>For a positive continuous local martingale $Y$ that can be written as the Doléans-Dade exponential of a (continuous) local martingale $M$: if $E(e^{\frac{1}{2}[M,M]_\infty})<\infty$ (that's Novikov's condition on $M$), then $Y$ is a uniformly integrable martingale. (I think there are some variants around the same theme.)</p>
<p>I think I remember reading a paper with another criterion, but I don't have it with me right now. I'll try to find it and add that last criterion when I do.</p>
<p>Regards</p>
| <p>I found by myself other criteria that I think it is worth adding to this list.</p>
<p>5) $M$ is a local martingale of class DL iff $M$ is a martingale</p>
<p>6) If $M$ is a bounded local martingale, then it is a martingale.</p>
<p>7) If $M$ is a local martingale and $E(\sup_{s \in [0,t]} |M_s|) < \infty \, \forall t \geq 0$, then $M$ is a martingale.</p>
<p>8) Let $M$ be a local martingale and $(T_n)$ a reducing sequence for it. If $E(\sup_{n} |M_{t \wedge T_n}|) < \infty \, \forall t \geq 0$, then $M$ is a martingale.</p>
<p>9) Suppose we have a process $(M_t)_{t\geq 0}$ of the form $M_t=f(t,W_t)$. Then $M$ is a local martingale iff $(\frac{\partial}{\partial t}+\frac{1}{2}\frac{\partial^2}{\partial x^2})f(t,x)=0$.
If moreover $\forall \, \varepsilon >0$ $\exists C(\varepsilon,t)$ such that $|f(s,x)|\leq C e^{\varepsilon x^2} \, \forall s \geq 0$, then $M$ is a martingale.</p>
|
geometry | <p>In the textbook I am reading, it says a dimension is the number of independent parameters needed to specify a point. In order to make a circle, you need two parameters to specify the $x$ and $y$ position of a point on a circle, but apparently a circle can be described with only the $x$-coordinate? How is this possible without the $y$-coordinate also?</p>
| <p>Suppose we're talking about a unit circle. We could specify any point on it as:
$$(\sin(\theta),\cos(\theta))$$
which uses only one parameter. We could also notice that there are only $2$ points with a given $x$ coordinate:
$$(x,\pm\sqrt{1-x^2})$$
and we would generally not consider having to specify a sign as being an additional parameter, since it is discrete, whereas we consider only continuous parameters for dimension.</p>
<p>That said, a <a href="http://en.wikipedia.org/wiki/Hilbert_curve">Hilbert curve</a> or <a href="http://en.wikipedia.org/wiki/Z-order_curve">Z-order curve</a> parameterizes a square in just one parameter, but we would certainly not say a square is one dimensional. The definition of dimension that you were given is kind of sloppy - really, the fact that the circle is of dimension one can be taken more to mean "If you zoom in really close to a circle, it looks basically like a line" and this happens, more or less, to mean that it can be parameterized in one variable.</p>
| <p>Continuing ploosu2, the circle can be parameterized with one parameter (even for those who have not studied trig functions)...
$$
x = \frac{2t}{1+t^2},\qquad y=\frac{1-t^2}{1+t^2}
$$</p>
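<p>One can confirm symbolically that this single-parameter expression stays on the unit circle (a SymPy sketch):</p>
<pre><code>import sympy as sp

t = sp.symbols('t')
x = 2*t / (1 + t**2)
y = (1 - t**2) / (1 + t**2)
print(sp.simplify(x**2 + y**2))   # 1: the point lies on the unit circle for every t
</code></pre>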
|
probability | <p>Whats the difference between <em>probability density function</em> and <em>probability distribution function</em>? </p>
| <p><strong>Distribution Function</strong></p>
<ol>
<li>The probability distribution function / probability function has an ambiguous definition. It may refer to:
<ul>
<li>Probability density function (PDF) </li>
<li>Cumulative distribution function (CDF)</li>
<li>or probability mass function (PMF) (statement from Wikipedia)</li>
</ul></li>
<li>But what is certain is:
<ul>
<li>Discrete case: Probability Mass Function (PMF)</li>
<li>Continuous case: Probability Density Function (PDF)</li>
<li>Both cases: Cumulative distribution function (CDF)</li>
</ul></li>
<li>The probability at a certain value <span class="math-container">$x$</span>, <span class="math-container">$P(X = x)$</span>, can be directly obtained from:
<ul>
<li>PMF for discrete case</li>
<li>PDF for continuous case (strictly speaking, the PDF gives a probability <em>density</em> at <span class="math-container">$x$</span>, not a probability)</li>
</ul></li>
<li>The probability for values less than <span class="math-container">$x$</span>, <span class="math-container">$P(X < x)$</span>, or the probability for values within a range from <span class="math-container">$a$</span> to <span class="math-container">$b$</span>, <span class="math-container">$P(a < X < b)$</span>, can be directly obtained from:
<ul>
<li>CDF for both discrete / continuous case</li>
</ul></li>
<li>The term "distribution function" usually refers to the CDF or Cumulative Frequency Function (see <a href="http://mathworld.wolfram.com/DistributionFunction.html" rel="noreferrer">this</a>)</li>
</ol>
<p><strong>In terms of Acquisition and Plot Generation Method</strong></p>
<ol>
<li>Collected data appear as discrete when:
<ul>
<li>The measurement of a subject is of a naturally discrete type, such as numbers resulting from dice rolls, or counts of people.</li>
<li>The measurement is digitized machine data, which has no intermediate values between quantized levels due to sampling process.</li>
<li>In the latter case, when the resolution is higher, the measurement is closer to an analog/continuous signal/variable.</li>
</ul></li>
<li>Way to generate a PMF from discrete data:
<ul>
<li>Plot a histogram of the data for all the <span class="math-container">$x$</span>'s; the <span class="math-container">$y$</span>-axis is the frequency or quantity at every <span class="math-container">$x$</span>.</li>
<li>Scale the <span class="math-container">$y$</span>-axis by dividing by the total number of data collected (the data size) <span class="math-container">$\longrightarrow$</span> this is called the PMF.</li>
</ul></li>
<li>Way to generate a PDF from discrete / continuous data:
<ul>
<li>Find a continuous equation that models the collected data, say the normal distribution equation.</li>
<li>Calculate the parameters required in the equation from the collected data. For example, the parameters for the normal distribution equation are the mean and standard deviation; calculate them from the collected data.</li>
<li>Based on the parameters, plot the equation with continuous <span class="math-container">$x$</span>-value <span class="math-container">$\longrightarrow$</span> that is called PDF.</li>
</ul></li>
<li>How to generate a CDF:
<ul>
<li>In the discrete case, the CDF accumulates the <span class="math-container">$y$</span> values in the PMF at each discrete <span class="math-container">$x$</span> and all values less than <span class="math-container">$x$</span>. Repeat this for every <span class="math-container">$x$</span>. The final plot is monotonically increasing, reaching <span class="math-container">$1$</span> at the last <span class="math-container">$x$</span> <span class="math-container">$\longrightarrow$</span> this is called the discrete CDF.</li>
<li>In the continuous case, integrate the PDF over <span class="math-container">$x$</span>; the result is a continuous CDF.</li>
</ul></li>
</ol>
<p><strong>Why PMF, PDF and CDF?</strong></p>
<ol>
<li>PMF is preferred when
<ul>
<li>The probability at every <span class="math-container">$x$</span> value is the interest of study. This makes sense when studying discrete data - such as when we are interested in the probability of getting a certain number from a die roll.</li>
</ul></li>
<li>PDF is preferred when
<ul>
<li>We wish to model the collected data with a continuous function, using a few parameters such as the mean to approximate the population distribution.</li>
</ul></li>
<li>CDF is preferred when
<ul>
<li>Cumulative probability over a range is the point of interest. </li>
<li>Especially in the case of continuous data, the CDF makes much more sense than the PDF - e.g., the probability of a student's height being less than <span class="math-container">$170$</span> cm (CDF) is much more informative than the density at exactly <span class="math-container">$170$</span> cm (PDF).</li>
</ul></li>
</ol>
| <p>The relation between the probability density function <span class="math-container">$f$</span> and the cumulative distribution function <span class="math-container">$F$</span> is...</p>
<ul>
<li><p>if <span class="math-container">$f$</span> is discrete:
<span class="math-container">$$
F(k) = \sum_{i \le k} f(i)
$$</span></p></li>
<li><p>if <span class="math-container">$f$</span> is continuous:
<span class="math-container">$$
F(x) = \int_{y \le x} f(y)\,dy
$$</span></p></li>
</ul>
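<p>Both relations are easy to see numerically (a sketch assuming SciPy):</p>
<pre><code>import numpy as np
from scipy import stats
from scipy.integrate import quad

# Discrete case: a fair die. The CDF is the running sum of the PMF.
pmf = np.full(6, 1/6)
print(np.cumsum(pmf))                  # [0.167 0.333 0.5 0.667 0.833 1.0]

# Continuous case: standard normal. The CDF is the integral of the PDF.
integral, _ = quad(stats.norm.pdf, -np.inf, 1.0)
print(integral, stats.norm.cdf(1.0))   # both ~0.8413
</code></pre>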
|
logic | <p>For some reason, be it some bad habit or something else, I can not understand why the statement "p only if q" would translate into p implies q. For instance, I have the statement "Samir will attend the party only if Kanti will be there." The way I interpret this is, "It is true that Samir will attend the party only if it is true that Kanti will be at the party;" which, in my mind, becomes "If Kanti will be at the party, then Samir will be there." </p>
<p>Can someone convince me of the right way?</p>
<p>EDIT:</p>
<p>I have read them carefully, and probably have done so for over a year. I understand what sufficient conditions and necessary conditions are. I understand the conditional relationship in almost all of its forms, except the form "q only if p." <strong>What I do not understand is, why is p the necessary condition and q the sufficient condition.</strong> I am <strong>not</strong> asking, what are the sufficient and necessary conditions, rather, I am asking <strong>why.</strong></p>
| <p>Think about it: "$p$ only if $q$" means that $q$ is a <strong>necessary condition</strong> for $p$. It means that $p$ can occur <strong>only when</strong> $q$ has occurred. This means that whenever we have $p$, it must also be that we have $q$, as $p$ can happen only if we have $q$: that is to say, that $p$ <strong>cannot happen</strong> if we <strong>do not</strong> have $q$. </p>
<p>The critical line is <em>whenever we have $p$, we must also have $q$</em>: this allows us to say that $p \Rightarrow q$, or $p$ implies $q$.</p>
<p>To use this on your example: we have the statement "Samir will attend the party only if Kanti attends the party." So if Samir attends the party, then Kanti must be at the party, because Samir will attend the party <strong>only if</strong> Kanti attends the party.</p>
<p>EDIT: It is a common mistake to read <em>only if</em> as a stronger form of <em>if</em>. It is important to emphasize that <em>$q$ if $p$</em> means that $p$ is a <strong>sufficient condition</strong> for $q$, and that <em>$q$ only if $p$</em> means that $p$ is a <strong>necessary condition</strong> for $q$.</p>
<p>Furthermore, we can supply more intuition on this fact: Consider $q$ <em>only if</em> $p$. It means that $q$ can occur only when $p$ has occurred: so if we don't have $p$, we can't have $q$, because $p$ is necessary for $q$. We note that <em>if we don't have $p$, then we can't have $q$</em> is a logical statement in itself: $\lnot p \Rightarrow \lnot q$. We know that all logical statements of this form are equivalent to their contrapositives. Take the contrapositive of $\lnot p \Rightarrow \lnot q$: it is $\lnot \lnot q \Rightarrow \lnot \lnot p$, which is equivalent to $q \Rightarrow p$.</p>
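<p>A brute-force truth table makes the equivalence concrete (a Python sketch; "$p$ only if $q$" is encoded as "not ($p$ and not $q$)"):</p>
<pre><code>from itertools import product

for p, q in product([True, False], repeat=2):
    p_only_if_q = not (p and not q)   # p cannot hold without q
    p_implies_q = (not p) or q
    print(p, q, p_only_if_q == p_implies_q)   # True on all four rows
</code></pre>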
| <p>I don't think there's really anything to <em>understand</em> here. One simply has to learn as a fact that in mathematics jargon the words "only if" invariably encode that particular meaning. It is not really forced by the everyday meanings of "only" and "if" in isolation; it's just how it is.</p>
<p>By this I mean that the mathematical meaning is certainly a <em>possible</em> meaning of the English phrase "only if", the mathematical meaning is not the <em>only</em> possible way "only if" can be used in everyday English, and it just needs to be memorized as a fact that the meaning in mathematics is less flexible than in ordinary conversation.</p>
<p>To see that the mathematical meaning is at least <em>possible</em> for ordinary language, consider the sentence</p>
<blockquote>
<p>John smokes only on Saturdays.</p>
</blockquote>
<p>From this we can conclude that if we see John puffing on a cigarette, then today must be a Saturday. We <em>cannot</em>, out of ordinary common sense, conclude that if we look at the calendar and it says today is Saturday, then John must <em>currently</em> be lighting up -- because the claim doesn't say that John smokes <em>continuously</em> for the entire Saturday, or even every Saturday.</p>
<p>Now, if we can agree that there's no essential difference between "if" and "when" in this context, this might as well be phrased as</p>
<blockquote>
<p>John is smoking now <em>only if</em> today is a Saturday.</p>
</blockquote>
<p>which (according to the above analysis) ought to mean, mathematically,
$$ \mathit{smokes}(\mathit{John}) \implies \mathit{today}=\mathit{Saturday} $$</p>
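<p>For anyone still doubtful, the equivalence can be checked mechanically over all four truth assignments (a small Python sketch of my own; the variable names are just labels):</p>
<pre><code>from itertools import product

for p, q in product([False, True], repeat=2):
    only_if = not (p and not q)      # "p only if q": p never holds without q
    implies = (not p) or q           # material implication p => q
    contrapositive = q or (not p)    # (not q) => (not p)
    assert only_if == implies == contrapositive
</code></pre>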
|
linear-algebra | <p>I happened to stumble upon the following matrix:
$$ A = \begin{bmatrix}
a & 1 \\
0 & a
\end{bmatrix}
$$</p>
<p>And after trying a bunch of different examples, I noticed the following remarkable pattern. If $P$ is a polynomial, then:
$$ P(A)=\begin{bmatrix}
P(a) & P'(a) \\
0 & P(a)
\end{bmatrix}$$</p>
<p>Where $P'(a)$ is the derivative evaluated at $a$.</p>
<p>Furthermore, I tried extending this to other matrix functions, for example the matrix exponential, and wolfram alpha tells me:
$$ \exp(A)=\begin{bmatrix}
e^a & e^a \\
0 & e^a
\end{bmatrix}$$
and this does in fact follow the pattern since the derivative of $e^x$ is itself!</p>
<p>Furthermore, I decided to look at the function $P(x)=\frac{1}{x}$. If we interpret the reciprocal of a matrix to be its inverse, then we get:
$$ P(A)=\begin{bmatrix}
\frac{1}{a} & -\frac{1}{a^2} \\
0 & \frac{1}{a}
\end{bmatrix}$$
And since $P'(a)=-\frac{1}{a^2}$, the pattern still holds!</p>
<p>After trying a couple more examples, it seems that this pattern holds whenever $P$ is any rational function.</p>
<p>I have two questions:</p>
<ol>
<li><p>Why is this happening?</p></li>
<li><p>Are there any other known matrix functions (which can also be applied to real numbers) for which this property holds?</p></li>
</ol>
| <p>If $$ A = \begin{bmatrix}
a & 1 \\
0 & a
\end{bmatrix}
$$
then by induction you can prove that
$$ A^n = \begin{bmatrix}
a^n & n a^{n-1} \\
0 & a^n
\end{bmatrix} \tag 1
$$
for $n \ge 1 $. If $f$ can be developed into a power series
$$
f(z) = \sum_{n=0}^\infty c_n z^n
$$
then
$$
f'(z) = \sum_{n=1}^\infty n c_n z^{n-1}
$$
and it follows that
$$
f(A) = \sum_{n=0}^\infty c_n A^n = c_0 I + \sum_{n=1}^\infty c_n
\begin{bmatrix}
a^n & n a^{n-1} \\
0 & a^n
\end{bmatrix} = \begin{bmatrix}
f(a) & f'(a) \\
0 & f(a)
\end{bmatrix} \tag 2
$$
From $(1)$ and
$$
A^{-1} = \begin{bmatrix}
a^{-1} & -a^{-2} \\
0 & a^{-1}
\end{bmatrix}
$$
one gets
$$
A^{-n} = \begin{bmatrix}
a^{-1} & -a^{-2} \\
0 & a^{-1}
\end{bmatrix}^n =
(-a^{-2})^{n} \begin{bmatrix}
-a & 1 \\
0 & -a
\end{bmatrix}^n \\ =
(-1)^n a^{-2n} \begin{bmatrix}
(-a)^n & n (-a)^{n-1} \\
0 & (-a)^n
\end{bmatrix} =
\begin{bmatrix}
a^{-n} & -n a^{-n-1} \\
0 & a^{-n}
\end{bmatrix}
$$
which means that $(1)$ holds for negative exponents as well.
As a consequence, $(2)$ can be generalized to functions
admitting a Laurent series representation:
$$
f(z) = \sum_{n=-\infty}^\infty c_n z^n
$$</p>
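<p>A quick numerical sanity check of $(2)$ (a sketch using NumPy/SciPy; the value $a=1.5$ is an arbitrary choice):</p>
<pre><code>import numpy as np
from scipy.linalg import expm

a = 1.5
A = np.array([[a, 1.0],
              [0.0, a]])

# exp(A) should be [[e^a, e^a], [0, e^a]], since (e^x)' = e^x
assert np.allclose(expm(A), np.exp(a) * np.array([[1, 1], [0, 1]]))

# the inverse should be [[1/a, -1/a^2], [0, 1/a]], since (1/x)' = -1/x^2
assert np.allclose(np.linalg.inv(A), [[1/a, -1/a**2], [0, 1/a]])

# a polynomial P(x) = x^3 - 2x: P(A) should be [[P(a), P'(a)], [0, P(a)]]
P, dP = lambda x: x**3 - 2*x, lambda x: 3*x**2 - 2
assert np.allclose(A @ A @ A - 2*A, [[P(a), dP(a)], [0, P(a)]])
</code></pre>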
| <p>This is a special case of a general statement: if <span class="math-container">$J$</span> is a Jordan block and <span class="math-container">$f$</span> is a matrix function, then
<span class="math-container">\begin{equation}
f(J)=\left(\begin{array}{ccccc}
f(\lambda_{0}) & \frac{f'(\lambda_{0})}{1!} & \frac{f''(\lambda_{0})}{2!} & \ldots & \frac{f^{(n-1)}(\lambda_{0})}{(n-1)!}\\
0 & f(\lambda_{0}) & \frac{f'(\lambda_{0})}{1!} & & \vdots\\
0 & 0 & f(\lambda_{0}) & \ddots & \frac{f''(\lambda_{0})}{2!}\\
\vdots & \vdots & \vdots & \ddots & \frac{f'(\lambda_{0})}{1!}\\
0 & 0 & 0 & \ldots & f(\lambda_{0})
\end{array}\right)
\end{equation}</span>
where
<span class="math-container">\begin{equation}
J=\left(\begin{array}{cccc}
\lambda_{0} & 1 & 0 & 0\\
0 & \lambda_{0} & 1& 0\\
0 & 0 & \ddots & 1\\
0 & 0 & 0 & \lambda_{0}
\end{array}\right)
\end{equation}</span>
This statement can be demonstrated in various ways (none of them short), but it's a well-known formula. You can find it in various books, such as Horn and Johnson's <em>Matrix Analysis</em>.</p>
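<p>The formula is easy to check numerically for $f=\exp$, where every derivative is again $\exp$ (a sketch; <code>expm</code> handles the defective matrix reliably, and $\lambda_0=2$ with $n=4$ is an arbitrary choice):</p>
<pre><code>import numpy as np
from scipy.linalg import expm
from math import exp, factorial

lam, n = 2.0, 4
J = lam * np.eye(n) + np.diag(np.ones(n - 1), k=1)   # n-by-n Jordan block

# Toeplitz formula: entry (i, j) is f^(j-i)(lam) / (j-i)! for j >= i
expected = np.array([[exp(lam) / factorial(j - i) if j >= i else 0.0
                      for j in range(n)] for i in range(n)])

assert np.allclose(expm(J), expected)
</code></pre>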
|
linear-algebra | <p>I'm in the process of writing an application which identifies the closest matrix from a set of square matrices $M$ to a given square matrix $A$. The closest can be defined as the most similar.</p>
<p>I think finding the distance between two given matrices is a fair approach since the smallest Euclidean distance is used to identify the closeness of vectors. </p>
<p>I found that the distance between two matrices ($A,B$) could be calculated using the <a href="http://mathworld.wolfram.com/FrobeniusNorm.html">Frobenius distance</a> $F$:</p>
<p>$$F_{A,B} = \sqrt{\operatorname{trace}\left((A-B)(A-B)'\right)} $$</p>
<p>where $B'$ represents the conjugate transpose of B.</p>
<p>I have the following points I need to clarify</p>
<ul>
<li>Is the distance between matrices a fair measure of similarity?</li>
<li>If distance is used, is Frobenius distance a fair measure for this problem? any other suggestions?</li>
</ul>
| <p>Some suggestions. Too long for a comment:</p>
<p>As I said, there are many ways to measure the "distance" between two matrices. If the matrices are $\mathbf{A} = (a_{ij})$ and $\mathbf{B} = (b_{ij})$, then some examples are:
$$
d_1(\mathbf{A}, \mathbf{B}) = \sum_{i=1}^n \sum_{j=1}^n |a_{ij} - b_{ij}|
$$
$$
d_2(\mathbf{A}, \mathbf{B}) = \sqrt{\sum_{i=1}^n \sum_{j=1}^n (a_{ij} - b_{ij})^2}
$$
$$
d_\infty(\mathbf{A}, \mathbf{B}) = \max_{1 \le i \le n}\max_{1 \le j \le n} |a_{ij} - b_{ij}|
$$
$$
d_m(\mathbf{A}, \mathbf{B}) = \max\{ \|(\mathbf{A} - \mathbf{B})\mathbf{x}\| : \mathbf{x} \in \mathbb{R}^n, \|\mathbf{x}\| = 1 \}
$$
I'm sure there are many others. If you look up "matrix norms", you'll find lots of material. And if $\|\;\|$ is any matrix norm, then $\| \mathbf{A} - \mathbf{B}\|$ gives you a measure of the "distance" between two matrices $\mathbf{A}$ and $\mathbf{B}$.</p>
<p>Or, you could simply count the number of positions where $|a_{ij} - b_{ij}|$ is larger than some threshold number. This doesn't have all the nice properties of a distance derived from a norm, but it still might be suitable for your needs.</p>
<p>These distance measures all have somewhat different properties. For example, the third one shown above will tell you that two matrices are far apart even if all their entries are the same except for a large difference in one position.</p>
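<p>All four of the distances above are one-liners with NumPy (a sketch; <code>d_m</code> is the operator 2-norm, i.e. the largest singular value of $\mathbf{A}-\mathbf{B}$):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
A, B = rng.random((4, 4)), rng.random((4, 4))
D = A - B

d1   = np.abs(D).sum()              # entrywise 1-norm
d2   = np.linalg.norm(D, 'fro')     # Frobenius (entrywise 2-norm)
dinf = np.abs(D).max()              # entrywise max norm
dm   = np.linalg.norm(D, 2)         # operator norm = max singular value
print(d1, d2, dinf, dm)
</code></pre>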
| <p>If we have two matrices $A,B$, a quick comparison can be based on their singular values.</p>
<p>You may use $\text{Distance} = \vert\,\text{fnorm}(A)-\text{fnorm}(B)\,\vert$, where fnorm is the Frobenius norm: the square root of the sum of squares of all singular values. Note, though, that this is only a pseudometric: two different matrices with equal Frobenius norms get distance $0$, so it can serve only as a coarse screen for similarity.</p>
|
linear-algebra | <p>The largest eigenvalue of a <a href="https://en.wikipedia.org/wiki/Stochastic_matrix" rel="noreferrer">stochastic matrix</a> (i.e. a matrix whose entries are positive and whose rows add up to $1$) is $1$.</p>
<p>Wikipedia marks this as a special case of the <a href="https://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem" rel="noreferrer">Perron-Frobenius theorem</a>, but I wonder if there is a simpler (more direct) way to demonstrate this result.</p>
| <p>Here's a really elementary proof (which is a slight modification of <a href="https://math.stackexchange.com/questions/8695/no-solutions-to-a-matrix-inequality/8702#8702">Fanfan's answer to a question of mine</a>). As Calle shows, it is easy to see that the eigenvalue $1$ is obtained. Now, suppose $Ax = \lambda x$ for some $\lambda > 1$. Since the rows of $A$ are nonnegative and sum to $1$, each element of vector $Ax$ is a convex combination of the components of $x$, which can be no greater than $x_{max}$, the largest component of $x$. On the other hand, at least one element of $\lambda x$ is greater than $x_{max}$, which proves that $\lambda > 1$ is impossible.</p>
| <p>Say <span class="math-container">$A$</span> is an <span class="math-container">$n \times n$</span> row stochastic matrix. Now:
<span class="math-container">$$A \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix} =
\begin{pmatrix}
\sum_{i=1}^n a_{1i} \\ \sum_{i=1}^n a_{2i} \\ \vdots \\ \sum_{i=1}^n a_{ni}
\end{pmatrix}
=
\begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}
$$</span>
Thus the eigenvalue <span class="math-container">$1$</span> is attained.</p>
<p>To show that this is the largest eigenvalue you can use the <a href="http://en.wikipedia.org/wiki/Gershgorin_circle_theorem" rel="noreferrer">Gershgorin circle theorem</a>. Take row <span class="math-container">$k$</span> in <span class="math-container">$A$</span>. The diagonal element will be <span class="math-container">$a_{kk}$</span> and the radius will be <span class="math-container">$\sum_{i\neq k} |a_{ki}| = \sum_{i \neq k} a_{ki}$</span> since all <span class="math-container">$a_{ki} \geq 0$</span>. This will be a circle with its center in <span class="math-container">$a_{kk} \in [0,1]$</span>, and a radius of <span class="math-container">$\sum_{i \neq k} a_{ki} = 1-a_{kk}$</span>. So this circle will have <span class="math-container">$1$</span> on its perimeter. This is true for all Gershgorin circles for this matrix (since <span class="math-container">$k$</span> was taken arbitrarily). Thus, since all eigenvalues lie in the union of the Gershgorin circles, all eigenvalues <span class="math-container">$\lambda_i$</span> satisfy <span class="math-container">$|\lambda_i| \leq 1$</span>.</p>
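<p>Both facts are easy to confirm numerically (a sketch; a random matrix with rows normalized to sum to $1$):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(1)
A = rng.random((6, 6))
A /= A.sum(axis=1, keepdims=True)       # make A row-stochastic

eigvals = np.linalg.eigvals(A)
assert np.isclose(np.abs(eigvals).max(), 1.0)    # spectral radius is 1
assert np.allclose(A @ np.ones(6), np.ones(6))   # (1,...,1) is an eigenvector
</code></pre>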
|
combinatorics | <p>I'm having a hard time finding the pattern. Let's say we have a set</p>
<p>$$S = \{1, 2, 3\}$$</p>
<p>The subsets are:</p>
<p>$$P = \{ \{\}, \{1\}, \{2\}, \{3\}, \{1, 2\}, \{1, 3\}, \{2, 3\}, \{1, 2, 3\} \}$$</p>
<p>And the value I'm looking for, is the sum of the cardinalities of all of these subsets. That is, for this example, $$0+1+1+1+2+2+2+3=12$$</p>
<p><strong>What's the formula for this value?</strong></p>
<p>I can sort of see a pattern, but I can't generalize it.</p>
| <p>Here is a bijective argument. Fix a finite set $S$. Let us count the number of pairs $(X,x)$ where $X$ is a subset of $S$ and $x \in X$. We have two ways of doing this, depending which coordinate we fix first.</p>
<p><strong>First way</strong>: For each set $X$, there are $|X|$ elements $x \in X$, so the count is $\sum_{X \subseteq S} |X|$. </p>
<p><strong>Second way:</strong> For each element $x \in S$, there are $2^{|S|-1}$ sets $X$ with $x \in X$. We get them all by taking the union of $\{x\}$ with an arbitrary subset of $S\setminus\{x\}$. Thus, the count is $\sum_{x \in S} 2^{|S|-1} = |S| 2^{|S|-1}$.</p>
<p>Since both methods count the same thing, we get
$$\sum_{X \subseteq S} |X| = |S| 2^{|S|-1},$$
as in the other answers.</p>
| <p>Each time an element appears in a set, it contributes $1$ to the value you are looking for. For a given element, it appears in exactly half of the subsets, i.e. $2^{n-1}$ sets. As there are $n$ total elements, you have $$n2^{n-1}$$ as others have pointed out.</p>
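<p>A brute-force check of the count $n2^{n-1}$ for small $n$ (a sketch with itertools):</p>
<pre><code>from itertools import chain, combinations

for n in range(1, 10):
    subsets = chain.from_iterable(
        combinations(range(n), k) for k in range(n + 1))
    assert sum(len(s) for s in subsets) == n * 2 ** (n - 1)
</code></pre>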
|
linear-algebra | <p>I'm starting a very long quest to learn about math, so that I can program games. I'm mostly a corporate developer, and it's somewhat boring and non exciting. When I began my career, I chose it because I wanted to create games.</p>
<p>I'm told that Linear Algebra is the best place to start. Where should I go?</p>
| <p>You are right: Linear Algebra is not just the "best" place to start. It's THE place to start.</p>
<p>Among all the books cited in <a href="http://en.wikipedia.org/wiki/Linear_algebra" rel="nofollow noreferrer">Wikipedia - Linear Algebra</a>, I would recommend:</p>
<ul>
<li>Strang, Gilbert, Linear Algebra and Its Applications (4th ed.)</li>
</ul>
<p>Strang's book has at least two reasons for being recommended. First, it's extremely easy and short. Second, it's the book they use at MIT for the extremely good video Linear Algebra course you'll find in the <a href="https://math.stackexchange.com/a/4338/391081">link of Unreasonable Sin</a>.</p>
<p>For a view towards applications (though maybe not necessarily your applications) and still elementary:</p>
<ul>
<li>B. Noble & J.W. Daniel: Applied Linear Algebra, Prentice-Hall, 1977</li>
</ul>
<p>Linear algebra has two sides: one more "theoretical", the other one more "applied". Strang's book is just elementary, but perhaps "theoretical". Noble-Daniel is definitely "applied". The distinction between the two points of view lies in the emphasis they put on "abstract" vector spaces vs specific ones such as <span class="math-container">$\mathbb{R}^n$</span> or <span class="math-container">$\mathbb{C}^n$</span>, or on matrices vs linear maps.</p>
<p>Maybe because of my penchant towards "pure" maths, I must admit that sometimes I find matrices somewhat annoying. They are funny, specific, whereas linear maps can look more "abstract" and "ethereal". But, for instance: I can't stand the proof that the matrix product is associative, whereas the corresponding associativity for the composition of (linear or non linear) maps is true..., well, just because it can't help to be true the first moment you write it down.</p>
<p>Anyway, at a more advanced level in the "theoretical" side you can use:</p>
<ul>
<li><p>Greub, Werner H., Linear Algebra, Graduate Texts in Mathematics (4th ed.), Springer</p>
</li>
<li><p>Halmos, Paul R., Finite-Dimensional Vector Spaces, Undergraduate Texts in Mathematics, Springer</p>
</li>
<li><p>Shilov, Georgi E., Linear algebra, Dover Publications</p>
</li>
</ul>
<p>In the "applied" (?) side, a book that I love and you'll appreciate if you want to study, for instance, the exponential of a matrix is <a href="https://rads.stackoverflow.com/amzn/click/com/0486445542" rel="nofollow noreferrer" rel="nofollow noreferrer">Gantmacher</a>.</p>
<p>And, at any time, you'll need to do a lot of exercises. Lipschutz's is second to none in this:</p>
<ul>
<li>Lipschutz, Seymour, 3,000 Solved Problems in Linear Algebra, McGraw-Hill</li>
</ul>
<p>Enjoy! :-)</p>
| <p>I'm very surprised no one's yet listed Sheldon Axler's <a href="https://books.google.com/books?id=5qYxBQAAQBAJ&source=gbs_similarbooks" rel="nofollow noreferrer">Linear Algebra Done Right</a> - unlike Strang and Lang, which are really great books, Linear Algebra Done Right has a lot of "common sense", and is great for someone who wants to understand what the point of it all is, as it carefully reorders the standard curriculum a bit to help someone understand what it's all about.</p>
<p>With a lot of the standard curriculum, you can get stuck in proofs and eigenvalues and kernels, before you ever appreciate the intuition and applications of what it's all about. This is great if you're a typical pure math type who deals with abstraction easily, but given the asker's description, I don't think that a rigorous pure math course is what he/she's asking for.</p>
<p>For the very practical view, yet also not at all sacrificing depth, I don't think you can do better than Linear Algebra Done Right - and if you are thirsty for more, after you've tried it, Lang and Strang are both great texts.</p>
|
linear-algebra | <p>In which cases is the inverse of a matrix equal to its transpose, that is, when do we have <span class="math-container">$A^{-1} = A^{T}$</span>? Is it when <span class="math-container">$A$</span> is orthogonal? </p>
| <p>If $A^{-1}=A^T$, then $A^TA=I$. This means that each column has unit length and is perpendicular to every other column. That means it is an orthogonal matrix (one with orthonormal columns).</p>
| <p>You're right. This is the definition of an orthogonal matrix.</p>
|
matrices | <p>Is there an intuitive meaning for the <a href="http://mathworld.wolfram.com/SpectralNorm.html">spectral norm</a> of a matrix? Why would an algorithm calculate the relative recovery in spectral norm between two images (i.e. one before the algorithm and the other after)? Thanks</p>
| <p>The spectral norm (also known as the induced 2-norm) is the maximum singular value of a matrix. Intuitively, you can think of it as the maximum 'scale' by which the matrix can 'stretch' a vector.</p>
<p>The maximum singular value is the square root of the largest eigenvalue of $A^*A$; if the matrix is symmetric/Hermitian, it equals the largest absolute value of an eigenvalue of $A$.</p>
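<p>The "maximum stretch" reading is easy to test: over many random unit vectors $x$, the length of $Ax$ never exceeds the spectral norm and gets close to it (a sketch):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
spec = np.linalg.norm(A, 2)          # largest singular value

x = rng.standard_normal((5, 100_000))
x /= np.linalg.norm(x, axis=0)       # columns are random unit vectors

stretch = np.linalg.norm(A @ x, axis=0)
assert stretch.max() <= spec + 1e-12
print(stretch.max(), spec)           # close, and never above
</code></pre>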
| <p>Let us consider the singular value decomposition (SVD) of a matrix <span class="math-container">$X = U S V^T$</span>, where <span class="math-container">$U$</span> and <span class="math-container">$V$</span> are matrices containing the left and right singular vectors of <span class="math-container">$X$</span> in their columns. <span class="math-container">$S$</span> is a diagonal matrix containing the singular values. A intuitive way to think of the norm of <span class="math-container">$X$</span> is in terms of the norm of the singular value vector in the diagonal of <span class="math-container">$S$</span>. This is because the singular values measure the <em>energy</em> of the matrix in various principal directions.</p>
<p>One can now extend the <span class="math-container">$p$</span>-norm for a finite-dimensional vector to a <span class="math-container">$m\times n$</span> matrix by working on this singular value vector:</p>
<p><span class="math-container">\begin{align}
\|X\|_p &= \left( \sum_{i=1}^{\text{min}(m,n)} \sigma_i^p \right)^{1/p}
\end{align}</span></p>
<p>This is called the <em>Schatten norm</em> of <span class="math-container">$X$</span>. Specific choices of <span class="math-container">$p$</span> yield commonly used matrix norms:</p>
<ol>
<li><span class="math-container">$p=0$</span>: Gives the rank of the matrix (number of non-zero singular values).</li>
<li><span class="math-container">$p=1$</span>: Gives the nuclear norm (sum of absolute singular values). This is the tightest convex relaxation of the rank.</li>
<li><span class="math-container">$p=2$</span>: Gives the Frobenius norm (square root of the sum of squares of singular values).</li>
<li><span class="math-container">$p=\infty$</span>: Gives the spectral norm (max. singular value).</li>
</ol>
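<p>Each special case is a direct computation on the singular values (a sketch; the $p=0$ line uses the counting convention noted above, and the last two lines cross-check against NumPy's built-in norms):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 6))
s = np.linalg.svd(X, compute_uv=False)   # singular values

rank      = int(np.sum(s > 1e-12))       # "p = 0" counting convention
nuclear   = s.sum()                      # p = 1
frobenius = np.sqrt((s ** 2).sum())      # p = 2
spectral  = s.max()                      # p = infinity

assert np.isclose(frobenius, np.linalg.norm(X, 'fro'))
assert np.isclose(spectral, np.linalg.norm(X, 2))
</code></pre>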
|
number-theory | <p>The question is written like this:</p>
<blockquote>
<p>Is it possible to find an infinite set of points in the plane, not all on the same straight line, such that the distance between <strong>EVERY</strong> pair of points is rational?</p>
</blockquote>
<p>This would be so easy if these points could be on the same straight line, but I couldn't get any idea to solve the question above(not all points on the same straight line). I believe there must be a kind of concatenation between the points but I couldn't figure it out.</p>
<p>What I tried is totally mess. I tried to draw some triangles and to connect some points from one triangle to another, but in vain.</p>
<p><strong>Note:</strong> I want to see a real example of such an infinite set of points in the plane that can be an answer for the question. A graph for these points would be helpful.</p>
| <p>You can even find infinitely many such points on the unit circle: Let $\mathscr S$ be the set of all points on the unit circle such that $\tan \left(\frac {\theta}4\right)\in \mathbb Q$. If $(\cos(\alpha),\sin(\alpha))$ and $(\cos(\beta),\sin(\beta))$ are two points on the circle then a little geometry tells us that the distance between them is (the absolute value of) $$2 \sin \left(\frac {\alpha}2\right)\cos \left(\frac {\beta}2\right)-2 \sin \left(\frac {\beta}2\right)\cos \left(\frac {\alpha}2\right)$$ and, if the points are both in $\mathscr S$ then this is rational.</p>
<p>Details: The distance formula is an immediate consequence of the fact that, if two points on the circle have an angle $\phi$ between them, then the distance between them is (the absolute value of) $2\sin \frac {\phi}2$. For the rationality note that $$z=\tan \frac {\phi}2 \implies \cos \phi= \frac {1-z^2}{1+z^2} \quad \& \quad \sin \phi= \frac {2z}{1+z^2}$$</p>
<p>Note: Of course $\mathscr S$ is dense on the circle. So far as I am aware, it is unknown whether you can find such a set which is dense on the entire plane.</p>
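<p>Here is a concrete sample of $\mathscr S$ in exact rational arithmetic (a sketch with Python's fractions module; the chosen values of $\tan(\theta/4)$ are arbitrary rationals):</p>
<pre><code>from fractions import Fraction as F
from itertools import combinations
import math

# for t = tan(theta/4): sin(theta/2) = 2t/(1+t^2), cos(theta/2) = (1-t^2)/(1+t^2)
def half_angle(t):
    return 2 * t / (1 + t * t), (1 - t * t) / (1 + t * t)

halves = [half_angle(t) for t in (F(0), F(1, 2), F(1, 3), F(2, 5), F(3, 7))]

for (sa, ca), (sb, cb) in combinations(halves, 2):
    d = 2 * abs(sa * cb - sb * ca)        # exact rational distance
    # the circle points are (1 - 2s^2, 2sc); cross-check numerically
    pa = (1 - 2 * sa * sa, 2 * sa * ca)
    pb = (1 - 2 * sb * sb, 2 * sb * cb)
    assert math.isclose(math.dist(pa, pb), float(d))
    print(d)
</code></pre>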
| <p>Yes, it's possible. For instance, you could start with $(0,1)$ and $(0,0)$, and then put points along the $x$-axis, noting that there are infinitely many different right triangles with rational sides and one leg equal to $1$. For instance, $(3/4,0)$ will have distance $5/4$ to $(0,1)$.</p>
<p>This means that <em>most</em> of the points are on a single line (the $x$-axis), but one point, $(0,1)$, is not on that line.</p>
|
probability | <p>I give you a hat which has <span class="math-container">$10$</span> coins inside of it. <span class="math-container">$1$</span> out of the <span class="math-container">$10$</span> have two heads on it, and the rest of them are fair. You draw a coin at random from the jar and flip it <span class="math-container">$5$</span> times. If you flip heads <span class="math-container">$5$</span> times in a row, what is the probability that you get heads on your next flip?</p>
<p>I tried to approach this question by using Bayes: Let <span class="math-container">$R$</span> be the event that the coin with both heads is drawn and <span class="math-container">$F$</span> be the event that <span class="math-container">$5$</span> heads are flipped in a row. Then
<span class="math-container">$$\begin{align*}
P(R|F) &= \frac{P(F|R)P(R)}{P(F)} \\ &= \frac{1\cdot 1/10}{1\cdot 1/10 + 1/2^5\cdot 9/10} \\ &= 32/41
\end{align*}$$</span></p>
<p>Thus the probability that you get heads on the next flip is</p>
<p><span class="math-container">$$\begin{align*}
P(H|R)P(R) + P(H|R')P(R') &= 1\cdot 32/41 + 1/2\cdot (1 - 32/41) \\ &= 73/82
\end{align*}$$</span></p>
<p>However, according to my friend, this is a trick question because the flip after the first <span class="math-container">$5$</span> flips is independent of the first <span class="math-container">$5$</span> flips, and therefore the correct probability is
<span class="math-container">$$1\cdot 1/10+1/2\cdot 9/10 = 11/20$$</span></p>
<p>Is this true or not?</p>
| <p>To convince your friend that he is wrong, you could modify the question:</p>
<blockquote>
<p>A hat contains ten 6-sided dice. Nine of the dice are fair, with faces 1, 2, 3, 4, 5, 6, and the remaining die has 6 on every face. Randomly choose one die, toss it <span class="math-container">$1000$</span> times, and write down the results. Repeat this procedure many times. Now look at the trials in which the first <span class="math-container">$999$</span> tosses were all 6's: what proportion of those trials have the <span class="math-container">$1000^\text{th}$</span> toss also being a 6?</p>
</blockquote>
<p>Common sense tells us that the proportion is extremely close to <span class="math-container">$1$</span>. <span class="math-container">$\left(\text{The theoretical proportion is}\dfrac{6^{1000}+9}{6^{1000}+54}.\right)$</span></p>
<p>But according to your friend's method, the theoretical proportion would be <span class="math-container">$\dfrac{1}{10}(1)+\dfrac{9}{10}\left(\dfrac{1}{6}\right)=\dfrac{1}{4}$</span>. I think your friend would see that this is clearly wrong.</p>
| <p>The main idea behind this problem is a topic known as <em>predictive posterior probability</em>.</p>
<p>Let <span class="math-container">$P$</span> denote the probability of the coin you randomly selected landing on heads.</p>
<p>Then <span class="math-container">$P$</span> is a random variable supported on <span class="math-container">$\{0.5,1\}$</span> and has pdf <span class="math-container">$$\mathbb{P}(P=0.5)=0.9\\ \mathbb{P}(P=1)=0.1$$</span></p>
<p>Let <span class="math-container">$E=\{HHHHH\}$</span> denote the "evidence" you witness. The <em>posterior</em> probability that <span class="math-container">$P=0.5$</span> given this evidence <span class="math-container">$E$</span> can be evaluated using Bayes' rule:</p>
<p><span class="math-container">$$\begin{eqnarray*}\mathbb{P}(P=0.5|E) &=& \frac{\mathbb{P}(E|P=0.5)\mathbb{P}(P=0.5)}{\mathbb{P}(E|P=0.5)\mathbb{P}(P=0.5)+\mathbb{P}(E|P=1)\mathbb{P}(P=1)} \\ &=& \frac{(0.5)^5\times 0.9}{(0.5)^5\times 0.9+1^5\times 0.1} \\ &=& \frac{9}{41}\end{eqnarray*}$$</span> The <em>posterior</em> pdf of <span class="math-container">$P$</span> given <span class="math-container">$E$</span> is supported on <span class="math-container">$\{0.5,1\}$</span> and has pdf <span class="math-container">$$\mathbb{P}\left(P=0.5|E\right)=\frac{9}{41} \\ \mathbb{P}\left(P=1|E\right)=\frac{32}{41}$$</span> Finally, the <em>posterior predictive probability</em> that we flip heads again given this evidence <span class="math-container">$E$</span> is <span class="math-container">$$\begin{eqnarray*}\mathbb{P}(\text{Next flip heads}|E) &=& \mathbb{P}(\text{Next flip heads}|E,P=0.5)\mathbb{P}(P=0.5|E)+\mathbb{P}(\text{Next flip heads}|E,P=1)\mathbb{P}(P=1|E) \\ &=& \frac{1}{2}\times \frac{9}{41}+1\times \frac{32}{41} \\ &=& \frac{73}{82}\end{eqnarray*}$$</span></p>
|
geometry | <p>What is the simplest way to find out the area of a triangle if the coordinates of the three vertices are given in $x$-$y$ plane? </p>
<p>One approach is to find the length of each side from the coordinates given and then apply <a href="https://en.wikipedia.org/wiki/Heron's_formula" rel="noreferrer"><em>Heron's formula</em></a>. Is this the best way possible? </p>
<p>Is it possible to compare the area of triangles with their coordinates provided without actually calculating side lengths?</p>
| <p>What you are looking for is called the <a href="http://en.wikipedia.org/wiki/Shoelace_formula" rel="nofollow noreferrer">shoelace formula</a>:</p>
<p><span class="math-container">\begin{align*}
\text{Area}
&= \frac12 \big| (x_A - x_C) (y_B - y_A) - (x_A - x_B) (y_C - y_A) \big|\\
&= \frac12 \big| x_A y_B + x_B y_C + x_C y_A - x_A y_C - x_C y_B - x_B y_A \big|\\
&= \frac12 \Big|\det \begin{bmatrix}
x_A & x_B & x_C \\
y_A & y_B & y_C \\
1 & 1 & 1
\end{bmatrix}\Big|
\end{align*}</span></p>
<p>The last line indicates how to generalize the formula to higher dimensions.</p>
<p><strong>PS.</strong> Another generalization of the formula is obtained by noting that it follows from a discrete version of the <a href="https://en.wikipedia.org/wiki/Green%27s_theorem" rel="nofollow noreferrer">Green's theorem</a>:</p>
<p><span class="math-container">$$ \text{Area} = \iint_{\text{domain}}dx\,dy = \frac12\oint_{\text{boundary}}x\,dy - y\,dx $$</span></p>
<p>Thus the signed (oriented) area of a polygon with <span class="math-container">$n$</span> vertices <span class="math-container">$(x_i,y_i)$</span> is given by</p>
<p><span class="math-container">$$ \text{Area} = \frac12\sum_{i=0}^{n-1} x_i y_{i+1} - x_{i+1} y_i $$</span></p>
<p>where indices are added modulo <span class="math-container">$n$</span>.</p>
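<p>The polygon version is a few lines of code (a sketch; vertices in order around the boundary, signed area returned):</p>
<pre><code>def shoelace_area(vertices):
    """Signed area of a polygon given as [(x0, y0), (x1, y1), ...]."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]   # indices taken modulo n
        s += x0 * y1 - x1 * y0
    return s / 2

print(shoelace_area([(0, 0), (4, 0), (0, 3)]))   # 6.0 for a 3-4-5 triangle
</code></pre>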
<p>You know that <strong>AB × AC</strong> is a vector perpendicular to the plane ABC such that |<strong>AB × AC</strong>| = Area of the parallelogram ABA’C. Thus the area of the triangle ABC is equal to ½ |<strong>AB × AC</strong>|.</p>
<p><a href="https://i.sstatic.net/3oDbh.png" rel="noreferrer"><img src="https://i.sstatic.net/3oDbh.png" alt="enter image description here" /></a></p>
<p>From <strong>AB</strong>= <span class="math-container">$(x_2 -x_1, y_2-y_1)$</span>; <strong>AC</strong>= <span class="math-container">$(x_3-x_1, y_3-y_1)$</span>, we deduce then</p>
<p>Area of <span class="math-container">$\Delta ABC$</span> = <span class="math-container">$\frac12$$[(x_2-x_1)(y_3-y_1)- (x_3-x_1)(y_2-y_1)]$</span></p>
|
linear-algebra | <p>I am looking for an intuitive reason for a projection matrix of an orthogonal projection to be symmetric. The algebraic proof is straightforward yet somewhat unsatisfactory.</p>
<p>Take for example another property: $P=P^2$. It's clear that applying the projection one more time shouldn't change anything and hence the equality.</p>
<p>So what's the reason behind $P^T=P$?</p>
| <p>In general, if $P = P^2$, then $P$ is the projection onto $\operatorname{im}(P)$ along $\operatorname{ker}(P)$, so that $$\mathbb{R}^n = \operatorname{im}(P) \oplus \operatorname{ker}(P),$$ but $\operatorname{im}(P)$ and $\operatorname{ker}(P)$ need not be orthogonal subspaces. Given that $P = P^2$, you can check that $\operatorname{im}(P) \perp \operatorname{ker}(P)$ if and only if $P = P^T$, justifying the terminology "orthogonal projection."</p>
| <p>There are some nice and succinct answers already. If you'd like even more intuition with as little math and higher level linear algebra concepts as possible, consider two arbitrary vectors <span class="math-container">$v$</span> and <span class="math-container">$w$</span>.</p>
<h2>Simplest Answer</h2>
<p>Take the dot product of one vector with the projection of the other vector.
<span class="math-container">$$
(P v) \cdot w
$$</span>
<span class="math-container">$$
v \cdot (P w)
$$</span></p>
<p>In both dot products above, one of the terms (<span class="math-container">$P v$</span> or <span class="math-container">$P w$</span>) lies entirely in the subspace you project onto. Therefore, both dot products ignore every vector component that is <em>not</em> in this subspace - they consider only components <em>in</em> the subspace. This means both dot products are equal to each other, and are in fact equal to:
<span class="math-container">$$
(P v) \cdot (P w)
$$</span></p>
<p>Since <span class="math-container">$(P v) \cdot w = v \cdot (P w)$</span>, it doesn't matter whether we apply the projection matrix to the first or second argument of the dot product operation. Some simple identities then imply <span class="math-container">$P = P^T$</span>, so <span class="math-container">$P$</span> is symmetric (See step 2 below if you aren't familiar with this property).</p>
<h2>Less intuitive Answer</h2>
<p>If the above explanation isn't intuitive, we can use a little more math.</p>
<h1>Step 1.</h1>
<p>First, prove that the two dot products above are equal.</p>
<p>Decompose <span class="math-container">$v$</span> and <span class="math-container">$w$</span>:
<span class="math-container">$$
v = v_p + v_n
$$</span>
<span class="math-container">$$
w = w_p + w_n
$$</span></p>
<p>In this notation the <span class="math-container">$p$</span> subscript indicates the component of the vector in the subspace of <span class="math-container">$P$</span>, and the <span class="math-container">$n$</span> subscript indicates the component of the vector outside (normal to) the subspace of <span class="math-container">$P$</span>.</p>
<p>The projection of a vector lies in a subspace. The dot product of anything in this subspace with anything orthogonal to this subspace is zero. We use this fact on the dot product of one vector with the projection of the other vector:
<span class="math-container">$$
(P v) \cdot w \hspace{1cm} v \cdot (P w)
$$</span>
<span class="math-container">$$
v_p \cdot w \hspace{1cm} v \cdot w_p
$$</span>
<span class="math-container">$$
v_p \cdot (w_p + w_n) \hspace{1cm} (v_p + v_n) \cdot w_p
$$</span>
<span class="math-container">$$
v_p \cdot w_p + v_p \cdot w_n \hspace{1cm} v_p \cdot w_p + v_n \cdot w_p
$$</span>
<span class="math-container">$$
v_p \cdot w_p \hspace{1cm} v_p \cdot w_p
$$</span>
Therefore
<span class="math-container">$$
(Pv) \cdot w = v \cdot (Pw)
$$</span></p>
<h1>Step 2.</h1>
<p>Next, we can show that a consequence of this equality is that the projection matrix P must be symmetric. Here we begin by expressing the dot product in terms of transposes and matrix multiplication (using the identity <span class="math-container">$x \cdot y = x^T y$</span> ):
<span class="math-container">$$
(P v) \cdot w = v \cdot (P w)
$$</span>
<span class="math-container">$$
(P v)^T w = v^T (P w)
$$</span>
<span class="math-container">$$
v^T P^T w = v^T P w
$$</span>
Since v and w can be any vectors, the above equality implies:
<span class="math-container">$$
P^T = P
$$</span></p>
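<p>Both defining properties, and the identity from Step 1, can be confirmed numerically for the standard orthogonal projector onto the column space of a matrix $A$, namely $P=A(A^TA)^{-1}A^T$ (a sketch):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 2))          # a random 2-dim subspace of R^5
P = A @ np.linalg.inv(A.T @ A) @ A.T     # orthogonal projector onto col(A)

assert np.allclose(P @ P, P)             # idempotent
assert np.allclose(P, P.T)               # symmetric

v, w = rng.standard_normal(5), rng.standard_normal(5)
assert np.isclose((P @ v) @ w, v @ (P @ w))   # (Pv).w == v.(Pw)
</code></pre>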
|
differentiation | <p>Are continuous functions always differentiable? Are there any examples in dimension <span class="math-container">$n > 1$</span>?</p>
| <p>No. <a href="http://en.wikipedia.org/wiki/Karl_Weierstrass" rel="noreferrer">Weierstraß</a> gave in 1872 the first published example of a <a href="http://en.wikipedia.org/wiki/Weierstrass_function" rel="noreferrer">continuous function that's nowhere differentiable</a>.</p>
| <p>No, consider the example of $f(x) = |x|$. This function is continuous but not differentiable at $x = 0$.</p>
<p>There are even more bizare functions that are not differentiable everywhere, yet still continuous. This class of functions lead to the development of the study of fractals.</p>
|
linear-algebra | <p>Here's a cute problem that was frequently given by the late Herbert Wilf during his talks. </p>
<p><strong>Problem:</strong> Let $A$ be an $n \times n$ matrix with entries from $\{0,1\}$ having all positive eigenvalues. Prove that all of the eigenvalues of $A$ are $1$.</p>
<p><strong>Proof:</strong></p>
<blockquote class="spoiler">
<p> Use the AM-GM inequality to relate the trace and determinant.</p>
</blockquote>
<p>Is there any other proof?</p>
| <p>If one wants to use the AM-GM inequality, you could proceed as follows:
Since $A$ has all $1$'s or $0$'s on the diagonal, it follows that $tr(A)\leq n$.
Now calculating the determinant by expanding along any row/column, one can easily see that the determinant is an integer, since it is a sum of products of matrix entries (up to sign). Since all eigenvalues are positive, this integer must be positive. AM-GM inequality implies
$$det(A)^{\frac{1}{n}}=\left(\prod_{i}\lambda_{i}\right)^{\frac{1}{n}}\leq \frac{1}{n}\sum_{i=1}^{n}\lambda_{i}\leq 1.$$
Since $det(A)\neq 0$, and $m^{\frac{1}{n}}>1$ for $m>1$, the above inequality forces $det(A)=1$.
We therefore have equality in the AM-GM inequality, which happens precisely when $\lambda_{i}=\lambda_{j}$ for all $i,j$. The common value $\lambda>0$ then satisfies $\lambda^{n}=\det(A)=1$, so every eigenvalue equals $1$.</p>
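<p>The claim can also be verified exhaustively for small $n$ (a sketch over all $2^9$ binary $3\times3$ matrices; the tolerances are deliberately loose because eigenvalues of defective matrices are computed with limited accuracy):</p>
<pre><code>import numpy as np
from itertools import product

for bits in product([0, 1], repeat=9):
    A = np.array(bits, dtype=float).reshape(3, 3)
    eig = np.linalg.eigvals(A)
    # keep matrices whose spectrum is (numerically) real and positive
    if np.all(np.abs(eig.imag) < 1e-6) and np.all(eig.real > 1e-6):
        assert np.allclose(eig.real, 1.0, atol=1e-4)
</code></pre>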
| <p>Suppose that A has a column with only zero entries; then we must have zero as an eigenvalue (e.g. by expanding det(A-rI) along that column). So any matrix satisfying the OP's requirements must have a 1 in each column, and the same holds for the rows by the same argument. Now suppose that there is a linear relationship between the rows; then some linear combination of the rows gives rise to a new matrix with a zero row. We've already seen that this is not allowed, so the rows must be linearly independent. I am trying to force the form of the permissible matrices to be restricted enough to give the result. Linear independence of the rows gives us a glimpse at the invertibility of such matrices, but alas not their diagonalizability. The minimal polynomial $(1-r)^n$ results from upper or lower triangular matrices with ones along the diagonal, and I suspect that we may be able to complete this proof by looking at what happens to this polynomial when there are deviations from the triangular shape. The result linked by user1551 may be the key. Trying to gain some intuition about the possibilities leads one to match the binomial theorem with Newton's identities: <a href="http://en.wikipedia.org/wiki/Newton%27s_identities#Expressing_elementary_symmetric_polynomials_in_terms_of_power_sums" rel="nofollow">http://en.wikipedia.org/wiki/Newton%27s_identities#Expressing_elementary_symmetric_polynomials_in_terms_of_power_sums</a></p>
<p>and the fact that the trace must be $n$(diagonal ones) and the determinant 1. I would like to show that any deviation from this minimal polynomial must lead to a non-positive eigenvalue. Two aspects of the analysis, a combinatorial argument to show what types of modifications(from triangular) are permissible while maintaining row/column independence and looking at the geometrical effects of the non-dominant terms in the resulting characteristic polynomials. Maybe some type of induction argument will surface here. </p>
|
matrices | <p>If the matrix is positive definite, then all its eigenvalues are strictly positive. </p>
<p>Is the converse also true?<br>
That is, if the eigenvalues are strictly positive, then matrix is positive definite?<br>
Can you give example of $2 \times 2$ matrix with $2$ positive eigenvalues but is not positive definite?</p>
| <p>I think this is false. Let <span class="math-container">$A = \begin{pmatrix} 1 & -3 \\ 0 & 1 \end{pmatrix}$</span> be a 2x2 matrix, in the canonical basis of <span class="math-container">$\mathbb R^2$</span>. Then A has a double eigenvalue <span class="math-container">$\lambda=1$</span>. If <span class="math-container">$v=\begin{pmatrix}1\\1\end{pmatrix}$</span>, then <span class="math-container">$\langle v, Av \rangle < 0$</span>.</p>
<p>The point is that the matrix can have all its eigenvalues strictly positive, but it does not follow that it is positive definite.</p>
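<p>In code (a sketch; NumPy confirms both the spectrum and the negative quadratic value):</p>
<pre><code>import numpy as np

A = np.array([[1, -3],
              [0,  1]])
v = np.array([1, 1])

print(np.linalg.eigvals(A))   # both eigenvalues are 1, hence positive
print(v @ A @ v)              # -1: yet v^T A v < 0, so A is not positive definite
</code></pre>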
| <p>This question does a great job of illustrating the problem with thinking about these things in terms of coordinates. The thing that is positive-definite is not a matrix $M$ but the <em>quadratic form</em> $x \mapsto x^T M x$, which is a very different beast from the linear transformation $x \mapsto M x$. For one thing, the quadratic form does not depend on the antisymmetric part of $M$, so using an asymmetric matrix to define a quadratic form is redundant. And there is <em>no reason</em> that an asymmetric matrix and its symmetrization need to be at all related; in particular, they do not need to have the same eigenvalues. </p>
|
differentiation | <p>I am a Software Engineering student and this year I learned about how CPUs work, it turns out that electronic engineers and I also see it a lot in my field, we do use derivatives with discontinuous functions. For instance in order to calculate the optimal amount of ripple adders so as to minimise the execution time of the addition process:</p>
<p><span class="math-container">$$\text{ExecutionTime}(n, k) = \Delta(4k+\frac{2n}{k}-4)$$</span>
<span class="math-container">$$\frac{d\,\text{ExecutionTime}(n, k)}{dk}=4\Delta-\frac{2n\Delta}{k^2}=0$$</span>
<span class="math-container">$$k= \sqrt{\frac{n}{2}}$$</span></p>
<p>where <span class="math-container">$n$</span> is the number of bits in the numbers to add, <span class="math-container">$k$</span> is the amount of adders in ripple and <span class="math-container">$\Delta$</span> is the "delta gate" (the time that takes to a gate to operate).</p>
<p>Clearly you can see that the execution time function is not continuous at all because <span class="math-container">$k$</span> is a natural number and so is <span class="math-container">$n$</span>.
This is driving me crazy. On the one hand, I understand that I can analyse the function as a continuous one and get results that way, and indeed I think that's what we do ("I think", that's why I am asking). But my intuition and knowledge of mathematical analysis tell me that this is completely wrong: the function is not continuous and never will be, so the derivative with respect to <span class="math-container">$k$</span> or <span class="math-container">$n$</span> does not exist, because there is no rate of change.</p>
<p>If someone could explain me if my first guess is correct or not and why, I'd appreciate it a lot, thanks for reading and helping!</p>
| <p>In general, computing the extrema of a continuous function and rounding them to integers does <em>not</em> yield the extrema of the restriction of that function to the integers. It is not hard to construct examples.</p>
<p>However, your particular function is <em>convex</em> on the domain <span class="math-container">$k>0$</span>. In this case the extremum is at one or both of the two integers nearest to the <em>unique</em> extremum of the continuous function.</p>
<p>It would have been nice to explicitly state this fact when determining the minimum by this method, as it is really not obvious, but unfortunately such subtleties are often forgotten (or never known in the first place) in such applied fields. So I commend you for noticing the problem and asking!</p>
| <p>The main question here seems to be "why can we differentiate a function only defined on integers?". The proper answer, as divined by the OP, is that we can't--there is no unique way to define such a derivative, because we can interpolate the function in many different ways. However, in the cases that you are seeing, what we are really interested in is not the derivative of the function, per se, but rather the extrema of the function. The derivative is just a tool used to find the extrema.</p>
<p>So what's really going on here is that we start out with a function <span class="math-container">$f:\mathbb{N}\rightarrow \mathbb{R}$</span> defined only on positive integers, and we implicitly <em>extend</em> <span class="math-container">$f$</span> to another function <span class="math-container">$\tilde{f}:\mathbb{R}\rightarrow\mathbb{R}$</span> defined on all real numbers. By "extend" we mean that values of <span class="math-container">$\tilde{f}$</span> coincide with those of <span class="math-container">$f$</span> on the integers. Now, here's the crux of the matter: If we can show that there is some integer <span class="math-container">$n$</span> such that <span class="math-container">$\tilde{f}(n)\geq \tilde{f}(m)$</span> for all integers <span class="math-container">$m$</span>, i.e. <span class="math-container">$n$</span> is a maximum of <span class="math-container">$\tilde{f}$</span> <em>over the integers</em>, then we know the same is true for <span class="math-container">$f$</span>, our original function. The advantage of doing this is that now can use calculus and derivatives to analyze <span class="math-container">$\tilde{f}$</span>. It doesn't matter how we extend <span class="math-container">$f$</span> to <span class="math-container">$\tilde{f}$</span>, because at the end of the day we're are only using <span class="math-container">$\tilde{f}$</span> as a tool to find properties of <span class="math-container">$f$</span>, like maxima.</p>
<p>In many cases, there is a natural way to extend <span class="math-container">$f$</span> to <span class="math-container">$\tilde{f}$</span>. In your case, <span class="math-container">$f=\text{ExecutionTime}$</span>, and to extend it you just take the formula <span class="math-container">$\Delta \left(4k + \frac{2n}{k} - 4\right)$</span> and allow <span class="math-container">$n$</span> and <span class="math-container">$k$</span> to be real-valued instead of integer-valued. You could have extended it a different way--e.g. <span class="math-container">$\Delta \left(4k + \frac{2n}{k} - 4\right) + \sin(2\pi k)$</span> is also a valid extension of <span class="math-container">$\text{ExecutionTime}(n,k)$</span>, but this is not as convenient. And all we are trying to do is find a convenient way to analyze the original, integer-valued function, so if there's a straightforward way to do it we might as well use it.</p>
<hr />
<p>As an illustrative example, an interesting (and non-trivial) case of this idea of extending an integer-valued function to a continuous-valued one is the <a href="https://en.wikipedia.org/wiki/Gamma_function" rel="noreferrer">gamma function</a> <span class="math-container">$\Gamma$</span>, which is a continuous extension of the integer-valued factorial function. <span class="math-container">$\Gamma$</span> is not the only way to extend the factorial function, but it is for most purposes (in fact, all purposes that I know of) the most convenient.</p>
|
geometry | <p>I need to find the volume of the region defined by
$$\begin{align*}
a^2+b^2+c^2+d^2&\leq1,\\
a^2+b^2+c^2+e^2&\leq1,\\
a^2+b^2+d^2+e^2&\leq1,\\
a^2+c^2+d^2+e^2&\leq1 &\text{ and }\\
b^2+c^2+d^2+e^2&\leq1.
\end{align*}$$
I don't necessarily need a full solution but any starting points would be very useful.</p>
| <p>It turns out that this is much easier to do in <a href="http://en.wikipedia.org/wiki/Hyperspherical_coordinates#Hyperspherical_coordinates">hyperspherical coordinates</a>. I'll deviate somewhat from convention by swapping the sines and cosines of the angles in order to get a more pleasant integration region, so the relationship between my coordinates and the Cartesian coordinates is</p>
<p>$$
\begin{eqnarray}
a &=& r \sin \phi_1\\
b &=& r \cos \phi_1 \sin \phi_2\\
c &=& r \cos \phi_1 \cos \phi_2 \sin \phi_3\\
d &=& r \cos \phi_1 \cos \phi_2 \cos \phi_3 \sin \phi_4\\
e &=& r \cos \phi_1 \cos \phi_2 \cos \phi_3 \cos \phi_4\;,
\end{eqnarray}
$$</p>
<p>and the Jacobian determinant is $r^4\cos^3\phi_1\cos^2\phi_2\cos\phi_3$. As stated in my other answer, we can impose positivity and a certain ordering on the Cartesian coordinates by symmetry, so the desired volume is $2^55!$ times the volume for positive Cartesian coordinates with $a\le b\le c\le d\le e$. This translates into the constraints $0 \le \phi_4\le \pi/4$ and $0\le\sin\phi_i\le\cos\phi_i\sin\phi_{i+1}$ for $i=1,2,3$, and the latter becomes $0\le\phi_i\le\arctan\sin\phi_{i+1}$. The boundary of the volume also takes a simple form: Because of the ordering of the coordinates, the only relevant constraint is $b^2+c^2+d^2+e^2\le1$, and this becomes $r^2\cos^2\phi_1\le1$, so $0\le r\le\sec\phi_1$. Then the volume can readily be evaluated with a little help from our electronic friends:</p>
<p>$$
\begin{eqnarray}
V_5
&=&
2^55!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_4}\int_0^{\arctan\sin\phi_3}\int_0^{\arctan\sin\phi_2}\int_0^{\sec\phi_1}r^4\cos^3\phi_1\cos^2\phi_2\cos\phi_3\mathrm dr\mathrm d\phi_1\mathrm d\phi_2\mathrm d\phi_3\mathrm d\phi_4
\\
&=&
2^55!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_4}\int_0^{\arctan\sin\phi_3}\int_0^{\arctan\sin\phi_2}\frac15\sec^2\phi_1\cos^2\phi_2\cos\phi_3\mathrm d\phi_1\mathrm d\phi_2\mathrm d\phi_3\mathrm d\phi_4
\\
&=&
2^55!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_4}\int_0^{\arctan\sin\phi_3}\frac15\sin\phi_2\cos^2\phi_2\cos\phi_3\mathrm d\phi_2\mathrm d\phi_3\mathrm d\phi_4
\\
&=&
2^55!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_4}\left[-\frac1{15}\cos^3\phi_2\right]_0^{\arctan\sin\phi_3}\cos\phi_3\mathrm d\phi_3\mathrm d\phi_4
\\
&=&
2^55!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_4}\frac1{15}\left(1-\left(1+\sin^2\phi_3\right)^{-3/2}\right)\cos\phi_3\mathrm d\phi_3\mathrm d\phi_4
\\
&=&
2^55!\int_0^{\pi/4}\left[\frac1{15}\left(1-\left(1+\sin^2\phi_3\right)^{-1/2}\right)\sin\phi_3\right]_0^{\arctan\sin\phi_4}\mathrm d\phi_4
\\
&=&
2^55!\int_0^{\pi/4}\frac1{15}\left(\left(1+\sin^2\phi_4\right)^{-1/2}-\left(1+2\sin^2\phi_4\right)^{-1/2}\right)\sin\phi_4\mathrm d\phi_4
\\
&=&
2^55!\left[\frac1{15}\left(\frac1{\sqrt2}\arctan\frac{\sqrt2\cos\phi_4}{\sqrt{1+2\sin^2\phi_4}}-\arctan\frac{\cos \phi_4}{\sqrt{1+\sin^2\phi_4}}\right)\right]_0^{\pi/4}
\\
&=&
2^8\left(\frac1{\sqrt2}\arctan\frac1{\sqrt2}-\arctan\frac1{\sqrt3}-\frac1{\sqrt2}\arctan\sqrt2+\arctan1\right)
\\
&=&
2^8\left(\frac\pi{12}+\frac{\mathrm{arccot}\sqrt2-\arctan\sqrt2}{\sqrt2}\right)
\\
&\approx&
5.5035\;,
\end{eqnarray}
$$</p>
<p>which is consistent with my numerical results.</p>
<p>The same approach readily yields the volume in four dimensions:</p>
<p>$$
\begin{eqnarray}
V_4
&=&
2^44!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_3}\int_0^{\arctan\sin\phi_2}\int_0^{\sec\phi_1}r^3\cos^2\phi_1\cos\phi_2\mathrm dr\mathrm d\phi_1\mathrm d\phi_2\mathrm d\phi_3
\\
&=&
2^44!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_3}\int_0^{\arctan\sin\phi_2}\frac14\sec^2\phi_1\cos\phi_2\mathrm d\phi_1\mathrm d\phi_2\mathrm d\phi_3
\\
&=&
2^44!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_3}\frac14\sin\phi_2\cos\phi_2\mathrm d\phi_2\mathrm d\phi_3
\\
&=&
2^44!\int_0^{\pi/4}\left[\frac18\sin^2\phi_2\right]_0^{\arctan\sin\phi_3}\mathrm d\phi_3
\\
&=&
2^44!\int_0^{\pi/4}\frac18\frac{\sin^2\phi_3}{1+\sin^2\phi_3}\mathrm d\phi_3
\\
&=&
2^44!\left[\frac18\left(\phi_3-\frac1{\sqrt2}\arctan\left(\sqrt2\tan\phi_3\right)\right)\right]_0^{\pi/4}
\\
&=&
12\left(\pi-2\sqrt2\arctan\sqrt2\right)
\\
&\approx&
5.2746\;.
\end{eqnarray}
$$</p>
<p>The calculation for three dimensions becomes almost trivial:</p>
<p>$$
\begin{eqnarray}
V_3
&=&
2^33!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_2}\int_0^{\sec\phi_1}r^2\cos\phi_1\mathrm dr\mathrm d\phi_1\mathrm d\phi_2
\\
&=&
2^33!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_2}\frac13\sec^2\phi_1\mathrm d\phi_1\mathrm d\phi_2
\\
&=&
2^33!\int_0^{\pi/4}\frac13\sin\phi_2\mathrm d\phi_2
\\
&=&
8(2-\sqrt2)
\\
&\approx&
4.6863\;.
\end{eqnarray}
$$</p>
<p>With all these integrals miraculously working out, one might be tempted to conjecture that there's a pattern here, with a closed form for all dimensions, and perhaps even that the sequence $4,4.69,5.27,5.50,\dotsc$ monotonically converges. However, that doesn't seem to be the case. For six dimensions, the integrals become intractable:</p>
<p>$$
\begin{eqnarray}
V_6
&=&
2^66!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_5}\int_0^{\arctan\sin\phi_4}\int_0^{\arctan\sin\phi_3}\int_0^{\arctan\sin\phi_2}\int_0^{\sec\phi_1}r^5\cos^4\phi_1\cos^3\phi_2\cos^2\phi_3\cos\phi_4\mathrm dr\mathrm d\phi_1\mathrm d\phi_2\mathrm d\phi_3\mathrm d\phi_4\mathrm d\phi_5
\\
&=&
2^66!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_5}\int_0^{\arctan\sin\phi_4}\int_0^{\arctan\sin\phi_3}\int_0^{\arctan\sin\phi_2}\frac16\sec^2\phi_1\cos^3\phi_2\cos^2\phi_3\cos\phi_4\mathrm d\phi_1\mathrm d\phi_2\mathrm d\phi_3\mathrm d\phi_4\mathrm d\phi_5
\\
&=&
2^66!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_5}\int_0^{\arctan\sin\phi_4}\int_0^{\arctan\sin\phi_3}\frac16\sin\phi_2\cos^3\phi_2\cos^2\phi_3\cos\phi_4\mathrm d\phi_2\mathrm d\phi_3\mathrm d\phi_4\mathrm d\phi_5
\\
&=&
2^66!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_5}\int_0^{\arctan\sin\phi_4}\left[-\frac1{24}\cos^4\phi_2\right]_0^{\arctan\sin\phi_3}\cos^2\phi_3\cos\phi_4\mathrm d\phi_3\mathrm d\phi_4\mathrm d\phi_5
\\
&=&
2^66!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_5}\int_0^{\arctan\sin\phi_4}\frac1{24}\left(1-\left(1+\sin^2\phi_3\right)^{-2}\right)\cos^2\phi_3\cos\phi_4\mathrm d\phi_3\mathrm d\phi_4\mathrm d\phi_5
\\
&=&
2^66!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_5}\left[\frac1{48}\left(\phi_3+\sin\phi_3\cos\phi_3-\frac1{\sqrt2}\arctan\left(\sqrt2\tan\phi_3\right)-\frac{\sin\phi_3\cos\phi_3}{1+\sin^2\phi_3}\right)\right]_0^{\arctan\sin\phi_4}\cos\phi_4\mathrm d\phi_3\mathrm d\phi_4\mathrm d\phi_5
\\
&=&
2^66!\int_0^{\pi/4}\int_0^{\arctan\sin\phi_5}\frac1{48}\left(\arctan \sin \phi_4 + \frac{\sin \phi_4}{1 + \sin^2 \phi_4}-\frac1{\sqrt2}\arctan\left(\sqrt2\sin \phi_4\right)-\frac{\sin \phi_4}{1 + 2 \sin^2\phi_4}\right) \cos \phi_4\mathrm d\phi_4\mathrm d\phi_5
\\
&=&
2^66!\int_0^{\pi/4}\left[\frac1{96} \sin\phi_4 \left(2 \arctan\sin\phi_4-\sqrt2\arctan\left(\sqrt2 \sin\phi_4\right)\right)\right]_0^{\arctan\sin\phi_5}\mathrm d\phi_5
\\
&=&
480\int_0^{\pi/4}\sin\arctan\sin\phi_5 \left(2 \arctan\sin\arctan\sin\phi_5-\sqrt2\arctan\left(\sqrt2 \sin\arctan\sin\phi_5\right)\right)\mathrm d\phi_5
\\
&\approx&
5.3361
\;.
\end{eqnarray}
$$</p>
<p>Wolfram|Alpha doesn't find a closed form for that last integral (and I don't blame it), so it seems that you may have asked for the last of these volumes that can be given in closed form. Note that $V_2<V_3<V_4<V_5>V_6$. The results from Monte Carlo integration for higher dimensions show a monotonic decrease thereafter. This is also the behaviour of the volume of the unit hypersphere:</p>
<p>$$
\begin{array}{|c|c|c|c|c|c|}
d&\text{sphere}&\text{cylinders}&\text{sphere}&\text{cylinders}&\text{ratio}\\
\hline
2&\pi&4&3.1416&4&1.2732\\
3&4\pi/3&8(2-\sqrt2)&4.1888&4.6863&1.1188\\
4&\pi^2/2&12\left(\pi-2\sqrt2\arctan\sqrt2\right)&4.9348&5.2746&1.0689\\
5&8\pi^2/15&2^8\left(\frac\pi{12}+\frac{\mathrm{arccot}\sqrt2-\arctan\sqrt2}{\sqrt2}\right)&5.2638&5.5036&1.0456\\
6&\pi^3/6&&5.1677&5.3361&1.0326\\
7&16\pi^3/105&&4.7248&4.8408&1.0246\\
8&\pi^4/24&&4.0587&4.1367&1.0192\\
\hline
\end{array}
$$</p>
<p>The ratio seems to converge to $1$ fairly rapidly, so in high dimensions almost all of the intersection of the cylinders lies within the unit hypersphere.</p>
<p>P.S.: Inspired by leonbloy's approach, I improved the Monte Carlo integration by integrating the admissible radius over uniformly sampled directions. The standard error was less than $10^{-5}$ in all cases, and the results would have to deviate by at least three standard errors to change the rounding of the fourth digit, so the new numbers are in all likelihood the correct numbers rounded to four digits. The results show that leonbloy's estimate converges quite rapdily.</p>
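<p>The crude cube-sampling version of that Monte Carlo check is only a few lines (a sketch; it converges far more slowly than the radial method described above, but lands near $5.50$):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(5)
N = 1_000_000
x = rng.uniform(-1.0, 1.0, size=(N, 5))

sq = x ** 2
total = sq.sum(axis=1)
# each constraint drops one coordinate: total - sq[:, i] <= 1 for i = 0..4
inside = np.all(total[:, None] - sq <= 1.0, axis=1)

print(32 * inside.mean())   # cube volume 2^5 times the hit fraction
</code></pre>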
| <p>There's reflection symmetry in each of the coordinates, so the volume is $2^5$ times the volume for positive coordinates. There's also permutation symmetry among the coordinates, so the volume is $5!$ times the volume with the additional constraint $a\le b\le c\le d\le e$. Then it remains to find the integration boundaries and solve the integrals.</p>
<p>The lower bound for $a$ is $0$. The upper bound for $a$, given the above constraints, is attained when $a=b=c=d=e$, and is thus $\sqrt{1/4}=1/2$. The lower bound for $b$ is $a$, and the upper bound for $b$ is again $1/2$. Then it gets slightly more complicated. The lower bound for $c$ is $b$, but for the upper bound for $c$ we have to take $c=d=e$ with $b$ given, which yields $\sqrt{(1-b^2)/3}$. Likewise, the lower bound for $d$ is $c$, and the upper bound for $d$ is attained for $d=e$ with $b$ and $c$ given, which yields $\sqrt{(1-b^2-c^2)/2}$. Finally, the lower bound for $e$ is $d$ and the upper bound for $e$ is $\sqrt{1-b^2-c^2-d^2}$. Putting it all together, the desired volume is</p>
<p>$$V_5=2^55!\int_0^{1/2}\int_a^{1/2}\int_b^{\sqrt{(1-b^2)/3}}\int_c^{\sqrt{(1-b^2-c^2)/2}}\int_d^{\sqrt{1-b^2-c^2-d^2}}\mathrm de\mathrm dd\mathrm dc\mathrm db\mathrm da\;.$$</p>
<p>That's a bit of a nightmare to work out; Wolfram Alpha gives up on even small parts of it, so let's do the corresponding thing in $3$ and $4$ dimensions first. In $3$ dimensions, we have</p>
<p>$$
\begin{eqnarray}
V_3
&=&
2^33!\int_0^{\sqrt{1/2}}\int_a^{\sqrt{1/2}}\int_b^{\sqrt{1-b^2}}\mathrm dc\mathrm db\mathrm da
\\
&=&
2^33!\int_0^{\sqrt{1/2}}\int_a^{\sqrt{1/2}}\left(\sqrt{1-b^2}-b\right)\mathrm db\mathrm da
\\
&=&
2^33!\int_0^{\sqrt{1/2}}\frac12\left(\arcsin\sqrt{\frac12}-\arcsin a-a\sqrt{1-a^2}+a^2\right)\mathrm da
\\
&=&
2^33!\frac16\left(2-\sqrt2\right)
\\
&=&
8\left(2-\sqrt2\right)\;.
\end{eqnarray}$$</p>
<p>I've worked out part of the answer for $4$ dimensions. There are some miraculous cancellations that make me think that a) there must be a better way to do this (perhaps anon's answer, if it can be fixed) and b) this might be workable for $5$ dimensions, too. I have other things to do now, but I'll check back and if there's no correct solution yet I'll try to finish the solution for $4$ dimensions.</p>
|
geometry | <p>Source: <a href="http://www.math.uci.edu/%7Ekrubin/oldcourses/12.194/ps1.pdf" rel="noreferrer">German Mathematical Olympiad</a></p>
<h3>Problem:</h3>
<blockquote>
<p>On an arbitrarily large chessboard, a generalized knight moves by jumping p squares in one direction and q squares in a perpendicular direction, p, q > 0. Show that such a knight can return to its original position only after an even number of moves.</p>
</blockquote>
<h3>Attempt:</h3>
<p>Assume, wlog, the knight moves <span class="math-container">$q$</span> steps <strong>to the right</strong> after its <span class="math-container">$p$</span> steps. Let the valid moves for the knight be "LU", "UR", "DL", "RD", i.e. when it moves <strong>L</strong>eft, it has to go <strong>U</strong>p ("LU"), or when it goes <strong>U</strong>p, it has to go <strong>R</strong>ight ("UR"), and so on.</p>
<p>Let the knight be stationed at <span class="math-container">$(0,0)$</span>. We note that after any move its coordinates will be integer multiples of <span class="math-container">$p,q$</span>. Let its final position be <span class="math-container">$(pk, qr)$</span> for <span class="math-container">$ k,r\in\mathbb{Z}$</span>. We follow sign conventions of coordinate system.</p>
<p>Let the knight move by <span class="math-container">$-pk$</span> horizontally and <span class="math-container">$-qk$</span> vertically by repeated application of one kind of step, so its new position is <span class="math-container">$(0,q(r-k))$</span>. I am thinking that somehow I need to cancel that <span class="math-container">$q(r-k)$</span> to reach <span class="math-container">$(0,0)$</span>, but I am not able to do so.</p>
<p>Any hints please?</p>
| <p>Case I: If $p+q$ is odd, then the knight's square changes colour after each move, so we are done.</p>
<p>Case II: If $p$ and $q$ are both odd, then the $x$-coordinate changes by an odd number after every move, so it is odd after an odd number of moves. So the $x$-coordinate can be zero only after an even number of moves.</p>
<p>Case III: If $p$ and $q$ are both even, we can keep dividing each of them by $2$ until we reach Case I or Case II. (Dividing $p$ and $q$ by the same amount doesn't change the shape of the knight's path, only its size.)</p>
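<p>The statement can also be confirmed by exhaustive search over short move sequences (a sketch; it enumerates every sequence of at most five of the eight oriented moves for a few $(p,q)$ and checks that no odd-length sequence returns to the start):</p>
<pre><code>from itertools import product

def returns_in_odd_steps(p, q, max_len=5):
    moves = [(p, q), (p, -q), (-p, q), (-p, -q),
             (q, p), (q, -p), (-q, p), (-q, -p)]
    for length in range(1, max_len + 1, 2):            # odd lengths only
        for seq in product(moves, repeat=length):
            if sum(dx for dx, dy in seq) == 0 and sum(dy for dx, dy in seq) == 0:
                return True
    return False

for p, q in [(1, 2), (2, 3), (3, 3), (2, 2), (1, 4)]:
    assert not returns_in_odd_steps(p, q)
</code></pre>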
| <p>This uses complex numbers.</p>
<p>Define $z=p+qi$. Say that the knight starts at $0$ on the complex plane. Note that, in one move, the knight may add or subtract $z$, $iz$, $\bar z$, $i\bar z$ to his position.</p>
<p>Thus, at any point, the knight is at a point of the form:
$$(a+bi)z+(c+di)\bar z$$
where $a$, $b$, $c$, and $d$ are integers.</p>
<p>Note that the parity (evenness/oddness) of the quantity $a+b+c+d$ changes after every move. This means it's even after an even number of moves and odd after an odd number of moves. Also note that:
$$a+b+c+d\equiv a^2+b^2-c^2-d^2\pmod2$$
(This is because $x\equiv x^2\pmod2$ and $x\equiv-x\pmod2$ for all $x$.)</p>
<p>Now, let's say that the knight has reached its original position. Then:
\begin{align}
(a+bi)z+(c+di)\bar z&=0\\
(a+bi)z&=-(c+di)\bar z\\
|a+bi||z|&=|c+di||z|\\
|a+bi|&=|c+di|\\
\sqrt{a^2+b^2}&=\sqrt{c^2+d^2}\\
a^2+b^2&=c^2+d^2\\
a^2+b^2-c^2-d^2&=0\\
a^2+b^2-c^2-d^2&\equiv0\pmod2\\
a+b+c+d&\equiv0\pmod2
\end{align}
(In the third line we took the modulus of both sides, using $|{-w}|=|w|$ and $|\bar z|=|z|$.) Thus, the number of moves is even.</p>
<blockquote>
<p>Interestingly, this implies that $p$ and $q$ do not need to be integers. They can each be any real number. The only constraint is that we can't have $p=q=0$.</p>
</blockquote>
|
linear-algebra | <p>When someone wants to solve a system of linear equations like</p>
<p>$$\begin{cases} 2x+y=0 \\ 3x+y=4 \end{cases}\,,$$</p>
<p>they might use this logic: </p>
<p>$$\begin{align}
\begin{cases} 2x+y=0 \\ 3x+y=4 \end{cases}
\iff &\begin{cases} -2x-y=0 \\ 3x+y=4 \end{cases}
\\
\color{maroon}{\implies} &\begin{cases} -2x-y=0\\ x=4 \end{cases}
\iff \begin{cases} -2(4)-y=0\\ x=4 \end{cases}
\iff \begin{cases} y=-8\\ x=4 \end{cases}
\,.\end{align}$$</p>
<p>Then they conclude that $(x, y) = (4, -8)$ is a solution to the system. This turns out to be correct, but the logic seems flawed to me. As I see it, all this proves is that
$$
\forall{x,y\in\mathbb{R}}\quad
\bigg(
\begin{cases} 2x+y=0 \\ 3x+y=4 \end{cases}
\color{maroon}{\implies}
\begin{cases} y=-8\\ x=4 \end{cases}
\bigg)\,.
$$</p>
<p>But this statement leaves the possibility open that there is no pair $(x, y)$ in $\mathbb{R}^2$ that satisfies the system of equations.</p>
<p>$$
\text{What if}\;
\begin{cases} 2x+y=0 \\ 3x+y=4 \end{cases}
\;\text{has no solution?}
$$</p>
<p>It seems to me that to really be sure we've solved the equation, we <em>have</em> to plug back in for $x$ and $y$. I'm not talking about checking our work for simple mistakes. This seems like a matter of logical necessity. But of course, most people don't bother to plug back in, and it never seems to backfire on them. So why does no one plug back in?</p>
<p><strong>P.S.</strong> It would be great if I could understand this for systems of two variables, but I would be deeply thrilled to understand it for systems of $n$ variables. I'm starting to use Gaussian elimination on big systems in my linear algebra class, where intuition is weaker and calculations are more complex, and still no one feels the need to plug back in.</p>
| <p>You wrote this step as an implication: </p>
<blockquote>
<p>$$\begin{cases} -2x-y=0 \\ 3x+y=4 \end{cases} \implies \begin{cases} -2x-y=0\\ x=4 \end{cases}$$</p>
</blockquote>
<p>But it is in fact an equivalence:</p>
<p>$$\begin{cases} -2x-y=0 \\ 3x+y=4 \end{cases} \iff \begin{cases} -2x-y=0\\ x=4 \end{cases}$$</p>
<p>Then you have equivalences end-to-end and, as long as all steps are equivalences, you <em>proved</em> that the initial equations are equivalent to the end solutions, so you don't need to "<em>plug back</em>" and verify. Of course, carefulness is required to ensure that every step <em>is</em> in fact reversible.</p>
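<p>If you do want a cheap safety net anyway, verifying by substitution is a one-liner. A minimal numpy sketch of the system above in matrix form:</p>
<pre><code>import numpy as np

A = np.array([[2.0, 1.0], [3.0, 1.0]])   # coefficient matrix of the system
b = np.array([0.0, 4.0])

sol = np.linalg.solve(A, b)              # elimination under the hood
print(sol)                               # [ 4. -8.]
print(np.allclose(A @ sol, b))           # True: the solution satisfies the system
</code></pre>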
| <p>The key is that in solving this system of equations (or with row-reduction in general), every step is <em>reversible</em>. Following the steps forward, we see that <em>if</em> $x$ and $y$ satisfy the equations, then $x = 4$ and $y = -8$. That is, we conclude that $(4,-8)$ is the only possible solution, assuming a solution exists. <em>Conversely</em>, we can follow the arrows in the other direction to find that if $x = 4$ and $y = -8$, then the equations hold.</p>
<p>Take a second to confirm that those $\iff$'s aren't really $\implies$'s.</p>
<p>Compare this to a situation where the steps aren't reversible. For example:
$$
\sqrt{x^2 - 3} = -1 \implies x^2 -3 = 1 \iff x^2 = 4 \iff x = \pm 2
$$
You'll notice that "squaring both sides" isn't reversible, so we can't automatically deduce that $\pm 2$ solve the original equation (and in fact, there is no solution).</p>
|
probability | <p>Given the rapid rise of the <a href="http://en.wikipedia.org/wiki/Mega_Millions">Mega Millions</a> jackpot in the US (now advertised at \$640 million and equivalent to a "cash" prize of about \$448 million), I was wondering if there was ever a point at which the lottery became positive expected value (EV), and, if so, what is that point or range?</p>
<p>Also, a friend and I came up with two different ways of looking at the problem, and I'm curious if they are both valid.</p>
<p>First, it is simple to calculate the expected value of the "fixed" prizes. The first five numbers are selected from a pool of 56, the final "mega" ball from a pool of 46. (Let us ignore taxes in all of our calculations... one can adjust later for one's own tax rate which will vary by state). The expected value of all these fixed prizes is \$0.183.</p>
<p>So, then you are paying \$0.817 for the jackpot prize. My plan was then to calculate the expected number of winners of the jackpot (multiple winners split the prize) to get an expected jackpot amount and multiply by the probability of selecting the winning numbers (the odds being $1$ in $\binom{56}{5} \cdot 46 = 175,711,536$). The number of tickets sold can be easily estimated since \$0.32 of each ticket is added to the prize, so: </p>
<p>(<em>Current Cash Jackpot</em> - <em>Previous Cash Jackpot</em>) / 0.32 = Tickets Sold
$(448 - 252) / 0.32 = 612.5$ million tickets sold (!!).</p>
<p>(The cash prizes are lower than the advertised jackpot. Currently, they are about 70% of the advertised jackpot.) Obviously, one expects multiple winners, but I can't figure out how to get a precise estimate, and various web sources seem to be getting different numbers.</p>
<p><strong>Alternative methodology:</strong> My friend's methodology, which is far simpler, is to say 50% of this drawing's sales will be paid out in prizes (\$0.18 to fixed prizes and \$0.32 to the jackpot). Add to that the carried over jackpot amount (\$250 million cash prize from the unwon previous jackpot) that will also be paid out. So, your expected value is $\$250$ million / 612.5 million tickets sold = \$0.40 from the previous drawing + \$0.50 from this drawing = \$0.90 total expected value for each \$1 ticket purchased (before taxes). Is this a valid approach or is it missing something? It's far simpler than anything I found while searching the web for this.</p>
<p><strong>Added:</strong> After considering the answer below, this is why I don't think my friend's methodology can be correct: it neglects the probability that no one will win. For instance, if a $1$ ticket was sold, the expected value of that ticket would not be $250 million + 0.50 since one has to consider the probability of the jackpot not being paid out at all. So, <em>additional question:</em> what is this probability and how do we find it? (Obviously it is quite small when $612.5$ million tickets are sold and the odds of each one winning is $1:175.7$ million.) Would this allow us to salvage this methodology?</p>
<p>So, is there a point that the lottery will be positive EV? And, what is the EV this week, and the methodology for calculating it?</p>
| <p>I did a fairly <a href="http://www.circlemud.org/~jelson/megamillions">extensive analysis of this question</a> last year. The short answer is that by modeling the relationship of past jackpots to ticket sales we find that ticket sales grow super-linearly with jackpot size. Eventually, the positive expectation of a larger jackpot is outweighed by the negative expectation of ties. For MegaMillions, this happens before a ticket ever becomes EV+.</p>
| <p>An interesting thought experiment is whether it would be a good investment for a rich person to buy every possible number for \$175,711,536. This person is then guaranteed to win! Then you consider the resulting size of the pot (now a bit larger), the probability of splitting it with other winners, and the fact that you get to deduct the \$175.7M spent from your winnings before taxes. (Thanks to Michael McGowan for pointing out that last one.)</p>
<p>The current pot is \$640M, with a \$462M cash payout. The previous pot was \$252M cash payout, so using \$0.32 into the cash pot per ticket, we have 656,250,000 tickets sold. I, the rich person (who has enough servants already employed that I can send them all out to buy these tickets at no additional labor cost) will add about \$56M to the pot. So the cash pot is now \$518M.</p>
<p>If I am the only winner, then I net (\$518M + \$32M (my approximate winnings from smaller prizes)) * 0.65 (federal taxes) + 0.35 * \$176M (I get to deduct what I paid for the tickets) = \$419M. I live in California (of course), so I pay no state taxes on lottery winnings. I get a 138% return on my investment! Pretty good. Even if I did have to pay all those servants overtime for three days.</p>
<p>If I split the grand prize with just one other winner, I net \$250M. A 42% return on my investment. Still good. With two other winners, I net $194M for about a 10% gain.</p>
<p>If I have to split it with three other winners, then I lose. Now I pay no taxes, but I do not get to deduct my gambling losses against my other income. I net \$161M, about an 8% loss on my investment. If I split it with four other winners, I net \$135M, a 23% loss. Ouch.</p>
<p>So how many will win? Given the 656,250,000 other tickets sold, the expected number of other winners (assuming a random distribution of choices, so I'm ignoring the picking-birthdays-in-your-numbers problem) is 3.735. Hmm. This might not turn out well for Mr. Money Bags. Using Poisson, <span class="math-container">$p(n)={\mu^n e^{-\mu}\over n!}$</span>, where <span class="math-container">$\mu$</span> is the expected number (3.735) and <span class="math-container">$n$</span> is the number of other winners, there is only a 2.4% chance of me being the only winner, a 9% chance of one other winner, a 17% chance of two, a 21% chance of three, and then it starts going down with a 19% chance of four, 14% for five, 9% for six, and so on.</p>
<p>Summing over those, my expected return after taxes is \$159M. Close. But about a 10% loss on average.</p>
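<p>Here is a short sketch reproducing that sum (the pot, small-prize total, ticket cost and 35% tax treatment are the assumptions stated above, all in millions of dollars):</p>
<pre><code>from math import exp, factorial

mu = 656_250_000 / 175_711_536           # expected number of other winners (~3.735)
pot, small, cost, tax = 518.0, 32.0, 175.7, 0.35

def proceeds(n):                          # after-tax proceeds with n other winners
    w = pot / (n + 1) + small             # pre-tax winnings
    return (1 - tax) * w + tax * cost if w > cost else w

poisson = lambda n: mu**n * exp(-mu) / factorial(n)
ev = sum(poisson(n) * proceeds(n) for n in range(60))
print(round(ev))                          # ~159, i.e. about a 10% loss on the 175.7 outlay
</code></pre>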
<p>Oh well. Time to call those servants back and have them make me a sandwich instead.</p>
<p><em>Update for October 23, 2018 Mega Millions jackpot:</em></p>
<p>Same calculation.</p>
<p>The game has gotten harder to win, where there are now 302,575,350 possible numbers, <em>and</em> each ticket costs \$2. So now it would cost $605,150,700 to assure a winning ticket. Also the maximum federal tax rate has gone up to 39.6%.</p>
<p>The current pot (as of Saturday morning -- it will probably go up more) has a cash value of \$904,900,000. The previous cash pot was \$565,600,000. So about 530 million more tickets have been or are expected to be purchased, using my previous assumption of 32% of the cost of a ticket going into the cash pot. Then the mean number of winning tickets, besides the one assured for Mr. Money Bags, is about 1.752. Not too bad actually.</p>
<p>Summing over the possible numbers of winners, I get a net <em>win</em> of <span class="math-container">$\approx$</span>\$60M! So if you can afford to buy all of the tickets, and can figure out how to do that in next three days, go for it! Though that win is only a 10% return on investment, so you could very well do better in the stock market. Also that win is a slim margin, and is dependent on the details in the calculation, which would need to be more carefully checked. Small changes in the assumptions can make the return negative.</p>
<p>Keep in mind that if you're not buying all of the possible tickets, this is not an indication that the expected value of one ticket is more than \$2. Buying all of the possible ticket values is <span class="math-container">$e\over e-1$</span> times as efficient as buying 302,575,350 <em>random</em> tickets, where you would have many duplicated tickets, and would have less than a 2 in 3 chance of winning.</p>
|
game-theory | <p>The <a href="https://en.wikipedia.org/wiki/Monty_Hall_problem" rel="nofollow">Monty Hall problem or paradox</a> is famous and well-studied. But what confused me about the description was an unstated assumption.</p>
<blockquote>
<p>Suppose you're on a game show, and you're given the choice of three
doors: behind one door is a car; behind the others, goats. You pick
a door, say No. 1, and the host, who knows what's behind the doors,
opens another door, say No. 3, which has a goat. He then says to you,
"Do you want to pick door No. 2?" Is it to your advantage to switch
your choice?</p>
</blockquote>
<p>The assumption is that the host of the show does not have a choice whether to offer the switch. In fact, Monty Hall himself, in response to Steve Selvin's original formulation of the problem, pointed out that as the host he did not always offer the switch.</p>
<p>Because the host knows what's behind the doors, it would be possible and to his advantage to offer a switch more often to contestants who guess correctly. If he only offered the switch to contestants who guess correctly, all contestants who accept the offer would lose. However, if he did this consistently, the public would learn not to accept the offer and soon all contestants who first guess correctly would win.</p>
<p>If, for instance, he gave the offer to one third of incorrect guessers and two thirds of correct guessers, 2/9 contestants would be given the offer and should not switch and 2/9 contestants would be given the offer and should, which would bring the chances of winning back to 1/2 whether one accepts the offer or not, instead of 1/3 or 2/3.</p>
<p>Is this a Nash equilibrium for the Monty Hall problem (or the iterated Monty Hall problem) as a two-player zero-sum game? And if not, what is it, or aren't there any?</p>
| <p>The car probably doesn't come out of the host's salary, so he probably doesn't really want to minimize the payoff, he wants to maximize the show's ratings. But OK, let's
suppose he did want to minimize the payoff, making this a zero-sum game.
Then the optimal value of the game (in terms of the probability of winning the car) would be $1/3$. An optimal strategy for the contestant is to always refuse to switch,
ensuring that the expected payoff is $1/3$. An optimal strategy for the host is
never to offer a switch unless the contestant's first guess is correct, ensuring that
the expected payoff is no more than $1/3$.</p>
| <p>Thanks for your answer, Robert. If the optimal value is 1/3 as you showed, then I suppose there must be infinitely many mixed strategies that the host could employ that would be in equilibrium. If, as I mentioned in the question, the host offers the switch to 2/3 of correct guessers and 1/3 of incorrect guessers, 1/9 contestants will guess correctly and not be offered a switch, winning immediately. Also, 2/9 will be correct guessers offered a switch and 2/9 will be incorrect guessers offered a switch. Therefore 4/9 will be offered a switch and 2/9 will win, whether they accept it or not, since their expected value will be 1/2 whether they accept the switch or not. The rest of the contestants lose immediately.</p>
<pre><code>1/9 + 2/9 = 1/3
</code></pre>
<p>This means the host can offer any number between 0 and 2/3 of his contestants a switch without changing the expected value, as long as the number of correct guessers and incorrect guessers he offers a switch is the same. He can accomplish this easily with a mixed strategy of making correct guessers exactly twice as likely to receive an offer.</p>
<p>With any of this family of strategies, the host then cannot possibly ensure a better outcome than 1/3, which he has done. And the contestant will have the same expected value regardless of their strategy, so they cannot improve. So, these are also Nash equilibria. And you would have to agree, from a practical perspective, the host should employ one of these highly mixed strategies so that the game is more exciting, without harming his bottom line. I don't have any direct evidence, but I would hazard a guess that this is roughly what Monty Hall actually did.</p>
<p>The surprising thing about this is that the naive answer to the classic Monty Hall problem, "No, there is no benefit," (50/50) can be correct under reasonable assumptions.</p>
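<p>A quick simulation of one of these host strategies (a sketch; the 2:1 offer ratio is the one described above) confirms that the contestant wins with probability $1/3$ whether or not they switch:</p>
<pre><code>import random

def win_rate(offer_correct=2/3, offer_incorrect=1/3, switch=True, trials=200_000):
    wins = 0
    for _ in range(trials):
        car, guess = random.randrange(3), random.randrange(3)
        correct = guess == car
        offered = random.random() < (offer_correct if correct else offer_incorrect)
        if offered and switch:
            wins += not correct     # switching wins exactly when the first guess was wrong
        else:
            wins += correct         # keeping the guess wins exactly when it was right
    return wins / trials

print(win_rate(switch=True), win_rate(switch=False))   # both ~ 0.333
</code></pre>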
|
probability | <p>This is a really natural question for which I know a stunning solution. So I admit I have a solution, however I would like to see if anybody will come up with something different. The question is</p>
<blockquote>
<p>What is the probability that two numbers randomly chosen are coprime?</p>
</blockquote>
<p>More formally, calculate the limit as $n\to\infty$ of the probability that two randomly chosen numbers, both less than $n$, are coprime.</p>
| <p><strong>Here is a fairly easy approach.</strong>
<strong>Let us start with a basic observation:</strong></p>
<p><span class="math-container">$\bullet$</span> Every integer has the probability "1" to be divisible by 1.</p>
<p><span class="math-container">$\bullet$</span> A given integer is either even or odd hence has probability <span class="math-container">$"1/2"$</span> to be divisible by 2</p>
<p><span class="math-container">$\bullet$</span> Similarly, an integer has a probability <span class="math-container">$"1/3"$</span> to be divisible by 3, because any integer is of the form <span class="math-container">$3k, 3k+1$</span> or <span class="math-container">$3k+2$</span>.</p>
<blockquote>
<p><strong>Conjecture</strong> More generally, an integer chosen at random among <span class="math-container">$p$</span> consecutive integers has one chance in <span class="math-container">$p$</span> of being divisible by <span class="math-container">$p$</span></p>
</blockquote>
<ol>
<li>From this we infer that the probability that an integer is divisible by <span class="math-container">$p$</span> is <span class="math-container">$\frac{1}{p}$</span>, i.e. one chance in <span class="math-container">$p$</span> of being divisible by <span class="math-container">$p$</span>.</li>
<li>Therefore the probability that two different integers are both simultaneously divisible by a prime <span class="math-container">$p$</span> is <span class="math-container">$\frac{1}{p^2}$</span></li>
<li>This means that the probability that two different integers are not simultaneously divisible by a prime <span class="math-container">$p$</span> is <span class="math-container">$$1-\frac{1}{p^2}$$</span>
<blockquote>
<ol start="4">
<li><strong>Conclusion:</strong> The probability that two different integers are never simultaneously divisible by a prime (<strong>meaning that they are co-prime</strong>)
is therefore given by<br />
<span class="math-container">$$ \color{blue}{\prod_{p, prime}\left(1-\frac{1}{p^2} \right) =
\left(\prod _{p, prime}\frac {1}{1-p^{-2}}\right)^{-1}=\frac {1}{\zeta (2)}=\frac {6}{\pi ^{2}} \approx 0.607927102 \approx 61\%}$$</span></li>
</ol>
</blockquote>
</li>
</ol>
<p>Where we should recall that, from the <a href="https://en.wikipedia.org/wiki/Basel_problem" rel="noreferrer">Basel problem</a> we have the following Euler identity</p>
<p><span class="math-container">$$\frac{\pi^2}{6}=\sum_{n=1}^{\infty} \frac{1}{n^2} = \zeta(2)=\prod _{p, prime}\frac {1}{1-p^{-2}}.$$</span></p>
<p>By a similar token, the probability that <span class="math-container">$m$</span> integers are co-prime is given by</p>
<p><span class="math-container">$$ \color{red}{\prod_{p, prime}\left(1-\frac{1}{p^m} \right) =
\left(\prod _{p, prime}\frac {1}{1-p^{-m}}\right)^{-1}=\frac {1}{\zeta (m)}}.$$</span></p>
<p>Here <span class="math-container">$\zeta$</span> is the <a href="https://en.wikipedia.org/wiki/Riemann_zeta_function" rel="noreferrer">Riemann zeta function</a>. <span class="math-container">$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s} $$</span></p>
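<p>A direct count makes the limit easy to see numerically. A small sketch (the cutoff <span class="math-container">$N$</span> is arbitrary):</p>
<pre><code>from math import gcd, pi

N = 2000
coprime = sum(gcd(m, n) == 1 for m in range(1, N + 1) for n in range(1, N + 1))
print(coprime / N**2)   # ~0.6087 for N = 2000
print(6 / pi**2)        # 0.60792...
</code></pre>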
| <p>Let's look at the function <span class="math-container">$$S(x)=\sum_{\begin{array}{c}
m,n\leq x\\
\gcd(m,n)=1\end{array}}1.$$</span> </p>
<p>Then notice that <span class="math-container">$$S(x)=\sum_{m,n\leq x}\sum_{d|\gcd(m,n)}\mu(d)=\sum_{d\leq x}\sum_{r,s\leq\frac{x}{d}}\mu(d)= \sum_{d\leq x}\mu(d)\left[\frac{x}{d}\right]^{2}$$</span></p>
<p>From here it is straight forward to see that in the limit, <span class="math-container">$$\frac{S(x)}{x^2}\rightarrow\sum_{n=1}^\infty \frac{\mu(n)}{n^2}=\frac{1}{\zeta(2)}=\frac{6}{\pi^2}.$$</span></p>
<p>However, there are still some interesting questions here. How fast does in converge, and what are the secondary terms? It turns out we can easily relate this to the summatory totient function, which has a rich history. See these two math stack exchange posts: <a href="https://math.stackexchange.com/questions/38101/how-to-derive-an-identity-between-summations-of-totient-and-mobius-functions">Totient function</a>, <a href="https://math.stackexchange.com/questions/37863/asymptotic-formula-for-munx-n2-summation">Asymptotic formula</a>. What follows below is a modification of
my answer on the second post.</p>
<p><strong>The History Of The Error Term</strong></p>
<p>In 1874, Mertens proved that <span class="math-container">$$S(x)=\frac{6}{\pi^{2}}x^{2}+O\left(x\log x\right).$$</span> Throughout we use <span class="math-container">$E(x)=S(x)-\frac{6}{\pi^2}x^2$</span> for the error term. </p>
<p>The best unconditional result is given by Walfisz 1963: <span class="math-container">$$E(x)\ll x\left(\log x\right)^{\frac{2}{3}}\left(\log\log x\right)^{\frac{4}{3}}.$$</span> </p>
<p>In 1930, Chowla and Pillai showed this cannot be improved much more, and that <span class="math-container">$E(x)$</span> is <strong>not</strong> <span class="math-container">$$o\left(x\log\log\log x\right).$$</span> </p>
<p>In particular, they showed that <span class="math-container">$\sum_{n\leq x}E(n)\sim\frac{3}{\pi^{2}}x^{2}$</span> so that <span class="math-container">$E(n)\asymp n$</span> on average. In 1950, Erdos and Shapiro proved that there exists <span class="math-container">$c$</span> such that for infinitely many positive integers <span class="math-container">$N,M$</span> we have <span class="math-container">$$E(N)>cN\log\log\log\log N\ \ \text{and}\ \ E(M)<-cM\log\log\log\log M, $$</span> </p>
<p>or more concisely </p>
<p><span class="math-container">$$E(x)=\Omega_{\pm}\left(x\log\log\log\log x\right).$$</span></p>
<p>In 1987 Montgomery improved this to </p>
<p><span class="math-container">$$E(x)=\Omega_{\pm}\left(x\sqrt{\log\log x}\right).$$</span></p>
<p>Hope you enjoyed that,</p>
<p><strong>Added:</strong> At some point, I wrote a <a href="http://enaslund.wordpress.com/2012/01/15/the-sum-of-the-totient-function-and-montgomerys-lower-bound/" rel="noreferrer">long blog post</a> about this, complete with a proof of Montgomery's lower bound.</p>
|
logic | <p>I enjoy reading about formal logic as an occasional hobby. However, one thing keeps tripping me up: I seem unable to understand what's being referred to when the word "type" (as in type theory) is mentioned.</p>
<p>Now, I understand what types are in programming, and sometimes I get the impression that types in logic are just the same thing as that: we want to set our formal systems up so that you can't add an integer to a proposition (for example), and types are the formal mechanism to specify this. Indeed, <a href="https://en.wikipedia.org/wiki/Type_theory" rel="noreferrer">the Wikipedia page for type theory</a> pretty much says this explicitly in the lead section.</p>
<p>However, it also goes on to imply that types are much more powerful than that. Overall, from everything I've read, I get the idea that types are:</p>
<ul>
<li>like types in programming</li>
<li>something that is like a set but different in certain ways</li>
<li>something that can prevent paradoxes</li>
<li>the sort of thing that could replace set theory in the foundations of mathematics</li>
<li>something that is not just analogous to the notion of a proposition but can be thought of as the same thing ("propositions as types")</li>
<li>a concept that is <em>really, really deep</em>, and closely related to higher category theory</li>
</ul>
<p>The problem is that I have trouble reconciling these things. Types in programming seem to me quite simple, practical things (although the type system for any given programming language can be quite complicated and interesting). But in type theory it seems that somehow the types <em>are</em> the language, or that they are responsible for its expressive power in a much deeper way than is the case in programming.</p>
<p>So I suppose my question is, for someone who understands types in (say) Haskell or C++, and who also understands first-order logic and axiomatic set theory and so on, how can I get from these concepts to the concept of <em>type theory</em> in formal logic? What <em>precisely</em> is a type in type theory, and what is the relationship between types in formal mathematics and types in computer science?</p>
<p>(I am not looking for a formal definition of a type so much as the core idea behind it. I can find several formal definitions, but I've found they don't really help me to understand the underlying concept, partly because they are necessarily tied to the specifics of a <em>particular</em> type theory. If I can understand the motivation better it should make it easier to follow the definitions.)</p>
| <p><strong>tl;dr</strong> Types only have meaning within type systems. There is no stand-alone definition of "type" except vague statements like "types classify terms". The notion of type in programming languages and type theory are basically the same, but different type systems correspond to different type theories. Often the term "type theory" is used specifically for a particular family of powerful type theories descended from Martin-Löf Type Theory. Agda and Idris are simultaneously proof assistants for such type theories and programming languages, so in this case there is no distinction whatsoever between the programming language and type theoretic notions of type.</p>
<p>It's not the "types" themselves that are "powerful". First, you could recast first-order logic using types. Indeed, the sorts in <a href="https://en.wikipedia.org/wiki/Many-sorted_logic" rel="noreferrer">multi-sorted first-order logic</a>, are basically the same thing as types.</p>
<p>When people talk about type theory, they often mean specifically Martin-Löf Type Theory (MLTT) or some descendant of it like the Calculus of (Inductive) Constructions. These are powerful higher-order logics that can be viewed as constructive set theories. But it is the specific system(s) that are powerful. The simply typed lambda calculus viewed from a propositions-as-types perspective is basically the proof theory of intuitionistic propositional logic which is a rather weak logical system. On the other hand, considering the equational theory of the simply typed lambda calculus (with some additional axioms) gives you something that is very close to the most direct understanding of higher-order logic as an extension of first-order logic. This view is the basis of the <a href="https://hol-theorem-prover.org/" rel="noreferrer">HOL</a> family of theorem provers.</p>
<p>Set theory is an extremely powerful logical system. ZFC set theory is a first-order theory, i.e. a theory axiomatized in first-order logic. And what does set theory accomplish? Why, it's essentially an embedding of higher-order logic into first-order logic. In first-order logic, we can't say something like $$\forall P.P(0)\land(\forall n.P(n)\Rightarrow P(n+1))\Rightarrow\forall n.P(n)$$ but, in the first-order theory of set theory, we <em>can</em> say $$\forall P.0\in P\land (\forall n.n\in P\Rightarrow n+1\in P)\Rightarrow\forall n.n\in P$$ Sets behave like "first-class" predicates.</p>
<p>While ZFC set theory and MLTT go beyond just being higher-order logic, higher-order logic on its own is already a powerful and ergonomic system as demonstrated by the HOL theorem provers for example. At any rate, as far as I can tell, having some story for doing higher-order-logic-like things is necessary to provoke any interest in something as a framework for mathematics from mathematicians. Or you can turn it around a bit and say you need some story for set-like things and "first-class" predicates do a passable job. This latter perspective is more likely to appeal to mathematicians, but to me the higher-order logic perspective better captures the common denominator.</p>
<p>At this point it should be clear there is no magical essence in "types" themselves, but instead some families of type theories (i.e. type systems from a programming perspective) are very powerful. Most "powerful" type systems for programming languages are closely related to the polymorphic lambda calculus aka System F. From the proposition-as-types perspective, these correspond to intuitionistic second-order <em>propositional</em> logics, not to be confused with second-order (predicate) logics. It allows quantification over propositions (i.e. nullary predicates) but not over terms which don't even exist in this logic. <em>Classical</em> second-order propositional logic is easily reduced to classical propositional logic (sometimes called zero-order logic). This is because $\forall P.\varphi$ is reducible to $\varphi[\top/P]\land\varphi[\bot/P]$ classically. System F is surprisingly expressive, but viewed as a logic it is quite limited and far weaker than MLTT. The type systems of Agda, Idris, and Coq are descendants of MLTT. Idris in particular and Agda to a lesser extent are dependently typed programming languages.<sup>1</sup> Generally, the notion of type in a (static) type system and in type theory are essentially the same, but the significance of a type depends on the type system/type theory it is defined within. There is no real definition of "type" on its own. If you decide to look at e.g. Agda, you should be quickly disabused of the idea that "types are the language". All of these type theories have terms and the terms are not "made out of types". They typically look just like functional programs.</p>
<p><sup>1</sup> I don't want to give the impression that "dependently typed" = "super powerful" or "MLTT derived". The LF family of languages e.g. Elf and Twelf are intentionally weak dependently typed specification languages that are far weaker than MLTT. From a propositions-as-types perspective, they correspond more to first-order logic.</p>
| <blockquote>
<p>I've found they don't really help me to understand the underlying concept, partly because they are necessarily tied to the specifics of a particular type theory. If I can understand the motivation better it should make it easier to follow the definitions.</p>
</blockquote>
<p>The basic idea: In ZFC set theory, there is just one kind of object - sets. In type theories, there are multiples kind of objects. Each object has a particular kind, known as its "type". </p>
<p>Type theories typically include ways to form new types from old types. For example, if we have types $A$ and $B$ we also have a new type $A \times B$ whose members are pairs $(a,b)$ where $a$ is of type $A$ and $b$ is of type $B$. </p>
<p>For the simplest type theories, such as higher-order logic, that is essentially the only change from ordinary first-order logic. In this setting, all of the information about "types" is handled in the metatheory. But these systems are barely "type theories", because the theory itself doesn't really know anything about the types. We are really just looking at first-order logic with multiple sorts. By analogy to computer science, these systems are vaguely like statically typed languages - it is not possible to even write a well-formed formula/program which is not type safe, but the program itself has no knowledge about types while it is running. </p>
<p>More typical type theories include ways to reason <em>about</em> types <em>within</em> the theory. In many type theories, such as Martin-Löf (ML) type theory, the types themselves are objects of the theory. So it is possible to prove "$A$ is a type" as a <em>sentence</em> of the theory. It is also possible to prove sentences such as "$t$ has type $A$". These cannot even be expressed in systems like higher-order logic. </p>
<p>In this way, these theories are not just "first order logic with multiple sorts", they are genuinely "a theory about types". Again by analogy, these systems are vaguely analogous to programming languages in which a program can make inferences about types of objects <em>during runtime</em> (analogy: we can reason about types within the theory). </p>
<p>Another key aspect of type theories is that they often have their own <em>internal</em> logic. The <a href="https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence" rel="noreferrer">Curry-Howard correspondence</a> shows that, in particular settings, <em>formulas</em> of propositional or first-order logic correspond to <em>types</em> in particular type theories. Manipulating the types in a model of type theory corresponds, via the isomorphism, to manipulating formulas of first order logic. The isomorphism holds for many logic/type theory pairs, but it is strongest when the logic is intuitionistic, which is one reason intuitionistic/constructive logic comes up in the context of type theory. </p>
<p>In particular, logical operations on formulas become type-forming operations. The "and" operator of logic becomes the product type operation, for example, while the "or" operator becomes a kind of "union" type. In this way, each model of type theory has its own "internal logic" - which is often a model of intuitionistic logic. </p>
<p>The existence of this internal logic is one of the motivations for type theory. When we look at first-order logic, we treat the "logic" as sitting in the metatheory, and we look at the truth value of each formula within a model using classical connectives. In type theory, that is often a much less important goal. Instead we look at the collection of types in a model, and we are more interested in the way that the type forming operations work in the model than in the way the classical connectives work. </p>
|
differentiation | <p>I never understand what the trigonometric function sine is..</p>
<p>We had a table with the values of sine for different angles; we learned it by heart and applied it to some problems, and that was the end of the matter. Up to that point, the sine function was related to triangles and angles.</p>
<p>Then comes the graph. We have been told that the figure below is the graph of the function sine. This function takes angles and gives numbers between $-1$ and $1$ and we have been told that it is a continuous function as it is clear from the graph.</p>
<p><a href="https://i.sstatic.net/qpda9.gif" rel="noreferrer"><img src="https://i.sstatic.net/qpda9.gif" alt="enter image description here"></a></p>
<p>Then comes taylor expansion of sine and we have $$\sin (x)=x-\frac{x^3}{3!}+\cdots$$</p>
<p>I know that for any infinitely differentiable function, we have taylor expansion. But how do we define differentiability of the function sine?</p>
<p>We define differentiability of a function from real numbers to real numbers..</p>
<p>But sine is a function that takes angles and gives real numbers..</p>
<p>Then how do we define differentiability of such a function? are we saying the real number $1$ is the degree 1? </p>
<p>I am confused.. Help me..</p>
<p>The above content is a copy-paste of a mail I received from my friend, a 1st-year undergraduate. I could answer some things only vaguely, and I am not happy with my own answers. So, I am posting it here. Help us (me and my friend) to understand the sine function in a better way.</p>
| <p>The sine function doesn't actually operate on angles, it's a function from the real numbers to the interval [-1, 1] (or from the complex numbers to the complex numbers).</p>
<p>However, it just so happens that it's a very useful function when the input you give it relates to angles. In particular, if you express an angle as a number in radians (in other words, on a scale where an angle of $2\pi$ corresponds to a full circle), it gives you a value that relates to the ratio of two sides of a right-angled triangle that has that angle in one corner.</p>
<p>If that explanation doesn't satisfy you, then you can look at it another way - if you take it that the sine function <em>does</em> take an angle as input and outputs a number, then the differentiability of it relates to how its output changes as you change the angle slightly. If you go far enough in calculus, you'll learn about functions whose inputs and outputs are bizarre multi-dimensional concepts, and as long as the space of bizarre multi-dimensional concepts has the right properties, you can calculate derivatives in a meaningful sense, and if you can get your head around that then differentiating a function of an angle is small fry.</p>
| <p>Imagine the unit circle in the usual Cartesian plane: the set of pairs $(x, y)$ where $x$ and $y$ are real numbers. The unit circle is the set of all such pairs a distance of exactly $1$ from the origin.</p>
<p>Imagine a point moving around the circle. As it travels around the circle, it makes an angle of $t$ <em>radians</em> (not degrees!) with the positive $x$-axis. From now on we call the $x$ coordinate the cosine of $t$; and the $y$ coordinate the sine of $t$.</p>
<p>It's as simple as that. If you only remember this one fact you can figure everything else out: the definitions of the trig functions in terms of triangles, the shape of the graphs of the functions, and everything else.</p>
<p>To repeat: $\cos(t)$ and $\sin(t)$ are the $x$ and $y$ coordinates, respectively, of a point on the unit circle that makes an angle of $t$ radians with the origin and the positive $x$-axis.</p>
<p>There's a picture here ... <a href="https://en.wikipedia.org/wiki/Unit_circle">https://en.wikipedia.org/wiki/Unit_circle</a></p>
|
matrices | <p>More precisely, does the set of non-diagonalizable (over $\mathbb C$) matrices have Lebesgue measure zero in $\mathbb R^{n\times n}$ or $\mathbb C^{n\times n}$? </p>
<p>Intuitively, I would think yes, since in order for a matrix to be non-diagonalizable its characteristic polynomial would have to have a multiple root. But most monic polynomials of degree $n$ have distinct roots. Can this argument be formalized? </p>
| <p>Yes. Here is a proof over $\mathbb{C} $.</p>
<ul>
<li>Matrices with repeated eigenvalues are cut out as the zero locus of the discriminant of the characteristic polynomial, thus are algebraic sets. </li>
<li>Some matrices have distinct eigenvalues, so this algebraic set is proper.</li>
<li>Proper closed algebraic sets have measure $0.$ (intuitively, a proper closed algebraic set is a locally finite union of embedded submanifolds of lower dimension)</li>
<li>(over $\mathbb{C} $) The set of matrices that aren't diagonalizable is contained in this set, so it also has measure $0$. (not over $\mathbb{R}$, see this comment <a href="https://math.stackexchange.com/a/207785/565">https://math.stackexchange.com/a/207785/565</a>)</li>
</ul>
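<p>The first bullet also suggests a quick experiment: the discriminant of the characteristic polynomial of a randomly chosen matrix is almost surely nonzero. A sketch with sympy:</p>
<pre><code>import sympy as sp

x = sp.symbols('x')
A = sp.randMatrix(4, 4, min=-9, max=9)              # a random integer matrix
disc = sp.discriminant(A.charpoly(x).as_expr(), x)
print(disc != 0)  # almost surely True: distinct eigenvalues, hence diagonalizable over C
</code></pre>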
| <p>Let $A$ be a real matrix with a non-real eigenvalue. It's rather easy to see that if you perturb $A$ a little bit $A$ still will have a non-real eigenvalue. For instance if $A$ is a rotation matrix (as in Georges answer), applying a perturbed version of $A$ will still come close to rotating the vectors by a fixed angle so this perturbed version can't have any real eigenvalues.</p>
|
combinatorics | <blockquote>
<p>All numbers <span class="math-container">$1$</span> to <span class="math-container">$155$</span> are written on a blackboard, one time each. We randomly choose two numbers and delete them, by replacing one of them with their product plus their sum. We repeat the process until there is only one number left. What is the average value of this number?</p>
</blockquote>
<p>I don't know how to approach it. For two numbers, <span class="math-container">$1$</span> and <span class="math-container">$2$</span>, the only possible result is <span class="math-container">$1\cdot 2+1+2=5$</span>. For three numbers, <span class="math-container">$1, 2$</span> and <span class="math-container">$3$</span>, we can opt to replace <span class="math-container">$1$</span> and <span class="math-container">$2$</span> with <span class="math-container">$5$</span> and then <span class="math-container">$3$</span> and <span class="math-container">$5$</span> with <span class="math-container">$23$</span>; or <span class="math-container">$1$</span> and <span class="math-container">$3$</span> with <span class="math-container">$7$</span> and then <span class="math-container">$2$</span>, <span class="math-container">$7$</span> with <span class="math-container">$23$</span>; or <span class="math-container">$2$</span>, <span class="math-container">$3$</span> with <span class="math-container">$11$</span> and then <span class="math-container">$1$</span>, <span class="math-container">$11$</span> with <span class="math-container">$23$</span>.
So we see that no matter which two numbers we choose, the resulting number is the same. Does this lead us anywhere?</p>
| <p>Claim: if <span class="math-container">$a_1,...,a_n$</span> are the <span class="math-container">$n$</span> numbers on the board then after n steps we shall be left with <span class="math-container">$(1+a_1)...(1+a_n)-1$</span>.</p>
<p>Proof: <em>induct on <span class="math-container">$n$</span></em>. Case <span class="math-container">$n=1$</span> is true, so assume the proposition holds for a fixed <span class="math-container">$n$</span> and any <span class="math-container">$a_1$</span>,...<span class="math-container">$a_n$</span>. Consider now <span class="math-container">$n+1$</span> numbers <span class="math-container">$a_1$</span>,...,<span class="math-container">$a_{n+1}$</span>. Suppose that at the first step we choose <span class="math-container">$a_1$</span> and <span class="math-container">$a_2$</span>. We will be left with <span class="math-container">$ n$</span> numbers <span class="math-container">$b_1=a_1+a_2+a_1a_2$</span>, <span class="math-container">$b_2=a_3$</span>,...,<span class="math-container">$b_n=a_{n+1}$</span>, so by the induction hypothesis at the end we will be left with <span class="math-container">$(b_1+1)...(b_n+1)-1=(a_1+1)...(a_{n+1}+1)-1$</span> as needed, because <span class="math-container">$b_1+1=a_1+a_2+a_1a_2+1=(a_1+1)(a_2+1)$</span></p>
<p>Where did I get the idea of the proof from? I guess from the n=2 case: for <span class="math-container">$a_1,a_2$</span> you are left with <span class="math-container">$a_1+a_2+a_1a_2=(1+a_1)(1+a_2)-1$</span> and I also noted this formula generalises for <span class="math-container">$n=3$</span></p>
<p>So in your case we will be left with <span class="math-container">$156!-1=1\times 2\times...\times 156 -1$</span></p>
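<p>A simulation makes the order-independence vivid. The sketch below collapses <span class="math-container">$1,\dots,155$</span> in a random order a few times and checks the result against <span class="math-container">$156!-1$</span>:</p>
<pre><code>import random
from math import factorial

def collapse(nums):
    nums = list(nums)
    while len(nums) > 1:
        x = nums.pop(random.randrange(len(nums)))
        y = nums.pop(random.randrange(len(nums)))
        nums.append(x * y + x + y)        # replace the pair by product plus sum
    return nums[0]

assert all(collapse(range(1, 156)) == factorial(156) - 1 for _ in range(5))
print("always 156! - 1, independent of the choices made")
</code></pre>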
| <p>Another way to think of Sorin's observation, without appealing to induction explicitly:</p>
<p>Suppose your original numbers (both the original 155 numbers and later results) are written in <em>white</em> chalk. Now above each <em>white</em> number write that number plus one, in <em>red</em> chalk. Write new red companions to each new white number, and erase the red numbers when their white partners go away.</p>
<p>When we erase <span class="math-container">$x$</span> and <span class="math-container">$y$</span> and write <span class="math-container">$x+y+xy$</span>, the new red number is <span class="math-container">$x+y+xy+1=(x+1)(y+1)$</span>, exactly the product of the two red companions we're erasing.</p>
<p>So we can reformulate the entire game as:</p>
<blockquote>
<p>Write in red the numbers from <span class="math-container">$2$</span> to <span class="math-container">$156$</span>. Keep erasing two numbers and writing their product instead. At the end when you have <em>one</em> red number left, subtract one and write the result in white.</p>
</blockquote>
<p>Since the order of factors is immaterial, the result must be <span class="math-container">$2\cdot 3\cdots 156-1$</span>.</p>
|
differentiation | <p>Which derivatives are eventually periodic?</p>
<p>I have noticed that if $a_{n}=f^{(n)}(x)$, the sequence $a_{n}$ becomes eventually periodic for a multitude of $f(x)$. </p>
<p>If $f(x)$ is a polynomial with $\operatorname{deg}(f(x))=n$, note that $f^{(n)}(x)=C$ for some constant $C$. This implies that $f^{(n+i)}(x)=0$ for every natural number $i$. </p>
<p>If $f(x)=e^x$, note that $f(x)=f'(x)$. This implies that $f^{(n)}(x)=e^x$ for every natural number $n$. </p>
<p>If $f(x)=\sin(x)$, note that $f'(x)=\cos(x), f''(x)=-\sin(x), f'''(x)=-\cos(x), f''''(x)=\sin(x)$.</p>
<p>This implies that $f^{(4n)}(x)=f(x)$ for every natural number $n$. </p>
<p>In a similar way, if $f(x)=\cos(x)$, $f^{(4n)}(x)=f(x)$ for every natural number $n$.</p>
<p>These appear to be the only functions whose derivatives become eventually periodic. </p>
<p>What are other functions whose derivatives become eventually periodic? What is known about them? Any help would be appreciated. </p>
| <p>The sequence of derivatives being globally periodic (not eventually periodic) with period $m$ is equivalent to the differential equation </p>
<p>$$f(x)=f^{(m)}(x).$$</p>
<p>All solutions to this equation are of the form $\sum_{k=1}^m c_k e^{\lambda_k x}$ where $\lambda_k$ are solutions to the equation $\lambda^m-1=0$. Thus $\lambda_k=e^{2 k \pi i/m}$. Details can be found in any elementary differential equations textbook.</p>
<p>If you merely want eventually periodic, then you can choose an index $n \geq 1$ at which the sequence starts to be periodic and solve</p>
<p>$$f^{(n)}(x)=f^{(m+n)}(x).$$</p>
<p>The characteristic polynomial in this case is $\lambda^{m+n}-\lambda^n$. This has an $n$-fold root at $0$, and otherwise has the same roots as before. This winds up implying that the solution is a sum of a polynomial of degree at most $n-1$, plus a solution to the previous equation. Again, the details can be found in any elementary differential equations textbook.</p>
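<p>For instance, for period $m=4$ a computer algebra system recovers exactly the combination of exponentials with $\lambda^4=1$ (a sketch using sympy's ODE solver):</p>
<pre><code>import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')
sol = sp.dsolve(f(x).diff(x, 4) - f(x), f(x))
print(sol)  # expected: f(x) = C1*exp(-x) + C2*exp(x) + C3*sin(x) + C4*cos(x)
</code></pre>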
<p>Let's also look at it upside down. You can define analytic (hence infinitely differentiable) functions by their Taylor series $\sum \frac{a_n}{n!}x^n$. Taylor series are simply all finite and infinite polynomials with coefficient sequences $(a_n)$ that satisfy the series convergence criteria ($a_n$ are the derivatives at the chosen origin point, in my example $x=0$). Compare this to real numbers (an infinite sequence of non-repeating "digits" - the cardinality of the continuum). On the other hand, the set of repeating sequences is comparable to the rational numbers (which have eventually repeating digits). So... the "fraction" of all functions which have repeating derivatives is immeasurably small - it's only a very special class of functions that satisfies this criterion (see other answers for appropriate expressions).</p>
<p>EDIT: I mentioned this to illustrate the comparison of how special and rare functions with periodic derivatives are. The actual cardinality of the sets depends on the field over which you define the coefficients. If $a_n\in \mathbb{R}$, then recall that the set of continuous functions has cardinality $2^{\aleph_0}$, the cardinality of the continuum, so the cardinalities are the same in this case. If the coefficients are rational, then we have $\aleph_0^{\aleph_0}=2^{\aleph_0}$ for infinite sequences and $\aleph_0\times\aleph_0^n=\aleph_0$ for periodic ones.</p>
<p>Not only that, but you can generate all the functions with this property. Just plug any periodic sequence $(a_n)$ into the expression. It's guaranteed to converge for $x\in \mathbb{R}$, because a periodic sequence is bounded, and $n!$ dominates all powers.</p>
<p>A simple substitution can demonstrate that if the coefficients are periodic for a series around one origin point, they are periodic for all of them.</p>
|
differentiation | <p>As referred <a href="https://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule" rel="noreferrer">in Wikipedia</a> (see the specified criteria there), L'Hôpital's rule says,</p>
<p><span class="math-container">$$
\lim_{x\to c}\frac{f(x)}{g(x)}=\lim_{x\to c}\frac{f'(x)}{g'(x)}
$$</span></p>
<p>As</p>
<p><span class="math-container">$$
\lim_{x\to c}\frac{f'(x)}{g'(x)}=
\lim_{x\to c}\frac{\int f'(x)\ dx}{\int g'(x)\ dx}
$$</span></p>
<p>Just out of curiosity, can you integrate instead of taking a derivative?
Does</p>
<p><span class="math-container">$$
\lim_{x\to c}\frac{f(x)}{g(x)}=
\lim_{x\to c}\frac{\int f(x)\ dx}{\int g(x)\ dx}
$$</span></p>
<p>work? (given the specifications in Wikipedia only the other way around: the function must be integrable by some method, etc.) When? Would it have any practical use? I hope this doesn't sound stupid, it just occurred to me, and I can't find the answer myself.</p>
<h3>Edit</h3>
<p>(In response to the comments and answers.)</p>
<p>Take 2 functions <span class="math-container">$f$</span> and <span class="math-container">$g$</span>. When is</p>
<p><span class="math-container">$$
\lim_{x\to c}\frac{f(x)}{g(x)}=
\lim_{x\to c}\frac{\int_x^c f(a)\ da}{\int_x^c g(a)\ da}
$$</span></p>
<p>true?</p>
<p>I am not saying that it always works; however, it may sometimes help. Sometimes one can apply l'Hôpital's rule even when an indeterminate form isn't reached. Maybe this only works in exceptional cases.</p>
<p>Most functions are simplified by taking their derivative, but it may happen by integration as well (say <span class="math-container">$\int \frac1{x^2}\ dx=-\frac1x+C$</span>, that is simpler). In a few of those cases, integrating functions of both nominator and denominator may simplify.</p>
<p>What do those (hypothetical) functions have to make it work? And even in those cases, is is ever useful? How? Why/why not?</p>
| <p>With L'Hôpital's rule your limit must be of the form <span class="math-container">$\dfrac 00$</span>, so your antiderivatives must take the value <span class="math-container">$0$</span> at <span class="math-container">$c$</span>. In this case you have <span class="math-container">$$\lim_{x \to c} \frac{ \int_c^x f(t) \, dt}{\int_c^x g(t) \, dt} = \lim_{x \to c} \frac{f(x)}{g(x)}$$</span> provided <span class="math-container">$g$</span> satisfies the usual hypothesis that <span class="math-container">$g(x) \not= 0$</span> in a deleted neighborhood of <span class="math-container">$c$</span>.</p>
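<p>For a concrete instance, take <span class="math-container">$f(t)=\sin t$</span>, <span class="math-container">$g(t)=\sin 2t$</span> and <span class="math-container">$c=0$</span>; both antiderivatives vanish at <span class="math-container">$c$</span>, and the two limits agree. A sketch with sympy:</p>
<pre><code>import sympy as sp

x, t = sp.symbols('x t')
f, g, c = sp.sin(t), sp.sin(2 * t), 0    # f and g both vanish at c = 0
lhs = sp.limit(sp.integrate(f, (t, c, x)) / sp.integrate(g, (t, c, x)), x, c)
rhs = sp.limit(sp.sin(x) / sp.sin(2 * x), x, c)
print(lhs, rhs)                          # both 1/2
</code></pre>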
| <p>I recently came across a situation where it was useful to go through exactly this process, so (although I'm certainly late to the party) here's an application of L'Hôpital's rule in reverse:</p>
<p>We have a list of distinct real numbers $\{x_0,\dots, x_n\}$.
We define the $(n+1)$th <em>nodal polynomial</em> as
$$
\omega_{n+1}(x) = (x-x_0)(x-x_1)\cdots(x-x_n)
$$
Similarly, the $n$th nodal polynomial is
$$
\omega_n(x) = (x-x_0)\cdots (x-x_{n-1})
$$
Now, suppose we wanted to calculate $\omega_{n+1}'(x_i)/\omega_{n}'(x_i)$ when $0 \leq i \leq n-1$. Now, we could calculate $\omega_{n}'(x_i)$ and $\omega_{n+1}'(x_i)$ explicitly and go through some tedious algebra, or we could note that because these derivatives are non-zero, we have
$$
\frac{\omega_{n+1}'(x_i)}{\omega_{n}'(x_i)} =
\lim_{x\to x_i} \frac{\omega_{n+1}'(x)}{\omega_{n}'(x)} =
\lim_{x\to x_i} \frac{\omega_{n+1}(x)}{\omega_{n}(x)} =
\lim_{x\to x_i} (x-x_{n}) = x_i-x_{n}
$$
It is important that both $\omega_{n+1}$ and $\omega_n$ are zero at $x_i$, so that in applying L'Hôpital's rule, we intentionally produce an indeterminate form. It should be clear though that doing so allowed us to cancel factors and thus (perhaps surprisingly) saved us some work in the end. </p>
<p>So would this method have practical use? It certainly did for me!</p>
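<p>If you want to watch the cancellation happen, here is a sketch verifying the identity on a concrete (arbitrary) node set:</p>
<pre><code>import sympy as sp

x = sp.symbols('x')
xs = [0, 1, 3, 4, 7]                           # x_0, ..., x_n with n = 4
w_n   = sp.prod([x - xi for xi in xs[:-1]])    # omega_n
w_np1 = w_n * (x - xs[-1])                     # omega_{n+1}
for xi in xs[:-1]:                             # nodes with 0 <= i <= n-1
    ratio = sp.diff(w_np1, x).subs(x, xi) / sp.diff(w_n, x).subs(x, xi)
    assert sp.simplify(ratio - (xi - xs[-1])) == 0
print("omega_{n+1}'(x_i) / omega_n'(x_i) = x_i - x_n at every such node")
</code></pre>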
<hr>
<p><strong>PS:</strong> If anyone is wondering, this was a handy step in proving a recursive formula involving Newton's divided differences.</p>
|
linear-algebra | <p>Let $A$ be an $n \times n$ matrix. Then the solution of the initial value problem
\begin{align*}
\dot{x}(t) = A x(t), \quad x(0) = x_0
\end{align*}
is given by $x(t) = \mathrm{e}^{At} x_0$.</p>
<p>I am interested in the following matrix
\begin{align*}
\int_{0}^T \mathrm{e}^{At}\, dt
\end{align*}
for some $T>0$. Can one write down a general solution to this without distinguishing cases (e.g. $A$ nonsingular)?</p>
<p>Is this matrix always invertible?</p>
| <p><strong>Case I.</strong> If <span class="math-container">$A$</span> is nonsingular, then
<span class="math-container">$$
\int_0^T\mathrm{e}^{tA}\,dt=\big(\mathrm{e}^{TA}-I\big)A^{-1},
$$</span>
where <span class="math-container">$I$</span> is the identity matrix.</p>
<p><strong>Case II.</strong> If <span class="math-container">$A$</span> is singular, then using the Jordan form we can write <span class="math-container">$A$</span> as
<span class="math-container">$$
A=U^{-1}\left(\begin{matrix}B&0\\0&C\end{matrix}\right)U,
$$</span>
where <span class="math-container">$C$</span> is nonsingular, and <span class="math-container">$B$</span> is strictly upper triangular. Then
<span class="math-container">$$
\mathrm{e}^{tA}=U^{-1}\left(\begin{matrix}\mathrm{e}^{tB}&0\\0&\mathrm{e}^{tC}
\end{matrix}\right)U,
$$</span>
and
<span class="math-container">$$
\int_0^T\mathrm{e}^{tA}\,dt=U^{-1}\left(\begin{matrix}\int_0^T\mathrm{e}^{tB}dt&0\\0&C^{-1}\big(\mathrm{e}^{TC}-I\big)
\end{matrix}\right)U
$$</span>
But <span class="math-container">$\int_0^T\mathrm{e}^{tB}dt$</span> may have different expressions. For example if
<span class="math-container">$$
B_1=\left(\begin{matrix}0&0\\0&0\end{matrix}\right), \quad
B_2=\left(\begin{matrix}0&1\\0&0\end{matrix}\right),
$$</span>
then
<span class="math-container">$$
\int_0^T\mathrm{e}^{tB_1}dt=\left(\begin{matrix}T&0\\0&T\end{matrix}\right), \quad
\int_0^T\mathrm{e}^{tB_2}dt=\left(\begin{matrix}T&T^2/2\\0&T\end{matrix}\right).
$$</span></p>
| <p>The general formula is the power series</p>
<p>$$ \int_0^T e^{At} dt = T \left( I + \frac{AT}{2!} + \frac{(AT)^2}{3!} + \dots + \frac{(AT)^{n-1}}{n!} + \dots \right) $$</p>
<p>Note that also</p>
<p>$$ \left(\int_0^T e^{At} dt \right) A + I = e^{AT} $$</p>
<p>is always satisfied.</p>
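<p>Both identities are easy to confirm numerically; a sketch (assuming scipy's <code>expm</code> and <code>quad_vec</code> are available):</p>
<pre><code>import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

T = 1.5
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
J, _ = quad_vec(lambda t: expm(A * t), 0.0, T)        # J = integral of e^{At} over [0, T]
print(np.allclose(J @ A + np.eye(2), expm(A * T)))    # True
print(np.allclose(J, (expm(A * T) - np.eye(2)) @ np.linalg.inv(A)))  # True (this A is invertible)
</code></pre>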
<p>A sufficient condition for this matrix to be non-singular comes from the so-called Kalman-Ho-Narendra theorem, which states that the matrix $\int_0^T e^{At} dt$ is invertible if</p>
<p>$$ T(\mu - \lambda) \neq 2k \pi i $$</p>
<p>for any nonzero integer $k$, where $\lambda$ and $\mu$ are any pair of eigenvalues of $A$.</p>
<p>Note to the interested: This matrix also comes from the discretization of a continuous linear time invariant system. It can also be said that controllability is preserved under discretization if and only if this matrix has an inverse.</p>
|
logic | <p>There are many classic textbooks in <strong>set</strong> and <strong>category theory</strong> (as possible foundations of mathematics), among many others Jech's, Kunen's, and Awodey's.</p>
<blockquote>
<p>Are there comparable classic textbooks in <strong>type theory</strong>, introducing and motivating their matter in a generally agreed upon manner from the ground up and covering the whole field, essentially?</p>
</blockquote>
<p>If not so: why?</p>
| <p>Although not as comprehensive a textbook as, say, Jech's classic book on set theory, Jean-Yves Girard's <a href="http://www.paultaylor.eu/stable/Proofs+Types"><em>Proofs and Types</em></a> is an excellent starting point for reading about type theory. It's freely available from translator Paul Taylor's website as a PDF. Girard does assume some knowledge of the lambda calculus; if you need to learn this too, I recommend Hindley and Seldin's <a href="http://www.cambridge.org/gb/knowledge/isbn/item1175709/?site_locale=en_GB"><em>Lambda-Calculus and Combinators: An Introduction</em></a>.</p>
<p>As others have mentioned, Martin-Löf's <em>Intuitionistic Type Theory</em> would then be a good next step.</p>
<p>A different approach would be to read Benjamin Pierce's wonderful textbook, <a href="http://www.cis.upenn.edu/~bcpierce/tapl/"><em>Types and Programming Languages</em></a>. This is oriented towards the practical aspects of understanding types in the context of writing programming languages, rather than purely its mathematical characteristics or foundational promise, but nonetheless it's a very clear and well-written book, with numerous exercises.</p>
<p>The bibliography provided by the <a href="http://plato.stanford.edu/entries/type-theory/">Stanford Encyclopedia of Philosophy entry on type theory</a> is quite extensive, and might provide alternative avenues for your research.</p>
| <p>There are two main settings in which I see type theory as a foundational system.</p>
<p>The first is intuitionistic type theory, particularly the system developed by Martin-Löf. The book <em>Intuitionistic Type Theory</em> (1980) seems to be floating around the internet. </p>
<p>The other setting is second-order (and higher-order) arithmetic. Two main books on this are <em>Foundations without foundationalism</em> by Stewart Shapiro (1991) and <em>Subsystems of second order arithmetic</em> by Stephen Simpson (1999). A decent amount of constructive mathematics, for example the material in <em>Constructive Analysis</em> by Bishop and Bridges (1985), can also be formalized directly in constructive higher-order arithmetic, however the taste of many constructivists is to avoid doing this. </p>
|
logic | <p>What are some good online/free resources (tutorials, guides, exercises, and the like) for learning Lambda Calculus?</p>
<p>Specifically, I am interested in the following areas:</p>
<ul>
<li>Untyped lambda calculus</li>
<li>Simply-typed lambda calculus</li>
<li>Other typed lambda calculi</li>
<li>Church's Theory of Types (I'm not sure where this fits in).</li>
</ul>
<p>(As I understand, this should provide a solid basis for the understanding of type theory.)</p>
<p>Any advice and suggestions would be appreciated.</p>
| <p><img src="https://i.sstatic.net/8E8Sp.png" alt="alligators"></p>
<p><a href="http://worrydream.com/AlligatorEggs/" rel="noreferrer"><strong>Alligator Eggs</strong></a> is a cool way to learn lambda calculus.</p>
<p>Learning functional programming languages such as Scheme or Haskell is also a fun way to build intuition; see the small sketch below.</p>
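<p>As a tiny illustration of the untyped lambda calculus (my own sketch, not taken from the linked resources), Church numerals can be written directly as Python lambdas:</p>

```python
# Church numerals: natural numbers encoded as higher-order functions,
# a standard first exercise in untyped lambda calculus.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting how often f is applied."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
```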
| <p>Recommendations:</p>
<ol>
<li>Barendregt & Barendsen, 1998, <a href="https://www.academia.edu/18746611/Introduction_to_lambda_calculus" rel="noreferrer">Introduction to lambda-calculus</a>;</li>
<li>Girard, Lafont & Taylor, 1987, <a href="http://www.paultaylor.eu/stable/Proofs+Types.html" rel="noreferrer">Proofs and Types</a>;</li>
<li>Sørenson & Urzyczyn, 1999, <a href="https://disi.unitn.it/%7Ebernardi/RSISE11/Papers/curry-howard.pdf" rel="noreferrer">Lectures on the Curry-Howard Isomorphism</a>.</li>
</ol>
<p>All of these are mentioned in <a href="http://lambda-the-ultimate.org/node/492" rel="noreferrer">the LtU Getting Started thread</a>.</p>
|
differentiation | <p>I've got this task I'm not able to solve. So I need to find the 100th derivative of $$f(x)=e^{x}\cos(x)$$ at $x=\pi$.</p>
<p>I've tried using Leibniz's formula but it got me nowhere, induction doesn't seem to help either, so if you could just give me a hint, I'd be very grateful.</p>
<p>Many thanks!</p>
| <p>HINT:</p>
<p>$e^x\cos x$ is the real part of $y=e^{(1+i)x}$</p>
<p>As $1+i=\sqrt2e^{i\pi/4}$</p>
<p>$y_n=(1+i)^ne^{(1+i)x}=2^{n/2}e^x\cdot e^{i(n\pi/4+x)},$ where $y_n$ denotes the $n$th derivative of $y$.</p>
<p>Can you take it from here?</p>
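<p>For readers who want to check the result, here is a small SymPy verification I added (not part of the original hint); it confirms $f^{(100)}(\pi)=2^{50}e^\pi$:</p>

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.exp(x) * sp.cos(x)
n = 100

# Closed form from the hint: f^(n)(x) = Re((1+i)^n e^{(1+i)x}), at x = pi.
closed = sp.re(sp.expand_complex((1 + sp.I)**n * sp.exp((1 + sp.I) * sp.pi)))

# Direct n-th derivative, evaluated at pi.
direct = sp.diff(f, x, n).subs(x, sp.pi)

print(sp.simplify(direct - closed))  # 0
print(sp.simplify(direct))           # 1125899906842624*exp(pi), i.e. 2**50 * e**pi
```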
| <p>Compute the lower-order derivatives first:</p>
<p>\begin{align}
f'(x)&=e^x (\cos x -\sin x)\\
f''(x)&=e^x(\cos x -\sin x -\sin x -\cos x) = -2e^x\sin x\\
f'''(x)&=-2e^x(\sin x + \cos x)\\
f''''(x)&= -2e^x(\sin x + \cos x + \cos x -\sin x) = -4e^x \cos x = -4f(x)
\end{align}</p>
<p>Since $f^{(4)}=-4f$, differentiating in blocks of four gives $f^{(100)}=(-4)^{25}f=-4^{25}f$. With $f(\pi)=e^\pi\cos\pi=-e^\pi$, this yields $$f^{(100)}(\pi)=-4^{25}f(\pi)=4^{25}e^\pi=2^{50}e^\pi.$$</p>
|
logic | <p>I would like to know more about the <em>foundations of mathematics</em>, but I can't really figure out where it all starts. If I look in a book on <em><a href="http://rads.stackoverflow.com/amzn/click/0387900500">axiomatic set theory</a></em>, then it seems to be assumed that one has already learned about <em>languages</em>. If I look in a book about <a href="http://books.google.com/books?id=2sCuDMUruSUC&printsec=frontcover&dq=logic%20structure&hl=en&sa=X&ei=ByejT9j_MOjA2gXS8Ngl&ved=0CDAQ6AEwAA#v=onepage&q=logic%20structure&f=false">logic and structure</a>, it seems that it is assumed that one has already learned about set theory. And some books seem to assume a philosophical background. So where does it all start? </p>
<p>Where should I start if I really wanted to go back to the <em>beginning</em>? </p>
<p>Is it possible to make a bullet point list with where one start? For example:</p>
<ul>
<li>Logic</li>
<li>Language</li>
<li>Set theory</li>
</ul>
<p><strong>EDIT:</strong> I should have said that I was not necessarily looking for a soft or naive introduction to logic or set theory. What I am wondering is, where it starts. So for example, it seems like predicate logic comes before set theory. Is it even possible to say that something comes first?</p>
| <p>There are different ways to build a foundation for mathematics, but I think the closest to being the current "standard" is:</p>
<ul>
<li><p>Philosophy (optional)</p></li>
<li><p><a href="http://en.wikipedia.org/wiki/Propositional_logic">Propositional logic</a></p></li>
<li><p><a href="http://en.wikipedia.org/wiki/First-order_logic">First-order logic</a> (a.k.a. "<a href="http://en.wikipedia.org/wiki/Predicate_logic">predicate logic</a>")</p></li>
<li><p>Set theory (specifically, <a href="http://en.wikipedia.org/wiki/ZFC">ZFC</a>)</p></li>
<li><p>Everything else</p></li>
</ul>
<p>When rigorously followed (e.g., in a <a href="http://en.wikipedia.org/wiki/Hilbert_system">Hilbert system</a>), classical logic does not depend on set theory in any way (rather, it's the other way around), and I believe the only use of languages in low-level theories is to prove things <em>about</em> the theories (e.g., the <a href="http://en.wikipedia.org/wiki/Deduction_theorem">deduction theorem</a>) rather than <em>in</em> the theories. (While proving such metatheorems can make your work easier afterwards, it is not strictly necessary.)</p>
| <p>I strongly urge you to look at Goldrei [9] and Goldrei [10]. I learned about these books by chance in Fall 2011. Among foundational books, I think Goldrei's must rate among the best I've ever come across relative to how little known they are. In particular, Goldrei [10] has been invaluable to me for some things I was working on a few months ago.</p>
<p>In case my personal situation could be of some help, in what follows I'll outline the approach I've been taking for what you asked about.</p>
<p>I too am trying to improve my understanding of ground-level foundational matters, at least I was this past Fall and Winter. (During the past few months I've been spending all my free time on something else, which is related to a subject taken by some students I've been tutoring.) I started with Lemmon's book [1], which was the text for a philosophy department's beginning symbolic logic course I took in 1979 (but I'd forgotten much of the material), and I very carefully read the text material and pretty much worked every single problem in the book.</p>
<p>After this I began reading/working through Mates [2], which was the standard beginning graduate level philosophy symbolic logic text where I was an undergraduate (but when I took the class, also in 1979, the instructor used a different text). However, I quickly decided that I was wasting my time because I had zero interest in many of the topics Mates [2] deals with and it was becoming clear to me that, after my extensive work with Lemmon [1], I could easily skip Mates [2] and proceed to something at the "next level".</p>
<p>I then began Hamilton [3]. I got through the first couple of chapters, doing all the exercises (propositional logic), and then I decided to take a temporary detour and study Hilbert-style (non-standard) propositional calculus a little more deeply before continuing into Hamilton's predicate calculus chapter. I spent about 10 weeks on this, and have a nearly finished manuscript of 50+ pages on how I think the subject should be presented, motivated by what seem to me to be major pedagogical shortcomings in the existing literature, especially in Hamilton's book. (Goldrei [10], which I didn't discover until later, is an exception.) In this regard, see my answer at [11]. However, at the start of the Spring 2012 semester I had to stop because some students I was tutoring in Fall 2011 wanted me to work with them this semester in a subject that I needed a lot of brush-up with (vector calculus). (I work full time, not teaching, so I have a limited amount of free time to devote to math.)</p>
<p>My intent is to return to Hamilton [3], a book I've had for over 20 years and have always wanted to work through. After Hamilton's book, I'm thinking I'll quickly work through Machover [4], which should be easy as I've already read through much of Machover's book at this point. After these "preliminaries", my goal is to very carefully work through Boolos/Burgess/Jeffrey [5], a (later edition of a) book I actually had a reading course out of in Spring 1990 but, due to other issues at the time, I wasn't able to do much justice to and I feel bad about it to this day.</p>
<p>After this (or perhaps at the same time), I intend to very carefully work through Enderton [6], a book that was strongly recommended to me back in 1986 when I was in a graduate program (different from 1990) with the intention of doing research in either descriptive set theory or in set-theoretic topology, but I had to leave after not passing my Ph.D. exams (two tries).</p>
<p>I have several other logic books, but probably the most significant for possible future study, should I continue, are Ebbinghaus/Flum/Thomas [7] and van Dalen [8]. Each of these is approximately the same level as Boolos/Burgess/Jeffrey [5] and Enderton [6], but they appear to offer more emphasis on some topics (e.g. model theory and intuitionism).</p>
<p>Everything I've mentioned is mathematical logic because set theory (naive set theory, at least) is something I've picked up a lot of in other math courses and on my own. What I'm really looking for is sufficient background in logic to understand and read about things like transitive models of ZF, forcing, etc.</p>
<p><strong>[1]</strong> E. J. Lemmon, <strong>Beginning Logic</strong> (1978)</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/0915144506" rel="noreferrer" rel="nofollow noreferrer">http://www.amazon.com/dp/0915144506</a></p>
<p><strong>[2]</strong> Benson Mates, <strong>Elementary Logic</strong> (1972)</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/019501491X" rel="noreferrer" rel="nofollow noreferrer">http://www.amazon.com/dp/019501491X</a></p>
<p><strong>[3]</strong> A. G. Hamilton, <strong>Logic for Mathematicians</strong> (1988)</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/0521368650" rel="noreferrer" rel="nofollow noreferrer">http://www.amazon.com/dp/0521368650</a></p>
<p><strong>[4]</strong> Moshe Machover, <strong>Set Theory, Logic and their Limitations</strong> (1996)</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/0521479983" rel="noreferrer" rel="nofollow noreferrer">http://www.amazon.com/dp/0521479983</a></p>
<p><strong>[5]</strong> George S. Boolos, John P. Burgess, and Richard C. Jeffrey, <strong>Computability and Logic</strong> (2007)</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/0521701465" rel="noreferrer" rel="nofollow noreferrer">http://www.amazon.com/dp/0521701465</a></p>
<p><strong>[6]</strong> Herbert Enderton, <strong>A Mathematical Introduction to Logic</strong> (2001)</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/0122384520" rel="noreferrer" rel="nofollow noreferrer">http://www.amazon.com/dp/0122384520</a></p>
<p><strong>[7]</strong> H.-D. Ebbinghaus, J. Flum, and W. Thomas, <strong>Mathematical Logic</strong> (1994)</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/0387942580" rel="noreferrer" rel="nofollow noreferrer">http://www.amazon.com/dp/0387942580</a></p>
<p><strong>[8]</strong> Dirk van Dalen, <strong>Logic and Structure</strong> (2008)</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/3540208798" rel="noreferrer" rel="nofollow noreferrer">http://www.amazon.com/dp/3540208798</a></p>
<p><strong>[9]</strong> Derek C. Goldrei, <strong>Classic Set Theory for Guided Independent Study</strong> (1996)</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/0412606100" rel="noreferrer" rel="nofollow noreferrer">http://www.amazon.com/dp/0412606100</a></p>
<p><strong>[10]</strong> Derek C. Goldrei, <strong>Propositional and Predicate Calculus: A Model of Argument</strong> (2005)</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/1852339217" rel="noreferrer" rel="nofollow noreferrer">http://www.amazon.com/gp/product/1852339217</a></p>
<p><strong>[11]</strong> <a href="https://math.stackexchange.com/a/94483/13130">Proving <span class="math-container">$(p \to (q \to r)) \to ((p \to q) \to (p \to r))$</span></a></p>
|
This dataset contains questions and answers on various topics within mathematics, with columns for tags, question bodies, accepted answers, and secondary answers. It is useful for research and development in natural language processing (NLP), machine learning (ML), and educational applications.
The dataset covers a variety of mathematical topics, including linear algebra, logic, probability, and differentiation.
The dataset is structured into the following columns:

- `tag`: The category or topic of the question.
- `question_body`: The full text of the question.
- `accepted_answer`: The accepted answer to the question.
- `second_answer`: An additional answer to the question, providing another perspective or solution.

Example rows:

| tag | question_body | accepted_answer | second_answer |
|---|---|---|---|
| probability | What is the probability of getting heads in a coin toss? | The probability is 0.5, as there are two possible outcomes. | The probability of heads is 50%. |
| linear-algebra | How do you find the determinant of a matrix? | To find the determinant, you can use the Laplace expansion. | You can also use row reduction. |
The dataset can be used for various purposes, including NLP research, machine learning model training and evaluation, and educational applications.
To access the dataset, download the files from the repository, or load it programmatically as sketched below.
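As a minimal sketch, the dataset can be loaded with the Hugging Face `datasets` library (the repo id comes from this card; the `"train"` split name is an assumption):

```python
from datasets import load_dataset

# Repo id from this card; the "train" split name is an assumption.
ds = load_dataset("blesspearl/math-stackexchange", split="train")

# Each row has the four columns described above.
row = ds[0]
print(row["tag"])
print(row["question_body"][:200])
```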