matrices
<p>I saw this tucked away in a <a href="https://mathoverflow.net/questions/15050">MathOverflow comment</a> and am asking this question to preserve (and advertise?) it. It's a nice problem!</p> <p><strong>Problem:</strong> Suppose $A$ and $B$ are real $n\times n$ matrices with $A^2+B^2=AB$. If $AB-BA$ is invertible, prove $n$ is a multiple of $3$.</p>
<p>Let $\omega$ be a primitive third root of unity, so that $1+\omega+\omega^2=0$. Then $$(A+\omega B)(A+\omega^2B)=A^2+B^2+\omega^2AB+\omega BA=(1+\omega^2)AB+\omega BA=\omega(BA-AB),$$ using $A^2+B^2=AB$ in the middle step. Since $A$ and $B$ are real, $A+\omega^2B=\overline{A+\omega B}$, so $\det\bigl((A+\omega B)(A+\omega^2B)\bigr)$ $=$ $\det(A+\omega B)\det(A+\omega^2B)$ $=$ $\det(A+\omega B)\det(\overline{A+\omega B})$ $=$ $\det(A+\omega B)\overline{\det(A+\omega B)}$ is a real number; that is, $\omega^n\det(BA-AB)$ is a real number. Since $\det(BA-AB)\neq 0$, we get that $\omega^n$ is a real number, and this happens iff $3\mid n$.</p>
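The expansion above is easy to sanity-check numerically (a quick sketch of my own, not part of the original answer): for <em>arbitrary</em> real matrices $A$ and $B$, one has $(A+\omega B)(A+\omega^2 B)=(A^2+B^2-AB)+\omega(BA-AB)$, so the hypothesis $A^2+B^2=AB$ makes the first term vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # arbitrary size; the identity below holds for any real A, B
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
w = np.exp(2j * np.pi / 3)  # primitive third root of unity

# (A + wB)(A + w^2 B) = (A^2 + B^2 - AB) + w(BA - AB), using 1 + w + w^2 = 0
lhs = (A + w * B) @ (A + w**2 * B)
rhs = (A @ A + B @ B - A @ B) + w * (B @ A - A @ B)
assert np.allclose(lhs, rhs)
```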
<p><strong>I.</strong> Let $k$ be a field and consider the algebra $\def\A{\mathcal A}\A_k=k\langle a,b:a^2+b^2-ab\rangle$. We have $$ aaa = aab - abb = abb - bbb - abb = - bbb $$ and $$ aaa = aba - bba, $$ so that $$ aba = bba - bbb. $$ Let $x=ab-ba$. Then \begin{gather} ax = aab-aba = abb - bbb - bba + bbb = abb - bba, \\ bx = bab-bba, \\ xa = aba - baa = bba - bbb - bab + bbb = bba - bab, \\ xb = abb - bab, \\ \end{gather} so that in fact \begin{gather} ax = x(-a+b), \\ bx = x(-a). \end{gather} There is an automorphism $\sigma:\A_k\to\A_k$ such that $\sigma(a)=-a+b$ and $\sigma(b)=-a$, and the two equations above immediately imply that $$ \text{$ux = x\sigma(u)$ for all $u\in\A_k$.}$$ Since $\sigma^3=\mathrm{id}_{\A_k}$, it follows that $x^3$ is central in $\A_k$.</p> <p>If $V$ is an $\A_k$-module, let us write $m_u:v\in V\mapsto uv\in V$.</p> <p><strong>II.</strong> Let $V$ be a simple finite dimensional left $\def\CC{\mathbb C}\A_\CC$-module, and let us suppose that $m_x$ is invertible. The map $m_{x^3}$ is an endomorphism of $V$ as a module because $x^3$ is central in $\A_\CC$, so Schur's lemma tells us that there exists a $\lambda\in\CC$ such that $m_{x^3}=\lambda\cdot\mathrm{id}_V$. Since $m_x$ is invertible, $\lambda\neq0$, and there exists a $\mu\in\CC\setminus0$ such that $\mu^3=\lambda$. Let $y=x/\mu$, so that $m_y^3=m_{y^3}=\mathrm{id}_V$. </p> <p>It follows that the eigenvalues of $m_{y}$ are cube roots of unity. On the other hand, we have $m_y=[m_a,m_b]/\mu$, so that $\operatorname{tr}m_y=0$. Now, the only linear combinations with integer coefficients of the three cube roots of unity which vanish are those in which the three roots have the same coefficient. Since $\operatorname{tr} m_y$ is the sum of the eigenvalues of $m_y$ taken with multiplicity, we conclude that the three roots of unity have the same multiplicity as eigenvalues of $m_y$. 
This is only possible if $\dim V$ is divisible by 3.</p> <p><strong>III.</strong> More generally, let now $V$ be a finite dimensional $\A_\CC$-module such that the map $v\in V\mapsto xv\in V$ is invertible. Since $V$ has finite dimension, it has finite length as an $\A_\CC$-module, and it is an iterated extension of finitely many simple $\A_\CC$-modules. Each of these modules has the property that $x$ acts bijectively on it, so we know their dimension is divisible by $3$. It follows then that the dimension of $V$ itself is divisible by $3$.</p> <p><strong>IV.</strong> Let now $V$ be a finite dimensional $\def\RR{\mathbb R}\A_\RR$-module on which $x$ acts bijectively. Then $\CC\otimes_\RR V$ is a finite dimensional $\CC\otimes_\RR\A_\RR$-module. Since $\CC\otimes_\RR\A_\RR$ is obviously isomorphic to $\A_\CC$, we know $\dim_\CC\CC\otimes_\RR V$ is divisible by three, and then so is $\dim_\RR V$ because this is in fact equal to $\dim_\CC\CC\otimes_\RR V$.</p> <p>This conclusion is precisely the one we sought.</p> <hr> <ul> <li><p>The conclusion reached in III is in fact stronger than the one in IV.</p></li> <li><p>All this looks weird, but it is very, very natural. A pair of matrices satisfying the relation $A^2+B^2=AB$ gives a representation of the algebra $\A_k$. It is natural to try to obtain a basis for $\A_k$, at the very least, and this is usually done using Bergman's diamond lemma. The first part of the computation done in <strong>I</strong> is the start of that.</p></li> <li><p>Next, we are not interested in all $\A_k$-modules but only in those in which $X=[A,B]$ is invertible. This is the same as the modules which are actually modules over the localization of $\A_k$ at $x=[a,b]$. Computing in such a localization is a pain, <em>unless</em> the element at which we are localizing is normal: one is thus motivated to check for normality. It turns out that $x$ is normal; there is then an endomorphism of the algebra attached to it, which is $\sigma$. 
One immediately recognizes $\sigma$ as being given by a matrix of order $3$. The rest is standard representation theory.</p></li> <li><p>In a way, this actually gives a <em>method</em> for solving problems of this sort. Kevin asked on MO: «Are there really books that can teach you how to solve such problems??» and the answer is yes: this is precisely what representation theory is about!</p></li> </ul>
differentiation
<p><strong>Question:</strong> If there exists a covariant derivative, then why doesn't there also exist a "contravariant derivative"? Why are all or most forms of differentiation "covariant", or rather why do all or most forms of differentiation transform covariantly? What aspect of differentiation makes it intrinsically "covariant" and intrinsically "<em>not</em> contravariant"? Why isn't the notion of differentiation agnostic to "co/contra-variance"?</p> <p><strong>Motivation:</strong><br> To me it is unclear (on an intuitive, i.e. stupid/lazy, level) how notions of differentiation could be restrained to being either "covariant" or "contravariant", since any notion of differentiation should be linear*, and the dual of any vector space is exactly as linear as the original vector space, i.e. vector space operations in the dual vector space still commute with linear functions and operators, the same way they commute with such linear objects in the original vector space.</p> <p>So to the extent that the notion of linearity is "agnostic" to whether we are working with objects from a vector space or from its dual vector space, I would have expected any notion of differentiation to be similarly "agnostic". 
Perhaps a better word would be "symmetric" -- naively, I would have expected that if a notion of "covariant differentiation" exists, then a notion of "contravariant differentiation" should also exist, because naively I would have expected one to exist if and only if the other exists.</p> <p>However, it appears that no such thing as "contravariant derivative" exists (<a href="https://math.stackexchange.com/questions/1069/intuitive-explanation-of-covariant-contravariant-and-lie-derivatives">see here on Math.SE</a>, also these two posts <a href="https://www.physicsforums.com/threads/contravariant-derivative.105662/" rel="noreferrer">[a]</a><a href="https://www.physicsforums.com/threads/contravariant-derivative.214966/" rel="noreferrer">[b]</a> on PhysicsForums), whereas obviously a notion of "covariant derivative" is used very frequently and profitably in differential geometry. Even differential operators besides the so-called "covariant derivative" seemingly transform covariantly, see <a href="https://physics.stackexchange.com/questions/126740/gradient-is-covariant-or-contravariant">this post</a> for a discussion revolving around this property for the gradient. I don't understand why this is the case.</p> <p>(* I think)</p>
<p>The "covariant" in "covariant derivative" should really be "invariant". It is a misnomer, but we are stuck with it. It is not the same "covariant" as that of a "covariant vector", and therefore, there is no "contravariant derivative". Armed with this, Wikipedia should fill in the rest for you :) </p>
<p>The reason why the covariant derivative makes a $(p,q+1)$-type tensor field out of a $(p,q)$ type tensor field is because for a tensor field $T$, $\nabla T$ is defined as $$ \nabla T(X,\text{filled arguments})=\nabla_XT(\text{filled arguments}), $$ and $\nabla_XT$ is $C^\infty(M)$-linear in $X$, so this relation defines a covariant tensor field - one that acts on vector fields. But <em>why</em> does it need to be so?</p> <p>The geometric answer is that a covariant derivative is essentially a representation for a Koszul or principal connection, a device that allows for parallel transport of bundle data along curves. The reason it takes in vectors is because vectors are intrinsically tied to curves on your manifold. If your covariant derivative took in 1-forms as the directional argument instead of vectors, it would not represent a connection, because there is no way to canonically tie together curves and 1-forms without a tool like a metric tensor or a symplectic form.</p>
differentiation
<p>They are asking me to prove $$\sin(x) &gt; x - \frac{x^3}{3!},\; \text{for} \, x \, \in \, \mathbb{R}_{+}^{*}.$$ I didn't understand how to approach this kind of problem, so here is how I tried: I want to show $\sin(x) - x + \frac{x^3}{6} &gt; 0$. Then I computed the derivative of that function to determine the critical points: $\left(\sin(x) - x + \frac{x^3}{6}\right)' = \cos(x) -1 + \frac{x^2}{2}$. The critical points: $\cos(x) -1 + \frac{x^2}{2} = 0$. It seems that <code>x = 0</code> is a critical point. Since $\left(\cos(x) -1 + \frac{x^2}{2}\right)' = -\sin(x) + x$ and $-\sin(0) + 0 = 0$, the function has no local minima and maxima. Since the derivative of the function is positive, the function is strictly increasing, so the lowest value is <code>f(0)</code>. Since <code>f(0) = 0</code>, I proved that $\sin(x) - x + \frac{x^3}{6} &gt; 0$ for $x &gt; 0$. I'm not sure if this solution is right. And, in general, how do you tackle this kind of problem?</p>
<p>A standard approach is to let $f(x)=\sin x-\left(x-\frac{x^3}{3!}\right)$, and to show that $f(x)\gt 0$ if $x\gt 0$. </p> <p>Note that $f(0)=0$. We will be finished if we can show that $f(x)$ is <em>increasing</em> in the interval $(0,\infty)$.</p> <p>Note that $f'(x)=\cos x-1+\frac{x^2}{2!}$. We will be finished if we can show that $f'(x)\gt 0$ in the interval $(0,\infty)$.</p> <p>Note that $f'(0)=0$. We will be finished if we can show that $f'(x)$ is increasing in $(0,\infty)$.</p> <p>So we will be finished if we can prove that $f''(x)\gt 0$ in the interval $(0,\infty)$.</p> <p>We have $f''(x)=-\sin x+x$. Since $f''(0)=0$, we will be finished if we can show that $f'''(x)\ge 0$ on $(0,\infty)$, with equality only at isolated points. This is true. </p> <p>Or else for the last step we can use the geometrically evident fact that $\frac{\sin x}{x}\lt 1$ if $x\gt 0$. </p> <p><strong>Remark:</strong> It is more attractive to integrate than to differentiate, but we used the above approach because differentiation comes before integration in most calculus courses. </p> <p>For the integration approach, let $x$ be positive. Since $\sin t\lt t$ on $(0,x)$, we have $\int_0^x (t-\sin t)\,dt\gt 0$. Integrate. We get $\cos x+\frac{x^2}{2}-1\gt 0$ (Mean Value Theorem for integrals), so $\cos t+\frac{t^2}{2}-1\gt 0$ if $t\gt 0$. </p> <p>Integrate from $0$ to $x$. We get $\sin x+\frac{x^3}{3!}-x\gt 0$, or equivalently $\sin x\gt x-\frac{x^3}{3!}$. Nicer, by a lot.</p>
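The chain of inequalities above is easy to spot-check numerically (a quick sketch of my own; the grid endpoints are arbitrary choices): $f$, $f'$ and $f''$ should all be positive on $(0,\infty)$.

```python
import numpy as np

x = np.linspace(0.01, 20, 2000)

f = np.sin(x) - (x - x**3 / 6)   # f(x) = sin x - (x - x^3/3!)
fp = np.cos(x) - 1 + x**2 / 2    # f'(x)
fpp = -np.sin(x) + x             # f''(x)

# all three are positive for x > 0, as the argument requires
assert np.all(f > 0) and np.all(fp > 0) and np.all(fpp > 0)
```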
<p>Take a decreasing sequence of positive real numbers $a_n$ such that $a_n\to 0$.</p> <p>Now, consider the sequence $b_k=\sum_{n=1}^k (-1)^{n-1}a_n$. The alternating series criterion guarantees that it converges to some $b$.</p> <p>Note that $b_1=a_1$, $b_2=b_1-a_2\in(0,b_1)$, $b_3=b_2+a_3\in(b_2,b_1)$, etc. So the limit $b$ is less than the terms $b_{2k+1}$ and greater than the terms $b_{2k}$.</p> <p>Then, if $x&lt;\sqrt 6$, $$\sin x=\sum_{n=1}^\infty(-1)^{n-1} \frac{x^{2n-1}}{(2n-1)!}&gt;\sum_{n=1}^2(-1)^{n-1} \frac{x^{2n-1}}{(2n-1)!}=x-\frac{x^3}{3!}$$</p> <p>If $x\geq\sqrt 6$, the function $f(x)=x-x^3/6$ is decreasing and $f(\sqrt 6)=0$, so $f(x)&lt;0$ for $x&gt;\sqrt 6$. Since $\sin x&gt;0$ for $0&lt;x&lt;\pi$, we have that $\sin x&gt;f(x)$ for $0&lt;x&lt;\pi$. (Note that $\sqrt 6&lt;\pi$).</p> <p>Lastly, for $x\geq \pi$, $\sin x\geq -1$ and $f(x)\leq f(\pi)&lt;f(3)=3-4.5&lt;-1$.</p>
number-theory
<p>For example how come $\zeta(2)=\sum_{n=1}^{\infty}n^{-2}=\frac{\pi^2}{6}$. It seems counter intuitive that you can add numbers in $\mathbb{Q}$ and get an irrational number.</p>
<p>But for example $$\pi=3+0.1+0.04+0.001+0.0005+0.00009+0.000002+\cdots$$ and that surely does not seem strange to you...</p>
<p>You can't add an infinite number of rational numbers. What you can do, though, is find a limit of a sequence of partial sums. So, $\pi^2/6$ is the limit to infinity of the sequence $1, 1 + 1/4, 1 + 1/4 + 1/9, 1 + 1/4 + 1/9 + 1/16, \ldots $. Writing it so that it looks like a sum is really just a shorthand.</p> <p>In other words, $\sum^\infty_{i=1} \cdots$ is actually kind of an abbreviation for $\lim_{n\to\infty} \sum^n_{i=1} \cdots$.</p>
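To make the shorthand concrete, here is a quick numerical illustration (my own sketch): each partial sum $\sum_{n=1}^N n^{-2}$ is a rational number, yet the sequence of partial sums approaches the irrational limit $\pi^2/6$, with a tail of order $1/N$.

```python
import math

partial = 0.0
for n in range(1, 100_001):
    partial += 1.0 / (n * n)

# the tail sum_{n > N} 1/n^2 is about 1/N, so with N = 10^5 we are within ~1e-5
assert abs(partial - math.pi**2 / 6) < 1e-4
```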
probability
<p>My friend gave me this puzzle:</p> <blockquote> <p>What is the probability that a point chosen at random from the interior of an equilateral triangle is closer to the center than any of its edges? </p> </blockquote> <hr> <p>I tried to draw the picture and I drew a smaller (concentric) equilateral triangle with half the side length. Since area is proportional to the square of side length, this would mean that the smaller triangle had $1/4$ the area of the bigger one. My friend tells me this is wrong. He says I am allowed to use calculus but I don't understand how geometry would need calculus. Thanks for help.</p>
<p>You are right to think of the probabilities as areas, but the set of points closer to the center is not a triangle. It's actually a weird shape with three curved edges, and the curves are parabolas. </p> <hr> <p>The set of points equidistant from a line $D$ and a fixed point $F$ is a parabola. The point $F$ is called the focus of the parabola, and the line $D$ is called the directrix. You can read more about that <a href="https://en.wikipedia.org/wiki/Conic_section#Eccentricity.2C_focus_and_directrix">here</a>.</p> <p>In your problem, if we think of the center of the triangle $T$ as the focus, then we can extend each of the three edges to give three lines that correspond to the directrices of three parabolas. </p> <p>Any point inside the area enclosed by the three parabolas will be closer to the center of $T$ than to any of the edges of $T$. The answer to your question is therefore the area enclosed by the three parabolas, divided by the area of the triangle. </p> <hr> <p>Let's call $F$ the center of $T$. Let $A$, $B$, $C$, $D$, $G$, and $H$ be points as labeled in this diagram:</p> <p><a href="https://i.sstatic.net/MTg9U.png"><img src="https://i.sstatic.net/MTg9U.png" alt="Voronoi diagram for a triangle and its center"></a></p> <p>The probability you're looking for is the same as the probability that a point chosen at random from $\triangle CFD$ is closer to $F$ than to edge $CD$. The green parabola is the set of points that are the same distance to $F$ as to edge $CD$.</p> <p>Without loss of generality, we may assume that point $C$ is the origin $(0,0)$ and that the triangle has side length $1$. Let $f(x)$ be the equation describing the parabola in green. </p> <hr> <p>By similarity, we see that $$\overline{CG}=\overline{GH}=\overline{HD}=1/3$$</p> <p>An equilateral triangle with side length $1$ has area $\sqrt{3}/4$, so that means $\triangle CFD$ has area $\sqrt{3}/12$. 
The sum of the areas of $\triangle CAG$ and $\triangle DBH$ must be four ninths of that, or $\sqrt{3}/27$.</p> <p>$$P\left(\text{point is closer to center}\right) = \displaystyle\frac{\frac{\sqrt{3}}{12} - \frac{\sqrt{3}}{27} - \displaystyle\int_{1/3}^{2/3} f(x) \,\mathrm{d}x}{\sqrt{3}/12}$$</p> <p>We know three points that the parabola $f(x)$ passes through. This lets us create a system of equations with three variables (the coefficients of $f(x)$) and three equations. This gives</p> <p>$$f(x) = \sqrt{3}x^2 - \sqrt{3}x + \frac{\sqrt{3}}{3}$$</p> <p>The <a href="http://goo.gl/kSMPmv">integral of this function from $1/3$ to $2/3$</a> is $$\int_{1/3}^{2/3} \left(\sqrt{3}x^2 - \sqrt{3}x + \frac{\sqrt{3}}{3}\right) \,\mathrm{d}x = \frac{5}{54\sqrt{3}}$$ </p> <hr> <p>This <a href="http://goo.gl/xEFB0s">gives our final answer</a> of $$P\left(\text{point is closer to center}\right) = \boxed{\frac{5}{27}}$$</p>
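The answer $5/27\approx 0.185$ can also be checked by Monte Carlo simulation (my own sketch; the sample size and seed are arbitrary): sample points uniformly in the triangle and compare the distance to the centroid with the distance to the nearest side.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

# sample uniformly in the triangle with vertices (0,0), (1,0), (1/2, sqrt(3)/2)
u, v = rng.random(N), rng.random(N)
flip = u + v > 1
u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
x = u + 0.5 * v
y = (np.sqrt(3) / 2) * v

cx, cy = 0.5, np.sqrt(3) / 6          # centroid of the triangle
d_center = np.hypot(x - cx, y - cy)

# perpendicular distances to the three side lines (all positive for interior points)
d_bottom = y
d_left = (np.sqrt(3) * x - y) / 2
d_right = (np.sqrt(3) * (1 - x) - y) / 2

closer = d_center < np.minimum(d_bottom, np.minimum(d_left, d_right))
p = closer.mean()
assert abs(p - 5 / 27) < 0.01  # exact value is 5/27 ~ 0.18519
```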
<p>In response to Benjamin Dickman's request for a solution without calculus, referring to dtldarek's nice diagram in Zubin Mukerjee's answer (with all areas relative to that of the triangle $FCD$):</p> <p>The points $A$ and $B$ are one third along the bisectors from $F$, so the triangle $FAB$ has area $\frac19$. The vertex $V$ of the parabola is half-way between $F$ and the side $CD$, so the triangle $VAB$ has width $\frac13$ of $FCD$ and height $\frac16$ of $FCD$ and thus area $\frac1{18}$. By <a href="https://en.wikipedia.org/wiki/The_Quadrature_of_the_Parabola">Archimedes' quadrature of the parabola</a> (long predating the advent of calculus), the area between $AB$ and the parabola is $\frac43$ of the area of $VAB$. Thus the total area in $FCD$ closer to $F$ than to $CD$ is</p> <p>$$ \frac19+\frac43\cdot\frac1{18}=\frac5{27}\;. $$</p> <p>P.S.: Like Dominic108's solution, this is readily generalized to a regular $n$-gon. Let $\phi=\frac\pi n$. Then the condition $FB=BH$, expressed in terms of the height $h$ of triangle $FAB$ relative to that of $FCD$, is</p> <p>$$ \frac h{\cos\phi}=1-h\;,\\ h=\frac{\cos\phi}{1+\cos\phi}\;. $$</p> <p>This is also the width of $FAB$ relative to that of $FCD$. The height of the arc of the parabola between $A$ and $B$ is $\frac12-h$. Thus, the proportion of the area of triangle $FCD$ that's closer to $F$ than to $CD$ is</p> <p>$$ h^2+\frac43h\left(\frac12-h\right)=\frac23h-\frac13h^2=\frac{2\cos\phi(1+\cos\phi)-\cos^2\phi}{3(1+\cos\phi)^2}=\frac13-\frac1{12\cos^4\frac\phi2}\;. $$</p> <p>This doesn't seem to take rational values except for $n=3$ and for $n\to\infty$, where the limit is $\frac13-\frac1{12}=\frac14$, the value for the circle.</p>
geometry
<p>How do I find a vector perpendicular to a vector like this: $$3\mathbf{i}+4\mathbf{j}-2\mathbf{k}?$$ Could anyone explain this to me, please?</p> <p>I have a solution to this when I have $3\mathbf{i}+4\mathbf{j}$, but could not solve if I have $3$ components...</p> <p>When I googled, I saw the direct solution but did not find a process or method to follow. Kindly let me know the way to do it. Thanks.</p>
<p>There are infinitely many vectors in three dimensions that are perpendicular to a given one. They need only satisfy the following equation: $$(3\mathbf{i}+4\mathbf{j}-2\mathbf{k}) \cdot v=0$$</p> <p>To find all of them, just choose two perpendicular vectors, like $v_1=(4\mathbf{i}-3\mathbf{j})$ and $v_2=(2\mathbf{i}+3\mathbf{k})$; then any linear combination of them is also perpendicular to the original vector: $$v=((4a+2b)\mathbf{i}-3a\mathbf{j}+3b\mathbf{k}) \hspace{10 mm} a,b \in \mathbb{R}$$</p>
<p>Take the cross product with any vector that is not parallel to it. You will get one such vector.</p>
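Both answers are easy to check numerically (a small sketch of my own with NumPy; the second vector $(1,0,0)$ is an arbitrary non-parallel choice):

```python
import numpy as np

v = np.array([3.0, 4.0, -2.0])

# the two particular perpendicular vectors from the first answer
v1 = np.array([4.0, -3.0, 0.0])
v2 = np.array([2.0, 0.0, 3.0])
assert np.dot(v, v1) == 0 and np.dot(v, v2) == 0

# cross product with any vector not parallel to v is perpendicular to v
w = np.cross(v, np.array([1.0, 0.0, 0.0]))
assert np.isclose(np.dot(v, w), 0)
assert not np.allclose(w, 0)  # nonzero because (1,0,0) is not parallel to v
```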
differentiation
<p>Could anyone explain in simple words (and maybe with an example) what the difference between the gradient and the Jacobian is? </p> <p>The gradient is a vector with the partial derivatives, right? </p>
<p>These are two particular forms of matrix representation of the derivative of a differentiable function <span class="math-container">$f,$</span> used in two cases:</p> <ul> <li>when <span class="math-container">$f:\mathbb{R}^n\to\mathbb{R},$</span> then for <span class="math-container">$x$</span> in <span class="math-container">$\mathbb{R}^n$</span>, <span class="math-container">$$\mathrm{grad}_x(f):=\left[\frac{\partial f}{\partial x_1}\;\frac{\partial f}{\partial x_2}\;\dots\;\frac{\partial f}{\partial x_n}\right]\!\bigg\rvert_x$$</span> is the <span class="math-container">$1\times n$</span> matrix of the linear map <span class="math-container">$Df(x)$</span> expressed from the canonical basis of <span class="math-container">$\mathbb{R}^n$</span> to the canonical basis of <span class="math-container">$\mathbb{R}$</span> (namely <span class="math-container">$(1)$</span>). Because in this case this matrix has only one row, you can think of it as the vector <span class="math-container">$$\nabla f(x):=\left(\frac{\partial f}{\partial x_1},\frac{\partial f}{\partial x_2},\dots,\frac{\partial f}{\partial x_n}\right)\!\bigg\rvert_x\in\mathbb{R}^n.$$</span> This vector <span class="math-container">$\nabla f(x)$</span> is the unique vector of <span class="math-container">$\mathbb{R}^n$</span> such that <span class="math-container">$Df(x)(y)=\langle\nabla f(x),y\rangle$</span> for all <span class="math-container">$y\in\mathbb{R}^n$</span> (see the <a href="https://en.wikipedia.org/wiki/Riesz_representation_theorem" rel="noreferrer" title="Riesz representation theorem">Riesz representation theorem</a>), where <span class="math-container">$\langle\cdot,\cdot\rangle$</span> is the usual scalar product <span class="math-container">$$\langle(x_1,\dots,x_n),(y_1,\dots,y_n)\rangle=x_1y_1+\dots+x_ny_n.$$</span> </li> <li>when <span class="math-container">$f:\mathbb{R}^n\to\mathbb{R}^m,$</span> then for <span class="math-container">$x$</span> in <span class="math-container">$\mathbb{R}^n$</span>, <span class="math-container">$$\mathrm{Jac}_x(f)=\left.\begin{bmatrix}\frac{\partial f_1}{\partial x_1}&amp;\frac{\partial f_1}{\partial x_2}&amp;\dots&amp;\frac{\partial f_1}{\partial x_n}\\\frac{\partial f_2}{\partial x_1}&amp;\frac{\partial f_2}{\partial x_2}&amp;\dots&amp;\frac{\partial f_2}{\partial x_n}\\ \vdots&amp;\vdots&amp;&amp;\vdots\\\frac{\partial f_m}{\partial x_1}&amp;\frac{\partial f_m}{\partial x_2}&amp;\dots&amp;\frac{\partial f_m}{\partial x_n}\\\end{bmatrix}\right|_x$$</span> is the <span class="math-container">$m\times n$</span> matrix of the linear map <span class="math-container">$Df(x)$</span> expressed from the canonical basis of <span class="math-container">$\mathbb{R}^n$</span> to the canonical basis of <span class="math-container">$\mathbb{R}^m.$</span></li> </ul> <p>For example, with <span class="math-container">$f:\mathbb{R}^2\to\mathbb{R}$</span> such as <span class="math-container">$f(x,y)=x^2+y$</span> you get <span class="math-container">$\mathrm{grad}_{(x,y)}(f)=[2x \,\,\,1]$</span> (or <span class="math-container">$\nabla f(x,y)=(2x,1)$</span>) and for <span class="math-container">$f:\mathbb{R}^2\to\mathbb{R}^2$</span> such as <span class="math-container">$f(x,y)=(x^2+y,y^3)$</span> you get <span class="math-container">$\mathrm{Jac}_{(x,y)}(f)=\begin{bmatrix}2x&amp;1\\0&amp;3y^2\end{bmatrix}.$</span></p>
<p>The gradient is the vector formed by the partial derivatives of a <em>scalar</em> function.</p> <p>The Jacobian matrix is the matrix formed by the partial derivatives of a <em>vector</em> function. Its vectors are the gradients of the respective components of the function.</p> <p>E.g., with some argument omissions,</p> <p><span class="math-container">$$\nabla f(x,y)=\begin{pmatrix}f'_x\\f'_y\end{pmatrix}$$</span></p> <p><span class="math-container">$$J \begin{pmatrix}f(x,y),g(x,y)\end{pmatrix}=\begin{pmatrix}f'_x&amp;&amp;g'_x\\f'_y&amp;&amp;g'_y\end{pmatrix}=\begin{pmatrix}\nabla f;\nabla g\end{pmatrix}.$$</span></p> <p>If you want, the Jacobian is a generalization of the gradient to vector functions.</p> <hr /> <p><strong>Addendum:</strong></p> <p>The first derivative of a scalar multivariate function, or gradient, is a vector,</p> <p><span class="math-container">$$\nabla f(x,y)=\begin{pmatrix}f'_x\\f'_y\end{pmatrix}.$$</span></p> <p>Thus the second derivative, which is the Jacobian of the gradient, is a matrix, called the <em>Hessian</em>.</p> <p><span class="math-container">$$H(f)=\begin{pmatrix}f''_{xx}&amp;&amp;f''_{xy}\\f''_{yx}&amp;&amp;f''_{yy}\end{pmatrix}.$$</span></p> <p>Higher derivatives and vector functions require <em>tensor</em> notation.</p>
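A finite-difference check of the examples discussed above (my own sketch; the step size and test point are arbitrary choices): the numerical Jacobian of $f(x,y)=(x^2+y,\,y^3)$ should match $\begin{pmatrix}2x&1\\0&3y^2\end{pmatrix}$, and the gradient of a scalar function is just its one-row Jacobian.

```python
import numpy as np

def jacobian_fd(f, p, h=1e-6):
    """Numerical Jacobian of f: R^n -> R^m at point p, by forward differences."""
    p = np.asarray(p, dtype=float)
    f0 = np.asarray(f(p))
    J = np.empty((f0.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = h
        J[:, j] = (np.asarray(f(p + dp)) - f0) / h
    return J

# f(x, y) = (x^2 + y, y^3), as in the example above
f = lambda p: np.array([p[0]**2 + p[1], p[1]**3])
x, y = 1.5, 2.0
J = jacobian_fd(f, [x, y])
assert np.allclose(J, [[2*x, 1.0], [0.0, 3*y**2]], atol=1e-4)

# the gradient of a scalar function is its 1-row Jacobian
g = lambda p: np.array([p[0]**2 + p[1]])
assert np.allclose(jacobian_fd(g, [x, y]), [[2*x, 1.0]], atol=1e-4)
```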
logic
<p>I've seen both symbols used to mean "therefore" or logical implication. It seems like $\therefore$ is more frequently used when reaching the conclusion of an argument, while $\implies$ is for intermediate claims that imply each other. Is there any agreed upon way of using these symbols, or are they more or less interchangeable?</p>
<blockquote> <p>&quot;It seems like <span class="math-container">$\therefore$</span> is more frequently used when reaching the conclusion of an argument, while <span class="math-container">$\implies$</span> (alternatively <span class="math-container">$\rightarrow$</span>) is for intermediate claims that imply each other.&quot;</p> </blockquote> <p>Your supposition is largely correct; my only concern is your description of <span class="math-container">$\implies$</span> being used to denote intermediate claims (in a proof or an argument, for example) that <em><strong>imply each other.</strong></em> The <span class="math-container">$\implies$</span> denotation, as in <span class="math-container">$p \implies q$</span>, merely conveys that the preceding claim (<span class="math-container">$p$</span>, if true) implies the subsequent claim <span class="math-container">$q$</span>; i.e., it does not denote a bidirectional implication <span class="math-container">$\iff$</span>, which reads &quot;if and only if&quot;.</p> <p>'<span class="math-container">$\implies$</span>' or '<span class="math-container">$\rightarrow$</span>' is often used in a &quot;modus ponens&quot; style (short in scope) argument: If <span class="math-container">$p\implies q$</span>, and if it's the case that <span class="math-container">$p$</span>, then it follows that <span class="math-container">$q$</span>.</p> <p>Typically, as you note, <span class="math-container">$\therefore$</span> helps to signify the conclusion of an argument: given what we know (or are assuming as given) to be true and given the intermediate implications which follow, we conclude that...</p> <p>So, put briefly, <span class="math-container">$\implies$</span> (&quot;which implies that&quot;) is typically shorter in scope, usually intended to link, by implication, the preceding statement and what follows from it, whereas '<span class="math-container">$\therefore$</span>' has typically, though not always, greater scope, so to speak, 
linking the initial assumptions/givens, the intermediate implications, with &quot;that which was to be shown&quot; in, say, a proof or argument.</p> <p>Added:</p> <p>I found the following Wikipedia entry on the meaning/use of <a href="http://en.wikipedia.org/wiki/Therefore_sign" rel="nofollow noreferrer">the symbol'<span class="math-container">$\therefore$</span>'</a>, from which I'll quote:</p> <blockquote> <p>To denote logical implication or entailment, various signs are used in mathematical logic: <span class="math-container">$\rightarrow, \;\implies, \;\supset$</span> and <span class="math-container">$\vdash$</span>, <span class="math-container">$\models$</span>. These symbols are then part of a mathematical formula, and are not considered to be punctuation. In contrast, the therefore sign <span class="math-container">$[\;\therefore\;]$</span> is traditionally used as a punctuation mark, and does not form part of a formula.</p> </blockquote> <p>It also refers to the &quot;complementary&quot; of the &quot;therefore&quot; symbol<span class="math-container">$\;\therefore\;$</span>, namely the symbol <span class="math-container">$\;\because\;$</span>, which denotes &quot;because&quot;.</p> <p>Example:</p> <p><span class="math-container">$\because$</span> All men are mortal.<br> <span class="math-container">$\because$</span> Socrates is a man.<br> <span class="math-container">$\therefore$</span> Socrates is mortal.<br></p>
<p>There are four logic symbols to get clear about:</p> <p>$$\to,\quad \vdash,\quad \vDash,\quad \therefore$$</p> <ol> <li>'$\to$' (or '$\supset$') is a symbol belonging to various formal languages (e.g. the language of propositional logic or the language of the first-order predicate calculus) to express [usually!] the truth-functional conditional. $A \to B$ is a single conditional proposition, which of course asserts neither $A$ nor $B$.</li> <li>'$\vdash$' is an expression added to logician's English (or Spanish or whatever) -- it belongs to the metalanguage in which we talk about consequence relations between formal sentences. And e.g. $A, A \to B \vdash B$ says in augmented English that in some relevant deductive system, there is a proof from the premisses $A$ and $A \to B$ to the conclusion $B$. (If we are being really pernickety we would write '$A$', '$A \to B$' $\vdash$ '$B$' but it is always understood that $\vdash$ comes with invisible quotes.)</li> <li>'$\vDash$' is another expression added to logician's English (or Spanish or whatever) -- it again belongs to the metalanguage in which we talk about consequence relations between formal sentences. And e.g. $A, A \to B \vDash B$ says that in the relevant semantics, there is no valuation which makes the premisses $A$ and $A \to B$ true and the conclusion $B$ false.</li> <li>$\therefore$ is added as punctuation to some formal languages as an inference marker. Then $A, A \to B \therefore B$ is an <em>object language</em> expression; and (unlike the metalinguistic $A, A \to B \vdash B$), this consists of <em>three</em> separate assertions $A$, $A \to B$ and $B$, with a marker that is appropriately used when the third is a consequence of the first two. (But NB an inference marker should not be thought of as <em>asserting</em> that an inference is being made.) 
</li> </ol> <p>As for '$\Rightarrow$', this -- like the use of 'implies' -- seems to be used informally (especially by non-logicians), in different contexts for any of the first three. So I'm afraid you just have to be careful to let context disambiguate. (And NB in the second and third uses where '$\Rightarrow$' is more appropriately read as 'implies' there's no <em>scope</em> difference with '$\therefore$'. In either case, we can have many wffs before the implication/inference marker.) </p>
matrices
<blockquote> <p>I am searching for a short <em>coordinate-free</em> proof of <span class="math-container">$\operatorname{Tr}(AB)=\operatorname{Tr}(BA)$</span> for linear operators <span class="math-container">$A$</span>, <span class="math-container">$B$</span> between finite dimensional vector spaces of the same dimension. </p> </blockquote> <p>The usual proof is to represent the operators as matrices and then use matrix multiplication. I want a coordinate-free proof. That is, one that does not make reference to an explicit matrix representation of the operator. I define trace as the sum of the eigenvalues of an operator.</p> <p>Ideally, the proof should be shorter and require fewer preliminary lemmas than the one given in <a href="http://terrytao.wordpress.com/2013/01/13/matrix-identities-as-derivatives-of-determinant-identities/" rel="noreferrer">this blog post</a>.</p> <p>I would be especially interested in a proof that generalizes to the trace class of operators on a Hilbert space.</p>
<p>$\newcommand{\tr}{\operatorname{tr}}$Here is an exterior algebra approach. Let $V$ be an $n$-dimensional vector space and let $\tau$ be a linear operator on $V$. The alternating multilinear map $$ (v_1,\dots,v_n) \mapsto \sum_{k=1}^n v_1 \wedge\cdots\wedge \tau v_k \wedge\cdots\wedge v_n $$ induces a unique linear operator $\psi: \bigwedge^n V \to \bigwedge^n V$. The <strong>trace</strong> $\tr(\tau)$ is defined as the unique number satisfying $\psi = \tr(\tau)\iota$, where $\iota$ is the identity. (This is possible because $\bigwedge^n V$ is one-dimensional.)</p> <p>Let $\sigma$ be another linear operator. We compute \begin{align} (\tr\sigma)(\tr\tau) v_1 \wedge\cdots\wedge v_n &amp;= \sum_{k=1}^n (\tr\sigma) v_1 \wedge\cdots\wedge \tau v_k \wedge\cdots\wedge v_n \\ &amp;= \sum_{k=1}^n v_1 \wedge\cdots\wedge \sigma \tau v_k \wedge\cdots\wedge v_n \\ &amp; \qquad + \sum_{k=1}^n \sum_{j \ne k} v_1 \wedge\cdots\wedge \sigma v_j \wedge \cdots \wedge \tau v_k \wedge\cdots\wedge v_n. \end{align}</p> <p>Notice that the last sum is symmetric in $\sigma$ and $\tau$, and so is $(\tr\sigma)(\tr\tau) v_1 \wedge\cdots\wedge v_n$. Therefore $$ \sum_{k=1}^n v_1 \wedge\cdots\wedge \sigma \tau v_k \wedge\cdots\wedge v_n = \sum_{k=1}^n v_1 \wedge\cdots\wedge \tau \sigma v_k \wedge\cdots\wedge v_n, $$ i.e. $\tr(\sigma\tau)=\tr(\tau\sigma)$.</p> <hr> <p>EDIT: To see that the trace is the sum of all eigenvalues, plug in your eigenvectors in the multilinear map defined at the beginning.</p>
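A quick numerical illustration (my own addition; random matrices with a fixed seed): the traces of $AB$ and $BA$ agree, and the trace equals the sum of the eigenvalues with multiplicity, matching the definition used in the question.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

# tr(AB) = tr(BA)
assert np.isclose(np.trace(A @ B), np.trace(B @ A))

# the trace equals the sum of the eigenvalues (counted with multiplicity)
assert np.isclose(np.trace(A), np.sum(np.linalg.eigvals(A)).real)
```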
<p>The proof in Martin Brandenburg's answer may look scary but it is secretly about moving beads around on a string. You can see all of the relevant pictures in <a href="http://qchu.wordpress.com/2012/11/05/introduction-to-string-diagrams/" rel="noreferrer">this blog post</a> and in <a href="http://qchu.wordpress.com/2012/11/06/string-diagrams-duality-and-trace/" rel="noreferrer">this blog post</a>. The proof using pictures is the following:</p> <p><img src="https://i.sstatic.net/DFIrh.jpg" alt="enter image description here"></p> <p>In the first step $g$ gets slid down on the right and in the second step $g$ gets slid up on the left. </p> <p>You can also find several proofs of the stronger result that $AB$ and $BA$ have the same characteristic polynomial in <a href="http://qchu.wordpress.com/2012/06/05/ab-ba-and-the-spectrum/" rel="noreferrer">this blog post</a>. </p>
logic
<p>I am very interested in learning the incompleteness theorem and its proof. But first I must know what things I need to learn.<br> My current knowledge consists of basic high school education and the foundations of linear algebra and calculus, which probably won't help, but I figured it's worth mentioning.<br> I prefer that you recommend books as well as abstract subjects that I should learn. Also, a place where I can find the proof would be nice to have.</p> <p>Thanks in advance!</p>
<p>Gödel's first incompleteness theorem tells us about the limitations of effectively axiomatized formal theories strong enough to do a modicum of arithmetic. So you need at least to have a notion of what an effectively axiomatized formal theory is, if you are to grasp what is going on. To understand the "formal theory" bit, it will help to have encountered a bit of formal logic; but a good intro to Gödel should explain the extra "effectively axiomatized" bit. After that, the basic argumentative moves in proving the first incompleteness theorem are surprisingly straightforward (and it was philosophically important to Gödel that this is so) -- though filling in some of the details can get fiddly: so you don't need to bring much background maths to the table in order to get to understand the proof.</p> <p>My own book <em>An Introduction to Gödel's Theorems</em> was written for people who don't have much maths background but have done an intro logic course, and lots of people seem to find it pretty clear (I assume no more than some familiarity with elementary logic). There is also a freely available abbreviated version of some of my book in the form of lecture notes at <a href="http://www.logicmatters.net/resources/pdfs/gwt/GWT.pdf">http://www.logicmatters.net/resources/pdfs/gwt/GWT.pdf</a>. There are suggestions for other reading in the relevant sections of the study guide at <a href="http://www.logicmatters.net/tyl">http://www.logicmatters.net/tyl</a></p> <p>You might however find it very helpful to look at Torkel Franzen's admirable little book <em>Gödel's Theorem: An Incomplete Guide to its Use and Abuse</em>, which gives an informal presentation and will give you some understanding of what's going on, before deciding whether to tackle a book like mine which goes into the mathematical details. </p>
<p>For the proof you may want to actually take a look at Gödel's 1931 paper <a href="http://www.research.ibm.com/people/h/hirzel/papers/canon00-goedel.pdf">here</a>. As for a gentle introduction, alongside Douglas Hofstadter's <a href="http://rads.stackoverflow.com/amzn/click/0465026567"><em>Gödel, Escher, Bach</em></a> I highly recommend <a href="http://books.google.com/books?id=G29G3W_hNQkC&amp;printsec=frontcover#v=onepage&amp;q&amp;f=false">Gödel's Proof by Ernest Nagel and James Newman</a>. A preview can be found in Google Books, but the actual book is really lucid and short, and starts off with the problem of inconsistency. </p>
geometry
<p>I believe that many of you know about the moving sofa problem; if not you can find the description of the problem <a href="https://en.wikipedia.org/wiki/Moving_sofa_problem" rel="noreferrer">here</a>. </p> <p><a href="https://i.sstatic.net/ihxu9.gif" rel="noreferrer"><img src="https://i.sstatic.net/ihxu9.gif" alt="From wikipedia"></a></p> <p><br>In this question I am going to rotate the L shaped hall instead of moving a sofa around the corner. By rotating the hall $180^{\circ}$ what remains between the walls will give the shape of the sofa. Like this: <br><br><br> <a href="https://i.sstatic.net/23dbk.gif" rel="noreferrer"><img src="https://i.sstatic.net/23dbk.gif" alt="enter image description here"></a></p> <p>The points on the hall have the following properties:</p> <p>\begin{eqnarray} A &amp; = &amp; \left( r\cos { \alpha } ,t\sin { \alpha } \right) \\ { A }' &amp; = &amp; \left( r\cos { \alpha } +\sqrt { 2 } \cos { \left( \frac { \pi }{ 4 } +\frac { \alpha }{ 2 } \right) } ,t\sin { \alpha } +\sqrt { 2 } \sin { \left( \frac { \pi }{ 4 } +\frac { \alpha }{ 2 } \right) } \right) \\ { B } &amp; = &amp; \left( r\cos { \alpha } -\frac { t\sin { \alpha } }{ \tan { \left( \frac { \alpha }{ 2 } \right) } } ,0 \right) \\ { B }' &amp; = &amp; \left( r\cos { \alpha } -\frac { t\sin { \alpha } }{ \tan { \left( \frac { \alpha }{ 2 } \right) } } -\frac { 1 }{ \sin { \left( \frac { \alpha }{ 2 } \right) } } ,0 \right) \\ C &amp; = &amp; \left( r\cos { \alpha } +t\sin { \alpha } \tan { \left( \frac { \alpha }{ 2 } \right) } ,0 \right) \\ { C }' &amp; = &amp; \left( r\cos { \alpha } +t\sin { \alpha } \tan { \left( \frac { \alpha }{ 2 } \right) } +\frac { 1 }{ \cos { \left( \frac { \alpha }{ 2 } \right) } } ,0 \right) \end{eqnarray}</p> <p>Attention: $\alpha$ is not the angle of $AOC$, it is some angle $ADC$ where $D$ changes location on $x$ axis for $r\neq t$. I am saying this because images can create confusion. 
Anyway, I will change them as soon as possible.</p> <p>I could consider $r=f(\alpha)$ and $t=g(\alpha)$ but for this question I am going to take $r$ and $t$ as constants. If they were functions of $\alpha$ there would appear some interesting shapes. I experimented with different functions, but the areas are more difficult to calculate, so I am not going to share them here. Maybe in the future.</p> <p>We rotate the hall for $r=t$ in the example above: <br>In this case:</p> <ol> <li>point A moves on a semicircle </li> <li>The envelope of lines between A' and C' is a circular arc. One has to prove this but I assume that it is true for $r=t$.</li> </ol> <p>If my second assumption is correct, the area of the sofa is $A= 2r-\frac { \pi r^{ 2 } }{ 2 } +\frac { \pi }{ 2 } $. The maximum area is reached when $r = 2/\pi$ and its value is: $$A = 2/\pi+\pi/2 = 2.207416099$$</p> <p>which matches Hammersley's sofa. The shape is also similar, or the same:</p> <p><a href="https://i.sstatic.net/v0Llb.gif" rel="noreferrer"><img src="https://i.sstatic.net/v0Llb.gif" alt="enter image description here"></a></p> <p>Now I am going to increase $t$ with respect to $r$. For $r=2/\pi$ and $t=0.77$:</p> <p><a href="https://i.sstatic.net/6dzwq.gif" rel="noreferrer"><img src="https://i.sstatic.net/6dzwq.gif" alt="enter image description here"></a></p> <p>Well, this looks like <a href="https://i.sstatic.net/FBlms.png" rel="noreferrer">Gerver's sofa</a>. </p> <p>I believe the area can be maximized by finding the equations of envelopes above and below the sofa. Look at <a href="https://math.stackexchange.com/questions/1775749/right-triangle-on-an-ellipse-find-the-area">this question</a> where @Aretino has computed the area below $ABC$. </p> <p>I don't know enough to find equations for envelopes. I am afraid that I will make mistakes. 
I considered calculating the area by counting the number of pixels in it, but this is not a good idea, because optimizing the area would require creating many images.</p> <p>I will give a bounty of 200 to whoever calculates the maximum area. As I said, the most difficult part of the problem is to find the equations of the envelopes. @Aretino did it.</p> <p><strong>PLUS:</strong> Could the following be the longest sofa, where $(r,t)=((\sqrt 5+1)/2,1)$ ? <a href="https://i.sstatic.net/ojAxc.gif" rel="noreferrer"><img src="https://i.sstatic.net/ojAxc.gif" alt="enter image description here"></a></p> <p>If you want to investigate further or use the animation for educational purposes here is the Geogebra file: <a href="http://ggbm.at/vemEtGyj" rel="noreferrer">http://ggbm.at/vemEtGyj</a></p> <hr> <p>Ok, I had some free time, counted the number of pixels in the sofa, and I am sure that I have something bigger than Hammersley's constant.</p> <p>First, I made a simulation for Hammersley's sofa where $r=t=2/\pi$, exported the image to png in 300 dpi (6484x3342 pixels), and using Gimp counted the number of pixels which have exactly the same value. For Hammersley I got $3039086$ pixels. </p> <p>For the second case, $r=0.59$ and $t=0.66$, I got $3052780$ pixels. To calculate the area for this case:</p> <p>$$\frac{3052780}{3039086}(2/\pi + \pi/2)=2.217362628$$</p> <p>which is slightly less than Gerver's constant, which is $2.2195$. Here is the sofa:</p> <p><a href="https://i.sstatic.net/PhvTB.jpg" rel="noreferrer"><img src="https://i.sstatic.net/PhvTB.jpg" alt="enter image description here"></a></p>
<p>WARNING: this answer uses the new parameterization of points introduced by the OP: <span class="math-container">\begin{eqnarray} A &amp; = &amp; \left( r\cos { \alpha } ,t\sin { \alpha } \right) \\ { A }' &amp; = &amp; \left( r\cos { \alpha } +\sqrt { 2 } \cos { \left( \frac { \pi }{ 4 } +\frac { \alpha }{ 2 } \right) } ,t\sin { \alpha } +\sqrt { 2 } \sin { \left( \frac { \pi }{ 4 } +\frac { \alpha }{ 2 } \right) } \right) \\ C &amp; = &amp; \left( r\cos { \alpha } +t\sin { \alpha } \tan { \left( \frac { \alpha }{ 2 } \right) } ,0 \right) \\ { C }' &amp; = &amp; \left( r\cos { \alpha } +t\sin { \alpha } \tan { \left( \frac { \alpha }{ 2 } \right) } +\frac { 1 }{ \cos { \left( \frac { \alpha }{ 2 } \right) } } ,0 \right) \end{eqnarray}</span></p> <p>Another parameterization, which also appeared in an earlier version of this question, was used in a <a href="https://math.stackexchange.com/questions/1775749/right-triangle-on-an-ellipse-find-the-area/1776770#1776770">previous answer</a> to a related question.</p> <p>The inner shape of the sofa is formed by the ellipse of semiaxes <span class="math-container">$r$</span>, <span class="math-container">$t$</span> and by the envelope of lines <span class="math-container">$AC$</span> (here and in the following I'll consider only that part of the sofa in the <span class="math-container">$x\ge0$</span> half-plane).</p> <p>The equations of lines <span class="math-container">$AC$</span> can be expressed as a function of <span class="math-container">$\alpha$</span> (<span class="math-container">$0\le\alpha\le\pi$</span>) as <span class="math-container">$F(x,y,\alpha)=0$</span>, where: <span class="math-container">$$ F(x,y,\alpha)= -t y \sin\alpha \tan{\alpha\over2} - t \sin\alpha \left(x - r \cos\alpha - t \sin\alpha \tan{\alpha\over2}\right). 
$$</span> The equation of the envelope can be found from: <span class="math-container">$$ F(x,y,\alpha)={\partial\over\partial\alpha}F(x,y,\alpha)=0, $$</span> giving the parametric equations for the envelope: <span class="math-container">$$ \begin{align} x_{inner}=&amp; (r-t) \cos\alpha+\frac{1}{2}(t-r) \cos2\alpha+\frac{1}{2}(r+t),\\ y_{inner}=&amp; 4 (t-r) \sin\frac{\alpha}{2}\, \cos^3\frac{\alpha}{2}.\\ \end{align} $$</span></p> <p>We need not consider this envelope if <span class="math-container">$t&lt;r$</span>, because in that case <span class="math-container">$y_{inner}&lt;0$</span>. If <span class="math-container">$t&gt;r$</span> the envelope meets the ellipse at a point <span class="math-container">$P$</span>: the corresponding value of <span class="math-container">$\alpha$</span> can be found from the equation <span class="math-container">$(x_{inner}/r)^2+(y_{inner}/t)^2=1$</span>, whose solution <span class="math-container">$\alpha=\bar\alpha$</span> is given by: <span class="math-container">$$ \begin{cases} \displaystyle\bar\alpha= 2\arccos\sqrt{t\over{t+r}}, &amp;\text{for $t\le3r$;}\\ \displaystyle\bar\alpha= \arccos\sqrt{t\over{2(t-r)}}, &amp;\text{for $t\ge3r$.}\\ \end{cases} $$</span></p> <p>The corresponding values <span class="math-container">$\bar\theta$</span> for the parameter of the ellipse can be easily computed from: <span class="math-container">$\bar\theta=\arcsin(y_{inner}(\bar\alpha)/t)$</span>: <span class="math-container">$$ \begin{cases} \displaystyle\bar\theta= \arcsin\frac{4 \sqrt{rt} (t-r)}{(r+t)^2}, &amp;\text{for $t\le3r$;}\\ \displaystyle\bar\theta= \arcsin\frac{\sqrt{t(t-2 r)}}{t-r}, &amp;\text{for $t\ge3r$.}\\ \end{cases} $$</span></p> <p>For <span class="math-container">$t\ge r$</span> we can then represent half the area under the inner shape of the sofa as an integral: <span class="math-container">$$ {1\over2}Area_{inner}=\int_0^{2t-r} y\,dx= \int_{\pi/2}^{\bar\theta}t\sin\theta{d\over d\theta}(r\cos\theta)\,d\theta+ 
\int_{\bar\alpha}^{\pi} y_{inner}{dx_{inner}\over d\alpha}\,d\alpha. $$</span></p> <p>This can be computed explicitly, here's for instance the result for <span class="math-container">$r&lt;t&lt;3r$</span>: <span class="math-container">$$ \begin{align} {1\over2}Area_{inner}= {\pi\over4}(r^2-rt+t^2) +\frac{1}{48} (t-r)^2 \left[-24 \cos ^{-1}\frac{\sqrt{t}}{\sqrt{r+t}} +12 \sin \left(2 \cos^{-1}\frac{\sqrt{t}}{\sqrt{r+t}}\right)\\ +12 \sin \left(4 \cos^{-1}\frac{\sqrt{t}}{\sqrt{r+t}}\right) -4 \sin \left(6 \cos^{-1}\frac{\sqrt{t}}{\sqrt{r+t}}\right) -3 \sin \left(8 \cos^{-1}\frac{\sqrt{t}}{\sqrt{r+t}}\right) \right]\\ -2 r t {\sqrt{rt} |r^2-6 r t+t^2|\over(r+t)^4} -{1\over4} r t \sin ^{-1}\frac{4 \sqrt{rt} (t-r)}{(r+t)^2}\\ \end{align} $$</span></p> <p>The outer shape of the sofa is formed by line <span class="math-container">$y=1$</span> and by the envelope of lines <span class="math-container">$A'C'$</span>. By repeating the same steps as above one can find the parametric equations of the outer envelope: <span class="math-container">$$ \begin{align} x_{outer}&amp;= (r-t) \left(\cos\alpha-{1\over2}\cos2\alpha\right) +\cos\frac{\alpha}{2}+{1\over2}(r+t)\\ y_{outer}&amp;= \sin\frac{\alpha}{2} \left(-3 (r-t) \cos\frac{\alpha}{2} +(t-r) \cos\frac{3 \alpha}{2}+1\right)\\ \end{align} $$</span> This curve meets line <span class="math-container">$y=1$</span> for <span class="math-container">$\alpha=\pi$</span> if <span class="math-container">$t-r\le\bar x$</span>, where <span class="math-container">$\bar x=\frac{1}{432} \left(17 \sqrt{26 \left(11-\sqrt{13}\right)}-29 \sqrt{2 \left(11-\sqrt{13}\right)}\right)\approx 0.287482$</span>. 
In that case the intersection point has coordinates <span class="math-container">$(2t-r,1)$</span> and the area under the outer shape of the sofa can be readily found: <span class="math-container">$$ {1\over2}Area_{outer}={1\over3}(r+2t)+{\pi\over4}(1-(t-r)^2) $$</span> If, on the other hand, <span class="math-container">$t-r&gt;\bar x$</span> then one must find the value of parameter <span class="math-container">$\alpha$</span> at which the envelope meets the line, by solving the equation <span class="math-container">$y_{outer}=1$</span> and looking for the smallest positive solution. This has to be done, in general, by some numerical method.</p> <p>The area of the sofa can then be found as <span class="math-container">$Area_{tot}=Area_{outer}-Area_{inner}$</span>. I used Mathematica to draw a contour plot of this area, as a function of <span class="math-container">$r$</span> (horizontal axis) and <span class="math-container">$t$</span> (vertical axis):</p> <p><a href="https://i.sstatic.net/Z2Ypa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z2Ypa.png" alt="enter image description here" /></a></p> <p>There is a clear maximum in the region around <span class="math-container">$r = 0.6$</span> and <span class="math-container">$t = 0.7$</span>. In this region one can use the simple expressions for <span class="math-container">$Area_{inner}$</span> and <span class="math-container">$Area_{outer}$</span> given above, to find the exact value of the maximum. A numerical search gives <span class="math-container">$2.217856997942074266$</span> for the maximum area, reached for <span class="math-container">$r=0.605513519698965$</span> and <span class="math-container">$t=0.6678342468712839$</span>.</p>
<p>This is not an answer to the stated question, just an outline and an example of how to compute the envelope (and thus area) numerically, pretty efficiently.</p> <p>The code seems to work, but it is not tested, nor anywhere near optimal at the algorithmic level; it is just a rough initial sketch to explore the problem at hand.</p> <p>The sofa is symmetric with respect to the $x$ axis, so we only need to consider the (positive $x$) half plane. If we use $\alpha$ for the rotation, $\alpha \in [0, \pi]$, the initially vertical walls (on the right side) are the only ones we need to consider. For simplicity, I'll use $r_x$ and $r_y$ for the radii (OP used $r$ and $t$, respectively).</p> <p>The equation for the points that form the near side wall ($t \ge 0$) is $$\vec{p}_{nw}(t, \alpha) = \begin{cases} x_{nw}(t, \alpha) = r_x \cos(\alpha) + t \sin(\alpha/2)\\ y_{nw}(t, \alpha) = r_y \sin(\alpha) - t \cos(\alpha/2)\end{cases}$$</p> <p>Setting $x_{nw}(t, \alpha) = x$, solving for $t$, and substituting into $y_{nw}(t, \alpha)$ yields $$y_n(x, \alpha) = r_y \sin(\alpha) + \frac{r_x \cos(\alpha) - x}{\tan(\alpha/2)}$$ Because the near wall starts at angle $\alpha$, we must only consider $x \ge x_{nw}(0,\alpha)$. We can do that in practice by defining $$\alpha_0(x) = \left\lbrace\begin{matrix} \arccos\left(\frac{x}{r_x}\right),&amp;x \lt r_x\\ 0,&amp;x \ge r_x\end{matrix}\right.$$ and only considering $\alpha_0 \le \alpha \le \pi$ when evaluating $y_n(x,\alpha)$. It reaches its maximum when its derivative, $$\frac{d y_n(x,\alpha)}{d \alpha} = \frac{x - r_x}{1 - \cos(\alpha)} - (r_x - r_y)\cos(\alpha)$$ is zero. 
There may be two real roots, $$\begin{align}\alpha_1(x) &amp;= \arccos\left(\frac{\sqrt{ (r_x - r_y)(5 r_x - r_y - 4 x)} + (r_x - r_y)}{2 ( r_x - r_y )}\right)\\ \alpha_2(x) &amp;= \pi - \arccos\left(\frac{\sqrt{ (r_x - r_y)(5 r_x - r_y - 4 x)} - (r_x - r_y)}{2 ( r_x - r_y )}\right)\end{align}$$ In summary, the near wall is the maximum of one, two, or three values: $y_n(x,\alpha_0)$; $y_n(x,\alpha_1)$ if $\alpha_0 \lt \alpha_1 \lt \pi$; and $y_n(x,\alpha_2)$ if $\alpha_0 \lt \alpha_2 \lt \pi$.</p> <p>For the far side wall, the points are $$\vec{p}_f(t, \alpha) = \begin{cases} x_f(t) = r_x \cos(\alpha) + \cos(\alpha/2) + \sin(\alpha/2) + t \sin(\alpha/2)\\ y_f(t) = r_y \sin(\alpha) + \sin(\alpha/2) - \cos(\alpha/2) - t \cos(\alpha/2)\end{cases}$$ The first added term represents the corridor width, and the second the corridor height, both $1$. Setting $x_f(t, \alpha) = x$, solving for $t$, and substituting into $y_f(t, \alpha)$ yields $$y_f(x, \alpha) = \frac{(r_x + r_y - 2x)\cos(\alpha/2) + (r_x - r_y)\cos(3\alpha/2) + 2 }{2 \sin(\alpha/2)}$$ Its derivative is $$\frac{d y_f(x, \alpha)}{d \alpha} = \frac{r_x - x + \cos(\alpha/2)}{\cos(\alpha) - 1} - \cos(\alpha)(r_x - r_y)$$ It can have up to four real roots (the roots of $4(r_x-r_y)\chi^4 - 6(r_x-r_y)\chi^2 - \chi + (r_x - r_y) + (x - r_y) = 0$). While it does have analytical solutions, they are nasty, so I prefer to use a binary search instead. I utilize the fact that the sign (and zeros) of the derivative are the same as the simpler function $$d_f(x, \alpha) = \cos(\alpha)\left(\cos(\alpha)-1\right)(r_x - r_y) - \cos(\alpha/2) - r_x + x$$ which does not have poles at $\alpha=0$ or $\alpha=\pi$.</p> <p>Here is an example implementation in C:</p> <pre><code>#include &lt;stdlib.h&gt; #include &lt;string.h&gt; #include &lt;stdio.h&gt; #include &lt;math.h&gt; #define PI 3.14159265358979323846 static double near_y(const double x, const double xradius, const double yradius) { double y = (x &lt; xradius) ? 
yradius * sqrt(1.0 - (x/xradius)*(x/xradius)) : 0.0; if (xradius != yradius) { const double a0 = (x &lt; xradius) ? acos(x/xradius) : 0.0; const double s = (xradius - yradius)*(5*xradius - yradius - 4*x); if (s &gt;= 0.0) { const double r = 0.5 * sqrt(s) / (xradius - yradius); if (r &gt; -1.5 &amp;&amp; r &lt; 0.5) { const double a1 = acos(r + 0.5); if (a1 &gt; a0 &amp;&amp; a1 &lt; PI) { const double y1 = yradius * sin(a1) + (xradius * cos(a1) - x) / tan(0.5 * a1); if (y &lt; y1) y = y1; } } if (r &gt; -0.5 &amp;&amp; r &lt; 1.5) { const double a2 = PI - acos(r - 0.5); if (a2 &gt; a0 &amp;&amp; a2 &lt; PI) { const double y2 = yradius * sin(a2) + (xradius * cos(a2) - x) / tan(0.5 * a2); if (y &lt; y2) y = y2; } } } } return y; } </code></pre> <p>Above, <code>near_y()</code> finds the maximum $y$ coordinate the near wall reaches at point $x$. </p> <pre><code>static double far_y(const double x, const double xradius, const double yradius) { const double rxy = xradius - yradius; const double rx = xradius - x; double retval = 1.0; double anext = 0.0; double dnext = x - 1.0 - xradius; double acurr, dcurr, y; /* Outer curve starts at min(1+xradius, 2*yradius-xradius). */ if (x &lt; 1.0 + xradius &amp;&amp; x &lt; yradius + yradius - xradius) return 1.0; while (1) { acurr = anext; dcurr = dnext; anext += PI/1024.0; if (anext &gt;= PI) break; dnext = cos(anext)*(cos(anext) - 1.0)*rxy - cos(anext*0.5) - rx; if ((dcurr &lt; 0.0 &amp;&amp; dnext &gt; 0.0) || (dcurr &gt; 0.0 &amp;&amp; dnext &lt; 0.0)) { double amin = (dcurr &lt; 0.0) ? acurr : anext; double amax = (dcurr &lt; 0.0) ? 
anext : acurr; double a, d; do { a = 0.5 * (amin + amax); d = cos(a)*(cos(a)-1.0)*rxy - cos(a*0.5) - rx; if (d &lt; 0.0) amin = a; else if (d &gt; 0.0) amax = a; else break; } while (amax &gt; amin &amp;&amp; a != amin &amp;&amp; a != amax); y = (cos(0.5*a)*(0.5*(xradius+yradius)-x) + cos(1.5*a)*rxy*0.5 + 1.0) / sin(0.5*a); if (retval &gt; y) { retval = y; if (y &lt;= 0.0) return 0.0; } } else if (dcurr == 0.0) { y = (cos(0.5*acurr)*(0.5*(xradius+yradius)-x) + cos(1.5*acurr)*rxy*0.5 + 1.0) / sin(0.5*acurr); if (retval &gt; y) { retval = y; if (y &lt;= 0.0) return 0.0; } } } return retval; } </code></pre> <p>Above, <code>far_y()</code> finds the minimum $y$ coordinate the far wall reaches at point $x$. It calculates the sign of the derivative for 1024 values of $\alpha$, and uses a binary search to find the root (and the extremum $y$) whenever the derivative spans zero.</p> <p>With the above two functions, we only need to divide the full sofa width ($1 + r_x$) into slices, evaluate the $y$ coordinates for each slice, and multiply the $y$ coordinate difference by twice the slice width (since we only calculate one half of the sofa), to obtain an estimate for the sofa area (using the midpoint rule for the integral):</p> <pre><code>double sofa_area(const unsigned int xsamples, const double xradius, const double yradius) { if (xradius &gt; 0.0 &amp;&amp; yradius &gt; 0.0) { const double dx = (1.0 + xradius) / xsamples; double area = 0.0; unsigned int i; for (i = 0; i &lt; xsamples; i++) { const double x = dx * (0.5 + i); const double ymin = near_y(x, xradius, yradius); const double ymax = far_y(x, xradius, yradius); if (ymin &lt; ymax) area += ymax - ymin; } return 2*dx*area; } else return 0.0; } </code></pre> <p>As far as I have found, the best one is <code>sofa_area(N, 0.6055, 0.6678) = 2.21785</code> (with <code>N ≥ 5000</code>, larger <code>N</code> yields more precise estimates; I checked up to <code>N = 1,000,000</code>).</p> <p>The curve 
the inner corner makes ($(x_{nw}(0,\alpha), y_{nw}(0,\alpha))$, $0 \le \alpha \le \pi$) is baked into the <code>near_y()</code> and <code>far_y()</code> functions. However, it would be possible to replace $y_{nw}(0,\alpha)$ with a more complicated function (perhaps a polynomial scaling $r_y$, so that it is $1$ at $\alpha = 0, \pi$?), if one re-evaluates the functions above. I personally use Maple or Mathematica for the math, so the hard part, really, is to think of a suitable function that would allow "deforming" the elliptic path in interesting ways, without making the above equations too hard or slow to implement.</p> <p>The C code itself could be optimized, also. (I don't mean micro-optimizations; I mean things like using the trapezoid rule for the integral, a better root finding approach for <code>far_y()</code>, and so on.)</p>
linear-algebra
<p>Mariano mentioned somewhere that everyone should prove once in their life that every matrix is conjugate to its transpose.</p> <p>I spent quite a bit of time on it now, and still could not prove it. At the risk of devaluing myself, might I ask someone else to show me a proof?</p>
<p>This question has a nice answer using the theory of <a href="https://en.wikipedia.org/wiki/Structure_theorem_for_finitely_generated_modules_over_a_principal_ideal_domain">modules over a PID</a>. Clearly the <a href="https://en.wikipedia.org/wiki/Smith_normal_form">Smith normal forms</a> (over $K[X]$) of $XI_n-A$ and of $XI_n-A^T$ are the same (by symmetry). Therefore $A$ and $A^T$ have the same invariant factors, thus the same <a href="https://en.wikipedia.org/wiki/Rational_canonical_form">rational canonical form</a>*, and hence they are similar over $K$.</p> <p>*The Wikipedia article at the link badly needs rewriting.</p>
<p>I had in mind an argument using the Jordan form, which reduces the question to single Jordan blocks, which can then be handled using Ted's method (in the comments).</p> <p>There is one subtle point: the matrix which conjugates a matrix $A\in M_n(k)$ to its transpose can be taken with coefficients in $k$, no matter what the field is. On the other hand, the Jordan canonical form exists only for algebraically closed fields (or, rather, fields which split the characteristic polynomial).</p> <p>If $K$ is an algebraic closure of $k$, then we can use the above argument to find an invertible matrix $C\in M_n(K)$ such that $CA=A^tC$. Now, consider the equation $$XA=A^tX$$ in a matrix $X=(x_{ij})$ of unknowns; this is a linear equation, and <em>over $K$</em> it has non-zero solutions. Since the equation has coefficients in $k$, it follows that there are also non-zero solutions with coefficients in $k$. These solutions show $A$ and $A^t$ are conjugate, except for a detail: can you see how to ensure that one of these non-zero solutions has non-zero determinant?</p>
probability
<p>I have come across this text recently. I was confused, and asked a friend; she was also not certain. Can you explain please? What is the author talking about here? I don't understand. Is the problem with the phrase &quot;on average&quot;?</p> <blockquote> <p>Innumerable misconceptions about probability. For example, suppose I toss a fair coin 100 times. On every “heads”, I take one step to the north. On every “tails”, I take one step to the south. After the 100th step, how far away am I, on average, from where I started? (Most kids – and more than a few teachers – say “zero” ... which is not the right answer.)</p> </blockquote> <p>In a way it is pointless to talk about misconceptions, when you don't explain the misconceptions...</p> <p>Source: <a href="https://www.av8n.com/physics/pedagogy.htm" rel="noreferrer">https://www.av8n.com/physics/pedagogy.htm</a> Section 4.2 Miscellaneous Misconceptions, item number 5</p>
<p>Since the distance can't be negative, the average distance is strictly greater than zero whenever at least one outcome does not end on zero.</p> <p>Say you land 2 steps above zero on the first run and 2 steps below zero on the second run. Then on average you have thrown heads and tails equally many times, but the average distance from zero is 2.</p> <p>You can see this as follows: suppose you start with a variable <span class="math-container">$x$</span> which is initially 0, and every time you throw heads, you add 1, and every time you throw tails, you subtract 1.</p> <p>Then the average value of <span class="math-container">$x$</span> after 100 throws is 0, since on average you throw as many heads as tails.</p> <p>But the average distance is the average value of <span class="math-container">$|x|$</span> after 100 throws. Throwing a lot of heads in one &quot;run&quot; and throwing a lot of tails in another &quot;run&quot; cancels in the average value, but adds to the average distance.</p>
<p>The average distance (in steps) from where you started is approximately <span class="math-container">$\sqrt{\frac{200}{\pi}}\approx 7.978845608$</span> and as the number of steps <span class="math-container">$N$</span> tends to infinity is asymptotic to <span class="math-container">$\sqrt{\frac{2N}{\pi}}$</span>.</p> <p>The &quot;misconception&quot; here is a confusion between the expectation of <span class="math-container">$S_N=\sum_{i=1}^NX_i$</span> with <span class="math-container">$X_i$</span> independent random variables taking values <span class="math-container">$\{-1,1\}$</span> with equal probabilities <span class="math-container">$\frac{1}{2}$</span>, and the expectation of <span class="math-container">$|S_N|$</span>; I think this confusion is highly prevalent because we instinctively visualize position and its average, rather than distance, especially because here the symmetry of the random walk draws us to the simple &quot;symmetric&quot; value, <span class="math-container">$0=-0$</span>. There is another closely related value, simpler to calculate and in some ways more natural: the square root of the expectation of <span class="math-container">$S_N^2$</span>, that is the standard deviation <span class="math-container">$\sigma$</span> of <span class="math-container">$S_N$</span>, as <span class="math-container">$ES_N=0$</span>.</p> <p>There are <span class="math-container">$2^N$</span> possible paths after <span class="math-container">$N$</span> steps, each with probability <span class="math-container">$2^{-N}$</span>; thus, as noted in other answers, since all values of the random variable <span class="math-container">$|S_N|$</span> are nonnegative, it suffices to find one that is positive to prove that the average distance <span class="math-container">$E|S_N|&gt;0$</span>. 
You may take the path <span class="math-container">$\omega=(1,1,1,...,1,1)$</span>, &quot;all steps to the north&quot;: the final distance from where you started is <span class="math-container">$N$</span>, which contributes <span class="math-container">$\frac{N}{2^N}$</span> to the expectation, so <span class="math-container">$E|S_N|\geq\frac{N}{2^N}&gt;0$</span>.</p> <p>Hölder's inequality implies that the variance <span class="math-container">$\sigma^2_{S_N}=ES_N^2\geq (E|S_N|)^2$</span>; and actually <span class="math-container">$ES_N^2=N$</span> (by independence of the <span class="math-container">$X_i$</span> and <span class="math-container">$\sigma_{X_i}^2=1$</span>) while as <span class="math-container">$N\rightarrow\infty$</span>, <span class="math-container">$(E|S_N|)^2\sim\frac{2N}{\pi}&lt;N$</span>.</p> <p>The exact formula (whose proof you can find in <a href="https://mathworld.wolfram.com/RandomWalk1-Dimensional.html" rel="nofollow noreferrer">https://mathworld.wolfram.com/RandomWalk1-Dimensional.html</a>) for <span class="math-container">$E|S_N|$</span> is <span class="math-container">$\frac{(N-1)!!}{(N-2)!!}$</span>, for <span class="math-container">$N$</span> even, and <span class="math-container">$\frac{N!!}{(N-1)!!}$</span> for <span class="math-container">$N$</span> odd. And both formulas have the same asymptotic, given above. As commented by Džuris, the average distance increases only when taking an odd-numbered step (see my reply for a direct proof, not relying on the exact formula, which is not trivial to arrive at).</p> <p>A decimal approximation of the value (rather than of the asymptotic estimate given at the beginning of this answer) is <span class="math-container">$E|S_{100}|\approx 7.95892373871787614981270502421704614$</span>, and of course the standard deviation of <span class="math-container">$S_{100}$</span> is <span class="math-container">$10$</span>.</p>
matrices
<p>I'm trying to build an intuitive geometric picture about diagonalization. Let me show what I got so far.</p> <p>An eigenvector of a linear operator signifies a direction in which the operator just ''works'' like a stretching; in other words, the operator preserves the direction of its eigenvector. The corresponding eigenvalue is just a value which tells us for <em>how much</em> the operator stretches the eigenvector (negative stretches = flipping in the opposite direction). When we limit ourselves to real vector spaces, it's intuitively clear that rotations don't preserve the direction of any non-zero vector. Actually, I'm thinking about 2D and 3D spaces as I write, so I talk about ''rotations''... for n-dimensional spaces it would be better to talk about ''operators which act like rotations on some 2D subspace''.</p> <p>But, there are non-diagonalizable matrices that aren't rotations - all non-zero nilpotent matrices. My intuitive view of nilpotent matrices is that they ''gradually collapse all dimensions/gradually lose all the information'' (if we use them over and over again), so it's clear to me why they can't be diagonalizable.</p> <p>But, again, there are non-diagonalizable matrices that aren't rotations nor nilpotent, for example:</p> <p>$$ \begin{pmatrix} 1 &amp; 1 \\ 0 &amp; 1 \end{pmatrix} $$</p> <p>So, what's the deal with them? Is there any kind of intuitive geometric reasoning that would help me grasp why there are matrices like this one? What characteristic stops them from being diagonalizable?</p>
<p>I think a very useful notion here is the idea of a "<strong>generalized eigenvector</strong>".</p> <p>An <strong>eigenvector</strong> of a matrix $A$ is a vector $v$ with associated value $\lambda$ such that $$ (A-\lambda I)v=0 $$ A <strong>generalized eigenvector</strong>, on the other hand, is a vector $w$ with the same associated value such that $$ (A-\lambda I)^kw=0 $$ That is, $(A-\lambda I)$ is nilpotent on $w$. Or, in other words: $$ (A - \lambda I)^{k-1}w=v $$ for some eigenvector $v$ with the same associated value.</p> <hr> <p>Now, let's see how this definition helps us with a non-diagonalizable matrix such as $$ A = \pmatrix{ 2 &amp; 1\\ 0 &amp; 2 } $$ For this matrix, we have $\lambda=2$ as a unique eigenvalue, and $v=\pmatrix{1\\0}$ as the associated eigenvector, which I will let you verify. $w=\pmatrix{0\\1}$ is our generalized eigenvector. Notice that $$ (A - 2I) = \pmatrix{ 0 &amp; 1\\ 0 &amp; 0} $$ is a nilpotent matrix of order $2$. Note that $(A - 2I)v=0$, and $(A- 2I)w=v$ so that $(A-2I)^2w=0$. But what does this mean for what the matrix $A$ does? The behavior of $v$ is fairly obvious, but with $w$ we have $$ Aw = \pmatrix{1\\2}=2w + v $$ So $w$ behaves kind of like an eigenvector, but not really. In general, a generalized eigenvector, when acted upon by $A$, gives another vector in the generalized eigenspace.</p> <hr> <p>An important related notion is <a href="http://en.wikipedia.org/wiki/Jordan_normal_form">Jordan Normal Form</a>. That is, while we can't always diagonalize a matrix by finding a basis of eigenvectors, we can always put the matrix into Jordan normal form by finding a basis of generalized eigenvectors/eigenspaces.</p> <p>I hope that helps. I'd say that the most important thing to grasp from the idea of generalized eigenvectors is that every transformation can be related to the action of a nilpotent over some subspace. </p>
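<p>The small computations in this answer are easy to verify numerically; a sketch (assuming NumPy is available):</p>

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
v = np.array([1.0, 0.0])   # ordinary eigenvector for lambda = 2
w = np.array([0.0, 1.0])   # generalized eigenvector

N = A - 2.0 * np.eye(2)               # the nilpotent part A - lambda*I
assert np.allclose(N @ v, 0)          # (A - 2I)v = 0
assert np.allclose(N @ w, v)          # (A - 2I)w = v
assert np.allclose(N @ N @ w, 0)      # (A - 2I)^2 w = 0
assert np.allclose(A @ w, 2 * w + v)  # Aw = 2w + v: almost, but not quite, an eigenvector
```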
<p><strong>Edit:</strong> The algebra I speak of here is <em>not</em> actually the Grassmann numbers at all -- they are <span class="math-container">$\mathbb{R}[X]/(X^n)$</span>, whose generators <em>don't</em> satisfy the anticommutativity relation even though they satisfy all the nilpotency relations. The dual-number stuff for 2 by 2 is still correct, just ignore my use of the word "Grassmann".</p> <hr> <p>Non-diagonalisable 2 by 2 matrices can be diagonalised over the <a href="https://en.wikipedia.org/wiki/Dual_number" rel="nofollow noreferrer">dual numbers</a> -- and the "weird cases" like the Galilean transformation are not fundamentally different from the nilpotent matrices.</p> <p>The intuition here is that the Galilean transformation is sort of a "boundary case" between real-diagonalisability (skews) and complex-diagonalisability (rotations) (which you can sort of think in terms of discriminants). In the case of the Galilean transformation <span class="math-container">$\left[\begin{array}{*{20}{c}}{1}&amp;{v}\\{0}&amp;{1}\end{array}\right]$</span>, it's a small perturbation away from being diagonalisable, i.e. it sort of has "repeated eigenvectors" (you can visualise this with <a href="https://shadanan.github.io/MatVis/" rel="nofollow noreferrer">MatVis</a>). So one may imagine that the two eigenvectors are only an "epsilon" away, where <span class="math-container">$\varepsilon$</span> is the unit dual satisfying <span class="math-container">$\varepsilon^2=0$</span> (called the "soul"). Indeed, its characteristic polynomial is:</p> <p><span class="math-container">$$(\lambda-1)^2=0$$</span></p> <p>Whose solutions among the dual numbers are <span class="math-container">$\lambda=1+k\varepsilon$</span> for real <span class="math-container">$k$</span>. 
So one may "diagonalise" the Galilean transformation over the dual numbers as e.g.:</p> <p><span class="math-container">$$\left[\begin{array}{*{20}{c}}{1}&amp;{0}\\{0}&amp;{1+v\varepsilon}\end{array}\right]$$</span></p> <p>Granted, this is not unique; it is formed from the change-of-basis matrix <span class="math-container">$\left[\begin{array}{*{20}{c}}{1}&amp;{1}\\{0}&amp;{\varepsilon}\end{array}\right]$</span>, but any vector of the form <span class="math-container">$(1,k\varepsilon)$</span> is a valid eigenvector. You could, if you like, consider this a canonical or "principal value" of the diagonalisation, and in general each diagonalisation corresponds to a limit you can take of real/complex-diagonalisable transformations. Another way of thinking about this is that there is an entire eigenspace spanned by <span class="math-container">$(1,0)$</span> and <span class="math-container">$(1,\varepsilon)$</span> in that little gap of multiplicity. In this sense, the geometric multiplicity is forced to be equal to the algebraic multiplicity*.</p> <p>Then a nilpotent matrix with characteristic polynomial <span class="math-container">$\lambda^2=0$</span> has solutions <span class="math-container">$\lambda=k\varepsilon$</span>, and is simply diagonalised as:</p> <p><span class="math-container">$$\left[\begin{array}{*{20}{c}}{0}&amp;{0}\\{0}&amp;{\varepsilon}\end{array}\right]$$</span></p> <p>(Think about this.) Indeed, the resulting matrix has minimal polynomial <span class="math-container">$\lambda^2=0$</span>, and the eigenvectors are as before.</p> <hr> <p>What about higher dimensional matrices? Consider:</p> <p><span class="math-container">$$\left[ {\begin{array}{*{20}{c}}0&amp;v&amp;0\\0&amp;0&amp;w\\0&amp;0&amp;0\end{array}} \right]$$</span></p> <p>This is a nilpotent matrix <span class="math-container">$A$</span> satisfying <span class="math-container">$A^3=0$</span> (but not <span class="math-container">$A^2=0$</span>). 
The characteristic polynomial is <span class="math-container">$\lambda^3=0$</span>. Although <span class="math-container">$\varepsilon$</span> might seem like a sensible choice, it doesn't really do the trick -- if you try a diagonalisation of the form <span class="math-container">$\mathrm{diag}(0,v\varepsilon,w\varepsilon)$</span>, the resulting matrix satisfies <span class="math-container">$A^2=0$</span>, i.e. its minimal polynomial is <span class="math-container">$\lambda^2$</span>, which is wrong. Indeed, you won't be able to find three linearly independent eigenvectors to diagonalise the matrix this way -- they'll all take the form <span class="math-container">$(a+b\varepsilon,0,0)$</span>.</p> <p>Instead, you need to consider a generalisation of the dual numbers, called the Grassmann numbers, with the soul satisfying <span class="math-container">$\epsilon^n=0$</span>. Then the diagonalisation takes for instance the form:</p> <p><span class="math-container">$$\left[ {\begin{array}{*{20}{c}}0&amp;0&amp;0\\0&amp;{v\epsilon}&amp;0\\0&amp;0&amp;{w\epsilon}\end{array}} \right]$$</span></p> <hr> <p>*Over the reals and complexes, when one defines algebraic multiplicity (as "the multiplicity of the corresponding factor in the characteristic polynomial"), there is a single eigenvalue corresponding to that factor. 
This is of course no longer true over the Grassmann numbers, because they are not a field, and <span class="math-container">$ab=0$</span> no longer implies "<span class="math-container">$a=0$</span> or <span class="math-container">$b=0$</span>".</p> <p>In general, if you want to prove things about these numbers, the way to formalise them is by constructing them as the quotient <span class="math-container">$\mathbb{R}[X]/(X^n)$</span>, so you actually have something clear to work with.</p> <p>(Perhaps relevant: <a href="https://math.stackexchange.com/questions/46078/grassmann-numbers-as-eigenvalues-of-nilpotent-operators">Grassmann numbers as eigenvalues of nilpotent operators?</a> -- discussing the fact that the Grassmann numbers are not a field).</p> <p>You might wonder if this sort of approach can be applied to LTI differential equations with repeated roots -- after all, their characteristic matrices are exactly of this Grassmann form. As pointed out in the comments, however, this diagonalisation is still not via an invertible change-of-basis matrix; it's still only of the form <span class="math-container">$PD=AP$</span>, not <span class="math-container">$D=P^{-1}AP$</span>. I don't see any way to bypass this. See my posts <a href="https://thewindingnumber.blogspot.com/2019/02/all-matrices-can-be-diagonalised.html" rel="nofollow noreferrer">All matrices can be diagonalised</a> (a re-post of this answer) and <a href="https://thewindingnumber.blogspot.com/2018/03/repeated-roots-of-differential-equations.html" rel="nofollow noreferrer">Repeated roots of differential equations</a> for ideas, I guess.</p>
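<p>The <span class="math-container">$2\times 2$</span> dual-number computation can be machine-checked in a handful of lines. The <code>Dual</code> class below is a hand-rolled sketch (not a library type), and, as noted above, the relation verified is <span class="math-container">$AP=PD$</span> rather than <span class="math-container">$D=P^{-1}AP$</span>, since <span class="math-container">$\det P=\varepsilon$</span> is a zero divisor:</p>

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dual:
    """Dual number a + b*eps with eps**2 == 0."""
    a: float  # real part
    b: float  # soul (coefficient of eps)
    def __add__(self, o):
        return Dual(self.a + o.a, self.b + o.b)
    def __mul__(self, o):
        # (a1 + b1 eps)(a2 + b2 eps) = a1 a2 + (a1 b2 + b1 a2) eps
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)

def d(x):  # embed a real number into the duals
    return Dual(float(x), 0.0)

def matmul(X, Y):  # 2x2 matrix product over the duals
    return [[X[i][0] * Y[0][j] + X[i][1] * Y[1][j] for j in range(2)]
            for i in range(2)]

eps = Dual(0.0, 1.0)
v = 3.0  # some boost parameter for the Galilean transformation

A = [[d(1), d(v)], [d(0), d(1)]]               # [[1, v], [0, 1]]
P = [[d(1), d(1)], [d(0), eps]]                # columns (1, 0) and (1, eps)
D = [[d(1), d(0)], [d(0), d(1) + d(v) * eps]]  # diag(1, 1 + v*eps)

assert eps * eps == d(0)             # the soul squares to zero
assert matmul(A, P) == matmul(P, D)  # AP = PD: "diagonalised" over the duals
```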
probability
<p>Choose a random number between $0$ and $1$ and record its value. Do this again and add the second number to the first number. Keep doing this until the sum of the numbers exceeds $1$. What's the expected value of the number of random numbers needed to accomplish this?</p>
<p>Here is a (perhaps) more elementary method. Let $X$ be the number of draws you need to add until the sum exceeds $1$. Then, by the tail-sum formula $\mathbb{E}[X]=\sum_{k\geq 0}\Pr[X&gt;k]$ and the fact that $\Pr[X&gt;0]=1$:</p> <p>$$ \mathbb{E}[X] = 1 + \sum_{k \geq 1} \Pr[X &gt; k] $$</p> <p>Now $X &gt; k$ exactly when the sum of the first $k$ numbers $x_1,\ldots,x_k$ is smaller than $1$. This probability is exactly equal to the volume of the $k$-dimensional set:</p> <p>$$ \left\{(x_1,\ldots,x_k) : \sum_{i=1}^k x_i \leq 1, \, x_1,\ldots,x_k \geq 0\right\}$$</p> <p>This is known as the $k$-dimensional <em>simplex</em>. When $k = 1$, we get a line segment of length $1$. When $k = 2$, we get a right isosceles triangle with legs of length $1$, so the area is $1/2$. When $k=3$, we get a triangular pyramid (a corner of the unit cube, with three mutually perpendicular edges of length $1$), so the volume is $1/6$. In general, the volume is $1/k!$, and so</p> <p>$$ \mathbb{E}[X] = 1 + \sum_{k \geq 1} \frac{1}{k!} = e. $$</p>
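<p>A quick Monte Carlo check of $\mathbb{E}[X]=e$ (a sketch; the seed and the $200{,}000$-trial count are arbitrary choices):</p>

```python
import math
import random

def draws_until_sum_exceeds_one(rng: random.Random) -> int:
    """Number of Uniform(0,1) draws until the running sum exceeds 1."""
    total, n = 0.0, 0
    while total <= 1.0:
        total += rng.random()
        n += 1
    return n

rng = random.Random(0)
trials = 200_000
estimate = sum(draws_until_sum_exceeds_one(rng) for _ in range(trials)) / trials
print(estimate, math.e)  # the estimate lands within a few hundredths of e
```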
<p>Assuming the numbers come from a uniform distribution over $[0,1]$ and that the trials are independent, here is an outline (this is example 7.4 4h. in Sheldon Ross' <i>A First Course in Probability</i>, sixth edition): </p> <ol> <li><p>Let $X_i$ be the number obtained on the $i$'th trial. </p></li> <li><p>For $0\le x\le1$, let $Y(x)$ be the minimum number of trials needed so that the sum of the $X_i$ exceeds $x$. Set $e(x)=\Bbb E [Y(x)]$.</p></li> <li><p>Compute $e(x)$ by conditioning on the value of $X_1$: $$\tag{1} e(x)=\int_0^1 \Bbb E [ Y(x) | X_1=y]\, dy. $$</p></li> </ol> <p>Here, use the fact that $$\tag{2}\Bbb E [ Y(x) | X_1=y] = \cases{1,&amp; $y&gt;x$\cr 1+e(x-y),&amp; $y\le x $}.$$</p> <p>Substitution of $(2)$ into $(1)$ will give $$\tag{3} e(x)=1+\int_0^x e(u)\,du. $$</p> <ol start="4"> <li><p>Solve equation $(3)$ (by differentiating both sides with respect to $x$ first) for $e(x)$.</p></li> <li><p>You wish to find $e(1)$.</p></li> </ol>
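<p>Carrying out step 4: differentiating $(3)$ gives $e'(x)=e(x)$ with $e(0)=1$, hence $e(x)=e^x$ and $e(1)=e$. The integral equation $(3)$ can also be solved numerically by Picard (fixed-point) iteration with trapezoid-rule quadrature; a sketch (grid size and iteration count are arbitrary choices):</p>

```python
import math

# Iterate e(x) = 1 + integral_0^x e(u) du on a uniform grid over [0, 1],
# starting from the guess e(x) = 1.  Each sweep adds one more term of the
# Taylor series of exp, so the iteration converges to e(x) = e^x.
n = 10_000
h = 1.0 / n
e_vals = [1.0] * (n + 1)

for _ in range(60):
    integral = 0.0
    new_vals = [1.0]  # e(0) = 1
    for i in range(1, n + 1):
        integral += 0.5 * h * (e_vals[i - 1] + e_vals[i])  # trapezoid rule
        new_vals.append(1.0 + integral)
    e_vals = new_vals

print(e_vals[n], math.e)  # e(1) ~ 2.71828
```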
logic
<p>It seems that given a statement <span class="math-container">$a = b$</span>, that <span class="math-container">$a + c = b + c$</span> is assumed also to be true.</p> <p>Why isn't this an axiom of arithmetic, like the commutative law or associative law?</p> <p>Or is it a consequence of some other axiom of arithmetic?</p> <p>Thanks!</p> <p>Edit: I understand the intuitive meaning of equality. Answers that stated that <span class="math-container">$a = b$</span> means they are the same number or object make sense but what I'm asking is if there is an explicit law of replacement that allows us to make this intuitive truth a valid mathematical deduction. For example is there an axiom of Peano's Axioms or some other axiomatic system that allows for adding or multiplying both sides of an equation by the same number? </p> <p>In all the texts I've come across I've never seen an axiom that states if <span class="math-container">$a = b$</span> then <span class="math-container">$a + c = b + c$</span>. I have however seen if <span class="math-container">$a &lt; b$</span> then <span class="math-container">$a + c &lt; b + c$</span>. In my view <span class="math-container">$&lt;$</span> and <span class="math-container">$=$</span> are similar so the absence of a definition for equality is strange.</p>
<p>If you are given that $$a = b$$ then you can always infer that $$f(a) = f(b)$$ for a function $f$. That's what it means to be a function. However, if you are given $$f(a) = f(b)$$ then you can't infer $$a = b$$ unless the function is injective (invertible) over a domain containing $a$ and $b$.</p> <p>For your problem, $f(x) = x + c$.</p>
<p>This is a basic property of equality. An equation like $$a=b$$ means that $a$ and $b$ are different names for <em>the same number</em>. If you do something to $a$, and you do the same thing to $b$, you must get the same result because $a$ and $b$ were the same to begin with.</p> <p>For example, how do we know that <a href="http://en.wikipedia.org/wiki/Samuel_Clemens">Samuel Clemens</a> and <a href="http://en.wikipedia.org/wiki/Mark_Twain">Mark Twain</a> were equal in height? Simple: Because they were the same person.</p> <p>How do we know that $a+c$ and $b+c$ are equal numbers? Because $a$ and $b$ are the same number.</p>
differentiation
<p>I'm looking for cases like $$\lim_{x \to 0} \frac {1-\cos(x)}{x^2}$$ that will not give you the answer the first time you use L'Hôpital's rule on them. For example in this case it will result in a number $\frac{1}{2}$ the second time you use L'Hôpital's rule. I want examples of limits like $\lim_{x \to c} \frac {f(x)}{g(x)}$ so that you have to use L'Hôpital's rule $5$ times, $18$ times, or say $n$ times on them to get an answer. Another question is about the case in which you use L'Hôpital's rule as many times as you want but you always end with $\lim_{x \to 0} \frac {0}{0}$. Does this case exist?</p>
<p>Sure. Do you want $18$ times? Then consider the limit$$\lim_{x\to0}\frac{x^{18}}{x^{18}}$$or the non-trivial example$$\lim_{x\to0}\frac{\sin(x^{18})}{1-\cos(x^9)}.$$For the case in which you always get $\frac00$, consider the function$$\begin{array}{rccc}f\colon&amp;\mathbb{R}&amp;\longrightarrow&amp;\mathbb{R}\\&amp;x&amp;\mapsto&amp;\begin{cases}e^{-1/x^2}&amp;\text{ if }x\neq0\\0&amp;\text{ if }x=0\end{cases}\end{array}$$and the limit$$\lim_{x\to0}\frac{f(x)}{f(x)}$$or the non-trivial example$$\lim_{x\to0}\frac{f(x)}{f(x^2)}.$$</p>
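<p>The non-trivial $18$-fold example can be sanity-checked numerically: by Taylor expansion, $\sin(x^{18})\sim x^{18}$ and $1-\cos(x^9)\sim \tfrac{1}{2}x^{18}$, so the limit is $2$. A sketch:</p>

```python
import math

def ratio(x: float) -> float:
    return math.sin(x ** 18) / (1.0 - math.cos(x ** 9))

for x in (0.6, 0.5, 0.4):
    print(x, ratio(x))  # approaches 2 as x -> 0
```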
<p>A couple of rather famous limits that each require 7 applications of L’Hôpital’s rule (unless evaluated by another method) are</p> <p>$$ \lim_{x \rightarrow 0} \,\frac{\tan{(\sin x)} \; - \; \sin{(\tan x)}}{x^7} \;\;\; \text{and} \;\;\; \lim_{x \rightarrow 0} \, \frac{\tan{(\sin x)} \; - \; \sin{(\tan x)}}{\arctan{(\arcsin x)} \; - \; \arcsin{(\arctan x)}} \;\; $$</p> <p>These two limits are discussed in the chronologically listed references below, with <strong>[11]</strong> being a generalization of the tan/sin and arctan/arcsin version. (Both <strong>[10]</strong> and <strong>[11]</strong> were brought to my attention by user21820.) Another limit that requires 7 applications of L’Hôpital’s rule is the following, which I mentioned (in an incorrect way, however) at the end of <strong>[6]</strong>:</p> <p>$$ \lim_{x \rightarrow 0} \,\frac{\tan x \; - \; 24\tan \frac{x}{2} \; - \; 4\sin x \; + \; 15x}{x^7} $$</p> <p><strong>[1]</strong> sci.math, <a href="http://mathforum.org/kb/message.jspa?messageID=225974" rel="noreferrer">13 February 2000</a></p> <p><strong>[2]</strong> sci.math, <a href="http://mathforum.org/kb/message.jspa?messageID=225975" rel="noreferrer">16 April 2000</a></p> <p><strong>[3]</strong> sci.math, <a href="http://mathforum.org/kb/message.jspa?messageID=225976" rel="noreferrer">11 July 2000</a></p> <p><strong>[4]</strong> sci.math, <a href="http://mathforum.org/kb/message.jspa?messageID=225978" rel="noreferrer">13 August 2001</a></p> <p><strong>[5]</strong> sci.math, <a href="http://mathforum.org/kb/message.jspa?messageID=3666566" rel="noreferrer">12 February 2005</a></p> <p><strong>[6]</strong> sci.math, <a href="http://mathforum.org/kb/message.jspa?messageID=6046911" rel="noreferrer">27 December 2007</a></p> <p><strong>[7]</strong> sci.math, <a href="http://mathforum.org/kb/message.jspa?messageID=6452579" rel="noreferrer">7 October 2008</a></p> <p><strong>[8]</strong> <a 
href="https://mathoverflow.net/questions/20696/a-question-regarding-a-claim-of-v-i-arnold">A question regarding a claim of V. I. Arnold</a>, mathoverflow, 8 April 2010.</p> <p><strong>[9]</strong> <a href="https://math.stackexchange.com/questions/548832/how-find-this-limit-lim-x-to-0-dfrac-sin-tanx-tan-sinxx">How find this limit $\lim_{x\to 0^{+}}\dfrac{\sin{(\tan{x})}-\tan{(\sin{x})}}{x^7}$</a>, Mathematics Stack Exchange, 2 November 2013.</p> <p><strong>[10]</strong> <a href="https://math.stackexchange.com/questions/809632/limit-of-dfrac-tan-1-sin-1x-sin-1-tan-1x-tan-sinx">Limit of $\dfrac{\tan^{-1}(\sin^{-1}(x))-\sin^{-1}(\tan^{-1}(x))}{\tan(\sin(x))-\sin(\tan(x))}$ as $x \rightarrow 0$</a>, Mathematics Stack Exchange, 26 May 2014.</p> <p><strong>[11]</strong> <a href="https://math.stackexchange.com/questions/810079/lim-x-to-0-dfracfx-gxg-1x-f-1x-1-for-any-f-g-in-c1">$\lim_{x \to 0} \dfrac{f(x)-g(x)}{g^{-1}(x)-f^{-1}(x)} = 1$ for any $f,g \in C^1$ that are tangent to $\text{id}$ at $0$ with some simple condition</a>, Mathematics Stack Exchange, 26 May 2014.</p>
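<p>For reference, the first of these limits equals $1/30$: expanding both compositions gives $\tan(\sin x)-\sin(\tan x)=\frac{x^7}{30}+O(x^9)$. A quick numerical sketch reflecting this:</p>

```python
import math

def diff(x: float) -> float:
    return math.tan(math.sin(x)) - math.sin(math.tan(x))

for x in (0.2, 0.1, 0.05):
    print(x, diff(x) / x ** 7)  # tends to 1/30 = 0.0333...
```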
game-theory
<p>A nim addition table is essentially created by putting, in any cell, the smallest number not to the left of the cell and not above that cell in its column. However, I know for a fact that nim addition is equivalent to binary addition without carrying. How does the first method of creating a nim addition table translate to the second method? I am certain it has something to do with the fact that the nim table cycles every $\mathbb{Z}/2^n$. </p> <p>I know that binary addition without carrying is the same as writing out 2 numbers as a sum of powers of 2, scratching out powers appearing in both sums, and adding the remaining. So in essence, I am asking how the first method I mentioned of creating a nim addition table translates to this method. Note that I am not asking about the actual game.</p>
<p>So, to interpret your question, you have two different procedures for filling out a table of numbers which has a definite top and left edge, but continues indefinitely down and to the right. First is</p> <blockquote> <ol> <li><p>Don't fill in any cell before all the positions directly above and directly to the left of it are filled.</p></li> <li><p>Write in each cell the smallest nonnegative integer that doesn't appear in any cell directly above it or directly to the left of it</p></li> </ol> </blockquote> <p>The second is</p> <blockquote> <p>Write in the cell with (zero-based) coordinates $(i,j)$ the number $i\oplus j$, where $\oplus$ is the bitwise XOR operation (that is, as you say, binary addition with no carries).</p> </blockquote> <p>And you want to prove that these two methods result in the same value in each cell of the table.</p> <p>We can prove that by long induction on $i$ and $j$. For the induction step we imagine that the cells above and to the left of $(i,j)$ have already been filled with the $\oplus$ of their coordinates, and we want to prove that step (2) of the first procedure results in exactly $i\oplus j$.</p> <p>One half of this is easy: since $\oplus$ is a group operation, $i\oplus j$ cannot equal $i\oplus b$ or $a\oplus j$ for any $b\ne j$ or $a\ne i$ -- in particular not for any <em>smaller</em> $a$ or $b$.</p> <p>We then need to prove that $i\oplus j$ is the <em>least</em> number that hasn't yet been used in the same row or column. In other words, every $c &lt; i\oplus j$ must already have been used somewhere.</p> <p>Consider the most significant bit position where $c$ differs from $i\oplus j$. Because $c$ is <em>less than</em> $i\oplus j$, there must be a <code>0</code> in this position of $c$ and a <code>1</code> in the position of $i\oplus j$. The latter <code>1</code> must come from either $i$ or $j$. Assume it comes from $i$ (the other case is exactly similar). 
Then flipping that bit of $i$ produces a number $a_0$ such that $a_0\oplus j$ agrees with $c$ up to and including the bit position of first difference. With appropriate adjustments to the <em>less</em> significant bits we can get an $a$ such that $a\oplus j=c$. And this $a$ must be less than $i$, because $a$ and $i$ first differ on a bit position that has <code>1</code> in $i$ and <code>0</code> in $a$.</p> <p>This completes the induction step, and thus the proof.</p>
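<p>The equivalence proved above is easy to machine-check on an initial block of the table (a sketch; the $64\times 64$ size is an arbitrary choice):</p>

```python
def mex(values):
    """Minimum excludant: the smallest nonnegative integer not in `values`."""
    m = 0
    while m in values:
        m += 1
    return m

N = 64
table = [[0] * N for _ in range(N)]
for i in range(N):
    for j in range(N):  # cells above and to the left are already filled
        used = {table[i][b] for b in range(j)} | {table[a][j] for a in range(i)}
        table[i][j] = mex(used)

# The mex-filled table coincides with bitwise XOR of the coordinates.
assert all(table[i][j] == (i ^ j) for i in range(N) for j in range(N))
```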
<p>Using your first method, the number entered in row $r$, column $c$ is the smallest non-negative integer not appearing earlier in row $r$ or column $c$; call this number $r\oplus c$. Then $r\oplus c$ is the smallest non-negative integer that does <strong>not</strong> belong to the set</p> <p>$$\{r\oplus c':c'&lt;c\}\cup\{r'\oplus c:r'&lt;r\}\;;$$</p> <p>this is commonly called the <em>minimum excluded number</em> and denoted by</p> <p>$$\operatorname{mex}\Big(\{r\oplus c':c'&lt;c\}\cup\{r'\oplus c:r'&lt;r\}\Big)\;,$$</p> <p>as in <a href="http://en.wikipedia.org/wiki/Nimber" rel="nofollow">this article</a>. This is equal to the non-carrying binary sum (or exclusive OR) of $r$ and $c$. The article gives a proof, but it’s pretty concise; I’ll try to expand it a bit. For non-negative integers $m$ and $n$ let $m\dot+n$ be the non-carrying binary sum of $m$ and $n$.</p> <p>The proof is by induction. Suppose that we’ve already filled in all of the numbers $r\oplus c'$ with $c'&lt;c$ and $r'\oplus c$ with $r'&lt;r$, i.e., all of the numbers to the left of $r\oplus c$ in row $r$ and above it in column $c$, and suppose that that $r\oplus c'=r\dot+c'$ for $c'&lt;c$ and $r'\oplus c=r'\dot+c$ for $r'&lt;r$; we’ll show that this implies that $r\oplus c=r\dot+c$.</p> <p>Let $n=r\dot+ c$. Then $r\dot+n=r\dot+r\dot+c=c$, since $r\dot+r=0$; that is, $c$ is the only non-negative integer whose noncarrying binary sum with $r$ is $n$. It follows that $r\oplus c'=r\dot+c'\ne n$ for each $c'&lt;c$ and hence that $n\notin\{r\oplus c':c'&lt;c\}$. Similarly, $r$ is the only non-negative integer whose noncarrying binary sum with $r$ is $n$, so $r'\oplus c=r'\dot+c\ne n$ for each $r'&lt;r$, and $n\notin\{r'\oplus c:r'&lt;r\}$. 
In other words, $n\notin\{r\oplus c':c'&lt;c\}\cup\{r'\oplus c:r'&lt;r\}$: $n=r\dot+c$ does not appear to the left of or above $r\oplus c$ and is at least a candidate to be $r\oplus c$.</p> <p>Now suppose that $k&lt;n=r\dot+c$; we want to show that $k\in\{r\oplus c':c'&lt;c\}\cup\{r'\oplus c:r'&lt;r\}$, i.e., that $k$ appears either to the left of or above $r\oplus c$, so that $r\oplus c\ne k$. This will imply that $n$ is the smallest available number and hence the value that we assign to $r\oplus c$. To do this, let $\ell=k\dot+n=k\dot+r\dot+c$.</p> <blockquote> <p><strong>Claim:</strong> Either $r&gt;\ell\dot+r=k\dot+c$, or $c&gt;\ell\dot+c=k\dot+r$.</p> </blockquote> <p>Suppose for the moment that the claim is true. If $r&gt;k\dot+c$, let $r'=k\dot+c$; then $k=(k\dot+c)\dot+c=r'\dot+c=r'\oplus c$ appears to the left of $r\oplus c$ in row $r$ and is not available to be $r\oplus c$. Similarly, if $c&gt;k\dot+r$, let $c'=k\dot+r$; then $k=r\dot+(k\dot+r)=r\dot+c'=r\oplus c'$ appears above $r\oplus c$ in column $c$ and is not available to be $r\oplus c$. In either case $k$ is not available to be $r\oplus c$. Since $k$ was an arbitrary non-negative integer less than $n=r\dot+c$, it follows that $n$ is the smallest available integer and hence that $r\oplus c=n=r\dot+c$. This completes the induction step (apart from the proof of the claim), and the result follows.</p> <blockquote> <p><strong>Proof of Claim:</strong> Consider the most significant (leftmost) bit of the binary representation of $\ell$: it must be present in exactly one or all three of $k,r$, and $c$. If that bit were present in $k$, it would be zeroed out in $\ell\dot+k$, which would then be less than $k$. However, $\ell\dot+k=r\dot+c=n&gt;k$, so this isn’t the case, and it must therefore be present in $r$ or in $c$. But then the same argument shows that if it’s present in $r$, then $\ell\dot+r&lt;r$, and if it’s present in $c$, then $\ell\dot+c&lt;c$, proving the claim. $\dashv$</p> </blockquote>
differentiation
<p>The archetypal example of this is in the equation <span class="math-container">$\frac{dy}{dx}=y.$</span> When solving by separation, you end up reaching the result <span class="math-container">\begin{align*} |y|&amp;=C_1 e^x \\ \Rightarrow y&amp;=C_2 e^x. \\ \end{align*}</span> The justification for this step, according to a lot of sources, is the fact that the + or - from the absolute value can be combined with the constant to yield a new constant. However, this falls into the pointwise trap: what prevents the function from taking the positive branch at some points and the negative branch at others? In this case, making the function piecewise destroys continuity, violating the initial differential equation, but an extra step would be needed to rigorously confirm this result.</p> <p>A bigger problem arises in the case of differential equations such as <span class="math-container">$\frac{dy}{dx}\cdot \frac{1}{2}x=y,$</span> let's say with initial condition <span class="math-container">$(1,1).$</span> Again, this equation is quite easy to solve by separation of variables to yield the result <span class="math-container">$y=x^2$</span> as your solution - or is it? As it turns out, <span class="math-container">$y=\textbf{sgn}(x)\cdot x^2$</span> <strong>also</strong> satisfies the differential equation everywhere as well as the initial condition. This, to me at least, is alarming, since I've never seen these kinds of &quot;pathological&quot; solutions addressed in the solving of differential equations, and I can't find anything on the internet about this either, despite this being an extremely simple differential equation which has definitely appeared in textbooks or tests before. Even WolframAlpha ignores this solution and gives only <span class="math-container">$y=x^2.$</span> Is this something that is just commonly overlooked, or did teachers/authors think it wasn't important, or... what? I have yet to find a satisfying explanation. 
I have also asked my teacher, again with no satisfying answer.</p>
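<p>A quick finite-difference check (my own sketch, not from any text) that both <span class="math-container">$y=x^2$</span> and <span class="math-container">$y=\operatorname{sgn}(x)\,x^2$</span> satisfy <span class="math-container">$\frac{dy}{dx}\cdot\frac{1}{2}x=y$</span> and the initial condition <span class="math-container">$(1,1)$</span>:</p>

```python
import math

def sgn(x: float) -> float:
    return math.copysign(1.0, x) if x != 0 else 0.0

def parabola(x: float) -> float:
    return x * x

def branched(x: float) -> float:
    return sgn(x) * x * x

def satisfies_ode(y, points, h=1e-6, tol=1e-6):
    """Check y'(x) * x/2 == y(x) via central differences at sample points."""
    for x in points:
        dy = (y(x + h) - y(x - h)) / (2 * h)
        if abs(dy * x / 2 - y(x)) > tol:
            return False
    return True

pts = [-2, -1, -0.5, 0.5, 1, 2]
assert satisfies_ode(parabola, pts)
assert satisfies_ode(branched, pts)
assert parabola(1) == branched(1) == 1  # both pass through (1, 1)
```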
<p>Since your question asks specifically about absolute values, I would like to interrogate slightly a statement from it:</p> <blockquote> <p>When solving by separation, you end up reaching the result <span class="math-container">$|y| =C_1 e^x$</span></p> </blockquote> <p>Presumably this statement is based on something you learned in single-variable calculus, viz., that <span class="math-container">$\int \frac{1}{x} \, dx = \ln |x|$</span>. One thing that's being glossed over here is the meaning of this latter equation. The expression <span class="math-container">$\frac{1}{x}$</span> does not define a function on <span class="math-container">$\mathbb{R}$</span>, and so it doesn't morally have a single antiderivative on <span class="math-container">$\mathbb{R}$</span>, as the formula suggests. Rather, it has two antiderivatives, <span class="math-container">$\ln(x)$</span> on <span class="math-container">$(0, \infty)$</span> and <span class="math-container">$\ln(-x)$</span> on <span class="math-container">$(-\infty, 0)$</span>. And sure, one can summarize this by a single formula, but it's somewhat misleading to do so. (In my experience calculus textbooks handle this point fine, but it's sufficiently sophisticated that it washes over most single-variable calculus students, and it would be an eccentric instructor who focused on drilling the point.)</p> <p>Given the issue above, one way to view what's going on is that by the time you have reached the statement <span class="math-container">$|y| = C e^x$</span> you have already passed through the dodgy part of the argument, which is about what happens in separation of variables at the points where one side or the other has division by zero. The same thing happens in the second example, where division by zero happens on both sides of the separated equation.</p>
<p>There's two possible answers to your question (at least in my mind), neither of which are particularly satisfying.</p> <p><span class="math-container">$1)$</span> The reason calculus textbooks gloss over the things you mentioned in your question is that they are providing a relatively simple description of the math. That is, they are teaching the math and adding that level of rigor and detail wouldn't provide a better/more useful understanding of differential equations.</p> <p><span class="math-container">$2)$</span> The study of differential equations can trace its roots back to solving real world physics and engineering problems. The reason the finer details might be ignored is that they simply don't matter for the vast majority of differential equations encountered in real world applications. You ask &quot;what prevents the function from taking the positive branch at some points and the negative branch at others?&quot; Well, reality mainly. There are not any physical processes (at least none that I can think of) that such a function would describe and so we can safely ignore such a solution.</p> <p>I make no judgements on the mathematical appropriateness of either of these reasonings, just that I believe them to be the most likely answers.</p>
matrices
<blockquote> <p>Let $ A, B $ be two square matrices of order $n$. Do $ AB $ and $ BA $ have same minimal and characteristic polynomials?</p> </blockquote> <p>I have a proof only if $ A$ or $ B $ is invertible. Is it true for all cases?</p>
<p>Before proving that $AB$ and $BA$ have the same characteristic polynomial, show that if $A$ is $m\times n$ and $B$ is $n\times m$, then the characteristic polynomials of $AB$ and $BA$ satisfy $$x^n|xI_m-AB|=x^m|xI_n-BA|;$$ from this one easily concludes that if $m=n$ then $AB$ and $BA$ have the same characteristic polynomial.</p> <p>Define $$C = \begin{bmatrix} xI_m &amp; A \\B &amp; I_n \end{bmatrix},\ D = \begin{bmatrix} I_m &amp; 0 \\-B &amp; xI_n \end{bmatrix}.$$ We have $$ \begin{align*} \det CD &amp;= x^n|xI_m-AB|,\\ \det DC &amp;= x^m|xI_n-BA|, \end{align*} $$ and since $\det CD=\det C\det D=\det DC$, the displayed identity follows; in particular, if $m=n$ then $AB$ and $BA$ have the same characteristic polynomial.</p>
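<p>The conclusion is easy to spot-check numerically: NumPy's <code>np.poly</code> returns the characteristic-polynomial coefficients of a square matrix, and they agree for $AB$ and $BA$ even when neither factor is invertible. A sketch (assuming NumPy is available):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
B[:, 0] = 0.0  # force B to be singular, so no invertibility argument applies

p_AB = np.poly(A @ B)  # coefficients of det(xI - AB)
p_BA = np.poly(B @ A)
assert np.allclose(p_AB, p_BA)
```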
<p>If $A$ is invertible then $A^{-1}(AB)A= BA$, so $AB$ and $BA$ are similar, which implies (but is stronger than) that $AB$ and $BA$ have the same minimal polynomial and the same characteristic polynomial. The same goes if $B$ is invertible.</p> <p>In general, from the above observation, it is not too difficult to show that $AB$ and $BA$ have the same characteristic polynomial, though the type of proof can depend on the field over which the matrix coefficients are taken. If the matrices are in $\mathcal{M}_n(\mathbb C)$, you can use the fact that $\operatorname{GL}_n(\mathbb C)$ is dense in $\mathcal{M}_n(\mathbb C)$ and the continuity of the function which maps a matrix to its characteristic polynomial. There are at least 5 other ways to proceed (especially for fields other than $\mathbb C$).</p> <p>In general $AB$ and $BA$ do not have the same minimal polynomial. I'll let you search a bit for a counterexample.</p>
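<p>A standard counterexample for the minimal-polynomial part (not necessarily the one the answerer has in mind): with $A$ and $B$ below, $AB\neq 0$ but $(AB)^2=0$, while $BA=0$, so the minimal polynomials are $x^2$ and $x$ respectively, even though the characteristic polynomials agree. A sketch:</p>

```python
import numpy as np

A = np.array([[0, 1],
              [0, 0]])
B = np.array([[0, 0],
              [0, 1]])

AB, BA = A @ B, B @ A
assert AB.any() and not (AB @ AB).any()  # AB != 0 yet (AB)^2 = 0: minimal polynomial x^2
assert not BA.any()                      # BA = 0: minimal polynomial x
assert np.allclose(np.poly(AB), np.poly(BA))  # characteristic polynomials both x^2
```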
linear-algebra
<p>I understand that a vector has direction and magnitude whereas a point doesn't.</p> <p>However, in the course notes that I am using, it is stated that a point is the same as a vector.</p> <p>Also, can you do cross product and dot product using two points instead of two vectors? I don't think so, but my roommate insists yes, and I'm kind of confused now.</p>
<p>Here's an answer without using symbols.</p> <p>The difference is precisely that between <em>location</em> and <em>displacement</em>.</p> <ul> <li>Points are <strong>locations in space</strong>.</li> <li>Vectors are <strong>displacements in space</strong>.</li> </ul> <p>An analogy with time works well.</p> <ul> <li>Times, (also called instants or datetimes) are <strong>locations in time</strong>.</li> <li>Durations are <strong>displacements in time</strong>.</li> </ul> <p>So, in time,</p> <ul> <li>4:00 p.m., noon, midnight, 12:20, 23:11, etc. are <em>times</em></li> <li>+3 hours, -2.5 hours, +17 seconds, etc., are <em>durations</em></li> </ul> <p>Notice how durations can be positive or negative; this gives them &quot;direction&quot; in addition to their pure scalar value. Now the best way to mentally distinguish times and durations is by the operations they support</p> <ul> <li>Given a time, you can add a duration to get a new time (3:00 + 2 hours = 5:00)</li> <li>You can subtract two times to get a duration (7:00 - 1:00 = 6 hours)</li> <li>You can add two durations (3 hrs, 20 min + 6 hrs, 50 min = 10 hrs, 10 min)</li> </ul> <p>But <em>you cannot add two times</em> (3:15 a.m. + noon = ???)</p> <p>Let's carry the analogy over to now talk about space:</p> <ul> <li><span class="math-container">$(3,5)$</span>, <span class="math-container">$(-2.25,7)$</span>, <span class="math-container">$(0,-1)$</span>, etc. 
are <em>points</em></li> <li><span class="math-container">$\langle 4,-5 \rangle$</span> is a <em>vector</em>, meaning 4 units east then 5 south, assuming north is up (sorry residents of southern hemisphere)</li> </ul> <p>Now we have exactly the same analogous operations in space as we did with time:</p> <ul> <li>You can add a point and a vector: Starting at <span class="math-container">$(4,5)$</span> and going <span class="math-container">$\langle -1,3 \rangle$</span> takes you to the point <span class="math-container">$(3,8)$</span></li> <li>You can subtract two points to get the displacement between them: <span class="math-container">$(10,10) - (3,1) = \langle 7,9 \rangle$</span>, which is the displacement you would take from the second location to get to the first</li> <li>You can add two displacements to get a compound displacement: <span class="math-container">$\langle 1,3 \rangle + \langle -5,8 \rangle = \langle -4,11 \rangle$</span>. That is, going 1 step east and 3 north, THEN going 5 west and 8 north is the same thing as just going 4 west and 11 north.</li> </ul> <p>But you cannot add two points.</p> <p>In more concrete terms: Moscow + <span class="math-container">$\langle\text{200 km north, 7000 km west}\rangle$</span> is another location (point) somewhere on earth. But Moscow + Los Angeles makes no sense.</p> <p>To summarize, a location is where (or when) you are, and a displacement is <em>how to get from one location to another</em>. Displacements have both magnitude (how far to go) and a direction (which in time, a one-dimensional space, is simply positive or negative). In space, locations are <strong>points</strong> and displacements are <strong>vectors</strong>. In time, locations are (points in) time, a.k.a. <strong>instants</strong> and displacements are <strong>durations</strong>.</p> <p><strong>EDIT 1</strong>: In response to some of the comments, I should point out that 4:00 p.m.
is <em>NOT</em> a displacement, but &quot;+4 hours&quot; and &quot;-7 hours&quot; are. Sure you can get to 4:00 p.m. (an instant) by adding the displacement &quot;+16 hours&quot; to the instant midnight. You can also get to 4:00 p.m. by adding the displacement &quot;-3 hours&quot; to 7:00 p.m. The source of the confusion between locations and displacements is that people mentally work in coordinate systems relative to some origin (whether <span class="math-container">$(0,0)$</span> or &quot;midnight&quot; or similar) and both of these concepts are represented as coordinates. I guess that was the point of the question.</p> <p><strong>EDIT 2</strong>: I added some text to make clear that durations actually have direction; I had written both -2.5 hours and +3 hours earlier, but some might have missed that the negative encapsulated a direction, and felt that a duration is &quot;only a scalar&quot; when in fact the adding of a <span class="math-container">$+$</span> or <span class="math-container">$-$</span> really does give it direction.</p> <p><strong>EDIT 3</strong>: A summary in table form:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">Concept</th> <th>SPACE</th> <th>TIME</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">LOCATION</td> <td>POINT</td> <td>TIME</td> </tr> <tr> <td style="text-align: left;">DISPLACEMENT</td> <td>VECTOR</td> <td>DURATION</td> </tr> <tr> <td style="text-align: left;"></td> <td></td> <td></td> </tr> <tr> <td style="text-align: left;">Loc - Loc = Disp</td> <td>Pt - Pt = Vec</td> <td>Time - Time = Dur</td> </tr> <tr> <td style="text-align: left;"></td> <td><span class="math-container">$(3,5)-(10,2) = \langle -7,3 \rangle$</span></td> <td>7:30 - 1:15 = 6hr15m</td> </tr> <tr> <td style="text-align: left;"></td> <td></td> <td></td> </tr> <tr> <td style="text-align: left;">Loc + Disp = Loc</td> <td>Pt + Vec = Pt</td> <td>Time + Dur = Time</td> </tr> <tr> <td style="text-align:
left;"></td> <td><span class="math-container">$(10,2)+ \langle -7,3 \rangle = (3,5)$</span></td> <td>3:15 + 2hr = 5:15</td> </tr> <tr> <td style="text-align: left;"></td> <td></td> <td></td> </tr> <tr> <td style="text-align: left;">Disp + Disp = Disp</td> <td>Vec + Vec = Vec</td> <td>Dur + Dur = Dur</td> </tr> <tr> <td style="text-align: left;"></td> <td><span class="math-container">$\langle 8, -5 \rangle + \langle -7, 3 \rangle = \langle 1, -2 \rangle$</span></td> <td>3hr + 5hr = 8hr</td> </tr> </tbody> </table> </div>
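The operation table above translates directly into code. Here is a minimal Python sketch (the `Point`/`Vec` class names are just illustrative, not from any library):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vec:
    x: float
    y: float
    def __add__(self, other):                 # Disp + Disp = Disp
        return Vec(self.x + other.x, self.y + other.y)

@dataclass(frozen=True)
class Point:
    x: float
    y: float
    def __add__(self, v):                     # Loc + Disp = Loc
        if not isinstance(v, Vec):
            raise TypeError("cannot add two Points")
        return Point(self.x + v.x, self.y + v.y)
    def __sub__(self, other):                 # Loc - Loc = Disp
        return Vec(self.x - other.x, self.y - other.y)

# the three rows of the table, with the table's own example values
assert Point(3, 5) - Point(10, 2) == Vec(-7, 3)
assert Point(10, 2) + Vec(-7, 3) == Point(3, 5)
assert Vec(8, -5) + Vec(-7, 3) == Vec(1, -2)
```

The missing fourth row is the point: `Point(1, 1) + Point(2, 2)` raises a `TypeError`, just as "Moscow + Los Angeles" makes no sense.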
<p>Points and vectors are not the same thing. Given two points in 3D space, we can make a vector from the first point to the second. And, given a vector and a point, we can start at the point and "follow" the vector to get another point.</p> <p>There is a nice fact, however: the points in 3D space (or $\mathbb{R}^n$, more generally) are in a very nice correspondence with the vectors that start at the point $(0,0,0)$. Essentially, the idea is that we can represent the vector with its ending point, and no information is lost. This is sometimes called putting the vector in "standard position".</p> <p>For a course like vector calculus, it is important to keep a good distinction between points and vectors. Points correspond to vectors that start at the origin, but we may need vectors that start at other points.</p> <p>For example, given three points $A$, $B$, and $C$ in 3D space, we may want to find the equation of the plane that contains them. If we knew the normal vector $\vec n$ of the plane, we could write the equation directly as $\vec n \cdot (x,y,z) = \vec n \cdot A$. So we need to find that normal $\vec n$. To do that, we compute the cross product of the vectors $\vec {AB}$ and $\vec{AC}$. If we computed the cross product of $A$ and $C$ instead (pretending they are vectors in standard position), we would not, in general, get the right normal vector.</p> <p>For example, if $A = (1,0,0)$, $B = (0,1,0)$, and $C = (0,0,1)$, the normal vector of the corresponding plane would not be parallel to any coordinate axis. But if we take any two of $A$, $B$, and $C$ and compute a cross product, we will get a vector parallel to one of the coordinate axes.</p>
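To check the plane example numerically (a quick numpy sketch, following the answer's $A$, $B$, $C$):

```python
import numpy as np

A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
C = np.array([0.0, 0.0, 1.0])

# normal from the displacement vectors AB and AC
normal = np.cross(B - A, C - A)
assert np.allclose(normal, [1.0, 1.0, 1.0])   # not parallel to any axis

# treating the points themselves as vectors gives the wrong normal
wrong = np.cross(A, B)
assert np.allclose(wrong, [0.0, 0.0, 1.0])    # parallel to the z-axis
```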
linear-algebra
<p>How do I find a vector perpendicular to a vector like this: $$3\mathbf{i}+4\mathbf{j}-2\mathbf{k}?$$ Could anyone explain this to me, please?</p> <p>I have a solution to this when I have $3\mathbf{i}+4\mathbf{j}$, but could not solve if I have $3$ components...</p> <p>When I googled, I saw the direct solution but did not find a process or method to follow. Kindly let me know the way to do it. Thanks.</p>
<p>There are infinitely many vectors in three dimensions that are perpendicular to a fixed one. Any such vector $v$ need only satisfy the formula: $$(3\mathbf{i}+4\mathbf{j}-2\mathbf{k}) \cdot v=0$$</p> <p>To find all of them, choose two independent vectors perpendicular to the given one, like $v_1=(4\mathbf{i}-3\mathbf{j})$ and $v_2=(2\mathbf{i}+3\mathbf{k})$; then any linear combination of them is also perpendicular to the original vector: $$v=((4a+2b)\mathbf{i}-3a\mathbf{j}+3b\mathbf{k}) \hspace{10 mm} a,b \in \mathbb{R}$$</p>
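A quick numerical check of the above (numpy sketch; the values of $a$ and $b$ are arbitrary):

```python
import numpy as np

u = np.array([3.0, 4.0, -2.0])    # the given vector 3i + 4j - 2k
v1 = np.array([4.0, -3.0, 0.0])   # the answer's v1
v2 = np.array([2.0, 0.0, 3.0])    # the answer's v2

assert np.dot(u, v1) == 0
assert np.dot(u, v2) == 0

# any linear combination a*v1 + b*v2 stays perpendicular to u
a, b = 1.5, -2.0
v = a * v1 + b * v2
assert np.isclose(np.dot(u, v), 0)
```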
<p>Take the cross product with any vector that is not parallel to the given one. You will get one such perpendicular vector.</p>
linear-algebra
<blockquote> <p>$\mathbf{a}\times \mathbf{b}$ follows the right hand rule? Why not the left hand rule? Why is it $a b \sin (x)$ times the perpendicular vector? Why is $\sin (x)$ used with the vectors but $\cos(x)$ is a scalar product?</p> </blockquote> <p>So why is the cross product defined in the way that it is? I am mainly interested in the right hand rule definition, as it seems out of reach to me.</p>
<p>The cross product originally came from the <em>quaternions</em>, which extend the complex numbers with two other 'imaginary units' $j$ and $k$, that have noncommutative multiplication (i.e. you can have $uv \neq vu$), but satisfy the relations</p> <p>$$ i^2 = j^2 = k^2 = ijk = -1 $$</p> <p>AFAIK, this is the exact form in which Hamilton originally conceived them. Presumably the choice that $ijk = -1$ is simply due to the convenience in writing this formula compactly, although it could have just as easily been an artifact of how he arrived at them.</p> <p>Vector algebra comes from separating the quaternions into scalars (the real multiples of $1$) and vectors (the real linear combinations of $i$, $j$, and $k$). The cross product is literally just the vector component of the ordinary product of two vector quaternions. (The scalar component is the negative of the dot product.)</p> <p>The association of $i$, $j$, and $k$ with the unit vectors along the $x$, $y$, and $z$ axes is just lexicographic convenience; you're just associating them in alphabetic order.</p>
<p>If the right hand rule seems too arbitrary to you, use a definition of the cross product that doesn't make use of it (explicitly). Here's one way to construct the cross product:</p> <p>Recall that the (signed) volume of a <a href="https://en.wikipedia.org/wiki/Parallelepiped">parallelepiped</a> in $\Bbb R^3$ with sides $a, b, c$ is given by</p> <p>$$\textrm{Vol} = \det(a,b,c)$$</p> <p>where $\det(a,b,c) := \begin{vmatrix}a_1 &amp; b_1 &amp; c_1 \\ a_2 &amp; b_2 &amp; c_2 \\ a_3 &amp; b_3 &amp; c_3\end{vmatrix}$.</p> <p>Now let's fix $b$ and $c$ and allow $a$ to vary. Then what is the volume in terms of $a = (a_1, a_2, a_3)$? Let's see:</p> <p>$$\begin{align}\textrm{Vol} = \begin{vmatrix}a_1 &amp; b_1 &amp; c_1 \\ a_2 &amp; b_2 &amp; c_2 \\ a_3 &amp; b_3 &amp; c_3\end{vmatrix} &amp;= a_1\begin{vmatrix} b_2 &amp; c_2 \\ b_3 &amp; c_3\end{vmatrix} - a_2\begin{vmatrix} b_1 &amp; c_1 \\ b_3 &amp; c_3\end{vmatrix} + a_3\begin{vmatrix} b_1 &amp; c_1 \\ b_2 &amp; c_2\end{vmatrix} \\ &amp;= a_1(b_2c_3-b_3c_2)+a_2(b_3c_1-b_1c_3)+a_3(b_1c_2-b_2c_1) \\ &amp;= (a_1, a_2, a_3)\cdot (b_2c_3-b_3c_2,b_3c_1-b_1c_3,b_1c_2-b_2c_1)\end{align}$$</p> <p>So apparently the volume of a parallelepiped will always be the vector $a$ dotted with this interesting vector $(b_2c_3-b_3c_2,b_3c_1-b_1c_3,b_1c_2-b_2c_1)$. We call that vector the cross product and denote it $b\times c$.</p> <hr> <p>From the above construction we can define the cross product in either of two equivalent ways:</p> <p><strong>Implicit Definition</strong><br> Let $b,c\in \Bbb R^3$. Then define the vector $d = b\times c$ by $$a\cdot d = \det(a,b,c),\qquad \forall a\in\Bbb R^3$$</p> <p><strong>Explicit Definition</strong><br> Let $b=(b_1,b_2,b_3)$, $c=(c_1,c_2,c_3)$. Then define the vector $b\times c$ by $$b\times c = (b_2c_3-b_3c_2,b_3c_1-b_1c_3,b_1c_2-b_2c_1)$$</p> <hr> <p>Now you're probably wondering where that arbitrary right-handedness went. Surely it must be hidden in there somewhere. It is.
It's in the ordered basis I'm implicitly using to give the coordinates of each of my vectors. If you choose a right-handed coordinate system, then you'll get a right-handed cross product. If you choose a left-handed coordinate system, then you'll get a <em>left</em>-handed cross product. So this definition essentially shifts the choice of chirality onto the basis for the space. This is actually rather pleasing (at least to me).</p> <hr> <p>The other properties of the cross product are readily verified from this definition. For instance, try checking that $b\times c$ is orthogonal to both $b$ and $c$. If you know the properties of determinants it should be immediately clear. Another property of the cross product, $\|b\times c\| = \|b\|\|c\|\sin(\theta)$, is easily determined by the geometry of our construction. Draw a picture and see if you can verify this one.</p>
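These definitions are easy to test numerically. A short numpy sketch (random vectors with a fixed seed) checking the triple-product identity and the two properties mentioned at the end:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c = rng.standard_normal((3, 3))   # three random vectors in R^3

# a . (b x c) = det(a, b, c), with the vectors as columns
assert np.isclose(np.dot(a, np.cross(b, c)),
                  np.linalg.det(np.column_stack([a, b, c])))

# b x c is orthogonal to both b and c
assert np.isclose(np.dot(b, np.cross(b, c)), 0)
assert np.isclose(np.dot(c, np.cross(b, c)), 0)

# ||b x c|| = ||b|| ||c|| sin(theta)
cos_t = np.dot(b, c) / (np.linalg.norm(b) * np.linalg.norm(c))
assert np.isclose(np.linalg.norm(np.cross(b, c)),
                  np.linalg.norm(b) * np.linalg.norm(c) * np.sqrt(1 - cos_t**2))
```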
linear-algebra
<p>I wrote an answer to <a href="https://math.stackexchange.com/questions/854154/when-is-r-a-1-rt-invertible/854160#854160">this</a> question based on determinants, but subsequently deleted it because the OP is interested in non-square matrices, which effectively blocks the use of determinants and thereby undermined the entire answer. However, it can be salvaged if there exists a function $\det$ defined on <strong>all</strong> real-valued matrices (not just the square ones) having the following properties.</p> <ol> <li>$\det$ is real-valued</li> <li>$\det$ has its usual value for square matrices</li> <li>$\det(AB)$ always equals $\det(A)\det(B)$ whenever the product $AB$ is defined.</li> <li>$\det(A) \neq 0$ iff $\det(A^\top) \neq 0$</li> </ol> <p>Does such a function exist?</p>
<p>Such a function cannot exist. Let $A = \begin{pmatrix} 1 &amp; 0 \\ 0 &amp; 1 \\ 0 &amp; 0\end{pmatrix}$ and $B = \begin{pmatrix} 1 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 0 \end{pmatrix}$. Then, since both $AB$ and $BA$ are square, if there existed a function $D$ with properties 1-3 above, there would hold \begin{align} \begin{split} 1 &amp;= \det \begin{pmatrix} 1 &amp; 0 \\ 0 &amp; 1 \end{pmatrix} = \det(BA) = D(BA) = D(B)D(A) \\ &amp;= D(A)D(B) = D(AB) = \det(AB) = \det \begin{pmatrix} 1 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 0 \end{pmatrix} = 0, \end{split} \end{align} a contradiction.</p>
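The counterexample is easy to verify numerically (a small numpy sketch):

```python
import numpy as np

A = np.array([[1, 0], [0, 1], [0, 0]])   # 3 x 2
B = np.array([[1, 0, 0], [0, 1, 0]])     # 2 x 3

assert np.isclose(np.linalg.det(B @ A), 1.0)   # BA is the 2 x 2 identity
assert np.isclose(np.linalg.det(A @ B), 0.0)   # AB = diag(1, 1, 0) is singular
```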
<p>This extension of determinants has all 4 properties if $A$ is a square matrix (except that it loses the sign), and retains some attributes of determinants otherwise.</p> <p>If $A$ has more rows than columns, then <span class="math-container">$$|A|^2=|A^{T}A|$$</span> If $A$ has more columns than rows, then <span class="math-container">$$|A|^2=|AA^{T}|$$</span></p> <p>This has a valid and useful geometric interpretation. Given a transformation <span class="math-container">$A$</span>, it still measures how the transformation scales volumes, limited to the smaller of the dimensions of the input and output spaces.</p> <p>You may take this to be the absolute value of the determinant. It is always nonnegative because, when looking at a space embedded in a higher dimensional space, a positive area can become a negative area when looked at from behind.</p>
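Here is a small Python sketch of this extension (`pseudo_det` is a made-up helper name, not a standard function):

```python
import numpy as np

def pseudo_det(A):
    """Unsigned determinant extension: sqrt(det(A^T A)) for tall matrices,
    sqrt(det(A A^T)) for wide ones; equals |det(A)| for square ones."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    G = A.T @ A if m >= n else A @ A.T     # the smaller Gram matrix
    return np.sqrt(max(np.linalg.det(G), 0.0))

# agrees with |det| on square matrices
M = np.array([[2.0, 1.0], [0.0, -3.0]])
assert np.isclose(pseudo_det(M), abs(np.linalg.det(M)))   # both are 6

# a tall isometry scales areas by 1
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
assert np.isclose(pseudo_det(A), 1.0)
```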
linear-algebra
<blockquote> <p>Let $\textbf A$ denote the space of symmetric $(n\times n)$ matrices over the field $\mathbb K$, and $\textbf B$ the space of skew-symmetric $(n\times n)$ matrices over the field $\mathbb K$. Then $\dim (\textbf A)=n(n+1)/2$ and $\dim (\textbf B)=n(n-1)/2$.</p> </blockquote> <p>Short question: is there any short explanation (maybe with combinatorics) why this statement is true?</p> <p><strong>EDIT</strong>: $\dim$ refers to linear spaces.</p>
<p>All square matrices of a given size $n$ constitute a linear space of dimension $n^2$, because to every matrix element corresponds a member of the canonical basis, i.e. the set of matrices having a single $1$ and all other elements $0$.</p> <p>The skew-symmetric matrices have arbitrary elements on one side with respect to the diagonal, and those elements determine the other triangle of the matrix. So there are $(n^2-n)/2=n(n-1)/2$ of them (the $-n$ removes the diagonal, which must be zero).</p> <p>For the symmetric matrices the reasoning is the same, but we have to add back the elements on the diagonal: $(n^2-n)/2+n=(n^2+n)/2=n(n+1)/2$.</p>
<p>Here is my two cents:</p> <hr> <p>\begin{eqnarray} M_{n \times n}(\mathbb{R}) &amp; \text{has form} &amp; \begin{pmatrix} *&amp;*&amp;*&amp;*&amp;\cdots \\ *&amp;*&amp;*&amp;*&amp; \\ *&amp;*&amp;*&amp;*&amp; \\ *&amp;*&amp;*&amp;*&amp; \\ \vdots&amp;&amp;&amp;&amp;\ddots \end{pmatrix} \hspace{.5cm} \text{with $n^2$ elements}\\ \\ \\ Skew_{n \times n}(\mathbb{R}) &amp; \text{has form} &amp; \begin{pmatrix} 0&amp;*'&amp;*'&amp;*'&amp;\cdots \\ *&amp;0&amp;*'&amp;*'&amp; \\ *&amp;*&amp;0&amp;*'&amp; \\ *&amp;*&amp;*&amp;0&amp; \\ \vdots&amp;&amp;&amp;&amp;\ddots \end{pmatrix} \end{eqnarray} For this bottom formation, each $*'$ is just the negative of the corresponding $*$, so the array only needs $\frac{n^2 - n}{2}$ independent entries to describe it. This appears to be an array geometry question really... If this is right, then because $\dim(Skew_{n \times n}(\mathbb{R}) + Sym_{n \times n}(\mathbb{R})) = \dim(M_{n \times n}(\mathbb{R}))$ and $\dim(Skew_{n \times n}(\mathbb{R}))=\frac{n^2-n}{2}$, we have that \begin{eqnarray} \frac{n^2-n}{2}+\dim(Sym_{n \times n}(\mathbb{R}))=n^2 \end{eqnarray} or \begin{eqnarray} \dim(Sym_{n \times n}(\mathbb{R}))=\frac{n^2+n}{2}. \end{eqnarray}
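The counting arguments in both answers can be made concrete by listing the canonical bases (a small numpy sketch):

```python
import numpy as np

def sym_basis(n):
    """One basis matrix per pair (i, j) with i <= j."""
    basis = []
    for i in range(n):
        for j in range(i, n):
            E = np.zeros((n, n))
            E[i, j] = E[j, i] = 1.0
            basis.append(E)
    return basis

def skew_basis(n):
    """One basis matrix per pair (i, j) strictly above the diagonal."""
    basis = []
    for i in range(n):
        for j in range(i + 1, n):
            E = np.zeros((n, n))
            E[i, j], E[j, i] = 1.0, -1.0
            basis.append(E)
    return basis

n = 4
assert len(sym_basis(n)) == n * (n + 1) // 2       # 10 for n = 4
assert len(skew_basis(n)) == n * (n - 1) // 2      # 6 for n = 4
assert len(sym_basis(n)) + len(skew_basis(n)) == n * n
```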
logic
<p>I need to understand the difference between predicates and functions in the context of Clausal Form Logic in order to define the Herbrand universe.</p> <p>If I have p(x) :- q(f(x)) would I be right in saying that p and q are predicates while f is a function because it is "nested"? By this thinking then if I have p(x) :- q(x) both p and q are predicates and I have no functions?</p> <p>If this is incorrect then how can I tell the difference between a predicate and a function?</p>
<p>A predicate is a box that takes an argument and returns a Boolean value. For example, "$x \mapsto x \text{ is even}$".</p> <p>A function is a box that takes an argument and returns a value. For example, "$x \mapsto x^2$".</p> <p>Edit (following Amy's suggestions): There is some domain over which all variables range. A function takes zero or more arguments of that domain and returns another argument from that domain. A predicate takes zero or more arguments of that domain and returns a Boolean value.</p>
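In code the distinction is just the return type. A toy Python rendering of the clause p(x) :- q(f(x)) (the particular choices of f and q are arbitrary examples):

```python
def f(x):            # function symbol: domain element -> domain element
    return x * x

def q(x):            # predicate symbol: domain element -> Boolean
    return x % 2 == 0

def p(x):            # the clause p(x) :- q(f(x))
    return q(f(x))

assert p(2) is True    # f(2) = 4 is even
assert p(3) is False   # f(3) = 9 is odd
```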
<p>The terms "Function" and "Predicate" are determined solely by the formal system in which those words are being used/defined. In most formalizations of first order predicate logic the words "predicate" and "function" classify two different kinds of signs (the function signs and the predicate signs), each of which follows different rules of use; i.e., function signs are different from predicate signs because the rules for using function signs are different from the rules for using predicate signs.</p>
combinatorics
<p>Let us systematically generate all constructible points in the plane. We begin with just two points, which specify the unit distance. </p> <p><a href="https://i.sstatic.net/UfCcSm.jpg" rel="noreferrer"><img src="https://i.sstatic.net/UfCcSm.jpg" alt="enter image description here"></a></p> <p>With the straightedge, we may construct the line joining them. And with the compass, we may construct the two circles centered at each of them, having that unit segment as radius. These circles intersect each other and the line, creating four additional points of intersection. Thus, we have now six points in all.</p> <p><a href="https://i.sstatic.net/14byzm.jpg" rel="noreferrer"><img src="https://i.sstatic.net/14byzm.jpg" alt="enter image description here"></a></p> <p>Using these six points, we proceed to the next stage, constructing all possible lines and circles using those six points, and finding the resulting points of intersection. </p> <p><a href="https://i.sstatic.net/LiLBk.jpg" rel="noreferrer"><img src="https://i.sstatic.net/LiLBk.jpg" alt="enter image description here"></a></p> <p>I believe that we now have 203 points. Let us proceed in this way to systematically construct all constructible points in the plane, in a hierarchy of finite stages. At each stage, we form all possible lines and circles that may be formed from our current points using straightedge and compass, and then we find all points of intersection from the resulting figures. </p> <p>This produces what I call the <em>constructibility sequence</em>:</p> <p><span class="math-container">$$2\qquad\qquad 6\qquad\qquad 203\qquad\qquad ?$$</span></p> <p>Each entry is the number of points constructed at that stage. I have a number of questions about the constructibility sequence:</p> <p><strong>Question 1.</strong> What is the next constructibility number? 
</p> <p>There is no entry in the online encyclopedia of integer sequences beginning 2, 6, 203, and so I would like to create an entry for the constructibility sequence. But they request at least four numbers, and so we seem to need to know the next number. I'm not sure exactly how to proceed with this, since if one proceeds computationally, then one will inevitably have to decide if two very-close points count as identical or not, and I don't see any principled way to ensure that this is done correctly. So it seems that one will need to proceed with some kind of idealized geometric calculus, which gets the right answer about coincidence of intersection points. [<strong>Update:</strong> The sequence now exists as <a href="https://oeis.org/A333944" rel="noreferrer">A333944</a>.]</p> <p><strong>Question 2.</strong> What kind of asymptotic upper bounds can you prove on the growth of the constructibility sequence? </p> <p>At each stage, every pair of points determines a line and two circles. And every intersection point is realized as the intersection of two lines, two circles or a line and circle, which have at most two intersection points in each case. So a rough upper bound is that from <span class="math-container">$k$</span> points, we produce no more than <span class="math-container">$3k^2$</span> many lines and circles, and so at most <span class="math-container">$(3k^2)^2$</span> many pairs of lines and circles, and so at most <span class="math-container">$2(3k^2)^2$</span> many points of intersection. This leads to an upper bound of growth something like <span class="math-container">$18^n2^{4^n}$</span> after <span class="math-container">$n$</span> stages. Can anyone give a better bound? </p> <p><strong>Question 3.</strong> And what of lower bounds? </p> <p>I suspect that the sequence grows very quickly, probably doubly exponentially.
But to prove this, we would seem to need to identify a realm of construction patterns where there is little interference of intersection coincidence, so that one can be sure of a certain known growth in new points.</p>
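For what it's worth, the crude recurrence behind the upper bound in Question 2 is easy to iterate (a sketch only; this bounds the counts, it does not compute actual members of the sequence):

```python
def bound_sequence(stages, k=2):
    """Iterate the rough bound from Question 2: k points give at most
    3*k^2 lines and circles, hence at most 2*(3*k^2)^2 intersections."""
    bounds = [k]
    for _ in range(stages):
        k = 2 * (3 * k**2) ** 2
        bounds.append(k)
    return bounds

# doubly exponential growth; note the true second term is 6, well below 288
assert bound_sequence(2) == [2, 288, 123834728448]
```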
<p>I have written some Haskell <a href="https://codeberg.org/teo/constructibility" rel="nofollow noreferrer">code</a> to compute the next number in the constructibility sequence. It's confirmed everything we have already established and gave me the following extra results:</p> <p>There are <span class="math-container">$149714263$</span> line-line intersections at the 4th step (computed in ~14 hours). Pace Nielsen's approximation was only off by 8! This includes some points that are between a distance of <span class="math-container">$10^{-12}$</span> and <span class="math-container">$10^{-13}$</span> from each other.</p> <p>I have found the fourth number in the constructibility sequence: <span class="math-container">$$1723816861$$</span> I computed this by splitting the first quadrant into sections along the x-axis, computing values in these sections and combining them. The program was not parallel, but the work was split amongst 6 process on 3 machines. It took approximately 6 days to complete and each process never used more than 5GB of RAM.</p> <p>My data can be found <a href="https://docs.google.com/spreadsheets/d/1upFYzrD6A9ZSTuNPfZnNe-lMmsgAuG_FircY4fq15ms/edit?usp=sharing" rel="nofollow noreferrer">here</a>. 
I've produced these two graphs from my data, which give a sense of how the points are distributed: <a href="https://i.sstatic.net/MBNUA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBNUA.png" alt="enter image description here" /></a> If we focus on the area from <span class="math-container">$0$</span> to <span class="math-container">$1$</span> we get: <a href="https://i.sstatic.net/VuU6O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VuU6O.png" alt="enter image description here" /></a></p> <hr /> <h2>Implementation Details:</h2> <p>I represent constructible reals as pairs of a 14 decimal digit approximation (using <a href="http://hackage.haskell.org/package/ireal" rel="nofollow noreferrer">ireal</a>) and a symbolic representation (using <a href="http://hackage.haskell.org/package/constructible-0.1.0.1" rel="nofollow noreferrer">constructible</a>). This is done to speed up comparisons: the approximations give us quick but partial comparison functions, while the symbolic representation gives us slower but total comparison functions.</p> <p>Lines are represented by a pair <span class="math-container">$\langle m, c \rangle$</span> such that <span class="math-container">$y = mx + c$</span>. To deal with vertical lines, we create a data-type that's enhanced with an infinite value. Circles are triples <span class="math-container">$\langle a, b, r \rangle$</span> such that <span class="math-container">$(x-a)^2 + (y-b)^2 = r^2$</span>.</p> <p>I use a sweep-line algorithm to compute the number of intersections in a given rectangle. It extends the <a href="https://en.wikipedia.org/wiki/Bentley%E2%80%93Ottmann_algorithm" rel="nofollow noreferrer">Bentley–Ottmann algorithm</a> to allow the checking of intersections between circles as well as lines. The idea behind the algorithm is that we have a vertical line moving from the left to right face of the rectangle. We think of this line as a strictly ordered set of objects (lines and circles).
This requires some care to get right. Circles need to be split into their top and bottom semi-circles; we order objects not only by their y-coordinates but, when those are equal, also by their slopes; and we need to deal with circles that are tangential to each other at a point. The composition or order of this set can change in 3 ways as we move from left to right:</p> <ol> <li>Addition: We reach the leftmost point on an object and so we add it to our sorted set.</li> <li>Deletion: We reach the rightmost point on our object and so we remove it from our sorted set.</li> <li>Intersection: Several objects intersect. This is the only way the order of objects can change. We reorder them and note the intersection.</li> </ol> <p>We keep track of these events in a priority queue, and deal with them in the order they occur as we move from left to right.</p> <p>The big advantage of this algorithm over the naive approach is that it doesn't require us to keep track of the collision points. It also has the advantage that if we compute the number of collisions in two rectangles then it is very easy to combine them, since we just need to add the number of collisions in each, making sure we aren't double counting the borders. This makes it very easy to distribute computation. It also has very reasonable RAM demands. Its RAM use should be <span class="math-container">$O(n)$</span> where <span class="math-container">$n$</span> is the number of lines and circles, and the constant is quite small.</p>
<p>Mathematica has a command to solve systems of equations over the real numbers; or one can just solve them equationally. It also has a command to find the minimal polynomial of an algebraic number. Thus intersection points between lines and circles can be found using exact arithmetic (as numbered roots of minimal polynomials over <span class="math-container">$\mathbb{Q}$</span>), as can the slopes of lines and radii of circles. Using such methods, there are exactly 17,562 distinct lines and 32,719 distinct circles on the next stage.</p> <p>Finding the minimal polynomial of an algebraic number this way is somewhat slow (there may be ways to speed that up), but these lines and circles can also be found in just a few minutes if we instead use (10 digit) floating point approximations.</p> <p>I've now optimized the code a bit, and using those floating point approximations, in a little under 21 hours I compute that there are at least <span class="math-container">$$149,714,255$$</span> distinct intersections between those 17,562 lines. This could be undercounting, because the floating point arithmetic might make us think that two distinct intersection points are the same. However, the computations shouldn't take much longer using 20 digit floating points (but they would take a lot more RAM). I expect that the numbers won't change much, if at all. But I did see changes going from 5 digit to 10 digit approximations, so trying the 20 digit computation would be useful.</p> <p>Storing those 10 digits, for a little more than hundred million intersection points, was taking most of my RAM. It appears that if I try to do the same computation with the circle intersections, it will exceed my RAM limits. However, it is certainly doable, and I'm happy to give my code to anyone who has access to a computer with a lot of RAM (just email me; my computer has 24 GB, so you'd want quite a bit more than that). 
The code may still have some areas where it can be sped up--but taking less than 1 day to find all intersection points between lines is already quite nice.</p> <p>Another option would be to store these points on the hard disk---but there are computers out there with enough RAM to make that an unnecessary change.</p> <hr> <p>Edited to add: I found a computer that is slightly slower than my own but had a lot of RAM. It took about 6 weeks, and about 360 GB of RAM, but the computation finished. It is still only an approximation (not exact arithmetic, only 10 digit precision past the decimal place). The number of crossings I get is <span class="math-container">$$ 1,723,814,005 $$</span> If you have a real need to do exact arithmetic, I could probably do that, but it would take a bit longer. Otherwise I'll consider this good enough.</p>
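For anyone wanting to reproduce the exact-arithmetic approach without Mathematica, sympy offers analogous tools. A sketch (`solve` and `minimal_polynomial` are real sympy functions, but this is not the code used in the answer above):

```python
from sympy import Rational, minimal_polynomial, solve, sqrt, symbols

x, y, t = symbols('x y t', real=True)

# intersect the unit circles centred at (0,0) and (1,0) exactly
sols = solve([x**2 + y**2 - 1, (x - 1)**2 + y**2 - 1], [x, y])
assert (Rational(1, 2), sqrt(3) / 2) in sols

# a coordinate is identified exactly by its minimal polynomial over Q
assert minimal_polynomial(sqrt(3) / 2, t) == 4 * t**2 - 3
```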
linear-algebra
<p>I am a bit confused. What is the difference between a linear and affine function? Any suggestions will be appreciated.</p>
<p>A linear function fixes the origin, whereas an affine function need not do so. An affine function is the composition of a linear function with a translation, so while the linear part fixes the origin, the translation can map it somewhere else.</p> <p>Linear functions between vector spaces preserve the vector space structure (so in particular they must fix the origin). While affine functions don't preserve the origin, they do preserve some of the other geometry of the space, such as the collection of straight lines.</p> <p>If you choose bases for vector spaces $V$ and $W$ of dimensions $m$ and $n$ respectively, and consider functions $f\colon V\to W$, then $f$ is linear if $f(v)=Av$ for some $n\times m$ matrix $A$ and $f$ is affine if $f(v)=Av+b$ for some matrix $A$ and vector $b$, where coordinate representations are used with respect to the bases chosen.</p>
<p>An affine function is the composition of a linear function with a translation: $x \mapsto ax$ is linear, and composing it with the translation $x \mapsto x+b$ gives the affine function $x \mapsto ax+b$. See <em>Modern Basic Pure Mathematics</em>, C. Sidney.</p>
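A numerical illustration of the difference (numpy sketch; the particular $A$ and $b$ are arbitrary choices):

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])
b = np.array([1.0, -1.0])

linear = lambda v: A @ v          # fixes the origin
affine = lambda v: A @ v + b      # linear part followed by a translation

assert np.allclose(linear(np.zeros(2)), 0)
assert not np.allclose(affine(np.zeros(2)), 0)

# additivity holds for the linear map but fails for the affine one
u, v = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
assert np.allclose(linear(u + v), linear(u) + linear(v))
assert not np.allclose(affine(u + v), affine(u) + affine(v))
```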
logic
<p>In fact I don't understand the meaning of the word "metamathematics". I just want to know, for example, why can we use mathematical induction in the proof of logical theorems, like The Deduction Theorem, or even some more fundamental proposition like "every formula has equal numbers of left and right brackets"?</p> <p>What exactly can we use when talking about metamathematics? If induction is OK, then how about axiom of choice/determincacy? Can I use axiom of choice on collection of sets of formulas?(Of course it may be meaningless. By the way I don't understand why we can talk about a "set" of formulas either)</p> <p>I have asked one of my classmates about these, and he told me he had stopped thinking about this kind of stuff. I feel like giving up too......</p>
<p>This is not an uncommon confusion for students who are introduced to formal logic for the first time. It shows that you have slightly wrong expectations about what metamathematics is for and what you'll get out of it.</p> <p>You're probably expecting that it <em>ought to</em> go more or less like in first-year real analysis, which started with the lecturer saying something like</p> <blockquote> <p>In high school, your teacher demanded that you take a lot of facts about the real numbers on faith. Here is where we stop taking those facts on faith and instead prove from first principles that they're true.</p> </blockquote> <p>This led to a lot of talk about axioms and painstaking quasi-formal proofs of things you already knew, and at the end of the month you were able to reduce everything to a small set of axioms including something like the supremum principle. Then, if you were lucky, Dedekind cuts or Cauchy sequences were invoked to convince you that if you believe in the counting numbers and a bit of set theory, you should also believe that there is <em>something</em> out there that satisfies the axioms of the real line.</p> <p>This makes it natural to expect that formal logic will work in the same way:</p> <blockquote> <p>As undergraduates, your teachers demanded that you take a lot of proof techniques (such as induction) on faith. Here is where we stop taking them on faith and instead prove from first principles that they're valid.</p> </blockquote> <p>But <em><strong>that is not how it goes</strong></em>. 
You're <em>still</em> expected to believe in ordinary mathematical reasoning for whichever reason you already did -- whether that's because they make intuitive sense to you, or because you find that the conclusions they lead to usually work in practice when you have a chance to verify them, or simply because authority says so.</p> <p>Instead, metamathematics is a quest to be precise about <em>what it is</em> you already believe in, such that we can use <em>ordinary mathematical reasoning</em> <strong>about</strong> those principles to get to know interesting things about the limits of what one can hope to prove and how different choices of what to take on faith lead to different things you can prove.</p> <p>Or, in other words, the task is to use ordinary mathematical reasoning to build a <strong>mathematical model</strong> of ordinary mathematical reasoning itself, which we can use to study it.</p> <p>Since metamathematicians are interested in knowing <em>how much</em> taken-on-faith foundation is necessary for this-or-that ordinary-mathematical argument to be made, they also tend to apply this interest to <em>their own</em> reasoning about the mathematical model. This means they are more likely to try to avoid high-powered reasoning techniques (such as general set theory) when they can -- not because such methods are <em>forbidden</em>, but because it is an interesting fact that they <em>can</em> be avoided for such-and-such purpose.</p> <p>Ultimately though, it is recognized that there are <em>some</em> principles that are so fundamental that we can't really do anything without them. Induction of the natural numbers is one of these. That's not a <em>problem</em>: it is just an interesting (empirical) fact, and after we note down that fact, we go on to use it when building our model of ordinary-mathematical-reasoning.</p> <p>After all, ordinary mathematical reasoning <em>already exists</em> -- and did so for thousands of years before formal logic was invented. 
We're not trying to <em>build</em> it here (the model is not the thing itself), just to better understand the thing we already have.</p> <hr /> <p>To answer your concrete question: Yes, you can (&quot;are allowed to&quot;) use the axiom of choice if you need to. It is good form to keep track of the fact that you <em>have</em> used it, such that you have an answer if you're later asked, &quot;the metamathematical argument you have just carried out, can that itself be formalized in such-and-such system?&quot; Formalizing metamathematical arguments within your model has proved to be a very powerful (though also confusing) way of establishing certain kinds of results.</p> <p>You can use the axiom of determinacy too, if that floats your boat -- so long as you're aware that doing so is not really &quot;ordinary mathematical reasoning&quot;, so it becomes doubly important to disclose faithfully that you've done so when you present your result (lest someone tries to combine it with something <em>they</em> found using AC instead, and get nonsense out of the combination).</p>
<p>This is not at all intended to be an answer to your question. (I like Henning Makholm's answer above.) But I thought you might be interested to hear <a href="https://en.wikipedia.org/wiki/Thoralf_Skolem" rel="noreferrer">Thoralf Skolem's</a> remarks on this issue, because they are quite pertinent—in particular one of his points goes exactly to your question about proving that every formula has equal numbers of left and right brackets—but they are much too long to put in a comment.</p> <blockquote> <p>Set-theoreticians are usually of the opinion that the notion of integer should be defined and that the principle of mathematical induction should be proved. But it is clear that we cannot define or prove ad infinitum; sooner or later we come to something that is not definable or provable. Our only concern, then, should be that the initial foundations be something immediately clear, natural, and not open to question. This condition is satisfied by the notion of integer and by inductive inferences, but it is decidedly not satisfied by <a href="https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory" rel="noreferrer">set-theoretic axioms of the type of Zermelo's</a> or anything else of that kind; if we were to accept the reduction of the former notions to the latter, the set-theoretic notions would have to be simpler than mathematical induction, and reasoning with them less open to question, but this runs entirely counter to the actual state of affairs.</p> <p>In a paper (1922) <a href="https://en.wikipedia.org/wiki/David_Hilbert" rel="noreferrer">Hilbert</a> makes the following remark about <a href="https://en.wikipedia.org/wiki/Henri_Poincar%C3%A9" rel="noreferrer">Poincaré's</a> assertion that the principle of mathematical induction is not provable: “His objection that this principle could not be proved in any way other than by mathematical induction itself is unjustified and is refuted by my theory.” But then the big question is whether we can prove this 
principle by means of simpler principles and <em>without using any property of finite expressions or formulas that in turn rests upon mathematical induction or is equivalent to it</em>. It seems to me that this latter point was not sufficiently taken into consideration by Hilbert. For example, there is in his paper (bottom of page 170), for a lemma, a proof in which he makes use of the fact that in any arithmetic proof in which a certain sign occurs that sign must necessarily occur for a first time. Evident though this property may be on the basis of our perceptual intuition of finite expressions, a formal proof of it can surely be given only by means of mathematical induction. In set theory, at any rate, we go to the trouble of proving that every ordered finite set is well-ordered, that is, that every subset has a first element. Now why should we carefully prove this last proposition, but not the one above, which asserts that the corresponding property holds of finite arithmetic expressions occurring in proofs? Or is the use of this property not equivalent to an inductive inference?</p> <p>I do not go into Hilbert's paper in more detail, especially since I have seen only his first communication. I just want to add the following remark: It is odd to see that, since the attempt to find a foundation for arithmetic in set theory has not been very successful because of logical difficulties inherent in the latter, attempts, and indeed very contrived ones, are now being made to find a different foundation for it—as if arithmetic had not already an adequate foundation in inductive inferences and recursive definitions.</p> </blockquote> <p>(Source: Thoralf Skolem, “Some remarks on axiomatized set theory”, address to the Fifth Congress of Scandinavian Mathematicians, August 1922. English translation in <em>From Frege to Gödel</em>, p299–300. 
Jean van Heijenoort (ed.), Harvard University Press, 1967.)</p> <p>I think it is interesting to read this in the light of Henning Makholm's excellent answer, which I think is in harmony with Skolem's concerns. I don't know if Hilbert replied to Skolem.</p>
game-theory
<p>I am reading a book on Combinatorial Game Theory that describes a proof by John Nash that Hex is a 'first player' win, but I find the proof very confusing. This proof uses a strategy-stealing argument.</p> <p>At one point it says:</p> <blockquote> <p>With this (first) move Left becomes Second.</p> </blockquote> <p>How can making the first move make you Second?</p> <p>At the end it says:</p> <blockquote> <p>At some point, if this strategy calls for Left to place a stone where the extra sits; then she will simply make another arbitrary placement. Thus Left can win in contradiction to the hypotheses.</p> </blockquote> <p>I don't understand this at all. Why would the game call for a placement on "extra" if that spot is already occupied by a stone? Can someone explain this proof to me in a way that I can understand?</p> <p>Here's the proof:</p> <blockquote> <p>The proof is by contradiction. Let Left make the first play of the game, and assume that Right has a winning strategy. With Left's first move she puts a stone on any cell, a placement called "extra". With this move Left becomes Second, and she henceforth follows the winning strategy that is available to Right. At some point, if this strategy calls for Left to place a stone where the extra sits; then she will simply make another arbitrary placement. Thus Left can win in contradiction to the hypotheses.</p> </blockquote>
<p>The point is that in the game Hex, it never hurts to have an extra piece on the board.</p> <p>So, suppose there is a strategy for the second player, but you are stuck with being the first player. What should you do?</p> <p>Well, you can place a stone on the board, and then pretend in your own mind that it isn't there! In other words, you are imagining that the other player will now make the first move. In your mind, you are imagining that you are the second player now, and you can follow the winning strategy for the second player.</p> <p>The only time you could have trouble is if your winning strategy tells you to place a move at the position you are pretending is empty. Since it's not really empty, you can't really make a move there. But luckily, you have already moved there, so you can imagine that you are making a move there right now -- the stone is already there, so you can imagine that you are just putting it there now. But in reality you still need to make a move, so you can do the same thing you did at the beginning -- just place a stone at some random position, and then imagine that you didn't.</p> <p>The real state of the game is always just like your imagined state, except that there is an extra stone of yours on the board, which can't make things any worse. It limits the opponent's options, but if you have a winning strategy, it will work for any moves the opponent makes, so this isn't a problem either.</p> <p>The conclusion is that, if there were a strategy for the second player to win, then you could "steal" that strategy as outlined above to win even when you are the first player. This is a contradiction, because if there were really a winning strategy for the second player, then the first player would not be able to guarantee a win. Therefore, there is not in fact any strategy for the second player to win.</p>
<p>The point is just that having an "extra" piece on the board can't help your opponent.</p> <p>Assume that Right (the second player) has a winning strategy. A strategy is a function $\Phi(A, B)$ that chooses what move to make when Left has pieces at the positions in $A$ and Right has pieces at the positions in $B$. Then Left can steal this strategy as follows. On the first move, Left puts a piece at any position and calls that position $x$ ("extra"). On every subsequent move, when Left has pieces at $A$ and Right has pieces at $B$, Left considers the position $y=\Phi(B, A \setminus \{x\})$ (that is, the move Right would have made in his position if he ignored the extra piece). If $y\neq x$, then Left puts a piece at $y$ (and keeps calling the same piece "extra"). If $y=x$, then Left puts a piece at any empty space $x'$ and sets $x=x'$ (i.e., he changes which piece he is calling "extra"). This is a winning strategy for Left. But since Left and Right cannot both have winning strategies, we have a contradiction, and conclude that our assumption must be false: Right cannot have a winning strategy. Therefore Left must have one.</p>
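<p>The bookkeeping in this argument can be sketched in code. Below, <code>phi</code> is a stand-in for Right's hypothetical winning strategy; any function of the two position sets will do to illustrate the mechanics, so this is a toy sketch, not real Hex:</p>

```python
def make_stolen_strategy(phi, cells):
    """Wrap a hypothetical second-player strategy phi(A, B) into a
    first-player strategy, following the argument above.

    phi(A, B) returns the move of the player whose pieces are at B when
    the opponent's pieces are at A; `cells` lists the board positions.
    """
    state = {"x": None}  # position of the piece currently called "extra"

    def left_move(A, B):
        empty = [c for c in cells if c not in A and c not in B]
        if state["x"] is None:            # opening move: play anywhere
            state["x"] = empty[0]
            return state["x"]
        y = phi(B, A - {state["x"]})      # Right's move, ignoring the extra
        if y != state["x"]:
            return y
        state["x"] = empty[0]             # strategy wants the extra's cell:
        return state["x"]                 # play anywhere, re-label "extra"

    return left_move

# Toy run on a 9-cell "board" with a dummy phi (lowest-numbered empty
# cell) standing in for the supposed winning strategy.
cells = list(range(9))
phi = lambda A, B: min(c for c in cells if c not in A and c not in B)
left = make_stolen_strategy(phi, cells)
A, B = set(), set()
for _ in range(4):
    m = left(A, B)
    assert m not in A and m not in B      # Left's move is always legal
    A.add(m)
    B.add(phi(A, B))                      # Right replies with his strategy
```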
logic
<p>Stephen Hawking believes that Gödel's Incompleteness Theorem makes the search for a 'Theory of Everything' impossible. He reasons that because there exist mathematical results that cannot be proven, there exist physical results that cannot be proven as well. Exactly how valid is his reasoning? How can he apply a mathematical theorem to an empirical science?</p>
<p><a href="http://www.damtp.cam.ac.uk/events/strings02/dirac/hawking/">Hawking's argument</a> relies on several assumptions about a "Theory of Everything". For example, Hawking states that a Theory of Everything would have to not only predict what we think of as "physical" results, it would also have to predict mathematical results. Going further, he states that a Theory of Everything would be a finite set of rules which can be used effectively in order to provide the answer to any physical question including many purely mathematical questions such as the Goldbach conjecture. If we accept that characterization of a Theory of Everything, then we don't need to worry about the incompleteness theorem, because Church's and Turing's solutions to the <a href="http://en.wikipedia.org/wiki/Entscheidungsproblem">Entscheidungsproblem</a> also show that there is no such effective system. </p> <p>But it is far from clear to me that a Theory of Everything would be able to provide answers to arbitrary mathematical questions. And it is not clear to me that a Theory of Everything would be effective. However, if we make the definition of what we mean by "Theory of Everything" strong enough then we will indeed set the goal so high that it is unattainable. </p> <p>To his credit, Hawking does not talk about results being "unprovable" in some abstract sense. He assumes that a Theory of Everything would be a particular finite set of rules, and he presents an argument that no such set of rules would be sufficient. </p>
<p>@Sperners Lemma's comment should really be promoted to an answer. For indeed, it is a fairly gross misunderstanding of what Gödel's theorem says to summarize it as asserting that "there exist mathematical results that cannot be proven", for the reason he briefly indicates.</p> <p>And incidentally, though it is a quite different issue, a Theory of Everything in the standard sense surely doesn't have to entail that every physical truth could be "proven". Let's bow to the wisdom of Wikipedia which asserts </p> <blockquote> <p>A theory of everything (ToE) or final theory is a putative theory of theoretical physics that fully explains and links together all known physical phenomena, and predicts the outcome of any experiment that could be carried out in principle.</p> </blockquote> <p>So NB a ToE is a body of <em>laws</em> which (<em>if</em> we assume that they are deterministic) will imply lots of <em>conditionals</em> of the form "if this happens, then that happens". But a ToE which wraps up all the <em>laws</em> into one neat package needn't tell us the contingent <em>initial conditions</em>, so (even if it is deterministic) need not tell us all the physical facts. </p>
matrices
<p>I wrote an answer to <a href="https://math.stackexchange.com/questions/854154/when-is-r-a-1-rt-invertible/854160#854160">this</a> question based on determinants, but subsequently deleted it because the OP is interested in non-square matrices, which effectively blocks the use of determinants and thereby undermines the entire answer. However, it can be salvaged if there exists a function $\det$ defined on <strong>all</strong> real-valued matrices (not just the square ones) having the following properties.</p> <ol> <li>$\det$ is real-valued</li> <li>$\det$ has its usual value for square matrices</li> <li>$\det(AB)$ always equals $\det(A)\det(B)$ whenever the product $AB$ is defined.</li> <li>$\det(A) \neq 0$ iff $\det(A^\top) \neq 0$</li> </ol> <p>Does such a function exist?</p>
<p>Such a function cannot exist. Let $A = \begin{pmatrix} 1 &amp; 0 \\ 0 &amp; 1 \\ 0 &amp; 0\end{pmatrix}$ and $B = \begin{pmatrix} 1 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 0 \end{pmatrix}$. Then, since both $AB$ and $BA$ are square, if there existed a function $D$ with the properties 1-3 stated, there would hold \begin{align} \begin{split} 1 &amp;= \det \begin{pmatrix} 1 &amp; 0 \\ 0 &amp; 1 \end{pmatrix} = \det(BA) = D(BA) = D(B)D(A) \\ &amp;= D(A)D(B) = D(AB) = \det(AB) = \det \begin{pmatrix} 1 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 0 \end{pmatrix} = 0. \end{split} \end{align}</p>
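<p>The counterexample is easy to check numerically, e.g. with NumPy:</p>

```python
import numpy as np

A = np.array([[1, 0],
              [0, 1],
              [0, 0]])
B = np.array([[1, 0, 0],
              [0, 1, 0]])

# BA is the 2x2 identity, but AB is a singular 3x3 matrix, so no
# multiplicative "det" can assign consistent values to A and B.
assert np.isclose(np.linalg.det(B @ A), 1.0)
assert np.isclose(np.linalg.det(A @ B), 0.0)
```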
<p>This extension of determinants has all 4 properties if A is a square matrix (except it loses the sign), and retains some attributes of determinants otherwise.</p> <p>If A has more rows than columns, then <span class="math-container">$$|A|^2=|A^{T}A|$$</span> If A has more columns than rows, then <span class="math-container">$$|A|^2=|AA^{T}|$$</span></p> <p>This has a valid and useful geometric interpretation. If you have a transformation <span class="math-container">$A$</span>, this quantity still tells you how the transformation scales volumes, limited to the smaller of the dimensions of the input and output space.</p> <p>You may take this to be the absolute value of the determinant. It is always nonnegative because, when looking at a space embedded in a higher-dimensional space, a positive area can become a negative area when looked at from behind.</p>
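<p>A sketch of this extension in Python; the helper name <code>gdet</code> is made up for the example:</p>

```python
import numpy as np

def gdet(A):
    """Unsigned extension of the determinant described above:
    sqrt(det(A^T A)) for tall matrices, sqrt(det(A A^T)) for wide ones."""
    m, n = A.shape
    G = A.T @ A if m >= n else A @ A.T
    return np.sqrt(np.linalg.det(G))

# On square matrices it recovers |det A| (the sign is lost):
S = np.array([[2.0, 1.0],
              [0.0, -3.0]])
assert np.isclose(gdet(S), abs(np.linalg.det(S)))   # both equal 6

# A tall matrix embedding R^2 into R^3 preserves area, so gdet = 1:
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
assert np.isclose(gdet(A), 1.0)
```

Note that property 3 from the question still fails (as the accepted answer shows with this very $A$); this function only keeps the multiplicative property up to sign for square factors.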
linear-algebra
<p>What is the difference, if any, between <em>kernel</em> and <em>null space</em>?</p> <p>I previously understood the <em>kernel</em> to be of a linear map and the <em>null space</em> to be of a matrix: i.e., for any linear map $f : V \to W$,</p> <p>$$ \ker(f) \cong \operatorname{null}(A), $$</p> <p>where</p> <ul> <li>$\cong$ represents isomorphism with respect to $+$ and $\cdot$, and</li> <li>$A$ is the matrix of $f$ with respect to some source and target bases.</li> </ul> <p>However, I took a class with a professor last year who used $\ker$ on matrices. Was that just an abuse of notation or have I had things mixed up all along?</p>
<p>The terms "kernel" and "nullspace" refer to the same concept in the context of vector spaces and linear transformations. In the literature it is more common to use the word "nullspace" when referring to a matrix and the word "kernel" when referring to an abstract linear transformation; however, either word is valid. Note that a matrix <em>is</em> a linear transformation from one coordinate vector space to another. Additionally, the word "kernel" is used extensively to denote the analogous concept for morphisms of various other algebraic structures, e.g. groups, rings, and modules; in fact, there is a definition of the kernel in the very abstract context of abelian categories.</p>
<p>"Was that just an abuse of notation or have I had things mixed up all along?" Neither. Different courses/books will maintain/not maintain such a distinction. If a matrix represents some underlying linear transformation of a vector space, then the kernel of the matrix might mean the set of vectors sent to 0 by that transformation, or the set of lists of numbers (interpreted as vectors in $\mathbb{R}^n$ representing those vectors in a given basis, etc.).</p> <p>The context should make things clear, and every claim about, say, dimensions of kernels/nullspaces should still hold despite the ambiguity.</p> <p>As manos said, "kernel" is used more generally, whereas "nullspace" is used essentially only in Linear Algebra.</p>
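<p>Whichever word one uses, the subspace itself is computed the same way. Here is a minimal sketch using NumPy's SVD; the helper <code>null_space</code> is written for this example (SciPy ships an equivalent <code>scipy.linalg.null_space</code>):</p>

```python
import numpy as np

def null_space(A, tol=1e-12):
    """Orthonormal basis for the null space (kernel) of A, via the SVD."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T      # right-singular vectors past the rank span ker A

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])    # rank 1, so the kernel is 2-dimensional
N = null_space(A)
assert N.shape == (3, 2)           # rank-nullity: 3 = 1 + 2
assert np.allclose(A @ N, 0)       # every basis vector is sent to 0
```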
game-theory
<p>I'm looking for a good book on Game Theory. I run a software company and from the little I've heard about Game Theory, it seems interesting and potentially useful.</p> <p>I've looked on Amazon.com but wanted to ask the Mathematicians here.</p> <p>One that looks good (despite the title:) is <a href="http://rads.stackoverflow.com/amzn/click/161564055X">The Complete Idiots Guide to Game Theory</a>. But please post that as an answer if you think it's good.</p> <p>Please post one book per answer so folks can vote on each book.</p>
<p>Not a book, but very nice online lectures by B. Polak: <a href="http://academicearth.org/courses/game-theory">http://academicearth.org/courses/game-theory</a>. I watched the first part (12 lectures). I had no prior knowledge of this area, but it is very easy to follow, and the mathematics is kept at a very low level.</p>
<p>Before purchasing a text, I'd recommend getting a brief overview of the field; see, for example, <a href="http://en.wikipedia.org/wiki/Game_theory" rel="nofollow"><strong>Wikipedia's entry on Game Theory</strong></a>. You'll then be in a better position, once you've identified which aspects of the field interest you most, to select an appropriate book or website, and you'll also be able to find references that specifically target that area of interest. </p> <p>Also, if you scroll down to the bottom of the link above, you'll find a whole assortment of texts &amp; references, historically important works in the field, articles, and websites, many with annotations regarding the level of difficulty and targeted audience.</p>
differentiation
<blockquote> <p><strong>Theorem:</strong> Suppose that $f : A \to \mathbb{R}$ where $A \subseteq \mathbb{R}$. If $f$ is differentiable at $x \in A$, then $f$ is continuous at $x$.</p> </blockquote> <p>This theorem is equivalent (by the contrapositive) to the result that if $f$ is not continuous at $x \in A$ then $f$ is not differentiable at $x$. </p> <p>Why then do authors in almost every analysis book, not take continuity of $f$ as a requirement in the definition of the derivative of $f$ when we (seemingly) end up with equivalent results?</p> <p>For example I don't see why this wouldn't be a good definition of the derivative of a function</p> <blockquote> <p><strong>Definition:</strong> Let $A \subseteq \mathbb{R}$ and let $f : A \to \mathbb{R}$ be a function <em>continuous</em> at $a$. Let $a \in \operatorname{Int}(A)$. We define the derivative of $f$ at $a$ to be $$f'(a) = \lim_{t \to 0}\frac{f(a+t)-f(a)}{t}$$ provided the limit exists.</p> </blockquote> <p>I know this is probably a pedagogical issue, but why not take this instead as the definition of the derivative of a function? </p>
<p>Because that suggests that there might be functions which are discontinuous at <span class="math-container">$a$</span> for which it is still true that the limit<span class="math-container">$$\lim_{t\to0}\frac{f(a+t)-f(a)}t$$</span>exists. Besides, why add a condition that always holds?</p>
<p>Definitions tend to be minimalistic, in the sense that they don't include unnecessary/redundant information that can be derived as a consequence.</p> <p>Same reason why, for example, an equilateral triangle is defined as having all sides equal, rather than having all sides <em>and</em> all angles equal.</p>
matrices
<p>A Magic Square of order <span class="math-container">$n$</span> is an arrangement of <span class="math-container">$n^2$</span> numbers, usually distinct integers, in a square, such that the <span class="math-container">$n$</span> numbers in all rows, all columns, and both diagonals sum to the same constant.</p> <p><img src="https://i.sstatic.net/iseMy.png" alt="a 3x3 magic square" /></p> <p>How to prove that a normal <span class="math-container">$3\times 3$</span> magic square where the integers from 1 to 9 are arranged in some way, must have <span class="math-container">$5$</span> in its middle cell?</p> <p>I have tried taking <span class="math-container">$a,b,c,d,e,f,g,h,i$</span> and solving equations to calculate <span class="math-container">$e$</span> but there are so many equations that I could not manage to solve them.</p>
<p>The row, column, diagonal sum must be $15$, e.g. because three disjoint rows must add up to $1+\ldots +9=45$. The sum of all four lines through the middle is therefore $60$ and is also $1+\ldots +9=45$ plus three times the middle number.</p>
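<p>For a board this small, the claim can also be confirmed by brute force over all $9!$ arrangements; a quick Python check:</p>

```python
from itertools import permutations

# Grid positions 0..8, row by row; position 4 is the middle cell.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
         (0, 4, 8), (2, 4, 6)]               # diagonals

# The line sum must be 15, since three disjoint rows add up to 45.
squares = [p for p in permutations(range(1, 10))
           if all(sum(p[i] for i in line) == 15 for line in LINES)]

assert len(squares) == 8                # one square up to rotation/reflection
assert all(p[4] == 5 for p in squares)  # the middle entry is always 5
```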
<p>A <em>normal</em> <strong>magic square</strong> of order $3$ can be represented by a $3 \times 3$ matrix</p> <p>$$\begin{bmatrix} x_{11} &amp; x_{12} &amp; x_{13}\\ x_{21} &amp; x_{22} &amp; x_{23}\\ x_{31} &amp; x_{32} &amp; x_{33}\end{bmatrix}$$</p> <p>where $x_{ij} \in \{1, 2,\dots, 9\}$. Since $1 + 2 + \dots + 9 = 45$, the sum of the elements in each column, row and diagonal must be $15$, as mentioned in the other answers. Vectorizing the matrix, these $8$ equality constraints can be written in matrix form as follows</p> <p>$$\begin{bmatrix} 1 &amp; 1 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 1 &amp; 1 &amp; 1 &amp; 0 &amp; 0 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1&amp; 1 &amp; 1\\ 1 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0\\ 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1 &amp; 0\\ 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1\\ 1 &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 1\\ 0 &amp; 0 &amp; 1 &amp; 0 &amp; 1 &amp; 0 &amp; 1 &amp; 0 &amp; 0\end{bmatrix} \begin{bmatrix} x_{11}\\ x_{21}\\ x_{31}\\ x_{12}\\ x_{22}\\ x_{32}\\ x_{13}\\ x_{23}\\ x_{33}\end{bmatrix} = \begin{bmatrix} 15\\ 15\\ 15\\ 15\\ 15\\ 15\\ 15\\ 15\end{bmatrix}$$</p> <p>Note that each row of the matrix above has $3$ ones and $6$ zeros. We can guess a <strong>particular</strong> solution by visual inspection of the matrix. This particular solution is $(5,5,\dots,5)$, the $9$-dimensional vector whose nine entries are all equal to $5$. 
To find a <strong>homogeneous</strong> solution, we compute the RREF</p> <p>$$\begin{bmatrix} 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1\\ 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0\\ 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; -1 &amp; -1\\ 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; -1 &amp; -2\\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 1 &amp; 2\\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 1 &amp; 1\\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0\end{bmatrix}$$ </p> <p>Thus, the <strong>general</strong> solution is of the form</p> <p>$$\begin{bmatrix} 5\\ 5\\ 5\\ 5\\ 5\\ 5\\ 5\\ 5\\ 5\end{bmatrix} + \begin{bmatrix} 0 &amp; -1\\ -1 &amp; 0\\ 1 &amp; 1\\ 1 &amp; 2\\ 0 &amp; 0\\ -1 &amp; -2\\ -1 &amp; -1\\ 1 &amp; 0\\ 0 &amp; 1\end{bmatrix} \eta$$</p> <p>where $\eta \in \mathbb{R}^2$. Un-vectorizing, we obtain the solution set</p> <p>$$\left\{ \begin{bmatrix} 5 &amp; 5 &amp; 5\\ 5 &amp; 5 &amp; 5\\ 5 &amp; 5 &amp; 5\end{bmatrix} + \eta_1 \begin{bmatrix} 0 &amp; 1 &amp; -1\\ -1 &amp; 0 &amp; 1\\ 1 &amp; -1 &amp; 0\end{bmatrix} + \eta_2 \begin{bmatrix} -1 &amp; 2 &amp; -1\\ 0 &amp; 0 &amp; 0\\ 1 &amp; -2 &amp; 1\end{bmatrix} : (\eta_1, \eta_2) \in \mathbb{R}^2 \right\}$$</p> <p>which is a $2$-dimensional <strong>affine matrix space</strong> in $\mathbb{R}^{3 \times 3}$. 
For example, the magic square illustrated in the question can be decomposed as follows</p> <p>$$\begin{bmatrix} 2 &amp; 9 &amp; 4\\ 7 &amp; 5 &amp; 3\\ 6 &amp; 1 &amp; 8\end{bmatrix} = \begin{bmatrix} 5 &amp; 5 &amp; 5\\ 5 &amp; 5 &amp; 5\\ 5 &amp; 5 &amp; 5\end{bmatrix} - 2 \begin{bmatrix} 0 &amp; 1 &amp; -1\\ -1 &amp; 0 &amp; 1\\ 1 &amp; -1 &amp; 0\end{bmatrix} + 3 \begin{bmatrix} -1 &amp; 2 &amp; -1\\ 0 &amp; 0 &amp; 0\\ 1 &amp; -2 &amp; 1\end{bmatrix}$$</p> <p>Note that the sum of the elements in each column, row and diagonal of each of the two basis matrices is equal to $0$. Note also that the $(2,2)$-th entry of each of the two basis matrices is $0$. Hence, no matter what parameters $\eta_1, \eta_2$ we choose, the $(2,2)$-th entry of the normal magic square of order $3$ will be equal to $5$.</p>
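<p>The decomposition, and the two closing observations, can be verified directly with NumPy:</p>

```python
import numpy as np

J = 5 * np.ones((3, 3), dtype=int)
M1 = np.array([[ 0,  1, -1],
               [-1,  0,  1],
               [ 1, -1,  0]])
M2 = np.array([[-1,  2, -1],
               [ 0,  0,  0],
               [ 1, -2,  1]])

# The decomposition of the illustrated square, with (eta1, eta2) = (-2, 3):
S = J - 2 * M1 + 3 * M2
assert np.array_equal(S, [[2, 9, 4], [7, 5, 3], [6, 1, 8]])

# All eight line sums of each basis matrix vanish, and both middle
# entries are 0, so the centre entry equals 5 for every choice of eta:
for M in (M1, M2):
    assert np.all(M.sum(axis=0) == 0) and np.all(M.sum(axis=1) == 0)
    assert np.trace(M) == 0 and np.trace(np.fliplr(M)) == 0
    assert M[1, 1] == 0
```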
matrices
<p>Given a matrix <span class="math-container">$A$</span> and column vector <span class="math-container">$x$</span>, what is the derivative of <span class="math-container">$Ax$</span> with respect to <span class="math-container">$x^T$</span> i.e. <span class="math-container">$\frac{d(Ax)}{d(x^T)}$</span>, where <span class="math-container">$x^T$</span> is the transpose of <span class="math-container">$x$</span>?</p> <p>Side note - my goal is to get the known derivative formula <span class="math-container">$\frac{d(x^TAx)}{dx} = x^T(A^T + A)$</span> from the above rule and the chain rule.</p>
<p>Let <span class="math-container">$f(x) = x^TAx$</span> and you want to evaluate <span class="math-container">$\frac{df(x)}{dx}$</span>. This is nothing but the gradient of <span class="math-container">$f(x)$</span>.</p> <p>There are two ways to represent the gradient: as a row vector or as a column vector. From what you have written, your representation of the gradient is as a row vector.</p> <p>First make sure to get the dimensions of all the vectors and matrices in place.</p> <p>Here <span class="math-container">$x \in \mathbb{R}^{n \times 1}$</span>, <span class="math-container">$A \in \mathbb{R}^{n \times n}$</span> and <span class="math-container">$f(x) \in \mathbb{R}$</span>.</p> <p>This will help you to make sure that your arithmetic operations are performed on vectors of appropriate dimensions.</p> <p>Now let's move on to the differentiation.</p> <p>All you need to know are the following rules for vector differentiation.</p> <p><span class="math-container">$$\frac{d(x^Ta)}{dx} = \frac{d(a^Tx)}{dx} = a^T$$</span> where <span class="math-container">$x,a \in \mathbb{R}^{n \times 1}$</span>.</p> <p>Note that <span class="math-container">$x^Ta = a^Tx$</span> since it is a scalar, and the equation above can be derived easily.</p> <p>(Some people follow a different convention, i.e. treating the derivative as a column vector instead of a row vector. Make sure to stick to your convention and you will end up with the same conclusion in the end.)</p> <p>Make use of the above results to get,</p> <p><span class="math-container">$$\frac{d(x^TAx)}{dx} = x^T A^T + x^T A$$</span> Use the product rule to get the above result, i.e. first take <span class="math-container">$Ax$</span> as constant and then take <span class="math-container">$x^T A$</span> as constant.</p> <p>So, <span class="math-container">$$\frac{df(x)}{dx} = x^T(A^T + A)$$</span></p>
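<p>A quick sanity check of the final formula against central finite differences; the matrix size and random seed below are arbitrary:</p>

```python
import numpy as np

rng = np.random.default_rng(0)   # arbitrary seed
n = 4
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)

f = lambda v: v @ A @ v          # f(x) = x^T A x

# Analytic gradient, written as a row vector per the convention above:
grad = x @ (A.T + A)             # x^T (A^T + A)

# Central finite differences, one coordinate at a time:
h = 1e-6
numeric = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                    for e in np.eye(n)])

assert np.allclose(grad, numeric, atol=1e-6)
```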
<p>Mathematicians kill each other about derivatives and gradients. Do not be surprised if the students do not understand one word about this subject. The previous havocs are partly caused by the Matrix Cookbook, a book that should be blacklisted. Everyone has their own definition. $\dfrac{d(f(x))}{dx}$ means either a derivative or a gradient (scandalous). We could write $D_xf$ as the derivative and $\nabla _xf$ as the gradient. The derivative is a linear application and the gradient is a vector if we accept the following definition: let $f:E\rightarrow \mathbb{R}$ where $E$ is a Euclidean space. Then, for every $h\in E$, $D_xf(h)=&lt;\nabla_x(f),h&gt;$. In particular $x\rightarrow x^TAx$ has a gradient but $x\rightarrow Ax$ has not ! Using the previous definitions, one has (up to unintentional mistakes):</p> <p>Let $f:x\rightarrow Ax$ where $A\in M_n$ ; then $D_xf=A$ (no problem). On the other hand $x\rightarrow x^T$ is a bijection (a simple change of variable !) ; then we can give meaning to the derivative of $Ax$ with respect to $x^T$: consider the function $g:x^T\rightarrow A(x^T)^T$ ; the required function is $D_{x^T}g:h^T\rightarrow Ah$ where $h$ is a vector ; note that $D_{x^T}g$ is a constant. EDIT: if we choose the bases $e_1^T,\cdots,e_n^T$ and $e_1,\cdots,e_n$ (the second one is the canonical basis), then the matrix associated to $D_{x^T}g$ is $A$ again.</p> <p>Let $\phi:x\rightarrow x^TAx$ ; $D_x\phi:h\rightarrow h^TAx+x^TAh=x^T(A+A^T)h$. Moreover $&lt;\nabla_x(\phi),h&gt;=x^T(A+A^T)h$, that is ${\nabla_x(\phi)}^Th=x^T(A+A^T)h$. By identification, $\nabla_x(\phi)=(A+A^T)x$, a vector (formula (89) in the detestable matrix Cookbook !) ; in particular, the solution above $x^T(A+A^T)$ is not a vector !</p>
combinatorics
<p>Consider the following two generating functions: <span class="math-container">$$e^x=\sum_{n=0}^{\infty}\frac{x^n}{n!}$$</span> <span class="math-container">$$\log\left(\frac{1}{1-x}\right)=\sum_{n=1}^{\infty}\frac{x^n}{n}.$$</span> If we live in function-land, it's clear enough that there is an inverse relationship between these two things. In particular, <span class="math-container">$$e^{\log\left(\frac{1}{1-x}\right)}=1+x+x^2+x^3+\ldots$$</span> If we live in generating-function-land, this identity is really not so obvious. We can figure out that the coefficient of <span class="math-container">$x^n$</span> in <span class="math-container">$e^{\log\left(\frac{1}{1-x}\right)}$</span> is given as <span class="math-container">$$\sum_{a_1+\ldots+a_k=n}\frac{1}{a_1\cdot \cdots \cdot a_k}\cdot \frac{1}{k!}$$</span> where the sum runs over <em>all</em> ways to write <span class="math-container">$n$</span> as an ordered sum of positive integers. Supposedly, for each choice of <span class="math-container">$n$</span>, this thing sums to <span class="math-container">$1$</span>. I really don't see why. Is there a combinatorial argument that establishes this? </p>
<p>In your sum, you are distinguishing between the same collection of numbers when it occurs in different orders. So you'll have separate summands for <span class="math-container">$(a_1,a_2,a_3,a_4)=(3,1,2,1)$</span>, <span class="math-container">$(2,3,1,1)$</span>, <span class="math-container">$(1,1,3,2)$</span> etc.</p> <p>Given a multiset of <span class="math-container">$k$</span> numbers adding to <span class="math-container">$n$</span> consisting of <span class="math-container">$t_1$</span> instances of <span class="math-container">$b_1$</span> up to <span class="math-container">$t_j$</span> instances of <span class="math-container">$b_j$</span>, that contributes <span class="math-container">$$\frac{k!}{t_1!\cdots t_j!}$$</span> (a multinomial coefficient) summands to the sum, and so an overall contribution of <span class="math-container">$$\frac{1}{t_1!b_1^{t_1}\cdots t_j!b_j^{t_j}}$$</span> to the sum. But that is <span class="math-container">$1/n!$</span> times the number of permutations with cycle structure <span class="math-container">$b_1^{t_1}\cdots b_j^{t_j}$</span>. So this identity states that the total number of permutations of <span class="math-container">$n$</span> objects is <span class="math-container">$n!$</span>.</p>
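<p>As a sanity check, the identity can also be verified exactly for small <span class="math-container">$n$</span> by brute force over all compositions (an illustrative script; exact rational arithmetic rules out floating-point doubt):</p>

```python
# Verify: sum over compositions a_1+...+a_k = n of 1/(a_1*...*a_k * k!) equals 1.
from fractions import Fraction
from math import factorial, prod

def compositions(n):
    # all ordered tuples of positive integers summing to n
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

totals = [sum(Fraction(1, prod(c) * factorial(len(c))) for c in compositions(n))
          for n in range(1, 9)]
assert totals == [1] * 8
```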
<p>In brief, <span class="math-container">$n!$</span> times the summand in the sum you write down is equal to the number of permutations on <span class="math-container">$n$</span> symbols that decompose into the product of disjoint cycles of lengths <span class="math-container">$a_1,\dots,a_k$</span>. More precisely, this is true if you combine all of the terms in the sum corresponding to the same multiset <span class="math-container">$\{a_1,\dots,a_k\}$</span>.</p> <p>See exercises 10.2 and 10.3 of <a href="http://jlmartin.faculty.ku.edu/~jlmartin/LectureNotes.pdf" rel="noreferrer">these notes</a> for related material.</p>
differentiation
<p>Which derivatives are eventually periodic?</p> <p>I have noticed that if $a_{n}=f^{(n)}(x)$, the sequence $a_{n}$ becomes eventually periodic for a multitude of $f(x)$. </p> <p>If $f(x)$ were a polynomial, and $\operatorname{deg}(f(x))=n$, note that $f^{(n)}(x)=C$ where $C$ is a constant. This implies that $f^{(n+i)}(x)=0$ for every $i$ which is a natural number. </p> <p>If $f(x)=e^x$, note that $f(x)=f'(x)$. This implies that $f^{(n)}(x)=e^x$ for every natural number $n$. </p> <p>If $f(x)=\sin(x)$, note that $f'(x)=\cos(x), f''(x)=-\sin(x), f'''(x)=-\cos(x), f''''(x)=\sin(x)$.</p> <p>This implies that $f^{(4n)}(x)=f(x)$ for every natural number $n$. </p> <p>In a similar way, if $f(x)=\cos(x)$, $f^{(4n)}(x)=f(x)$ for every natural number $n$.</p> <p>These appear to be the only functions whose derivatives become eventually periodic. </p> <p>What are other functions whose derivatives become eventually periodic? What is known about them? Any help would be appreciated. </p>
<p>The sequence of derivatives being globally periodic (not eventually periodic) with period $m$ is equivalent to the differential equation </p> <p>$$f(x)=f^{(m)}(x).$$</p> <p>All solutions to this equation are of the form $\sum_{k=1}^m c_k e^{\lambda_k x}$ where $\lambda_k$ are solutions to the equation $\lambda^m-1=0$. Thus $\lambda_k=e^{2 k \pi i/m}$. Details can be found in any elementary differential equations textbook.</p> <p>If you merely want eventually periodic, then you can choose an index $n \geq 1$ at which the sequence starts to be periodic and solve</p> <p>$$f^{(n)}(x)=f^{(m+n)}(x).$$</p> <p>The characteristic polynomial in this case is $\lambda^{m+n}-\lambda^n$. This has a $n$-fold root of $0$, and otherwise has the same roots as before. This winds up implying that it is a sum of a polynomial of degree at most $n-1$, plus a solution to the previous equation. Again, the details can be found in any elementary differential equations textbook.</p>
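<p>A small numerical illustration of the characteristic-root description (the coefficients below are arbitrary, and complex-valued solutions are used for simplicity): with <span class="math-container">$\lambda_k$</span> the <span class="math-container">$m$</span>-th roots of unity, differentiating <span class="math-container">$\sum_k c_k e^{\lambda_k x}$</span> simply multiplies <span class="math-container">$c_k$</span> by <span class="math-container">$\lambda_k$</span>, so the <span class="math-container">$m$</span>-th derivative returns the original function.</p>

```python
# Solutions of f = f^(m): f(x) = sum of c_k * exp(lam_k x), lam_k^m = 1.
import cmath

m = 3
lams = [cmath.exp(2j * cmath.pi * k / m) for k in range(m)]
coeffs = [0.7, -1.2, 0.4]        # arbitrary (complex-valued f is allowed here)

def deriv(order, x):
    # differentiating `order` times multiplies coefficient k by lams[k]**order
    return sum(c * lam**order * cmath.exp(lam * x)
               for c, lam in zip(coeffs, lams))

x0 = 0.37
assert abs(deriv(m, x0) - deriv(0, x0)) < 1e-9   # f^(m) = f
assert abs(deriv(1, x0) - deriv(0, x0)) > 1e-3   # but the period is m, not 1
```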
<p>Let's also look at it upside down. You can define analytic (infinitely differentiable) functions with their Taylor series $\sum \frac{a_n}{n!}x^n$. Taylor series are simply all finite and infinite polynomials with coefficient sequences $(a_n)$ that satisfy the series convergence criteria ($a_n$ are the derivatives in the chosen origin point, in my example, $x=0$). Compare this to real numbers (an infinite sequence of non-repeating "digits" - cardinality of the continuum). On the other hand, a set of repeating sequences is comparable to rational numbers (which have eventually repeating digits). So... the "fraction" of all functions which have repeating derivatives is immeasurably small - it's only a very special class of functions that satisfies this criterion (see other answers for appropriate expressions).</p> <p>EDIT: I mentioned this to illustrate the comparison of how special and rare functions with periodic derivatives are. The actual cardinality of the sets depends on the field over which you define the coefficients. If $a_n\in \mathbb{R}$, then recall that the set of continuous functions has cardinality $2^{\aleph_0}$, the cardinality of the continuum, so the cardinalities are the same in this case. If the coefficients are rational, then we have $\aleph_0^{\aleph_0}=2^{\aleph_0}$ for infinite sequences and $\aleph_0\times\aleph_0^n=\aleph_0$ for periodic ones.</p> <p>Not only that, but you can generate all the functions with this property. Just plug any periodic sequence $(a_n)$ into the expression. It's guaranteed to converge for $x\in \mathbb{R}$, because a periodic sequence is bounded, and $n!$ dominates all powers.</p> <p>A simple substitution can demonstrate that if the coefficients are periodic for a series around one origin point, they are periodic for all of them.</p>
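<p>The construction in the last paragraphs is easy to test numerically (a minimal sketch): plugging the periodic pattern $(0,1,0,-1)$ into the series reproduces $\sin$, and the constant pattern $(1)$ reproduces $e^x$.</p>

```python
# Periodic derivative sequences a_n plugged into f(x) = sum a_n x^n / n!.
from math import factorial, sin, exp

def taylor(pattern, x, terms=40):
    # f(x) = sum of pattern[n mod p] * x^n / n!, i.e. periodic Taylor coefficients
    p = len(pattern)
    return sum(pattern[n % p] * x**n / factorial(n) for n in range(terms))

# the derivatives of sin at 0 repeat (0, 1, 0, -1); those of exp repeat (1,)
assert abs(taylor([0, 1, 0, -1], 1.3) - sin(1.3)) < 1e-12
assert abs(taylor([1], 1.3) - exp(1.3)) < 1e-12
```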
probability
<p>Consider a population of nodes arranged in a triangular configuration as shown in the figure below, where each level $k$ has $k$ nodes. Each node, except the ones in the last level, is a parent node to two child nodes. Each node in levels $2$ and below has $1$ parent node if it is at the edge, and $2$ parent nodes otherwise.</p> <p>The single node in level $1$ is infected (red). With some probability $p_0$, it does not infect either of its child nodes in level $2$. With some probability $p_1$, it infects exactly one of its child nodes, with equal probability. With the remaining probability $p_2=1-p_0-p_1$, it infects both of its child nodes.</p> <p>Each infected node in level $2$ then acts in a similar manner on its two child nodes in level $3$, and so on down the levels. It makes <em>no</em> difference whether a node is infected by one or two parent nodes - it's still just infected.</p> <p>The figure below shows one possibility of how the disease may spread up to level $6$.</p> <p><a href="https://i.sstatic.net/NMxL0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NMxL0.png" alt="One possible spread of the disease up to level $6$."></a></p> <p>The question is: <strong>what is the expected number of infected nodes at level $k$?</strong></p> <p>Simulations suggest that this is (at least asymptotically) linear in $k$, i.e.,</p> <p>$$ \mathbb{E}(\text{number of infected nodes in level } k) = \alpha k $$</p> <p>where $\alpha = f(p_0, p_1,p_2)$.</p> <hr> <p>This question arises out of a practical scenario in some research I'm doing. Unfortunately, the mathematics involved is beyond my current knowledge, so I'm kindly asking for your help. Pointers to relevant references are also appreciated.
</p> <p>I asked a <a href="https://math.stackexchange.com/questions/1500829/a-disease-spreading-through-a-triangular-population">different version</a> of this question some time ago, which did not have the possibility of a node not infecting either of its child nodes. It now turns out that in the system I'm looking at, the probability of this happening is not negligible.</p>
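<p>For concreteness, here is a short Monte Carlo sketch of the process just described (the parameter values $p_0=0.2$, $p_1=0.3$ are arbitrary placeholders; the function name is mine):</p>

```python
# Simulate the triangular infection process and estimate E(infected) per level.
import random

def simulate_counts(p0, p1, levels, rng):
    # positions of infected nodes per level; the root occupies position 0
    infected = {0}
    counts = [1]
    for _ in range(levels - 1):
        nxt = set()
        for i in infected:
            u = rng.random()
            if u < p0:
                continue                       # infects neither child
            if u < p0 + p1:
                nxt.add(i + rng.randrange(2))  # one child, chosen equally
            else:
                nxt.update((i, i + 1))         # both children
        infected = nxt
        counts.append(len(infected))
    return counts

rng = random.Random(0)
p0, p1, p2 = 0.2, 0.3, 0.5                     # placeholder probabilities
trials, levels = 20000, 6
sums = [0] * levels
for _ in range(trials):
    for k, c in enumerate(simulate_counts(p0, p1, levels, rng)):
        sums[k] += c
means = [s / trials for s in sums]
# one level below the root, the exact expectation is p1 + 2*p2
```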
<p><strong>Note:</strong> Of course, a most interesting approach would be to derive a generating function describing the probabilities of infected nodes at level $k$ with respect to the probabilities $p_0,p_1$ and $p_2$ and to derive an asymptotic estimate from it.</p> <p>This seems to be a rather tough job, currently out of reach for me, and so this is a much more humble approach trying to obtain at least for small values of $k$ the expectation value $E(k)$ of the number of infected nodes at this level.</p> <p>Here I give the expectation value $E(k)$ for the number of infected nodes at level $k=1,2$ and propose an algorithm to derive the expectation values for greater values of $k$. Maybe someone with computer-based assistance could use it to provide $E(k)$ for some larger values of $k$.</p> <blockquote> <p><strong>The family of graphs of infected nodes:</strong></p> <p>We focus on graphs containing infected nodes only which can be derived from one infected root node. The idea is to iteratively find at step $k$ a manageable representation of <em>all</em> graphs of this kind with diameter equal to the level $k$ based upon the graphs from step $k-1$.</p> <p><strong>Expectation value $E(k=1)$</strong></p> <p>We encode a left branch by $x$, a right branch by $y$ and a paired branch by $tx+ty$, which is marked with $t$ in order to differentiate it from single branching. No branching at all is encoded by $1$. So, starting from a root node denoted with $1$ we obtain four different graphs: \begin{array}{cccc} 1\qquad&amp;x\qquad&amp;y\qquad&amp;tx+ty\\ p_0\qquad&amp;\frac{1}{2}p_1\qquad&amp;\frac{1}{2}p_1\qquad&amp; p_2 \end{array}</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>Three of the graphs $x,y,tx+ty$ have diameter equal to $1$ which means they have <em>leaves</em> at level $k=1$.</p></li> <li><p>These three graphs are of interest for further generations of graphs with higher level. </p></li> <li><p>Let the polynomial $G(x,y,t)$ represent a graph.
The number of terms $x^my^n$ in $G(x,y,t)$ with $m+n=k$ gives the number of nodes at level $k$.</p></li> <li><p>A node without successor nodes is weighted with $p_0$. We associate the weight $\frac{1}{2}p_1$ to $x$ resp. $y$ if they are not marked and the weight $p_2$ to $tx+ty$. </p></li> </ul> <blockquote> <p><strong>Description:</strong> The first generation of graphs corresponding to $k=1$ is obtained from a root node $1$ by <em>multiplication</em> with $(1|x|y|tx+ty)$ whereby the bar $|$ denotes alternatives. \begin{align*} 1(1|x|y|tx+ty)\qquad\rightarrow\qquad 1,x,y,tx+ty \end{align*}</p> <p>We obtain \begin{array}{c|c|c} &amp;&amp;\text{nodes at level }\\ \text{graph}&amp;\text{prob}&amp;k=1\\ \hline 1&amp;p_0&amp;0\\ x&amp;\frac{1}{2}p_1&amp;1\\ y&amp;\frac{1}{2}p_1&amp;1\\ tx+ty&amp;p_2&amp;2\\ \end{array}</p> <p>We conclude \begin{align*} E(1)&amp;=0\cdot p_0 +1\cdot\frac{1}{2}p_1 + 1\cdot\frac{1}{2}p_1+2\cdot p_2\\ &amp;=p_1+2p_2 \end{align*}</p> </blockquote> <p>$$ $$</p> <blockquote> <p><strong>Expectation value $E(k=2)$</strong></p> <p>For the next step $k=2$ we consider all graphs from the step before having diameter equal to $k-1=1$.</p> <p>These are $x,y,tx+ty$. Each of them generates graphs for the next generation by appending at nodes at level $k$ the subgraphs $1|x|y|tx+ty$. If a graph has $n$ nodes at level $k$ we get $4^n$ graphs to analyze for the next generation.</p> <p>But, we will see that due to symmetry we can identify graphs and be able to reduce roughly by a factor two the variety of different graphs.</p> </blockquote> <p><strong>Intermezzo: Symmetry</strong></p> <p>Note the contribution of graphs which are symmetrical with respect to $x$ and $y$ is the same. They both show the same probability of occurrence.</p> <p>Instead of considering the three graphs \begin{align*} x,y,tx+ty \end{align*} we can identify $x$ and $y$. 
We arrange the family $\mathcal{F}_1=\{x,y,tx+ty\}$ of graphs of level $k=1$ in two equivalence classes \begin{align*} [x],[tx+ty] \end{align*} and describe the family more compactly by their equivalence classes together with a multiplication factor giving the number of elements in that class. In order to uniquely describe the different equivalence classes it is convenient to also add the probability of occurrence of a representative in the description. We use a semicolon to separate the probability weight from the polynomial representation of the graph. \begin{align*} [\mathcal{F}_1]=\{2[x;\frac{1}{2}p_1],[tx+ty;p_2]\} \end{align*}</p> <blockquote> <p>The second generation of graphs corresponding to $k=2$ is obtained from $[\mathcal{F}_1]$ via selecting a representative from each equivalence class, and each leaf at level $k-1=1$ is multiplied by $(1|x|y|tx+ty)$. The probability of this graph has to be multiplied accordingly with $(p_0|\frac{1}{2}p_1|\frac{1}{2}p_1|p_2)$. We obtain this way $4+4^2=20$ graphs.</p> <p>We calculate from the representative $x$ of $2[x;\frac{1}{2}p_1]$</p> <p>\begin{align*} x(1|x|y|tx+ty) &amp;\rightarrow (x|x^2|xy|tx^2+txy)\\ \frac{1}{2}p_1\left(p_0\left|\frac{1}{2}p_1\right.\left|\frac{1}{2}p_1\right.\left|\phantom{\frac{1}{2}}p_2\right.\right) &amp;\rightarrow \left(\frac{1}{2}p_0p_1\left|\frac{1}{4}p_1^2\right.\left|\frac{1}{4}p_1^2\right.\left|\frac{1}{2}p_1p_2\right.\right)\\ \end{align*}</p> <p>We obtain from $2[x;\frac{1}{2}p_1]\in[\mathcal{F}_1]$ the first part of equivalence classes of $[\mathcal{F}_2]$ with multiplicity denoting the number of graphs within an equivalence class. We list the representative and the graphs for each class.
\begin{array}{c|l|c|c|l} &amp;&amp;&amp;\text{nodes at}\\ \text{mult}&amp;\text{repr}&amp;\text{prob}&amp;\text{level }2&amp;\text{graphs}\\ \hline 2&amp;x&amp;\frac{1}{2}p_0p_1&amp;0&amp;x,y\\ 2&amp;x^2&amp;\frac{1}{4}p_1^2&amp;1&amp;x^2,y^2\\ 2&amp;xy&amp;\frac{1}{4}p_1^2&amp;1&amp;xy,yx\\ 2&amp;tx^2+txy&amp;\frac{1}{2}p_1p_2&amp;2&amp;tx^2+txy,txy+ty^2\tag{1}\\ \end{array}</p> <p>We calculate from the representative $tx+ty$ of $[tx+ty;p_2]$ using a somewhat informal notation</p> <p>\begin{align*} tx&amp;(1|x|y|tx+ty)+ty(1|x|y|tx+ty)\\ &amp;\rightarrow (tx|tx^2|txy|t^2x^2+t^2xy)+(ty|tyx|ty^2|t^2xy+t^2y^2)\tag{2}\\ \end{align*}</p> </blockquote> <p>We arrange the resulting graphs in groups and associate the probabilities accordingly. The graphs are created by adding a left alternative from (2) with a right alternative from (2). The probabilities are the product of $p_2$ from $[tx+ty;p_2]$ and the corresponding probabilities of the left and right selected alternatives.</p> <blockquote> <p>\begin{array}{ll} tx+ty\qquad&amp;\qquad p_2p_0p_0\\ tx+tyx\qquad&amp;\qquad p_2p_0\frac{1}{2}p_1\\ tx+ty^2\qquad&amp;\qquad p_2p_0\frac{1}{2}p_1\\ tx+t^2xy+t^2y^2\qquad&amp;\qquad p_2p_0p_2\\ \\ tx^2+ty\qquad&amp;\qquad p_2\frac{1}{2}p_1p_0\\ tx^2+tyx\qquad&amp;\qquad p_2\frac{1}{2}p_1\frac{1}{2}p_1\\ tx^2+ty^2\qquad&amp;\qquad p_2\frac{1}{2}p_1\frac{1}{2}p_1\\ tx^2+t^2xy+t^2y^2\qquad&amp;\qquad p_2\frac{1}{2}p_1p_2\\ \\ txy+ty\qquad&amp;\qquad p_2\frac{1}{2}p_1p_0\\ txy+tyx\qquad&amp;\qquad p_2\frac{1}{2}p_1\frac{1}{2}p_1\\ txy+ty^2\qquad&amp;\qquad p_2\frac{1}{2}p_1\frac{1}{2}p_1\\ txy+t^2xy+t^2y^2\qquad&amp;\qquad p_2\frac{1}{2}p_1p_2\\ \\ t^2x^2+t^2y^2+ty\qquad&amp;\qquad p_2p_2p_0\\ t^2x^2+t^2y^2+tyx\qquad&amp;\qquad p_2p_2\frac{1}{2}p_1\\ t^2x^2+t^2y^2+ty^2\qquad&amp;\qquad p_2p_2\frac{1}{2}p_1\\ t^2x^2+t^2y^2+t^2xy+t^2y^2\qquad&amp;\qquad p_2p_2p_2\tag{3}\\ \end{array}</p> </blockquote> <p>A few words about terms like $txxytyxy$. This term can be replaced with $t^2x^3y^2$.
In fact <em>any</em> walk in a graph containing $m$ $x$'s, $n$ $y$'s and $r$ $t$'s (with $r\leq m+n$) can be normalised to \begin{align*} t^rx^my^n \end{align*} If we map this walk to a lattice path in $\mathbb{Z}^2$ with the root at $(0,0)$ and with $x$ moving one step horizontally and $y$ moving one step vertically we always describe a path from $(0,0)$ to $(m,n)$. We could represent each graph as a union of lattice paths.</p> <blockquote> <p>We now create a table as we did in (1). We identify graphs in (3) which belong to the same equivalence class.</p> <p>\begin{array}{c|l|c|c|l} &amp;&amp;&amp;\text{nodes at}\\ \text{mult}&amp;\text{repr}&amp;\text{prob}&amp;\text{level }2&amp;\text{graphs}\\ \hline 1&amp;tx+ty&amp;p_0^2p_2&amp;0&amp;tx+ty\\ 2&amp;tx+txy&amp;\frac{1}{2}p_0p_1p_2&amp;1&amp;tx+txy,txy+ty\\ 2&amp;tx+ty^2&amp;\frac{1}{2}p_0p_1p_2&amp;1&amp;tx+ty^2,tx^2+ty\\ 2&amp;tx+t^2xy+t^2y^2&amp;p_0p_2^2&amp;2&amp;tx+t^2xy+t^2y^2,\\ &amp;&amp;&amp;&amp;t^2x^2+t^2xy+ty\\ 2&amp;tx^2+txy&amp;\frac{1}{4}p_1^2p_2&amp;2&amp;tx^2+txy,txy+ty^2\\ 1&amp;tx^2+ty^2&amp;\frac{1}{4}p_1^2p_2&amp;2&amp;tx^2+ty^2\\ 2&amp;tx^2+t^2xy+t^2y^2&amp;\frac{1}{2}p_1p_2^2&amp;2&amp;tx^2+t^2xy+t^2y^2,\\ &amp;&amp;&amp;&amp;t^2x^2+t^2xy+ty^2\\ 1&amp;2txy&amp;\frac{1}{4}p_1^2p_2&amp;1&amp;txy+txy\\ 2&amp;txy+t^2xy+t^2y^2&amp;\frac{1}{2}p_1p_2^2&amp;3&amp;txy+t^2xy+t^2y^2,\\ &amp;&amp;&amp;&amp;t^2x^2+t^2xy+txy\\ 1&amp;t^2x^2+2t^2xy+t^2y^2&amp;p_2^3&amp;3&amp;t^2x^2+2t^2xy+t^2y^2\tag{4} \end{array}</p> <p>Combining the classes from (1) and (4) gives $[F_2]$.</p> <p>We calculate the expectation value $E(2)$ from the tables in (1) and (4) \begin{align*} E(2)&amp;=0\cdot p_0p_1 +1\cdot\frac{1}{2}p_1^2 + 1\cdot\frac{1}{2}p_1^2+2\cdot p_1p_2\\ &amp;\qquad+0\cdot p_0^2p_2+1\cdot p_0p_1p_2+1\cdot p_0p_1p_2+2\cdot 2p_0p_2^2\\ &amp;\qquad+2\cdot\frac{1}{2}p_1^2p_2+2\cdot\frac{1}{4}p_1^2p_2+2\cdot p_1p_2^2+1\cdot\frac{1}{4}p_1^2p_2\\ &amp;\qquad+3\cdot p_1p_2^2+3\cdot p_2^3\\
&amp;=p_1^2+2p_1p_2+2p_0p_1p_2+4p_0p_2^2+\frac{7}{4}p_1^2p_2+5p_1p_2^2+3p_2^3 \end{align*}</p> </blockquote> <p><strong>Algorithm for $E(k)$</strong></p> <p>Here is a short summary of how $E(k)$ can be derived when $[F_{k-1}]$, the family of equivalence classes of graphs with diameter $k-1$, is already known. </p> <ul> <li><p>Take a representative $G(x,y,t)$ from each equivalence class of $[F_{k-1}]$</p></li> <li><p>Multiply each leaf at level $k-1$ with $(1|x|y|tx+ty)$</p></li> <li><p>Multiply the probability of the representative with $(p_0|\frac{1}{2}p_1|\frac{1}{2}p_1|p_2)$ accordingly</p></li> <li><p>Use $xy$-symmetry of graphs and normalization $t^rx^my^n$ to find new equivalence classes as we did for $k=2$ above. <em>Attention</em>: There may be equivalence classes with <strong>equal</strong> polynomial representatives but with <strong>different</strong> probabilities.</p></li> <li><p>The number of nodes at level $k$ of a graph $G(x,y,t)$ is the number of terms \begin{align*} t^rx^my^n\qquad\qquad m,n\geq 0, 0\leq r\leq m+n=k \end{align*}</p></li> <li><p>Build $[F_k]$ by collecting all equivalence classes respecting the multiplicity (number of graphs) in an equivalence class</p></li> <li><p>Calculate $E(k)$</p></li> </ul>
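<p>The values of $E(1)$ and $E(2)$ derived in this answer can be confirmed by exact brute-force enumeration of the first two generations (an illustrative script; exact rational arithmetic avoids any floating-point doubt, and the parameter values are arbitrary):</p>

```python
# Exact enumeration of the infection process for two generations.
from fractions import Fraction as F
from itertools import product

def outcomes(p0, p1, p2):
    # a parent infects: neither child, the left, the right, or both
    return [(frozenset(), p0), (frozenset({0}), p1 / 2),
            (frozenset({1}), p1 / 2), (frozenset({0, 1}), p2)]

def expected_infected(p0, p1, p2, gens):
    # exact expected number of infected nodes `gens` levels below the root
    outs = outcomes(p0, p1, p2)
    states = [({0}, F(1))]          # (infected positions, probability)
    for _ in range(gens):
        new_states = []
        for infected, prob in states:
            nodes = sorted(infected)
            for choice in product(outs, repeat=len(nodes)):
                q = prob
                nxt = set()
                for i, (s, w) in zip(nodes, choice):
                    q *= w
                    nxt.update(i + off for off in s)
                new_states.append((nxt, q))
        states = new_states
    return sum(q * len(s) for s, q in states)

p0, p1, p2 = F(1, 5), F(3, 10), F(1, 2)   # arbitrary test probabilities
assert expected_infected(p0, p1, p2, 1) == p1 + 2 * p2
E2 = expected_infected(p0, p1, p2, 2)
formula = (p1**2 + 2*p1*p2 + 2*p0*p1*p2 + 4*p0*p2**2
           + F(7, 4)*p1**2*p2 + 5*p1*p2**2 + 3*p2**3)
assert E2 == formula
```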
<p>According to discussion in the comments of the question we can conclude that infections of the children of the same node are not independent. But we don't know if that dependence is horizontal or vertical. Meaning, maybe infection of one node has influence on its sibling, or maybe the parent statistically has higher or lower influence on its children, so sometimes it infects both, one or none of them.</p> <p>Since disease spreads vertically only, I will consider the second case.</p> <p>Let parents be named B,L,R or N if they infect both, left, right or none of the children, where $p(L)=p(R)=\frac{p_1}2$. We can draw all possible trees and count their probabilities. For instance $$B$$$$B\_\_R$$$$N\_\_L\_\_R$$$$0\_\_L\_\_0\_\_B$$ $$0\_\_\_1\_\_0\_\_1\_\_\_1$$ has probability $B^3R^2L^2N$ (0-not infected, 1-infected)</p> <p>Let's discuss all the graphs that end with $01011$; then the previous row has to be one of $\{0L0B,R00B,...\}=$ $(\{R\}\times\{0,L,N\}\bigcup \{0,N\}\times\{L\})\times(\{R\}\times\{R\}\bigcup\{0,R,N\}\times\{B\})$</p> <p>The probability that $01011$ is preceded by $0L0B$ is equal to the probability for $0101$ to happen, times $p(L)p(B)$.</p> <p>So $p(01011)=p(1001)p(R)p(B)+p(0101)p(L)p(B) +p(1011)p(R)(p(R)^2+p(R)p(B)+p(N)p(B))+p(0111)p(L)(p(R)^2+p(R)p(B)+p(N)p(B)) +p(1101)(p(R)p(L)+p(N)p(R)+p(N)p(L))p(B) +p(1111)(p(R)p(L)+p(N)p(R)+p(N)p(L))(p(R)^2+p(R)p(B)+p(N)p(B))$</p> <p>Also $p(1011)=p(1101)$ as a mirror pair.</p> <p>So, it remains to find a way of generating the preceding combinations for a 011011110... It is made of several groups of consecutive $1$s which are independent, so they can be attached to each other by a Cartesian product, also multiplied by $\{0,N\}$ for each pair of consecutive $0$s in between.
</p> <p>Let $f(n)$ be the set of all combinations that precede $n$ consecutive $1$s, $f(1)=\{RN,R0,RL,0L,NL\}$, and let's define $g(n)$ as the set of all elements from $f(n)$ that start with $R$ or $0$, but with a leading $R$ substituted with $B$ and a leading $0$ substituted with $L$, $g(1)=\{BN,B0,BL,LL\}$. Then $f(n+1)=R\times f(n)\bigcup \{R,0,N\}\times g(n)$</p> <p>Exceptions are the groups on the end or start of the level, where you pick only combinations with leading or ending zero and remove that zero. Let's define sets for leading groups as $lf(n)$, sets for ending groups as $ef(n)$ and groups that are both leading and ending as $lef(n)$.</p> <p>$f(1)=\{RN,R0,RL,0L,NL\}$<br> $g(1)=\{BN,B0,BL,LL\}$<br> $lf(1)=\{L\}$, $ef(1)=\{R\}$<br> $f(2)=\{RRN,RR0,R0L,RNL,RBN,RB0,RBL,RLL,NBN,NB0,NBL,NLL,0BN,0B0,0BL,0LL\}$<br> $g(2)=\{BRN,BR0,B0L,BNL,BBN,BB0,BBL,BLL,LBN,LB0,LBL,LLL\}$<br> $lf(2)=\{BN,B0,BL,LL\}$, $ef(2)=\{RR,RB,NB,0B\}$, $lef(2)=\{B\}$<br> $f(3)=\{RRRN,RRR0,RR0L,RRNL,RRBN,RRB0,RRBL,RRLL,RNBN,RNB0,RNBL,RNLL,R0BN,R0B0,R0BL,R0LL,RBRN,RBR0,RB0L,RBNL,RBBN,RBB0,RBBL,RBLL,RLBN,RLB0,RLBL,RLLL,NBRN,NBR0,NB0L,NBNL,NBBN,NBB0,NBBL,NBLL,NLBN,NLB0,NLBL,NLLL,0BRN,0BR0,0B0L,0BNL,0BBN,0BB0,0BBL,0BLL,0LBN,0LB0,0LBL,0LLL\}$<br> $lf(3)=\{BRN,BR0,B0L,BNL,BBN,BB0,BBL,BLL,LBN,LB0,LBL,LLL\}$, $ef(3)=\{RRR,RRB,RNB,R0B,RBR,RBB,RLB,NBR,NBB,NLB,0BR,0BB,0LB\}$, $lef(3)=\{BR,BB,LB\}$<br></p> <p>Let's calculate $p(111),p(110),p(101),p(100),p(010),p(000)$.
Note that $p(L)=p(10)=\frac {p_1} 2=p(01)=p(R)$, $p(N)=p(00)=p_0$, $p(B)=p(11)=p_2$</p> <p>$prec(111)=lef(3)=\{BR,BB,LB\}$ is the set of rows that can precede $111$, so $p(111)=p(11)p(B)(p(B)+p(R)+p(L))$; from now on I will write just $B$ instead of $p(B)$<br> $prec(110)=lf(2)$,<br> $p(110)=p(11)(BN+BL+LL)+p(10)B=B(BN+BL+LL+L)$<br> $prec(101)=lf(1)\times ef(1)=\{LR\}$,<br> $p(101)=p(11)LR=BLR$<br> $prec(100)=lf(1)\times \{0,N\}=\{LN,L0\}$,<br> $p(100)=p(11)LN+p(10)L=BLN+LL$<br> $prec(010)=f(1)$,<br> $p(010)=p(11)(RN+RL+NL)+p(10)R+p(01)L=B(RN+RL+NL)+2RL$</p> <p>$p(000)$ isn't needed, and $p(100)=p(001)$, $p(110)=p(011)$</p> <p>$E(2)=3p(111)+2(p(101)+2p(110))+2p(100)+p(010)= 3p(111)+2p(101)+4p(110)+2p(100)+p(010)= 3p_2^2(p_2+p_1)+2p_2p_1^2/4+4p_2(p_2p_0+p_2p_1/2+p_1^2/4+p_1/2)+p_0p_1p_2+p_1^2/2+p_0p_1p_2+p_2p_1^2/4+p_1^2/2=3p_2^3+5p_2^2p_1+\frac{7}{4}p_1^2p_2+4p_0p_2^2+2p_1p_2+2p_0p_1p_2+p_1^2$</p> <p>Now that you have the probabilities on the 3rd level, you can "easily" calculate them on the 4th level</p> <p>$prec(1111)=lef(4)$<br> $prec(1110)=lf(3)$,<br> $prec(1101)=lf(2)\times ef(1)$,<br> $prec(1100)=lf(2)\times \{0,N\}$,<br> $prec(1010)=lf(1)\times f(1)$,<br> $prec(1001)=lf(1)\times \{0,N\}\times ef(1)$,<br> $prec(0110)=f(2)$,<br> $prec(1000)=lf(1)\times \{0,N\}\times \{0,N\}$,<br> $prec(0100)=f(1)\times \{0,N\}$,<br>
geometry
<p>You've spent your whole life in the hyperbolic plane. It's second nature to you that the area of a triangle depends only on its angles, and it seems absurd to suggest that it could ever be otherwise.</p> <p>But recently a good friend named Euclid has raised doubts about the fifth postulate of Poincaré's <em>Elements</em>. This postulate is the obvious statement that given a line $L$ and a point $p$ not on $L$ there are at least two lines through $p$ that do not meet $L$. Your friend wonders what it would be like if this assertion were replaced with the following: given a line $L$ and a point $p$ not on $L$, there is exactly one line through $p$ that does not meet $L$.</p> <p>You begin investigating this Euclidean geometry, but you find it utterly impossible to visualize intrinsically. You decide your only hope is to find a model of this geometry within your familiar hyperbolic plane. </p> <p>What model do you build?</p> <p>I do not know if there's a satisfying answer to this question, but maybe it's entertaining to try to imagine. For clarity, we Euclidean creatures have built models like the upper-half plane model or the unit-disc model to visualize hyperbolic geometry within a Euclidean domain. I'm wondering what the reverse would be.</p>
<p>Here's another version of Doug Chatham's answer, but with details.</p> <p>If you lived in hyperbolic space, then Euclidean geometry would be natural to you as well. The reason is that you can take what is called a horosphere (in the half-space model for us, this is just a hyperplane which is parallel to our limiting hyperplane) and this surface actually has a Euclidean geometry on it!</p> <p>So unlike for us, where the hyperbolic plane cannot be embedded into Euclidean 3-space, the opposite is true: the Euclidean plane can be embedded into hyperbolic 3-space! So this is analogous to our understanding of spherical geometry. It's no surprise that spherical geometry is slightly different; however, it fits nicely into our Euclidean view of things, since spherical geometry is contained in our three-dimensional geometry via the embedding.</p>
<p>Look up "horosphere" (for example, in page 90 of the Princeton Companion to Mathematics). Wikipedia describes it on its <a href="http://en.wikipedia.org/wiki/horoball">Horoball</a> page. </p>
geometry
<p>On a disk, choose <span class="math-container">$n$</span> uniformly random points. Then draw the smallest circle enclosing those points. (<a href="https://www.personal.kent.edu/%7Ermuhamma/Compgeometry/MyCG/CG-Applets/Center/centercli.htm" rel="noreferrer">Here</a> are some algorithms for doing so.)</p> <p>The circle may or may not lie completely on the disk. For example, with <span class="math-container">$n=7$</span>, here are examples of both cases.</p> <p><a href="https://i.sstatic.net/PReoa.png" rel="noreferrer"><img src="https://i.sstatic.net/PReoa.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/xSoOX.png" rel="noreferrer"><img src="https://i.sstatic.net/xSoOX.png" alt="enter image description here" /></a></p> <blockquote> <p>What is <span class="math-container">$\lim\limits_{n\to\infty}\{\text{Probability that the circle lies completely on the disk}\}$</span>?</p> </blockquote> <p>Is the limiting probability <span class="math-container">$0$</span>? Or <span class="math-container">$1$</span>? Or something in between? 
My geometrical intuition fails to tell me anything.</p> <h2>The case <span class="math-container">$n=2$</span></h2> <p>I have only been able to find that, when <span class="math-container">$n=2$</span>, the probability that the smallest enclosing circle lies completely on the disk, is <span class="math-container">$2/3$</span>.</p> <p>Without loss of generality, assume that the perimeter of the disk is <span class="math-container">$x^2+y^2=1$</span>, and the two points are <span class="math-container">$(x,y)$</span> and <span class="math-container">$(0,\sqrt t)$</span> where <span class="math-container">$t$</span> is <a href="https://mathworld.wolfram.com/DiskPointPicking.html" rel="noreferrer">uniformly distributed</a> in <span class="math-container">$[0,1]$</span>.</p> <p>The smallest enclosing circle has centre <span class="math-container">$C\left(\frac{x}{2}, \frac{y+\sqrt t}{2}\right)$</span> and radius <span class="math-container">$r=\frac12\sqrt{x^2+(y-\sqrt t)^2}$</span>. If the smallest enclosing circle lies completely on the disk, then <span class="math-container">$C$</span> lies within <span class="math-container">$1-r$</span> of the origin. 
That is,</p> <p><span class="math-container">$$\sqrt{\left(\frac{x}{2}\right)^2+\left(\frac{y+\sqrt t}{2}\right)^2}\le 1-\frac12\sqrt{x^2+(y-\sqrt t)^2}$$</span></p> <p>which is equivalent to</p> <p><span class="math-container">$$\frac{x^2}{1-t}+y^2\le1$$</span></p> <p>The <a href="https://byjus.com/maths/area-of-ellipse/" rel="noreferrer">area</a> of this region is <span class="math-container">$\pi\sqrt{1-t}$</span>, and the area of the disk is <span class="math-container">$\pi$</span>, so the probability that the smallest enclosing circle lies completely on the disk is <span class="math-container">$\sqrt{1-t}$</span>.</p> <p>Integrating from <span class="math-container">$t=0$</span> to <span class="math-container">$t=1$</span>, the probability is <span class="math-container">$\int_0^1 \sqrt{1-t}dt=2/3$</span>.</p> <h2>Edit</h2> <p>From the comments, @Varun Vejalla has run trials that suggest that, for small values of <span class="math-container">$n$</span>, the probability (that the enclosing circle lies completely on the disk) is <span class="math-container">$\frac{n}{2n-1}$</span>, and that the limiting probability is <span class="math-container">$\frac12$</span>. There should be a way to prove these results.</p> <h2>Edit2</h2> <p>I seek to generalize this question <a href="https://mathoverflow.net/q/458571/494920">here</a>.</p>
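<p>The $2/3$ answer for $n=2$ is also easy to confirm by simulation (a minimal sketch; for $n=2$ the smallest enclosing circle simply has the two points as a diameter):</p>

```python
# Monte Carlo estimate of P(smallest enclosing circle lies on the disk), n = 2.
import math
import random

rng = random.Random(1)

def disk_point():
    # uniform point on the unit disk (radius drawn via the inverse CDF)
    r = math.sqrt(rng.random())
    t = 2 * math.pi * rng.random()
    return r * math.cos(t), r * math.sin(t)

trials = 40000
inside = 0
for _ in range(trials):
    x1, y1 = disk_point()
    x2, y2 = disk_point()
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2   # centre of the enclosing circle
    r = math.hypot(x1 - x2, y1 - y2) / 2    # its radius
    if math.hypot(cx, cy) + r <= 1:
        inside += 1
p_hat = inside / trials                      # should be close to 2/3
```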
<p>First, let me state two lemmas that demand tedious computations. Let <span class="math-container">$B(x, r)$</span> denote the circle centered at <span class="math-container">$x$</span> with radius <span class="math-container">$r$</span>.</p> <p><strong>Lemma 1</strong>: Let <span class="math-container">$B(x,r)$</span> be a circle contained in <span class="math-container">$B(0, 1)$</span>. Suppose we sample two points <span class="math-container">$p_1, p_2$</span> inside <span class="math-container">$B(0, 1)$</span>, and <span class="math-container">$B(x', r')$</span> is the circle with <span class="math-container">$p_1p_2$</span> as diameter. Then we have <span class="math-container">$$\mathbb{P}(x' \in x + dx, r' \in r + dr) = \frac{8}{\pi} r dxdr.$$</span></p> <p><strong>Lemma 2</strong>: Let <span class="math-container">$B(x,r)$</span> be a circle contained in <span class="math-container">$B(0, 1)$</span>. Suppose we sample three points <span class="math-container">$p_1, p_2, p_3$</span> inside <span class="math-container">$B(0, 1)$</span>, and <span class="math-container">$B(x', r')$</span> is the circumcircle of <span class="math-container">$p_1p_2p_3$</span>. Then we have <span class="math-container">$$\mathbb{P}(x' \in x + dx, r' \in r + dr) = \frac{24}{\pi} r^3 dxdr.$$</span> Furthermore, conditioned on this happening, the probability that <span class="math-container">$p_1p_2p_3$</span> is acute is exactly <span class="math-container">$1/2$</span>.</p> <p>Given these two lemmas, let's see how to compute the probability in question. Let <span class="math-container">$p_1, \cdots, p_n$</span> be the <span class="math-container">$n$</span> points we selected, and <span class="math-container">$C$</span> is the smallest circle containing them. For each <span class="math-container">$i &lt; j &lt; k$</span>, let <span class="math-container">$C_{ijk}$</span> denote the circumcircle of three points <span class="math-container">$p_i, p_j, p_k$</span>. 
For each <span class="math-container">$i &lt; j$</span>, let <span class="math-container">$D_{ij}$</span> denote the circle with diameter <span class="math-container">$p_i, p_j$</span>. Let <span class="math-container">$E$</span> denote the event that <span class="math-container">$C$</span> is contained in <span class="math-container">$B(0, 1)$</span>.</p> <p>First, a geometric statement.</p> <p><strong>Claim</strong>: Suppose no four of <span class="math-container">$p_i$</span> are concyclic, which happens with probability <span class="math-container">$1$</span>. Then exactly one of the following scenarios happen.</p> <ol> <li><p>There exists unique <span class="math-container">$1 \leq i &lt; j &lt; k \leq n$</span> such that <span class="math-container">$p_i, p_j, p_k$</span> form an acute triangle and <span class="math-container">$C_{ijk}$</span> contains all the points <span class="math-container">$p_1, \cdots, p_n$</span>. In this case, <span class="math-container">$C = C_{ijk}$</span>.</p> </li> <li><p>There exists unique <span class="math-container">$1 \leq i &lt; j \leq n$</span> such that <span class="math-container">$D_{ij}$</span> contains all the points <span class="math-container">$p_1, \cdots, p_n$</span>. In this case, <span class="math-container">$C = D_{ij}$</span>.</p> </li> </ol> <p><strong>Proof</strong>: This is not hard to show, and is listed <a href="https://en.wikipedia.org/wiki/Smallest-circle_problem" rel="noreferrer">on wikipedia</a>.</p> <p>Let <span class="math-container">$E_1$</span> be the event that <span class="math-container">$E$</span> happens and we are in scenario <span class="math-container">$1$</span>. Let <span class="math-container">$E_2$</span> be the event that <span class="math-container">$E$</span> happens and we are in scenario <span class="math-container">$2$</span>.</p> <p>We first compute the probability that <span class="math-container">$E_1$</span> happens. 
It is <span class="math-container">$$\mathbb{P}(E_1) = \sum_{1 \leq i &lt; j &lt; k \leq n} \mathbb{P}(\forall \ell \neq i,j,k, p_\ell \in C_{ijk} , C_{ijk} \subset B(0, 1), p_ip_jp_k \text{ acute}).$$</span> Conditioned on <span class="math-container">$C_{ijk} = B(x, r)$</span>, Lemma 2 shows that this happens with probability <span class="math-container">$\frac{1}{2}r^{2(n - 3)} \mathbb{1}_{|x| + r \leq 1}$</span>. Lemma 2 also tells us the distribution of <span class="math-container">$(x, r)$</span>. Integrating over <span class="math-container">$(x, r)$</span>, we conclude that <span class="math-container">$$\mathbb{P}(\forall \ell \neq i,j,k, p_\ell \in C_{ijk} , C_{ijk} \subset B(0, 1), p_ip_jp_k \text{ acute}) = \int_{B(0, 1)}\int_0^{1 - |x|} r^{2(n - 3)} \cdot \frac{12}{\pi} r^3 dr dx.$$</span> Thus we have <span class="math-container">$$\mathbb{P}(E_1) = \binom{n}{3}\int_{B(0, 1)}\int_0^{1 - |x|} r^{2(n - 3)} \cdot \frac{12}{\pi} r^3 dr dx.$$</span> We can first integrate the <span class="math-container">$x$</span>-variable to get <span class="math-container">$$\int_{B(0, 1)}\int_0^{1 - |x|} r^{2(n - 3)} \cdot \frac{12}{\pi} r^3 dr dx = 12 \int_0^1 r^{2n - 3}(1 - r)^2 dr.$$</span> Note that <span class="math-container">$$\int_0^1 r^{2n - 3}(1 - r)^2 dr = \frac{(2n - 3)! \cdot 2!}{(2n)!} = \frac{2}{2n \cdot (2n - 1) \cdot (2n - 2)}.$$</span> So we conclude that <span class="math-container">$$\mathbb{P}(E_1) = \frac{n - 2}{2n - 1}.$$</span> We next compute the probability that <span class="math-container">$E_2$</span> happens. It is <span class="math-container">$$\mathbb{P}(E_2) = \sum_{1 \leq i &lt; j \leq n} \mathbb{P}(\forall \ell \neq i,j, p_\ell \in D_{ij} , D_{ij} \subset B(0, 1)).$$</span> Conditioned on <span class="math-container">$D_{ij} = B(x, r)$</span>, this happens with probability <span class="math-container">$r^{2(n - 2)} \mathbb{1}_{|x| + r \leq 1}$</span>. Lemma 1 tells us the distribution of <span class="math-container">$(x, r)$</span>.
So we conclude that <span class="math-container">$$\mathbb{P}(\forall \ell \neq i,j, p_\ell \in D_{ij} , D_{ij} \subset B(0, 1)) = \int_{B(0, 1)}\int_0^{1 - |x|} r^{2(n - 2)} \cdot \frac{8}{\pi} r dr dx.$$</span> So <span class="math-container">$$\mathbb{P}(E_2) = \binom{n}{2}\int_{B(0, 1)}\int_0^{1 - |x|} r^{2(n - 2)} \cdot \frac{8}{\pi} r dr dx.$$</span> We compute that <span class="math-container">$$\int_{B(0, 1)}\int_0^{1 - |x|} r^{2(n - 2)} \cdot \frac{8}{\pi} r dr dx = 8\int_0^1 r^{2n - 3} (1 - r)^2 dr = 8 \cdot \frac{(2n - 3)! \cdot 2!}{(2n)!}.$$</span> So we conclude that <span class="math-container">$$\mathbb{P}(E_2) = 8 \binom{n}{2} \frac{(2n - 3)! \cdot 2!}{(2n)!} = \frac{2}{2n - 1}.$$</span> Finally, we get <span class="math-container">$$\mathbb{P}(E) = \mathbb{P}(E_1) + \mathbb{P}(E_2) = \boxed{\frac{n}{2n - 1}}.$$</span> The proofs of the two lemmas are really not very interesting. The main tricks are some coordinate changes. Let's look at Lemma 1, for example. The trick is to make the coordinate change <span class="math-container">$p_1 = x + r (\cos \theta, \sin \theta), p_2 = x + r (-\cos \theta, -\sin \theta)$</span>. One can compute the Jacobian of the coordinate change as something like <span class="math-container">$$J = \begin{bmatrix} 1 &amp; 0 &amp; 1 &amp; 0 \\ 0 &amp; 1 &amp; 0 &amp; 1 \\ \cos \theta &amp; \sin \theta &amp; -\cos \theta &amp; -\sin \theta \\ -r\sin \theta &amp; r\cos \theta &amp; r\sin \theta &amp; -r\cos \theta \\ \end{bmatrix}.$$</span> And we can compute that <span class="math-container">$|\det J| = 4r$</span>.
As <span class="math-container">$p_1, p_2$</span> have density function <span class="math-container">$\frac{1}{\pi^2} \mathbf{1}_{p_1, p_2 \in B(0, 1)}$</span>, the new coordinate system <span class="math-container">$(x, r, \theta)$</span> has density function <span class="math-container">$$\frac{4r}{\pi^2} \mathbf{1}_{p_1, p_2 \in B(0, 1)}.$$</span> The indicator factor can be dropped, as it is identically <span class="math-container">$1$</span> in a neighborhood of <span class="math-container">$(x, r)$</span>. To get the density of <span class="math-container">$(x, r)$</span> you can integrate in the <span class="math-container">$\theta$</span> variable to conclude that the density of <span class="math-container">$(x, r)$</span> is <span class="math-container">$$\frac{8r}{\pi}$$</span> as desired.</p> <p>The proof of Lemma 2 is analogous, except that you use the more complicated coordinate change from <span class="math-container">$(p_1, p_2, p_3)$</span> to <span class="math-container">$(x, r, \theta_1, \theta_2, \theta_3)$</span> given by <span class="math-container">$$p_1 = x + r (\cos \theta_1, \sin \theta_1), p_2 = x + r (\cos \theta_2, \sin \theta_2), p_3 = x + r (\cos \theta_3, \sin \theta_3).$$</span> The Jacobian <span class="math-container">$J$</span> is now <span class="math-container">$6$</span>-dimensional, and Mathematica tells me that its determinant is <span class="math-container">$$|\det J| = r^3|\sin(\theta_1 - \theta_2) + \sin(\theta_2 - \theta_3) + \sin(\theta_3 - \theta_1)|.$$</span> So we just need to integrate this in <span class="math-container">$\theta_{1,2,3}$</span>! Unfortunately, Mathematica failed to do this integration, but I imagine you can do this by hand and get the desired lemma.</p>
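To check the formula empirically (a sketch of mine, not part of the answer), one can brute-force the smallest enclosing circle for <span class="math-container">$n = 3$</span> using the Claim above: it is either a diameter circle <span class="math-container">$D_{ij}$</span> containing the third point, or (for an acute triangle) the circumcircle. The formula predicts <span class="math-container">$3/(2\cdot3 - 1) = 3/5$</span>:

```python
import math, random

rng = random.Random(0)

def sample_disk():
    # uniform point in the unit disk via rejection from the bounding square
    while True:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y <= 1:
            return (x, y)

def circumcircle(a, b, c):
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay) + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx) + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

def smallest_circle(p, q, s):
    # by the Claim: either some diameter circle contains the third point,
    # or the triangle is acute and the answer is the circumcircle
    pts = [p, q, s]
    best = None
    for i in range(3):
        for j in range(i + 1, 3):
            cx, cy = (pts[i][0] + pts[j][0]) / 2, (pts[i][1] + pts[j][1]) / 2
            r = math.hypot(pts[i][0] - pts[j][0], pts[i][1] - pts[j][1]) / 2
            k = 3 - i - j  # index of the remaining point
            if math.hypot(pts[k][0] - cx, pts[k][1] - cy) <= r + 1e-12:
                if best is None or r < best[1]:
                    best = ((cx, cy), r)
    return best if best is not None else circumcircle(p, q, s)

trials = 100_000
inside = 0
for _ in range(trials):
    (cx, cy), r = smallest_circle(sample_disk(), sample_disk(), sample_disk())
    if math.hypot(cx, cy) + r <= 1:
        inside += 1
estimate = inside / trials
print(estimate)  # the formula predicts 3/(2*3 - 1) = 0.6
```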
<p>Intuitive answer:</p> <p>Let <span class="math-container">$R_d$</span> be the radius of the disk and let <span class="math-container">$R_e$</span> be the radius of the enclosing circle of the random points on the disk and <span class="math-container">$R_p$</span> be the radius of a circle that passes through the outermost points such that all selected random points on the disk either lie on or within <span class="math-container">$R_p$</span>.</p> <p>The answer to this question is highly dependent on the precise definitions of the terms and phrases used in the question.</p> <p>Assumption 1: Does the definition of &quot;enclosing disk ...(of the points)... lies completely on the disk&quot; include the case of the enclosing circle lying exactly on the perimeter of the disk? i.e. does it mean <span class="math-container">$R_e &lt; R_d$</span> or <span class="math-container">$R_e\leq R_d$</span>? I will assume the latter.</p> <p>Assumption 2: Does the smallest enclosing circle of the points include the case of some of the enclosed points lying on the enclosing disk? i.e. does it mean <span class="math-container">$R_e &gt; R_p$</span> or <span class="math-container">$R_e = R_p$</span>? I will assume the latter.</p> <p>It is well known that a circle can be defined by a minimum of 3 non-collinear points. The question can now be boiled down to &quot;If there are infinitely many points on the disk, what is the probability of at least 3 of the points being on the perimeter of the disk?&quot;</p> <p>Intuition says that if there are an infinite number of points that are either on or within the perimeter of the disk, then the probability of there being 3 points exactly on the perimeter of the disk is exactly unity.
If there are at least 3 points exactly on the perimeter of the disk then the enclosing circle lies completely on the disk, so the answer to the OP question is:</p> <p>&quot;The probability that the smallest circle enclosing <span class="math-container">$n$</span> random points on a disk lies completely on the disk, as <span class="math-container">$n\to\infty$</span>, is 1.&quot;</p> <p>If we define the meaning of &quot;enclosing circle lies completely on the disk&quot; to mean strictly <span class="math-container">$R_e &lt; R_d$</span> then things get more complicated. Now the question boils down to &quot;What is the probability of an infinite number of random points on the disk not having any points exactly on the perimeter of the disk?&quot;</p> <p>If any of the random points lie exactly on the perimeter of the disk, then the enclosing circle touches the perimeter of the disk and, by the definitions of this alternative interpretation of the question, the enclosing circle does not lie entirely within the perimeter of the disk. The intuitive probability of placing an infinite number of random points on the disk without any of the points landing exactly on the perimeter of the disk is zero, so the answer to this alternative interpretation of the question is:</p> <p>&quot;The probability that the smallest circle enclosing <span class="math-container">$n$</span> random points on a disk lies completely on the disk, as <span class="math-container">$n\to\infty$</span>, is 0.&quot;</p>
linear-algebra
<p>I'm looking for a book to learn Algebra. The programme is the following. The units marked with a $\star$ are the ones I'm most interested in (in the sense I know nothing about) and those with a $\circ$ are those which I'm mildly comfortable with. The ones that aren't marked shouldn't be of importance. Any important topic inside a unit will be boldfaced.</p> <p><strong>U1:</strong> <em>Vector Algebra.</em> Points in the $n$-dimensional space. Vectors. Scalar product. Norm. Lines and planes. Vectorial product.</p> <p>$\circ$ <strong>U2:</strong> <em>Vector Spaces.</em> Definition. Subspaces. Linear independence. Linear combination. Generating systems. Basis. Dimension. Sum and intersection of subspaces. Direct sum. Spaces with inner products.</p> <p>$\circ$ <strong>U3:</strong> <em>Matrices and determinants.</em> Matrix Spaces. Sum and product of matrices. Linear equations. Gauss-Jordan elimination. Rank. <strong>Rouché–Frobenius Theorem. Determinants. Properties. Determinant of a product. Determinants and inverses.</strong></p> <p>$\star$ <strong>U4:</strong> <em>Linear transformations.</em> Definition. Kernel and image. Monomorphisms, epimorphisms and isomorphisms. Composition of linear transformations. Inverse linear transforms.</p> <p><strong>U5:</strong> <em>Complex numbers and polynomials.</em> Complex numbers. Operations. Binomial and trigonometric form. De Moivre's Theorem. Solving equations. Polynomials. Degree. Operations. Roots. Remainder theorem. Factorization. FTA. <strong>Lagrange interpolation.</strong></p> <p>$\star$ <strong>U6:</strong> <em>Linear transformations and matrices.</em> Matrix of a linear transformation. Matrix of the composition. Matrix of the inverse. Base changes. </p> <p>$\star$ <strong>U7:</strong> <em>Eigenvalues and eigenvectors</em> Eigenvalues and eigenvectors. Characteristic polynomial. Applications. Invariant subspaces. Diagonalization.
</p> <p>To let you know, I own a copy of Apostol's Calculus $\mathrm{I}$, which has some of those topics, precisely:</p> <ul> <li>Linear Spaces</li> <li>Linear Transformations and Matrices.</li> </ul> <p>I also have a copy of Apostol's second book of Calc $\mathrm{II}$, which continues with</p> <ul> <li>Determinants</li> <li>Eigenvalues and eigenvectors</li> <li>Eigenvalues of operators in Euclidean spaces.</li> </ul> <p>I was recommended <em>Linear Algebra</em> by Armando Rojo and have <em>Linear Algebra</em> by <a href="http://www.uv.es/~ivorra/">Carlos Ivorra</a>, which seems quite a good text. </p> <p>What do you recommend? </p>
<p>"Linear Algebra Done Right" by Sheldon Axler is an excellent book.</p>
<p>Gilbert Strang has a ton of resources on his webpage, most of which are quite good:</p> <p><a href="http://www-math.mit.edu/~gs/">http://www-math.mit.edu/~gs/</a></p>
probability
<p>Is it possible to pick a random natural number? How about a random real number? Is the axiom of choice involved in this? </p>
<p>The question is what "random" means. Remember that mathematics is built on precision. This means that the terms you use have to have a precise meaning (or at least a reasonably obvious way to translate them into one).</p> <p>Random is not one of these terms. The natural meaning of "random" really just means "arbitrary". To choose an arbitrary element from a non-empty set you don't need anything except existential instantiation, which allows us to pass from "There is something" to "This is something". Of course you can't say which element was taken when you instantiate an existential quantifier. That's the true sense of the word arbitrary.</p> <p>On the other hand, from the probability point of view, random just means that you have some distribution which tells you the odds of an element being chosen from a particular collection of elements. It might not be able to assign a probability to every possible collection, though. </p> <p>Probability has a few axioms. One of them says that given countably many pairwise disjoint sets, the probability of being in one of them is the sum of their probabilities. Another says that the probability of picking some element from the whole set is $1$.</p> <p>Now it's easy to see why a countably infinite set admits no uniform probability. If every singleton is assigned probability $0$, then since a countably infinite set can be written as a countable union of singletons, the axioms of probability would force the probability of picking <em>any</em> element from the set to be $0$, which is impossible. On the other hand, you can't have every number receive the same non-zero probability, since then the total probability would be infinite (it would be a sum of a constant positive number over infinitely many terms).
So essentially it says that some numbers will have a better chance of being chosen than others, which might not fit with the natural meaning of the word "random", which we like to think about as a uniform distribution, allowing each possibility the same chance of being selected.</p> <p>On the real numbers things are better, though: they form an uncountable set, and more specifically a set with those lovely features that the real numbers enjoy. These allow us to define an assortment of probabilities where singletons have probability $0$. We can restrict ourselves to the interval $[0,1]$, and not the entire set of real numbers, and then we have a very nice probability, which assigns an interval $[a,b]$ the probability of its length, $b-a$.</p> <p>So far, we have not made any appeal to the axiom of choice. Or have we? It turns out that without the axiom of choice it is possible that the real numbers could be expressed as the countable union of countable sets. Since a probability is required to be countably additive, this means that there is no way to define such a probability on the real numbers. We simply extend the problem from countable sets to the real numbers. Note that this doesn't mean that the real numbers are countable; the fact that a countable union of countable sets is countable itself depends on the axiom of choice.</p> <p>But finally, you might notice, we haven't picked any number! This is because probability doesn't really deal with picking the actual number, just with the odds of the number being in one set or another. The closest thing to picking an actual, arbitrary number that I can think of is the existential instantiation we talked about earlier.</p>
<p>There are distributions over all natural numbers, such as the Geometric distribution, and distributions over all real numbers, such as the normal distribution.</p> <p>But there is no uniform distribution in either case. You can see this as follows: if there were a uniform distribution over natural numbers, it would have to assign some probability $p$ to each natural number. If $p = 0$ then the total probability $\sum_{n\in\mathbb{N}} p(n)$ is 0. But if $p &gt; 0$ then the total probability is $\infty$. In either case we don't have a valid distribution.</p> <p>Similarly, if there were a uniform distribution over real numbers, it would have to assign some probability $p$ to each interval of the form $[n, n+1)$ for $n\in\mathbb{Z}$. Then an equivalent argument to the above shows that $p$ can't be 0 nor greater than 0 so no such distribution exists.</p> <p>If you mean something else, please clarify your question.</p>
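For instance (a quick illustration of mine, not part of the answer), the geometric distribution assigns a positive probability to every natural number, and these probabilities sum to $1$, exactly as the axioms require:

```python
# Geometric distribution on {0, 1, 2, ...}: P(n) = p * (1 - p)^n
p = 0.5
pmf = lambda n: p * (1 - p) ** n

positive = all(pmf(n) > 0 for n in range(1000))  # every natural gets positive mass
partial = sum(pmf(n) for n in range(200))        # partial sum of the total probability
print(positive, partial)                         # partial sums approach 1
```

No such construction is possible with all the `pmf` values equal, which is the content of the argument above.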
probability
<p>When flipping a coin to make important decisions in life you can flip once to choose between 2 possible outcomes. (Heads I eat cake, Tails I eat chocolate!)</p> <p>You can also flip twice to choose between 4 outcomes. (Heads-Heads, Tails-Tails, Heads-Tails, Tails-Heads)</p> <p>Can you use a coin to choose evenly between three possible choices? If so how?</p> <p>(Ignoring slight abnormalities in weight)</p>
<p>If you throw your coin $n$ times you have $2^n$ outcomes, the probability of each of which is $\frac{1}{2^n}$. The larger $n$ is, the better you can divide $2^n$ into three approximately equal parts:</p> <p>Just define $a_n=[2^n/3]$ and $b_n=[2\cdot 2^n/3]$, where $[\cdot]$ denotes rounding off (or on). Since $\frac{a_n}{2^n}\to\frac{1}{3}$ and $\frac{b_n}{2^n}\to\frac{2}{3}$ as $n\to\infty$, each of the three outcomes</p> <p>"the number of Heads is between $0$ and $a_n$",</p> <p>"the number of Heads is between $a_n$ and $b_n$", and</p> <p>"the number of Heads is between $b_n$ and $2^n$"</p> <p>has approximately the probability $\frac{1}{3}$.</p> <hr> <p><em>Alternatively</em>, you could apply your procedure to get four outcomes with the same probability (Heads-Heads, Tails-Tails, Heads-Tails, Tails-Heads) to your problem in the following way:</p> <p>Associate the three outcomes Heads-Heads, Tails-Tails, Heads-Tails with your three possible choices. In the case that Tails-Heads occurs, just repeat the experiment.</p> <p>Sooner or later you will find an outcome different from Tails-Heads.</p> <p>Indeed, by symmetry, the probability for first Heads-Heads, first Tails-Tails, or first Heads-Tails is $\frac{1}{3}$, respectively.</p> <hr> <p>(<em>Alternatively</em>, you could of course throw a die and select your first choice if the outcome is 1 or 2, select your second choice if the outcome is 3 or 4, and select your third choice if the outcome is 5 or 6.)</p>
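The second (rejection) procedure is easy to simulate. Here is a small Python sketch of my own, with `rng.random() < 0.5` standing in for a fair coin flip (True = heads); each of the three options should come up about a third of the time:

```python
import random

rng = random.Random(42)

def choose_of_three():
    """Flip two fair coins; HH -> 0, TT -> 1, HT -> 2, and re-flip on TH."""
    while True:
        a, b = rng.random() < 0.5, rng.random() < 0.5  # True = Heads
        if (a, b) == (True, True):
            return 0
        if (a, b) == (False, False):
            return 1
        if (a, b) == (True, False):
            return 2
        # (False, True), i.e. Tails-Heads: repeat the experiment

trials = 90_000
counts = [0, 0, 0]
for _ in range(trials):
    counts[choose_of_three()] += 1
fractions = [c / trials for c in counts]
print(fractions)  # each should be near 1/3
```

Note that the loop terminates with probability $1$ (each round rejects with probability only $1/4$), and the expected number of double flips is $4/3$.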
<p>A simple (practical, low-computation) approach to choosing among three options with equal probability exploits the fact that in a run of independent flips of an unbiased coin, the chance of encountering THT before TTH occurs is 1/3. So:</p> <p>Flip a coin repeatedly, keeping track of the last three outcomes. (Save time, if you like, by assuming the first flip was T and proceeding from there.)</p> <p>Stop whenever the last three are THT or TTH.</p> <p>If the last three were THT, select option 1. Otherwise flip the coin one more time, choosing option 2 upon seeing T and option 3 otherwise.</p>
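This procedure can also be simulated directly. The sketch below is my own illustration (without the time-saving first-flip trick), encoding tails as `True`; each option should be selected about a third of the time:

```python
import random

rng = random.Random(7)

def choose_via_patterns():
    # flip until the last three flips read THT or TTH (True = Tails)
    last = []
    while True:
        last = (last + [rng.random() < 0.5])[-3:]
        if last == [True, False, True]:   # THT -> option 1
            return 0
        if last == [True, True, False]:   # TTH -> flip once more
            return 1 if rng.random() < 0.5 else 2  # T -> option 2, H -> option 3

trials = 90_000
counts = [0, 0, 0]
for _ in range(trials):
    counts[choose_via_patterns()] += 1
fractions = [c / trials for c in counts]
print(fractions)  # each should be near 1/3
```

A short first-step analysis confirms the claimed $1/3$: once the run reaches suffix TT, the pattern TTH must win, and solving the resulting linear system for the hitting probabilities gives $\mathbb{P}(\text{THT first}) = 1/3$.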
logic
<p>I am comfortable with the different sizes of infinities and Cantor's "diagonal argument" to prove that the set of all subsets of an infinite set has cardinality strictly greater than the set itself. So if we have a set $\Omega$ and $|\Omega| = \aleph_i$, then (assuming continuum hypothesis) the cardinality of $2^{\Omega}$ is $|2^{\Omega}| = \aleph_{i+1} &gt; \aleph_i$ and we have $\aleph_{i+1} = 2^{\aleph_i}$.</p> <p>I am fine with these argument.</p> <p>What I don't understand is why should there be a smallest $\infty$? Is there a proof (or) an axiom that there exists a smallest $\infty$ and that this is what we address as "countably infinite"? or to rephrase the question "why can't I find a set whose power set gives us $\mathbb{N}$"?</p> <p>The reason why I am asking this question is in "some sense" $\aleph_i = \log_2(\aleph_{i+1})$. I do not completely understand why this process should stop when $\aleph_i = \aleph_0$.</p> <p>(Though coming to think about it I can feel that if I take an infinite set with $\log_2 \aleph_0$ elements I can still put it in one-to-one correspondence with the Natural number set. So Is $\log_2 \aleph_0 = \aleph_0$ (or) am I just confused? If $n \rightarrow \infty$, then $2^n \rightarrow \infty$ faster while $\log (n) \rightarrow \infty$ slower and $\log (\log (n)) \rightarrow \infty$ even slower and $\log(\log(\log (n))) \rightarrow \infty$ even "more" slower and so on).</p> <p>Is there a clean (and relatively elementary) way to explain this to me?</p> <p>(I am totally fine if you direct me to some paper (or) webpage. I tried googling but could not find an answer to my question)</p>
<p>First, let me clear up a misunderstanding.</p> <p>Question: Does $2^\omega = \aleph_1$? More generally, does $2^{\aleph_\alpha} = \aleph_{\alpha+1}$?</p> <p>The answer of "yes" to the first question is known as the continuum hypothesis (CH), while an answer of "yes" to the second is known as the generalized continuum hypothesis (GCH).</p> <p>Answer: Both are undecidable using ZFC. That is, Gödel proved that if you assume the answer to CH is "yes", then you don't add any new contradictions to set theory. In particular, this means it's impossible to prove the answer is "no".</p> <p>Later, Cohen showed that if you assume the answer is "no", then you don't add any new contradictions to set theory. In particular, this means it's impossible to prove the answer is "yes".</p> <p>The answer for GCH is the same.</p> <p>All of this is just to say that while you are allowed to assume an answer of "yes" to GCH (which is what you did in your post), there is no way you can prove that you are correct.</p> <p>With that out of the way, let me address your actual question.</p> <p>Yes, there is a proof that $\omega$ is the smallest infinite cardinality. It all goes back to some very precise definitions. In short, when one does set theory, all one really has to work with is the "is a member of" relation $\in$. One defines an "ordinal number" to be any transitive set $X$ such that $(X,\in)$ is a well ordered set. (Here, "transitive" means "every element is a subset". It's a weird condition which basically means that $\in$ is a transitive relation). For example, if $X = \emptyset$ or $X=\{\emptyset\}$ or $X = \{\{\emptyset\},\emptyset\}$, then $X$ is an ordinal. However, if $X=\{\{\emptyset\}\}$, then $X$ is not.</p> <p>There are 2 important facts about ordinals. First, <strong>every</strong> well ordered set is (order) isomorphic to a unique ordinal.
Second, for any two ordinals $\alpha$ and $\beta$, precisely one of the following holds: $\alpha\in\beta$ or $\beta\in\alpha$ or $\beta = \alpha$. In fact, it turns out that the collection of ordinals is well ordered by $\in$, modulo the detail that there is no set of ordinals.</p> <p>Now, cardinal numbers are simply special kinds of ordinal numbers. They are ordinal numbers which can't be bijected (in a way which is perhaps NOT order preserving) with any smaller ordinal. It follows from this that the collection of all cardinal numbers is also well ordered. Hence, as long as there is one cardinal with a given property, there will be a least one. One example of such a property is "is infinite".</p> <p>Finally, let me just point out that for <strong>finite</strong> numbers (i.e. finite natural numbers), one usually cannot find a solution $m$ to $m = \log_2(n)$. Thus, at least from an inductive reasoning point of view, it's not surprising that there are infinite cardinals which can't be written as $2^k$ for some cardinal $k$.</p>
<p>Suppose $A$ is an infinite set. In particular, it is not empty, so there exists a $x_1\in A$. Now $A$ is infinite, so it is not $\{x_1\}$, so there exists an $x_2\in A$ such that $x_2\neq x_1$. Now $A$ is infinite, so it is not $\{x_1,x_2\}$, so there exists an $x_3\in A$ such that $x_3\neq x_1$ and $x_3\neq x_2$. Now $A$ is infinite, so it is not $\{x_1,x_2,x_3\}$, so there exists an $x_4\in A$ such that $x_4\neq x_1$, $x_4\neq x_2$ and $x_4\neq x_3$... And so on.</p> <p>This way you can construct a sequence $(x_n)_{n\geq1}$ of elements of $A$ such that $x_i\neq x_j$ if $i\neq j$. If you define a function $f:\mathbb N\to A$ so that $f(i)=x_i$ for all $i\in\mathbb N$, then $f$ is injective.</p> <p>By definition, then, the cardinal of $A$ is greater than or equal to that of $\mathbb N$.</p> <p>Since $\mathbb N$ is itself infinite, this shows that the smallest infinite cardinal is $\aleph_0=|\mathbb N|$.</p>
linear-algebra
<p>Consider the class of rational functions that are the result of dividing one linear function by another:</p> <p>$$\frac{a + bx}{c + dx}$$</p> <p>One can easily compute that, for $\displaystyle x \neq -\frac cd$, $$\frac{\mathrm d}{\mathrm dx}\left(\frac{a + bx}{c + dx}\right) = \frac{bc - ad}{(c+dx)^2} \lessgtr 0 \text{ as } ad - bc \gtrless 0$$ Thus, we can easily check whether such a rational function is increasing or decreasing (on any connected interval in its domain) by checking the determinant of a corresponding matrix</p> <p>\begin{pmatrix}a &amp; b \\ c &amp; d\end{pmatrix}</p> <p>This made me wonder whether there is some known deeper principle that is behind this connection between linear algebra and rational functions (seemingly distant topics), or is this probably just a coincidence?</p>
<p>I'll put it more simply. If the determinant is zero, then the linear functions $a+bx$ and $c+dx$ are linearly dependent. For such a pair of functions this forces the ratio between them to be constant. The zero determinant condition is thereby a natural boundary between increasing and decreasing functions.</p>
<p>What you are looking at is a <a href="https://en.wikipedia.org/wiki/M%C3%B6bius_transformation" rel="noreferrer">Möbius transformation</a>. The relationship between matrices and these functions are given in some detail in the Wikipedia article. Most of this is not anything that I know much about, perhaps another responder will give better details.</p> <p>What you can find is that the composition of two of these functions corresponds to matrix multiplication with the matrix defined as you have inferred from the determinant issue. </p> <p>These are also related to continued fraction arithmetic since a continued fraction just is a composition of these functions. A simple continued fraction is a number $a_0+\frac{1}{a_1 + \frac{1}{a_2 + \cdots}}$ and you can see almost directly that each level of the continued fraction is something like $t+\frac{1}{x} = \frac{tx+1}{x}$ where "x" is "the rest of the continued fraction." Each time we expand a bit more of the continued fraction, we engage in just this composition of functions as above. So Gosper used this relationship to perform term-at-a-time arithmetic of continued fractions. In practice this means representing a continued fraction as a matrix product. </p> <p>For instance, $1+\sqrt{2} = 2 + \frac{1}{2+\frac{1}{2 + \cdots}}$ so you could represent it as $$\prod^{\infty} \pmatrix{2 &amp; 1 \\ 1 &amp; 0}$$ And to find out what $\frac{3}{5}(1+\sqrt{2})$ is you could then calculate, to arbitrary precision, $$\pmatrix{3 &amp; 0 \\ 0 &amp; 5}\times \prod^{\infty} \pmatrix{2 &amp; 1 \\ 1 &amp; 0}$$</p>
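The matrix-product view of continued fractions is easy to try out. The following Python sketch (my illustration of the idea above, using exact integer matrices) multiplies copies of $\left(\begin{smallmatrix}2&amp;1\\1&amp;0\end{smallmatrix}\right)$ and reads off the convergents of $1+\sqrt{2}$ from the first column; prepending $\left(\begin{smallmatrix}3&amp;0\\0&amp;5\end{smallmatrix}\right)$ rescales the value by $3/5$:

```python
import math

def matmul(A, B):
    # 2x2 integer matrix product
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

M = [[2, 1], [1, 0]]         # one level of the continued fraction [2; 2, 2, ...]
P = [[1, 0], [0, 1]]
for _ in range(25):
    P = matmul(P, M)
approx = P[0][0] / P[1][0]   # ratio of the first column = convergent p_k / q_k
print(approx, 1 + math.sqrt(2))

Q = matmul([[3, 0], [0, 5]], P)          # scale the value by 3/5
print(Q[0][0] / Q[1][0], 3 / 5 * (1 + math.sqrt(2)))
```

After 25 levels the convergent already agrees with $1+\sqrt 2$ to full double precision, reflecting the geometric convergence of continued fractions.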
probability
<p><strong>I would like to generate a random axis or unit vector in 3D</strong>. In 2D it would be easy, I could just pick an angle between 0 and 2*Pi and use the unit vector pointing in that direction. </p> <p>But in 3D <strong>I don't know how I can pick a random point on the surface of a sphere</strong>.</p> <p>If I pick two angles the distribution won't be uniform on the surface of the sphere. There would be more points at the poles and fewer points at the equator.</p> <p>If I pick a random point in the (-1,-1,-1):(1,1,1) cube and normalise it, then there would be more chance that a point gets chosen along the diagonals than from the center of the sides. So that's not good either.</p> <p><strong>But then what's the good solution?</strong> </p>
<p>You need to use an <a href="http://en.wikipedia.org/wiki/Equal-area_projection#Equal-area">equal-area projection</a> of the sphere onto a rectangle. Such projections are widely used in cartography to draw maps of the earth that represent areas accurately.</p> <p>One of the simplest such projections is the axial projection of a sphere onto the lateral surface of a cylinder, as illustrated in the following figure:</p> <p><img src="https://i.sstatic.net/GDe76.png" alt="Cylindrical Projection"></p> <p>This projection is area-preserving, and was used by Archimedes to compute the surface area of a sphere.</p> <p>The result is that you can pick a random point on the surface of a unit sphere using the following algorithm:</p> <ol> <li><p>Choose a random value of $\theta$ between $0$ and $2\pi$.</p></li> <li><p>Choose a random value of $z$ between $-1$ and $1$.</p></li> <li><p>Compute the resulting point: $$ (x,y,z) \;=\; \left(\sqrt{1-z^2}\cos \theta,\; \sqrt{1-z^2}\sin \theta,\; z\right) $$</p></li> </ol>
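The three-step algorithm translates directly into code. This is my own Python sketch of it, plus two quick checks: every sample has unit length, and the coordinate means are near zero, as the uniform distribution requires:

```python
import math, random

rng = random.Random(0)

def random_unit_vector():
    theta = rng.uniform(0.0, 2.0 * math.pi)  # step 1: random longitude
    z = rng.uniform(-1.0, 1.0)               # step 2: random height (uniform, by Archimedes)
    s = math.sqrt(1.0 - z * z)
    return (s * math.cos(theta), s * math.sin(theta), z)  # step 3

pts = [random_unit_vector() for _ in range(50_000)]
worst = max(abs(x * x + y * y + z * z - 1.0) for x, y, z in pts)
means = [sum(p[i] for p in pts) / len(pts) for i in range(3)]
print(worst, means)  # unit length; coordinate means near 0 by symmetry
```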
<p>Another commonly used convenient method of generating a uniform random point on the sphere in $\mathbb{R}^3$ is this: Generate a standard multivariate normal random vector $(X_1, X_2, X_3)$, and then normalize it to have length 1. That is, $X_1, X_2, X_3$ are three independent standard normal random numbers. There are many well-known ways to generate normal random numbers; one of the simplest is the <a href="http://en.wikipedia.org/wiki/Box-Muller">Box-Muller algorithm</a> which produces two at a time.</p> <p>This works because the standard multivariate normal distribution is invariant under rotation (i.e. orthogonal transformations).</p> <p>This has the nice property of generalizing immediately to any number of dimensions without requiring any more thought.</p>
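A minimal Python sketch of this method (my addition), using the standard library's `random.gauss` for the normal samples:

```python
import math, random

rng = random.Random(3)

def random_unit_vector():
    # three independent standard normals, normalized to length 1
    v = [rng.gauss(0.0, 1.0) for _ in range(3)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

pts = [random_unit_vector() for _ in range(50_000)]
worst = max(abs(sum(x * x for x in p) - 1.0) for p in pts)
means = [sum(p[i] for p in pts) / len(pts) for i in range(3)]
print(worst, means)  # all points on the sphere; coordinate means near 0
```

Changing the `3` in the generator to any dimension `d` gives a uniform point on the `(d-1)`-sphere, which is the generalization mentioned above.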
matrices
<blockquote> <p><strong>Possible Duplicate:</strong><br /> <a href="https://math.stackexchange.com/questions/51292/relation-of-this-antisymmetric-matrix-r-left-beginsmallmatrix0-1-10-e">Relation of this antisymmetric matrix <span class="math-container">$r = \!\left(\begin{smallmatrix}0 &amp;1\\-1 &amp; 0\end{smallmatrix}\right)$</span> to <span class="math-container">$i$</span></a></p> </blockquote> <p>On Wikipedia, it says that:</p> <blockquote> <p><strong>Matrix representation of complex numbers</strong><br /> Complex numbers <span class="math-container">$z=a+ib$</span> can also be represented by <span class="math-container">$2\times2$</span> matrices that have the following form: <span class="math-container">$$\pmatrix{a&amp;-b\\b&amp;a}$$</span></p> </blockquote> <p>I don't understand why they can be represented by these matrices or where these matrices come from.</p>
<p>No one seems to have mentioned it explicitly, so I will. The matrix <span class="math-container">$J = \left( \begin{smallmatrix} 0 &amp; -1\\1 &amp; 0 \end{smallmatrix} \right)$</span> satisfies <span class="math-container">$J^{2} = -I,$</span> where <span class="math-container">$I$</span> is the <span class="math-container">$2 \times 2$</span> identity matrix (in fact, this is because <span class="math-container">$J$</span> has eigenvalues <span class="math-container">$i$</span> and <span class="math-container">$-i$</span>, but let us put that aside for one moment). Hence there really is no difference between the matrix <span class="math-container">$aI + bJ$</span> and the complex number <span class="math-container">$a +bi.$</span></p>
<p>Look at the arithmetic operations and their actions. With + and *, these matrices form a field. And we have the isomorphism $$a + ib \mapsto \left[\matrix{a&amp;-b\cr b &amp;a}\right].$$</p>
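<p>The isomorphism is easy to check mechanically: multiplying the matrices reproduces complex multiplication, and the matrix representing <span class="math-container">$i$</span> squares to <span class="math-container">$-I$</span>. A small sketch in Python (plain lists, no external libraries; the helper names are mine):</p>

```python
def to_matrix(z):
    """Represent a + bi as the 2x2 matrix [[a, -b], [b, a]]."""
    a, b = z.real, z.imag
    return [[a, -b], [b, a]]

def mat_mul(M, N):
    """Ordinary 2x2 matrix multiplication."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 1 + 2j, 3 - 1j
# multiplying the matrices is the same as multiplying the complex numbers
assert mat_mul(to_matrix(z), to_matrix(w)) == to_matrix(z * w)
# and the matrix representing i squares to -I, just as i^2 = -1
assert mat_mul(to_matrix(1j), to_matrix(1j)) == to_matrix(-1 + 0j)
```

<p>Addition works the same way entrywise, which is what makes the map a field isomorphism rather than just a bijection.</p>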
number-theory
<p>Suppose you're trying to teach analysis to a stubborn algebraist who refuses to acknowledge the existence of any characteristic $0$ field other than $\mathbb{Q}$. How ugly are things going to get for him?</p> <p>The algebraist argues that the real numbers are a silly construction because any real number can be approximated to arbitrarily high precision by the rational numbers - i.e., given any real number $r$ and any $\epsilon&gt;0$, the set $\left\{x\in\mathbb{Q}:\left|x-r\right|&lt;\epsilon\right\}$ is nonempty, thus sating the mad gods of algebra.</p> <p>As @J.M. and @75064 pointed out to me in chat, we do start having some topology problems, for example that $f(x)=x^2$ and $g(x)=2$ are nonintersecting functions in $\mathbb{Q}$. They do, however, come <em>arbitrarily close</em> to intersecting, i.e. given any $\epsilon&gt;0$ there exist rational solutions to $\left|2-x^2\right|&lt;\epsilon$. The algebraist doesn't find this totally unsatisfying.</p> <blockquote> <p>Where is this guy <em>really</em> going to start running into trouble? Are there definitions in analysis which simply can't be reasonably formulated without leaving the rational numbers? Which concepts would be particularly difficult to understand without the rest of the reals?</p> </blockquote>
<p>What kind of <em>algebraist</em> &quot;refuses to acknowledge the existence of any characteristic <span class="math-container">$0$</span> field other than <span class="math-container">$\mathbb{Q}$</span>&quot;?? But there is a good question in here nevertheless: the basic definitions of limit, continuity, and differentiability all make sense for functions <span class="math-container">$f: \mathbb{Q} \rightarrow \mathbb{Q}$</span>. The real numbers are in many ways a much more complicated structure than <span class="math-container">$\mathbb{Q}$</span> (and in many other ways are much simpler, but never mind that here!), so it is natural to ask whether they are really necessary for calculus.</p> <p>Strangely, this question has gotten serious attention only relatively recently. For instance:</p> <p><span class="math-container">$\bullet$</span> <a href="https://books.google.com/books/about/A_companion_to_analysis.html?id=H3zGTvmtp74C" rel="noreferrer">Tom Korner's real analysis text</a> takes this question seriously and gives several examples of pathological behavior over <span class="math-container">$\mathbb{Q}$</span>.</p> <p><span class="math-container">$\bullet$</span> <a href="https://books.google.com/books/about/Introduction_to_Real_Analysis.html?id=ztGKKuDcisoC" rel="noreferrer">Michael Schramm's real analysis text</a> is unusually thorough and lucid in making logical connections between the main theorems of calculus (though there is one mistaken implication there). 
I found it to be a very appealing text because of this.</p> <p><span class="math-container">$\bullet$</span> <a href="http://alpha.math.uga.edu/%7Epete/2400full.pdf" rel="noreferrer">My honors calculus notes</a> often explain what goes wrong if you use an ordered field other than <span class="math-container">$\mathbb{R}$</span>.</p> <p><span class="math-container">$\bullet$</span> <span class="math-container">$\mathbb{R}$</span> is the unique ordered field in which <a href="http://alpha.math.uga.edu/%7Epete/instructors_guide_shorter.pdf" rel="noreferrer">real induction</a> is possible.</p> <p><span class="math-container">$\bullet$</span> The most comprehensive answers to your question can be found in two recent Monthly articles, <a href="https://faculty.uml.edu//jpropp/reverse.pdf" rel="noreferrer">by Jim Propp</a> and <a href="https://www.jstor.org/stable/10.4169/amer.math.monthly.120.02.099" rel="noreferrer" title="Toward a More Complete List of Completeness Axioms, The American Mathematical Monthly, Vol. 120, No. 2 (February 2013), pp. 99-114">by Holger Teismann</a>.</p> <p>But as the title of Teismann's article suggests, even the latter two articles do not complete the story.</p> <p><span class="math-container">$\bullet$</span> <a href="http://alpha.math.uga.edu/%7Epete/Clark-Diepeveen_PLUS.pdf" rel="noreferrer">Here is a short note</a> whose genesis was on this site which explains a further pathology of <span class="math-container">$\mathbb{Q}$</span>: there are absolutely convergent series which are not convergent.</p> <p><span class="math-container">$\bullet$</span> Only a few weeks ago Jim Propp wrote to tell me that <a href="https://en.wikipedia.org/wiki/Knaster%E2%80%93Tarski_theorem" rel="noreferrer">Tarski's Fixed Point Theorem</a> characterizes completeness in ordered fields and admits a nice proof using Real Induction. (I put it in my honors calculus notes.) So the fun continues...</p>
<p>This situation strikes me as about as worthwhile a use of your time as trying to reason with a student in a foreign language class who refuses to accept a grammatical construction or vocabulary word that is used every day by native speakers. Without the real numbers as a background against which constructions are made, most fundamental constructions in analysis break down, e.g., no coherent theory of power series as functions. And even algebraic functions have non-algebraic antiderivatives: if you are a fan of the function $1/x$ and you want to integrate it then you'd better be ready to accept logarithms. The theorem that a continuous function on a closed and bounded interval is uniformly continuous breaks down if you only work over the rational numbers: try $f(x) = 1/(x^2-2)$ on the <em>rational</em> closed interval $[1,2]$.</p> <p>Putting aside the issue of analysis, such a math student who continues with this attitude isn't going to get far even in algebra, considering the importance of algebraic numbers that are not rational numbers, even for solving problems that are posed only in the setting of the rational numbers. This student must have a very limited understanding of algebra. After all, what would this person say about constructions like ${\mathbf Q}[x]/(x^2-2)$?</p> <p>Back to analysis, if this person is a die hard algebraist then provide a definition of the real numbers that feels largely algebraic: the real numbers are the quotient ring $A/M$ where $A$ is the ring of Cauchy sequences in ${\mathbf Q}$ and $M$ is the ideal of sequences in ${\mathbf Q}$ that tend to $0$. This is a maximal ideal, so $A/M$ is a field, and by any of several ways one can show this is more than just the rational numbers in disguise (e.g., it contains a solution of $t^2 = 2$, or it's not countable). 
If this student refuses to believe $A/M$ is a new field of characteristic $0$ (though there are much easier ways to construct fields of characteristic $0$ besides the rationals), direct him to books that explain what fields are.</p>
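<p>To make the uniform-continuity pathology concrete, here is a small Python sketch (using <code>fractions.Fraction</code> so every number really is rational): evaluating $f(x) = 1/(x^2-2)$ at the continued-fraction convergents of $\sqrt 2$, all of which are rationals in $[1,2]$, shows $f$ is unbounded there even though it is defined at every rational point of the interval.</p>

```python
from fractions import Fraction

def f(x):
    # defined at every rational in [1, 2], since x*x == 2 has no rational solution
    return 1 / (x * x - 2)

# continued-fraction convergents p/q of sqrt(2): 1/1, 3/2, 7/5, 17/12, ...
p, q = 1, 1
vals = []
for _ in range(8):
    vals.append(abs(f(Fraction(p, q))))   # works out to exactly q^2
    p, q = p + 2 * q, p + q

print([int(v) for v in vals])  # [1, 4, 25, 144, 841, 4900, 28561, 166464]
```

<p>The values are exactly $q^2$ because the convergents satisfy the Pell equation $p^2 - 2q^2 = \pm 1$, so $|x^2 - 2| = 1/q^2$.</p>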
combinatorics
<p>I want to know the minimum number of lines needed to touch every square of an <span class="math-container">$n \times n$</span> grid. The only added rule is that the line has to pass <em>inside</em> the square, not on the edge/corner. I have found a solution for <span class="math-container">$n-1$</span> lines for all <span class="math-container">$2&lt;n\le 10$</span> but not with fewer lines. I figure you can represent the lines by the squares they pass through, thus making a finite amount of &quot;lines&quot;, but I don't have many further ideas. Here is an example of what I mean by &quot;covering a square&quot;. The line just needs to pass through any part of the square: <a href="https://i.sstatic.net/MIt2B.png" rel="noreferrer"><img src="https://i.sstatic.net/MIt2B.png" alt="enter image description here" /></a></p> <p>Here is also a solution for a <span class="math-container">$3 \times 3$</span> grid with 2 lines: <a href="https://i.sstatic.net/36IKw.png" rel="noreferrer"><img src="https://i.sstatic.net/36IKw.png" alt="enter image description here" /></a></p>
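<p>The covering condition (&quot;passes through the interior&quot;) can be checked mechanically, which is handy for experimenting with candidate line sets. Below is a small Python sketch; the two lines at the end are hand-picked to cover the <span class="math-container">$3\times3$</span> grid (they are not read off the figure above, but play the same role):</p>

```python
def cells_covered(m, b, n):
    """Cells (col i, row j) of an n x n grid of unit squares whose open
    interior meets the non-vertical line y = m*x + b."""
    covered = set()
    for i in range(n):
        y0, y1 = m * i + b, m * (i + 1) + b   # heights at the column's edges
        lo, hi = min(y0, y1), max(y0, y1)
        for j in range(n):
            if m != 0:
                # open y-range (lo, hi) over x in (i, i+1) meets open (j, j+1)
                if hi > j and lo < j + 1:
                    covered.add((i, j))
            elif j < b < j + 1:               # horizontal line: one full row
                covered.add((i, j))
    return covered

lines = [(0.55, 0.85), (-1.5, 3.6)]           # two hand-picked (slope, intercept) pairs
union = set().union(*(cells_covered(m, b, 3) for m, b in lines))
assert union == {(i, j) for i in range(3) for j in range(3)}  # all 9 squares hit
```

<p>The strict inequalities implement the &quot;inside, not on the edge/corner&quot; rule: a line that only grazes a corner fails both open-interval tests.</p>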
<p>I have worked on and asked many others about this problem. I think this is a hard problem. According to the authors of the paper below, it is an open problem, as of 2013.</p> <p>I claim to have a construction of <span class="math-container">$n-1$</span> lines for all <span class="math-container">$n$</span>, but I never wrote it down formally. I believe this to be the correct answer, but lower bound proofs are quite difficult. For small <span class="math-container">$n$</span> (say, <span class="math-container">$n \leq 9$</span>), I could prove a matching lower bound by some ad hoc arguments.</p> <p>If you want to have an interactive go at this problem, check out my website: <a href="https://bob1123.github.io/website/linecovering.html" rel="nofollow noreferrer">https://bob1123.github.io/website/linecovering.html</a>.<br /> <strong>Update</strong> (2023-11-09): <a href="https://bob1123.github.io/linecovering.html" rel="nofollow noreferrer">https://bob1123.github.io/linecovering.html</a></p> <p>A trivial lower bound is <span class="math-container">$n/2$</span>. A better lower bound of <span class="math-container">$2n/3$</span> is known. See Eyal Ackerman and Rom Pinchasi. &quot;Covering a chessboard with staircase walks.&quot; Discrete Mathematics 313.22 (2013): 2547-2551. They generalize from lines to &quot;monotone paths,&quot; i.e. contiguous sets of squares which always move down (or stay the same) when going right or always move up (or stay the same) when going right. When covering with monotone paths, the <span class="math-container">$2n/3$</span> is tight, so to get to <span class="math-container">$n-1$</span> you need to exploit the special structure of lines.</p> <p>I will note that the only follow-up I know of is Kerimov, Azer. &quot;Covering a rectangular chessboard with staircase walks.&quot; Discrete Mathematics 338.12 (2015): 2229-2233. 
They generalize from <span class="math-container">$n \times n$</span> board to an <span class="math-container">$n \times m$</span> board.</p>
<p>Consider a continuous function <span class="math-container">$w$</span> in <span class="math-container">$Q=[-1,1]^2$</span> and think of your grid as of <span class="math-container">$2n\times 2n$</span> grid of squares of size <span class="math-container">$1/n$</span>. Then the sum of the values of <span class="math-container">$w$</span> along the squares intersecting a line <span class="math-container">$\ell$</span> is essentially <span class="math-container">$n\int_{\ell\cap Q} w(|dx|+|dy|)\le n\sqrt 2\int_{\ell\cap Q}w\,ds$</span> where <span class="math-container">$s$</span> is the length element on <span class="math-container">$\ell$</span>. Now, the full sum is approximately <span class="math-container">$n^2\int_Q w$</span>. Thus we have an asymptotic bound of <span class="math-container">$n$</span> times <span class="math-container">$\frac 1{\sqrt 2}\int_Q w$</span> divided by the supremum of <span class="math-container">$\int_{\ell\cap Q}w\,ds$</span> over all lines. Now take <span class="math-container">$w$</span> to be a smoothed version of <span class="math-container">$1/{\sqrt{1-|z|^2}}$</span> in the unit disk and <span class="math-container">$0$</span> outside (the function used to show that you cannot cover the unit disk by strips of total width less than <span class="math-container">$2$</span>). Then the ratio of the integral to the supremum is <span class="math-container">${2\pi}/\pi=2$</span> and we get <span class="math-container">$\sqrt 2 n$</span> versus expected <span class="math-container">$2n$</span>, which is a loss of factor of <span class="math-container">$\sqrt 2$</span>. Of course, the whole argument is a shameless plagiarism from Archimedes and it seems like there should be room for some improvement. :-)</p>
game-theory
<p>I want to self study game theory. Which math-related qualifications should I have? And can you recommend any books? Where do I have to begin?</p>
<p>I've decided to flesh out my small comment into a (hopefully respectable) answer.</p> <p>The book I read to learn Game Theory is called "<a href="http://www.rand.org/pubs/commercial_books/CB113-1.html" rel="noreferrer">The Compleat Strategyst</a>", thanks to J.M. for pointing out that it is now a free download. This was one of the first books on Game Theory, and at this point is probably very dated, but it is a nice easy introduction and, since it is free, you may as well go through it. I read the whole book and did all the examples in a couple of weeks. I said before that Linear Algebra was a prerequisite, however after flipping through it again I see that they explain all the mechanics necessary within the book itself, so unless you are also interested in the theory behind it, you will be fine without any linear algebra background.</p> <p>Since it sounds like you do want the theory (and almost any aspect of Game Theory beyond the introduction provided by that book will still require Linear Algebra) you may want to grab a Linear Algebra book. I'm partial to <a href="http://linear.axler.net/" rel="noreferrer">Axler's Linear Algebra Done Right</a>, which is (in my opinion) sufficient for self-study.</p> <p>The Wikipedia page on <a href="http://en.wikipedia.org/wiki/Game_theory" rel="noreferrer">Game Theory</a> lists many types of games. 
Aspects of the first five are covered at various lengths in "The Compleat Strategyst", these include:</p> <ul> <li>Cooperative or non-cooperative</li> <li>Symmetric and asymmetric</li> <li>Zero-sum and non-zero-sum</li> <li>Simultaneous and sequential</li> <li>Perfect information and imperfect information</li> </ul> <p>The rest of the math you will need to know depends on what sort of games you're interested in exploring after that, and the math required is given away largely by the name:</p> <ul> <li><a href="http://en.wikipedia.org/wiki/Combinatorial_game_theory" rel="noreferrer">Combinatorial Game Theory</a> will likely require combinatorics.</li> <li><a href="http://en.wikipedia.org/wiki/Game_theory#Infinitely_long_games" rel="noreferrer">Infinitely long games</a> seem to be related to set theory.</li> <li>Both <a href="http://en.wikipedia.org/wiki/Game_theory#Discrete_and_continuous_games" rel="noreferrer">discrete and continuous games</a> and <a href="http://en.wikipedia.org/wiki/Game_theory#Many-player_and_population_games" rel="noreferrer">many-player/population games</a> would seem to require calculus (and perhaps differential equations).</li> <li><a href="http://en.wikipedia.org/wiki/Game_theory#Stochastic_outcomes_.28and_relation_to_other_fields.29" rel="noreferrer">Stochastic outcomes</a> are related to statistics.</li> <li><a href="http://en.wikipedia.org/wiki/Game_theory#Metagames" rel="noreferrer">Metagames</a> (also sometimes referred to as "reverse Game Theory") use some fairly sophisticated mathematics, so you'll probably need a good understanding of analysis and abstract algebra.</li> </ul> <p>Also see this (somewhat duplicate) <a href="https://math.stackexchange.com/questions/43632/good-non-mathematician-book-on-game-theory">question</a> for video lectures which will give you a better understanding of what game theory is before you shell out any money to buy anything.</p>
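<p>For a taste of the book's core material in code form: a <span class="math-container">$2\times2$</span> zero-sum game with no saddle point has a closed-form optimal mixed strategy, found by making the opponent indifferent between their two choices. A minimal Python sketch (the function name and example game are mine, not from the book):</p>

```python
def solve_2x2_zero_sum(a, b, c, d):
    """Row player's optimal mixed strategy and the game value for the
    zero-sum game [[a, b], [c, d]] (payoffs to Row), assuming no
    saddle point so a fully mixed strategy is optimal."""
    denom = a - b - c + d
    p = (d - c) / denom              # probability of playing row 1
    value = (a * d - b * c) / denom  # expected payoff under optimal play
    return p, value

# Matching pennies [[1, -1], [-1, 1]]: mix 50/50, game value 0
print(solve_2x2_zero_sum(1, -1, -1, 1))  # (0.5, 0.0)
```

<p>The derivation is one line of algebra: equalizing Row's expected payoff across Column's two options, $pa + (1-p)c = pb + (1-p)d$, gives $p = (d-c)/(a-b-c+d)$.</p>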
<blockquote> <p>Shameless Advertisement: <a href="http://area51.stackexchange.com/proposals/47845/game-theory?referrer=MGta8a5T8SlJlb1swuXPYQ2">Game Theory proposal</a> on Area 51 that you should totally follow!</p> </blockquote> <p>It definitely depends on the flavor of game theory you're interested in, but in my experience, no truly introductory text requires anything beyond simple algebra and logical reasoning. Once you get into something advanced, it requires in-depth knowledge of a specific subfield (say research on permutations), but that isn't something I would prepare for, assuming you can learn quickly. Rather, it seems mathematicians and social scientists often collaborate to solve these problems.</p> <blockquote> <p>My Personal Recommendation: <a href="http://en.wikipedia.org/wiki/Game_theory">Wikipedia's entry on Game Theory</a> and then also read about each game in the <a href="http://en.wikipedia.org/wiki/List_of_games_in_game_theory">list of games</a> and probably the entry on <a href="http://en.wikipedia.org/wiki/Solution_concept">solution concepts</a>.</p> </blockquote> <p>No Math:<br> - <a href="http://rads.stackoverflow.com/amzn/click/0393310353">Thinking Strategically</a><br> - <a href="http://en.wikipedia.org/wiki/Evolution_of_cooperation">The Evolution of Cooperation</a><br> - <a href="http://en.wikipedia.org/wiki/The_Complexity_of_Cooperation">The Complexity of Cooperation</a><br> Minimal Notation<br> - <a href="http://rads.stackoverflow.com/amzn/click/0691090394">Behavioral Game Theory</a><br> - <a href="http://rads.stackoverflow.com/amzn/click/0262061945">The Theory of Learning in Games</a><br> (Technical) Textbooks<br> - <a href="http://rads.stackoverflow.com/amzn/click/0195128958">An Introduction to Game Theory</a><br> - <a href="http://www.cambridge.org/journals/nisan/downloads/Nisan_Non-printable.pdf">Algorithmic Game Theory</a><br> - <a href="http://rads.stackoverflow.com/amzn/click/0123745071">Auction Theory</a><br> Reference 
Books:<br> - <a href="http://rads.stackoverflow.com/amzn/click/0262514133">Combinatorial Auctions</a><br> - <a href="http://rads.stackoverflow.com/amzn/click/0444826424">Handbook of Experimental Economics Results</a><br> - <a href="http://rads.stackoverflow.com/amzn/click/0691058970">The Handbook of Experimental Economics</a><br> Journals<br> - <a href="http://www.journals.elsevier.com/games-and-economic-behavior/">Games and Economic Behavior</a> </p> <p>Reading recently-published papers is quite fun; they're usually sufficiently contained such that if you can read a paper (that is, read only its abstract, introduction, and conclusion), you can get a better idea of why concepts found in an undergraduate text are considered central.</p>
logic
<p>I'm wondering if there are any non-standard theories (built upon ZFC with some axioms weakened or replaced) that make formal sense of hypothetical set-like objects whose "cardinality" is "in between" the finite and the infinite. In a world like that non-finite may not necessarily mean infinite and there might be a "set" with countably infinite "power set".</p>
<p>There's a few things I can think of which might fit the bill:</p> <ul> <li><p>We could work in a <em>non-<span class="math-container">$\omega$</span> model</em> of ZFC. In such a model, there are sets the model <em>thinks</em> are finite, but which are actually infinite; so there's a distinction between &quot;internally infinite&quot; and &quot;externally infinite.&quot; (A similar thing goes on in <em>non-standard analysis</em>.)</p> </li> <li><p>Although their existence is ruled out by the axiom of choice, it is consistent with ZF that there are sets which are not finite but are <strong>Dedekind-finite</strong>: they don't have any non-trivial self-injections (that is, Hilbert's Hotel doesn't work for them). Such sets are similar to genuine finite sets in a number of ways: for instance, you can show that a Dedekind-finite set can be even (= partitionable into pairs) or odd (= partitionable into pairs and one singleton) or neither but not both. And in fact it is consistent with ZF that the Dedekind-finite cardinalities are linearly ordered, in which case they form a nonstandard model of true arithmetic; see <a href="https://mathoverflow.net/questions/172329/does-sageevs-result-need-an-inaccessible">https://mathoverflow.net/questions/172329/does-sageevs-result-need-an-inaccessible</a>.</p> </li> <li><p>You could also work in <strong>non-classical logic</strong> - for instance, in a <strong>topos</strong>. I don't know much about this area, but lots of subtle distinctions between classically-equivalent notions crop up; I strongly suspect you'd find some cool stuff here.</p> </li> </ul>
<p>Well, there are a few notions of "infinite" sets that aren't equivalent in $\mathsf{ZF}.$ One sort is called Dedekind-infinite ("D-infinite", for short) which is a set with a countably infinite subset, or equivalently, a set which has a proper subset of the same cardinality. So, a set is D-finite if and only if the Pigeonhole Principle holds on that set. The more common notion is Tarski-infinite (usually just called "infinite"), which describes sets for which there is no injection into any set of the form $\{0,1,2,...,n\}.$</p> <p>It turns out, then, that the following are equivalent in $\mathsf{ZF}$:</p> <ol> <li>Every D-finite set is finite.</li> <li>D-finite unions of D-finite sets are D-finite.</li> <li>Images of D-finite sets are D-finite.</li> <li>Power sets of D-finite sets are D-finite.</li> </ol> <p>Without a weak Choice principle (anything that implies $\aleph_0$ to be the smallest infinite cardinality, rather than simply a minimal infinite cardinality), the following may occur: </p> <ol> <li>There may be infinite, D-finite sets. In particular, there may be infinite sets whose cardinality is not comparable to $\aleph_0.$ Put another way, there may be infinite sets such that removing an element from such a set makes a set with strictly smaller cardinality.</li> <li>There may be a D-finite set of D-finite sets whose union is D-infinite.</li> <li>There may be a surjective function from a D-finite set to a D-infinite set.</li> <li>There may be a D-finite set whose power set is D-infinite.</li> </ol>
differentiation
<p>Integration is supposed to be the inverse of differentiation, but the integral of the derivative is not equal to the derivative of the integral:</p> <blockquote> <p>$$\dfrac{\mathrm{d}}{\mathrm{d}x}\left(\int f(x)\mathrm{d}x\right) = f(x) \neq \int\left(\dfrac{\mathrm{d}}{\mathrm{d}x}f(x)\right)\mathrm{d}x$$</p> </blockquote> <p>For instance: $$\begin{align*} &amp;\dfrac{\mathrm{d}}{\mathrm{d}x}\left(\int 2x+1\;\mathrm{d}x\right) &amp;&amp;= \dfrac{\mathrm{d}}{\mathrm{d}x}\left(x^2+x+C\right) &amp;= 2x+1\\ &amp;\int\left(\dfrac{\mathrm{d}}{\mathrm{d}x}\left(2x+1\right)\right)\mathrm{d}x &amp;&amp;= \int 2\;\mathrm{d}x &amp;= 2x+C\end{align*}$$</p> <p>Why isn't it defined such that $\dfrac{\mathrm{d}}{\mathrm{d}x}a = \dfrac{\mathrm{d}a}{\mathrm{d}x}$, where $a$ is a constant, and $\int f(x)\;\mathrm{d}x = F(x)$? Then we would have: $$\begin{align*} &amp;\dfrac{\mathrm{d}}{\mathrm{d}x}\left(\int 2x+1\;\mathrm{d}x\right) &amp;&amp;= \dfrac{\mathrm{d}}{\mathrm{d}x}\left(x^2+x\right) &amp;= 2x+1\\ &amp;\int\left(\dfrac{\mathrm{d}}{\mathrm{d}x}\left(2x+1\right)\right)\mathrm{d}x &amp;&amp;= \int \left(2+\dfrac{\mathrm{d1}}{\mathrm{d}x}\right)\;\mathrm{d}x &amp;= 2x+1\end{align*}$$</p> <p>Then we would have:</p> <blockquote> <p>$$\dfrac{\mathrm{d}}{\mathrm{d}x}\left(\int f(x)\mathrm{d}x\right) = f(x) = \int\left(\dfrac{\mathrm{d}}{\mathrm{d}x}f(x)\right)\mathrm{d}x$$</p> </blockquote> <p>So what is wrong with my thinking, and why isn't this the used definition? </p>
<p>The downvotes on this question aren't really fair. Sure, it's a rather simple question to answer trivially, but understanding exactly why derivatives and integrals aren't perfect inverses (and how to deal with that) is hugely important for everything from solving differential equations to constructing topological invariants on manifolds. Furthermore, it's common for students to think invertible operations are always preferable since they preserve all the information, so it's useful to present examples which show that noninvertibility and information loss is not only acceptable, but in many cases desirable. Some of this may go over your head initially (I tried to keep the level accessible to undergraduate students but in the end I had to assume a bit of background), but I hope you'll at least get some idea of what I'm saying and possibly take up studying some of these things in more depth if you're interested in them.</p> <p>At the most fundamental level, it's easy to see exactly what fails. Suppose we take any two arbitrary constant functions $f_1: x \mapsto c_1$ and $f_2 : x \mapsto c_2$. Then $\frac{d f_1}{dx} = \frac{d f_2}{dx} = 0$. When you define an antiderivative $\int$, you'll have to pick a particular (constant) function $\int 0 dx = c$. Assuming $c_1 \ne c_2$, we can't simultaneously satisfy both $\int \frac{d f_1}{dx} dx = c_1$ and $\int\frac{d f_2}{dx}dx=c_2$, since $c$ can't equal both $c_1$ and $c_2$. This comes directly from the definition of the derivative as $$\displaystyle \frac{df}{dx}|_{x = x_0} = \lim_{x \rightarrow x_0} \frac{f(x) - f(x_0)}{x-x_0},$$ and so there's no helping it so long as we keep this our definition of the derivative and allow arbitrary functions.</p> <p>Furthermore, this is in some sense the "only" way this can fail. Let's suppose we have two arbitrary smooth functions $g_1$ and $g_2$, such that $\frac{dg_1}{dx} = \frac{dg_2}{dx}$. 
Then it must hold that $\frac{d}{dx}(g_1-g_2)=0$, and so $g_1(x) - g_2(x) = c$ is some constant function. The only way for the derivatives of two smooth functions on $\mathbb R$ to be the same is for the difference of the two functions to be a constant. Of course, for differentiation to be invertible, we need to have a function which is infinitely differentiable, i.e. smooth.</p> <p>If you've studied linear algebra, you'll realize what we're doing is actually familiar in that context. (If not, while this answer could probably be written without making extensive use of linear algebra, it would necessarily be far longer, and it's probably already too long, so I apologize in advance that you'll probably have a hard time reading some of it.) Rather than thinking of functions as individual objects, it's useful to collect all smooth functions together into a single object. This is called $C^\infty(\mathbb R)$, the space of smooth functions on $\mathbb R$. On this space, we have point-wise addition and scalar multiplication, so it forms a vector space. The derivative can be thought of as a linear operator on this space $D: C^{\infty}(\mathbb R) \rightarrow C^\infty(\mathbb R), f \mapsto \frac{df}{dx}$. In this language, what we've shown is that $\ker D$ (the <a href="http://en.wikipedia.org/wiki/Kernel_(linear_algebra)">kernel</a> of $D$, that is, everything which $D$ sends to $0$) is exactly the set of constant functions. A general result from linear algebra is that a linear map is one-to-one if and only if its kernel is $0$. </p> <p>If we want to make $D$ invertible (so that we could define an anti-derivative $\int$ which is a literal inverse to $D$), we have a few options.
I have to warn you that the terminology and notation here is nonstandard (there is no standard for most of it) so you'll need to use caution comparing to other sources.</p> <hr> <p><strong>Option 1: Try to add more "functions" to $C^\infty (\mathbb R)$ which will make $D$ invertible.</strong> </p> <p>Here's one approach you can use. Define a larger space $\bar C^\infty(\mathbb R) \supset C^\infty(\mathbb R)$ by allowing the addition to each function of an arbitrary finite formal sum of the form $a_1 \zeta_1 + a_2 \zeta_2 + a_3 \zeta_3 + \cdots + a_n \zeta_n$, where the $\zeta_i$ are formal parameters. A generic element of $\bar C^\infty(\mathbb R)$ is something like $f(x) + a_1 \zeta_1 + a_2 \zeta_2 + a_3 \zeta_3 + \cdots + a_n \zeta_n$ for some $f \in C^\infty(\mathbb R)$, $n \ge 0$, and $a_1, \ldots, a_n \in \mathbb R$. The $\zeta_i$ will keep track of the information we'd normally lose by differentiating, so that differentiation will be invertible.</p> <p>We'll define a derivative $\bar D$ on $\bar C^\infty(\mathbb R)$ based on the derivative $D$ we already know for $C^\infty(\mathbb R)$. For any function $f \in C^\infty (\mathbb R)$, let $(\bar D f)(x) = \frac{df}{dx} + f(0) \zeta_1$. The choice of $f(0)$ here is somewhat arbitrary; you could replace $0$ with any other real number, or even do some things which are more complicated. Also define $\bar D(\zeta_i) = \zeta_{i+1}$. This defines $\bar D$ on a set which spans our space, and so we can linearly extend it to the whole space. It's easy to see that $\bar D$ is one-to-one and onto; hence, invertible. The inverse to $\bar D$ is then $$\int (F(x') + a_1 \zeta_1 + \cdots + a_n \zeta_n) dx' = \int_0^x F(x') dx' + a_1 + a_2 \zeta_1 + \cdots+ a_n \zeta_{n-1}.$$ This is a full inverse to $\bar D$ in the sense that ordinary anti-derivatives fail to be.</p> <p>This is, in fact, <em>almost exactly</em> what the OP was trying to do.
He wanted to add formal variables like $\frac{da}{dx}$ so that if $a \ne b$ are constants, $\frac{da}{dx} \ne \frac{db}{dx}$. In my notation, "$\frac{da}{dx}$" is $a \zeta_1$. That alone doesn't save you, since you don't know how to differentiate $\zeta_1$, so you have to add another parameter $\zeta_2 = \bar D \zeta_1$, and to make $\bar D$ invertible you need an infinite tower of $\zeta_i$. You also need to handle what to do when neither $f_1$ nor $f_2$ is constant, but $f_1 - f_2$ is constant. But you can definitely handle all these things if you really want to.</p> <p>The question here isn't really whether you can do it, but what it's good for. $\bar D$ isn't really a derivative anymore. It's a derivative plus extra information which remembers exactly what the derivative forgets. But derivatives aren't a purely abstract concept; they were created for applications (e.g. measuring the slope of curves), and in those applications you generally <em>do</em> want to get rid of this information. If you want to think of a function as something you can measure (e.g. in physics), the $\zeta_i$ are absolutely not measurable. Even in purely mathematical contexts, it's hard to see any immediate application of this concept. You could rewrite some of the literature in terms of your new lossless "derivative", but it's hard to see this rewriting adding any deep new insight. You shouldn't let that discourage you if you think it's an interesting thing to study, but for me, I have a hard time calling the object $\bar D$ a derivative in any sense. It isn't really anything like the slope of a curve, which doesn't care if you translate the curve vertically.</p> <p>There is a useful construction hiding in this, though. 
Specifically, if you restrict the domain of $\bar D$ to $C^\infty(\mathbb R)$ and the range to its image (which is everything of the form $f + a \zeta_1$ for $f \in C^\infty(\mathbb R)$ and $a \in \mathbb R$), then $\bar D |_{C^\infty(\mathbb R)}$ is still an isomorphism, though now between two different vector spaces. Inverting $\bar D$ on this space is just solving an initial value problem:</p> <p>\begin{align} F(x) &amp;= \frac{df}{dx} \\ f(0) &amp;= a \end{align}</p> <p>The solution is $f(x) = a + \int_0^x F(t) \, dt$. You can do similar constructions for higher derivatives. Obviously, solving initial value problems is highly important, and in fact the above construction is a natural generalization of solving initial value problems like this which allows for arbitrarily many derivatives to be taken. However, even though $\bar D |_{C^\infty(\mathbb R)}$ is still a linear isomorphism, I have a hard time thinking of this as a true way to invert derivatives in the way the OP wants, since the domain and range are different. Hence, in my view this is philosophically more along the lines of Option 3 below, which is to say that it's a way to work around the noninvertibility of the derivative rather than a way to "fix" the derivative. </p> <hr> <p><strong>Option 2a: Get rid of problematic functions by removing them from $C^\infty (\mathbb R)$</strong></p> <p>You may say "rather than adding functions, let's get rid of problematic ones which make this not work". Let's just define the antiderivative to be $\int_0^x f(x') dx'$, and restrict to the largest <a href="http://en.wikipedia.org/wiki/Linear_subspace">subspace</a> $\hat C^\infty (\mathbb R)$ on which $D$ and $\int$ are inverses. The choice of $0$ for the lower limit is again arbitrary, but changing it doesn't change the story much.</p> <p>If you do this, you can see that any function $f \in \hat C^\infty (\mathbb R)$ needs to have $f(0) = 0$, since $f(0) = \int_0^0 \frac{df}{dx} dx = 0$.
But requiring $\frac{df}{dx} \in \hat C^\infty (\mathbb R)$ as well means that we must also have $\frac{df}{dx}|_{x=0} = 0$, and by induction $\frac{d^nf}{dx^n}|_{x=0} = 0$ for all $n$. One way of saying this is that the <a href="http://en.wikipedia.org/wiki/Taylor_series">Maclaurin series</a> of $f$ must be identically $0$. If you haven't taken a class in real analysis (which you should if you're deeply interested in these things), you may never have encountered a function (other than the zero function) with all derivatives $0$ at $x=0$, but such functions do exist; one example is</p> <p>\begin{equation*} f(x) = \left\{ \begin{array}{lr} e^{-1/x^2} &amp; : x \ne 0\\ 0 &amp; : x = 0. \end{array} \right. \end{equation*}</p> <p>There are many other examples, but I'll leave it at that; you can check the Wikipedia article on <a href="http://en.wikipedia.org/wiki/Non-analytic_smooth_function">smooth, non-analytic functions</a> for more information. On $\hat C^\infty (\mathbb R)$, the differential equation $D f = g$ for fixed $g \in \hat C^\infty (\mathbb R)$ has a <em>unique</em> solution given by $f(x) = \int_0^x g(x') dx'$. It's worth pointing out that the only polynomial function in $\hat C^\infty (\mathbb R)$ is the $0$ function. </p> <p>This space looks bizarre and uninteresting at first glance, but it's actually not that divorced from things you should care a lot about. If we return to $ C^\infty (\mathbb R)$, typically, we'll want to solve differential equations on this space. A typical example might look something like $$a_n D^n f + a_{n-1} D^{n-1} f + \cdots + a_0 f = g$$ for some fixed $g$. As you will learn from studying differential equations (if you haven't already), one way to get a unique solution is to impose initial conditions on $f$ of the form $f(0) = c_0, (Df)(0) = c_1 , \ldots, (D^{n-1} f) (0) = c_{n-1}$.
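</p>

<p>Imposing initial conditions to pin down a unique solution can be seen concretely in software; here is a small sketch using sympy (the library and the example ODE are my illustrative choices, not anything from the text above):</p>

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

# f'' + f = 0 alone has a two-parameter family of solutions C1*cos(x) + C2*sin(x);
# the initial conditions f(0) = 0, f'(0) = 1 single out exactly one of them.
ode = sp.Eq(f(x).diff(x, 2) + f(x), 0)
sol = sp.dsolve(ode, f(x), ics={f(0): 0, f(x).diff(x).subs(x, 0): 1})

assert sp.simplify(sol.rhs - sp.sin(x)) == 0  # the unique solution is sin(x)
```

<p>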
All we've done in creating this space is find a class of functions upon which we can uniformly impose a particular set of initial conditions.</p> <p>This set of functions also has another nice property which you might not notice at first pass: you can multiply functions. The object we constructed in Option 1 didn't really have an intuitive notion of multiplication, but now, we can multiply point-wise and still end up in the same space. This follows from the <a href="http://en.wikipedia.org/wiki/General_Leibniz_rule">general Leibniz rule</a>. In fact, not only can we multiply any two functions in $\hat C^\infty(\mathbb R)$, but we can multiply a function in $\hat C^\infty(\mathbb R)$ by an arbitrary smooth function in $C^\infty(\mathbb R)$. The result will still be in $\hat C^\infty(\mathbb R)$. In the language of algebra, $\hat C^\infty(\mathbb R)$ is an <a href="http://en.wikipedia.org/wiki/Ideal_(ring_theory)"><em>ideal</em></a> of $C^\infty(\mathbb R)$, and ideals are well-studied and have lots of interesting and nice properties. We can also <a href="http://en.wikipedia.org/wiki/Function_composition"><em>compose</em></a> two functions in $\hat C^\infty(\mathbb R)$, and by the chain rule we'll still end up in $\hat C^\infty(\mathbb R)$.</p> <p>Unfortunately, in most applications, the space we've constructed isn't sufficient for solving differential equations which arise, despite all its nice properties. The space is missing many important functions (e.g. $e^x$), and so the right hand side $g$ above will often not satisfy the constraints we impose. So while we can still solve the differential equations, to do it, we need to go back to a space on which $D$ is not one-to-one. However, it's worth pointing out that there are variants of this construction. Rather than putting initial conditions at $x=0$, in many applications, the more natural thing is to put conditions on the asymptotic behavior as $x \rightarrow \infty$.
There are interesting parallels to our construction above in this context, but unfortunately, they're a bit too advanced to include here, as they require significantly more than what a typical undergraduate would know. In any case, these constructions are useful, as you would guess, in the study of differential equations, and if you decide to pursue that in detail, eventually you'll see something which looks qualitatively like this.</p> <hr> <p><strong>Option 2b: Get rid of problematic functions by equating functions in $C^\infty (\mathbb R)$</strong></p> <p>Those who have studied linear algebra will remember that there are two different concepts of a vector space being "smaller" than another vector space. One is subspaces. In 2a we constructed a subspace on which $D$ is one-to-one and onto. But we could just as well construct a <a href="http://en.wikipedia.org/wiki/Quotient_space_(linear_algebra)">quotient space</a> with the same property. </p> <p>Let's define an <a href="http://en.wikipedia.org/wiki/Equivalence_relation">equivalence relation</a> on $C^\infty(\mathbb R)$. We'll say $f_1 \sim f_2$ if and only if $f_1(x) - f_2(x)$ is a polynomial function in $x$, i.e. a function such that for some $n$, $D^n (f_1 - f_2) = 0$. Let $[f]$ be the set of all functions $g$ such that $g \sim f$ i.e. the equivalence class of $f$ under $\sim$. We'll call the set of all equivalence classes $\tilde C^\infty(\mathbb R)$. A generic element of $\tilde C^\infty(\mathbb R)$ is a collection $[f]$ of functions in $C^\infty(\mathbb R)$, all of which differ from each other only by polynomial functions.</p> <p>On this space, we have a derivative operator $\tilde D$ given by $\tilde D [f] = [D f]$. You can check that this is well-defined; that is, that if you pick a different representative $g$ for the class $[f]$, that $[D f] = [D g]$. This is just the fact that the derivative of a polynomial function is again a polynomial function. 
An important thing to realize is that if $f \in C^\infty(\mathbb R)$ is such that $[f] = 0$, then $f$ is a polynomial function.</p> <p>In some sense, this is a strictly bigger space than $\hat C^\infty(\mathbb R)$, because it's easy to check that the map $\hat C^\infty(\mathbb R) \rightarrow \tilde C^\infty(\mathbb R)$ given by $f \mapsto [f]$ is one-to-one (since $\hat C^\infty(\mathbb R)$ has no nonzero polynomial functions). But we've now got access to other important functions, like $f(x) = e^x$, so long as we're content to think of them not as actual functions, but as equivalence classes. Of course, if we impose initial conditions on a function and all derivatives at a given point, we can reconstruct any function from its equivalence class, and thus exactly solve differential equations that way.</p> <p>The expanded set of functions does come at a price. We've lost the ability to multiply functions. To see this, note that $[0] = [1]$ as constant functions, but if we multiply both by $e^x$, $[0] \ne [e^x]$. This flaw is necessary for us, but it does make it harder to study this space in any depth. While $\hat C^\infty(\mathbb R)$ could be studied using algebraic techniques, we're less able to say anything really interesting about $\tilde C^\infty(\mathbb R)$.</p> <p>The real big reason to use a space like this though (for me at least) is for something like asymptotic analysis. If you only care about the behavior of the function for very large arguments, it turns out often to not matter very much (in a precise way) exactly how you pick integration constants. The space we've constructed here is really only good for being able to solve differential equations of the form $D^n f = g$, but with some work you can extend this to other kinds of differential equations. With some more work (i.e. 
quotienting out functions which are irrelevant asymptotically), you can get rid of other functions which you don't care about asymptotically.</p> <p>Such spaces are often used (implicitly and typically without rigorous definition) as a first approximation in theoretical physics and in certain types of research on ordinary differential equations. You can get a very coarse view of what the solution to a differential equation looks like without caring too much about the details. I <em>believe</em> (and am far from an expert on this) that they also have some application in <a href="http://en.wikipedia.org/wiki/Differential_Galois_theory">differential Galois theory</a>, and while that would be an interesting direction to go for this answer, it's probably too advanced for someone learning this for the first time, and certainly outside my expertise.</p> <p><strong>Option 3: Realize that $D$ not being invertible isn't necessarily a bad thing</strong></p> <p>This is the direction most good mathematics takes. Just stick with $D$ and $C^\infty(\mathbb R)$, but realize that the non-invertibility of $D$ isn't necessarily a very bad thing. Sure, it means you can't solve differential equations without some initial/boundary conditions, which is to say that you don't have unique anti-derivatives, but let's think about what that buys you. An interesting trend in modern mathematics is realizing that by forgetting information, sometimes you can make other information more manifest and hence easy to work with. So let's see what the non-invertibility of $D$ can actually tell us.</p> <hr> <p>Let's first look at an application, and since I'm a physicist by day, I'll pick physics. <a href="http://en.wikipedia.org/wiki/Newton%27s_laws_of_motion#Newton.27s_second_law">Newton's second law</a> is perhaps the most fundamental law in classical mechanics. For a particle travelling on a line, this looks like $\frac{d^2}{dt^2} x(t) = f(\frac{d}{dt}x(t) ,x(t), t)$.
I've switched notation: $x$ is now the position (the function of interest) and $t$ is the independent variable. In general this equation isn't very easy to solve, but we note that it involves at most two derivatives, so we expect that (morally speaking) we'll have to integrate twice, and be left with 2 free parameters.</p> <p>The two-dimensional space of all such solutions to a system is called "phase space", and as it turns out, this is a very fundamental concept for all of physics. We can take the parameters to be the position and velocity (or momentum) at the initial time. The fact that the full behavior of the system depends only on the initial position and velocity of the system is something that isn't emphasized enough in introductory courses, but is really fundamental to the way that physicists think about physics, to the point that anything involving three derivatives is viewed as confusing and often unphysical. <a href="http://en.wikipedia.org/wiki/Jerk_(physics)">Jerk</a> (i.e. the third derivative of position) isn't expected to play any role at a fundamental level, and when it does show up (e.g. the <a href="http://en.wikipedia.org/wiki/Abraham%E2%80%93Lorentz_force">Abraham–Lorentz force</a>), it is a source of much confusion.</p> <p>That isn't the end of the story though. We know that the world is better described by quantum mechanics than classical mechanics at short distances. In quantum mechanics, we don't have a two-dimensional phase space. Rather, the states for a particle on a line are "wave"functions $\mathbb R \rightarrow \mathbb C$. The magnitude squared describes the probability distribution for measurements of position of the particle, while the complex phase of the function describes the distribution of momenta in a precise way. So rather than having 2 real dimensions, we really only have 1 complex one, and use it to describe the distributions of both position and momenta.
The <a href="http://en.wikipedia.org/wiki/Schr%C3%B6dinger_equation">Schrödinger equation</a> (which is the quantum mechanical version of Newton's 2nd law) describes how the wavefunction evolves in time, but it involves only one time derivative. This process of cutting down the two-dimensional phase space to a 1-dimensional space (upon which you look at probability distributions) is a source of ongoing research among mathematicians to understand the full extent and conditions on which it can be performed; this program goes by the name of <a href="http://en.wikipedia.org/wiki/Geometric_quantization"><em>geometric quantization</em></a>.</p> <hr> <p>Let's move back to pure mathematics, to see how you can use the non-invertibility of the derivative in that context as well. We already know that the kernel of $D$ is just the set of constant functions. This is a 1-dimensional space; that is, $\dim \ker D = 1$. But what would happen if instead of looking at smooth functions $\mathbb R \rightarrow \mathbb R$, we remove $0$ from the domain? So we're looking at a function $f \in C^\infty((-\infty, 0) \cup (0, \infty))$, and we want it to be in the kernel of the derivative operator on this space.</p> <p>The differential equation $Df=0$ isn't hard to solve. You can see that $f$ needs to be locally constant just by integrating. But you can't integrate past $0$ since the function isn't defined there, so the general form of any such $f$ is</p> <p>\begin{equation*} f(x) = \left\{ \begin{array}{lr} c_1 &amp; : x &lt;0 \\ c_2 &amp; : x &gt; 0. \end{array} \right. \end{equation*}</p> <p>But wait! Now this function depends on <strong>2</strong> parameters; that is, $\dim \ker D = 2$ here.
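</p>

<p>This jump in kernel dimension can even be checked numerically by discretizing $D$ as a finite-difference matrix; on a grid split into two disjoint pieces the matrix becomes block diagonal and its null space is two-dimensional. (The discretization below is my own sketch, not part of the argument above.)</p>

```python
import numpy as np

def diff_matrix(n):
    # forward difference on n grid points: (Df)[i] = f[i+1] - f[i]
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    return D

def nullity(M):
    return M.shape[1] - np.linalg.matrix_rank(M)

# one interval: the kernel is just the constant functions
assert nullity(diff_matrix(10)) == 1

# two disjoint intervals: the derivative acts independently on each piece,
# so the kernel consists of functions constant on each piece separately
D1, D2 = diff_matrix(10), diff_matrix(7)
D = np.block([[D1, np.zeros((9, 7))], [np.zeros((6, 10)), D2]])
assert nullity(D) == 2
```

<p>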
You can convince yourself that if you take the domain of the functions in question to be the union of a collection of $n$ disjoint intervals on $\mathbb R$, then a function solving $Df = 0$ for the derivative on this space is constant on each interval, and hence these form an $n$-dimensional space; i.e. $\dim \ker D = n$.</p> <p>So, while we have to forget about the constant functions, we're getting information back in the form of the kernel of $D$. The kernel's dimension is exactly the number of connected components of the domain of the functions. This continues to be true even if we allow the domain to be much more complicated than just a disjoint union of lines. For any <a href="http://en.wikipedia.org/wiki/Smooth_manifold">smooth manifold</a> (i.e. a space which looks locally like Euclidean space but may be connected in ways which a line or a plane isn't, e.g. a circle or a sphere) $M$, if $D$ is the gradient on $C^\infty(M)$, then $\dim \ker D$ is the number of connected components of $M$. You may not yet know multivariable calculus, so this may go over your head a bit, but it's nonetheless true and important.</p> <p>This may seem like a random coincidence, but it's a rather deep fact with profound generalizations for topology. Unfortunately, there's a limit to how much we can do with one dimension, but I'll push that limit as much as we can.</p> <p>Let's look at smooth functions on a circle $S^1$. Now, of course, we already know how $D$ fails to be injective (one-to-one) on $C^\infty(S^1)$, and that the kernel is 1-dimensional, since a circle is connected. What I want to look at is the failure of surjectivity; that is, will $D$ be onto? Of course, what exactly we mean by differentiating on $S^1$ isn't obvious. You can think of $Df$ as a gradient vector which points in the direction which $f$ is growing proportional to the rate of growth at any given point. 
1-dimensional vectors are (for our purposes, which are purely topological), just numbers, so we get a number at every point. This turns out to not be exactly the right way to think about this for the sake of generalizing it, but it's good enough for now.</p> <p>As it turns out, when you go all the way around a circle, you need to end up exactly back at the same value of the function you started at. Let's parametrize points on the circle by angle, running from $0$ to $2\pi$. This means that, if we want $f = D g$ for some function $g \in C^\infty(S^1)$, we need $$\int_0^{2 \pi} f(\theta) d \theta = \int_0^{2 \pi} \frac{dg}{d\theta} d\theta = g(2 \pi) - g(0) = 0.$$ So the average value of $f$ needs to be $0$.</p> <p>This means that $\operatorname{im} D$ isn't all of $C^\infty (S^1)$. It's just those functions with average value $0$. That wasn't the case when we had $\mathbb R$. Now, if you take a function $f$, you can decompose $f(x) = f_0 + f_d(x)$, where $f_0$ is the average of $f$, and $f_d(x)$ is the deviation from the average at the point $x$. $f_d$ is a function that has $0$ average, and so it's in the image of $D$. So if we look at the quotient space $C^\infty (S^1) / \operatorname{im} D$, it's essentially just the space of constant functions on $S^1$, which is 1-dimensional. That is, $\dim C^\infty (S^1) / \operatorname{im} D = 1$.</p> <p>At this point, we've actually come upon something pretty surprising. An arbitrary 1-manifold $M$ is necessarily a disjoint union of $n$ intervals and $m$ circles, for some values of $n$ and $m$ (which could be infinite, but I'll ignore this). And we can classify these just by understanding what $D$ does on $M$; specifically, $\dim \ker D = n + m$ is the number of components, and $\dim C^\infty (M) / \operatorname{im} D = m$ is the number of circles.</p> <p>It turns out that one can generalize this to not only measure circular holes in a manifold, but spherical holes of other dimensions.
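</p>

<p>Before generalizing: the circle computation above can be spot-checked numerically with a periodic difference matrix (again a discretized sketch of my own, not part of the original discussion):</p>

```python
import numpy as np

n = 12
# periodic forward difference on the circle: (Df)[i] = f[(i+1) % n] - f[i]
D = np.zeros((n, n))
for i in range(n):
    D[i, i], D[i, (i + 1) % n] = -1.0, 1.0

# kernel: only the constant functions, so dim ker D = 1 (the circle is connected);
# since D is square, the cokernel (the discrete stand-in for C^inf(S^1)/im D)
# has the same dimension, also 1
assert n - np.linalg.matrix_rank(D) == 1

# every function in the image has zero average: the entries of Df telescope around the circle
f = np.random.default_rng(0).standard_normal(n)
assert abs((D @ f).sum()) < 1e-12
```

<p>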
The generalization is neither obvious nor easy, but it does take you to something rather close to modern mathematical research. Specifically, you'd end up at the <a href="http://en.wikipedia.org/wiki/De_Rham_cohomology"><em>de Rham cohomology</em></a> of a manifold. This is a way to compute topological invariants of your manifold $M$ which depend just on the (non)existence of solutions of certain simple classes of differential equations. In some unrigorous sense, the de Rham cohomology (or more precisely the <a href="http://en.wikipedia.org/wiki/Betti_number">Betti numbers</a>, which are the dimensions of the cohomology) counts the number of $n$-dimensional holes of $M$ for each nonnegative integer $n$. This answer is already too long and the amount of material to construct this is far too much to include here, but you may take a look at these blog posts to get some more information on this for starters: <a href="http://math.blogoverflow.com/2014/11/03/more-than-infinitesimal-what-is-dx/">More than Infinitesimal: What is “dx”?</a> and <a href="http://math.blogoverflow.com/2014/11/24/homology-counting-holes-in-doughnuts-and-why-balls-and-disks-are-radically-different/">Homology: counting holes in doughnuts and why balls and disks are radically different</a>.</p> <p>Anyway, modern-day research in algebraic topology has progressed far enough that we no longer particularly <em>need</em> to construct cohomology this way; there are a plethora of other options and a great number of generalizations. But this is still among the most tractable and intuitive ones. 
There are many further applications of these things, the most incredible (for me at least) of which is probably the <a href="http://en.wikipedia.org/wiki/Atiyah%E2%80%93Singer_index_theorem"><em>Atiyah-Singer index theorem</em></a>, but that's unfortunately far too advanced to describe here.</p> <hr> <p>So, for me anyway, the question at the end of the day isn't "Why do we define the derivative so that it isn't invertible", so much as "How can we use the fact that the derivative is not invertible to our advantage to do more mathematics?". I've really only given a couple of examples, but hopefully they're enough to convince you that this non-invertibility is actually not entirely a bad thing, and that it can be used to our advantage if we're intelligent about it.</p>
<p>They are inverses up to an additive (arbitrary) constant (given certain conditions). The <a href="http://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus" rel="nofollow">Fundamental Theorem of Calculus</a> gives the details.</p> <p>Your definition is a bit confusing. I think there may be brackets missing. But one reason why your definition doesn't work can be seen if you use $2x+2$ instead of $2x+1$. You end up with the same problem.</p>
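<p>Concretely, with sympy (my illustrative sketch, using the $2x+1$ example from the question):</p>

```python
import sympy as sp

x = sp.symbols('x')
f = 2*x + 1

# differentiate, then antidifferentiate: the "+1" is lost
g = sp.integrate(sp.diff(f, x), x)
assert sp.simplify(g - 2*x) == 0      # we get 2*x back, not 2*x + 1

# antidifferentiate, then differentiate: f is recovered exactly
h = sp.diff(sp.integrate(f, x), x)
assert sp.simplify(h - f) == 0
```

<p>So the composition only fails to be the identity in one order, and the failure is exactly an additive constant.</p>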
probability
<p>Basically: on average, how many times should one roll to see two consecutive sixes?</p>
<p>Instead of finding the probability distribution, and then the expectation, we can work directly with expectations. That is often a useful strategy.</p> <p>Let $a$ be the expected <em>additional</em> waiting time if we have not just tossed a $6$. At the beginning, we certainly have not just tossed a $6$, so $a$ is the required expectation. Let $b$ be the expected <em>additional</em> waiting time if we have just tossed a $6$. </p> <p>If we have not just tossed a $6$, then with probability $\frac{5}{6}$ we toss a non-$6$ (cost: $1$ toss) and our expected additional waiting time is still $a$. With probability $\frac{1}{6}$ we toss a $6$ (cost: $1$ toss) and our expected additional waiting time is $b$. Thus $$a=1+\frac{5}{6}a+\frac{1}{6}b.$$ If we have just tossed a $6$, then with probability $\frac{5}{6}$ we toss a non-$6$, and then our expected additional waiting time is $a$. (With probability $\frac{1}{6}$ the game is over.) Thus $$b=1+\frac{5}{6}a.$$ We have two linear equations in two unknowns. Solve for $a$. We get $a=42$.</p>
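<p>A quick Monte Carlo check of $a=42$ (a simulation sketch I'm adding; the seed and trial count are arbitrary):</p>

```python
import random

def rolls_until_two_sixes(rng):
    """Roll a fair die until two consecutive 6's appear; return the number of rolls."""
    count, prev = 0, None
    while True:
        count += 1
        roll = rng.randint(1, 6)
        if roll == 6 and prev == 6:
            return count
        prev = roll

rng = random.Random(12345)
trials = 100_000
avg = sum(rolls_until_two_sixes(rng) for _ in range(trials)) / trials
assert abs(avg - 42) < 1  # sample mean lands close to the exact answer 42
```
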
<p>The formula is proved in the link given by Byron. I'll just try to give you my intuition regarding the formula.</p> <p>The chance of rolling two 6's with two dice is $1/36$. So if you think of rolling two dice at once as one trial, it would take an average of $36$ trials.</p> <p>However, in this case you are rolling 1 die at a time, and you look at two consecutive rolls. The average number of rolls to see the first 6 is $6$. Suppose we roll one more time to see if the two 6's are consecutive. Record the sequence of all rolls as</p> <p>$$???????6?$$</p> <p>where $?$ stands for any number that is not 6. The length of this sequence is random, but the average length of this sequence is $7$.</p> <p>Now, the question is how many sequences of this type we will see before we get a sequence that ends in $66$. Since we are only interested in the last number of the sequence, there is a $1/6$ chance that the sequence will end in $66$. The same argument says that the average number of sequences we will need is $6$. Multiply this with the average length of the sequence to get $7 \cdot 6 = 42$ as the answer.</p> <p>This argument generalizes to more than $2$ consecutive 6's by induction. For example, if you want $3$ consecutive 6's, you consider sequences that look like this:</p> <p>$$ ???????66? $$</p> <p>The average length of a sequence of this type is $42 + 1 = 43$. Therefore, the answer is $43 \cdot 6 = 258$.</p> <p>It's not hard to generalize this situation. The general form is $L_{n+1} = \frac{1}{p}(L_n + 1)$ where $p$ is the probability of the desired symbol appearing and $L_n$ is the average number of trials until the first occurrence of $n$ consecutive desired symbols. (By definition, $L_0 = 0$.) This recurrence can be solved pretty easily, giving the same formula as in Byron's link. (The proof is essentially the same too, but my version is informal.)</p>
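<p>The recurrence at the end is easy to iterate exactly; a short sketch (the function name is mine):</p>

```python
from fractions import Fraction

def expected_rolls(n, p=Fraction(1, 6)):
    """Iterate L_{k+1} = (L_k + 1)/p with L_0 = 0: expected trials until n consecutive successes."""
    L = Fraction(0)
    for _ in range(n):
        L = (L + 1) / p
    return L

assert expected_rolls(1) == 6     # first 6
assert expected_rolls(2) == 42    # two consecutive 6's
assert expected_rolls(3) == 258   # three consecutive 6's
```
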
probability
<p>I'm trying to intuitively understand the Poisson distribution's probability mass function. When $X \sim \mathrm{Pois}(\lambda)$, then $P(X=k)=\frac{\lambda^k e^{-\lambda}}{k!}$, but I don't see the reasoning behind this formula. In other discrete distributions, namely the binomial, geometric, negative binomial, and hypergeometric distributions, I have an intuitive, combinatorics-based understanding of why each distribution's pmf is defined the way it is. </p> <p>That is, if $Y \sim\mathrm{Bin}(n,p)$ then $P(Y=k)=\binom{n}{k}p^k(1-p)^{n-k}$, and this equation is clear - there are $\binom{n}{k}$ ways to choose the $k$ successful trials, and we need the trials to succeed $k$ times and fail $n-k$ times. </p> <p>What is the corresponding intuition for the Poisson distribution?</p>
<p>Explanation based on <a href="https://rads.stackoverflow.com/amzn/click/com/B0086PTUMA" rel="nofollow noreferrer">DeGroot, second edition, page 256</a>. Consider the binomial distribution with fixed <span class="math-container">$p$</span> <span class="math-container">$$ P(X = k) = {n \choose k}p^k(1-p)^{n-k} $$</span></p> <p>Now define <span class="math-container">$\lambda = np$</span> and thus <span class="math-container">$p = \frac{\lambda}{n}$</span>.</p> <p><span class="math-container">$$ \begin{align} P(X = k) &amp;= {n \choose k}p^k(1-p)^{n-k}\\ &amp;=\frac{n(n-1)(n-2)\cdots(n-k+1)}{k!}\frac{\lambda^k}{n^k}\left(1-\frac{\lambda}{n}\right)^{n-k}\\ &amp;=\frac{\lambda^k}{k!}\frac{n}{n}\cdot\frac{n-1}{n}\cdots\frac{n-k+1}{n}\left(1-\frac{\lambda}{n}\right)^n\left(1-\frac{\lambda}{n}\right)^{-k} \end{align} $$</span> Let <span class="math-container">$n \to \infty$</span> and <span class="math-container">$p \to 0$</span> so <span class="math-container">$np$</span> remains constant and equal to <span class="math-container">$\lambda$</span>.</p> <p>Now <span class="math-container">$$ \lim_{n \to \infty}\frac{n}{n}\cdot\frac{n-1}{n}\cdots\frac{n-k+1}{n}\left(1-\frac{\lambda}{n}\right)^{-k} = 1 $$</span> since in all the fractions, <span class="math-container">$n$</span> climbs at the same rate in the numerator and the denominator, and in the last factor the fraction <span class="math-container">$\frac{\lambda}{n}$</span> goes to <span class="math-container">$0$</span>, so that factor goes to <span class="math-container">$1$</span>. 
Furthermore <span class="math-container">$$ \lim_{n \to \infty}\left(1-\frac{\lambda}{n}\right)^n = e^{-\lambda} $$</span> so under our definitions <span class="math-container">$$ \lim_{n\to\infty;\;p\to 0;\;np=\lambda} {n \choose k}p^k(1-p)^{n-k} = \frac{\lambda^k}{k!}e^{-\lambda} $$</span> In other words, as the probability of success becomes a rate applied to a continuum, as opposed to discrete selections, the binomial becomes the Poisson.</p> <h3>Update with key point from comments</h3> <p>Think about a Poisson process. It really is, in a sense, looking at very, very small intervals of time and seeing if something happened. The &quot;very, very, small&quot; comes from the need that we really only see at most one instance per interval. So what we have is pretty much an infinite sum of infinitesimal Bernoulli trials. When we have a finite sum of Bernoulli trials, that is binomial. When the number of trials is infinite but the expected count <span class="math-container">$np=\lambda$</span> stays finite, it is Poisson.</p>
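<p>The limit is easy to watch numerically; a small sketch (function names are mine, standard library only):</p>

```python
from math import comb, exp, factorial

def binom_pmf(n, p, k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(lam, k):
    return lam**k * exp(-lam) / factorial(k)

lam, k = 3.0, 2
# as n grows with p = lam/n held so that np = lam, the binomial pmf approaches the Poisson pmf
errors = [abs(binom_pmf(n, lam / n, k) - poisson_pmf(lam, k)) for n in (10, 100, 10_000)]
assert errors[0] > errors[1] > errors[2]  # the discrepancy shrinks as n grows
assert errors[2] < 1e-3
```
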
<p>Let $p_k(t)$ be the probability of $k$ events in time $t$. We first find $p_0(t)$. Let $h$ be small. By independence $p_0(t+h)=p_0(t)p_0(h)$. The probability of an event in time $h$, where $h$ is very small, is roughly $\lambda h$. More accurately, $\lim_{h\to 0^+}\frac{1-p_0(h)}{h}=\lambda$. So $p_0(h)\approx 1-\lambda h$. Substitute. We get $$\frac{p_0(t+h)-p_0(t)}{h}\approx -\lambda p_0(t).$$ Let $h\to 0$. We conclude that $p_0'(t)=-\lambda p_0(t)$. This is a familiar differential equation. Since $p_0(0)=1$, it has solution $p_0(t)=e^{-\lambda t}$. </p> <p>Now do a similar argument for general $k$. If $h$ is a small time interval, then the probability of $2$ or more events in time interval $h$ is negligible in comparison with the probability of $1$ event. Thus $$p_k(t+h)\approx p_{k}(t)(1-\lambda h)+p_{k-1}(t)\lambda h.$$ Simplifying, and letting $h\to 0$, we find that $$p_k'(t)=-\lambda p_k(t)+\lambda p_{k-1}(t).$$ This DE can be solved, using the induction hypothesis $p_{k-1}(t)=e^{-\lambda t}\frac{(\lambda t)^{k-1}}{(k-1)!}$. Or else we can verify by substitution that the standard expressions do satisfy the DE. </p>
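<p>The verification by substitution can be done symbolically, e.g. with sympy (my sketch, spot-checking one value of $k$; the full argument is the induction above):</p>

```python
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)

def p(k):
    # candidate solution: p_k(t) = e^{-lam*t} (lam*t)^k / k!
    return sp.exp(-lam * t) * (lam * t)**k / sp.factorial(k)

# base case: p_0' = -lam * p_0
assert sp.simplify(sp.diff(p(0), t) + lam * p(0)) == 0

# induction step for a sample k >= 1: p_k' = -lam * p_k + lam * p_{k-1}
k = 4
assert sp.simplify(sp.diff(p(k), t) - (-lam * p(k) + lam * p(k - 1))) == 0
```
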
logic
<p>The nLab has a lot of nice things to say about how you can use the <a href="http://ncatlab.org/nlab/show/internal+logic" rel="noreferrer">internal logic</a> of various kinds of categories to prove interesting statements using more or less ordinary mathematical reasoning. However, I can't find a <em>single example</em> on the nLab of what such a proof actually looks like. (The nLab has a frustrating lack of examples in general.)</p> <p>Can anyone supply me with some examples? I'd be particularly interested in the following kinds of examples:</p> <ul> <li><p>I've heard in a topos one can internalize the real numbers and, in the topos $\text{Sh}(X)$ of sheaves on a topological space, this reproduces the sheaf of continuous real-valued functions $X \to \mathbb{R}$. Moreover, one can internalize "finitely generated projective $\mathbb{R}$-module" and in $\text{Sh}(X)$ this reproduces real vector bundles on $X$. What can you prove about vector bundles this way? </p></li> <li><p>I'd also like to see examples of what you can do in the internal logic of Cartesian closed categories. </p></li> </ul> <p><a href="https://mathoverflow.net/questions/124991/what-can-be-expressed-in-and-proved-with-the-internal-logic-of-a-topos">This MO question</a> is related but it doesn't really satisfy my curiosity. </p>
<p>Here is an arbitrary example from algebraic geometry. We'll prove the following well-known statement about $\mathcal{O}_X$-modules on reduced schemes $X$ by reducing to constructive linear algebra interpreted in the topos $\mathrm{Sh}(X)$ of sheaves on $X$:</p> <blockquote> <p>Let $\mathcal{F}$ be an $\mathcal{O}_X$-module locally of finite type. Then $\mathcal{F}$ is locally free iff its rank is constant.</p> </blockquote> <p>We can translate this statement into the internal language of $\mathrm{Sh}(X)$ by the following dictionary:</p> <ul> <li>In the internal language, the sheaf of rings $\mathcal{O}_X$ looks like an ordinary ring.</li> <li>Accordingly, $\mathcal{F}$ looks like an ordinary module over that ring.</li> <li>$\mathcal{F}$ is locally of finite type iff it is finitely generated from the internal point of view.</li> <li>$\mathcal{F}$ is locally free iff it is a free module from the internal point of view.</li> <li>Internally, we can define the rank of $\mathcal{F}$ as the minimal number of elements needed to generate $\mathcal{F}$. But constructively, arbitrary inhabited sets of natural numbers may fail to have minima (see <a href="http://math.andrej.com/2009/09/08/constructive-stone-minima-of-sets-of-natural-numbers/">this enlightening blog post by Andrej Bauer</a>), so this minimal number might not actually be an (internal) natural number, but be an element of a suitable completion. Externally, the rank defined this way induces an upper semicontinuous function on $X$ (see <a href="http://ncatlab.org/nlab/show/one-sided+real+number">nLab and the Mulvey reference therein</a>); it is constant iff internally, the minimal number of generators is an actual natural number.</li> <li>Finally, the scheme $X$ is reduced iff $\mathcal{O}_X$ looks like an ordinary reduced ring from the internal perspective. This in turn is equivalent to $\mathcal{O}_X$ being a so-called residue field from the internal point of view (i.e. 
a non-trivial ring with every non-unit being zero).</li> </ul> <p>So the statement follows if we can give a constructive proof of the following linear algebra fact:</p> <blockquote> <p>Let $A$ be a residue field and let $M$ be a finitely generated $A$-module. Then $M$ is free iff the minimal number of elements needed to generate $M$ as an $A$-module is an actual natural number.</p> </blockquote> <p>The direction "$\Rightarrow$" is clear. For the direction "$\Leftarrow$", consider a minimal generating family $x_1,\ldots,x_n$ of $M$ (which exists by assumption). This family is linearly independent (and therefore a basis): Let $\sum_i \lambda_i x_i = 0$. If any $\lambda_i$ were invertible, the family $x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n$ would also generate $M$, contradicting the minimality. So each $\lambda_i$ is not invertible and thus zero (by assumption on $A$).</p>
<p>Joyal and Tierney's 1984 monograph, <em>An extension of the Galois theory of Grothendieck</em>, is an example of a substantial piece of mathematics written using informal reasoning <em>in internal logic</em>. The main result is the following:</p> <p><strong>Theorem.</strong> Every open surjection of toposes is an effective descent morphism. In particular, every Grothendieck topos is equivalent to the category of equivariant sheaves on a localic groupoid.</p> <p>If you look at Chapter I, you will find that it reads just like any other mathematical text, save for the avoidance of classical logic and certain kinds of set-theoretic operations. The main difficulty with using internal logic is the interpretation of the conclusions – this requires much care! For example, the proposition scheme $$(\forall b : B . \exists a : A . f(a) = b) \to (\forall h : B^T . \exists g : A^T . h = f \circ g)$$ where $B$ is fixed but $f : A \to B$ and $T$ are allowed to vary, says that "if $f$ is surjective, then for any $h : T \to B$, there exists $g : T \to A$ such that $h = f \circ g$", or in short, "$B$ is a projective object"... but in the canonical semantics, it is neither necessary nor sufficient that $B$ be projective for the statement to hold! That is because what the formula actually means is the following,</p> <blockquote> <p>If $f : A \to B$ is an epimorphism, then $f^T : A^T \to B^T$ is also an epimorphism.</p> </blockquote> <p>whereas $B$ being projective is the statement below:</p> <blockquote> <p>If $f : A \to B$ is an epimorphism, then $\mathrm{Hom}(T, f) : \mathrm{Hom}(T, A) \to \mathrm{Hom}(T, B)$ is a split surjection of sets.</p> </blockquote> <p>If the topos in question has a projective terminal object, then the first statement (internal projectivity) implies the second, and if the topos is well-pointed, then the second statement implies the first.</p> <p>So much for projective objects. What about finitely generated modules? 
Again there are subtleties, but the most straightforward way to formulate it is to take a mixed approach. Let $R$ be an internal ring. Then $M$ is a finitely-generated $R$-module if there exist global elements $m_1, \ldots, m_n$ (i.e. morphisms $1 \to M$) such that $$\forall m : M . \exists r_1 : R . \cdots . \exists r_n : R . m = r_1 m_1 + \cdots + r_n m_n$$ holds in the internal logic. This amounts to saying that the evident homomorphism $R^{\oplus n} \to M$ is an epimorphism, which is what we want. It is tempting to formulate the whole statement internally, but this cannot work: at best one will obtain an internal characterisation of modules that are <em>locally</em> finitely generated.</p> <hr> <p>Perhaps I should give a positive example. I'm afraid I can't think of anything <em>interesting</em>, so I'll opt for something <em>simple</em> instead. It is well-known that a two-sided unit element of a magma is unique if it exists. This is also true for internal magmas in any topos, and the proof is exactly the same (so long as it is formulated directly). More explicitly:</p> <blockquote> <p>Let $M$ be a magma. Suppose $u$ is a left unit element in $M$ and $v$ is a right unit element in $M$. Then, $u = u v = v$.</p> </blockquote> <p>Formally, we are deducing that $$\forall u : M. \forall v : M. (\forall m : M. u m = m) \land (\forall m : M. m v = m) \to (u = v)$$ which means that, for all $u : S \to M$ and $v : T \to M$, if $\mu \circ (u \times \mathrm{id}_M) = \pi_2$ and $\mu \circ (\mathrm{id}_M \times v) = \pi_1$, then $u \circ \pi_1 = v \circ \pi_2$ as morphisms $S \times T \to M$.</p> <p>Now, suppose we have an internal magma $M$ for which $$\exists u : M . \forall m : M. (u m = m) \land (m u = m)$$ holds in the internal logic, i.e. there exists a morphism $u : T \to M$ satisfying the relevant equations, such that the unique morphism $T \to 1$ is an epimorphism. (The latter is the true content of the quantifier $\exists$.) 
We wish to show that $M$ has a global unit element, i.e. a morphism $e : 1 \to M$ satisfying the obvious equations. Applying the above result in the case $u = v$, we deduce that $u$ must factor through the coequaliser of $\pi_1, \pi_2 : T \times T \to T$. But this coequaliser computes the coimage of the unique morphism $T \to 1$, and we assumed $T \to 1$ is an epimorphism, so $T \to 1$ is itself the coequaliser of $\pi_1$ and $\pi_2$. Thus $u$ factors through $1$ (in a unique way), yielding the required $e : 1 \to M$.</p> <p>Of course, the above paragraph takes place in the external logic, but this is unavoidable: there is no way to formulate the existence of a <em>global</em> element in the internal logic. I suppose the point is that, once you have built up a stock of these metatheorems that interpret statements in the internal logic, you can then prove various results using internal logic if so desired. </p>
game-theory
<p>Here's the description of a dice game which has puzzled me for quite some time (the game comes from a book which offered a rather unsatisfactory solution — but then, its focus was on programming, so this is probably excusable).</p> <p>The game goes as follows:</p> <p>Two players play against each other, starting with a score of 0 each. The winner is the first player to reach a score of 100 or more. The players take turns. The score added in each round is determined as follows: The player throws a die. If the die does not show a 1, he has the option to stop and have the points added to his score, or to continue throwing until he either stops or gets a 1. As soon as he gets a 1, his turn ends and no points are added to his score: any points he has accumulated in this round are lost. Afterward it is the second player's turn.</p> <p>The question now is: what is the best strategy for this game? The book suggested testing which of the following two strategies gives the better result:</p> <ul> <li>Throw 5 times (if possible), then stop.</li> <li>If the accumulated points in this round add up to 20 or more, stop; otherwise continue.</li> </ul> <p>The rationale is that you want the next throw to increase the expected score. Of course it doesn't need testing to see that the second strategy is better: If you've accumulated e.g. 10 points, it doesn't matter whether you accumulated them by throwing a 2 five times or by throwing a 5 twice.</p> <p>However, it is also easy to see that this second strategy isn't the best one either: After all, the ultimate goal is not to maximize the increase per round, but to maximize the probability of winning; the two are related, but not the same. For example, imagine you have been very unlucky and are still at a very low score, but your opponent already has 99 points. It's your turn, and you've already accumulated some points (but those points don't get you above 100) and have to decide whether to stop or to continue. 
If you stop, you secure the points, but your opponent has a 5/6 chance to win in the next move. Let's say that if you stop, the optimal strategy in the next move will be to try to get 100 points in one run, and that the probability of reaching that is $p$. Then if you stop, since your opponent then has his chance to win first, your total probability of winning is just $\frac16\left(p + \frac{1-p}{6}\left(p + \frac{1-p}{6}(p + \cdots)\right)\right) = \frac{p}{p+5}$. On the other hand, if you continue to 100 points right now, you have the chance $p$ to win this round <em>before</em> the other has a chance to try, but a lower probability $p'$ to win in later rounds, giving a total probability of $p + (1-p)\,p'/(p'+5)$. It is obvious that even if we had $p'=0$ (i.e. if you don't succeed now, you'll lose), you'd still have the probability $p&gt;p/(p+5)$ to win by continuing, so you should continue no matter how slim your chances, and even if your accumulated points this round are above 20, because if you stop, your chances will be worse for sure. Since at <em>some</em> time, the optimal strategy <em>will</em> have a step where you try to go beyond 100 (because that's where you win), by induction you can say that if your opponent has already 99 points, your best strategy is, unconditionally, to try to get 100 points in one run.</p> <p>Of course this "brute force rule" is for that specific situation (it also applies if the opponent has 98 points, for obvious reasons). If you played that brute-force rule from the beginning, you'd lose even against someone who just throws once each round. Indeed, if both are about equal and far enough from the final 100 points, intuitively I think the 20-points rule is quite good. 
Also, intuitively I think that if you are far ahead of your opponent, you should play even more safely and stop earlier.</p> <p>As the current game situation is described by the three numbers <em>your score</em> ($Y$), <em>your opponent's score</em> ($O$) and the points already collected in this round ($P$), and your decision is to either continue ($C$) or to stop ($S$), a strategy is completely given by a function $$s:\{(Y, O, P)\}\to \{C,S\}$$ where the following rules are obvious:</p> <ul> <li>If $Y+P\ge 100$ then $s(Y,O,P)=S$ (once your score plus the points collected this round reach 100, the only reasonable move is to stop).</li> <li>$s(Y, O, 0)=C$ (it doesn't make sense to stop before you have thrown at least once).</li> </ul> <p>Also, I just derived the following rule above:</p> <ul> <li>$s(Y,98,P)=s(Y,99,P)=C$ unless the first rule kicks in.</li> </ul> <p>I <em>believe</em> the following rule should also hold (but have no idea how to prove it):</p> <ul> <li>If $s(Y,O,P)=S$ then also $s(Y,O,P+1)=S$</li> </ul> <p>If that belief is true, then the description of a strategy can be simplified to a function $g(Y,O)$ which gives the smallest $P$ at which you should stop.</p> <p>However, that's all I've figured out. What I'd really like to know is: What is the optimal strategy for this game?</p>
<p>When you are both far from the goal, you should maximize the expected points per turn. The expected return from another throw is $\frac16(2+3+4+5+6) - \frac 16 P = \frac{20-P}{6}$, which says you should stop above $20$ and do whatever you want at exactly $20$. You are right that this gets perturbed as you get close to the end.</p>
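<p>For concreteness, the threshold drops out of a one-line computation; here is a sketch (exact arithmetic via <code>fractions</code>; the function name is mine):</p>

```python
from fractions import Fraction

def expected_gain(P):
    """Expected change of this turn's total from one more throw, holding P points."""
    # Faces 2..6 each add their value with probability 1/6; a 1 wipes out P.
    return Fraction(2 + 3 + 4 + 5 + 6, 6) - Fraction(P, 6)

assert expected_gain(19) > 0   # below 20: rolling again gains on average
assert expected_gain(20) == 0  # at exactly 20: indifferent
assert expected_gain(21) < 0   # above 20: stop
```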
<p>There is no simplified description of the Nash equilibrium of this game.</p> <p>You can compute the best strategy starting from positions where both players are about to win and going backwards from there. Let $p(Y,O,P)$ be the probability that you win if you are in the situation $(Y,O,P)$ and make the best choices. The difficulty is that to compute the strategy and the probability of winning in some situation $(Y,O,P)$, you make your choice depending on the probability $p(O,Y,0)$. So you have a (piecewise affine and contracting) decreasing function $F_{(Y,O,P)}$ such that $p(Y,O,P) = F_{(Y,O,P)}(p(O,Y,0))$, and in particular, you need to find the fixed point of the composition $F_{(Y,O,0)} \circ F_{(O,Y,0)}$ in order to find the true $p(O,Y,0)$, and deduce everything from there.</p> <p>After computing this for a 100-point game and some inspection, there is no function $g(Y,O)$ such that the strategy simplifies to "stop if you have accumulated $g(Y,O)$ points or more". For example, at $Y=61, O=62$, you should stop when you have exactly $20$ or $21$ points, and continue otherwise.</p> <p>If you let $g(Y,O)$ be the smallest number of points $P$ such that you should stop at $(Y,O,P)$, then $g$ does not look very nice at all. It is not monotonic and does strange things, except in the region where you should just keep playing until you lose or win in $1$ move.</p>
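<p>For anyone who wants to reproduce this, the fixed-point computation can be sketched with plain value iteration; this is a scaled-down model (goal of 30 rather than 100, to keep the table small), not the code used for the inspection above:</p>

```python
# p[(y, o, t)]: probability that the player about to act wins, holding banked
# score y, with opponent score o and t points collected so far this turn.
GOAL = 30

states = [(y, o, t)
          for y in range(GOAL)
          for o in range(GOAL)
          for t in range(GOAL - y)]
p = {s: 0.5 for s in states}

def win_prob(y, o, t):
    return 1.0 if y + t >= GOAL else p[(y, o, t)]

def backup(y, o, t):
    """One Bellman update: compare stopping against rolling again."""
    # Rolling: a 1 (probability 1/6) forfeits the turn total; faces 2..6 add on.
    roll = (1 - win_prob(o, y, 0)) / 6
    roll += sum(win_prob(y, o, t + f) for f in range(2, 7)) / 6
    if t == 0:                          # nothing collected yet: must throw
        return roll, 'C'
    stop = 1 - win_prob(o, y + t, 0)    # bank t; the opponent moves next
    return (roll, 'C') if roll > stop else (stop, 'S')

for _ in range(500):                    # iterate to the fixed point
    delta = 0.0
    for s in states:
        v, _ = backup(*s)
        delta = max(delta, abs(v - p[s]))
        p[s] = v
    if delta < 1e-10:
        break

print(f"first player wins with probability {p[0, 0, 0]:.4f}")
```

One sanity check: when the opponent is one point short of the goal, `backup` prefers to continue from any reachable turn total, matching the "go for broke" argument in the question.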
probability
<p>Given the rapid rise of the <a href="http://en.wikipedia.org/wiki/Mega_Millions">Mega Millions</a> jackpot in the US (now advertised at \$640 million and equivalent to a "cash" prize of about \$448 million), I was wondering if there was ever a point at which the lottery became positive expected value (EV), and, if so, what is that point or range?</p> <p>Also, a friend and I came up with two different ways of looking at the problem, and I'm curious if they are both valid.</p> <p>First, it is simple to calculate the expected value of the "fixed" prizes. The first five numbers are selected from a pool of 56, the final "mega" ball from a pool of 46. (Let us ignore taxes in all of our calculations... one can adjust later for one's own tax rate which will vary by state). The expected value of all these fixed prizes is \$0.183.</p> <p>So, then you are paying \$0.817 for the jackpot prize. My plan was then to calculate the expected number of winners of the jackpot (multiple winners split the prize) to get an expected jackpot amount and multiply by the probability of selecting the winning numbers (the odds are $1$ in $\binom{56}{5} \cdot 46 = 175{,}711{,}536$). The number of tickets sold can be easily estimated since \$0.32 of each ticket is added to the prize, so: </p> <p>(<em>Current Cash Jackpot</em> - <em>Previous Cash Jackpot</em>) / 0.32 = Tickets Sold, giving $(448 - 252) / 0.32 = 612.5$ million tickets sold (!!).</p> <p>(The cash prizes are lower than the advertised jackpot. Currently, they are about 70% of the advertised jackpot.) Obviously, one expects multiple winners, but I can't figure out how to get a precise estimate, and various web sources seem to be getting different numbers.</p> <p><strong>Alternative methodology:</strong> My friend's methodology, which is far simpler, is to say 50% of this drawing's sales will be paid out in prizes (\$0.18 to fixed prizes and \$0.32 to the jackpot). 
Add to that the carried-over jackpot amount (the \$250 million cash prize from the unwon previous jackpot) that will also be paid out. So, your expected value is $\$250$ million / 612.5 million tickets sold = \$0.40 from the previous drawing + \$0.50 from this drawing = \$0.90 total expected value for each \$1 ticket purchased (before taxes). Is this a valid approach or is it missing something? It's far simpler than anything I found while searching the web for this.</p> <p><strong>Added:</strong> After considering the answer below, this is why I don't think my friend's methodology can be correct: it neglects the probability that no one will win. For instance, if only a single ticket were sold, the expected value of that ticket would not be \$250 million + \$0.50, since one has to consider the probability of the jackpot not being paid out at all. So, <em>additional question:</em> what is this probability and how do we find it? (Obviously it is quite small when $612.5$ million tickets are sold and the odds of each one winning are $1:175.7$ million.) Would this allow us to salvage this methodology?</p> <p>So, is there a point at which the lottery becomes positive EV? And what is the EV this week, and the methodology for calculating it?</p>
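<p>To put numbers on the "probability that no one wins": if each of the 612.5 million tickets independently hits the jackpot with probability $1/175{,}711{,}536$, the number of winners is essentially Poisson. A sketch of the multiple-winner correction, using only the figures above:</p>

```python
import math

ODDS = 175_711_536        # 1 / P(jackpot), i.e. C(56,5) * 46
tickets = 612_500_000     # estimated sales, from the jackpot increase
jackpot = 448_000_000     # advertised cash value, pre-tax

mu = tickets / ODDS       # expected number of winning tickets
p_nobody = math.exp(-mu)  # probability the jackpot is not won at all
print(f"expected winners: {mu:.3f}, P(no winner): {p_nobody:.4f}")

# Conditional on my ticket winning, I split with k ~ Poisson(mu) others,
# and E[1/(1 + k)] = (1 - e^(-mu)) / mu (a standard Poisson identity).
expected_share = jackpot * (1 - math.exp(-mu)) / mu
ev_from_jackpot = expected_share / ODDS
print(f"expected share if I win: ${expected_share:,.0f}")
print(f"jackpot part of the ticket's EV: ${ev_from_jackpot:.3f}")
```

With these inputs the jackpot contributes about \$0.71, so adding the \$0.183 of fixed prizes lands near \$0.89 pre-tax: close to the friend's \$0.90 estimate, but only because the no-winner probability happens to be small here.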
<p>I did a fairly <a href="http://www.circlemud.org/~jelson/megamillions">extensive analysis of this question</a> last year. The short answer is that by modeling the relationship of past jackpots to ticket sales we find that ticket sales grow super-linearly with jackpot size. Eventually, the positive expectation of a larger jackpot is outweighed by the negative expectation of ties. For MegaMillions, this happens before a ticket ever becomes EV+.</p>
<p>An interesting thought experiment is whether it would be a good investment for a rich person to buy every possible number for \$175,711,536. This person is then guaranteed to win! Then you consider the resulting size of the pot (now a bit larger), the probability of splitting it with other winners, and the fact that you get to deduct the \$175.7M spent from your winnings before taxes. (Thanks to Michael McGowan for pointing out that last one.)</p> <p>The current pot is \$640M, with a \$462M cash payout. The previous pot was \$252M cash payout, so using \$0.32 into the cash pot per ticket, we have 656,250,000 tickets sold. I, the rich person (who has enough servants already employed that I can send them all out to buy these tickets at no additional labor cost) will add about \$56M to the pot. So the cash pot is now \$518M.</p> <p>If I am the only winner, then I net (\$518M + \$32M (my approximate winnings from smaller prizes)) * 0.65 (federal taxes) + 0.35 * \$176M (I get to deduct what I paid for the tickets) = \$419M. I live in California (of course), so I pay no state taxes on lottery winnings. I get a 138% return on my investment! Pretty good. Even if I did have to pay all those servants overtime for three days.</p> <p>If I split the grand prize with just one other winner, I net \$250M. A 42% return on my investment. Still good. With two other winners, I net \$194M for about a 10% gain.</p> <p>If I have to split it with three other winners, then I lose. Now I pay no taxes, but I do not get to deduct my gambling losses against my other income. I net \$161M, about an 8% loss on my investment. If I split it with four other winners, I net \$135M, a 23% loss. Ouch.</p> <p>So how many will win? Given the 656,250,000 other tickets sold, the expected number of other winners (assuming a random distribution of choices, so I'm ignoring the picking-birthdays-in-your-numbers problem) is 3.735. Hmm. This might not turn out well for Mr. Money Bags. 
Using Poisson, <span class="math-container">$p(n)={\mu^n e^{-\mu}\over n!}$</span>, where <span class="math-container">$\mu$</span> is the expected number (3.735) and <span class="math-container">$n$</span> is the number of other winners, there is only a 2.4% chance of me being the only winner, a 9% chance of one other winner, a 17% chance of two, a 21% chance of three, and then it starts going down with a 19% chance of four, 14% for five, 9% for six, and so on.</p> <p>Summing over those, my expected return after taxes is \$159M. Close. But about a 10% loss on average.</p> <p>Oh well. Time to call those servants back and have them make me a sandwich instead.</p> <p><em>Update for October 23, 2018 Mega Millions jackpot:</em></p> <p>Same calculation.</p> <p>The game has gotten harder to win, where there are now 302,575,350 possible numbers, <em>and</em> each ticket costs \$2. So now it would cost $605,150,700 to assure a winning ticket. Also the maximum federal tax rate has gone up to 39.6%.</p> <p>The current pot (as of Saturday morning -- it will probably go up more) has a cash value of \$904,900,000. The previous cash pot was \$565,600,000. So about 530 million more tickets have been or are expected to be purchased, using my previous assumption of 32% of the cost of a ticket going into the cash pot. Then the mean number of winning tickets, besides the one assured for Mr. Money Bags, is about 1.752. Not too bad actually.</p> <p>Summing over the possible numbers of winners, I get a net <em>win</em> of <span class="math-container">$\approx$</span>\$60M! So if you can afford to buy all of the tickets, and can figure out how to do that in next three days, go for it! Though that win is only a 10% return on investment, so you could very well do better in the stock market. Also that win is a slim margin, and is dependent on the details in the calculation, which would need to be more carefully checked. 
Small changes in the assumptions can make the return negative.</p> <p>Keep in mind that if you're not buying all of the possible tickets, this is not an indication that the expected value of one ticket is more than \$2. Buying all of the possible ticket values is <span class="math-container">$e\over e-1$</span> times as efficient as buying 302,575,350 <em>random</em> tickets, where you would have many duplicated tickets, and would have less than a 2 in 3 chance of winning.</p>
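<p>The "summing over those" step is easy to reproduce; here is a sketch of the 2012 calculation, with the tax treatment exactly as described above (35% tax with the ticket cost deductible on a net win; no tax and no deduction on a net loss):</p>

```python
import math

COST     = 175_711_536                # price of one of every $1 ticket
CASH_POT = 518_000_000                # cash pot after my own purchases
SMALL    = 32_000_000                 # my haul of the fixed (non-jackpot) prizes
MU       = 656_250_000 / 175_711_536  # expected number of *other* winners

def proceeds(k):
    """After-tax proceeds if k other tickets also hit the jackpot."""
    winnings = CASH_POT / (k + 1) + SMALL
    if winnings > COST:  # a net win: 35% tax, but the tickets are deductible
        return 0.65 * winnings + 0.35 * COST
    return winnings      # a net loss: no tax, and no deduction to be had

def poisson(k):
    return math.exp(-MU) * MU ** k / math.factorial(k)

expected = sum(poisson(k) * proceeds(k) for k in range(60))
print(f"expected proceeds: ${expected / 1e6:.0f}M on ${COST / 1e6:.0f}M spent")
```

This reproduces the \$419M sole-winner figure (`proceeds(0)`) and the roughly \$159M expectation: an expected loss of about 10%.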
number-theory
<p>Books on Number Theory for anyone who loves Mathematics?</p> <p>(Beginner to Advanced &amp; just for someone who has a basic grasp of math)</p>
<p><a href="http://rads.stackoverflow.com/amzn/click/038797329X">A Classical Introduction to Modern Number Theory</a> by Ireland and Rosen hands down!</p>
<p>I would still stick with Hardy and Wright, even if it is quite old.</p>
geometry
<p>Here's a cute question I came up with.</p> <blockquote> <p>Start with a circle, and then choose three points <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, and <span class="math-container">$c$</span> on the circle, and proceed as follows:</p> <ol> <li>Draw the triangle inside the circle with vertices <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, and <span class="math-container">$c$</span></li> <li>Draw the inscribed circle of that triangle, which is tangent to each of the three sides of the triangle, and now label these three points of tangency as <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, and <span class="math-container">$c$</span> (so we're updating which points we're calling points <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, and <span class="math-container">$c$</span>).</li> <li>Repeat from Step 1.</li> </ol> <p>This construction gives us a sequence of inscribed circles and triangles that telescope down to a point, a <em>limit point</em>, which is determined only by the initial choice of the points <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, and <span class="math-container">$c$</span>. Which points in the interior of the circle are <em>limit points</em> of this construction?</p> </blockquote> <p>Also, if anyone has ideas for more interesting variations of this question, I'd like to hear them.</p>
<p>Without loss of generality assume that the circle has radius $1$, and that the triple $(a,b,c)$ is positively oriented (i.e., they go counterclockwise around the circle). As the problem is circularly symmetric, it suffices to show that we can choose $a,b,c$ so that the limit point has any specified distance from the center.</p> <p>For any $\epsilon &gt; 0$, we can choose $a, b, c$ such that the triangle $abc$ does not intersect the circle of radius $1-\epsilon$. Since the limit point must lie inside the triangle $abc$, it follows that the limit point can come arbitrarily close to the boundary.</p> <p>On the other hand, if the triangle $abc$ is equilateral, the limit point is clearly the center of the circle (as it is fixed under a rotation by $\frac{2 \pi}{3}$ about the center).</p> <p>So, if we can show that the map from $(a,b,c)$ to the limit point is continuous, it will follow that the limit point can have any distance less than $1$ from the center. The set of all possible distances will be a continuous image in $\Bbb{R}$ of the connected set of all possible oriented triples $(a,b,c)$, so it must be an interval; we have shown that this interval both contains $0$ and comes arbitrarily close to $1$.</p>
<p>This answer has very much the same flavor as <a href="https://math.stackexchange.com/a/2672681/167197">Micah's answer</a>, since it relies on there being a continuous map from the points $(a,b,c)$ to the <em>limit point</em>, and on the Intermediate Value Theorem.</p> <p><img src="https://i.sstatic.net/Bj0NK.gif" alt="Diagram of the construction"></p> <p>Take any point $X$ (red-violet and blinking for some reason) in the interior of your circle and let $O$ (green) be the center of your circle. Draw the ray $\overrightarrow{OX}$, and let $a$ (violet) be the point where this ray intersects the circle. Draw the other two points $b$ and $c$ (both violet) such that $\angle aOb = \angle aOc$ and each is less than or equal to $2\pi/3$. By the symmetry of this construction, the <em>limit point</em> must lie on the segment $\overline{Oa}$. </p> <p>Now the claim is that this function from $(0,2\pi/3] \to \overline{Oa}$ where $\angle aOb$ maps to the <em>limit point</em> is continuous. In order to rigorously show that it's continuous (without just trusting the picture) we'd have to do some calculations like in <a href="https://math.stackexchange.com/a/2672663/167197">Thomas Andrews's answer</a>. But if you trust that it is continuous, since the <em>limit point</em> is the center of the original circle when $\angle aOb = 2\pi/3$, and since the limit point approaches $a$ as $\angle aOb \to 0$, by the Intermediate Value Theorem the <em>limit point</em> must be $X$ at some point in between. The point $X$ was chosen arbitrarily in the interior of the circle, so we're good: every point in the interior of the circle is the <em>limit point</em> of this construction for some choice of $a$, $b$, and $c$. </p>
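<p>If you'd like to experiment numerically, the construction is easy to iterate. Below is a sketch (the helper names are mine); it uses the standard fact that the incircle touches side $BC$ at distance $s-b$ from $B$, where $s$ is the semiperimeter:</p>

```python
import math

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

def contact_triangle(A, B, C):
    """The incircle's three points of tangency with a nondegenerate triangle ABC."""
    a, b, c = dist(B, C), dist(C, A), dist(A, B)  # sides opposite A, B, C
    s = (a + b + c) / 2
    def touch(P, Q, t):  # the point at distance t from P along segment PQ
        u = dist(P, Q)
        return (P[0] + t * (Q[0] - P[0]) / u, P[1] + t * (Q[1] - P[1]) / u)
    # the tangent length from each vertex is s minus the opposite side
    return touch(B, C, s - b), touch(C, A, s - c), touch(A, B, s - a)

def limit_point(t1, t2, t3, tol=1e-9):
    """Follow the shrinking contact triangles down to their limit point,
    starting from three distinct angles t1, t2, t3 on the unit circle."""
    A, B, C = ((math.cos(t), math.sin(t)) for t in (t1, t2, t3))
    while dist(A, B) + dist(B, C) + dist(C, A) > tol:
        A, B, C = contact_triangle(A, B, C)
    return ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)
```

As sanity checks, the equilateral start `limit_point(0, 2*math.pi/3, 4*math.pi/3)` returns (numerically) the center, and any configuration symmetric about the $x$-axis gives a limit point on that axis.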
geometry
<ol> <li><p>The volume of an $n$-dimensional ball of radius $1$ is given by the classical formula $$V_n=\frac{\pi^{n/2}}{\Gamma(n/2+1)}.$$ For small values of $n$, we have $$V_1=2\qquad$$ $$V_2\approx 3.14$$ $$V_3\approx 4.18$$ $$V_4\approx 4.93$$ $$V_5\approx 5.26$$ $$V_6\approx 5.16$$ $$V_7\approx 4.72$$ It is not difficult to prove that $V_n$ assumes its maximal value when $n=5$. </p> <p><strong>Question.</strong> Is there any non-analytic (i.e. geometric, probabilistic, combinatorial...) demonstration of this fact? What is so special about $n=5$?</p></li> <li><p>I also have a similar question concerning the $n$-dimensional volume $S_n$ ("surface area") of a unit $n$-sphere. Why is the maximum of $S_n$ attained at $n=7$ from a geometric point of view?</p></li> </ol> <p><strong>note</strong>: the question has also been <a href="https://mathoverflow.net/questions/53119/volumes-of-n-balls-what-is-so-special-about-n5">asked on MathOverflow</a> for those curious to other answers. </p>
<p>If you compare the volume of the sphere to that of its enclosing hyper-cube, you will find that this ratio continually diminishes. The enclosing hyper-cube is 2 units in length per side if $R=1$. Then we have:</p> <p>$$V_1/2=1\qquad$$ $$V_2/4\approx 0.785$$ $$V_3/8\approx 0.5236$$ $$V_4/16\approx 0.308$$ $$V_5/32\approx 0.164$$</p> <p>The reason for this behavior is how we build hyper-spheres from low dimensions to high dimensions. Think, for example, of extending $S_1$ to $S_2$. We begin with a segment extending from $-1$ to $+1$ on the $x$ axis. We build a 2-sphere by sweeping this segment out along the $y$ axis using the scaling factor $\sqrt{1-y^2}$. Compare this to the process of sweeping out the respective cube, where the scale factor is $1$. So now we only occupy approximately $3/4$ of the enclosing cube (i.e. square for $n=2$). Likewise for $n=3$: we sweep the circle along the $z$ axis using the corresponding scaling factor, losing even more volume compared to the cylinder we would get if we did not scale the circle as it was swept. So as we extend $S_{n-1}$ to get $S_n$ we start with the diminished volume we have and lose even more as we sweep out into the $n^{th}$ dimension.</p> <p>It would be easier to explain with figures; hopefully, though, you can work through how this works for lower dimensions and extend to higher ones.</p>
<p>At some point, the factorial must overtake the power function. This happens at five dimensions for the sphere, and seven for the surface. The actual race that is involved is between an alternation of $2$ and $\pi$, against $n/2$. At $n=5$, $\frac{5}{2}&gt;2$, but $\frac{6}{2}&lt;\pi$. </p> <p>That means that five dimensions is the last dimension in which the sphere's volume is still increasing relative to the prism-product of the radius. </p> <p>After 19 dimensions, the surface of the sphere is less than the prism-product of the radius. </p>
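<p>Not a geometric explanation, but both maxima are quick to verify from the closed formula; a small sketch:</p>

```python
import math

def ball_volume(n):
    """Volume of the unit ball in R^n: pi^(n/2) / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def sphere_area(n):
    """Surface measure of the unit sphere in R^n, using S_n = n * V_n."""
    return n * ball_volume(n)

dims = range(1, 31)
print("V_n is maximal at n =", max(dims, key=ball_volume))   # 5
print("S_n is maximal at n =", max(dims, key=sphere_area))   # 7
print("V_n / 2^n for n = 1..5:",
      [round(ball_volume(n) / 2 ** n, 4) for n in range(1, 6)])
```

The last line prints the ball-to-enclosing-cube ratios from the first answer, which indeed decrease monotonically.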
logic
<p><a href="https://en.wikipedia.org/wiki/Peano_axioms" rel="noreferrer">Wikipedia</a> says the Peano Axioms are a set of axioms for the natural numbers. Is the purpose of the axioms to create a base on which we can build the rest of mathematicas formally?</p> <p>If this is true were they chosen because they are agreed to be basic and reasonable?</p> <p>They do not deal with set theory which I thought was the basis for formal mathematics, are they an alternative to it?</p>
<p>The Peano axioms are meant to model the natural numbers and their most important property: the fact that we can use induction on the natural numbers. This has nothing to do with set theory. Equally, one can talk about the axioms of a real-closed field, or of a vector space.</p> <p>Axioms are given to define a mathematical object. They are a basic setting from which we can prove certain propositions.</p> <p>As it turns out, however, it is possible to use the natural numbers as a basis for <em>some</em> of our mathematics, and we can use the Peano axioms to model first-order logic (its syntax and the inference rules) and the notion of a proof.</p> <p>This can be seen as a basis for some of the mathematics we do; however, it is often a syntactical basis only: we only use the integers to manipulate strings in our language and sequences of these strings. We do not have the notion of a structure, or of a model.</p> <p>But it seems that you are mainly confused by the use of the term "axioms". These are just basic properties of a mathematical object; in this case, "the natural numbers". Much as there are axioms in geometry, yet geometry doesn't usually serve as a basis for many parts of mathematics.</p>
<p>You must separate two approaches, of course interconnected but conceptually independent from each other: "rigorization" and "foundation".</p> <p>With "rigorization" I mean the work of most 19th-century mathematicians on calculus, aimed at removing the obscurity inherited from Newton's and Leibniz's discoveries. The work of Cauchy, Weierstrass, Bolzano, Cantor, etc. gave us the modern definition of limit, etc., based on <em>real numbers</em>.</p> <p>This effort of rigorization was completed at the turn of the century by Dedekind, Frege, Peano and Hilbert, who achieved several big results:</p> <ul> <li><p>Dedekind (1872 - <strong>Stetigkeit und irrationale Zahlen</strong> (<em>Continuity and irrational numbers</em>)): analysis of irrational numbers and construction of <em>real numbers</em> as Dedekind cuts, i.e. as non-geometrical entities;</p></li> <li><p>Frege (1879 - <strong>Begriffsschrift</strong> and 1884 - <strong>Die Grundlagen der Arithmetik</strong>): modern mathematical logic and philosophical analysis of the nature of numbers;</p></li> <li><p>Dedekind (1888 - <strong>Was sind und was sollen die Zahlen?</strong> (<em>What are numbers and what should they be?</em>)) and Peano (1889 - <strong>The principles of arithmetic presented by a new method</strong>): axiomatisation of the natural numbers, i.e. 
characterization of the mathematical structure that we refer to as "natural numbers" in terms of some <em>basic properties</em> (as you said: "agreed to be basic and reasonable") from which all known properties of natural numbers (their "behaviour") can be deduced in a rigorous way (see also Frege).</p></li> <li><p>Hilbert (1899 - <strong>Grundlagen der Geometrie</strong>): modern axiomatisation of geometry;</p></li> <li><p>Hilbert (1928 - with Wilhelm Ackermann, <strong>Grundzüge der theoretischen Logik</strong>): foundations of mathematical logic (based on the works of Peano, Frege and Russell);</p></li> <li><p>Hilbert (1920s-30s - with Paul Bernays, <strong>Grundlagen der Mathematik</strong>, vol. 1 - 1934 and vol. 2 - 1939): mathematical investigations of formal systems.</p></li> </ul> <p>These mathematicians (with Russell, Brouwer and Weyl) also tried to develop research programs (e.g. logicism, intuitionism, formalism) aimed at answering basic philosophical questions about the existence of mathematical objects (i.e. numbers), the way we can have knowledge of them, etc.</p> <p>Those research programs were greatly propelled by the discovery of paradoxes (Cantor's, Russell's), so they became known as "foundational" programs, aimed at finding the basic principles that can "secure" our mathematical knowledge.</p> <p>One of the most important results of this movement was Zermelo's axiomatisation of Set Theory (ZFC): with this, mathematicians were able to find some basic axioms (but this time NOT all "agreed and reasonable") capable of "generating" (up to now without contradictions) all known properties of sets AND also capable of building a proxy for other mathematical structures, like the natural numbers. This means that the Peano axioms for numbers are now theorems of Set Theory.</p> <p>Can we say that we have <em>reduced numbers to sets</em>?</p> <p>From one point of view: YES. 
The language of sets is so basic that nearly all mathematical concepts can be "described" with it, and the axioms of set theory are so powerful that all the properties of the defined mathematical concept can be proved starting from them.</p> <p>From another point of view (more philosophical): NO. In what sense can we say that our basic insight into the existence of the natural numbers and their properties is less "clear" or "certain" than our insight into the existence of sets (the cumulative hierarchy) and their properties (e.g. the Axiom of Choice)?</p>
matrices
<p>If I multiply two numbers, say $3$ and $5$, I know it means add $3$ to itself $5$ times or add $5$ to itself $3$ times. </p> <p>But If I multiply two matrices, what does it mean ? I mean I can't think it in terms of repetitive addition. </p> <blockquote> <p>What is the intuitive way of thinking about multiplication of matrices?</p> </blockquote>
<p>Matrix "multiplication" is the composition of two linear functions. The composition of two linear functions is a linear function.</p> <p>If a linear function is represented by A and another by B, then AB represents their composition (first apply B, then A), and BA the composition in the reverse order.</p> <p>That's one way of thinking of it, and it explains why matrix multiplication is defined the way it is instead of componentwise multiplication.</p>
<p>Asking why matrix multiplication isn't just componentwise multiplication is an excellent question: in fact, componentwise multiplication is in some sense the most &quot;natural&quot; generalization of real multiplication to matrices: it satisfies all of the axioms you would expect (associativity, commutativity, existence of identity and inverses (for matrices with no 0 entries), distributivity over addition).</p> <p>The usual matrix multiplication in fact &quot;gives up&quot; commutativity; we all know that in general <span class="math-container">$AB \neq BA$</span> while for real numbers <span class="math-container">$ab = ba$</span>. What do we gain? Invariance with respect to change of basis. If <span class="math-container">$P$</span> is an invertible matrix,</p> <p><span class="math-container">$$P^{-1}AP + P^{-1}BP = P^{-1}(A+B)P$$</span> <span class="math-container">$$(P^{-1}AP) (P^{-1}BP) = P^{-1}(AB)P$$</span> In other words, it doesn't matter what basis you use to represent the matrices <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, no matter what choice you make their sum and product is the same.</p> <p>It is easy to see by trying an example that the second property does not hold for multiplication defined component-wise. This is because the inverse of a change of basis <span class="math-container">$P^{-1}$</span> no longer corresponds to the multiplicative inverse of <span class="math-container">$P$</span>.</p>
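<p>The two invariance identities, and the failure of the componentwise (Hadamard) product, can be checked directly. A small self-contained sketch, with $2\times2$ matrices as nested tuples:</p>

```python
def matmul(X, Y):
    """Usual matrix product: composition of the two linear maps."""
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def matadd(X, Y):
    return tuple(tuple(X[i][j] + Y[i][j] for j in range(2)) for i in range(2))

def hadamard(X, Y):
    """Componentwise product."""
    return tuple(tuple(X[i][j] * Y[i][j] for j in range(2)) for i in range(2))

def inv2(X):
    (a, b), (c, d) = X
    det = a * d - b * c
    return ((d / det, -b / det), (-c / det, a / det))

def conj(P, X):
    """Change of basis: P^(-1) X P."""
    return matmul(matmul(inv2(P), X), P)

def close(X, Y):
    return all(abs(X[i][j] - Y[i][j]) < 1e-9 for i in range(2) for j in range(2))

A = ((1.0, 2.0), (3.0, 4.0))
B = ((0.0, 1.0), (1.0, 1.0))
P = ((2.0, 1.0), (1.0, 1.0))  # an invertible change of basis

# Sum and matrix product are basis-invariant:
assert close(matadd(conj(P, A), conj(P, B)), conj(P, matadd(A, B)))
assert close(matmul(conj(P, A), conj(P, B)), conj(P, matmul(A, B)))
# The componentwise product is not:
assert not close(hadamard(conj(P, A), conj(P, B)), conj(P, hadamard(A, B)))
```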
linear-algebra
<p>The rotation matrix $$\pmatrix{ \cos \theta &amp; \sin \theta \\ -\sin \theta &amp; \cos \theta}$$ has complex eigenvalues $\{e^{\pm i\theta}\}$ corresponding to eigenvectors $\pmatrix{1 \\i}$ and $\pmatrix{1 \\ -i}$. The real eigenvector of a 3d rotation matrix has a natural interpretation as the axis of rotation. Is there a nice geometric interpretation of the eigenvectors of the $2 \times 2$ matrix?</p>
<p><a href="https://math.stackexchange.com/a/241399/3820">Tom Oldfield's answer</a> is great, but you asked for a geometric interpretation so I made some pictures.</p> <p>The pictures will use what I called a "phased bar chart", which shows complex values as bars that have been rotated. Each bar corresponds to a vector component, with length showing magnitude and direction showing phase. An example:</p> <p><img src="https://i.sstatic.net/8YsvT.png" alt="Example phased bar chart"></p> <p>The important property we care about is that scaling a vector corresponds to the chart scaling or rotating. Other transformations cause it to distort, so we can use it to recognize eigenvectors based on the lack of distortions. (I go into more depth in <a href="http://twistedoakstudios.com/blog/Post7254_visualizing-the-eigenvectors-of-a-rotation" rel="noreferrer">this blog post</a>.)</p> <p>So here's what it looks like when we rotate <code>&lt;0, 1&gt;</code> and <code>&lt;i, 0&gt;</code>:</p> <p><img src="https://i.sstatic.net/Uauy6.gif" alt="Rotating 0, 1"> <img src="https://i.sstatic.net/nYUhr.gif" alt="Rotating i, 0"></p> <p>Those diagrams are not just scaling/rotating. So <code>&lt;0, 1&gt;</code> and <code>&lt;i, 0&gt;</code> are not eigenvectors.</p> <p>However, they do incorporate horizontal and vertical sinusoidal motion. Any guesses what happens when we put them together?</p> <p>Trying <code>&lt;1, i&gt;</code> and <code>&lt;1, -i&gt;</code>:</p> <p><img src="https://i.sstatic.net/t80MU.gif" alt="Rotating 1, i"> <img src="https://i.sstatic.net/OObMZ.gif" alt="Rotation 1, -i"></p> <p>There you have it. The phased bar charts of the rotated eigenvectors are being rotated (corresponding to the components being phased) as the vector is turned. Other vectors get distorted charts when you turn them, so they aren't eigenvectors.</p>
<p>Lovely question!</p> <p>There is a kind of intuitive way to view the eigenvalues and eigenvectors, and it ties in with geometric ideas as well (without resorting to four dimensions!). </p> <p>The matrix is unitary (more specifically, it is real so it is called orthogonal) and so there is an orthogonal basis of eigenvectors. Here, as you noted, it is $\pmatrix{1 \\i}$ and $\pmatrix{1 \\ -i}$, let us call them $v_1$ and $v_2$, that form a basis of $\mathbb{C^2}$, and so we can write any element of $\mathbb{R^2}$ in terms of $v_1$ and $v_2$ as well, since $\mathbb{R^2}$ is a subset of $\mathbb{C^2}$. (And we normally think of rotations as occurring in $\mathbb{R^2}$! Please note that $\mathbb{C^2}$ is a two-dimensional vector space with components in $\mathbb{C}$ and need not be considered as four-dimensional, with components in $\mathbb{R}$.)</p> <p>We can then represent any vector in $\mathbb{R^2}$ uniquely as a linear combination of these two vectors $x = \lambda_1 v_1 + \lambda_2v_2$, with $\lambda_i \in \mathbb{C}$. So if we call the linear map that the matrix represents $R$</p> <p>$$R(x) = R(\lambda_1 v_1 + \lambda_2v_2) = \lambda_1 R(v_1) + \lambda_2R(v_2) = e^{i\theta}\lambda_1 (v_1) + e^{-i\theta}\lambda_2(v_2) $$</p> <p>In other words, when working in the basis $\{v_1,v_2\}$: $$R \pmatrix{\lambda_1 \\\lambda_2} = \pmatrix{e^{i\theta}\lambda_1 \\ e^{-i\theta}\lambda_2}$$</p> <p>And we know that multiplying a complex number by $e^{i\theta}$ is an anticlockwise rotation by $\theta$. So the rotation of a vector when represented by the basis $\{v_1,v_2\}$ is the same as just rotating the individual components of the vector in the complex plane!</p>
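A numeric check of this (a Python sketch, using the arbitrary angle $\theta = 0.7$): the claimed eigenvectors really are scaled by $e^{\pm i\theta}$ under the rotation matrix from the question.

```python
import cmath
import math

theta = 0.7   # an arbitrary rotation angle
R = [[math.cos(theta), math.sin(theta)],
     [-math.sin(theta), math.cos(theta)]]

def apply(M, v):
    # apply the 2x2 matrix M to the (complex) vector v
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

v1, lam1 = [1, 1j], cmath.exp(1j * theta)    # eigenpair for e^{+i theta}
v2, lam2 = [1, -1j], cmath.exp(-1j * theta)  # eigenpair for e^{-i theta}

# R v = lambda v for both eigenpairs, up to floating-point error
for v, lam in [(v1, lam1), (v2, lam2)]:
    Rv = apply(R, v)
    assert all(abs(Rv[i] - lam * v[i]) < 1e-12 for i in range(2))
```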
probability
<p>The following probability question appeared in an <a href="https://math.stackexchange.com/questions/250/a-challenge-by-r-p-feynman-give-counter-intuitive-theorems-that-can-be-transl/346#346">earlier thread</a>:</p> <blockquote> <p>I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?</p> </blockquote> <p>The claim was that it is not actually a mathematical problem and it is only a language problem.</p> <hr> <p>If one wanted to restate this problem formally the obvious way would be like so:</p> <p><strong>Definition</strong>: <em>Sex</em> is defined as an element of the set $\\{\text{boy},\text{girl}\\}$.</p> <p><strong>Definition</strong>: <em>Birthday</em> is defined as an element of the set $\\{\text{Monday},\text{Tuesday},\text{Wednesday},\text{Thursday},\text{Friday},\text{Saturday},\text{Sunday}\\}$</p> <p><strong>Definition</strong>: A <em>Child</em> is defined to be an ordered pair: (sex $\times$ birthday).</p> <p>Let $(x,y)$ be a pair of children,</p> <p>Define an auxiliary predicate $H(s,b) :\\!\\!\iff s = \text{boy} \text{ and } b = \text{Tuesday}$.</p> <p>Calculate $P(x \text{ is a boy and } y \text{ is a boy}|H(x) \text{ or } H(y))$</p> <p><em>I don't see any other sensible way to formalize this question.</em></p> <hr> <p>To actually solve this problem now requires no thought (infact it is thinking which leads us to guess incorrect answers), we just compute</p> <p>$$ \begin{align*} &amp; P(x \text{ is a boy and } y \text{ is a boy}|H(x) \text{ or } H(y)) \\\\ =&amp; \frac{P(x\text{ is a boy and }y\text{ is a boy and }(H(x)\text{ or }H(y)))} {P(H(x)\text{ or }H(y))} \\\\ =&amp; \frac{P((x\text{ is a boy and }y\text{ is a boy and }H(x))\text{ or }(x\text{ is a boy and }y\text{ is a boy and }H(y)))} {P(H(x)) + P(H(y)) - P(H(x))P(H(y))} \\\\ =&amp; \frac{\begin{align*} &amp;P(x\text{ is a boy and }y\text{ is a boy and }x\text{ born on Tuesday}) \\\\ + &amp;P(x\text{ is a boy and }y\text{ is a boy and }y\text{ born 
on Tuesday}) \\\\ - &amp;P(x\text{ is a boy and }y\text{ is a boy and }x\text{ born on Tuesday and }y\text{ born on Tuesday}) \\\\ \end{align*}} {P(H(x)) + P(H(y)) - P(H(x))P(H(y))} \\\\ =&amp; \frac{1/2 \cdot 1/2 \cdot 1/7 + 1/2 \cdot 1/2 \cdot 1/7 - 1/2 \cdot 1/2 \cdot 1/7 \cdot 1/7} {1/2 \cdot 1/7 + 1/2 \cdot 1/7 - 1/2 \cdot 1/7 \cdot 1/2 \cdot 1/7} \\\\ =&amp; 13/27 \end{align*} $$</p> <hr> <p>Now what I am wondering is, does this refute the claim that this puzzle is just a language problem or add to it? Was there a lot of room for misinterpreting the questions which I just missed? </p>
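The computation can also be confirmed by brute-force enumeration of the $14 \times 14$ equally likely (sex, birthday) pairs, under the same independence assumptions as the formalization above (a Python sketch, not part of the original question):

```python
from fractions import Fraction
from itertools import product

sexes = ["boy", "girl"]
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
children = list(product(sexes, days))          # 14 equally likely (sex, birthday) pairs

def H(child):
    # the auxiliary predicate: a boy born on a Tuesday
    return child == ("boy", "Tue")

families = list(product(children, repeat=2))   # all equally likely ordered pairs (x, y)

conditioned = [f for f in families if H(f[0]) or H(f[1])]
both_boys = [f for f in conditioned if f[0][0] == "boy" and f[1][0] == "boy"]

p = Fraction(len(both_boys), len(conditioned))
assert p == Fraction(13, 27)   # 27 conditioned families, 13 with two boys
```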
<p>There are even trickier aspects to this question. For example, what is the strategy of the guy telling you about his family? If he always mentions a boy first and not a daughter, we get one probability; if he talks about the sex of the first born child, we get a different probability. Your calculation makes a choice in this issue - you choose the version of "if the father has a boy and a girl, he'll mention the boy".</p> <p>What I'm aiming to is this: the question is not well-defined mathematically. It has several possible interpretations, and as such the "problem" here is indeed of the language; or more correctly, the fact that a simple statement in English does not convey enough information to specify the precise model for the problem.</p> <p>Let's look at a simplified version without days. The probability space for the make-up of the family is {BB, GB, BG, GG} (GB means "an older girl and a small boy", etc). We want to know what is $P(BB|A)$ where A is determined by the way we interpret the statement about the boys. Now let's look at different possible interpretations.</p> <p>1) If there is a boy in the family, the statement will mention him. In this case A={BB,BG,GB} and so the probability is $1/3$.</p> <p>2) If there is a girl in the family, the statement will mention her. In this case, since the statement talked about a boy, there are NO girls in the family. So A={BB} and so the probability is 1.</p> <p>3) The statement talks about the sex of the firstborn. In this case A={BB,BG} and so the probability is $1/2$.</p> <p>The bottom line: The statement about the family looks "constant" to us, but it must be looked as a function from the random state of the family - and there are several different possible functions, from which you must choose one otherwise no probabilistic analysis of the situation will make sense.</p>
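The three interpretations can be checked by enumerating the four equally likely family types (a Python sketch; the encoding of each disclosure rule as an event is mine):

```python
from fractions import Fraction

# sample space for the no-days version: (older child, younger child), equally likely
families = [("B", "B"), ("B", "G"), ("G", "B"), ("G", "G")]

def p_two_boys(event):
    # P(BB | A), where A is the event picked out by the chosen interpretation
    hits = [f for f in families if event(f)]
    return Fraction(sum(1 for f in hits if f == ("B", "B")), len(hits))

# 1) the statement mentions a boy whenever there is one: A = {BB, BG, GB}
assert p_two_boys(lambda f: "B" in f) == Fraction(1, 3)
# 2) the statement mentions a girl whenever there is one, so here there are none: A = {BB}
assert p_two_boys(lambda f: "G" not in f) == 1
# 3) the statement reports the sex of the firstborn: A = {BB, BG}
assert p_two_boys(lambda f: f[0] == "B") == Fraction(1, 2)
```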
<p>It is actually <em>impossible</em> to have a unique and unambiguous answer to the puzzle without explicitly articulating a probability model for how the information on gender and birthday is generated. The reason is that (1) for the problem to have a unique answer some random process is required, and (2) the answer is a function of which random model is used.</p> <ol> <li><p>The problem assumes that a unique probability can be deduced as the answer. This requires that the set of children described is chosen by a random process, otherwise the number of boys is a deterministic quantity and the probability would be 0 or 1 but with no ability to determine which is the case. More generally one can consider random processes that produce the complete set of information referenced in the problem: choose a parent, then choose what to reveal about the number, gender, and birth days of its children.</p></li> <li><p>The answer depends on which random process is used. If the Tuesday birth is disclosed only when there are two boys, the probability of two boys is 1. If Tuesday birth is disclosed only when there is a sister, the probability of two boys is 0. The answer could be any number between 0 or 1 depending on what process is assumed to produce the data. </p></li> </ol> <p>There is also a linguistic question of how to interpret "one is a boy born on Tuesday". It could mean that the number of Tuesday-born males is exactly one, or at least one child.</p>
probability
<p>If someone asked me what it meant for $X$ to be standard normally distributed, I would tell them it means $X$ has probability density function $f(x) = \frac{1}{\sqrt{2\pi}}\mathrm e^{-x^2/2}$ for all $x \in \mathbb{R}$.</p> <p>More rigorously, I could alternatively say that $f$ is the Radon-Nikodym derivative of the distribution measure of $X$ w.r.t. the Lebesgue measure on $\mathbb{R}$, or $f = \frac{\mathrm d \mu_X}{\mathrm d\lambda}$. As I understand it, $f$ re-weights the values $x \in \mathbb{R}$ in such a way that $$ \int_B \mathrm d\mu_X = \int_B f\, \mathrm d\lambda $$ for all Borel sets $B$. In particular, the graph of $f$ lies below one everywhere: <a href="https://i.sstatic.net/2Uit5.jpg" rel="noreferrer"><img src="https://i.sstatic.net/2Uit5.jpg" alt="normal pdf"></a> </p> <p>so it seems like $f$ is re-weighting each $x \in \mathbb{R}$ to a smaller value, but I don't really have any intuition for this. I'm seeking more insight into viewing $f$ as a change of measure, rather than a sort of distribution describing how likely $X$ is. </p> <p>In addition, does it make sense to ask "which came first?" The definition for the standard normal pdf as just a function used to compute probabilities, or the pdf as a change of measure?</p>
<p>Your understanding of the basic math itself seems pretty solid, so I'll just try to provide some extra intuition.</p> <p>When we integrate a function $g$ with respect to the Lebesgue measure $\lambda$, we find its "area under the curve" or "volume under the surface", etc... This is obvious since the Lebesgue measure assigns the ordinary notion of length (area, etc) to all possible integration regions over the domain of $g$. Therefore, I say that integrating with respect to the Lebesgue measure (which is equivalent in value to Riemannian integration) is a <em>calculation to find the "volume" of some function.</em></p> <p>Let's pretend for a moment that when performing integration, we are always forced to do it over the <em>entire</em> domain of the integrand. Meaning we are only allowed to compute $$\int_B g \,d\lambda\ \ \ \ \text{if}\ \ \ \ B=\mathbb{R}^n$$ where $\mathbb{R}^n$ is assumed to be the entire domain of $g$.</p> <p>With that restriction, what could we do if we only cared about the volume of $g$ over the region $B$? Well, we could define an <a href="https://en.wikipedia.org/wiki/Indicator_function" rel="noreferrer">indicator function</a> for the set $B$ and integrate its product with $g$, $$\int_{\mathbb{R}^n} \mathbf{1}_B g \,d\lambda$$</p> <p>When we do something like this, we are taking the mindset that our goal is to nullify $g$ wherever we don't care about it... but that isn't the only way to think about it. We can instead try to nullify $\mathbb{R}^n$ itself wherever we don't care about it. We would compute the integral then as, $$\int_{\mathbb{R}^n} g \,d\mu$$ where $\mu$ is a measure that behaves just like $\lambda$ for Borel sets that are subsets of $B$, but returns zero for Borel sets that have no intersection with $B$. 
Using this measure, it doesn't matter that $g$ has value outside of $B$, because $\mu$ will give that support no consideration.</p> <p>Obviously, these integrals are just different ways to think about the same thing, $$\int_{\mathbb{R}^n} g \,d\mu = \int_{\mathbb{R}^n} \mathbf{1}_B g \,d\lambda$$ The function $\mathbf{1}_B$ is clearly the density of $\mu$, its Radon–Nikodym derivative with respect to the Lebesgue measure, or by directly matching up symbols in the equation, $$d\mu = f\,d\lambda$$ where here $f = \mathbf{1}_B$. The reason for showing you all this was to show how we can <em>think of changing measure as a way to tell an integral how to only compute the volume we <strong>care</strong> about</em>. Changing measure allows us to discount parts of the support of $g$ instead of discounting parts of $g$ itself, and the Radon–Nikodym chain rule formalizes their equivalence.</p> <p>The cool thing about this is that our measures don't have to be as bipolar as the $\mu$ I constructed above. They don't have to completely not care about support outside $B$, but instead can just care about support outside $B$ <em>less</em> than inside $B$.</p> <p>Think about how we might find the total mass of some physical object. We integrate over all of space (the <em>entire</em> domain where particles can exist) but use a measure $m$ that returns larger values for regions in space where there is "more mass" and smaller values (down to zero) for regions in space where there is "less mass". It doesn't have to be just mass vs no-mass, it can be everything in between too, and the Radon–Nikodym derivative of this measure is indeed the literal "density" of the object.</p>
When we consider these Borel sets to model physical "events", this notion makes intuitive modeling sense... we are just defining the probability (measure) of <em>anything</em> happening to be 1.</p> <p>But why 1? <strong>Arbitrary convenience.</strong> In fact, some people don't use 1! Some people use 100. Those people are said to use the "percent" convention. What is the probability that if I flip this coin, it lands on heads or tails? 100... percent. We could have used literally any positive real number, but 1 is just a nice choice. Note that the Lebesgue measure is not a probability measure because $\lambda(\mathbb{R}^n) = \infty$.</p> <p>Anyway, what people are doing with probability is designing a measure that models how much significance they give to various events - which are Borel sets, which are regions in the domain; they are just defining how much they value parts of the domain itself. As we saw before with the measure $\mu$ I constructed, the easiest way to write down your measure is by writing its density.</p> <p>Fun to note: "expected value" of $g$ is just its volume with respect to the given probability measure $P$, and "covariance" of $g$ with $h$ is just their inner product with respect to $P$. Letting $\Omega$ be the entire domain of both $g$ and $h$ (also known as the sample space), if $g$ and $h$ have zero mean, $$\operatorname{cov}(g, h) = \int_{x \in \Omega}g(x)h(x)f(x)\ dx = \int_{\Omega}gh\ dP = \langle g, h \rangle_P$$</p> <p>I'll let you show that the <a href="https://en.wikipedia.org/wiki/Pearson_correlation_coefficient#Definition" rel="noreferrer">correlation coefficient</a> for $g$ and $h$ is just the "cosine of the angle between them".</p> <p>Hope this helps! Measure theory is definitely the modern way of viewing things, and people began to understand "weighted Riemannian integrals" well before they realized the other viewpoint: "weighting" the domain instead of the integrand. 
Many people attribute this viewpoint's birth to Lebesgue integration, where the operation of integration was first (notably) restated in terms of an arbitrary measure, as opposed to Riemannian integration, which tacitly <em>always</em> assumed the Lebesgue measure.</p> <p>I noticed you brought up the normal distribution specifically. The normal distribution is special for a lot of reasons, but it is by no means some de facto probability density. There are an infinite number of equally valid probability measures (with their associated densities). The normal distribution is really only so important because of the <a href="https://en.wikipedia.org/wiki/Central_limit_theorem" rel="noreferrer">central limit theorem</a>.</p>
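To make the "weighting the domain" picture concrete, here is a rough numerical sketch in Python (a plain midpoint sum, truncating the real line to $[-10, 10]$; the tail mass beyond that is negligible): integrating $1$ and $x^2$ against the standard normal density recovers the total probability $1$ and the second moment $1$.

```python
import math

def phi(x):
    # density of the standard normal: the Radon-Nikodym derivative d(mu_X)/d(lambda)
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def integral(g, a=-10.0, b=10.0, n=100_000):
    # midpoint Riemann sum for \int_a^b g d(mu_X) = \int_a^b g * phi d(lambda)
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        x = a + (k + 0.5) * h
        total += g(x) * phi(x)
    return total * h

total_mass = integral(lambda x: 1.0)       # mu_X(R): 1, so mu_X is a probability measure
second_moment = integral(lambda x: x * x)  # E[X^2] = Var(X) = 1 for the standard normal

assert abs(total_mass - 1.0) < 1e-6
assert abs(second_moment - 1.0) < 1e-6
```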
<p>The case you are referring to is valid. In your example, the Radon–Nikodym derivative serves as a reweighting of the Lebesgue measure, and it turns out that this derivative is the pdf of the given distribution. </p> <p>However, the Radon–Nikodym derivative is a more general concept. Your example converts the Lebesgue measure to a normal probability measure, whereas a Radon–Nikodym derivative can be used to convert any measure to another measure as long as they meet certain technical conditions. </p> <p>A quick recap of the intuition behind a measure. A measure is a set function that takes a set as an input and returns a non-negative number as output. For example, length, volume, weight, and probability are all examples of measures. </p> <p>So what if I have one measure that returns length in meters and another measure that returns length in kilometers? A Radon–Nikodym derivative converts between these two measures. What is the Radon–Nikodym derivative in this case? It is the constant 1000. </p> <p>Similarly, another Radon–Nikodym derivative can be used to convert a measure that returns weight in kg to another measure that returns weight in lbs. </p> <p>Back to your example, the pdf is used to convert the Lebesgue measure to a normal probability measure, but this is just one example of a change of measure. </p> <p>Starting from the Lebesgue measure, you can define Radon–Nikodym derivatives that generate other useful measures (not necessarily probability measures). </p> <p>Hope this clarifies it. </p>
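The meters/kilometers example in miniature (a Python sketch; the interval endpoints are arbitrary): the derivative of the meter measure with respect to the kilometer measure is the constant density 1000.

```python
from fractions import Fraction

def mu_km(a, b):
    # length of the interval [a, b], measured in kilometres
    return Fraction(b - a)

# the Radon-Nikodym derivative d(mu_m)/d(mu_km) is the constant 1000
RN = Fraction(1000)

def mu_m(a, b):
    # the same length measured in metres: integrate the constant density 1000
    return RN * mu_km(a, b)

assert mu_m(2, 5) == 3000   # 3 km = 3000 m
```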
matrices
<blockquote> <p>Given a field $F$ and a subfield $K$ of $F$. Let $A$, $B$ be $n\times n$ matrices such that all the entries of $A$ and $B$ are in $K$. Is it true that if $A$ is similar to $B$ in $F^{n\times n}$ then they are similar in $K^{n\times n}$?</p> </blockquote> <p>Any help ... thanks!</p>
<p>If the fields are infinite, there is an easy proof.</p> <p>Let $F \subseteq K$ be a field extension with $F$ infinite. Let $A, B \in \mathcal{M}_n(F)$ be two square matrices that are similar over $K$. So there is a matrix $M \in \mathrm{GL}_n(K)$ such that $AM = MB$. We can write: $$ M = M_1 e_1 + \dots + M_r e_r, $$ with $M_i \in \mathcal{M}_n(F)$ and $\{ e_1, \dots, e_r \}$ an $F$-linearly independent subset of $K$. So we have $A M_i = M_i B$ for every $i = 1,\dots, r$. Consider the polynomial $$ P(t_1, \dots, t_r) = \det( t_1 M_1 + \dots + t_r M_r) \in F[t_1, \dots, t_r ]. $$ Since $\det M \neq 0$, $P(e_1, \dots, e_r) \neq 0$, hence $P$ is not the zero polynomial. Since $F$ is infinite, there exist $\lambda_1, \dots, \lambda_r \in F$ such that $P(\lambda_1, \dots, \lambda_r) \neq 0$. Picking $N = \lambda_1 M_1 + \dots + \lambda_r M_r$, we have $N \in \mathrm{GL}_n(F)$ and $A N = N B$.</p>
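A toy illustration of this argument in Python (the matrices $A$, $B$ and the complex $M$ are my own example, with $F=\mathbb{R}$ inside $K=\mathbb{C}$ and the basis $\{1, i\}$): split $M$ into its real and imaginary parts and search for a real invertible combination.

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

A = [[0, -1], [1, 0]]             # rotation by 90 degrees, entries in F
B = [[0, 1], [-1, 0]]             # its inverse, entries in F
M = [[1 + 1j, 1], [1, -1 - 1j]]   # an invertible complex M with AM = MB

assert det(M) != 0 and matmul(A, M) == matmul(M, B)

# write M = M1 + i*M2 with real M1, M2; then A*Mi = Mi*B for each part,
# and det(t1*M1 + t2*M2) is not the zero polynomial (it is nonzero at (1, i))
M1 = [[z.real for z in row] for row in M]
M2 = [[z.imag for z in row] for row in M]

# since R is infinite, some real (l1, l2) avoids the zero set of that polynomial
N = None
for l1 in range(-2, 3):
    for l2 in range(-2, 3):
        cand = [[l1 * M1[i][j] + l2 * M2[i][j] for j in range(2)] for i in range(2)]
        if det(cand) != 0:
            N = cand

assert N is not None and det(N) != 0
assert matmul(A, N) == matmul(N, B)   # a real similarity between A and B
```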
<blockquote> <p><strong>THEOREM 1.</strong> Let <span class="math-container">$E$</span> be a field, let <span class="math-container">$F$</span> be a subfield, and let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be <span class="math-container">$n$</span> by <span class="math-container">$n$</span> matrices with coefficients in <span class="math-container">$F$</span>. If <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are similar over <span class="math-container">$E$</span>, they are similar over <span class="math-container">$F$</span>.</p> </blockquote> <p>This is an immediate consequence of</p> <blockquote> <p><strong>THEOREM 2.</strong> In the above setting, let <span class="math-container">$X$</span> be an indeterminate, and let <span class="math-container">$g_k(A)\in F[X]$</span>, <span class="math-container">$1\le k\le n$</span>, be the monic gcd of the determinants of all the <span class="math-container">$k$</span> by <span class="math-container">$k$</span> submatrices of <span class="math-container">$X-A$</span>. Then <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are similar over <span class="math-container">$F$</span> if and only if <span class="math-container">$g_k(A)=g_k(B)$</span> for all <span class="math-container">$k$</span>.</p> </blockquote> <p>References:</p> <p><a href="http://books.google.com/books?id=_K04QgAACAAJ&amp;dq=inauthor:%22Nathan+Jacobson%22&amp;hl=en&amp;ei=4YxHTqjgJYfBswaugvzBBw&amp;sa=X&amp;oi=book_result&amp;ct=result&amp;resnum=3&amp;ved=0CDYQ6AEwAg" rel="nofollow noreferrer">Basic Algebra I: Second Edition</a>, Jacobson, N., Section 3.10.</p> <p><a href="http://books.google.com/books?id=YHuYOwAACAAJ&amp;dq=intitle%3Asurvey%20intitle%3Aalgebra%20inauthor%3Abirkhoff&amp;source=gbs_book_other_versions" rel="nofollow noreferrer">A Survey of Modern Algebra</a>, Birkhoff, G. and Lane, S.M., 2008. 
In the 1999 edition it was in Section XI.8, titled &quot;The Calculation of Invariant Factors&quot;.</p> <p><a href="http://books.google.com/books?id=Dc-TU2Iub6sC&amp;dq=intitle:algebre+intitle:4+intitle:7+inauthor:bourbaki&amp;source=gbs_navlinks_s" rel="nofollow noreferrer">Algèbre: Chapitres 4 à 7</a>, Nicolas Bourbaki. Translation: <a href="http://books.google.com/books?id=-10_AQAAIAAJ&amp;dq=bibliogroup:%22Algebra+II%22&amp;hl=en&amp;ei=wYRnTvHiFc3BtAa69eT8Cg&amp;sa=X&amp;oi=book_result&amp;ct=result&amp;resnum=1&amp;ved=0CCkQ6AEwAA" rel="nofollow noreferrer">Algebra II</a>.</p> <p>(I haven't found online references.)</p> <p>Here is the sketch of a proof of Theorem 2.</p> <p><strong>EDIT</strong> [This edit follows Soarer's interesting comment.] Each of the formulas <span class="math-container">$fv:=f(A)v$</span> and <span class="math-container">$fv:=f(B)v$</span> (for all <span class="math-container">$f\in F[X]$</span> and all <span class="math-container">$v\in F^n$</span>) defines on <span class="math-container">$F^n$</span> a structure of finitely generated module over the principal ideal domain <span class="math-container">$F[X]$</span>. Moreover, <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are similar if and only if the corresponding modules are isomorphic. The good news is that a wonderful theory for the finitely generated modules over a principal ideal domain is freely available to us. <strong>TIDE</strong></p> <blockquote> <p><strong>THEOREM 3.</strong> Let <span class="math-container">$A$</span> be a principal ideal domain and <span class="math-container">$M$</span> a finitely generated <span class="math-container">$A$</span>-module.
Then <span class="math-container">$M$</span> is isomorphic to <span class="math-container">$\oplus_{i=1}^nA/(a_i)$</span>, where the <span class="math-container">$a_i$</span> are elements of <span class="math-container">$A$</span> satisfying <span class="math-container">$a_1\mid a_2\mid\cdots\mid a_n$</span>. [As usual <span class="math-container">$(a)$</span> is the ideal generated by <span class="math-container">$a$</span> and <span class="math-container">$a\mid b$</span> means &quot;<span class="math-container">$a$</span> divides <span class="math-container">$b$</span>&quot;.] Moreover the ideals <span class="math-container">$(a_i)$</span> are uniquely determined by these conditions.</p> </blockquote> <p>Let <span class="math-container">$K$</span> be the field of fractions of <span class="math-container">$A$</span>, and <span class="math-container">$S$</span> a submodule of <span class="math-container">$A^n$</span>. The maximum number of linearly independent elements of <span class="math-container">$S$</span> is also the dimension of the vector subspace of <span class="math-container">$K^n$</span> generated by <span class="math-container">$S$</span>. 
Thus this integer, called the <strong>rank</strong> of <span class="math-container">$S$</span>, only depends on the isomorphism class of <span class="math-container">$S$</span> and is additive with respect to finite direct sums.</p> <blockquote> <p><strong>THEOREM 4.</strong> In the above setting we have:</p> <p>(a) <span class="math-container">$S$</span> is free of rank <span class="math-container">$r\le n$</span>.</p> <p>(b) There is a basis <span class="math-container">$u_1,\dots,u_n$</span> of <span class="math-container">$A^n$</span> and there are elements <span class="math-container">$a_1,\dots,a_r$</span> of <span class="math-container">$A$</span> such that <span class="math-container">$a_1u_1,\dots,a_ru_r$</span> is a basis of <span class="math-container">$S$</span>, and <span class="math-container">$a_1\mid a_2\mid\cdots\mid a_r$</span>.</p> </blockquote> <p>Let <span class="math-container">$A$</span> be a commutative ring with one. Recall that <span class="math-container">$A$</span> is a <strong>principal ideal ring</strong> if all its ideals are principal, and that <span class="math-container">$A$</span> is a <strong>Bézout ring</strong> if all its <em>finitely generated</em> ideals are principal.</p> <blockquote> <p><strong>LEMMA.</strong> Let <span class="math-container">$A$</span> be a Bézout ring and let <span class="math-container">$c,d$</span> be in <span class="math-container">$A$</span>. Let <span class="math-container">$\Phi$</span> be a set of ideals of <span class="math-container">$A$</span> such that: <span class="math-container">$(c)$</span> and <span class="math-container">$(d)$</span> are in <span class="math-container">$\Phi$</span>; <span class="math-container">$(c)$</span> is maximal in <span class="math-container">$\Phi$</span>; and <span class="math-container">$(ac+bd)\in\Phi$</span> for all <span class="math-container">$a,b\in A$</span>. 
Then <span class="math-container">$c$</span> divides <span class="math-container">$d$</span> [equivalently <span class="math-container">$(c)$</span> contains <span class="math-container">$(d)$</span>].</p> </blockquote> <p><strong>Proof.</strong> We have <span class="math-container">$(c,d)=(ac+bd)$</span> for some <span class="math-container">$a,b\in A$</span>. This ideal belongs to <span class="math-container">$\Phi$</span>, contains <span class="math-container">$(c)$</span> and is thus equal to <span class="math-container">$(c)$</span>. Hence we get <span class="math-container">$(d)\subset(c,d)=(c)$</span>. <strong>QED</strong></p> <blockquote> <p><strong>PROPOSITION 1.</strong> Let <span class="math-container">$A$</span> be a principal ideal ring and <span class="math-container">$f$</span> an <span class="math-container">$A$</span>-valued bilinear map defined on a product of two <span class="math-container">$A$</span>-modules. Then the image of <span class="math-container">$f$</span> is an ideal.</p> </blockquote> <p><strong>Proof.</strong> Let <span class="math-container">$\Phi$</span> be the set of all ideals of the form <span class="math-container">$(f(x,y))$</span>; pick <span class="math-container">$x,y$</span> such that <span class="math-container">$(f(x,y))$</span> is maximal in <span class="math-container">$\Phi$</span>; and let <span class="math-container">$(f(u,v))$</span> be another element of <span class="math-container">$\Phi$</span>. 
It suffices to show that <span class="math-container">$f(x,y)\mid f(u,v)$</span>.</p> <p>Claim: <span class="math-container">$f(x,y)\mid f(x,v)$</span> and <span class="math-container">$f(x,y)\mid f(u,y)$</span>.</p> <p>Since we have <span class="math-container">$af(x,y)+bf(x,v)=f(x,ay+bv)$</span> and <span class="math-container">$af(x,y)+bf(u,y)=f(ax+bu,y)$</span>, the claim follows from the lemma.</p> <p>By the claim we have <span class="math-container">$f(u,y)=af(x,y)$</span> and <span class="math-container">$f(x,v)=bf(x,y)$</span> for some <span class="math-container">$a,b\in A$</span>. Setting <span class="math-container">$u'=u-ax$</span>, <span class="math-container">$v'=v-by$</span> we get <span class="math-container">$f(x,v')=0=f(u',y)$</span> and thus <span class="math-container">$af(x,y)+bf(u',v')=f(ax+bu',y+v')$</span>. Now the lemma yields the conclusion. <strong>QED</strong></p> <p>We assume now that <span class="math-container">$A$</span> is a principal ideal <em>domain</em>.</p> <p><strong>Proof of Theorem 4.</strong> We assume (as we may) that <span class="math-container">$S$</span> is nonzero, and we let <span class="math-container">$f:A^n\times A^n\to A$</span> be the dot product. By Proposition 1 the set <span class="math-container">$f(S\times A^n)$</span> is an ideal. Let <span class="math-container">$a_1=f(s_1,y_1)$</span> be a generator of this ideal. [Naively: <span class="math-container">$a_1$</span> is a gcd of the coordinates of the elements of <span class="math-container">$S$</span>.] Clearly, <span class="math-container">$u_1:=s_1/a_1$</span> is in <span class="math-container">$A^n$</span> and <span class="math-container">$f(u_1,y_1)=1$</span>. Moreover we have <span class="math-container">$$ A^n=Au_1\oplus (y_1)^\perp,\qquad S=As_1\oplus(S\cap(y_1)^\perp), $$</span> where <span class="math-container">$(y_1)^\perp$</span> is the orthogonal of <span class="math-container">$y_1$</span>.
[The corresponding projection <span class="math-container">$A^n\twoheadrightarrow Au_1$</span> is given by <span class="math-container">$x\mapsto f(x,y_1)\,u_1$</span>.] Then (a) follows by induction on <span class="math-container">$r$</span>. Let us prove (b). By (a) we know that <span class="math-container">$(y_1)^\perp$</span> and <span class="math-container">$S\cap(y_1)^\perp$</span> are free of rank <span class="math-container">$n-1$</span> and <span class="math-container">$r-1$</span>. By the induction hypothesis there is a basis <span class="math-container">$u_2,\dots,u_n$</span> of <span class="math-container">$(y_1)^\perp$</span> and there are elements <span class="math-container">$a_2,\dots,a_r$</span> of <span class="math-container">$A$</span> such that <span class="math-container">$a_2u_2,\dots,a_ru_r$</span> is a basis of <span class="math-container">$S\cap (y_1)^\perp$</span> and <span class="math-container">$a_2\mid a_3\mid\cdots\mid a_r$</span>. It only remains to show <span class="math-container">$a_1\mid a_2$</span>. We have <span class="math-container">$a_1\mid f(s,y)$</span> for all <span class="math-container">$(s,y)\in S\times A^n$</span>. There is a <span class="math-container">$y$</span> in <span class="math-container">$A^n$</span> such that <span class="math-container">$f(u_2,y)=1$</span>. Indeed, since the determinant of <span class="math-container">$(f(u_i,e_j))$</span> is a unit, no prime of <span class="math-container">$A$</span> can divide <span class="math-container">$f(u_2,e_i)$</span> for all <span class="math-container">$i$</span>, and we get <span class="math-container">$a_1\mid f(a_2u_2,y)=a_2$</span>.
<strong>QED</strong></p> <p><strong>Proof of Theorem 3.</strong> First statement: Let <span class="math-container">$v_1,\dots,v_n$</span> be generators of the <span class="math-container">$A$</span>-module <span class="math-container">$M$</span>, let <span class="math-container">$(e_i)$</span> be the canonical basis of <span class="math-container">$A^n$</span>, and let <span class="math-container">$\phi:A^n\twoheadrightarrow M$</span> be the <span class="math-container">$A$</span>-linear surjection mapping <span class="math-container">$e_i$</span> to <span class="math-container">$v_i$</span>. Applying Theorem 4 to the submodule <span class="math-container">$\operatorname{Ker}\phi$</span> of <span class="math-container">$A^n$</span>, we get a basis <span class="math-container">$u_1,\dots,u_n$</span> of <span class="math-container">$A^n$</span> and elements <span class="math-container">$a_1,\dots,a_r$</span> of <span class="math-container">$A$</span> such that <span class="math-container">$a_1u_1,\dots,a_ru_r$</span> is a basis of <span class="math-container">$\operatorname{Ker}\phi$</span> and <span class="math-container">$a_1\mid a_2\mid\cdots\mid a_r$</span>, and we set <span class="math-container">$a_{r+1}=\cdots=a_n=0$</span>. Then <span class="math-container">$M$</span> is isomorphic to <span class="math-container">$\oplus_{i=1}^nA/(a_i)$</span>, where the <span class="math-container">$a_i$</span> are as in Theorem 3.</p> <p>Second statement: Assume that <span class="math-container">$M$</span> is also isomorphic to <span class="math-container">$\oplus_{i=1}^mA/(b_i)$</span>, where the <span class="math-container">$b_i$</span> satisfy the same conditions as the <span class="math-container">$a_i$</span>. We only need to prove <span class="math-container">$m=n$</span> and <span class="math-container">$(a_i)=(b_i)$</span> for all <span class="math-container">$i$</span>. Let <span class="math-container">$p\in A$</span> be a prime. 
By the Chinese Remainder Theorem [see below] it suffices to prove the above equalities in the case where <span class="math-container">$M$</span> is the direct sum of a finite family of modules of the form <span class="math-container">$M_i:=A/(p^{i+1})$</span> for <span class="math-container">$i\ge0$</span>. For each <span class="math-container">$j\ge0$</span> the quotient <span class="math-container">$p^jM/p^{j+1}M$</span> is an <span class="math-container">$A/(p)$</span>-vector space of finite dimension <span class="math-container">$n_j$</span>. We claim that the multiplicity of <span class="math-container">$A/(p^{i+1})$</span> in <span class="math-container">$M$</span> is then <span class="math-container">$n_i-n_{i+1}$</span>.</p> <p>Here is a way to see this. Form the polynomial <span class="math-container">$M(X):=\sum n_jX^j$</span> (where <span class="math-container">$X$</span> is an indeterminate). We have <span class="math-container">$$ M_i(X)=1+X+X^2+\cdots+X^i=\frac{X^{i+1}-1}{X-1}\ , $$</span> and we must solve <span class="math-container">$\sum\,m_i\,M_i(X)=\sum\,n_j\,X^j$</span> for the <span class="math-container">$m_i$</span>, where the <span class="math-container">$n_j$</span> are considered as known quantities (almost all equal to zero). Multiplying through by <span class="math-container">$X-1$</span> we get <span class="math-container">$$ \sum\,m_{i-1}\,X^i-\sum\,m_i=\sum\,(n_{i-1}-n_i)\,X^i, $$</span> whence the formula. <strong>QED</strong></p> <blockquote> <p><strong>PROPOSITION 2.</strong> Let <span class="math-container">$0\to A^r\overset f{\to}A^n\to M\to0$</span> be an exact sequence of <span class="math-container">$A$</span>-modules. 
Then there are bases of <span class="math-container">$A^r$</span> and <span class="math-container">$A^n$</span> making the matrix of <span class="math-container">$f$</span> of the form <span class="math-container">$$ \begin{bmatrix} a_1\\ &amp;\ddots\\ &amp;&amp;a_r\\ {}\\ {}\\ {} \end{bmatrix} $$</span> where only the nonzero entries are indicated. The ideals <span class="math-container">$(a_i)$</span> coincide with the ones given by Theorem 3. Moreover, if <span class="math-container">$\alpha$</span> is the matrix of <span class="math-container">$f$</span> relative to arbitrary bases of <span class="math-container">$A^r$</span> and <span class="math-container">$A^n$</span>, then the ideal of <span class="math-container">$A$</span> generated by the <span class="math-container">$k$</span>-minors of <span class="math-container">$\alpha$</span> is <span class="math-container">$(a_1a_2\cdots a_k)$</span>.</p> </blockquote> <p><strong>Proof.</strong> It suffices to prove the last sentence because the other statements follow immediately from Theorems 3 and 4. Let <span class="math-container">$\beta$</span> and <span class="math-container">$\gamma$</span> be rectangular matrices with entries in <span class="math-container">$A$</span> such that the product <span class="math-container">$\beta\gamma$</span> is defined. Clearly, if an element of <span class="math-container">$A$</span> divides each entry of <span class="math-container">$\beta$</span>, or if it divides each entry of <span class="math-container">$\gamma$</span>, then it divides each entry of <span class="math-container">$\beta\gamma$</span>. A similar statement holds if we replace <span class="math-container">$\beta$</span> and <span class="math-container">$\gamma$</span> with <span class="math-container">$\bigwedge^k\beta$</span> and <span class="math-container">$\bigwedge^k\gamma$</span>. 
Thus, multiplying <span class="math-container">$\beta$</span> on the left or on the right by an invertible matrix does not change the ideal of <span class="math-container">$A$</span> generated by the <span class="math-container">$k$</span>-minors. <strong>QED</strong></p> <p><strong>Proof of Theorem 2.</strong> We will apply Proposition 2 to the principal ideal domain <span class="math-container">$F[X]$</span>. It suffices to find an exact sequence of the form <span class="math-container">$$ 0\to F[X]^n\xrightarrow{X-A}F[X]^n\xrightarrow\phi F^n\to0. $$</span> We do this in a slightly more general setting:</p> <p>Let <span class="math-container">$K$</span> be a commutative ring, let <span class="math-container">$M$</span> be a <span class="math-container">$K$</span>-module, let <span class="math-container">$f$</span> be an endomorphism of <span class="math-container">$M$</span>, let <span class="math-container">$X$</span> be an indeterminate, and let <span class="math-container">$M[X]$</span> be the <span class="math-container">$K[X]$</span>-module of polynomials in <span class="math-container">$X$</span> with coefficients in <span class="math-container">$M$</span>. [In particular, any <span class="math-container">$K$</span>-basis of <span class="math-container">$M$</span> is a <span class="math-container">$K[X]$</span>-basis of <span class="math-container">$M[X]$</span>.] Equip <span class="math-container">$M$</span> and <span class="math-container">$M[X]$</span> with the <span class="math-container">$K[X]$</span>-module structures characterized by <span class="math-container">$$ X^i\cdot x=f^ix,\qquad X^i\cdot X^jx=X^{i+j}x $$</span> for all <span class="math-container">$i,j$</span> in <span class="math-container">$\mathbb N$</span> and all <span class="math-container">$x$</span> in <span class="math-container">$M$</span>. 
Let <span class="math-container">$\phi$</span> be the <span class="math-container">$K[X]$</span>-linear map from <span class="math-container">$M[X]$</span> to <span class="math-container">$M$</span> satisfying <span class="math-container">$\phi(X^ix)=f^ix$</span> for all <span class="math-container">$i,x$</span>, and denote again by <span class="math-container">$f:M[X]\to M[X]$</span> the <span class="math-container">$K[X]$</span>-linear extension of <span class="math-container">$f:M\to M$</span>. It is enough to check that the sequence <span class="math-container">$$ 0\to M[X]\xrightarrow{X-f}M[X]\xrightarrow{\phi}M\to0 $$</span> is exact. The only nontrivial inclusion to verify is <span class="math-container">$\operatorname{Ker}\phi\subset\operatorname{Im}(X-f)$</span>. For <span class="math-container">$x=\sum_{i\ge0}X^ix_i$</span> in <span class="math-container">$\operatorname{Ker}\phi$</span>, we have <span class="math-container">$$ x=\sum_{i\ge0}X^ix_i-\sum_{i\ge0}f^ix_i=\sum_{i\ge1}\,(X^i-f^i)\,x_i=(X-f)\sum_{i\ge1}\,\sum_{j+k=i-1}X^jf^kx_i. $$</span> [Non-rigorous wording of the argument: Since <span class="math-container">$f$</span> is a root of the polynomial <span class="math-container">$P(X)=\sum X^ix_i$</span>, the linear polynomial <span class="math-container">$X-f$</span> divides <span class="math-container">$P(X)$</span>.] <strong>QED</strong></p> <p>Here is a proof of the Chinese Remainder Theorem.</p> <blockquote> <p><strong>CHINESE REMAINDER THEOREM.</strong> Let <span class="math-container">$A$</span> be a commutative ring and <span class="math-container">$\mathfrak a_1,\dots,\mathfrak a_n$</span> ideals such that <span class="math-container">$\mathfrak a_p+\mathfrak a_q=A$</span> for <span class="math-container">$p\not=q$</span>. Then the natural morphism from <span class="math-container">$A$</span> to the product of the <span class="math-container">$A/\mathfrak a_p$</span> is surjective. 
Moreover the intersection of the <span class="math-container">$\mathfrak a_p$</span> coincides with their product.</p> </blockquote> <p><strong>Proof.</strong> Multiplying the equalities <span class="math-container">$A=\mathfrak a_1+\mathfrak a_p$</span> for <span class="math-container">$p=2,\dots,n$</span> we get <span class="math-container">$$ A=\mathfrak a_1+\mathfrak a_2\cdots\mathfrak a_n.\qquad(*) $$</span> In particular there is an <span class="math-container">$a_1$</span> in <span class="math-container">$A$</span> such that <span class="math-container">$$ a_1\equiv1\bmod\mathfrak a_1,\quad a_1\equiv0\bmod\mathfrak a_p\ \forall\ p&gt;1. $$</span> Similarly we can find elements <span class="math-container">$a_p$</span> in <span class="math-container">$A$</span> such that <span class="math-container">$a_p\equiv\delta_{pq}\bmod\mathfrak a_q$</span> (Kronecker delta). This proves the first claim. Let <span class="math-container">$\mathfrak a$</span> be the intersection of the <span class="math-container">$\mathfrak a_p$</span>. Multiplying <span class="math-container">$(*)$</span> by <span class="math-container">$\mathfrak a$</span> we get <span class="math-container">$$ \mathfrak a=\mathfrak a_1\mathfrak a+\mathfrak a\mathfrak a_2\cdots\mathfrak a_n\subset\mathfrak a_1\,(\mathfrak a_2\cap\cdots\cap\mathfrak a_n)\subset\mathfrak a. $$</span> This gives the second claim, directly for <span class="math-container">$n=2$</span>, by induction for <span class="math-container">$n&gt;2$</span>. <strong>QED</strong></p>
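<p>For readers who want to experiment with Proposition 2: the diagonal entries <span class="math-container">$a_1,\dots,a_r$</span> are what computer algebra systems call the Smith normal form, and the <span class="math-container">$k$</span>-minor characterization can be checked directly over <span class="math-container">$\Bbb Z$</span>. A sketch assuming SymPy is available; the test matrix is an arbitrary choice:</p>

```python
from itertools import combinations
from math import gcd
from functools import reduce
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

A = Matrix([[2, 4, 4],
            [-6, 6, 12],
            [10, 4, 16]])

S = smith_normal_form(A, domain=ZZ)
d = [int(abs(S[i, i])) for i in range(3)]   # invariant factors a_1, a_2, a_3

def minor_gcd(M, k):
    """gcd of all k-minors of M."""
    n = M.rows
    vals = [abs(int(M[list(r), list(c)].det()))
            for r in combinations(range(n), k)
            for c in combinations(range(n), k)]
    return reduce(gcd, vals)

# Proposition 2: the ideal of k-minors is (a_1 a_2 ... a_k)
for k in (1, 2, 3):
    prod = 1
    for a in d[:k]:
        prod *= a
    assert minor_gcd(A, k) == prod

# and a_1 | a_2 | a_3, as in Theorem 4
assert d[1] % d[0] == 0 and d[2] % d[1] == 0
print(d)
```

<p>The assertions verify the minor characterization for this one matrix; the theorem of course guarantees it for any matrix over a principal ideal domain.</p>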
differentiation
<p>It's easy for scalars, $(\exp(a(x)))' = a' e^a$. But can anything be said about matrices? Do $A(x)$ and $A'(x)$ commute such that $(\exp(A(x)))' = A' e^A = e^A A'$ or is this only a special case?</p>
<p><a href="https://math.stackexchange.com/users/121/tom-stephens">Tom Stephens</a> linked to <a href="http://en.wikipedia.org/wiki/Matrix_exponential#The_exponential_map" rel="noreferrer">the exponential map</a>, where it is stated that</p> <blockquote> <p>$ \frac{d}{dt}e^{X(t)} = \int\limits_0^1 e^{\alpha X(t)} \frac{dX(t)}{dt} e^{(1-\alpha) X(t)} d\alpha $</p> </blockquote> <p>If $X(t)$ and $\frac{d}{dt}X(t)$ commute, the latter also commutes with $\exp(X(t))$ and <em>then</em> it simplifies into $ \frac{d}{dt}e^{X(t)} = \frac{d X(t)}{dt} e^{X(t)}$.</p> <p>A counter-example is $$X(t) = \begin{pmatrix} \cos(t) &amp; \sin(t) \\ \sin(t) &amp; -\cos(t) \end{pmatrix}$$ at $t=0$, i.e. $X(0) = \sigma_3, \dot X(0) = \sigma_1$ ($\sigma_i$ are the non-commuting <a href="http://en.wikipedia.org/wiki/Pauli_matrices" rel="noreferrer">Pauli Matrices</a>)</p>
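<p>Both claims (the integral formula, and the failure of the naive product rule at this counter-example) can be checked numerically. A sketch assuming NumPy and SciPy are available; the step size and quadrature resolution are arbitrary choices:</p>

```python
import numpy as np
from scipy.linalg import expm

def X(t):
    return np.array([[np.cos(t), np.sin(t)],
                     [np.sin(t), -np.cos(t)]])

Xdot0 = np.array([[0.0, 1.0],
                  [1.0, 0.0]])          # X'(0) = sigma_1

# central finite difference of t -> exp(X(t)) at t = 0
h = 1e-6
num = (expm(X(h)) - expm(X(-h))) / (2 * h)

# the integral formula, approximated by a midpoint rule in alpha
N = 2000
alphas = (np.arange(N) + 0.5) / N
integral = sum(expm(a * X(0)) @ Xdot0 @ expm((1 - a) * X(0)) for a in alphas) / N

naive = Xdot0 @ expm(X(0))              # what X'(0) e^{X(0)} would give

print(np.abs(num - integral).max())     # small: the integral formula matches
print(np.abs(num - naive).max())        # not small: X(0) and X'(0) don't commute
```

<p>Here <span class="math-container">$X(t)^2=I$</span>, so <span class="math-container">$e^{X(t)}=\cosh(1)\,I+\sinh(1)\,X(t)$</span> and the true derivative at <span class="math-container">$t=0$</span> is <span class="math-container">$\sinh(1)\,\sigma_1$</span>, which is what both the finite difference and the integral give.</p>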
<p>The question comes down to computing what's called the "derivative of", the "differential of", or the "tangent map to", the exponential map from $M_n(\mathbb R)$ into itself at a given matrix $A$ (not necessarily the zero matrix). There is a classical formula for this. Here is the first reference I found: pages 1 and 2 of</p> <p><a href="http://www.math.columbia.edu/~woit/notes4.pdf" rel="noreferrer">http://www.math.columbia.edu/~woit/notes4.pdf</a></p> <p>by Peter Woit. Here it is (with Peter Woit's notation) $$\exp_*(X)\ Y=\exp(X)\ \frac{1-e^{-ad(X)}}{ad(X)}\ Y.$$ Here is a reference for the Chain Rule:</p> <p><a href="http://en.wikipedia.org/wiki/Chain_rule#The_fundamental_chain_rule" rel="noreferrer">http://en.wikipedia.org/wiki/Chain_rule#The_fundamental_chain_rule</a></p> <p>It reads, in Peter Woit's notation and under appropriate assumptions,</p> <p>$$(f\circ g)_*(x)=f_*(g(x))\circ g_*(x).$$</p> <p>[Thank you to <a href="https://math.stackexchange.com/users/171/kennytm">KennyTM</a> for having edited this formula.] </p> <p><strong>EDIT 1.</strong> Here are the two formulas written in another notation:</p> <p>$$\exp'(X)=\exp(X)\ \frac{1-e^{-ad(X)}}{ad(X)}\quad,$$ </p> <p>$$(f\circ g)'(x)=f'(g(x))\circ g'(x).$$</p> <p><strong>EDIT 2.</strong> Here is another reference. This is a post by <a href="https://math.stackexchange.com/users/536/akhil-mathew">Akhil Mathew</a>:</p> <p><a href="http://deltaepsilons.wordpress.com/2009/11/07/helgasons-formula-ii/" rel="noreferrer">http://deltaepsilons.wordpress.com/2009/11/07/helgasons-formula-ii/</a></p>
combinatorics
<p>Consider the identity <span class="math-container">$$\sum_{k=0}^n (-1)^kk!{n \brace k} = (-1)^n$$</span> where <span class="math-container">${n\brace k}$</span> is a Stirling number of the second kind. This is slightly reminiscent of the binomial identity <span class="math-container">$$\sum_{k=0}^n(-1)^k\binom{n}{k} = 0$$</span> which essentially states that the number of even subsets of a set is equal to the number of odd subsets.</p> <p>Now there is an easy proof of the binomial identity using symmetric differences to biject between even and odd subsets. I am wondering if there is an analogous combinatorial interpretation for the Stirling numbers. The term <span class="math-container">$k!{n\brace k}$</span> counts the number of set partitions of an <span class="math-container">$n$</span> element set into <span class="math-container">$k$</span> ordered parts. Perhaps there is something relating odd ordered partitions with even ordered partitions?</p> <p>As an added note, there is a similar identity <span class="math-container">$$\sum_{k=1}^n(-1)^k(k-1)!{n\brace k}=0$$</span> for <span class="math-container">$n \geq 2$</span>. A combinatorial interpretation of this one would also be appreciated.</p>
<blockquote> <p>Perhaps there is something relating odd ordered partitions with even ordered partitions?</p> </blockquote> <p>There is indeed. Let's try to construct an involution $T_n$, mapping odd ordered partitions of an $n$-element set to even ones and vice versa: if the partition has the part $\{n\}$, move $n$ into the previous part; otherwise move $n$ into a new separate part.</p> <p>Example: $(\{1,2\},\{\mathbf{5}\},\{3,4\})\leftrightarrow(\{1,2,\mathbf{5}\},\{3,4\})$.</p> <p>This involution is not defined on partitions of the form $(\{n\},\ldots)$, but for these partitions one can use the previous involution $T_{n-1}$, and so on.</p> <p>Example: $(\{5\},\{4\},\{1,2\},\{\mathbf{3}\})\leftrightarrow(\{5\},\{4\},\{1,2,\mathbf{3}\})$.</p> <p>In the end the only partition left without a pair is $(\{n\},\{n-1\},\ldots,\{1\})$. So our (recursively defined) involution gives a bijective proof of $\sum_{\text{k is even}}k!{n \brace k}=\sum_{\text{k is odd}}k!{n \brace k}\pm1$ (cf. <a href="https://math.stackexchange.com/a/15593/152">1</a>, <a href="https://math.stackexchange.com/a/929/152">2</a>).</p> <p><strong>Upd.</strong> As for the second identity, the involution $T_n$ is already defined on all cyclically ordered partitions, so $\sum_{\text{k is even}}(k-1)!{n \brace k}=\sum_{\text{k is odd}}(k-1)!{n \brace k}$.</p> <hr> <p><strong>P.S.</strong> I can't resist adding that $k!{n \brace k}$ is the number of $(n-k)$-dimensional faces of an $n$-dimensional convex polytope, the permutohedron (the convex hull of all vectors formed by permuting the coordinates of the vector $(0,1,2,\ldots,n)$). So $\sum(-1)^{n-k}k!{n \brace k}=1$ since it's the Euler characteristic of a convex polytope.</p>
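<p>The recursively defined involution is easy to implement and test exhaustively for small <span class="math-container">$n$</span>. A Python sketch (representing ordered partitions as tuples of frozensets is an arbitrary choice):</p>

```python
from itertools import permutations

def set_partitions(elems):
    """Yield all unordered partitions of elems into nonempty blocks."""
    elems = list(elems)
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] | {first}] + part[i + 1:]
        yield part + [{first}]

def T(P, n):
    """The sign-reversing involution on ordered partitions of {1..n}.

    Returns None exactly on the fixed point ({n}, {n-1}, ..., {1})."""
    if n == 0:
        return None
    i = next(j for j, block in enumerate(P) if n in block)
    if P[i] == {n}:
        if i == 0:                        # leading singleton {n}: fall back to T_{n-1}
            tail = T(P[1:], n - 1)
            return None if tail is None else (P[0],) + tail
        # move n into the previous part
        return P[:i - 1] + (P[i - 1] | {n},) + P[i + 1:]
    # move n into a new separate part, right after its current part
    return P[:i] + (P[i] - {n}, frozenset({n})) + P[i + 1:]

n = 4
all_ordered = [tuple(map(frozenset, perm))
               for part in set_partitions(range(1, n + 1))
               for perm in permutations(part)]
assert len(all_ordered) == 75             # sum_k k! * S(4, k)

fixed = 0
for P in all_ordered:
    Q = T(P, n)
    if Q is None:
        fixed += 1
    else:
        assert T(Q, n) == P               # T is an involution
        assert (len(Q) - len(P)) % 2 == 1  # it flips the parity of #parts
print(fixed)  # 1: only ({4},{3},{2},{1}) is unpaired
```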
<p>For the sake of completeness I include a treatment using generating functions. The exponential generating function of the Stirling numbers of the second kind is $$ G(z, u) = \exp(u(\exp(z)-1))$$ so that $$ {n \brace k} = n! [z^n] \frac{(\exp(z) - 1)^k}{k!}.$$ It follows that $$\sum_{k=0}^n (-1)^k k! {n \brace k} = n! [z^n] \sum_{k=0}^n (1-\exp(z))^k = n! [z^n] \sum_{k=0}^\infty (1-\exp(z))^k,$$ where the last equality occurs because the series for $(1-\exp(z))^k$ starts at degree $k.$ But this is just $$ n! [z^n] \frac{1}{1-(1-\exp(z))} = n! [z^n] \exp(-z) = (-1)^n,$$ showing the result.</p>
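<p>Both identities are also cheap to verify by direct computation from the recurrence <span class="math-container">${n\brace k}=k{n-1\brace k}+{n-1\brace k-1}$</span>; a quick Python sketch:</p>

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def stirling2(n, k):
    """S(n, k): number of partitions of an n-set into k nonempty blocks."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

for n in range(1, 12):
    assert sum((-1)**k * factorial(k) * stirling2(n, k)
               for k in range(n + 1)) == (-1)**n
    if n >= 2:
        assert sum((-1)**k * factorial(k - 1) * stirling2(n, k)
                   for k in range(1, n + 1)) == 0
print("both identities hold for n up to 11")
```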
linear-algebra
<p>I just started Linear Algebra. Yesterday, I read about the ten properties of fields. As far as I can tell a field is a mathematical system that we can use to do common arithmetic. Is that correct?</p>
<p>More or less, yes. You can do arithmetic already with <a href="https://en.wikipedia.org/wiki/Natural_number" rel="nofollow noreferrer">natural numbers</a> (in <span class="math-container">$\Bbb N$</span>), but you have to be careful with some operations:</p> <p>The equation <span class="math-container">$x+a=b$</span> does not necessarily have a solution.</p> <p>So one would prefer working in a <a href="https://en.wikipedia.org/wiki/Ring_%28mathematics%29" rel="nofollow noreferrer">ring</a>, like the integers (<span class="math-container">$\Bbb Z$</span>). The preceding equation has always one solution, <span class="math-container">$x=b-a$</span>.</p> <p>But then, the equation <span class="math-container">$ax=b$</span> does not have always a solution, in a ring.</p> <p>So one would prefer working in a <a href="https://en.wikipedia.org/wiki/Field_%28mathematics%29" rel="nofollow noreferrer">field</a>, like <a href="https://en.wikipedia.org/wiki/Rational_number" rel="nofollow noreferrer">rational numbers</a> (<span class="math-container">$\Bbb Q$</span>). The preceding equation has always one solution, <span class="math-container">$x=\frac ba$</span>, if <span class="math-container">$a\neq0$</span>.</p> <p>So, working in a field is much more friendly. However, there are strange fields too, like <a href="https://en.wikipedia.org/wiki/Finite_field" rel="nofollow noreferrer">finite fields</a>. Or <a href="https://en.wikipedia.org/wiki/Quaternion" rel="nofollow noreferrer">quaternions</a>, for which commutativity does not hold anymore (<span class="math-container">$ab\neq ba$</span> usually): they form what is called a <a href="https://en.wikipedia.org/wiki/Division_ring" rel="nofollow noreferrer">skew-field</a>.</p> <hr> <p>Just to develop this way of building larger tools a bit more:</p> <p>But even then some equations have no solutions, like <span class="math-container">$x^2-2=0$</span> or <span class="math-container">$x^2+1=0$</span>. 
So you may build larger fields: real <a href="https://en.wikipedia.org/wiki/Algebraic_number" rel="nofollow noreferrer">algebraic numbers</a>, complex algebraic numbers, <a href="https://en.wikipedia.org/wiki/Real_number" rel="nofollow noreferrer">real numbers</a> (<span class="math-container">$\Bbb R$</span>), and finally <a href="https://en.wikipedia.org/wiki/Complex_number" rel="nofollow noreferrer">complex numbers</a> (<span class="math-container">$\Bbb C$</span>).</p> <p>The field of real numbers has a valuable property, very important in analysis: every <a href="https://en.wikipedia.org/wiki/Cauchy_sequence" rel="nofollow noreferrer">Cauchy sequence</a> has a limit. That is, <span class="math-container">$\Bbb R$</span> is <a href="https://en.wikipedia.org/wiki/Completeness_of_the_real_numbers" rel="nofollow noreferrer">complete</a>. It's for example easy to prove that the sequence of rationals <span class="math-container">$u_n=1+\frac{1}{1!}+\frac{1}{2!}+\cdots+\frac{1}{n!}$</span> is a Cauchy sequence, but does not converge to a rational.</p> <p>The field of complex numbers is also complete, and have the extra feature that every nonconstant polynomial can be factored in factors of degree <span class="math-container">$1$</span>. It's said to be an <a href="https://en.wikipedia.org/wiki/Algebraically_closed_field" rel="nofollow noreferrer">algebraically closed field</a>. 
However, you lose something too, since, unlike the reals, it's not an <a href="https://en.wikipedia.org/wiki/Ordered_field" rel="nofollow noreferrer">ordered field</a>: there are <a href="https://en.wikipedia.org/wiki/Total_order" rel="nofollow noreferrer">total orders</a> on complex numbers, but none is compatible with the arithmetic operations.</p> <p>As you can guess, while building larger and larger sets, you are more and more generalizing what you mean by a <em>number</em>.</p> <p>Other generalizations of numbers that form a field include <a href="https://en.wikipedia.org/wiki/Hyperreal_number" rel="nofollow noreferrer">hyperreal numbers</a> and <a href="https://en.wikipedia.org/wiki/Surreal_number" rel="nofollow noreferrer">surreal numbers</a>.</p> <hr> <p>The story does not end here, and many mathematical objects that form a field do not resemble numbers very much, apart from having the basic arithmetic rules of a field.</p> <p>Given any <a href="https://en.wikipedia.org/wiki/Integral_domain" rel="nofollow noreferrer">integral domain</a>, you can build its <a href="https://en.wikipedia.org/wiki/Field_of_fractions" rel="nofollow noreferrer">field of fractions</a>:</p> <ul> <li>For instance, the ring of polynomials over a field <span class="math-container">$\Bbb K$</span> leads to the field of <a href="https://en.wikipedia.org/wiki/Rational_function" rel="nofollow noreferrer">rational functions</a> over <span class="math-container">$\Bbb K$</span>. Finite extensions of such fields yield <a href="https://en.wikipedia.org/wiki/Algebraic_function_field" rel="nofollow noreferrer">algebraic function fields</a>.</li> <li><a href="https://en.wikipedia.org/wiki/Formal_power_series" rel="nofollow noreferrer">Formal power series</a> over a field <span class="math-container">$\Bbb K$</span> lead to the field of <a href="https://en.wikipedia.org/wiki/Formal_power_series#Formal_Laurent_series" rel="nofollow noreferrer">formal Laurent series</a>. 
An <a href="https://en.wikipedia.org/wiki/Algebraic_closure" rel="nofollow noreferrer">algebraic closure</a> of the field of Laurent series is given by the field of <a href="https://en.wikipedia.org/wiki/Puiseux_series" rel="nofollow noreferrer">Puiseux series</a>.</li> <li><a href="https://en.wikipedia.org/wiki/Holomorphic_function" rel="nofollow noreferrer">Holomorphic functions</a> defined on a complex domain, lead to the field of <a href="https://en.wikipedia.org/wiki/Meromorphic_function" rel="nofollow noreferrer">meromorphic functions</a>.</li> <li>In algebraic geometry, <a href="https://en.wikipedia.org/wiki/Function_field_of_an_algebraic_variety" rel="nofollow noreferrer">functions fields</a> are defined on an <a href="https://en.wikipedia.org/wiki/Algebraic_variety" rel="nofollow noreferrer">algebraic variety</a>, again as a field of fractions.</li> </ul> <p>Another example of a field not related to the notion of number: <a href="https://en.wikipedia.org/wiki/Hardy_field" rel="nofollow noreferrer">Hardy fields</a> are fields of <a href="https://en.wikipedia.org/wiki/Equivalence_class" rel="nofollow noreferrer">equivalence classes</a> of functions.</p>
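<p>A tiny computational illustration of why fields are "friendly": in the finite field <span class="math-container">$\Bbb Z/5\Bbb Z$</span>, unlike in the ring <span class="math-container">$\Bbb Z$</span>, every nonzero element has a multiplicative inverse, so <span class="math-container">$ax=b$</span> is always solvable for <span class="math-container">$a\neq0$</span>. A Python sketch:</p>

```python
p = 5
# brute-force multiplicative inverses in Z/5Z
inv = {a: next(b for b in range(1, p) if (a * b) % p == 1)
       for a in range(1, p)}
print(inv)  # every nonzero residue has an inverse

# hence ax = b is solvable for any a != 0: x = inv[a] * b (mod p)
a, b = 3, 4
x = (inv[a] * b) % p
assert (a * x) % p == b
```

<p>For a non-prime modulus such as <span class="math-container">$6$</span> the same search fails (e.g. <span class="math-container">$2$</span> has no inverse), which is why <span class="math-container">$\Bbb Z/6\Bbb Z$</span> is only a ring.</p>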
<p>Yes, at its most basic level, a field is a generalization of the rational numbers. In a field, you can do addition, subtraction, multiplication and division as you could in $\Bbb Q$.</p> <p>At a deeper level, fields have geometric significance. If you've ever studied a little geometry, then you'd know that there are at least two famous ways to approach geometry: with axioms akin to Euclid's axioms (the synthetic approach) and another way by using vector spaces and equations (the linear algebra approach).</p> <p>We know that $\Bbb R\times \Bbb R$ (an $\Bbb R$ vector space) can be interpreted as a model of Euclidean geometry, and how its 1-d subspaces represent lines, its elements represent points, etc., and that it satisfies the synthetic axioms of Euclidean geometry. </p> <p>But what about the other direction? Why can't we start with synthetic axioms and get vector spaces? Well, that's the thing: you can (if you have enough axioms.)</p> <p>It turns out that if you adopt <a href="https://en.wikipedia.org/wiki/Hilbert%27s_axioms">Hilbert's axioms</a> groups $I-IV$ for plane geometry, then you can systematically build a field $F$, such that $F\times F$ models that plane exactly when the plane satisfies <a href="https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem">Pappus's theorem</a>.</p> <p>Another way to ensure the existence of the field is to adopt Hilbert's continuity axiom $V$ called "Archimedes's axiom." It's known that this axiom, in the presence of the others, implies Pappus's theorem, and the resulting field will be an Archimedean ordered field.</p> <p>You can, of course, do higher dimensional geometry and get vector spaces $F^n$ and so on, as long as you have something like Pappus's theorem or Archimedes's axiom in your axioms.</p> <p>If you asked me for a rough description of how the axioms of fields translate into geometric ideas for vector spaces, then this is how I would start. 
Since $F$ is an additive group, $F^n$ is also an additive group, and you can translate any point to another point using addition of vectors. For multiplication, you can use it to scale any vector to another vector in the same 1-dimensional subspace.</p> <p>Now, this is just the first hint at the geometric nature of fields. <a href="https://en.wikipedia.org/wiki/Galois_theory">Galois theory</a> and then algebraic geometry really take the connection to more extreme altitudes!</p>
matrices
<p>I've come across a paper that mentions the fact that matrices commute if and only if they share a common basis of eigenvectors. Where can I find a proof of this statement?</p>
<p>Suppose that $A$ and $B$ are $n\times n$ matrices, with complex entries say, that commute.<br> Then we decompose $\mathbb C^n$ as a direct sum of eigenspaces of $A$, say $\mathbb C^n = E_{\lambda_1} \oplus \cdots \oplus E_{\lambda_m}$, where $\lambda_1,\ldots, \lambda_m$ are the eigenvalues of $A$, and $E_{\lambda_i}$ is the eigenspace for $\lambda_i$. (Here $m \leq n$, but some eigenspaces could be of dimension bigger than one, so we need not have $m = n$.)</p> <p>Now one sees that since $B$ commutes with $A$, $B$ preserves each of the $E_{\lambda_i}$: If $A v = \lambda_i v, $ then $A (B v) = (AB)v = (BA)v = B(Av) = B(\lambda_i v) = \lambda_i Bv.$ </p> <p>Now we consider $B$ restricted to each $E_{\lambda_i}$ separately, and decompose each $E_{\lambda_i}$ into a sum of eigenspaces for $B$. Putting all these decompositions together, we get a decomposition of $\mathbb C^n$ into a direct sum of spaces, each of which is a simultaneous eigenspace for $A$ and $B$.</p> <p>NB: I am cheating here, in that $A$ and $B$ may not be diagonalizable (and then the statement of your question is not literally true), but in this case, if you replace "eigenspace" by "generalized eigenspace", the above argument goes through just as well.</p>
<p>This is false in a sort of trivial way. The identity matrix $I$ commutes with every matrix, and every nonzero vector of the underlying vector space $V$ is an eigenvector of $I$; no non-central matrix has this property, so a matrix commuting with $I$ need not share its full set of eigenvectors.</p> <p>What is true is that two matrices which commute and are also <em>diagonalizable</em> are <a href="http://en.wikipedia.org/wiki/Diagonalizable_matrix#Simultaneous_diagonalization">simultaneously diagonalizable</a>. The proof is particularly simple if at least one of the two matrices has distinct eigenvalues.</p>
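<p>Here is a quick numerical illustration of the distinct-eigenvalue case: if <span class="math-container">$A$</span> has distinct eigenvalues and <span class="math-container">$B$</span> commutes with <span class="math-container">$A$</span>, then <span class="math-container">$B$</span> is diagonal in any eigenbasis of <span class="math-container">$A$</span>. A sketch assuming NumPy; the particular matrices are arbitrary choices:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # a random orthogonal basis
A = Q @ np.diag([1.0, 2.0, 3.0, 4.0]) @ Q.T        # distinct eigenvalues
B = A @ A - 3 * A                                  # a polynomial in A, so AB = BA
assert np.allclose(A @ B, B @ A)

_, V = np.linalg.eig(A)                            # columns: an eigenbasis of A
Bd = np.linalg.inv(V) @ B @ V                      # B expressed in that basis
off_diag = Bd - np.diag(np.diag(Bd))
assert np.allclose(off_diag, 0, atol=1e-10)
print("B is diagonal in A's eigenbasis")
```

<p>Taking <span class="math-container">$B$</span> as a polynomial in <span class="math-container">$A$</span> is just a convenient way to manufacture a commuting partner; the check itself only uses <span class="math-container">$AB=BA$</span>.</p>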
linear-algebra
<p>Define</p> <ul> <li><p>The <em>algebraic multiplicity</em> of <span class="math-container">$\lambda_{i}$</span> to be the degree of the root <span class="math-container">$\lambda_i$</span> in the polynomial <span class="math-container">$\det(A-\lambda I)$</span>.</p> </li> <li><p>The <em>geometric multiplicity</em> to be the dimension of the eigenspace associated with the eigenvalue <span class="math-container">$\lambda_i$</span>.</p> </li> </ul> <p>For example: <span class="math-container">$\begin{bmatrix}1&amp;1\\0&amp;1\end{bmatrix}$</span> has root <span class="math-container">$1$</span> with algebraic multiplicity <span class="math-container">$2$</span>, but geometric multiplicity <span class="math-container">$1$</span>.</p> <p><strong>My Question</strong> : Why is the geometric multiplicity always bounded by the algebraic multiplicity?</p> <p>Thanks.</p>
<p>Suppose the geometric multiplicity of the eigenvalue $\lambda$ of $A$ is $k$. Then we have $k$ linearly independent vectors $v_1,\ldots,v_k$ such that $Av_i=\lambda v_i$. If we change our basis so that the first $k$ elements of the basis are $v_1,\ldots,v_k$, then with respect to this basis we have $$A=\begin{pmatrix} \lambda I_k &amp; B \\ 0 &amp; C \end{pmatrix}$$ where $I_k$ is the $k\times k$ identity matrix. Since the characteristic polynomial is independent of choice of basis, we have $$\mathrm{char}_A(x)=\mathrm{char}_{\lambda I_k}(x)\mathrm{char}_{C}(x)=(x-\lambda)^k\mathrm{char}_{C}(x)$$ so the algebraic multiplicity of $\lambda$ is at least $k$.</p>
<p>I will add some detail to the other answers.</p> <p>For a specific <span class="math-container">$\lambda_i$</span>, the idea is to transform the matrix <span class="math-container">$A$</span> (n by n) to a matrix <span class="math-container">$B$</span> which shares the same eigenvalues as <span class="math-container">$A$</span>. If the columns of <span class="math-container">$P_1=[v_1, \cdots, v_m]$</span> are eigenvectors for <span class="math-container">$\lambda_i$</span>, we extend them to a basis <span class="math-container">$P=[P_1, P_2]=[v_1, \cdots, v_m, \cdots, v_n]$</span>. Therefore, <span class="math-container">$AP=[\lambda_i P_1, AP_2]$</span>. In order to make <span class="math-container">$P^{-1}AP=B$</span>, we must have <span class="math-container">$AP=PB$</span>. Let <span class="math-container">$B= \begin{bmatrix} B_{11} &amp; B_{12} \\ B_{21} &amp; B_{22} \end{bmatrix}$</span>, then <span class="math-container">$P_1B_{11}+P_2B_{21}=\lambda_i P_1$</span> and <span class="math-container">$P_1B_{12}+P_2B_{22}=AP_2$</span>. Because the columns of <span class="math-container">$P$</span> form a basis, <span class="math-container">$B_{11}=\lambda_iI$</span> (m by m), <span class="math-container">$B_{21}=\mathbf{0}$</span> (<span class="math-container">$(n-m)\times m$</span>). So, we have <span class="math-container">$B=\begin{bmatrix} \lambda_iI &amp; B_{12}\\ \mathbf{0} &amp; B_{22} \end{bmatrix}$</span>.</p> <p><span class="math-container">$\det(A-\lambda I)=\det(P^{-1}(A-\lambda I)P)=\det(P^{-1}AP-\lambda I)=\det(B-\lambda I)$</span> <span class="math-container">$ = \det \Bigg(\begin{bmatrix} (\lambda_i-\lambda)I &amp; B_{12}\\ \mathbf{0} &amp; B_{22}-\lambda I \end{bmatrix} \Bigg)=(\lambda_i-\lambda)^{m}\det(B_{22}-\lambda I)$</span>.</p> <p>Obviously, <span class="math-container">$m$</span> is no bigger than the algebraic multiplicity of <span class="math-container">$\lambda_i$</span>.</p>
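<p>For the matrix in the question, both multiplicities can be computed directly. A sketch assuming SymPy is available:</p>

```python
from sympy import Matrix, eye, symbols, roots

A = Matrix([[1, 1],
            [0, 1]])
lam = symbols('lambda')

# algebraic multiplicity: multiplicity of the root in det(A - lambda I)
alg = roots((A - lam * eye(2)).det(), lam)
print(alg)                     # {1: 2}: eigenvalue 1 with algebraic multiplicity 2

# geometric multiplicity: dimension of the eigenspace ker(A - 1*I)
eigenspace = (A - eye(2)).nullspace()
print(len(eigenspace))         # 1
```

<p>So here the geometric multiplicity <span class="math-container">$1$</span> is strictly smaller than the algebraic multiplicity <span class="math-container">$2$</span>, consistent with the bound proved above.</p>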
matrices
<p>I know how basic operations are performed on matrices, I can do transformations, find inverses, etc. But now that I think about it, I actually don't "understand" or know what I've been doing all this time. Our teacher made us memorise some rules and I've been following it like a machine.</p> <ol> <li><p>So what exactly is a matrix? And what is a determinant? </p></li> <li><p>What do they represent? </p></li> <li><p>Is there a geometrical interpretation?</p></li> <li><p>How are they used? Or, rather, what for are they used?</p></li> <li><p>How do I understand the "properties" of matrix?</p></li> </ol> <p>I just don't wanna mindlessly cram all those properties, I want to understand them better. </p> <p>Any links, which would improve my understanding towards determinants and matrices? Please use simpler words. Thanks :)</p>
<p>A matrix is a compact but general way to represent any linear transform. (Linearity means that the image of a sum is the sum of the images.) Examples of linear transforms are rotations, scalings, projections. They map points/lines/planes to points/lines/planes.</p> <p>So a linear transform can be represented by an array of coefficients. The size of the matrix tells you the numbers of dimensions of the domain and image spaces. The composition of two linear transforms corresponds to the product of their matrices. The inverse of a linear transform corresponds to the matrix inverse.</p> <p>A determinant measures the volume of the image of a unit cube under the transformation; it is a single number. (When the numbers of dimensions of the domain and image differ, this volume is zero, so such "determinants" are never considered.) For instance, a rotation preserves volumes, so the determinant of a rotation matrix is always 1. When a determinant is zero, the linear transform is "singular", which means that it loses some dimensions (the transformed volume is flat), and cannot be inverted.</p> <p>Determinants are a fundamental tool in solving systems of linear equations.</p> <p>As you will later learn, a linear transformation can be decomposed into a pure rotation, a pure (anisotropic) scaling and another pure rotation. Only the scaling deforms volumes, and the determinant of the transform is the product of the scaling coefficients.</p>
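The claim that composition of transforms corresponds to the matrix product can be checked directly; a minimal Python sketch (the example matrices are my own):

```python
def apply(M, v):
    """Apply a 2x2 matrix (nested lists) to a vector (x, y)."""
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

def compose(A, B):
    """Matrix product A*B, i.e. the transform 'do B first, then A'."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

rot90 = [[0, -1], [1, 0]]    # quarter turn counterclockwise
scale2 = [[2, 0], [0, 2]]    # uniform scaling by a factor of 2
v = (3, 1)
# Applying the product matrix agrees with applying the transforms in turn.
assert apply(compose(scale2, rot90), v) == apply(scale2, apply(rot90, v))
print(compose(scale2, rot90))  # [[0, -2], [2, 0]]
```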
<p><strong>1. Definition of a matrix.</strong></p> <p>The question of what a matrix <em>is</em>, precisely, is one I had for a long time as a high school student. It took many tries to get a straight answer, because people tend to conflate "matrix" with "linear transformation". The two are closely related, but NOT the same thing. So let me start with the fully rigorous definition of a matrix: </p> <blockquote> <p>An $m$ by $n$ matrix is a function of two variables, the first of which has domain $\{1,2,\dots,m\}$ and the second of which has domain $\{1,2,\dots,n\}$.</p> </blockquote> <p>This is the formal definition of matrices, but it's not how we usually think about them. We have a special notation for matrices--the "box of numbers" you are familiar with, where the value of the function at $(1,1)$ is put in the top left corner, the value at $(2,1)$ is put just below it, etc. We usually think of the matrix as just this box, and forget that it is a function. However, sometimes you need to remember that a matrix has a more formal definition, like when implementing matrices on a computer (most programming languages have matrices built into them).</p> <p><strong>2. What matrices represent.</strong></p> <p>Matrices can represent different things in different contexts, but there is one application that is most common. The most common application is linear transformations (a.k.a. linear maps), but before I get into that, let me briefly mention some other applications: </p> <ul> <li>Matrices can be used to store data. For example, images on a computer are often stored as a matrix, where the matrix's value at $(i,j)$ is the intensity of light on the camera pixel that is $i^{th}$ from the top and $j^{th}$ from the left. </li> <li>Matrices can be used as computational tools. 
For example, one way to compute the Fibonacci numbers is from powers of the matrix $$M = \begin{bmatrix} 1 &amp; 1 \\ 1 &amp; 0 \\ \end{bmatrix}$$ It turns out that $(M^k)_{11}$ is the $(k+1)^{st}$ Fibonacci number: with the convention $F_1 = F_2 = 1$, one has $M^k = \begin{bmatrix} F_{k+1} &amp; F_k \\ F_k &amp; F_{k-1} \end{bmatrix}$.</li> <li>Matrices can be used to encode some mathematical structure. I'm going to be sort of hand-wavy about this, but an example of what I have in mind is an <a href="https://en.wikipedia.org/wiki/Adjacency_matrix" rel="noreferrer">adjacency matrix</a> for a graph or network, which tells you which nodes are connected to which. </li> </ul> <p>So the point is that a matrix can be used for lots of things. However, one usage prevails as most common, and that is representing <strong>linear transformations</strong>. The prevalence of this usage is why people often conflate the two concepts. A linear transformation is a function $f$ of vectors which has the following properties:</p> <ul> <li>$f(x+y) = f(x) + f(y)$ for any vectors $x$ and $y$.</li> <li>$f(ax) = af(x)$ for any vector $x$ and any scalar $a$. </li> </ul> <p>These properties are what it takes to ensure that the function $f$ has "no curvature". So it's like a straight line, but possibly in higher dimensions. </p> <p>The relationship between matrices and linear transformations comes from the fact that a linear transformation is completely specified by the values it takes on a <em>basis</em> for its domain. (I presume you know what a basis is.) To see how this works, suppose we have a linear transformation $f$ which has domain $V$ and range $W$, where $V$ is a vector space with basis $v_1,v_2,\dots, v_n$ and $W$ is a vector space with basis $w_1,w_2,\dots,w_m$. Then there is a matrix $M$ representing $f$ <strong>with respect to these bases</strong>, which has as element $(i,j)$ the coefficient of $w_i$ when you express $f(v_j)$ as a sum of basis elements in $W$.
</p> <p>The reason that this is a good idea is that if you have some miscellaneous vector $x = a_1 v_1 + a_2 v_2 + \cdots + a_n v_n \in V$, then if you represent $x$ as a column vector $[a_1,a_2,\dots,a_n]^T$ and $f$ as its matrix $M$, then the value $f(x)$ is given by the matrix product of $M$ and $[a_1,a_2,\dots,a_n]^T$. So the matrix $M$ completely encodes the linear transformation $f$, and matrix multiplication tells you how to decode it, i.e. how to use the matrix to get values of $f$. </p> <p><strong>3. Geometrical intuition.</strong></p> <p>In my opinion, the most important theorem for getting intuition for matrices and linear transformations is the <a href="https://en.wikipedia.org/wiki/Singular-value_decomposition" rel="noreferrer">singular value decomposition</a> theorem. This says that any linear transformation can be written as a sequence of three simple transformations: a rotation, a stretching, and another rotation. Note that the stretching operation can stretch by different amounts in different orthogonal directions. This tells you that all linear transformations are some combination of rotation and stretching. </p> <p>Other properties of matrices often have direct geometric interpretation, too. For example, the determinant tells you how a linear transformation changes volumes. By the singular value decomposition, a linear transformation turns a cube into some sort of stretched and rotated parallelogram. The determinant is the ratio of the volume of the resulting parallelogram to that of the cube you started with. </p> <p>Not all properties of a matrix can be easily associated with familiar geometric concepts, though. I don't know of a good geometric picture for the trace, for instance. That doesn't mean that the trace is any less useful or easy to work with, though!</p> <p><strong>4. Other properties.</strong></p> <p>Almost all of the "properties" and "operations" for matrices come from properties of linear maps and theorems about them. 
For example, the standard multiplication of matrices is designed specifically to give the values of linear maps as explained above. This is NOT the only type of multiplication that can be defined on matrices, and in fact there are other types of multiplication for matrices (for example, the <a href="https://en.wikipedia.org/wiki/Hadamard_product_(matrices)" rel="noreferrer">Hadamard product</a> and the <a href="https://en.wikipedia.org/wiki/Kronecker_product" rel="noreferrer">Kronecker product</a>). These other types of multiplication are sometimes useful, but generally not as useful as regular matrix multiplication, so people often don't know (or care) about them.</p> <hr> <p><strong>5. TL;DR</strong></p> <p>The moral of the story is that you can use matrices for whatever you want (and they are indeed used in many different ways), but the way that most people use them most of the time is to represent linear maps, and the standard definitions and "properties" of matrices reflect this bias. The study of linear maps goes by the name "linear algebra", and a textbook on this subject is a good place to start if you want to learn more about matrices. (Depending on your background, you may find some good reference suggestions here: <a href="https://math.stackexchange.com/questions/2402775/looking-for-a-rigorous-linear-algebra-book">link</a>.)</p>
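The Fibonacci trick mentioned earlier in this answer is easy to make concrete; a sketch using repeated squaring (with the convention $F_1=F_2=1$, one has $M^k=\begin{bmatrix}F_{k+1} &amp; F_k\\F_k &amp; F_{k-1}\end{bmatrix}$, so the off-diagonal entry is $F_k$):

```python
def mat_mul(X, Y):
    """Product of two 2x2 integer matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(M, k):
    """M**k by repeated squaring: O(log k) matrix multiplications."""
    R = [[1, 0], [0, 1]]   # 2x2 identity
    while k:
        if k & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        k >>= 1
    return R

M = [[1, 1], [1, 0]]
fibs = [mat_pow(M, k)[0][1] for k in range(1, 11)]   # (M^k)_{12} = F_k
print(fibs)  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```

Repeated squaring is what makes this competitive with the usual iterative loop for very large $k$.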
matrices
<p>I came across a video lecture in which the professor stated that there may or may not be any eigenvectors for a given linear transformation.</p> <p>But I had previously thought every square matrix has eigenvectors. </p>
<p>It depends over what field we're working. For example, the <strong>real</strong> matrix</p> <p>$$A=\begin{pmatrix}0&amp;\!\!-1\\1&amp;0\end{pmatrix}$$</p> <p>has no eigenvalues at all (i.e., over $\;\Bbb R\;$ ), yet the very same matrix defined over the complex field $\;\Bbb C\;$ has two eigenvalues: $\;\pm i\;$ </p>
<p>Over an algebraically closed field, every square matrix has an eigenvalue. For instance, every complex matrix has an eigenvalue. Every real matrix has an eigenvalue, but it may be complex.</p> <p>In fact, a field <span class="math-container">$K$</span> is algebraically closed iff every matrix with entries in <span class="math-container">$K$</span> has an eigenvalue. You can use the <a href="http://en.wikipedia.org/wiki/Companion_matrix" rel="noreferrer">companion matrix</a> to prove one direction. In particular, the existence of eigenvalues for complex matrices is equivalent to the fundamental theorem of algebra. </p>
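To see the rotation-matrix example from the other answer numerically: the characteristic polynomial of $\begin{pmatrix}0&amp;-1\\1&amp;0\end{pmatrix}$ is $t^2+1$, which has no real roots, and solving it over $\Bbb C$ gives $\pm i$. A quick sketch:

```python
import cmath

# A = [[0, -1], [1, 0]] has det(A - tI) = t^2 + 1: no real roots.
# Solve t^2 + 1 = 0 over C with the quadratic formula.
a, b, c = 1, 0, 1
disc = cmath.sqrt(b * b - 4 * a * c)       # sqrt(-4) = 2j
roots = ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
print(roots)  # the eigenvalues +i and -i
```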
geometry
<p>Some days ago, our math teacher said that he would give a good grade to the first one who managed to draw this:</p> <p><a href="https://i.sstatic.net/tVb2m.png"><img src="https://i.sstatic.net/tVb2m.png" alt="enter image description here"></a></p> <p>To draw this without lifting the pen and without tracing the same line more than once. It's a bit like the "nine dots" puzzle but here I didn't find any working solution. So I have two questions:</p> <ul> <li>is it really impossible?</li> <li>how can it be proven that it is impossible (if it's impossible)?</li> </ul> <p><strong>[EDIT]: After posting this question, and seeing how easy it was for people to solve it, I noticed that I posted the wrong drawing. The actual one is exactly like that but with triangles on all sides, not only top and bottom. As it would make the current answers invalid, I didn't replace the picture.</strong></p>
<p>It is possible, and it's actually quite easy. Start on the red dot and move your pen according to numbers in a picture below.</p> <p><a href="https://i.sstatic.net/ZUNfn.png" rel="noreferrer"><img src="https://i.sstatic.net/ZUNfn.png" alt="enter image description here"></a></p>
<p>Here is one way, out of many.</p> <p><a href="https://i.sstatic.net/evlFz.png" rel="noreferrer"><img src="https://i.sstatic.net/evlFz.png" alt="enter image description here"></a></p> <p>As you can see, I started at one vertex and ended at the same vertex. Since the graph is connected, and every vertex has an even number of segments joined to it, you can start at any vertex and end there, covering all the segments.</p>
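The even-degree criterion used in this answer is Euler's condition for a closed trail; here is a sketch that checks it on an edge list approximating the figure (the vertex labels and edges are my own reconstruction: a square with both diagonals, plus an apex over the top side and one under the bottom side):

```python
from collections import Counter

def degrees(edges):
    """Number of segments meeting at each vertex."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def has_euler_circuit(edges):
    """For a connected graph: a closed one-stroke drawing exists iff
    every vertex has even degree (Euler's theorem)."""
    return all(d % 2 == 0 for d in degrees(edges).values())

# Square a-b-c-d with both diagonals, apex e over side ab,
# apex f under side cd (hypothetical reconstruction of the picture).
figure = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"),
          ("a", "c"), ("b", "d"),
          ("a", "e"), ("e", "b"), ("c", "f"), ("f", "d")]
print(has_euler_circuit(figure))  # True: a closed drawing exists
```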
linear-algebra
<p><span class="math-container">$$\det(A^T) = \det(A)$$</span></p> <p>Using the geometric definition of the determinant as the area spanned by the <em>columns</em>, could someone give a geometric interpretation of the property?</p>
<p><em>A geometric interpretation in four intuitive steps....</em></p> <p><strong>The Determinant is the Volume Change Factor</strong></p> <p>Think of the matrix as a geometric transformation, mapping points (column vectors) to points: $x \mapsto Mx$. The determinant $\mbox{det}(M)$ gives the factor by which volumes change under this mapping.</p> <p>For example, in the question you define the determinant as the volume of the parallelepiped whose edges are given by the matrix columns. This is exactly what the unit cube maps to, so again, the determinant is the factor by which the volume changes.</p> <p><strong>A Matrix Maps a Sphere to an Ellipsoid</strong></p> <p>Being a linear transformation, a matrix maps a sphere to an ellipsoid. The singular value decomposition makes this especially clear.</p> <p>If you consider the principal axes of the ellipsoid (and their preimage in the sphere), the singular value decomposition expresses the matrix as a product of (1) a rotation that aligns the principal axes with the coordinate axes, (2) scalings in the coordinate axis directions to obtain the ellipsoidal shape, and (3) another rotation into the final position.</p> <p><strong>The Transpose Inverts the Rotation but Keeps the Scaling</strong></p> <p>The transpose of the matrix is very closely related, since the transpose of a product is the reversed product of the transposes, and the transpose of a rotation is its inverse. In this case, we see that the transpose is given by the inverse of rotation (3), the <em>same</em> scaling (2), and finally the inverse of rotation (1).</p> <p>(This is almost the same as the inverse of the matrix, except the inverse naturally uses the <em>inverse</em> of the original scaling (2).)</p> <p><strong>The Transpose has the Same Determinant</strong></p> <p>Anyway, the rotations don't change the volume -- only the scaling step (2) changes the volume. Since this step is exactly the same for $M$ and $M^\top$, the determinants are the same.</p>
<p>This is more-or-less a reformulation of Matt's answer. He relies on the existence of the SVD decomposition; I show that <span class="math-container">$\det(A)=\det(A^T)$</span> can be stated in a slightly different way.</p> <p>Every square matrix can be represented as the product of an orthogonal matrix (representing an isometry) and an upper triangular matrix (<a href="http://en.wikipedia.org/wiki/QR_decomposition" rel="noreferrer">QR decomposition</a>), where the determinant of an upper (or lower) triangular matrix is just the product of the elements along the diagonal (which stay in their place under transposition), so, by the Binet formula, <span class="math-container">$A=QR$</span> gives: <span class="math-container">$$\det(A^T)=\det(R^T Q^T)=\det(R)\det(Q^T)=\det(R)\det(Q^{-1}),$$</span> <span class="math-container">$$\det(A^T)=\frac{\det{R}}{\det{Q}}=\det(Q)\det(R)=\det(QR)=\det(A),$$</span> where we used that the transpose of an orthogonal matrix is its inverse, and the determinant of an orthogonal matrix belongs to <span class="math-container">$\{-1,1\}$</span> - since an orthogonal matrix represents an isometry.</p> <hr /> <p>You can also consider that <span class="math-container">$(*)$</span> the determinant of a matrix is preserved under Gauss-row-moves (replacing a row with the sum of that row and a linear combination of the others) and Gauss-column-moves, too, since the volume spanned by <span class="math-container">$(v_1,\ldots,v_n)$</span> is the same as the volume spanned by <span class="math-container">$(v_1+\alpha_2 v_2+\ldots,v_2,\ldots,v_n)$</span>.
By Gauss-row-moves you can put <span class="math-container">$A$</span> in upper triangular form <span class="math-container">$R$</span>, and then <span class="math-container">$\det A=\prod R_{ii}.$</span> If you apply the same moves as column moves on <span class="math-container">$A^T$</span>, you end up with <span class="math-container">$R^T$</span>, which is lower triangular and obviously has the same determinant as <span class="math-container">$R$</span>. So, in order to provide a &quot;really geometric&quot; proof that <span class="math-container">$\det(A)=\det(A^T)$</span>, we only need to provide a &quot;really geometric&quot; interpretation of <span class="math-container">$(*)$</span>. An intuition is that the volume of the parallelepiped originally spanned by the columns of <span class="math-container">$A$</span> is the same if we change, for instance, the basis of our vector space by sending <span class="math-container">$(e_1,\ldots,e_n)$</span> into <span class="math-container">$(e_1,\ldots,e_{i-1},e_i+\alpha\, e_j,e_{i+1},\ldots,e_n)\,$</span> with <span class="math-container">$i\neq j$</span>, since the geometric object is the same, and we are only changing its &quot;description&quot;.</p>
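Both arguments above are geometric; as a purely computational cross-check of $\det(A)=\det(A^T)$, here is a small pure-Python sketch (the sample matrix is my own):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def transpose(M):
    return [list(col) for col in zip(*M)]

A = [[2, -1, 3], [0, 4, 1], [5, 2, -2]]
print(det(A), det(transpose(A)))  # -85 -85
```

Cofactor expansion is exponential-time and only suitable for tiny matrices, but it stays in exact integer arithmetic, which makes the equality check unambiguous.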
combinatorics
<blockquote> <p>Consider a square of side equal to $1$. Prove that we can place inside the square a finite number of disjoint discs, with different radii of the form $1/k$ with $k$ a positive integer, such that the area of the remaining region is at most $0.0001$.</p> </blockquote> <p>If we consider all the discs of this form, their total area is $\sum_{k \geq 1}\displaystyle \pi \frac{1}{k^2} - \pi=\frac{\pi^3}{6}-\pi\simeq 2.02$ which is greater than the area of the square. (I subtracted $\pi$ because we cannot place a disc of radius $1$ inside the square).</p> <p>So the discs of this form can cover the square very well, but how can I prove that there is a disjoint family which leaves out a small portion of the area?</p>
<p>I don't think this is possible for general $\epsilon$, and I doubt it's possible for remainder $0.0001$.</p> <p>Below are some solutions with remainder less than $0.01$. I produced them by randomized search from two different initial configurations. In the first one, I only placed the circle with curvature $2$ in the centre and tried placing the remaining circles randomly, beginning with curvature $12$; in the second one, I prepositioned pairs of circles that fit in the corners and did a deterministic search for the rest.</p> <p>The data structure I used was a list of interstices, each in turn consisting of a list of circles forming the interstice (where the lines forming the boundary of the square are treated as circles with zero curvature). I went through the circles in order of curvature and for each circle tried placing it snugly in each of the cusps where two circles touch in random order. If a circle didn't fit anywhere, I discarded it; if that decreased the remaining area below what was needed to get up to the target value (in this case $0.99$), I backtracked to the last decision.</p> <p>I also did this without using the circle with curvature $2$. For that case I did a complete search and found no configurations with remainder less than $0.01$. Thus, if there is a better solution in that case, it must involve placing the circles in a different order. 
(We can always transform any solution to one where each circle is placed snugly in a cusp formed by two other circles, so testing only such positions is not a restriction; however, circles with lower curvature might sit in the cusps of circles with higher curvature, and I wouldn't have found such solutions.)</p> <p>For the case including the circle with curvature $2$, the search wasn't complete (I don't think it can be done completely in this manner, without introducing further ideas), so I can't exclude that there are significantly better configurations (even ones with in-order placement), but I'll try to describe how I came to doubt that there's much room for improvement beyond $0.01$, and particularly that this can be done for arbitrary $\epsilon$.</p> <p>The reasons are both theoretical and numerical. Numerically, I found that this seems to be a typical combinatorial optimization problem: There are many local minima, and the best ones are quite close to each other. It's easy to get to $0.02$; it's relatively easy to get to $0.011$; it takes quite a bit more optimization to get to $0.01$; and beyond that practically all the solutions I found were within $0.0002$ or so of $0.01$. So a solution with $0.0001$ would have to be of a completely different kind from everything that I found.</p> <p>Now of course <em>a priori</em> there might be some systematic solution that's hard to find by this sort of search but can be proved to exist. That might conceivably be the case for $0.0001$, but I'm pretty sure it's not the case for general $\epsilon$. To prove that it's possible to leave a remainder less than $\epsilon$ for any $\epsilon\gt0$, one might try to argue that after some initial phase it will always be possible to fit the remaining circles into the remaining space.
The problem is that such an argument can't work, because we're trying to fill the rational area $1$ by discarding rational multiples of $\pi$ from the total area $\pi^3/6$, so we can't do it by discarding a finite number of circles, since $\pi$ is transcendental.</p> <p>Thus we can never reach a stage where we could prove that the remaining circles will exactly fit, and hence any proof that we can beat an arbitrary $\epsilon$ would have to somehow show that the remaining circles can be divided into two infinite subsets, with one of them exactly fitting into the remaining gaps. Of course this, too, is possible in principle, but it seems rather unlikely; the problem strikes me as a typical messy combinatorial optimization problem with little regularity.</p> <p>A related reason not to expect a clean solution is that in an <a href="http://en.wikipedia.org/wiki/Apollonian_gasket" rel="noreferrer">Apollonian gasket</a> with integer curvatures, some integers typically occur more than once. For instance, one might try to make use of the fact that the curvatures $0$, $2$, $18$ and $32$ form a quadruple that would allow us to fill an entire half-corner with a gasket of circles of integer curvature; however, in that gasket, many curvatures, for instance $98$, occur more than once, so we'd have to make exceptions for those since we're not allowed to reuse those circles.
Also, if you look at the gaskets produced by $0$, $2$ and the numbers from $12$ to $23$ (which are the candidates to be placed in the corners), you'll find that the fourth number increases more rapidly than the third; that is, $0$, $2$ and $18$ lead to $32$, whereas $0$, $2$ and $19$ already lead to $(\sqrt2+\sqrt{19})^2\approx33.3$; so not only can you not place all the numbers from $12$ to $23$ into the corners (since only two of them fit together and there are only four corners), but then if you start over with $24$ (which is the next number in the gasket started by $12$), you can't even continue with the same progression, since the spacing has increased. The difference would have to be compensated by the remaining space in the corners that's not part of the gaskets with the big $2$-circle, but that's too small to pick up the slack, which makes it hard to avoid dropping several of the circles in the medium range around the thirties. </p> <p>My impression from the optimization process is that we're forced to discard too much area quite early on; that is, we can't wait for some initial irregularities to settle down into some regular pattern that we can exploit. For instance, the first solution below uses all curvatures except for the following: 3 4 5 6 7 8 9 10 11 16 17 20 22 25 30 31 33 38 46 48 49 52 53 55 56 57 59 79 81 94 96 101 106 107 108 113 125 132. Already at 49 the remaining area becomes less than would be needed to fill the square. Other solutions I found differed in the details of which circles they managed to squeeze in where, but the total area always dropped below $1$ early on. Thus, it appears that it's the irregular constraints at the beginning that limit what can be achieved, and this can't be made up for by some nifty scheme extending to infinity. It might even be possible to prove by an exhaustive search that some initial set of circles can't be placed without discarding too much area.
To be rigorous, this would have to take a lot more possibilities into account than my search did (since the circles could be placed in any order), but I don't see why allowing the bigger circles to be placed later on should make such a huge difference, since there's precious little wiggle room for their placement to begin with if we want to fit in most of the ones between $12$ and $23$.</p> <p>So here are the solutions I found with remainder less than $0.01$. The configurations shown are both filled up to an area $\gtrsim0.99$ and have a tail of tiny circles left worth about another $0.0002$. For the first one, I checked with integer arithmetic that none of the circles overlap. (In fact I placed the circles with integer arithmetic, using floating-point arithmetic to find an approximation of the position and a single iteration of Newton's method in integer arithmetic to correct it.)</p> <p>The first configuration has $10783$ circles and was found using repeated randomized search starting with only the circle of curvature $2$ placed; I think I ran something like $100$ separate trials to find this one, and something like $1$ in $50$ of them found a solution with remainder below $0.01$; each trial took a couple of seconds on a MacBook Pro.</p> <p><img src="https://i.sstatic.net/K1MNc.png" alt="randomized"></p> <p>The second configuration has $17182$ circles and was found by initially placing pairs of circles with curvatures $(12,23)$, $(13,21)$, $(14,19)$ and $(15,18)$ touching each other in the corners and tweaking their exact positions by hand; the tweaking brought a gain of something like $0.0005$, which brought the remainder down below $0.01$. 
The search for the remaining circles was carried out deterministically, in that I always tried first to place a circle into the cusps formed by the smaller circles and the boundary lines; this was to keep as much contiguous space as possible available in the cusps between the big circle and the boundary lines.</p> <p><img src="https://i.sstatic.net/5zUI0.png" alt="pre-placed"></p> <p>I also tried placing pairs of circles with curvatures $(13,21)$, $(14,19)$, $(15,18)$ and $(16,17)$ in the corners, but only got up to $0.9896$ with that.</p> <p>Here are high-resolution versions of the images; they're scaled down in this column, but you can open them in a new tab/window (where you might have to click on them to toggle the browser's autoscale feature) to get the full resolution.</p> <p>Randomized search:</p> <p><img src="https://i.sstatic.net/JWiwd.png" alt="randomized hi-res"></p> <p>With pre-placed circles:</p> <p><img src="https://i.sstatic.net/aFFfN.png" alt="enter image description here"></p>
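The curvature bookkeeping in this answer ($0$, $2$ and $18$ leading to $32$; $0$, $2$ and $19$ leading to $(\sqrt2+\sqrt{19})^2$) follows Descartes' circle theorem; a small Python sketch of that step:

```python
from math import sqrt, isclose

def fourth_curvature(k1, k2, k3):
    """Descartes' circle theorem: given three mutually tangent circles
    (a straight line counts as curvature 0), the circle inscribed in
    their interstice has curvature k1+k2+k3 + 2*sqrt(k1*k2+k2*k3+k3*k1)
    (the '+' root; the '-' root is the enclosing circle)."""
    return k1 + k2 + k3 + 2 * sqrt(k1 * k2 + k2 * k3 + k3 * k1)

print(fourth_curvature(0, 2, 18))  # 32.0
print(fourth_curvature(0, 2, 19))  # 21 + 2*sqrt(38) = (sqrt(2)+sqrt(19))**2
```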
<p>Let's roll up our sleeves here. Let $C_k$ denote the disk of radius $1/k$. Suppose we can cover an area of $\ge 0.9999$ using a set of non-overlapping disks inside the unit square, and let $S$ denote the set of integers $k$ such that $C_k$ is used in this cover.<br> Then we require</p> <p>$$\sum_{k\in S}\frac{1}{k^2} \ge 0.9999/\pi \approx 0.318278$$</p> <p>As the OP noted, we know that $1 \not\in S$. This leaves</p> <p>$$\sum_{k\ge2}\frac{1}{k^2} \approx 0.644934$$</p> <p>which gives us $0.644934 - 0.318278 = 0.326656$ 'spare capacity' to play with.</p> <p><strong>Case 1</strong> Suppose $2 \in S$. Then the largest disk that will fit into the spaces in the corners left by $C_2$ is $C_{12}$, so we must throw $3,...,11$ out of $S$. This wastes</p> <p>$$\sum_{k=3}^{11}\frac{1}{k^2}\approx0.308032$$</p> <p>and we are close to using up our spare capacity: we would be left with $0.326656-0.308032=0.018624$ to play with. </p> <p><strong>Case 2</strong> Now suppose $2 \not\in S$. Then we can fit $C_3$ and $C_4$ into the unit square, but not $C_5$. So we waste</p> <p>$$\frac{1}{2^2} + \frac{1}{5^2} = 0.29$$</p> <p>leaving us with $0.326656-0.29=0.036656$ to play with. </p> <p>Neither of these cases fills me with confidence that this thing is doable.</p>
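The sums in this answer are easy to reproduce numerically; a quick sketch:

```python
from math import pi

total = pi ** 2 / 6 - 1                                # sum over k >= 2 of 1/k^2
needed = 0.9999 / pi                                   # required sum of 1/k^2
spare = total - needed                                 # the "spare capacity"
waste_case1 = sum(1 / k ** 2 for k in range(3, 12))    # discarding C_3..C_11
waste_case2 = 1 / 2 ** 2 + 1 / 5 ** 2                  # discarding C_2 and C_5
print(spare, waste_case1, waste_case2)
```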
logic
<p>Completeness is defined as: if $\Sigma\models\Phi$ then $\Sigma\vdash\Phi$. Meaning, if for every truth assignment $Z$ in $\Sigma$ we would get $T$, then $\Phi$ would also get $T$. If that indeed holds, then we can prove $\Phi$ using the rules in $\Sigma$.</p> <p>Soundness is defined as: given that $\Sigma\vdash\Phi$ then $\Sigma\models\Phi$, which is the converse. </p> <p>Can you please explain the basic difference between the two of them? </p> <p>Thanks, Ron</p>
<p>In brief:</p> <p>Soundness means that you <em>cannot</em> prove anything that's wrong.</p> <p>Completeness means that you <em>can</em> prove anything that's right.</p> <p>In both cases, we are talking about a some fixed system of rules for proof (the one used to define the relation $\vdash$ ).</p> <p>In more detail: Think of $\Sigma$ as a set of hypotheses, and $\Phi$ as a statement we are trying to prove. When we say $\Sigma \models \Phi$, we are saying that $\Sigma$ <em>logically implies</em> $\Phi$, i.e., in every circumstance in which $\Sigma$ is true, then $\Phi$ is true. Informally, $\Phi$ is "right" given $\Sigma$.</p> <p>When we say $\Sigma \vdash \Phi$, on the other hand, we must have some set of rules of proof (sometimes called "inference rules") in mind. Usually these rules have the form, "if you start with some particular statements, then you can derive these other statements". If you can derive $\Phi$ starting from $\Sigma$, then we say that $\Sigma \vdash \Phi$, or that $\Phi$ is provable from $\Sigma$. </p> <p>We are thinking of a proof as something used to convince others, so it's important that the rules for $\vdash$ are mechanical enough so that another person or a computer can <em>check</em> a purported proof (this is different from saying that the other person/computer could <em>create</em> the proof, which we do <em>not</em> require).</p> <p>Soundness states: $\Sigma \vdash \Phi$ implies $\Sigma \models \Phi$. If you can prove $\Phi$ from $\Sigma$, then $\Phi$ is true given $\Sigma$. Put differently, if $\Phi$ is not true (given $\Sigma$), then you can't prove $\Phi$ from $\Sigma$. Informally: "You can't prove anything that's wrong."</p> <p>Completeness states: $\Sigma \models \Phi$ imples $\Sigma \vdash \Phi$. If $\Phi$ is true given $\Sigma$, then you can prove $\Phi$ from $\Sigma$. Informally: "You can prove anything that's right."</p> <p>Ideally, a proof system is both sound and complete.</p>
<p>From the perspective of trying to write down axioms for first-order logic that satisfy both completeness and soundness, soundness is the easy direction: all you have to do is make sure that all of your axioms are true and that all of your inference rules preserve truth. Completeness is the hard direction: you need to write down strong enough axioms to capture semantic truth, and it's not obvious from the outset that this is even possible in a non-trivial way. </p> <p>(A trivial way would be to admit all truths as your axioms, but the problem with this logical system is that you can't recognize what counts as a valid proof.) </p>
linear-algebra
<p>I plan to self-study linear algebra this summer. I am sorta already familiar with vectors, vector spaces and subspaces, and I am really interested in everything about matrices (diagonalization, ...), linear maps and their matrix representation, and eigenvectors and eigenvalues. I am looking for a book that handles all of the aforementioned topics in detail. I also want to build a solid basis in the mathematical way of thinking to get ready for an exciting abstract algebra course next semester, so my main aim is to work on proofs of somewhat hard problems. I got Lang's "Intro. to Linear Algebra" and it is too easy and superficial.</p> <p>Can you advise me a good book for all of the above? Please take into consideration that it is for self-study, so it has to work on its own. Thanks.</p>
<p>When I learned linear algebra for the first time, I read through Friedberg, Insel, and Spence. It is slightly more modern than Hoffman/Kunze, is fully rigorous, and has a bunch of useful exercises to work through.</p>
<p>A great book freely available online is <a href="https://sites.google.com/a/brown.edu/sergei-treil-homepage/linear-algebra-done-wrong" rel="nofollow noreferrer">Linear Algebra Done Wrong</a> by Sergei Treil. It covers all the topics you listed and culminates in a discussion of spectral theory, which can be considered a generalized treatment of diagonalization.</p> <p>Don't be put off by the book's title. It's a play on the popular Linear Algebra Done Right, by Sheldon Axler. Axler's book is also very good, and you might want to check it out.</p> <p>The classic proof-based linear algebra text is the one by Hoffman and Kunze. I find the two books I listed above easier to read, but you might also consider it. In any case, it is a good reference.</p> <p>I hope this helps. Please comment if you have any questions.</p>
probability
<p>Suppose I have a line segment of length $L$. I now select two points at random along the segment. What is the expected value of the distance between the two points, and why?</p>
<p>Byron has already answered your question, but I will attempt to provide a detailed solution...</p> <p>Let $X$ be a random variable uniformly distributed over $[0,L]$, i.e., the probability density function of $X$ is the following</p> <p>$$f_X (x) = \begin{cases} \frac{1}{L} &amp; \textrm{if} \quad{} x \in [0,L]\\ 0 &amp; \textrm{otherwise}\end{cases}$$</p> <p>Let us randomly pick two points in $[0,L]$ <em>independently</em>. Let us denote those by $X_1$ and $X_2$, which are random variables distributed according to $f_X$. The distance between the two points is a new random variable</p> <p>$$Y = |X_1 - X_2|$$</p> <p>Hence, we would like to find the expected value $\mathbb{E}(Y) = \mathbb{E}( |X_1 - X_2| )$. Let us introduce function $g$</p> <p>$$g (x_1,x_2) = |x_1 - x_2| = \begin{cases} x_1 - x_2 &amp; \textrm{if} \quad{} x_1 \geq x_2\\ x_2 - x_1 &amp; \textrm{if} \quad{} x_2 \geq x_1\end{cases}$$</p> <p>Since the two points are picked independently, the joint probability density function is the product of the pdf's of $X_1$ and $X_2$, i.e., $f_{X_1 X_2} (x_1, x_2) = f_{X_1} (x_1) f_{X_2} (x_2) = 1 / L^2$ in $[0,L] \times [0,L]$. Therefore, the expected value $\mathbb{E}(Y) = \mathbb{E}(g(X_1,X_2))$ is given by</p> <p>$$\begin{align} \mathbb{E}(Y) &amp;= \displaystyle\int_{0}^L\int_{0}^L g(x_1,x_2) \, f_{X_1 X_2} (x_1, x_2) \,d x_1 \, d x_2\\[6pt] &amp;= \frac{1}{L^2} \int_0^L\int_0^L |x_1 - x_2| \,d x_1 \, d x_2\\[6pt] &amp;= \frac{1}{L^2} \int_0^L\int_0^{x_1} (x_1 - x_2) \,d x_2 \, d x_1 + \frac{1}{L^2} \int_0^L\int_{x_1}^L (x_2 - x_1) \,d x_2 \, d x_1\\[6pt] &amp;= \frac{L^3}{6 L^2} + \frac{L^3}{6 L^2} = \frac{L}{3}\end{align}$$</p>
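The double integral above is easy to corroborate numerically. A minimal Monte Carlo sketch in Python (the choice of $L$ and the sample size are arbitrary):

```python
import random

def mean_distance(L=3.0, trials=200_000, seed=0):
    """Estimate E|X1 - X2| for X1, X2 independent Uniform(0, L)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += abs(rng.uniform(0, L) - rng.uniform(0, L))
    return total / trials

# The estimate should hover around L/3 (here, 1.0).
print(mean_distance())
```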
<p>Sorry. I posted a cryptic comment just before running off to class. What I meant was that if $X,Y$ are independent uniform $(0,1)$ random variables, then the triple $$(A,B,C):=(\min(X,Y),\ \max(X,Y)-\min(X,Y),\ 1-\max(X,Y))$$ is an exchangeable sequence. In particular, $\mathbb{E}(A)=\mathbb{E}(B)=\mathbb{E}(C),$ and since $A+B+C=1$ identically we must have $\mathbb{E}(B)=\mathbb{E}(\mbox{distance})={1\over 3}.$ </p> <p>Intuitively, the "average" configuration of two random points on a interval looks like this: <img src="https://i.sstatic.net/NTD7P.jpg" alt="enter image description here"></p>
probability
<p>I gave my friend <a href="https://math.stackexchange.com/questions/42231/obtaining-irrational-probabilities-from-fair-coins">this problem</a> as a brainteaser; while her attempted solution didn't work, it raised an interesting question.</p> <p>I flip a fair coin repeatedly and record the results. I stop as soon as the number of heads is equal to twice the number of tails (for example, I will stop after seeing HHT or THTHHH or TTTHHHHHH). What's the probability that I never stop?</p> <p>I've tried to just compute the answer directly, but the terms got ugly pretty quickly. I'm hoping for a hint towards a slick solution, but I will keep trying to brute force an answer in the meantime.</p>
<blockquote> <p>The game stops with probability $u=\frac34(3-\sqrt5)=0.572949017$.</p> </blockquote> <p>See the end of the post for generalizations of this result, first to asymmetric heads-or-tails games (Edit 1), and then to every integer ratio (Edit 2).</p> <hr> <p><strong>To prove this</strong>, consider the random walk which goes two steps to the right each time a tail occurs and one step to the left each time a head occurs. Then the number of heads is double the number of tails each time the walk is back at its starting point (and only then). In other words, the probability that the game never stops is $1-u$ where $u=P_0(\text{hits}\ 0)$ for the random walk with equiprobable steps $+2$ and $-1$.</p> <p>The classical one-step analysis of hitting times for Markov chains yields $2u=v_1+w_2$ where, for every positive $k$, $v_k=P_{-k}(\text{hits}\ 0)$ and $w_k=P_{k}(\text{hits}\ 0)$. We first evaluate $w_2$ then $v_1$.</p> <p>The $(w_k)$ part is easy: the only steps to the left are $-1$ steps hence to hit $0$ starting from $k\ge1$, the walk must first hit $k-1$ starting from $k$, then hit $k-2$ starting from $k-1$, and so on. These $k$ events are equiprobable hence $w_k=(w_1)^k$. Another one-step analysis, this time for the walk starting from $1$, yields $$ 2w_1=1+w_3=1+(w_1)^3 $$ hence $w_k=w^k$ where $w$ solves $w^3-2w+1=0$. Since $w\ne1$, $w^2+w=1$ and since $w&lt;1$, $w=\frac12(\sqrt5-1)$.</p> <p>Let us consider the $(v_k)$ part. The random walk has a drift to the right hence its position converges to $+\infty$ almost surely. Let $k+R$ denote the first position visited on the right of the starting point $k$. Then $R\in\{1,2\}$ almost surely, the distribution of $R$ does not depend on $k$ because the dynamics is invariant by translations, and $$ v_1=r+(1-r)w_1\quad\text{where}\ r=P_{-1}(R=1). $$ Now, starting from $0$, $R=1$ implies that the first step is $-1$ hence $2r=P_{-1}(A)$ with $A=[\text{hits}\ 1&#160;\text{before}\ 2]$. 
Consider $R&#39;$ for the random walk starting at $-1$. If $R&#39;=2$, $A$ occurs. If $R&#39;=1$, the walk is back at position $0$ hence $A$ occurs with probability $r$. In other words, $2r=(1-r)+r^2$, that is, $r^2-3r+1=0$. Since $r&lt;1$, $r=\frac12(3-\sqrt5)$ (hence $r=1-w$).</p> <p>Plugging these values of $w$ and $r$ into $v_1$ and $w_2$ yields the value of $u$.</p> <hr> <p><strong>Edit 1</strong> Every asymmetric random walk which performs elementary steps $+2$ with probability $p$ and $-1$ with probability $1-p$ is transient to $+\infty$ as long as $p&gt;\frac13$ (and naturally, for every $p\le\frac13$ the walk hits $0$ with full probability). In this regime, one can compute the probability $u(p)$ to hit $0$. The result is the following.</p> <blockquote> <p>For every $p$ in $(\frac13,1)$, $u(p)=\frac32\left(2-p-\sqrt{p(4-3p)}\right).$</p> </blockquote> <p>Note that $u(p)\to1$ when $p\to\frac13$ and $u(p)\to0$ when $p\to1$, as was to be expected.</p> <hr> <p><strong>Edit 2</strong> Coming back to symmetric heads-or-tails games, note that, for any fixed integer $N\ge2$, the same techniques apply to compute the probability $u_N$ to reach $N$ times more heads than tails. </p> <p>One gets $2u_N=E(w^{R_N-1})+w^N$ where $w$ is the unique solution in $(0,1)$ of the polynomial equation $2w=1+w^{1+N}$, and the random variable $R_N$ is almost surely in $\{1,2,\ldots,N\}$. The distribution of $R_N$ is characterized by its generating function, which solves $$ (1-(2-r_N)s)E(s^{R_N})=r_Ns-s^{N+1}\quad\text{with}\quad r_N=P(R_N=1). $$ This is equivalent to a system of $N$ equations with unknowns the probabilities $P(R_N=k)$ for $k$ in $\{1,2,\ldots,N\}$. One can deduce from this system that $r_N$ is the unique root $r&lt;1$ of the polynomial $(2-r)^Nr=1$. 
One can then note that $r_N=w^N$ and that $E(w^{R_N})=\dfrac{Nr_N}{2-r_N}$ hence some further simplifications yield finally the following general result.</p> <blockquote> <p>For every $N\ge2$, $u_N=\frac12(N+1)r_N$ where $r_N&lt;1$ solves the equation $(2-r)^Nr=1$.</p> </blockquote>
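For what it's worth, a direct simulation agrees with this value. A Python sketch (the cap on the number of flips is an arbitrary truncation; since the surviving walks drift to $+\infty$, the truncation bias is negligible):

```python
import random
from math import sqrt

def stop_probability(trials=10_000, max_flips=1_000, seed=1):
    """Fraction of games that stop with #heads == 2 * #tails."""
    rng = random.Random(seed)
    stopped = 0
    for _ in range(trials):
        heads = tails = 0
        for _ in range(max_flips):
            if rng.random() < 0.5:
                heads += 1
            else:
                tails += 1
            if heads == 2 * tails:
                stopped += 1
                break
    return stopped / trials

print(stop_probability())    # Monte Carlo estimate
print(0.75 * (3 - sqrt(5)))  # 0.5729490...
```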
<p>(Update: The answer to the original question is that the probability of stopping is $\frac{3}{4} \left(3 - \sqrt{5}\right)$. See end of post for an infinite series expression in the general case.) <HR></p> <p>Let $S(n)$ denote the number of ways to stop after seeing $n$ tails. Seeing $n$ tails means seeing $2n$ heads, so this would be stopping after $3n$ flips. Since there are $2^{3n}$ possible sequences in $3n$ flips, the probability of stopping is $\sum_{n=1}^{\infty} S(n)/8^n$.</p> <p>To determine $S(n)$, we see that there are $\binom{3n}{n}$ ways to choose which $n$ of $3n$ flips will be tails. However, this overcounts for $n &gt; 1$, as we could have seen twice as many heads as tails for some $k &lt; n$. Of these $\binom{3n}{n}$ sequences, there are $S(k) \binom{3n-3k}{n-k}$ sequences of $3n$ flips in which there are $k$ tails the first time we would see twice as many heads as tails, as any of the $S(k)$ sequences of $3k$ flips could be completed by choosing $n-k$ of the remaining $3n-3k$ flips to be tails. Thus $S(n)$ satisfies the recurrence $S(n) = \binom{3n}{n} - \sum_{k=1}^{n-1} \binom{3n-3k}{n-k}S(k)$, with $S(1) = 3$.</p> <p>The solution to this recurrence is $S(n) = \frac{2}{3n-1} \binom{3n}{n}.$ This can be verified easily, as substituting this expression into the recurrence yields a slight variation on Identity 5.62 in <em>Concrete Mathematics</em> (p. 202, 2nd ed.), namely, $$\sum_k \binom{tk+r}{k} \binom{tn-tk+s}{n-k} \frac{r}{tk+r} = \binom{tn+r+s}{n},$$ with $t = 3$, $r = -1$, $s=0$.</p> <p>So the probability of stopping is $$\sum_{n=1}^{\infty} \binom{3n}{n} \frac{2}{3n-1} \frac{1}{8^n}.$$</p> <p>Mathematica gives the closed form for this probability of stopping to be $$2 \left(1 - \cos\left(\frac{2}{3} \arcsin \frac{3 \sqrt{3/2}}{4}\right) \right) \approx 0.572949.$$</p> <p><em>Added</em>: The sum is hypergeometric and has a simpler representation. 
See Sasha's comments for why the sum yields this closed form solution and also why the answer is $$\frac{3}{4} \left(3 - \sqrt{5}\right) \approx 0.572949.$$</p> <p><HR> <em>Added 2</em>: This answer is generalizable to other ratios $r$ up to the infinite series expression. For the general $r \geq 2$ case, the argument above is easily adapted to produce the recurrence $S(n) = \binom{(r+1)n}{n} - \sum_{k=1}^{n-1} \binom{(r+1)n-(r+1)k}{n-k}S(k)$, with $S(1) = r+1$. The solution to the recurrence is $S(n) = \frac{r}{(r+1) n - 1} \binom{(r+1) n}{n}$ and can be verified easily by using the binomial convolution formula given above. Thus, for the ratio $r$, the probability of stopping has the infinite series expression $$\sum_{n=1}^{\infty} \binom{(r+1)n}{n} \frac{r}{(r+1)n-1} \frac{1}{2^{(r+1)n}}.$$ This can be expressed as a hypergeometric function, but I am not sure how to simplify it any further for general $r$ (and neither does Mathematica). It can also be expressed using the generalized binomial series discussed in <em>Concrete Mathematics</em> (p. 200, 2nd ed.), but I don't see how to simplify it further in that direction, either.</p> <p><HR> <em>Added 3</em>: In case anyone is interested, I found a <a href="https://math.stackexchange.com/questions/60991/combinatorial-proof-of-binom3nn-frac23n-1-as-the-answer-to-a-coin-fli/66146#66146">combinatorial proof of the formula for $S(n)$</a>. It works in the general $r$ case, too.</p>
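Numerically, the partial sums of this series converge quickly (the ratio of consecutive terms tends to $27/32$), so a short Python check against the closed form is easy:

```python
from math import comb, sqrt

# Partial sum of  sum_{n>=1} C(3n, n) * 2/(3n-1) * 8^(-n)
s = sum(comb(3 * n, n) * 2 / (3 * n - 1) / 8 ** n for n in range(1, 200))

print(s)                     # ~0.5729490...
print(0.75 * (3 - sqrt(5)))  # the closed form above
```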
probability
<p>Suppose</p> <p><span class="math-container">$$X_1, X_2, \dots, X_n\sim Unif(0, \theta), iid$$</span></p> <p>and suppose</p> <p><span class="math-container">$$\hat\theta = \max\{X_1, X_2, \dots, X_n\}$$</span></p> <p>How would I find the probability density of <span class="math-container">$\hat\theta$</span>?</p>
<p>Let $Y=\hat\theta=\max(X_1,X_2,\cdots,X_n)$. For $0\le x\le\theta$, \begin{align} P(Y\leq x) &amp;= P(\max(X_1,X_2 ,\cdots,X_n)\leq x)\\ &amp;= P(X_1\leq x,X_2\leq x,\cdots,X_n\leq x)\\ &amp;\stackrel{ind}{=} \prod_{i=1}^nP(X_i\leq x )\\ &amp;= \prod_{i=1}^n\dfrac{x}{\theta}\\&amp;=\left(\dfrac{x}{\theta}\right)^n \end{align} Differentiating this CDF gives the density, $f_{\hat\theta}(x)=\dfrac{nx^{n-1}}{\theta^n}$ for $0\le x\le\theta$.</p>
<p>Let random variable $W$ denote the maximum of the $X_i$. We will assume that the $X_i$ are independent, else we can say very little about the distribution of $W$. </p> <p>Note that the maximum of the $X_i$ is $\le w$ if and only if <strong>all</strong> the $X_i$ are $\le w$. For $w$ in the interval $[0,\theta]$, the probability that $X_i\le w$ is $\frac{w}{\theta}$. It follows by independence that the probability that $W\le w$ is $\left(\frac{w}{\theta}\right)^n$. </p> <p>Thus, in our interval, the cumulative distribution function $F_W(w)$ of $W$ is given by $$F_W(w)= \left(\frac{w}{\theta}\right)^n.$$ Differentiate to get the density function of $W$. </p>
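A quick simulation (Python sketch, with arbitrary test values $n=5$, $\theta=2$) confirms the CDF $\left(\frac{x}{\theta}\right)^n$:

```python
import random

def empirical_cdf_of_max(x, n=5, theta=2.0, trials=100_000, seed=2):
    """Estimate P(max of n Uniform(0, theta) draws <= x) by simulation."""
    rng = random.Random(seed)
    hits = sum(
        max(rng.uniform(0, theta) for _ in range(n)) <= x
        for _ in range(trials)
    )
    return hits / trials

x, n, theta = 1.5, 5, 2.0
print(empirical_cdf_of_max(x, n, theta))  # should be near (x/theta)**n
print((x / theta) ** n)                   # 0.2373046875
```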
probability
<p>Let <span class="math-container">$a \le b \le c$</span> be the sides of a triangle inscribed inside a fixed circle such that the vertices of the triangle are distributed uniformly on the circumference.</p> <p><strong>Question 1</strong>: Is it true that the probability that <span class="math-container">$ac &gt; b^2$</span> is <span class="math-container">$\displaystyle \frac{1}{5}$</span>. I ran a simulation by generating <span class="math-container">$1.75 \times 10^9$</span> triangle and counting the number of times <span class="math-container">$ac &gt; b^2$</span>. The experimental data seems to suggest that probability converges to about <span class="math-container">$0.2$</span>.</p> <p><strong>Note</strong>: For any triangle with <span class="math-container">$a \le b \le c$</span>, the triangle inequality implies <span class="math-container">$b &lt; a+c &lt; 3b$</span>. Now the condition <span class="math-container">$ac &gt; b^2$</span> implies that <span class="math-container">$2b &lt; a+c &lt; 3b$</span>; here the lower bound follows from AM-GM inequality. Hence all triangles for which <span class="math-container">$b &lt; a+c &lt; 2b$</span> are ruled out. 
For our problem, the condition <span class="math-container">$2b &lt; a+c &lt; 3b$</span> is necessary but not sufficient.</p> <p><strong>Update</strong>: Changed the title in light of the comments and answer that relaxing the condition <span class="math-container">$a\le b \le c$</span> is easier to handle</p> <p><a href="https://i.sstatic.net/2v0S6.png" rel="noreferrer"><img src="https://i.sstatic.net/2v0S6.png" alt="enter image description here" /></a></p> <p>Related question: <a href="https://math.stackexchange.com/questions/4840303/if-a-b-c-are-the-sides-of-a-triangle-and-x-ge-1-what-is-the-probability">If <span class="math-container">$(a,b,c)$</span> are the sides of a triangle and <span class="math-container">$x \ge 1$</span>, what is the probability that <span class="math-container">$a+b &gt; cx$</span>?</a></p>
<p>Assume that the circle is the unit circle centred at the origin, and the vertices of the triangle are:<br /> <span class="math-container">$A(\cos(-Y),\sin(-Y))$</span> where <span class="math-container">$0\le Y\le2\pi$</span><br /> <span class="math-container">$B(1,0)$</span><br /> <span class="math-container">$C(\cos X,\sin X)$</span> where <span class="math-container">$0\le X\le2\pi$</span></p> <p><em>Relax the requirement</em> that <span class="math-container">$a \le b \le c$</span>, and let:<br /> <span class="math-container">$a=BC=2\sin\left(\frac{X}{2}\right)$</span><br /> <span class="math-container">$b=AC=\left|2\sin\left(\frac{2\pi-X-Y}{2}\right)\right|=\left|2\sin\left(\frac{X+Y}{2}\right)\right|$</span><br /> <span class="math-container">$c=AB=2\sin\left(\frac{Y}{2}\right)$</span></p> <p><span class="math-container">$\therefore P[ac&gt;b^2]=P\left[\sin\left(\frac{X}{2}\right)\sin\left(\frac{Y}{2}\right)&gt;\sin^2\left(\frac{X+Y}{2}\right)\right]$</span> where <span class="math-container">$0\le X\le2\pi$</span> and <span class="math-container">$0\le Y\le2\pi$</span></p> <p>This probability is the ratio of the area of the shaded region to the area of the square in the graph below.</p> <p><a href="https://i.sstatic.net/Quxu9.png" rel="noreferrer"><img src="https://i.sstatic.net/Quxu9.png" alt="enter image description here" /></a></p> <p>Rotate these regions <span class="math-container">$45^\circ$</span> clockwise about the origin, then shrink by factor <span class="math-container">$\frac{1}{\sqrt2}$</span>, then translate left <span class="math-container">$\pi$</span> units, by letting <span class="math-container">$X=x+\pi-y$</span> and <span class="math-container">$Y=x+\pi+y$</span>.</p> <p><a href="https://i.sstatic.net/yzBeO.png" rel="noreferrer"><img src="https://i.sstatic.net/yzBeO.png" alt="enter image description here" /></a></p> <p><span class="math-container">$\begin{align} 
P[ac&gt;b^2]&amp;=P\left[\sin\left(\frac{x+\pi-y}{2}\right)\sin\left(\frac{x+\pi+y}{2}\right)&gt;\sin^2(x+\pi)\right]\\ &amp;=P\left[\cos(x+\pi)-\cos y&lt;-2\sin^2(x+\pi)\right]\text{ using sum to product identity}\\ &amp;=P\left[-\cos x-\cos y&lt;-2\sin^2 x\right]\\ &amp;=P\left[-\arccos(2\sin^2x-\cos x)&lt;y&lt;\arccos(2\sin^2x-\cos x)\right]\\ &amp;=\dfrac{\int_0^{\pi/3}\arccos(2\sin^2x-\cos x)\mathrm dx}{\frac{\pi^2}{2}} \end{align}$</span></p> <p><a href="https://www.wolframalpha.com/input?i2d=true&amp;i=%5C%2840%29Divide%5B5%2CPower%5Bpi%2C2%5D%5D%5C%2841%29Integrate%5Barccos%5C%2840%292Power%5B%5C%2840%29sinx%5C%2841%29%2C2%5D-cosx%5C%2841%29%2C%7Bx%2C0%2CDivide%5Bpi%2C3%5D%7D%5D" rel="noreferrer">Numerical evidence</a> suggests that the integral equals <span class="math-container">$\frac{\pi^2}{5}$</span>. (I've posted this integral as <a href="https://math.stackexchange.com/q/4838494/398708">another question</a>.) If that's true, then the probability is <span class="math-container">$\frac25$</span>.</p> <p>If the probability <em>without</em> requiring <span class="math-container">$a \le b \le c$</span> is <span class="math-container">$\frac25$</span> , it follows that the probability <em>with</em> requiring <span class="math-container">$a \le b \le c$</span> is <span class="math-container">$\frac15$</span>, as @joriki explained in the comments.</p> <h2>Update:</h2> <p>The integral has been <a href="https://math.stackexchange.com/a/4838976/398708">shown</a> to equal <span class="math-container">$\frac{\pi^2}{5}$</span>, thus showing that the answer to the OP is indeed <span class="math-container">$1/5$</span>.</p> <p>The simplicity of the answer, <span class="math-container">$1/5$</span>, suggests that there might be a more intuitive solution, but given the amount of attention the OP has received, an intuitive solution seems to be quite elusive. 
We might have to chalk this one up as another probability question with a simple answer but no intuitive explanation. (Other examples of such probability questions are <a href="https://math.stackexchange.com/q/4803481/398708">here</a> and <a href="https://math.stackexchange.com/q/4799757/398708">here</a>.)</p>
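Independently of the integral evaluation, the value $\frac15$ is easy to corroborate by simulation. A Python sketch (the chord between angles $u,v$ on the unit circle has length $2\left|\sin\frac{u-v}{2}\right|$):

```python
import math
import random

def prob_ac_gt_b2(trials=100_000, seed=3):
    """Estimate P(ac > b^2) for a <= b <= c, the side lengths of a
    triangle whose three vertices are uniform on the unit circle."""
    rng = random.Random(seed)
    chord = lambda u, v: 2 * abs(math.sin((u - v) / 2))
    hits = 0
    for _ in range(trials):
        t1, t2, t3 = (rng.uniform(0, 2 * math.pi) for _ in range(3))
        a, b, c = sorted((chord(t1, t2), chord(t2, t3), chord(t3, t1)))
        if a * c > b * b:
            hits += 1
    return hits / trials

print(prob_ac_gt_b2())  # close to 0.2
```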
<p>This is not an answer, rather a numerical exploration. We may suppose that the three points are on the unit circle and that one of them is <span class="math-container">$P=(1, 0)$</span>. Let the two other points be <span class="math-container">$Q = (\cos x, \sin x)$</span> and <span class="math-container">$R=(\cos y, \sin y)$</span>. Let us define the set <span class="math-container">\begin{equation} S = \{(x, y) | \min(a,b,c)\max(a,b,c)&gt; \text{mid}(a,b,c)^2\} \end{equation}</span> where <span class="math-container">$a = 2 |\sin(x/2)|$</span>, <span class="math-container">$b = 2|\sin(y/2)|$</span>, <span class="math-container">$c = 2|\sin((x-y)/2)|$</span> are the lengths of the sides of the triangle.</p> <p>I was able to plot the indicator function <span class="math-container">${\bf 1}_S(x,y)$</span> in the following picture: <a href="https://i.sstatic.net/WRqz0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WRqz0.png" alt="enter image description here" /></a> The picture of <span class="math-container">${\bf 1}_S(x+\frac{y}{2},y)$</span> shows even more regularity. It seems that only a few sine-like curves are involved in this picture <a href="https://i.sstatic.net/DHaSK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DHaSK.png" alt="enter image description here" /></a></p> <p>Edit: I updated the pictures. The original version of this post inverted the <span class="math-container">$x$</span> and <span class="math-container">$y$</span> axes.</p> <p>Using @joriki 's comment that the problem is equivalent to showing that the probability that the unordered triple <span class="math-container">$(a, b, c)$</span> satisfies <span class="math-container">$a c &gt; b^2$</span> is <span class="math-container">$2/5$</span> allows us to create a simpler picture. Let <span class="math-container">\begin{equation} U = \{(x, y) | a c&gt; b^2\} \end{equation}</span> with the same definition of <span class="math-container">$a, b, c$</span>. 
Plotting the function <span class="math-container">${\bf 1}_U(x+\frac{y}{2},y)$</span> gives the much simpler picture <a href="https://i.sstatic.net/dCTqB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dCTqB.png" alt="enter image description here" /></a></p> <p>The sine-looking function appearing in this picture can be proved to have the equation <span class="math-container">\begin{equation} y = 2 \arccos\left(-\frac{1}{4} + \frac{1}{4}\sqrt{17 + 8\cos x}\right) \end{equation}</span> for <span class="math-container">$0\le x\le \pi$</span>. So the question reduces to: <strong>Is it true that</strong> <span class="math-container">\begin{equation} \frac{2}{\pi^2} \int_0^\pi \arccos\left(-\frac{1}{4} + \frac{1}{4}\sqrt{17 + 8\cos x}\right) d x = \frac{2}{5} \end{equation}</span> <a href="https://www.wolframalpha.com/input?i=integrate%20%282%2Fpi%5E2%29*%20arccos%28%28-1%20%2B%20sqrt%2817%20%2B%208*%20cos%28x%29%29%29%2F4%29%20from%200%20to%20pi" rel="nofollow noreferrer">WolframAlpha</a> actually gives this value numerically. 
Here is how the equation of the curve can be established, recall that <span class="math-container">$2 \sin u \sin v = \cos \left(u-v\right)-\cos \left(u+v\right)$</span>, then</p> <p><span class="math-container">\begin{equation}\renewcommand{\arraystretch}{1.5} \begin{array}{rl}&amp;\sin \left(\frac{x}{2}+\frac{y}{4}\right) \sin \left(\frac{x}{2}-\frac{y}{4}\right) = {\sin }^{2} \left(\frac{y}{2}\right)\\ \Longleftrightarrow &amp;\displaystyle \frac{1}{2} \left(\cos \left(\frac{y}{2}\right)-\cos \left(x\right)\right) = {\sin }^{2} \left(\frac{y}{2}\right)\\ \Longleftrightarrow &amp;\displaystyle {\cos }^{2} \left(\frac{y}{2}\right)+\frac{1}{2} \cos \left(\frac{y}{2}\right)-\left(1+\frac{\cos \left(x\right)}{2}\right) = 0\\ \Longleftrightarrow &amp;\displaystyle \cos \left(\frac{y}{2}\right) =-\frac{1}{4} \pm \frac{1}{4} \sqrt{17+8 \cos \left(x\right)}\\ \Longleftrightarrow &amp;\displaystyle \cos \left(\frac{y}{2}\right) =-\frac{1}{4} + \frac{1}{4} \sqrt{17+8 \cos \left(x\right)} \end{array}\end{equation}</span></p>
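The conjectured identity can at least be checked numerically with a crude midpoint rule (Python sketch; the argument of $\arccos$ stays in $[\frac12,1]$ on $[0,\pi]$, so the integrand is well defined):

```python
import math

def f(x):
    return math.acos((-1 + math.sqrt(17 + 8 * math.cos(x))) / 4)

# Midpoint rule on [0, pi]
N = 100_000
h = math.pi / N
integral = h * sum(f((k + 0.5) * h) for k in range(N))

print(2 / math.pi**2 * integral)  # should be very close to 2/5
```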
differentiation
<p>Let $f$ be a real-valued function continuous on $[a,b]$ and differentiable on $(a,b)$.<br> Suppose that $\lim_{x\rightarrow a}f'(x)$ exists.<br> Then, prove that $f$ is differentiable at $a$ and $f'(a)=\lim_{x\rightarrow a}f'(x)$. </p> <p>It seems like an easy exercise, but it is a little bit tricky.<br> I'm not sure which theorems should be used here. </p> <p>==============================================================</p> <p>Using @David Mitra's advice and @Pete L. Clark's notes<br> I tried to solve this proof. I want to know whether my proof is correct.</p> <p>By MVT, for $h&gt;0$ and $c_h \in (a,a+h)$ $$\frac{f(a+h)-f(a)}{h}=f'(c_h)$$<br> and $\lim_{h \rightarrow 0^+}c_h=a$. </p> <p>Then $$\lim_{h \rightarrow 0^+}\frac{f(a+h)-f(a)}{h}=\lim_{h \rightarrow 0^+}f'(c_h)=\lim_{h \rightarrow 0^+}f'(a)$$ </p> <p>But is that enough? I think I should show something more, but don't know what it is. </p>
<p>Some hints:</p> <p>Using the definition of derivative, you need to show that $$ \lim_{h\rightarrow 0^+} {f(a+h)-f(a)\over h } $$ exists and is equal to $\lim\limits_{x\rightarrow a^+} f'(x)$.</p> <p>Note that for $h&gt;0$ the Mean Value Theorem provides a point $c_h$ with $a&lt;c_h&lt;a+h$ such that $$ {f(a+h)-f(a)\over h } =f'(c_h). $$</p> <p>Finally, note that $c_h\rightarrow a^+$ as $h\rightarrow0^+$.</p>
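To assemble the hints into a full argument, write $L=\lim_{x\to a^+}f'(x)$ and run an $\varepsilon$–$\delta$ composition step:

```latex
% Given eps > 0, choose delta > 0 so that |f'(x) - L| < eps whenever
% a < x < a + delta. For 0 < h < delta, the MVT point c_h satisfies
% a < c_h < a + h < a + delta, hence
\left|\frac{f(a+h)-f(a)}{h}-L\right| \;=\; \bigl|f'(c_h)-L\bigr| \;<\; \varepsilon,
% so the one-sided derivative f'(a) exists and equals L.
```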
<p>The result is essentially Theorem 5.29 from <a href="http://alpha.math.uga.edu/%7Epete/2400full.pdf" rel="nofollow noreferrer">my honors calculus notes</a>. As I mention, I learned this result from Spivak's <em>Calculus</em>. I say &quot;essentially&quot; because the version discussed in my notes is for an interior point of an interval whereas your version is at an endpoint, but to prove the two-sided version you just make a one-sided argument twice, so it's really the same thing.</p> <p>[And David Mitra is right: the proof uses the Mean Value Theorem and not much else.]</p> <p><b>Added</b>: Since we are talking about this result anyway: although I call it a &quot;theorem of Spivak&quot;, this is not entirely serious -- the result is presumably much older than Michael Spivak. I am just identifying my (probably secondary) source. If someone knows a primary source, I'd be very happy to hear it.</p> <p>Also it is of interest to ask what this result is used for. In my notes it isn't used for anything but is only a curiosity. I think Spivak does use it for something, though I forget what at the moment. Moreover a colleague of mine called my attention to this result in the context of, IIRC, Taylor's Theorem. Does anyone know of further applications?</p>
geometry
<blockquote> <p>A goat is tied to an external corner of a rectangular shed measuring 4 m by 6 m. If the goat’s rope is 8 m long, what is the total area, in square meters, in which the goat can graze?</p> </blockquote> <p>Well, it seems like the goat can turn a full circle of radius 8 m, and a rectangular shed's diagonal is less than 8m (actually √52), and so shouldn't it be just 6 x 4 = 24 sq metre? The answer says it is 53 pi, and I have no clue why it is so or why my way of solving doesn't work. </p> <p>Updated: Oh, and the only area given is that of the shed's. How can I know the full area in which the goat can actually graze on?</p>
<p>Parts of three different circles. <a href="https://i.sstatic.net/QLqTA.jpg" rel="noreferrer"><img src="https://i.sstatic.net/QLqTA.jpg" alt="enter image description here"></a></p>
<p>$$\frac{3}{4}\pi 8^2+\frac{1}{4}\pi (8-6)^2+\frac{1}{4}\pi (8-4)^2=53\pi$$</p>
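In units of $\pi$ square metres, the three pieces contribute $\frac34\cdot 8^2$, $\frac14\cdot 2^2$ and $\frac14\cdot 4^2$; a one-line arithmetic check in Python:

```python
# Three-quarter circle of radius 8, plus two quarter circles of radii
# 8 - 6 = 2 and 8 - 4 = 4 (all as coefficients of pi).
area_over_pi = 0.75 * 8**2 + 0.25 * (8 - 6)**2 + 0.25 * (8 - 4)**2
print(area_over_pi)  # 53.0
```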
geometry
<p>A disc contains <span class="math-container">$n$</span> independent uniformly random points. Each point is connected by a line segment to its nearest neighbor, forming clusters of connected points.</p> <p>For example, here are <span class="math-container">$20$</span> random points and <span class="math-container">$7$</span> clusters, with an average cluster size of <span class="math-container">$\frac{20}{7}$</span>.</p> <p><a href="https://i.sstatic.net/PXovh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PXovh.png" alt="enter image description here" /></a></p> <blockquote> <p>What does the average cluster size approach as <span class="math-container">$n\to\infty$</span> ?</p> </blockquote> <p><strong>My attempt:</strong></p> <p>I made a <a href="https://www.desmos.com/calculator/v4ziiyaonx?lang=zh-CN" rel="nofollow noreferrer">random point generator</a> that generates <span class="math-container">$20$</span> random points. The average cluster size is usually approximately <span class="math-container">$3$</span>.</p> <p>I considered what happens when we add a new random point to a large set of random points. Adding the point either causes no change in the number of clusters, or it causes the number of clusters to increase by <span class="math-container">$1$</span> (<strong>Edit:</strong> this is not true, as noted by @TonyK in the comments). The probability that adding a new point increases the number of clusters by <span class="math-container">$1$</span>, is the reciprocal of the answer to my question. (Analogy: Imagine guests arriving to a party; if 25% of guests bring a bottle of wine, then the expectation of the average number of guests per bottle of wine is <span class="math-container">$4$</span>.) 
But I haven't worked out this probability.</p> <p><strong>Context:</strong></p> <p>This question was inspired by the question <a href="https://math.stackexchange.com/questions/271497/stars-in-the-universe-probability-of-mutual-nearest-neighbors">Stars in the universe - probability of mutual nearest neighbors</a>.</p> <p><strong>Edit:</strong> Posted on <a href="https://mathoverflow.net/q/462252/494920">MO</a>.</p>
<p>This is <a href="https://math.stackexchange.com/questions/271497/">Stars in the universe - probability of mutual nearest neighbors</a> in disguise. If there are <span class="math-container">$k$</span> pairs of mutual nearest neighbours, then there are <span class="math-container">$n-k$</span> edges (since <span class="math-container">$n$</span> edges are drawn and <span class="math-container">$k$</span> of them occur twice). There can’t be any cycles (unless it’s an Escher disk) because the distances would have to decrease all along the cycle. So the graph is a forest of <span class="math-container">$n-(n-k)=k$</span> trees. For <span class="math-container">$n\to\infty$</span>, the fraction of points that are their nearest neighbour’s nearest neighbour goes to the probability for that to happen; <span class="math-container">$\frac kn$</span> is half that fraction, and the expected average cluster size is the reciprocal of that. In two dimensions, that yields</p> <p><span class="math-container">$$ 2\left(\frac43+\frac{\sqrt3}{2\pi}\right)\approx3.218\;. $$</span></p>
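For what it's worth, a brute-force simulation of the finite-$n$ model lands close to this constant. A Python sketch ($O(n^2)$ nearest-neighbour search plus a union-find over the drawn segments; $n$ and the seed are arbitrary):

```python
import math
import random

def average_cluster_size(n=1200, seed=4):
    """Sample n uniform points in the unit disk, join each point to its
    nearest neighbour, and return n / (number of connected clusters)."""
    rng = random.Random(seed)
    pts = []
    while len(pts) < n:  # rejection-sample the disk
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y <= 1:
            pts.append((x, y))

    parent = list(range(n))  # union-find over the segments

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, (xi, yi) in enumerate(pts):
        nearest = min(
            (j for j in range(n) if j != i),
            key=lambda j: (pts[j][0] - xi) ** 2 + (pts[j][1] - yi) ** 2,
        )
        parent[find(i)] = find(nearest)

    clusters = len({find(i) for i in range(n)})
    return n / clusters

print(average_cluster_size())                      # fluctuates around 3.2
print(2 * (4 / 3 + math.sqrt(3) / (2 * math.pi)))  # 3.2179...
```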
<p>Your question is scratching the surface of an area of probability called <em>percolation theory</em>.</p> <p>Indeed, noting that the connectivity of a random graph in OP's model is independent of the scale, we may consider <span class="math-container">$n$</span> independent points sampled uniformly at random from a disk of radius <span class="math-container">$\sqrt{n}$</span>. Then, as <span class="math-container">$n \to \infty$</span>, the thermodynamic limit of this model converges to what is termed the <em>nearest-neighbor continuum percolation</em> model [1], which can be described as follows:</p> <blockquote> <p><strong>Limit Model.</strong> Let <span class="math-container">$X$</span> be a Poisson point process on <span class="math-container">$\mathbb{R}^2$</span> with constant intensity. For any two distinct points <span class="math-container">$x$</span> and <span class="math-container">$y$</span> in <span class="math-container">$X$</span>, connect <span class="math-container">$x$</span> and <span class="math-container">$y$</span> if <span class="math-container">$y$</span> is a nearest neighbor of <span class="math-container">$x$</span>.</p> <p><strong>Q.</strong> Can we say anything about the cluster-size distribution in this model?</p> </blockquote> <p>To my knowledge, it seems that most of the questions regarding this distribution, including OP's, have never been explored in the literature, and I have a hunch that it is almost impossible to answer this exactly. At least, it is known that all the clusters are finite with probability one, as proved in [1].</p> <hr /> <p><strong>Edit.</strong> <a href="https://math.stackexchange.com/a/4845566/9340"><strong>@joriki</strong> demonstrated that</a> OP's question can be answered without explicitly knowing the size distribution of the connected clusters, via associating each cluster to the unique &quot;mutual nearest-neighbor pair&quot; contained in it.</p> <hr /> <p><strong>[1]</strong> Häggström, O. 
and Meester, R. (1996), <em>Nearest neighbor and hard sphere models in continuum percolation</em>. Random Struct. Alg., 9: 295-315. <a href="https://doi.org/10.1002/(SICI)1098-2418(199610)9:3%3C295::AID-RSA3%3E3.0.CO;2-S" rel="nofollow noreferrer">https://doi.org/10.1002/(SICI)1098-2418(199610)9:3&lt;295::AID-RSA3&gt;3.0.CO;2-S</a></p>
number-theory
<p>The following appeared in the problems section of the March 2015 issue of the <em>American Mathematical Monthly</em>.</p> <blockquote> <p>Show that there are infinitely many rational triples $(a, b, c)$ such that $a + b + c = abc = 6$.</p> </blockquote> <p>For example, here are two solutions $(1,2,3)$ and $(25/21,54/35,49/15)$. </p> <p>The deadline for submitting solutions was July 31 2015, so it is now safe to ask: is there a simple solution? One that doesn't involve elliptic curves, for instance? </p>
<p>(<strong><em>Edit at the bottom</em></strong>.) Here is an <strong>elementary</strong> way (known to Fermat) to find an infinite number of rational points. From $a+b+c = abc = 6$, we need to solve the equation,</p> <p>$$ab(6-a-b) = 6\tag1$$</p> <p>Solving $(1)$ as a quadratic in $b$, its discriminant $D$ must be made a square,</p> <p>$$D := a^4-12a^3+36a^2-24a = z^2$$</p> <p>Using <strong><em>any</em></strong> non-zero solution $a_0$, do the transformation,</p> <p>$$a=x+a_0\tag2$$</p> <p>For this curve, let $a_0=2$, and we get,</p> <p>$$x^4-4x^3-12x^2+8x+16$$</p> <p>Assume it to be a square, </p> <p>$$x^4-4x^3-12x^2+8x+16 = (px^2+qx+r)^2$$</p> <p>Expand, then collect powers of $x$ to get the form,</p> <p>$$p_4x^4+p_3x^3+p_2x^2+p_1x+p_0 = 0$$</p> <p>where the $p_i$ are polynomials in $p,q,r$. Then solve the system of <strong>three</strong> equations $p_2 = p_1 = p_0 = 0$ using the <strong>three</strong> unknowns $p,q,r$. One ends up with,</p> <p>$$105/64x^4+3/4x^3=0$$</p> <p>Thus, $x =-16/35$ or,</p> <p>$$a = x+a_0 = -16/35+2 = 54/35$$</p> <p>and you have a new rational point, </p> <p>$$a_1 = 54/35 = 6\times 3^{\color{red}2}/35$$</p> <p>Use this on $(2)$ as $x = y+54/35$ and repeat the procedure. One gets,</p> <p>$$a_2 = 6\times 4286835^{\color{red}2}/37065988023371$$</p> <p>Again using this on $(2)$, we eventually have,</p> <p>$$\small {a_3 = 6\times 11838631447160215184123872719289314446636565357654770746958595}^{\color{red}2} /d\quad$$</p> <p>where the denominator $d$ is a large integer too tedious to write. </p> <p><strong>Conclusion:</strong> Starting with a "seed" solution, just a few iterations of this procedure have yielded $a_i$ with a similar form $6n^{\color{red}2}/d$ that grow rapidly in "height". Heuristically, it then suggests an <strong><em>infinite</em></strong> sequence of distinct rational $a_i$ that grow in height with each iteration. </p> <p>$\color{blue}{Edit}$: Courtesy of Aretino's remark below, another piece of the puzzle was found. 
We can translate his recursion into an identity. If,</p> <p>$$a^4-12a^3+36a^2-24a = z^2$$</p> <p>then subsequent ones are,</p> <p>$$v^4-12v^3+36v^2-24v = \left(\frac{12\,e\,g\,(e^2+3f^2)}{(e^2-f^2)^2}\right)^2$$</p> <p>where,</p> <p>$$\begin{aligned} v &amp;=\frac{-6g^2}{e^2-f^2}\\ \text{and,}\\ e &amp;=\frac{a^3-3a^2+3}{3a}\\ f &amp;=\frac{a^3-6a^2+9a-6}{z}\\ g &amp;=\frac{a^3-6a^2+12a-6}{z} \end{aligned}$$</p> <p>Starting with $a_0=2$, this leads to $v_1 = 6\times 3^2/35$, then $v_2 = 6\times 4286835^2/37065988023371$, <em>ad infinitum</em>. Thus, this is an <strong><em>elementary</em></strong> demonstration that there is an infinite sequence of rational $a_i = v_i$, without appealing to elliptic curves.</p>
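<p>As a sanity check, here is a short Python sketch (an addition of mine, not part of the original post) that iterates these formulas with exact rational arithmetic, starting from the seed $a_0=2$:</p>

```python
from fractions import Fraction
from math import isqrt

def D(a):
    # the discriminant quartic a^4 - 12a^3 + 36a^2 - 24a
    return a**4 - 12*a**3 + 36*a**2 - 24*a

def exact_sqrt(q):
    # exact square root of a nonnegative rational, or None if not a square
    rn, rd = isqrt(q.numerator), isqrt(q.denominator)
    if rn * rn == q.numerator and rd * rd == q.denominator:
        return Fraction(rn, rd)
    return None

def step(a):
    # one application of the e, f, g, v formulas above
    z = exact_sqrt(D(a))
    e = (a**3 - 3*a**2 + 3) / (3*a)
    f = (a**3 - 6*a**2 + 9*a - 6) / z
    g = (a**3 - 6*a**2 + 12*a - 6) / z
    return -6*g**2 / (e**2 - f**2)

a = Fraction(2)
a = step(a)   # 54/35 = 6*3^2/35
a = step(a)   # 6*4286835^2/37065988023371
```

<p>Each iterate keeps the discriminant an exact rational square, which is precisely what the identity asserts.</p>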
<p>More generally, suppose for some <span class="math-container">$S,P$</span> we're given a rational solution <span class="math-container">$(a_0,b_0,c_0)$</span> of the Diophantine equation <span class="math-container">$$ E = E_{S,P}: \quad a+b+c = S, \ \ abc = P. $$</span> Then, as long as <span class="math-container">$a,b,c$</span> are pairwise distinct, we can obtain a new solution <span class="math-container">$(a_1,b_1,c_1)$</span> by applying the transformation <span class="math-container">$$ T\bigl((a,b,c)\bigr) = \left( -\frac{a(b-c)^2}{(a-b)(a-c)} \, , -\frac{b(c-a)^2}{(b-c)(b-a)} \, , -\frac{c(a-b)^2}{(c-a)(c-b)} \right). $$</span> Indeed, it is easy to see that the coordinates of <span class="math-container">$T(a,b,c)$</span> multiply to <span class="math-container">$abc$</span>; that they also sum to <span class="math-container">$a+b+c$</span> takes only a bit of algebra. (This transformation was obtained by regarding <span class="math-container">$abc=P$</span> as a cubic curve in the plane <span class="math-container">$a+b+c=S$</span>, finding the tangent at <span class="math-container">$(a_0,b_0,c_0)$</span>, and computing its third point of intersection with <span class="math-container">$abc=P$</span>; see picture and further comments below.) We can then repeat the procedure, computing <span class="math-container">$$ (a_2,b_2,c_2) = T\bigl((a_1,b_1,c_1)\bigr), \quad (a_3,b_3,c_3) = T\bigl((a_2,b_2,c_2)\bigr), $$</span> etc., as long as each <span class="math-container">$a_i,b_i,c_i$</span> are again pairwise distinct. 
In our case <span class="math-container">$S=P=6$</span> and we start from <span class="math-container">$(a_0,b_0,c_0) = (1,2,3)$</span>, finding <span class="math-container">$(a_1,b_1,c_1) = (-1/2, 8, -3/2)$</span>, <span class="math-container">$(a_2,b_2,c_2) = (-361/68, -32/323, 867/76)$</span>, <span class="math-container">$$ (a_3,b_3,c_3) = \left( \frac{79790995729}{9885577384}\, ,\ -\frac{4927155328}{32322537971}\, ,\ -\frac{9280614987}{24403407416}\, \right), $$</span> "etcetera".</p> <p>As with the recursive construction given by <strong>Tito Piezas III</strong>, the construction of <span class="math-container">$T$</span> via tangents to a cubic is an example of a classical technique that has been incorporated into the modern theory of elliptic curves but does not require explicit delving into this theory. Also as with <strong>TPIII</strong>'s construction, completing the proof requires showing that the iteration does not eventually cycle. We do this by showing that (as suggested by the first three steps) the solutions <span class="math-container">$(a_i,b_i,c_i)$</span> get ever more complicated as <span class="math-container">$i$</span> increases.</p> <p>We measure the complexity of a rational number by writing it as <span class="math-container">$m/n$</span> <em>in lowest terms</em> and defining a "height" <span class="math-container">$H$</span> by <span class="math-container">$H(m/n) = \sqrt{m^2+n^2}$</span>. Using the defining equations of <span class="math-container">$E_{S,P}$</span>, we eliminate <span class="math-container">$b,c$</span> from the formula for the first coordinate of <span class="math-container">$T\bigl( (a,b,c) \bigr)$</span>, and likewise for each of the other two coordinates, finding that <span class="math-container">$$ T\bigl( (a,b,c) \bigr) = (t(a),t(b),t(c)) \bigr) $$</span> where <span class="math-container">$$ t(x) := -\frac{x^2(x-S)^2 - 4Px}{x^2(2x-S)+P}. 
$$</span> We find that the numerator and denominator are relatively prime as polynomials in <span class="math-container">$x$</span>, unless <span class="math-container">$P=0$</span> or <span class="math-container">$P=(S/3)^3$</span>, when <span class="math-container">$E_{S,P}$</span> is degenerate (obviously so if <span class="math-container">$P=0$</span>, and with an isolated double point at <span class="math-container">$a=b=c=S/3$</span> if <span class="math-container">$P=(S/3)^3$</span>). Thus <span class="math-container">$t$</span> is a rational function of degree <span class="math-container">$4$</span>, meaning that <span class="math-container">$$ t(m/n) = \frac{N(m,n)}{D(m,n)} $$</span> for some homogeneous polynomials <span class="math-container">$N,D$</span> of degree <span class="math-container">$4$</span> <em>without common factor</em>. We claim:</p> <p><strong>Proposition.</strong> <em>If <span class="math-container">$f=N/D$</span> is a rational function of degree <span class="math-container">$d$</span> then there exists <span class="math-container">$c&gt;0$</span> such that <span class="math-container">$H(f(x)) \geq c H(x)^d$</span> for all <span class="math-container">$x$</span>.</em></p> <p><strong>Corollary</strong>: <em>If <span class="math-container">$d&gt;1$</span> and a sequence <span class="math-container">$x_0,x_1,x_2,x_3,\ldots$</span> is defined inductively by <span class="math-container">$x_{i+1} = f(x_i)$</span>, then <span class="math-container">$H(x_i) \rightarrow \infty$</span> as <span class="math-container">$i \rightarrow \infty$</span> provided some <span class="math-container">$H(x_i)$</span> is large enough, namely <span class="math-container">$H(x_i) &gt; c^{-1/(d-1)}$</span>.</em></p> <p><em>Proof</em> of Proposition: This would be clear if we knew that the fraction <span class="math-container">$f(m/n) = N(m,n)/D(m,n)$</span> must be in lowest terms, because then we could take <span class="math-container">$$ c = c_0 := 
\min_{m^2+n^2 = 1} \sqrt{N(m,n)^2 + D(m,n)^2}. $$</span> (Note that <span class="math-container">$c_0$</span> is strictly positive, because it is the minimum value of a continuous positive function on the unit circle, and the unit circle is compact.) In general <span class="math-container">$N(m,n)$</span> and <span class="math-container">$D(m,n)$</span> need not be relatively prime, but their gcd is bounded above: because <span class="math-container">$N,D$</span> have no common factor, they have nonzero linear combinations of the form <span class="math-container">$R_1 m^{2d}$</span> and <span class="math-container">$R_2 n^{2d}$</span>, and since <span class="math-container">$\gcd(m^{2d},n^{2d}) = \gcd(m,n)^{2d} = 1$</span> we have <span class="math-container">$$ \gcd(N(m,n), D(m,n)) \leq R := \text{lcm} (R_1,R_2). $$</span> (In fact <span class="math-container">$R = \pm R_1 = \pm R_2$</span>, the common value being <span class="math-container">$\pm$</span> the <em>resultant</em> of <span class="math-container">$N$</span> and <span class="math-container">$D$</span>; but we do not need this.) Thus we may take <span class="math-container">$c = c_0/R$</span>, <strong>QED</strong>.</p> <p>For our degree-<span class="math-container">$4$</span> functions <span class="math-container">$t$</span> associated with <span class="math-container">$E_{S,P}$</span> we compute <span class="math-container">$R = P^2 (27P-S^3)^2$</span>, which is <span class="math-container">$18^4$</span> for our <span class="math-container">$S=P=6$</span>; and we calculate <span class="math-container">$c_0 &gt; 1/12$</span> (the minimum occurs near <span class="math-container">$(.955,.3)$</span>). Hence the sequence of solutions <span class="math-container">$(a_i,b_i,c_i)$</span> is guaranteed not to cycle once some coordinate has height at least <span class="math-container">$(12 \cdot 18^4)^{1/3} = 108$</span>. 
This already happens for <span class="math-container">$i=2$</span>, so we have proved that <span class="math-container">$E_{6,6}$</span> has infinitely many rational solutions. <span class="math-container">$\Box$</span></p> <p>The same technique works with <strong>TPIII</strong>'s recursion, which has <span class="math-container">$d=9$</span>.</p> <p>The following Sage plot shows:</p> <p>in <span class="math-container">$\color{blue}{\text{blue}}$</span>, the curve <span class="math-container">$E_{6,6}$</span>, projected to the <span class="math-container">$(a,b)$</span> plane (with both coordinates in <span class="math-container">$[-6,12]$</span>);</p> <p>in <span class="math-container">$\color{gray}{\text{gray}}$</span>, the asymptotes <span class="math-container">$a=0$</span>, <span class="math-container">$b=0$</span>, and <span class="math-container">$c=0$</span>;</p> <p>and in <span class="math-container">$\color{orange}{\text{orange}}$</span>, <span class="math-container">$\color{red}{\text{red}}$</span>, and <span class="math-container">$\color{brown}{\text{brown}}$</span>, the tangents to the curve at <span class="math-container">$(a_i,b_i,c_i)$</span> that meet the curve again at <span class="math-container">$(a_{i+1},b_{i+1},c_{i+1})$</span>, for <span class="math-container">$i=0,1,2$</span>:</p> <p><a href="https://i.sstatic.net/tf2eK.png" rel="noreferrer"><img src="https://i.sstatic.net/tf2eK.png" alt=""></a><br> <sub>(source: <a href="http://math.harvard.edu/~elkies/mx1384653.png" rel="noreferrer">harvard.edu</a>)</sub> </p> <p>Further solutions can be obtained by intersecting <span class="math-container">$E$</span> with the line joining two non-consecutive points; this is illustrated by the dotted <span class="math-container">$\color{green}{\text{green}}$</span> line, which connects the <span class="math-container">$i=0$</span> to the <span class="math-container">$i=2$</span> point, and meets <span class="math-container">$E$</span> again in a point <span 
class="math-container">$(20449/8023, 25538/10153, 15123/16159)$</span> with all coordinates positive.</p> <p>In the modern theory of elliptic curves, the rational points (including any rational "points at infinity", here the asymptotes) form an additive group, with three points adding to zero <strong>iff</strong> they are the intersection of <span class="math-container">$E$</span> with a line (counted with multiplicity). Hence if we denote our initial point <span class="math-container">$(1,2,3)=(a_0,b_0,c_0)$</span> by <span class="math-container">$P$</span>, the map <span class="math-container">$T$</span> is multiplication by <span class="math-container">$-2$</span> in the group law, so the <span class="math-container">$i$</span>-th iterate is <span class="math-container">$(-2)^i P$</span>, and <span class="math-container">$(20449/8023, 25538/10153, 15123/16159)$</span> is <span class="math-container">$-(P+4P) = -5P$</span>. Cyclic permutation of the coordinates is translation by a 3-torsion point (indeed an elliptic curve has a rational 3-torsion point <strong>iff</strong> it is isomorphic with <span class="math-container">$E_{S,P}$</span> for some <span class="math-container">$S$</span> and <span class="math-container">$P$</span>), and switching two coordinates is multiplication by <span class="math-container">$-1$</span>. The iteration constructed by <strong>Tito Piezas III</strong> is multiplication by <span class="math-container">$\pm 3$</span> in the group law; in general, multiplication by <span class="math-container">$k$</span> is a rational function of degree <span class="math-container">$k^2$</span>.</p>
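<p>As a quick check (a sketch I am adding, not part of the original answer), one can iterate $T$ with exact rational arithmetic and confirm that the invariants $a+b+c=abc=6$ persist while the numerators and denominators grow:</p>

```python
from fractions import Fraction as F

def T(p):
    # the tangent-line map, written coordinate by coordinate
    a, b, c = p
    return (-a*(b - c)**2 / ((a - b)*(a - c)),
            -b*(c - a)**2 / ((b - c)*(b - a)),
            -c*(a - b)**2 / ((c - a)*(c - b)))

p = (F(1), F(2), F(3))
for _ in range(3):
    p = T(p)
    assert sum(p) == 6 and p[0] * p[1] * p[2] == 6   # stays on E_{6,6}
```

<p>The first two iterates reproduce $(-1/2, 8, -3/2)$ and $(-361/68, -32/323, 867/76)$ from the answer above.</p>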
matrices
<p>I have two square matrices: <span class="math-container">$A$</span> and <span class="math-container">$B$</span>. <span class="math-container">$A^{-1}$</span> is known and I want to calculate <span class="math-container">$(A+B)^{-1}$</span>. Are there theorems that help with calculating the inverse of the sum of matrices? In the general case <span class="math-container">$B^{-1}$</span> is not known, but if it is necessary then it can be assumed that <span class="math-container">$B^{-1}$</span> is also known.</p>
<p>In general, <span class="math-container">$A+B$</span> need not be invertible, even when <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are. But one might ask whether you can have a formula under the additional assumption that <span class="math-container">$A+B$</span> <em>is</em> invertible.</p> <p>As noted by Adrián Barquero, there is <a href="http://www.jstor.org/stable/2690437" rel="noreferrer">a paper by Ken Miller</a> published in the <em>Mathematics Magazine</em> in 1981 that addresses this.</p> <p>He proves the following:</p> <p><strong>Lemma.</strong> If <span class="math-container">$A$</span> and <span class="math-container">$A+B$</span> are invertible, and <span class="math-container">$B$</span> has rank <span class="math-container">$1$</span>, then let <span class="math-container">$g=\operatorname{trace}(BA^{-1})$</span>. Then <span class="math-container">$g\neq -1$</span> and <span class="math-container">$$(A+B)^{-1} = A^{-1} - \frac{1}{1+g}A^{-1}BA^{-1}.$$</span></p> <p>From this lemma, we can take a general <span class="math-container">$A+B$</span> that is invertible and write it as <span class="math-container">$A+B = A + B_1+B_2+\cdots+B_r$</span>, where <span class="math-container">$B_i$</span> each have rank <span class="math-container">$1$</span> and such that each <span class="math-container">$A+B_1+\cdots+B_k$</span> is invertible (such a decomposition always exists if <span class="math-container">$A+B$</span> is invertible and <span class="math-container">$\mathrm{rank}(B)=r$</span>). Then you get:</p> <p><strong>Theorem.</strong> Let <span class="math-container">$A$</span> and <span class="math-container">$A+B$</span> be nonsingular matrices, and let <span class="math-container">$B$</span> have rank <span class="math-container">$r\gt 0$</span>. 
Let <span class="math-container">$B=B_1+\cdots+B_r$</span>, where each <span class="math-container">$B_i$</span> has rank <span class="math-container">$1$</span>, and each <span class="math-container">$C_{k+1} = A+B_1+\cdots+B_k$</span> is nonsingular. Setting <span class="math-container">$C_1 = A$</span>, then <span class="math-container">$$C_{k+1}^{-1} = C_{k}^{-1} - g_kC_k^{-1}B_kC_k^{-1}$$</span> where <span class="math-container">$g_k = \frac{1}{1 + \operatorname{trace}(C_k^{-1}B_k)}$</span>. In particular, <span class="math-container">$$(A+B)^{-1} = C_r^{-1} - g_rC_r^{-1}B_rC_r^{-1}.$$</span></p> <p>(If the rank of <span class="math-container">$B$</span> is <span class="math-container">$0$</span>, then <span class="math-container">$B=0$</span>, so <span class="math-container">$(A+B)^{-1}=A^{-1}$</span>).</p>
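<p>The lemma is easy to check numerically; the matrices below are an arbitrary example of mine, not taken from Miller's paper:</p>

```python
import numpy as np

# an invertible A plus a rank-1 perturbation B = u v^T
A = np.array([[4., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
u = np.array([[1.], [2.], [0.]])
v = np.array([[0.], [1.], [1.]])
B = u @ v.T

Ainv = np.linalg.inv(A)
g = np.trace(B @ Ainv)                      # here g = 7/18, not -1
miller = Ainv - Ainv @ B @ Ainv / (1 + g)   # the lemma's formula
assert np.allclose(miller, np.linalg.inv(A + B))
```

<p>For a rank-$r$ perturbation one would apply this update $r$ times, exactly as in the theorem.</p>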
<p>It is shown in <a href="http://dspace.library.cornell.edu/bitstream/1813/32750/1/BU-647-M.version2.pdf" rel="noreferrer">On Deriving the Inverse of a Sum of Matrices</a> that </p> <p><span class="math-container">$(A+B)^{-1}=A^{-1}-A^{-1}B(A+B)^{-1}$</span>.</p> <p>This equation cannot by itself be used to calculate <span class="math-container">$(A+B)^{-1}$</span>, since the unknown inverse appears on both sides, but it is useful for perturbation analysis where <span class="math-container">$B$</span> is a perturbation of <span class="math-container">$A$</span>. There are several other variations of the above form (see equations (22)-(26) in this paper). </p> <p>This result is good because it only requires <span class="math-container">$A$</span> and <span class="math-container">$A+B$</span> to be nonsingular. As a comparison, the SMW identity or Ken Miller's paper (as mentioned in the other answers) requires some nonsingularity or rank conditions on <span class="math-container">$B$</span>.</p>
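<p>For instance (an illustrative sketch of mine, not from the paper), the identity is easy to confirm once $(A+B)^{-1}$ is known by other means:</p>

```python
import numpy as np

A = np.array([[3., 1., 0., 0.],
              [0., 3., 1., 0.],
              [0., 0., 3., 1.],
              [0., 0., 0., 3.]])
B = 0.5 * np.ones((4, 4))          # a perturbation; A + B is diagonally dominant

inv_sum = np.linalg.inv(A + B)     # computed directly...
rhs = np.linalg.inv(A) - np.linalg.inv(A) @ B @ inv_sum
assert np.allclose(inv_sum, rhs)   # ...and it satisfies the identity
```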
combinatorics
<p>I was under the mistaken impression that if one could find the generating function for a sequence of numbers, you could just plug in a natural number $n$ to find the $n$th term of the sequence. I realize now that I was confusing this with a closed form formula. </p> <p>So if that is not the case, then what is the point of generating functions? How do they make understanding counting sequences easier? For example, suppose I had a problem where I wanted to count how many ways I could buy $n$ pieces of fruit (apples, oranges, and pears) given that I want an even number of apples, an odd number of oranges, and at most 3 pears. This would be the number of nonnegative integer solutions to $a+b+c=n$ with $a$ even, $b$ odd, and $0\leq c\leq 3$. This is the same as the coefficient of $x^n$ in the product $$ (1+x^2+x^4+\cdots)(x+x^3+x^5+\cdots)(1+x+x^2+x^3) = \frac{1}{1-x^2}\cdot\frac{x}{1-x^2}\cdot\frac{1-x^4}{1-x}$$</p> <p>But what good is that? I don't see how this is much better. Also, with the use of exponential generating functions, it seems the choice of monomials we use as place holders for the terms of the sequence can be arbitrary. Then the $n$th term of the sequence is just the coefficient of the $n$th monomial that you've chosen to build the generating function with. What is the real advantage of doing things like this? Many problems I see tend to ask me to find the generating function, but then I'm rarely asked to do anything with it.</p>
<p>Closed form formulas are overrated. When they exist, generating function techniques can often help you find them; when they don't, the generating function is the next best thing, and it turns out to be much more powerful than it looks at first glance. For example, most generating functions are actually <em>meromorphic functions</em>, and this means that one can deduce asymptotic information about a sequence from the locations of the poles of its generating function. This is, for example, how one deduces the asymptotics of the <a href="http://en.wikipedia.org/wiki/Partition_(number_theory)#Asymptotic_behaviour">partition numbers</a>.</p> <p>In your particular example, the generating function is rational, so it has a finite number of poles. That means you can use partial fraction decomposition on it to immediately get a closed form.</p> <p>You might be interested in reading <a href="https://math.berkeley.edu/~qchu/TopicsInGF.pdf">my notes on generating functions</a>, which have several examples and which I hope will be enlightening. The first basic thing to grasp is that manipulating generating functions is much easier than manipulating sequences, but the power of generating functions goes much deeper than this. For a really thorough discussion I highly recommend Flajolet and Sedgewick's <a href="http://algo.inria.fr/flajolet/Publications/books.html">Analytic Combinatorics</a>, which is available for free online. </p>
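<p>For the fruit problem above, the bookkeeping can be done in a few lines (my sketch, not part of the original answer): multiply the three truncated series and compare the coefficients with a brute-force count.</p>

```python
def mul(p, q, N):
    # multiply two power series given as coefficient lists, truncated to N terms
    r = [0] * N
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j < N:
                r[i + j] += pi * qj
    return r

N = 12
apples  = [1 - k % 2 for k in range(N)]           # 1 + x^2 + x^4 + ...
oranges = [k % 2 for k in range(N)]               # x + x^3 + x^5 + ...
pears   = [1 if k <= 3 else 0 for k in range(N)]  # 1 + x + x^2 + x^3

G = mul(mul(apples, oranges, N), pears, N)        # G[n] = number of ways

def brute(n):
    return sum(1 for a in range(0, n + 1, 2)
                 for b in range(1, n + 1, 2)
                 for c in range(4)
                 if a + b + c == n)

assert all(G[n] == brute(n) for n in range(N))
```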
<p>Have you read <a href="http://www.math.upenn.edu/~wilf/gfologyLinked2.pdf">Generatingfunctionology</a> by Herbert Wilf? The book is loaded with methods and techniques based on generating functions for solving a number of different problems. I guess the book can answer your question better than I could.</p> <p>I feel that generating functions are powerful because they allow you to use tools from calculus and analysis to solve problems in areas such as discrete mathematics and combinatorics, where such tools don't seem readily applicable. </p> <p>Edit: I just wanted to add one more point. If you have translated the problem from the sequence to the generating function and you don't see any simple closed form solutions, you could use Wolfram Alpha to make absolutely sure. Wolfram Alpha can easily expand out expressions involving large polynomial terms and will often give you a series expansion if it exists in closed form. </p>
geometry
<p>How do I find a vector perpendicular to a vector like this: $$3\mathbf{i}+4\mathbf{j}-2\mathbf{k}?$$ Could anyone explain this to me, please?</p> <p>I have a solution for the case $3\mathbf{i}+4\mathbf{j}$, but I could not solve it when there are $3$ components...</p> <p>When I googled, I saw the direct solution but did not find a process or method to follow. Kindly let me know the way to do it. Thanks.</p>
<p>There exist infinitely many vectors in $3$ dimensions that are perpendicular to a fixed one. They need only satisfy the following equation: $$(3\mathbf{i}+4\mathbf{j}-2\mathbf{k}) \cdot v=0$$</p> <p>To find all of them, just choose two linearly independent vectors satisfying this, like $v_1=(4\mathbf{i}-3\mathbf{j})$ and $v_2=(2\mathbf{i}+3\mathbf{k})$, and then any linear combination of them is also perpendicular to the original vector: $$v=((4a+2b)\mathbf{i}-3a\mathbf{j}+3b\mathbf{k}) \hspace{10 mm} a,b \in \mathbb{R}$$</p>
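<p>A quick numerical check of this two-parameter family (a sketch I am adding):</p>

```python
import numpy as np

v0 = np.array([3, 4, -2])

def family(a, b):
    # (4a+2b) i - 3a j + 3b k, i.e. a*v1 + b*v2 with v1 = (4,-3,0), v2 = (2,0,3)
    return np.array([4*a + 2*b, -3*a, 3*b])

for a, b in [(1, 0), (0, 1), (2, -5), (3, 7)]:
    assert np.dot(v0, family(a, b)) == 0   # perpendicular for every a, b
```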
<p>Take the cross product with any vector that is not parallel to the given one. The result is perpendicular to both vectors, so it is one such vector.</p>
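<p>For example (a minimal sketch of this approach):</p>

```python
import numpy as np

v = np.array([3, 4, -2])
w = np.array([1, 0, 0])   # any vector not parallel to v will do
p = np.cross(v, w)        # p is perpendicular to both v and w
assert np.dot(v, p) == 0
```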
probability
<p>There's a question in my Olympiad questions book which I can't seem to solve:</p> <blockquote> <p>You have the option to throw a die up to three times. You will earn the face value of the die. You have the option to stop after each throw and walk away with the money earned. The earnings are not additive. What is the expected payoff of this game?</p> </blockquote> <p>I found a solution <a href="https://web.archive.org/web/20180120152240/http://mathforum.org/library/drmath/view/68228.html" rel="nofollow noreferrer">here</a> but I don't understand it.</p>
<p>Let's suppose we have only 1 roll. What is the expected payoff? Each outcome is equally likely, so the die will show $1,2,3,4,5,6$ with equal probability. Thus their average of $3.5$ is the expected payoff.</p> <p>Now let's suppose we have 2 rolls. If, on the first roll, I roll a $6$, I would not continue. The next throw could only maintain my winnings of $6$ (with $1/6$ chance) or reduce them. Similarly, if I threw a $5$ or a $4$ on the first roll, I would not continue, because my expected payoff on the last throw would be a $3.5$. However, if I threw a $1$, $2$, or $3$, I would take that second round. This is again because I expect to win $3.5$.</p> <p>So in the 2 roll game, if I roll a $4$, $5$, or $6$, I keep that roll, but if I throw a $1$, $2$, or $3$, I reroll. Thus I have a $1/2$ chance of keeping a $4,5,6$, or a $1/2$ chance of rerolling. Rerolling has an expected return of $3.5$. As the $4,5,6$ are equally likely, rolling a $4$, $5$, or $6$ has expected return $5$. Thus my expected payout on 2 rolls is $.5(5) + .5(3.5) = 4.25$.</p> <p>Now we go to the 3 roll game. If I roll a $5$ or $6$, I keep my roll. But now, even a $4$ is undesirable, because by rerolling, I'd be playing the 2 roll game, which has expected payout of $4.25$. So now the expected payout is $\frac{1}{3}(5.5) + \frac{2}{3}(4.25) = 4.\overline{66}$.</p> <p>Does that make sense?</p>
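<p>The backward induction above can be written out directly; here is a small sketch of that computation (mine, not part of the original answer):</p>

```python
from fractions import Fraction

def value(rolls_left):
    # expected payoff under the optimal strategy with this many rolls remaining
    if rolls_left == 1:
        return Fraction(7, 2)          # must keep the final roll: average 3.5
    reroll = value(rolls_left - 1)     # value of giving up the current face
    return sum(max(Fraction(d), reroll) for d in range(1, 7)) / 6

print(value(1), value(2), value(3))    # 7/2, 17/4, 14/3
```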
<p>This problem is solved using the theory of optimal stopping for Markov chains. I will explain some of the theory, then turn to your specific question. You can learn more about this fascinating topic in Chapter 4 of <em>Introduction to Stochastic Processes</em> by Gregory F. Lawler.</p> <hr> <p>Think of a Markov chain with state space $\cal S$ as a game.</p> <p>A <em>payoff function</em> $f:{\cal S}\to[0,\infty)$ assigns a monetary "payoff" to each state of the Markov chain. This is the amount you would collect if you stop playing with the chain in that state. </p> <p>In contrast, the <em>value function</em> $v:{\cal S}\to[0,\infty)$ is defined as the greatest expected payoff possible from each starting point; $$v(x)=\sup_T \mathbb{E}_x(f(X_T)).$$ There is a single optimal strategy $T_{\rm opt}$ so that $v(x)=\mathbb{E}_x(f(X_{T_{\rm opt}})).$ It can be described as $T_{\rm opt}:=\inf(n\geq 0: X_n\in{\cal E})$, where ${\cal E}=\{x\in {\cal S}\mid f(x)=v(x)\}$. That is, you should stop playing as soon as you hit the set $\cal E$.</p> <hr> <p><strong>Example:</strong> </p> <p>You roll an ordinary die with outcomes $1,2,3,4,5,6$. You can keep the value or roll again. If you roll, you can keep the new value or roll a third time. After the third roll you must stop. You win the amount showing on the die. What is the value of this game?</p> <p><strong>Solution:</strong> The state space for the Markov chain is $${\cal S}=\{\mbox{Start}\}\cup\left\{(n,d): n=2,1,0; d=1,2,3,4,5,6\right\}.$$ The variable $n$ tells you how many rolls you have left, and this decreases by one every time you roll. Note that the states with $n=0$ are absorbing.</p> <p>You can think of the state space as a tree; the chain moves forward along the tree until it reaches the end. 
</p> <p><img src="https://i.sstatic.net/pNXRN.jpg" alt="enter image description here"></p> <p>The function $v$ is given above in green, while $f$ is in red.</p> <p>The payoff function $f$ is zero at the start, and otherwise equals the number of spots on $d$.</p> <p>To find the value function $v$, let's start at the right hand side of the tree. At $n=0$, we have $v(0,d)=d$, and we calculate $v$ elsewhere by working backwards, averaging over the next roll and taking the maximum of that and the current payoff. Mathematically, we use the property $v(x)=\max(f(x),(Pv)(x))$ where $Pv(x)=\sum_{y\in {\cal S}} p(x,y)v(y).$</p> <p>The value of the game at the start is \$4.66. The optimal strategy is to keep playing until you reach a state where the red number and green number are the same. </p>
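<p>The recursion $v(x)=\max(f(x),(Pv)(x))$ on this tree takes only a few lines to run (a sketch I am adding, not part of the original answer):</p>

```python
from fractions import Fraction

def v(n, d):
    # value at state (n, d): n rolls left, face d showing; payoff f(n, d) = d
    if n == 0:
        return Fraction(d)
    roll_again = sum(v(n - 1, e) for e in range(1, 7)) / 6
    return max(Fraction(d), roll_again)

start = sum(v(2, d) for d in range(1, 7)) / 6   # average over the first roll
assert start == Fraction(14, 3)                 # = 4.666..., the value at Start
```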
geometry
<p><em>This is a very speculative/soft question; please keep this in mind when reading it. Here "higher" means "greater than 3".</em></p> <p>What I am wondering about is what <em>new</em> geometrical phenomena are there in higher dimensions. When I say new I mean phenomena which are counterintuitive or not analogous to their lower dimensional counterparts. A good example could be <a href="http://mathworld.wolfram.com/HyperspherePacking.html" rel="noreferrer">hypersphere packing</a>.</p> <p>My main (and sad) impression is that almost all phenomena in higher dimensions can be understood intuitively by dimensional analogy. See for example, <a href="http://eusebeia.dyndns.org/4d/vis/10-rot-1" rel="noreferrer">this link</a>:</p> <p><a href="https://i.sstatic.net/Tk7J4.gif" rel="noreferrer"><img src="https://i.sstatic.net/Tk7J4.gif" alt="Rotation of a 3-cube in the YW plane"></a></p> <p>What this implies (for me) is the boring consequence that there is no new conceptual richness in higher dimensional geometry beyond the fact that the numbers are larger (for example my field of study is string compactifications and though, at first sight, it could sound spectacular to use orientifolding, which sets loci of fixed points which are O3 and O7 planes, the reasoning is pretty much the same as in lower dimensions...)</p> <p>However the question of higher dimensional geometry is very related (for me) to the idea of beauty and complexity: these projections to 2-D of higher dimensional objects totally amaze me (for example this orthonormal projection of a <a href="https://i.pinimg.com/originals/a3/ac/9f/a3ac9fdf2dd062ebf4e9fab843ddf1c4.jpg" rel="noreferrer">12-cube</a>) and make me think there must be interesting higher dimensional phenomena...</p> <p>I would thank anyone who could give me examples of beautiful ideas involving 
“visualization” of higher dimensional geometry…</p>
<p>In high dimensions, almost all of the volume of a ball sits at its surface. More exactly, if $V_d(r)$ is the volume of the $d$-dimensional ball with radius $r$, then for any $\epsilon&gt;0$, no matter how small, you have $$\lim_{d\to\infty} \frac{V_d(1-\epsilon)}{V_d(1)} = 0$$ Algebraically that's obvious, but geometrically I consider it highly surprising.</p> <p><strong>Edit:</strong></p> <p>Another surprising fact: In 4D and above, you can have a flat torus, that is, a torus without any intrinsic curvature (like a cylinder in 3D). Even more: You can draw such a torus (not an image of it, the flat torus itself) on the surface of a hyperball (that is, a hypersphere). Indeed, the three-dimensional hypersphere (surface of the four-dimensional hyperball) can be almost completely partitioned into such tori, with two circles remaining in two completely orthogonal planes (thanks to anon in the comments for reminding me of those two leftover circles). Note that the circles could be considered degenerate tori, as the flat tori continuously transform into them (in much the same way as the circles of latitude on the 2-sphere transform into a point at the poles).</p>
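<p>Since $V_d(r) = r^d\,V_d(1)$, the ratio in the limit above is just $(1-\epsilon)^d$, and a tiny sketch (added by me) shows how quickly it collapses:</p>

```python
eps = 0.01   # thickness of the boundary shell we exclude
for d in (3, 10, 100, 1000):
    # fraction of the unit d-ball's volume NOT in the outer shell
    print(d, (1 - eps) ** d)
```

<p>Already at $d=1000$ less than $10^{-4}$ of the volume lies away from a boundary shell of thickness $0.01$.</p>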
<p>A number of problems in discrete geometry (typically, involving arrangements of points or other objects in $\mathbb R^d$) change behavior as the number of dimensions grows past what we have intuition for.</p> <p>My favorite example is the "sausage catastrophe", because of the name. The problem here is: take $n$ unit balls in $\mathbb R^d$. How can we pack them together most compactly, minimizing the volume of the convex hull of their union? (To visualize this in $\mathbb R^3$, imagine that you're wrapping the $n$ balls in plastic wrap, creating a single object, and you want the object to be as small as possible.)</p> <p>There are two competing strategies here:</p> <ol> <li>Start with a dense sphere packing in $\mathbb R^d$, and pick out some roughly-circular piece of it.</li> <li>Arrange all the spheres in a line, so that the convex hull of their union forms the shape of a sausage.</li> </ol> <p>Which strategy is best? It depends on $d$, in kind of weird ways. For $d=2$, the first strategy (using the hexagonal circle packing, and taking a large hexagonal piece of it) is best for almost any number of circles. For $d=3$, the sausage strategy is the best known configuration for $n \le 56$ (though this is not proven) and the first strategy takes over for larger $n$ than that: the point where this switch happens is called the "sausage catastrophe".</p> <p>For $d=4$, the same behavior as in $d=3$ occurs, except we're even less certain when. We've managed to show that the sausage catastrophe occurs for some $n &lt; 375,769$. On the other hand, we're not even sure if the sausage is optimal for $n=5$.</p> <p>Finally, we know that there is <em>some</em> sufficiently large $d$ such that the sausage strategy is always the best strategy in $\mathbb R^d$, no matter how many balls there are. We think that value is $d=5$, but the best we've shown is that the sausage is always optimal for $d\ge 42$. 
There are many open questions about sausages.</p> <hr> <p>If you're thinking about the more general problem of packing spheres in $\mathbb R^d$ as densely as possible, the exciting stuff also happens in dimensions we can't visualize. A recent result says that the <a href="https://en.wikipedia.org/wiki/E8_lattice" rel="noreferrer">$E_8$ lattice</a> and the <a href="https://en.wikipedia.org/wiki/Leech_lattice" rel="noreferrer">Leech lattice</a> are the densest packing in $\mathbb R^8$ and $\mathbb R^{24}$ respectively, and these are much better than the best thing we know how to do in "adjacent" dimensions. In a sense, this is saying that there are $8$-dimensional and $24$-dimensional objects with no analog in $\mathbb R^d$ for arbitrary $d$: a perfect example of something that happens in many dimensions that can't be intuitively described by comparing it to ordinary $3$-dimensional space.</p> <hr> <p>Results like the <a href="https://en.wikipedia.org/wiki/Hales%E2%80%93Jewett_theorem" rel="noreferrer">Hales–Jewett theorem</a> are another source of "new behavior" in sufficiently high-dimensional space. The Hales–Jewett theorem says, roughly speaking, that for any $n$ there is a dimension $d$ such that $n$-in-a-row tic-tac-toe on an $n \times n \times \dots \times n$ board cannot be played to a draw. (For $n=3$, that dimension is $d=3$; for $n=4$, it's somewhere between $d=7$ and $d = 10^{11}$.) However, you could complain that this result is purely combinatorial; you're not doing so much visualizing of $d$-dimensional objects here.</p>
combinatorics
<p>The continued fraction of this series exhibits a truly crazy pattern and I found no reference for it so far. We have:</p> <p><span class="math-container">$$\sum_{k=1}^\infty \frac{1}{(2^k)!}=0.5416914682540160487415778421$$</span></p> <p>But the continued fraction is just beautiful:</p> <p><code>[1, 1, 5, 2, 69, 1, 1, 5, 1, 1, 12869, 2, 5, 1, 1, 69, 2, 5, 1, 1, 601080389, 2, 5, 2, 69, 1, 1, 5, 2, 12869, 1, 1, 5, 1, 1, 69, 2, 5, 1, 1, 1832624140942590533, 2, 5, 2, 69, 1, 1, 5, 1, 1, 12869, 2, 5, 1, 1, 69, 2, 5, 2, 601080389, 1, 1, 5, 2, 69, 1, 1, 5, 2, 12869, 1, 1, 5, 1, 1, 69, 2, 5, 1, 1, 23951146041928082866135587776380551749, 2, 5, 2, 69, 1, 1, 5, 1, 1, 12869, 2, 5, 1, 1, 69, 2, 5, 1, 1, 601080389, 2, 5, 2, 69, 1, 1, 5, 2, 12869, 1, 1, 5, 1, 1, 69, 2, 5, 2, 1832624140942590533, 1, 1, 5, 2, 69, 1, 1, 5, 1, 1, 12869, 2, 5, 1, 1, 69, 2, 5, 2, 601080389, 1, 1, 5, 2, 69, 1, 1, 5, 2,...]</code></p> <p>All of these large numbers are not just random - they have a simple closed form:</p> <p><span class="math-container">$$A_n=\binom{2^n}{2^{n-1}}-1$$</span></p> <p><span class="math-container">$$A_1=1$$</span></p> <p><span class="math-container">$$A_2=5$$</span></p> <p><span class="math-container">$$A_3=69$$</span></p> <p><span class="math-container">$$A_4=12869$$</span></p> <p><span class="math-container">$$A_5=601080389$$</span></p> <p>And so on. 
This sequence is not in OEIS; only a larger sequence is, which contains this one as a subsequence: <a href="https://oeis.org/A014495" rel="noreferrer">https://oeis.org/A014495</a></p> <blockquote> <p>What is the explanation for this?</p> <p>Is there a regular pattern in this continued fraction (in the positions of numbers)?</p> </blockquote> <p>Are there generalizations for other sums of the form <span class="math-container">$\sum_{k=1}^\infty \frac{1}{(a^k)!}$</span>?</p> <hr /> <p><strong>Edit</strong></p> <p>I think a good move will be to rename the strings of small numbers:</p> <p><span class="math-container">$$a=1, 1, 5, 2,\qquad b=1, 1, 5, 1, 1,\qquad c=2,5,1,1,\qquad d=2, 5, 2$$</span></p> <p>As a side note, if we could set <span class="math-container">$1,1=2$</span> then all these strings would be the same.</p> <p>Now we rewrite the sequence. I will denote <span class="math-container">$A_n$</span> by just its index <span class="math-container">$n$</span>:</p> <blockquote> <p><span class="math-container">$$[a, 3, b, 4, c, 3, c, 5, d, 3, a, 4, b, 3, c, 6, d, 3, b, 4, c, 3, d, 5, a, 3, a, 4, b, 3, c, 7, \\ d, 3, b, 4, c, 3, c, 5, d, 3, a, 4, b, 3, d, 6, a, 3, b, 4, c, 3, d, 5, a, 3, a,...]$$</span></p> </blockquote> <p><span class="math-container">$$[a3b4c3c5d3a4b3c6d3b4c3d5a3a4b3c7d3b4c3c5d3a4b3d6a3b4c3d5a3a,...]$$</span></p> <p>Now we have the new large numbers <span class="math-container">$A_n$</span> appearing at positions <span class="math-container">$2^n$</span>. 
And positions of the same numbers form a simple arithmetic progression with difference <span class="math-container">$2^n$</span> as well.</p> <p>Now we only have to figure out the pattern (if any exists) for <span class="math-container">$a,b,c,d$</span>.</p> <blockquote> <p>The <span class="math-container">$10~000$</span> terms of the continued fraction are uploaded at github <a href="http://gist.github.com/anonymous/20d6f0e773a2391dc01815e239eacc79" rel="noreferrer">here</a>.</p> </blockquote> <hr /> <p>I also link my <a href="https://math.stackexchange.com/q/1856226/269624">related question</a>; from the information there we can conclude that the series above provides the greedy-algorithm Egyptian fraction expansion of the number, and the number is irrational by the theorem stated in <a href="http://www.jstor.org/stable/2305906" rel="noreferrer">this paper</a>.</p>
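<p>As a sanity check on the closed form (a sketch in Python with exact rational arithmetic; the helper names are mine, not from the thread), one can expand a partial sum of the series and confirm that the large entries are exactly $\binom{2^n}{2^{n-1}}-1$:</p>

```python
from fractions import Fraction
from math import comb, factorial

def cf_terms(x, max_terms=500):
    """Continued fraction of an exact rational via the Euclidean algorithm."""
    terms = []
    for _ in range(max_terms):
        a = x.numerator // x.denominator
        terms.append(a)
        x -= a
        if x == 0:
            break
        x = 1 / x
    return terms

# Exact partial sum S_6 = sum_{k=1}^{6} 1/(2^k)!
S = sum(Fraction(1, factorial(2**k)) for k in range(1, 7))
terms = cf_terms(S)

# Conjectured closed form for the large entries
A = [comb(2**n, 2**(n-1)) - 1 for n in range(2, 7)]
print(A[:4])                       # [5, 69, 12869, 601080389]
print(all(a in terms for a in A))  # True
```

<p>Adding more terms of the series extends the expansion in the same way.</p>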
<p>Actually your pattern is true, and it is relatively easy to prove. Most of it can be found in an article by Henry Cohn in Acta Arithmetica (1996) ("Symmetry and specializability in continued fractions") where he finds similar patterns for other kinds of continued fractions, such as $\sum \frac{1}{10^{n!}}$. Curiously he doesn't mention your particular series, although his method applies directly to it.</p> <p>Let $[a_0,a_1,a_2,\dots,a_m]$ be a continued fraction and as usual let $$ \frac{p_m}{q_m} = [a_0,a_1,a_2,\dots,a_m] $$ we use this lemma from the mentioned article (the proof is not difficult and only uses elementary facts about continued fractions): </p> <p><strong>[Folding Lemma]</strong> $$ \frac{p_m}{q_m} + \frac{(-1)^m}{xq_m^2} = [a_0,a_1,a_2,\dots,a_m,x,-a_m,-a_{m-1},\dots,-a_2,-a_1] $$</p> <p>This involves negative integers, but it can be easily transformed into a similar expression involving only positive numbers using the fact that for any continued fraction: $$ [\dots, a, -\beta] = [\dots, a-1, 1, \beta-1] $$ So $$ \frac{p_m}{q_m} + \frac{(-1)^m}{xq_m^2} = [a_0,a_1,a_2,\dots,a_m,x-1,1,a_m-1,a_{m-1},\dots,a_2,a_1] $$</p> <p>With all this, write the $n$th partial sum of your series as $$ S_n = \sum_{k=1}^n \frac{1}{2^k!} = [0,a_1,a_2,\dots,a_m] = \frac{p_m}{q_m}$$</p> <p>where we can always take $m$ even (i.e. if $m$ is odd and $a_m&gt;1$ then we can consider instead the continued fraction $[0,a_1,a_2,\dots,a_m-1,1]$, and so on). </p> <p>Now $q_m = 2^n!$; we see it by induction on $n$: it is obvious for $n=1$, and if $S_{n-1} = P/2^{n-1}!$ then $$ S_n = \frac{P}{2^{n-1}!} + \frac{1}{2^n!} = \frac{P (2^n!/2^{n-1}!) + 1}{2^n!} $$ now any common factor of the numerator and the denominator would have to be a factor of $2^{n-1}!$ dividing also $P$, but this is impossible as those are coprime, so $q_m = 2^n!$ and we are done. 
</p> <p>Using the "positive" form of the folding lemma with $x = \binom{2^{n+1}}{2^n}$ we get:</p> <p>$$ \frac{p_m}{q_m} + \frac{(-1)^m}{\binom{2^{n+1}}{2^n}(2^n!)^2} = \frac{p_m}{q_m} + \frac{1}{2^{n+1}!} = [0,a_1,a_2,\dots,a_m,\binom{2^{n+1}}{2^n}-1,1,a_m-1,a_{m-1},\dots,a_1] $$</p> <p>And we get the "shape" of the continued fraction and your $A_n$. Let's see several steps: </p> <p>We start with the first term, which is $$ \frac{1}{2} = [0,2] $$ as $m$ is odd we use instead the continued fraction $$ \frac{1}{2} = [0,1,1] $$ and apply the last formula, getting $$ \frac{1}{2}+\frac{1}{2^2!} = [0,1,1,5,1,0,1] $$ We can dispose of the zeros using the fact that for any continued fraction: $$ [\dots, a, 0, b, \dots] = [\dots, a+b, \dots ] $$ so $$ \frac{1}{2}+\frac{1}{2^2!} = [0,1,1,5,2] $$ this time $m$ is even, so we apply the formula again, getting $$ \frac{1}{2}+\frac{1}{2^2!}+\frac{1}{2^3!} = [0,1,1,5,2,69,1,1,5,1,1] $$ again $m$ is even (and it will always remain even, as is easy to infer), so we apply the formula once more, getting as the next term $$ \frac{1}{2}+\frac{1}{2^2!}+\frac{1}{2^3!}+\frac{1}{2^4!} = [0,1,1,5,2,69,1,1,5,1,1,12869,1,0,1,5,1,1,69,2,5,1,1] $$ and we reduce it using the zero trick: $$ \frac{1}{2}+\frac{1}{2^2!}+\frac{1}{2^3!}+\frac{1}{2^4!} = [0,1,1,5,2,69,1,1,5,1,1,12869,2,5,1,1,69,2,5,1,1] $$</p> <p>From here it is easy to see that the obtained continued fraction always has an even number of terms and we always have to remove a zero, leaving a continued fraction ending again in 1,1. So the rule to obtain the continued fraction out to an arbitrary number of terms is to repeat the following step: if the shape of the last continued fraction is $[0,1,1,b_1,\dots,b_k,1,1]$ then the next continued fraction will be $$[0,1,1,b_1,\dots,b_k,1,1,A_n,2,b_k,\dots,b_1,1,1 ]$$ From this you can easily derive the patterns you have found for the positions of appearance of the different integers. </p>
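<p>The doubling rule at the end is easy to check mechanically. Here is a small sketch (Python; the function names are mine) that iterates the step $[0,1,1,b_1,\dots,b_k,1,1] \mapsto [0,1,1,b_1,\dots,b_k,1,1,A_n,2,b_k,\dots,b_1,1,1]$ and verifies with exact arithmetic that the resulting continued fraction really equals the partial sum $S_n$:</p>

```python
from fractions import Fraction
from math import comb, factorial

def A(n):
    return comb(2**n, 2**(n-1)) - 1

def cf_value(terms):
    """Evaluate a finite continued fraction [a0; a1, ..., am] exactly."""
    v = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        v = a + 1 / v
    return v

def next_cf(cf, n):
    """One folding step: [0,1,1,b...,1,1] -> [0,1,1,b...,1,1, A_n, 2, reversed(b), 1,1]."""
    body = cf[3:-2]
    return cf + [A(n), 2] + body[::-1] + [1, 1]

cf = [0, 1, 1, 5, 2, 69, 1, 1, 5, 1, 1]   # S_3, as derived above
for n in range(4, 8):
    cf = next_cf(cf, n)
    S_n = sum(Fraction(1, factorial(2**k)) for k in range(1, n + 1))
    assert cf_value(cf) == S_n            # the folded CF equals the partial sum
print(len(cf))                            # the length roughly doubles at each step
```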
<p>I have computed $10~000$ entries of the continued fraction, using Mathematica (the number itself was evaluated with $100~000$ digit precision).</p> <p>The results show that the pattern is very simple for the most part.</p> <p>First, we denote again:</p> <p>$$A_n= \binom{2^n}{2^{n-1}} -1$$</p> <p>The computed CF contains $A_n$ up to $n=13$, and the formula is numerically confirmed.</p> <p>Now the positions of $A_n$ go like this ($P_n$ is the list of positions of $A_n$ among all CF entries):</p> <p>$$P_2=P(5)=[3,8,13,18,23,28,\dots]$$</p> <p>$$P_3=P(69)=[5,16,25,36,45,56,\dots]$$</p> <p>$$P_4=[11,30,51,70,91,110,\dots]$$</p> <p>$$P_5=[21,60,101,140,181,220,\dots]$$</p> <p>$$P_6=[41,120,201,280,361,440,\dots]$$</p> <p>But all these are just a combination of two <strong>arithmetic progressions</strong> for even and odd terms!</p> <p>The first two terms are a little off, but the rest goes exactly like $P_4,P_5$, meaning for $n \geq 4$ we can write the general position for $A_n$ ($k=0,1,2,\dots$):</p> <blockquote> <p>$$p_k(A_n)= \begin{cases} 5 \cdot 2^{n-3}(1+2 k)+1,\qquad k \text{ even} \\ 5 \cdot 2^{n-3}(1+2 k),\qquad \qquad k \text{ odd} \end{cases}$$</p> </blockquote> <p>As special, different cases, we have:</p> <blockquote> <p>$$p_k(5)=3+5 k,\qquad k=0,1,2,\dots$$</p> <p>$$p_k(69)= \begin{cases} 2(3+5 k)-1,\qquad k \text{ even} \\ 2(3+5 k),\qquad \qquad k \text{ odd} \end{cases}$$</p> </blockquote> <hr> <p>For $P_1=1$ and for $2$ I can't see any definite pattern so far.</p> <hr> <blockquote> <p>Basically, we now know the explicit expression for every CF entry and all of its positions in the list, except for entries $1$ and $2$.</p> </blockquote> <p>It's enough now to consider the positions of $2$, then we just fill the rest of the list with $1$. 
The positions of $2$ start like this:</p> <p><code>[4, 12, 17, 22, 24, 29, 37, 42, 44, 52, 57, 59, 64, 69, 77, 82, 84, 92, 97, 102, 104, 109, 117, 119, 124, 132, 137, 139, 144, 149, 157, 162, 164, 172, 177, 182, 184, 189, 197, 202, 204, 212, 217, 219, 224, 229, 237, 239, 244, 252, 257, 262, 264, 269, 277, 279, 284, 292, 297, 299, 304, 309, 317, 322, 324, 332, 337, 342, 344, 349, 357, 362, 364, 372, 377, 379, 384, 389, 397, 402, 404, 412, 417, 422, 424, 429, 437, 439, 444, 452, 457, 459, 464, 469, 477, 479, 484, 492, 497, 502, 504, 509, 517, 522, 524, 532, 537, 539, 544, 549, 557, 559, 564, 572, 577, 582, 584, 589, 597,...]</code></p> <p>So far I have found four uninterrupted patterns for $2$:</p> <p>$$p_{1k}(2)=4+20k,\qquad k=0,1,2,\dots$$</p> <p>$$p_{2k}(2)=17+20k,\qquad k=0,1,2,\dots$$</p> <p>$$p_{3k}(2)=29+40k,\qquad k=0,1,2,\dots$$</p> <p>$$p_{4k}(2)=12+40k,\qquad k=0,1,2,\dots$$</p> <p><strong>Edit</strong></p> <p>Discounting these four progressions the rest of the sequence is very close to $20k$, but some numbers are $20k+2$, while some $20k-1$ with no apparent pattern:</p> 
<p><code>[22,42,59,82,102,119,139,162,182,202,219,239,262,279,299,322,342,362,379,402,422,439,459,479,502,522,539,559,582,599,619,642,662,682,699,722,742,759,779,802,822,842,859,879,902,919,939,959,982,1002,1019,1042,1062,1079,1099,1119,1142,1162,1179,1199,1222,1239,1259,1282,1302,1322,1339,1362,1382,1399,1419,1442,1462,1482,1499,1519,1542,1559,1579,1602,1622,1642,1659,1682,1702,1719,1739,1759,1782,1802,1819,1839,1862,1879,1899,1919,1942,1962,1979,2002,2022,2039,2059,2082,2102,2122,2139,2159,2182,2199,2219,2239,2262,2282,2299,2322,2342,2359,2379,2399,2422,2442,2459,2479,2502,2519,2539,2562,2582,2602,2619,2642,2662,2679,2699,2722,2742,2762,2779,2799,2822,2839,2859,2882,2902,2922,2939,2962,2982,2999,3019,3039,3062,3082,3099,3119,3142,3159,3179,3202,3222,3242,3259,3282,3302,3319,3339,3362,3382,3402,3419,3439,3462,3479,3499,3519,3542,3562,3579,3602,3622,3639,3659,3679,3702,3722,3739,3759,3782,3799,3819,3839,3862,3882,3899,3922,3942,3959,3979,4002,4022,4042,4059,4079,4102,4119,4139,4162,4182,4202,4219,4242,4262,4279,4299,4319,4342,4362,4379,4399,4422,4439,4459,4479,4502,4522,4539,4562,4582,4599,4619,4642,4662,4682,4699,4719,4742,4759,4779,4799,4822,4842,4859,4882,4902,4919,4939,4959,4982,5002,...]</code></p> <hr> <blockquote> <p>The $10~000$ terms of the continued fraction are uploaded at github <a href="http://gist.github.com/anonymous/20d6f0e773a2391dc01815e239eacc79">here</a>. You can check my conjectures or try to obtain the full pattern.</p> </blockquote> <hr> <blockquote> <p>I would also like some hints and outlines for a proof of the above conjectures.</p> </blockquote> <p>I understand that it's quite likely that any of the patterns I found break down for some large $k$.</p>
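<p>The position conjectures can be spot-checked directly from an exact partial sum (a sketch in Python; the helper names are mine). For instance, the positions of $A_4=12869$ in the expansion of $S_8=\sum_{k=1}^{8}1/(2^k)!$ follow the two interleaved arithmetic progressions stated above:</p>

```python
from fractions import Fraction
from math import factorial

def cf_terms(x):
    """Continued fraction of an exact positive rational (Euclidean algorithm)."""
    terms = []
    while True:
        a = x.numerator // x.denominator
        terms.append(a)
        x -= a
        if x == 0:
            return terms
        x = 1 / x

S = sum(Fraction(1, factorial(2**k)) for k in range(1, 9))   # S_8, exact
terms = cf_terms(S)[1:]      # drop the leading 0 to match the indexing above

pos = [i + 1 for i, t in enumerate(terms) if t == 12869]     # 1-based positions
pred = [5 * 2 * (1 + 2*k) + (1 if k % 2 == 0 else 0) for k in range(len(pos))]
print(pos[:6])               # [11, 30, 51, 70, 91, 110]
print(pos == pred)
```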
linear-algebra
<p>Assuming the axiom of choice, set $\mathbb F$ to be some field (we can assume it has characteristic $0$).</p> <p>I was told, by more than one person, that if $\kappa$ is an infinite cardinal then the vector space $V=\mathbb F^{(\kappa)}$ (that is, an infinite-dimensional space with a basis of cardinality $\kappa$) is <em>not</em> isomorphic (as a vector space) to its algebraic dual, $V^*$.</p> <p>I have asked several professors in my department, and this seems to be complete folklore. I was directed to some book, but could not find it there either.</p> <p>The <a href="http://en.wikipedia.org/wiki/Dual_space#Infinite-dimensional_case">Wikipedia entry</a> tells me that this is indeed not a cardinality issue; for example $\mathbb R^{&lt;\omega}$ (that is, all the eventually zero sequences of real numbers) has the same cardinality as its dual $\mathbb R^\omega$, but they are not isomorphic.</p> <p>Of course being of the same cardinality is necessary but far from sufficient for two vector spaces to be isomorphic.</p> <p>What I am asking, really, is whether or not it is possible, when given a basis and an embedding of a basis of $V$ into $V^*$, to say "<strong>This</strong> guy is not in the span of the embedding"?</p> <p><strong>Edit:</strong> I read the answers in the link given by Qiaochu. They did not satisfy me too much. </p> <p>My main problem is this: suppose $\kappa$ is our basis; then $V$ consists of $\{f\colon\kappa\to\mathbb F\Big| |f^{-1}[\mathbb F\setminus\{0\}]|&lt;\infty\}$ (that is, finite support), while $V^*=\{f\colon\kappa\to\mathbb F\}$ (that is, <em>all</em> the functions).</p> <p>In particular, the basis for $V$ is given by $f_\alpha(x) = \delta_{\alpha x}$ (i.e. $1$ on $\alpha$, and $0$ elsewhere), while $V^*$ needs a much larger basis. Why can't there be other linear functionals on $V$?</p> <p><strong>Edit II:</strong> After the discussions in the comments and the answers, I have a better understanding of my question to begin with. 
I have no qualms that under the axiom of choice, given an infinite set $\kappa$, there are a lot more functions from $\kappa$ into $\mathbb F$ than functions with <em>finite support</em> from $\kappa$ into $\mathbb F$. It is also clear to me that the basis of the vector space is actually the set of $\delta$ functions, whereas the basis for the dual is a subset of the characteristic functions.</p> <p>My problem is, if so, <em>why</em> is the dual space composed of all functions from $A$ into $F$? </p> <p>(And if possible, not just to show by cardinality games that the basis is much larger, but to actually show the algorithm for the diagonalization.)</p>
<p>This is just Bill Dubuque's sci.math proof (see <a href="http://groups.google.com/d/msg/sci.math/8aeaiKMLP8o/2IqZlhlzdCIJ">Google Groups</a> or <a href="http://mathforum.org/kb/message.jspa?messageID=7216370">MathForum</a>) mentioned in the comments, expanded.</p> <p><strong>Edit.</strong> I'm also reorganizing this so that it flows a bit better.</p> <p>Let $F$ be a field, and let $V$ be the vector space of dimension $\kappa$.</p> <p>Then $V$ is naturally isomorphic to $\mathop{\bigoplus}\limits_{i\in\kappa}F$, the set of all functions $f\colon \kappa\to F$ of finite support. Let $\epsilon_i$ be the element of $V$ that sends $i$ to $1$ and all $j\neq i$ to $0$ (that is, you can think of it as the $\kappa$-tuple with coefficients in $F$ that has $1$ in the $i$th coordinate, and $0$s elsewhere).</p> <p><strong>Lemma 1.</strong> If $\dim(V)=\kappa$, and either $\kappa$ or $|F|$ are infinite, then $|V|=\kappa|F|=\max\{\kappa,|F|\}$.</p> <p><em>Proof.</em> If $\kappa$ is finite, then $V=F^{\kappa}$, so $|V|=|F|^{\kappa}=|F|=|F|\kappa$, as $|F|$ is infinite here and the equality holds.</p> <p>Assume then that $\kappa$ is infinite. Each element of $V$ can be represented uniquely as a linear combination of the $\epsilon_i$. There are $\kappa$ distinct finite subsets of $\kappa$; and for a subset with $n$ elements, we have $|F|^n$ distinct vectors in $V$. </p> <p>If $\kappa\leq |F|$, then in particular $F$ is infinite, so $|F|^n=|F|$. Hence you have $|F|$ distinct vectors for each of the $\kappa$ distinct subsets (even throwing away the zero vector), so there is a total of $\kappa|F|$ vectors in $V$.</p> <p>If $|F|\lt\kappa$, then $|F|^n\lt\kappa$ since $\kappa$ is infinite; so there are at most $\kappa$ vectors for each subset, so there are at most $\kappa^2 = \kappa$ vectors in $V$. Since the basis has $\kappa$ elements, $\kappa\leq|V|\leq\kappa$, so $|V|=\kappa=\max\{\kappa,|F|\}$. <strong>QED</strong></p> <p>Now let $V^*$ be the dual of $V$. 
Since $V^* = \mathcal{L}(V,F)$ (where $\mathcal{L}(V,W)$ is the vector space of all $F$-linear maps from $V$ to $W$), and $V=\mathop{\oplus}\limits_{i\in\kappa}F$, then again from abstract nonsense we know that $$V^*\cong \prod_{i\in\kappa}\mathcal{L}(F,F) \cong \prod_{i\in\kappa}F.$$ Therefore, $|V^*| = |F|^{\kappa}$. </p> <hr/> <p><strong>Added.</strong> Why is it that if $A$ is the basis of a vector space $V$, then $V^*$ is equivalent to the set of all functions from $A$ to the ground field?</p> <p>A functional $f\colon V\to F$ is completely determined by its value on a basis (just like any other linear transformation); thus, if two functionals agree on $A$, then they agree everywhere. Hence, there is a natural injection, via restriction, from the set of all linear transformations $V\to F$ (denoted $\mathcal{L}(V,F)$) to the set of all functions $A\to F$, $F^A\cong \prod\limits_{a\in A}F$. Moreover, given any function $g\colon A\to F$, we can extend $g$ linearly to all of $V$: given $\mathbf{x}\in V$, there exists a unique finite subset $\mathbf{a}_1,\ldots,\mathbf{a}_n$ (pairwise distinct) of $A$ and unique scalars $\alpha_1,\ldots,\alpha_n$, none equal to zero, such that $\mathbf{x}=\alpha_1\mathbf{a}_1+\cdots+\alpha_n\mathbf{a}_n$ (that's from the definition of basis as a spanning set that is linearly independent; spanning ensures the existence of at least one such expression, linear independence guarantees that there is at most one such expression); we define $g(\mathbf{x})$ to be $$g(\mathbf{x})=\alpha_1g(\mathbf{a}_1)+\cdots+\alpha_ng(\mathbf{a}_n).$$ (The image of $\mathbf{0}$ is the empty sum, hence equal to $0$). 
Now, let us show that this is linear.</p> <p>First, note that if $\mathbf{x}=\beta_1\mathbf{a}_{i_1}+\cdots+\beta_m\mathbf{a}_{i_m}$ is <em>any</em> expression of $\mathbf{x}$ as a linear combination of pairwise distinct elements of the basis $A$, then it must be the case that this expression is equal to the one we already had, plus some terms with coefficient equal to $0$. This follows from the linear independence of $A$: take $$\mathbf{0}=\mathbf{x}-\mathbf{x} = (\alpha_1\mathbf{a}_1+\cdots+\alpha_n\mathbf{a}_n) - (\beta_1\mathbf{a}_{i_1}+\cdots+\beta_m\mathbf{a}_{i_m}).$$ After any cancellation that can be done, you are left with a linear combination of elements in the linearly independent set $A$ equal to $\mathbf{0}$, so all coefficients must be equal to $0$. That means that we can likewise define $g$ as follows: given <strong>any</strong> expression of $\mathbf{x}$ as a linear combination of elements of $A$, $\mathbf{x}=\gamma_1\mathbf{a}_1+\cdots+\gamma_m\mathbf{a}_m$, with $\mathbf{a}_i\in A$, not necessarily distinct, $\gamma_i$ scalars not necessarily equal to $0$, we define $$g(\mathbf{x}) = \gamma_1g(\mathbf{a}_1)+\cdots+\gamma_mg(\mathbf{a}_m).$$ This will be well-defined by the linear independence of $A$. 
And now it is very easy to see that $g$ is linear on $V$: if $\mathbf{x}=\gamma_1\mathbf{a}_1+\cdots+\gamma_m\mathbf{a}_m$ and $\mathbf{y}=\delta_{1}\mathbf{a'}_1+\cdots+\delta_n\mathbf{a'}_n$ are expressions for $\mathbf{x}$ and $\mathbf{y}$ as linear combinations of elements of $A$, then $$\begin{align*} g(\mathbf{x}+\lambda\mathbf{y}) &amp;= g\Bigl(\gamma_1\mathbf{a}_1+\cdots+\gamma_m\mathbf{a}_m+\lambda(\delta_{1}\mathbf{a'}_1+\cdots+\delta_n\mathbf{a'}_n)\Bigr)\\ &amp;= g\Bigl(\gamma_1\mathbf{a}_1+\cdots+\gamma_m\mathbf{a}_m+ \lambda\delta_{1}\mathbf{a'}_1+\cdots+\lambda\delta_n\mathbf{a'}_n\Bigr)\\ &amp;= \gamma_1g(\mathbf{a}_1) + \cdots + \gamma_mg(\mathbf{a}_m) + \lambda\delta_1g(\mathbf{a'}_1) + \cdots + \lambda\delta_ng(\mathbf{a'}_n)\\ &amp;= g(\mathbf{x})+\lambda g(\mathbf{y}). \end{align*}$$</p> <p>Thus, the map $\mathcal{L}(V,F)\to F^A$ is in fact onto, giving a bijection. </p> <p>This is the "linear-algebra" proof. The "abstract nonsense proof" relies on the fact that if $A$ is a basis for $V$, then $V$ is isomorphic to $\mathop{\bigoplus}\limits_{a\in A}F$, a direct sum of $|A|$ copies of $F$, and on the following universal property of the direct sum:</p> <p><strong>Definition.</strong> Let $\mathcal{C}$ be a category, let $\{X_i\}_{i\in I}$ be a family of objects in $\mathcal{C}$. A <em>coproduct</em> of the $X_i$ is an object $C$ of $\mathcal{C}$ together with a family of morphisms $\iota_j\colon X_j\to C$ such that for every object $X$ and every family of morphisms $g_j\colon X_j\to X$, there exists a unique morphism $\mathbf{g}\colon C\to X$ such that for all $j$, $g_j = \mathbf{g}\circ \iota_j$. </p> <p>That is, a family of maps from each element of the family is equivalent to a single map from the coproduct (just like a family of maps <em>into</em> the members of a family is equivalent to a single map <em>into</em> the product of the family). 
In particular, we get that:</p> <p><strong>Theorem.</strong> Let $\mathcal{C}$ be a category in which the sets of morphisms are sets; let $\{X_i\}_{i\in I}$ be a family of objects of $\mathcal{C}$, and let $(C,\{\iota_j\}_{j\in I})$ be their coproduct. Then for every object $X$ of $\mathcal{C}$ there is a natural bijection $$\mathrm{Hom}_{\mathcal{C}}(C,X) \longleftrightarrow \prod_{j\in I}\mathrm{Hom}_{\mathcal{C}}(X_j,X).$$</p> <p>The left hand side is the collection of morphisms from the coproduct to $X$; the right hand side is the collection of all families of morphisms from each element of $\{X_i\}_{i\in I}$ into $X$. </p> <p>In the vector space case, the fact that a linear transformation is completely determined by its value on a basis is what establishes that a vector space $V$ with basis $A$ is the coproduct of $|A|$ copies of the one-dimensional vector space $F$. So we have that $$\mathcal{L}(V,W) \leftrightarrow \mathcal{L}\left(\mathop{\oplus}\limits_{a\in A}F,W\right) \leftrightarrow \prod_{a\in A}\mathcal{L}(F,W).$$ But a linear transformation from $F$ to $W$ is equivalent to a map from the basis $\{1\}$ of $F$ into $W$, so $\mathcal{L}(F,W) \cong W$. Thus, we get that if $V$ has a basis of cardinality $\kappa$ (finite or infinite), we have: $$\mathcal{L}(V,F) \leftrightarrow \mathcal{L}\left(\mathop{\oplus}_{i\in\kappa}F,F\right) \leftrightarrow \prod_{i\in\kappa}\mathcal{L}(F,F) \leftrightarrow \prod_{i\in\kappa}F = F^{\kappa}.$$</p> <hr/> <p><strong>Lemma 2.</strong> If $\kappa$ is infinite, then $\dim(V^*)\geq |F|$. </p> <p><em>Proof.</em> If $F$ is finite, then the inequality is immediate. Assume then that $F$ is infinite. Let $c\in F$, $c\neq 0$. Define $\mathbf{f}_c\colon V\to F$ by $\mathbf{f}_c(\epsilon_n) = c^n$ if $n\in\omega$, and $\mathbf{f}_c(\epsilon_i)=0$ if $i\geq\omega$. 
These are linearly independent:</p> <p>Suppose that $c_1,\ldots,c_m$ are pairwise distinct nonzero elements of $F$, and that $\alpha_1\mathbf{f}_{c_1} + \cdots + \alpha_m\mathbf{f}_{c_m} = \mathbf{0}$. Then for each $i\in\omega$ we have $$\alpha_1 c_1^i + \cdots + \alpha_m c_m^i = 0.$$ Viewing the first $m$ of these equations as linear equations in the $\alpha_j$, the corresponding coefficient matrix is the <a href="http://en.wikipedia.org/wiki/Vandermonde_matrix">Vandermonde matrix</a>, $$\left(\begin{array}{cccc} 1 &amp; 1 &amp; \cdots &amp; 1\\ c_1 &amp; c_2 &amp; \cdots &amp; c_m\\ c_1^2 &amp; c_2^2 &amp; \cdots &amp; c_m^2\\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots\\ c_1^{m-1} &amp; c_2^{m-1} &amp; \cdots &amp; c_m^{m-1} \end{array}\right),$$ whose determinant is $\prod\limits_{1\leq i\lt j\leq m}(c_j-c_i)\neq 0$. Thus, the system has a unique solution, to wit $\alpha_1=\cdots=\alpha_m = 0$. </p> <p>Thus, the $|F|$ linear functionals $\mathbf{f}_c$ are linearly independent, so $\dim(V^*)\geq |F|$. <strong>QED</strong></p> <p>To recapitulate: Let $V$ be a vector space of dimension $\kappa$ over $F$, with $\kappa$ infinite. Let $V^*$ be the dual of $V$. Then $V\cong\mathop{\bigoplus}\limits_{i\in\kappa}F$ and $V^*\cong\prod\limits_{i\in\kappa}F$.</p> <p>Let $\lambda$ be the dimension of $V^*$. Then by Lemma 1 we have $|V^*| = \lambda|F|$. </p> <p>By Lemma 2, $\lambda=\dim(V^*)\geq |F|$, so $|V^*| = \lambda$. On the other hand, since $V^*\cong\prod\limits_{i\in\kappa}F$, then $|V^*|=|F|^{\kappa}$.</p> <p>Therefore, $\lambda= |F|^{\kappa}\geq 2^{\kappa} \gt \kappa$. Thus, $\dim(V^*)\gt\dim(V)$, so $V$ is not isomorphic to $V^*$. </p> <hr/> <p><strong>Added${}^{\mathbf{2}}$.</strong> Some results on vector spaces and bases.</p> <p>Let $V$ be a vector space, and let $A$ be a maximal linearly independent set (that is, $A$ is linearly independent, and if $B$ is any subset of $V$ that properly contains $A$, then $B$ is linearly dependent). 
</p> <p>In order to guarantee that there <em>is</em> a maximal linearly independent set in any vector space, one needs to invoke the Axiom of Choice in some manner, since the existence of such a set is, as we will see below, equivalent to the existence of a basis; however, here we are assuming that we already have such a set given. I believe that the Axiom of Choice is <strong>not</strong> involved in any of what follows.</p> <p><strong>Proposition.</strong> $\mathrm{span}(A) = V$.</p> <p><em>Proof.</em> Since $A\subseteq V$, then $\mathrm{span}(A)\subseteq V$. Let $v\in V$. If $v\in A$, then $v\in\mathrm{span}(A)$. If $v\notin A$, then $B=A\cup\{v\}$ is linearly dependent by maximality. Therefore, there exists a finite subset $a_1,\ldots,a_m$ of $B$ and scalars $\alpha_1,\ldots,\alpha_m$, not all zero, such that $\alpha_1a_1+\cdots+\alpha_ma_m=\mathbf{0}$. Since $A$ is linearly independent, at least one of the $a_i$ must be equal to $v$; say $a_1$. Moreover, $v$ must occur with a nonzero coefficient, again by the linear independence of $A$. So $\alpha_1\neq 0$, and we can then write $$v = a_1 = \frac{1}{\alpha_1}(-\alpha_2a_2 -\cdots - \alpha_ma_m)\in\mathrm{span}(A).$$ This proves that $V\subseteq \mathrm{span}(A)$. $\Box$</p> <p><strong>Proposition.</strong> Let $V$ be a vector space, and let $X$ be a linearly independent subset of $V$. If $v\in\mathrm{span}(X)$, then any two expressions of $v$ as linear combinations of elements of $X$ differ only in having extra summands of the form $0x$ with $x\in X$.</p> <p><em>Proof.</em> Let $v = a_1x_1+\cdots+a_nx_n = b_1y_1+\cdots+b_my_m$ be two expressions of $v$ as linear combinations of $X$. </p> <p>We may assume without loss of generality that $n\leq m$. Reordering the $x_i$ and the $y_j$ if necessary, we may assume that $x_1=y_1$, $x_2=y_2,\ldots,x_{k}=y_k$ for some $k$, $0\leq k\leq n$, and $x_1,\ldots,x_k,x_{k+1},\ldots,x_n,y_{k+1},\ldots,y_m$ are pairwise distinct. 
Then $$\begin{align*} \mathbf{0} &amp;= v-v\\ &amp;=(a_1x_1+\cdots+a_nx_n)-(b_1y_1+\cdots+b_my_m)\\ &amp;= (a_1-b_1)x_1 + \cdots + (a_k-b_k)x_k + a_{k+1}x_{k+1}+\cdots + a_nx_n - b_{k+1}y_{k+1}-\cdots - b_my_m. \end{align*}$$ As this is a linear combination of pairwise distinct elements of $X$ equal to $\mathbf{0}$, it follows from the linear independence of $X$ that $a_{k+1}=\cdots=a_n=0$, $b_{k+1}=\cdots=b_m=0$, and $a_1=b_1$, $a_2=b_2,\ldots,a_k=b_k$. That is, the two expressions of $v$ as linear combinations of elements of $X$ differ only in that there are extra summands of the form $0x$ with $x\in X$ in them. <strong>QED</strong></p> <p><strong>Corollary.</strong> Let $V$ be a vector space, and let $A$ be a maximal independent subset of $V$. If $W$ is a vector space, and $f\colon A\to W$ is any function, then there exists a unique linear transformation $T\colon V\to W$ such that $T(a)=f(a)$ for each $a\in A$.</p> <p><em>Proof.</em> <strong>Existence.</strong> Given $v\in V$, then $v\in\mathrm{span}(A)$. Therefore, we can express $v$ as a linear combination of elements of $A$, $v = \alpha_1a_1+\cdots+\alpha_na_n$. Define $$T(v) = \alpha_1f(a_1)+\cdots+\alpha_nf(a_n).$$ Note that $T$ is well-defined: if $v = \beta_1b_1+\cdots+\beta_mb_m$ is any other expression of $v$ as a linear combination of elements of $A$, then by the proposition above the two expressions differ only in summands of the form $0x$; but these summands do not affect the value of $T$. </p> <p>Note also that $T$ is linear, arguing as above. Finally, since $a\in A$ can be expressed as $a=1a$, then $T(a) = 1f(a) = f(a)$, so the restriction of $T$ to $A$ is equal to $f$.</p> <p><strong>Uniqueness.</strong> If $U$ is any linear transformation $V\to W$ such that $U(a)=f(a)$ for all $a\in A$, then for every $v\in V$, write $v=\alpha_1a_1+\cdots+\alpha_na_n$ with $a_i\in A$. Then 
$$\begin{align*} U(v) &amp;= U(\alpha_1a_1+\cdots + \alpha_na_n)\\ &amp;= \alpha_1U(a_1) + \cdots + \alpha_n U(a_n)\\ &amp;= \alpha_1f(a_1)+\cdots + \alpha_n f(a_n)\\ &amp;= \alpha_1T(a_1) + \cdots + \alpha_n T(a_n)\\ &amp;= T(\alpha_1a_1+\cdots+\alpha_na_n)\\ &amp;= T(v).\end{align*}$$ Thus, $U=T$. <strong>QED</strong></p>
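<p>The Vandermonde step in Lemma 2 above can be illustrated concretely: for distinct $c_i$ the determinant $\prod\limits_{1\leq i\lt j\leq m}(c_j-c_i)$ is nonzero, forcing $\alpha_1=\cdots=\alpha_m=0$. A small sketch in Python (exact arithmetic over $\mathbb{Q}$; an illustration only, not part of the proof, and the helper names are mine):</p>

```python
from fractions import Fraction

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    n, d = len(M), Fraction(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            d = -d
        d *= M[col][col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
    return d

cs = [Fraction(c) for c in (2, 3, 5, 7)]            # distinct nonzero scalars c_i
V = [[c**i for c in cs] for i in range(len(cs))]    # row i holds f_c(eps_i) = c^i

# Product formula: prod_{i<j} (c_j - c_i)
prod = Fraction(1)
for j in range(len(cs)):
    for i in range(j):
        prod *= cs[j] - cs[i]

print(det(V), prod)   # 240 240 -- nonzero, so the f_c are linearly independent
```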
<p>The "this guy" you're looking for is just the function that takes each of your basis vectors and sends them to 1.</p> <p>Note that this is <em>not</em> in the span of the set of functions that each take a single basis vector to 1, and all others to 0, because the span is defined to be the set of <em>finite linear combinations</em> of basis vectors. And a finite linear combination of things that have finite support will still have finite support, and thus can't send infinitely many independent vectors all to 1.</p> <p>You may want to say, "But look! If I add up these infinitely many functions, I clearly get a function that sends all my basis vectors to 1!" But this is actually a very tricky process. What you need is a notion of <em>convergence</em> if you want to add infinitely many things, which isn't always obvious how to define.</p> <p>In the end, it boils down to a cardinality issue - not of the vector spaces themselves, but of the dimensions. In the example you give, $\mathbb{R}^{&lt;\omega}$ has countably infinite dimension, but the dimension of its dual is uncountable.</p> <p>(Added, in response to comment below): Think of all the possible ways you can have a function which is 1 on some set of your basis vectors and 0 on the rest. The only way you can do this and stay in the span of your basis vectors is if you take the value 1 on only <em>finitely many</em> of those vectors. Since your starting space was infinite-dimensional, there's an uncountable number of such functions, and so uncountably many of them lie outside the span of your basis. You can only ever incorporate finitely many of them by "adding" them in one at a time (or even countably many at a time), so you'll never establish the vector-space isomorphism you're looking for.</p>
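<p>A toy model of this answer in code (Python; the representation and names are mine): vectors of $V$ are finite-support dictionaries over the basis indices, a functional is built from an <em>arbitrary</em> function on the basis, and the "all ones" functional visibly escapes the span of the coordinate functionals:</p>

```python
def delta(i):
    """The basis vector eps_i of V, as a finite-support dict."""
    return {i: 1}

def functional(g):
    """Extend g : basis index -> scalar linearly to all of V.
    The sum is finite because vectors of V have finite support."""
    return lambda v: sum(coeff * g(i) for i, coeff in v.items())

ones = functional(lambda i: 1)          # sends every eps_i to 1

# Any finite linear combination of coordinate functionals eps_i* is nonzero
# on only finitely many basis vectors -- e.g. one supported on i < 100:
finite_combo = functional(lambda i: 1 if i < 100 else 0)

print(ones(delta(100)), finite_combo(delta(100)))   # 1 0
```

<p>No matter how large the finite support is chosen, some basis vector witnesses the difference, which is exactly the "this guy" argument.</p>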
logic
<p>Second order logic is a language, but, is it a logic?</p> <p>My understanding is that a logic (or "logical system") is an ordered pair; it is a formal system together with a semantics. However, the language of second-order logic is associated with a variety of inequivalent formal systems and a variety of semantics. So, is it a logic?</p> <p>I think not. Supposing not, what do sentences like the following (<a href="https://en.wikipedia.org/wiki/G%C3%B6del%27s_completeness_theorem#Completeness_in_other_logics">copied</a> from wikipedia) even mean?</p> <blockquote> <p>Second-order logic, for example, does not have a completeness theorem for its standard semantics (but does have the completeness property for Henkin semantics).</p> </blockquote>
<p>This is a very interesting philosophical question! There are philosophers on both sides of the issue. A prominent figure who spoke against SOL as logic was W.V.O. Quine. His argument mostly focuses on the point that SOL, which quantifies over sets explicitly, seems to just be "set theory in disguise", and hence <em>not</em> logic proper (see his <em>Philosophy of Logic</em> for more details). People have also put forth the argument you presented, viz. any putative logic must have a sound and complete deduction system in order to be a genuine logic, and since SOL doesn't have one, it follows that it can't be a logic. </p> <p>On the other side, a prominent defender of SOL as logic proper is George Boolos. See his "On Second-Order Logic" and his "To Be is to Be the Value of a Variable (Or to Be Some Values of Some Variables)". One argument put forward in favor of SOL from a practical standpoint is that it has a way of really shortening proofs (see "A Curious Inference"). Boolos also suggests that we could use monadic SOL to model talk of plurals, i.e. to model sentences like "Some critics only admire one another". Another great defender of SOL is Stewart Shapiro, whose book <em>Foundations Without Foundationalism</em> not only presents a wonderful technical treatment of SOL but also some good philosophical defenses of it. In particular, Shapiro argues that SOL is needed for everyday mathematical discourse. I would <em>highly</em> recommend this book as an introduction to the subject, both mathematically and philosophically.</p> <p>The status of completeness theorems in this debate is fascinating, but difficult to pin down. After all, many philosophers might just say "Look, so what if SOL is incomplete? That just means that the set of <em>validities</em> doesn't coincide with the set of <em>provable sentences</em>, i.e. there are some sentences which are logically true, but not provable. And while this is unfortunate, why shouldn't we expect this? 
Why shouldn't we just accept that logic didn't turn out the way we hoped?" On the other hand, many philosophers might say "Look, logic is about <em>making inferences</em>. If there are truths which don't come from some deduction or inference, then how could they be <em>truths of logic</em>?" </p> <p>I personally don't find the view that genuine logics must have complete proof systems convincing (or at least convincing enough), but I certainly do feel the pull to search for/work with logics with complete proof systems. Ultimately, it comes down to what you think "logic" and "logical consequence" are, which are highly contentious matters in the philosophical literature, and which have no widely accepted answers. </p>
<p>Second-order logic has a collection of deduction rules. From a collection of statements, we can apply the deduction rules to derive new statements. This is called "syntactic entailment", or more simply, "formal proof".</p> <p>Standard semantics talks about set-theoretic interpretations with the property that power types are interpreted as power sets. e.g. if $X$ is the interpretation of the domain of objects, then $\mathcal{P}(X)$ (or equivalently, $\{T,F\}^X$) is the interpretation of the domain of unary predicates on objects.</p> <p>Given a collection of statements, we can consider set-theoretic models of those statements: interpretations in which those statements are true. A statement is "semantically entailed" by the collection if it is true in every such model (where one reasons about such models using <em>the laws of ZFC</em>).</p> <p>The failure of completeness means that syntactic entailment and semantic entailment (with standard semantics) do not coincide for second-order logic. (They do coincide for first-order logic!)</p> <hr> <p>Incidentally, <em>first-order logic</em> also has variations. e.g. on the syntactic side, one can generalize to <a href="http://en.wikipedia.org/wiki/Free_logic">"free logic"</a> (which, in some settings is what people mean when they say "first-order logic"), and people often consider various restricted fragments of it such as "regular logic" or "algebraic theories". On the semantic side, we can consider interpretations in terms of sheaves or objects in categories rather than in sets.</p>
logic
<p>I am doing some homework exercises and stumbled upon this question. I don't know where to start. </p> <blockquote> <p>Prove that the union of countably many countable sets is countable.</p> </blockquote> <p>Just reading it confuses me. </p> <p>Any hints or help is greatly appreciated! Cheers!</p>
<p>Let's start with a quick review of "countable". A set is countable if we can set up a 1-1 correspondence between the set and the natural numbers. As an example, let's take <span class="math-container">$\mathbb{Z}$</span>, which consists of all the integers. Is <span class="math-container">$\mathbb Z$</span> countable?</p> <p>It may seem uncountable if you pick a naive correspondence, say <span class="math-container">$1 \mapsto 1$</span>, <span class="math-container">$2 \mapsto 2 ...$</span>, which leaves all of the negative numbers unmapped. But if we organize the integers like this:</p> <p><span class="math-container">$$0$$</span> <span class="math-container">$$1, -1$$</span> <span class="math-container">$$2, -2$$</span> <span class="math-container">$$3, -3$$</span> <span class="math-container">$$...$$</span></p> <p>We quickly see that there is a map that works. Map 1 to 0, 2 to 1, 3 to -1, 4 to 2, 5 to -2, etc. So given an element <span class="math-container">$x$</span> in <span class="math-container">$\mathbb Z$</span>, we either have that <span class="math-container">$1 \mapsto x$</span> if <span class="math-container">$x=0$</span>, <span class="math-container">$2x \mapsto x$</span> if <span class="math-container">$x &gt; 0$</span>, or <span class="math-container">$2|x|+1 \mapsto x$</span> if <span class="math-container">$x &lt; 0$</span>. So the integers are countable.</p> <p>We proved this by finding a map between the integers and the natural numbers. So to show that the union of countably many sets is countable, we need to find a similar mapping. First, let's unpack "the union of countably many countable sets is countable":</p> <ol> <li><p>"countable sets" pretty simple. 
If <span class="math-container">$S$</span> is in our set of sets, there's a 1-1 correspondence between elements of <span class="math-container">$S$</span> and <span class="math-container">$\mathbb N$</span>.</p></li> <li><p>"countably many countable sets" we have a 1-1 correspondence between <span class="math-container">$\mathbb N$</span> and the sets themselves. In other words, we can write the sets as <span class="math-container">$S_1$</span>, <span class="math-container">$S_2$</span>, <span class="math-container">$S_3$</span>... Let's call the set of sets <span class="math-container">$\{S_n\}, n \in \mathbb N$</span>.</p></li> <li><p>"union of countably many countable sets is countable". There is a 1-1 mapping between the elements in <span class="math-container">$\mathbb N$</span> and the elements in <span class="math-container">$S_1 \cup S_2 \cup S_3 ...$</span></p></li> </ol> <p>So how do we prove this? We need to find a correspondence, of course. Fortunately, there's a simple way to do this. Let <span class="math-container">$s_{nm}$</span> be the <span class="math-container">$mth$</span> element of <span class="math-container">$S_n$</span>. We can do this because <span class="math-container">$S_n$</span> is by definition of the problem countable. We can write the elements of ALL the sets like this:</p> <p><span class="math-container">$$s_{11}, s_{12}, s_{13} ...$$</span> <span class="math-container">$$s_{21}, s_{22}, s_{23} ...$$</span> <span class="math-container">$$s_{31}, s_{32}, s_{33} ...$$</span> <span class="math-container">$$...$$</span></p> <p>Now let <span class="math-container">$1 \mapsto s_{11}$</span>, <span class="math-container">$2 \mapsto s_{12}$</span>, <span class="math-container">$3 \mapsto s_{21}$</span>, <span class="math-container">$4 \mapsto s_{13}$</span>, etc. You might notice that if we cross out every element that we've mapped, we're crossing them out in diagonal lines. 
With <span class="math-container">$1$</span> we cross out the first diagonal, <span class="math-container">$2-3$</span> we cross out the second diagonal, <span class="math-container">$4-6$</span> the third diagonal, <span class="math-container">$7-10$</span> the fourth diagonal, etc. The <span class="math-container">$n$</span>th diagonal requires us to map <span class="math-container">$n$</span> elements to cross it out. Since we never "run out" of elements in <span class="math-container">$\mathbb N$</span>, eventually given any diagonal we'll create a map to every element in it. Since obviously every element in <span class="math-container">$S_1 \cup S_2 \cup S_3 ...$</span> is in one of the diagonals, we've created a 1-1 map between <span class="math-container">$\mathbb N$</span> and the union <span class="math-container">$S_1 \cup S_2 \cup S_3 ...$</span>.</p> <p>Let's extend this one step further. What if we made <span class="math-container">$s_{11} = 1/1$</span>, <span class="math-container">$s_{12} = 1/2$</span>, <span class="math-container">$s_{21} = 2/1$</span>, etc? Then <span class="math-container">$S_1 \cup S_2 \cup S_3 ... = \mathbb Q^+$</span>! This is how you prove that the rationals are countable. Well, the positive rationals anyway. Can you extend these proofs to show that the rationals are countable?</p>
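The two correspondences in this answer (the zig-zag map from the naturals to the integers, and the diagonal enumeration) can be written out and checked mechanically. A minimal Python sketch, where the pair (n, m) stands in for the element s_nm:

```python
from itertools import islice

def nat_to_int(k):
    """The bijection from the answer: 1 -> 0, 2x -> x, 2|x|+1 -> x (for x < 0)."""
    if k == 1:
        return 0
    return k // 2 if k % 2 == 0 else -(k // 2)

def diagonals():
    """Enumerate pairs (n, m) with n, m >= 1 one diagonal at a time,
    in the order 1 -> s_11, 2 -> s_12, 3 -> s_21, 4 -> s_13, ..."""
    d = 2  # n + m is constant along each diagonal
    while True:
        for n in range(1, d):
            yield (n, d - n)
        d += 1

print([nat_to_int(k) for k in range(1, 8)])   # [0, 1, -1, 2, -2, 3, -3]
first = list(islice(diagonals(), 6))
print(first)  # [(1, 1), (1, 2), (2, 1), (1, 3), (2, 2), (3, 1)]
```

Replacing the pair (n, m) by the fraction n/m gives exactly the enumeration of the positive rationals mentioned at the end (with duplicates such as 1/1 and 2/2 to be skipped).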
<p>@Hovercouch's answer is correct, but the presentation hides a really rather important point that you ought probably to know about. Here it is:</p> <blockquote> <p>The argument depends on accepting (a weak version of) the Axiom of Choice!</p> </blockquote> <p>Why so?</p> <p>You are only given that each $S_i$ is <em>countable</em>. You aren't given up front a way of <em>counting</em> any particular $S_i$, so you need to choose a surjective function $f_i\colon \mathbb{N} \to S_i$ to do the counting (in @Hovercouch's notation, $f_m(n) = s_{mn}$). And, crucially, you need to choose such an $f_i$ countably many times (a choice for each $i$). </p> <p>That's an infinite sequence of choices to make: and it's a version of the highly non-trivial Axiom of Choice that says, yep, it's legitimate to pretend we can do that. </p>
matrices
<p>What is the importance of the rank of a matrix? I know that the rank of a matrix is the number of linearly independent rows or columns (whichever is smaller). </p> <p>Why is it a problem if a matrix is rank deficient?</p> <p>Also, why is the smaller value between row and column the rank? </p> <p>An intuitive or descriptive answer (also in terms of geometry) would help a lot.</p>
<p>The rank of a matrix is probably the most important concept you learn in matrix algebra. There are two ways to look at it: one from a theoretical setting and the other from an applied setting.</p> <p>From a theoretical setting, if we say that a linear operator has a rank $p$, it means that the range of the linear operator is a $p$-dimensional space. From a matrix algebra point of view, column rank denotes the number of independent columns of a matrix while row rank denotes the number of independent rows of a matrix. An interesting, and I think non-obvious (though the proof is not hard), fact is that the row rank is the same as the column rank. When we say a matrix $A \in \mathbb{R}^{n \times n}$ has rank $p$, what it means is that if we take all vectors $x \in \mathbb{R}^{n \times 1}$, then $Ax$ spans a $p$-dimensional subspace. Let us see this in a 2D setting. For instance, if </p> <p>$A = \left( \begin{array}{cc} 1 &amp; 2 \\ 2 &amp; 4 \end{array} \right) \in \mathbb{R}^{2 \times 2}$ and let $x = \left( \begin{array}{c} x_1 \\ x_2 \end{array} \right) \in \mathbb{R}^{2 \times 1}$, then $\left( \begin{array}{c} y_1 \\ y_2 \end{array} \right) = y = Ax = \left( \begin{array}{c} x_1 + 2x_2 \\ 2x_1 + 4x_2 \end{array} \right)$.</p> <p>The rank of the matrix $A$ is $1$, and we find that $y_2 = 2y_1$, which is nothing but a line passing through the origin in the plane.</p> <p>What has happened is the points $(x_1,x_2)$ on the $x_1 - x_2$ plane have all been mapped onto the line $y_2 = 2y_1$. Looking closely, the points in the $x_1 - x_2$ plane along the line $x_1 + 2x_2 = c = \text{const}$, have all been mapped onto a single point $(c,2c)$ in the $y_1 - y_2$ plane. So the single point $(c,2c)$ on the $y_1 - y_2$ plane represents a straight line $x_1 + 2x_2 = c$ in the $x_1 - x_2$ plane.</p> <p>This is the reason why you cannot solve a linear system when it is rank deficient. 
The rank-deficient matrix $A$ maps $x$ to $y$ and this transformation is neither onto (points in the $y_1 - y_2$ plane not on the line $y_2 = 2y_1$, e.g. $(2,3)$, are never reached, which results in no solutions) nor one-to-one (every point $(c,2c)$ on the line $y_2 = 2y_1$ corresponds to the line $x_1 + 2x_2 =c$ in the $x_1 - x_2$ plane, which results in infinite solutions).</p> <p>An observation you can make here is that the product of the slopes of the line $x_1 + 2x_2 = c$ and $y_2 = 2y_1$ is $-1$. This is true in general for higher dimensions as well.</p> <p>From an applied setting, rank of a matrix denotes the <strong>information content</strong> of the matrix. The lower the rank, the lower is the "information content". For instance, a rank $1$ matrix can be written as a product of a column vector times a row vector, i.e. if $u$ and $v$ are column vectors, the matrix $uv^T$ is a rank one matrix. So all we need to represent the matrix are $2n-1$ numbers. In general, if we know that a matrix $A \in \mathbb{R}^{m \times n}$ is of rank $p$, then we can write $A$ as $U V^T$ where $U \in \mathbb{R}^{m \times p}$ and is of rank $p$ and $V \in \mathbb{R}^{n \times p}$ and is of rank $p$. So if we know that a matrix $A$ is of rank $p$, we need only $(m+n)p-p^2$ numbers to represent it. So if we know that a matrix is of low rank, then we can compress and store the matrix and can do efficient matrix operations using it. The above ideas can be extended for any linear operator and these in fact form the basis for various compression techniques. 
You might also want to look up <a href="http://en.wikipedia.org/wiki/Singular_value_decomposition">Singular Value Decomposition</a> which gives us a nice (though expensive) way to make low rank approximations of a matrix which allows for compression.</p> <p>From solving a linear system point of view, when the square matrix is rank deficient, it means that we do not have complete information about the system, ergo we cannot solve the system.</p>
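To make the $2 \times 2$ example above concrete, here is a small pure-Python sketch (no linear-algebra library assumed) that computes rank by row reduction; in floating-point practice one would use an SVD-based numerical rank instead:

```python
# Minimal sketch: compute the rank of a matrix by Gaussian elimination,
# and check it on the rank-deficient example A = [[1, 2], [2, 4]] above.

def rank(matrix, tol=1e-12):
    """Row-reduce a copy of `matrix` and count the pivot rows."""
    m = [row[:] for row in matrix]
    rows, cols = len(m), len(m[0])
    r = 0  # current pivot row
    for c in range(cols):
        # find a row at or below r with a nonzero entry in column c
        pivot = next((i for i in range(r, rows) if abs(m[i][c]) > tol), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        # eliminate column c from all other rows
        for i in range(rows):
            if i != r:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

print(rank([[1, 2], [2, 4]]))   # 1: the columns are proportional
print(rank([[1, 0], [0, 1]]))   # 2: the identity has full rank
```

The rank-1 result for the example matrix is exactly why the map $x \mapsto Ax$ collapses the plane onto the single line $y_2 = 2y_1$.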
<p>The rank of a matrix is of <em>major</em> importance. It is closely connected to the nullity of the matrix (which is the dimension of the solution space of the equation $A\mathbf{x}=\mathbf{0}$), via the Dimension Theorem:</p> <p><strong>Dimension Theorem.</strong> Let $A$ be an $m\times n$ matrix. Then $\mathrm{rank}(A)+\mathrm{nullity}(A) = n$.</p> <p>Even if all you know about matrices is that they can be used to solve systems of linear equations, this tells you that the rank is <em>very</em> important, because it tells you whether $A\mathbf{x}=\mathbf{0}$ has a single solution or multiple solutions.</p> <p>When you think of matrices as being linear transformations (there is a correspondence between $m\times n$ matrices with coefficients in a field $\mathbf{F}$, and the linear transformations between a given vector space over $\mathbf{F}$ of dimension $n$ with a given basis, and a vector space of dimension $m$ with a given basis), then the rank of the matrix is the dimension of the image of that linear transformation.</p> <p>The simplest way of computing the Jordan Canonical Form of a matrix (an important way of representing a matrix) is to use the ranks of certain matrices associated to $A$; the same is true for the Rational Canonical Form.</p> <p>Really, the rank just shows all over the place, it is usually relatively easy to compute, and has a lot of applications and important properties. They will likely not be completely apparent until you start seeing the myriad applications of matrices to things like vector calculus, linear algebra, and the like, but trust me, they're there.</p>
probability
<p>You have a six-sided die. You keep a cumulative total of your dice rolls. (E.g. if you roll a 3, then a 5, then a 2, your cumulative total is 10.) If your cumulative total is ever equal to a perfect square, then you lose, and you go home with nothing. Otherwise, you can choose to go home with a payout of your cumulative total, or to roll the die again.</p> <p>My question is about the optimal strategy for this game. In particular, this means that I am looking for an answer to this question: if my cumulative total is $n$, do I choose to roll or not to roll in order to maximize my cumulative total? Is there some integer $N$ after which the answer to this question is always to roll?</p> <p>I think that there is such an integer, and I conjecture that this integer is $4$. My reasoning is that the square numbers become sufficiently sparse for the expected value to always be increased by rolling the die again.</p> <p>As an example, suppose your cumulative total is $35$. Rolling a $1$ and hitting 36 means we go home with nothing, so the expected value of rolling once is:</p> <p>$$E(Roll|35) = \frac 0 6 + \frac {37} 6 + \frac {38} 6 + \frac{39} 6 + \frac {40} {6} + \frac{41}{6} = 32.5$$</p> <p>i.e.</p> <p>$$E(Roll|35) = \frac 1 6 \cdot (37 + 38 + 39 + 40 + 41) = 32.5$$</p> <p>But the next square after $35$ is $49$. So in the event that we don't roll a $36$, we get to keep rolling the die at no risk as long as the cumulative total is at most $42$. For the sake of simplification, let's say that if we roll and don't hit $36$, then we will roll once more. That die-roll has an expected value of $3.5$. This means the expected value of rolling on $35$ is:</p> <p>$$E(Roll|35) = \frac 1 6 \cdot (40.5 + 41.5 + 42.5 + 43.5 + 44.5) = 35.42$$</p> <p>And since $35.42 &gt; 35$, the profit-maximizing choice is to roll again. And this strategy can be applied for every total. 
I don't see when this would cease to be the reasonable move, though I haven't attempted to verify it computationally. I intuitively think about this in terms of diverging sequences.</p> <p>I recently had this question in a job interview, and thought it was quite interesting. (And counter-intuitive, since this profit-maximizing strategy invariably results in going home with nothing.)</p>
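The two expected-value computations in the question are easy to check mechanically; a quick sketch:

```python
# Rolling once at 35 and then stopping: 36 busts, 37..41 are kept.
ev_once = (0 + 37 + 38 + 39 + 40 + 41) / 6

# Rolling at 35 and, when 36 is avoided, taking one more risk-free roll
# (worth 3.5 on average, since the reachable totals 38..47 contain no square):
ev_twice = sum(v + 3.5 for v in (37, 38, 39, 40, 41)) / 6

print(ev_once)             # 32.5
print(round(ev_twice, 2))  # 35.42
```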
<p><em>How to use your observation in general</em>:</p> <p>Just checking things for $35$ isn't indicative of the general case, which for large values is different. Take your argument and use a general square $n^2$.</p> <p>Suppose you're at $n^2-1$. You can leave with $n^2-1$, or you can roll. On a roll, you lose everything with probability $\frac{1}{6}$. With probability $\frac{5}{6}$ you'll get at least $(n+1)^2-6 = n^2 + 2n - 5$ by rolling until it isn't safe to any more. So, for a simple lower bound, we want to know if $\frac{5}{6}(n^2 + 2n - 5)$ is greater than $n^2-1$. For large $n$, this is not the case, and we can actually get an upper bound by similar reasoning:</p> <hr> <p><em>An upper bound</em>:</p> <p>[A tidied up version of the original follows. Keeping the original upper bound would have required a few extra lines of logic, so I've just upped it slightly to keep it brief.]</p> <p>An upper bound: Suppose we're between $n^2-5$ and $n^2-1$. If we roll, the best things could go for us would be to lose $\frac{1}{6}$ the time, and, when we don't lose, we get in the range $(n+1)^2-6$ to $(n+1)^2-1$ (the highest we could get without taking another risk). Just comparing current valuation, you're trading at least $n^2-5$ for at most $\frac{5}{6}(n^2 + 2n)$ by the time you get to the next decision. The difference is $-\frac{1}{6}n^2 + \frac{5}{3}n + 5$. For $n \geq 13$ this is negative. So if you're at $13^2-k$ for $1 \leq k \leq 6$, don't roll. (Higher $n$ making this difference even more negative gives this conclusion. See next paragraph for details.)</p> <p>Extra details for the logic giving this upper bound: Let $W_n$ be the current expected winnings of your strategy for the first time you are within $6$ of $n^2$, also including times you busted or stopped on your own at a lower value. The above shows that, if your strategy ever includes rolling when within $6$ of $n^2$ for $n \geq 13$, then $W_{n+1}$ is less than $W_n$. 
Therefore there's no worry about anything increasing without bound, and you should indeed never go above $168$.</p> <hr> <p><em>An easy lower bound (small numbers)</em>:</p> <p>Low values: For $n = 3$ we have $(5+6+7+8)/6 &gt; 3$ so roll at $n = 3$. Except for the case $n = 3$, if we roll at $n$, we'll certainly get at least $\frac{5}{6}(n+3)$. So if $\frac{5}{6}(n+3) - n &gt; 0$, roll again. This tells us to roll again for $n &lt; 15$ at the least. (At $n = 15$ itself the only square in reach is $16$, so conditional on surviving, the average landing is $15 + 4$, and $\frac{5}{6}(15+4) &gt; 15$: roll there too.) Since we shouldn't stop if we can't bust, this tells us to roll again for $n \leq 18$.</p> <hr> <p><em>An algorithm and reported results for the general case</em>:</p> <p>Obtaining a general solution: Start with expected values of $n$ and a decision of "stay" assigned to $n$ for $163 \leq n \leq 168$. Then work backwards to obtain EV and decisions at each smaller $n$. From this, you'll see where to hit/stay and what the set of reachable values with the optimal strategy is.</p> <p>A quick script I wrote outputs the following: Stay at 30, 31, 43, 44, 45, 58, 59, 60, 61, 62, and 75+. You'll never exceed 80. The overall EV of the game is roughly 7.2. (Standard disclaimer that you should program it yourself and check.)</p>
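The backward-induction procedure just described is short enough to write out in full. A sketch of such a script (my own, not necessarily the answerer's): the cap of 168 comes from the upper-bound argument, totals 163–168 are terminal "stay" states, and landing on a square pays 0.

```python
# Backward induction for the die game.  V[n] = expected payout under
# optimal play from cumulative total n; squares pay 0, and the upper
# bound argument shows "stay" is optimal at 163..168 (just below 13^2).

SQUARES = {k * k for k in range(1, 14)}
TOP = 168

V = [0.0] * (TOP + 1)
decision = {}
for n in range(TOP, 0, -1):
    if n in SQUARES:
        V[n] = 0.0                      # busted: you go home with nothing
    elif n >= TOP - 5:
        V[n] = float(n)                 # proven above: stay at 163..168
        decision[n] = "stay"
    else:
        roll_ev = sum(V[n + k] for k in range(1, 7)) / 6
        V[n] = max(float(n), roll_ev)
        decision[n] = "roll" if roll_ev > n else "stay"

game_ev = sum(V[k] for k in range(1, 7)) / 6    # from 0 you must roll
stay_at = [n for n in sorted(decision) if decision[n] == "stay" and n < 100]
print(round(game_ev, 2), stay_at)
```

Its reported stay-set and overall EV can then be compared against the numbers quoted above.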
<p>If you keep rolling the die forever, you will hit a perfect square with probability 1. Intuitively, every time you get close to a square (within distance 6), you have a 1/6 chance to hit the square. This happens infinitely many times, and so you're bound to hit a square eventually.</p> <p>Slightly more formally, suppose that you roll the die infinitely often whatever happens. Your <em>trajectory</em> (sequence of partial sums) has the property that the difference between adjacent points is between $1$ and $6$. In particular, for each number $N$, there will be a point $x$ in the trajectory such that $N-6 \leq x &lt; N$. If $x$ is the first such point, then you have a chance of $1/6$ to hit $N$ <em>as your next point</em>. Furthermore, if $N_2 &gt; N_1+6$, then these events are independent. So your probability of hitting either $N_1$ or $N_2$ at the first shot once "in range" is $1-(5/6)^2$. The same argument works for any finite number of separated points, and given infinitely many points, no matter how distant, we conclude that you hit one of them almost surely. </p>
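A quick simulation makes the "probability 1" claim tangible; a sketch (the cap of one million and the trial count are arbitrary choices of mine, and by the argument above the chance of surviving past hundreds of squares is astronomically small):

```python
import math
import random

def hits_square(cap=1_000_000):
    """Roll forever; report whether the running total ever hits a square.
    The cap only bounds the simulation: each square, once in range, is
    hit with chance at least 1/6, so surviving to the cap is essentially
    impossible."""
    total = 0
    while total < cap:
        total += random.randint(1, 6)
        if math.isqrt(total) ** 2 == total:  # total is a perfect square
            return True
    return False

random.seed(0)
trials = 200
hits = sum(hits_square() for _ in range(trials))
print(hits, "of", trials, "trajectories ended on a perfect square")
```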
logic
<p>I am comfortable with the different sizes of infinities and Cantor's "diagonal argument" to prove that the set of all subsets of an infinite set has cardinality strictly greater than the set itself. So if we have a set $\Omega$ and $|\Omega| = \aleph_i$, then (assuming continuum hypothesis) the cardinality of $2^{\Omega}$ is $|2^{\Omega}| = \aleph_{i+1} &gt; \aleph_i$ and we have $\aleph_{i+1} = 2^{\aleph_i}$.</p> <p>I am fine with these argument.</p> <p>What I don't understand is why should there be a smallest $\infty$? Is there a proof (or) an axiom that there exists a smallest $\infty$ and that this is what we address as "countably infinite"? or to rephrase the question "why can't I find a set whose power set gives us $\mathbb{N}$"?</p> <p>The reason why I am asking this question is in "some sense" $\aleph_i = \log_2(\aleph_{i+1})$. I do not completely understand why this process should stop when $\aleph_i = \aleph_0$.</p> <p>(Though coming to think about it I can feel that if I take an infinite set with $\log_2 \aleph_0$ elements I can still put it in one-to-one correspondence with the Natural number set. So Is $\log_2 \aleph_0 = \aleph_0$ (or) am I just confused? If $n \rightarrow \infty$, then $2^n \rightarrow \infty$ faster while $\log (n) \rightarrow \infty$ slower and $\log (\log (n)) \rightarrow \infty$ even slower and $\log(\log(\log (n))) \rightarrow \infty$ even "more" slower and so on).</p> <p>Is there a clean (and relatively elementary) way to explain this to me?</p> <p>(I am totally fine if you direct me to some paper (or) webpage. I tried googling but could not find an answer to my question)</p>
<p>First, let me clear up a misunderstanding.</p> <p>Question: Does $2^\omega = \aleph_1$? More generally, does $2^{\aleph_\alpha} = \aleph_{\alpha+1}$?</p> <p>The answer of "yes" to the first question is known as the continuum hypothesis (CH), while an answer of "yes" to the second is known as the generalized continuum hypothesis (GCH).</p> <p>Answer: Both are undecidable using ZFC. That is, Gödel proved that if you assume the answer to CH is "yes", then you don't add any new contradictions to set theory. In particular, this means it's impossible to prove the answer is "no".</p> <p>Later, Cohen showed that if you assume the answer is "no", then you don't add any new contradictions to set theory. In particular, this means it's impossible to prove the answer is "yes".</p> <p>The answer for GCH is the same.</p> <p>All of this is just to say that while you are allowed to assume an answer of "yes" to GCH (which is what you did in your post), there is no way you can prove that you are correct.</p> <p>With that out of the way, let me address your actual question.</p> <p>Yes, there is a proof that $\omega$ is the smallest infinite cardinality. It all goes back to some very precise definitions. In short, when one does set theory, all one really has to work with is the "is a member of" relation $\in$. One defines an "ordinal number" to be any transitive set $X$ such that $(X,\in)$ is a well ordered set. (Here, "transitive" means "every element is a subset". It's a weird condition which basically means that $\in$ is a transitive relation). For example, if $X = \emptyset$ or $X=\{\emptyset\}$ or $X = \{\{\emptyset\},\emptyset\}$, then $X$ is an ordinal. However, if $X=\{\{\emptyset\}\}$, then $X$ is not.</p> <p>There are two important facts about ordinals. First, <strong>every</strong> well ordered set is (order) isomorphic to a unique ordinal. 
Second, for any two ordinals $\alpha$ and $\beta$, precisely one of the following holds: $\alpha\in\beta$ or $\beta\in\alpha$ or $\beta = \alpha$. In fact, it turns out that the collection of ordinals is well ordered by $\in$, modulo the detail that there is no set of ordinals.</p> <p>Now, cardinal numbers are simply special kinds of ordinal numbers. They are ordinal numbers which can't be bijected (in, perhaps NOT an order preserving way) with any smaller ordinal. It follows from this that the collection of all cardinal numbers is also well ordered. Hence, as long as there is one cardinal with a given property, there will be a least one. One example of such a property is "is infinite".</p> <p>Finally, let me just point out that for <strong>finite</strong> numbers (i.e. finite natural numbers), one usually cannot find a solution $m$ to $2^m = n$. Thus, at least from an inductive reasoning point of view, it's not surprising that there are infinite cardinals which can't be written as $2^k$ for some cardinal $k$.</p>
<p>Suppose $A$ is an infinite set. In particular, it is not empty, so there exists an $x_1\in A$. Now $A$ is infinite, so it is not $\{x_1\}$, so there exists an $x_2\in A$ such that $x_2\neq x_1$. Now $A$ is infinite, so it is not $\{x_1,x_2\}$, so there exists an $x_3\in A$ such that $x_3\neq x_1$ and $x_3\neq x_2$. Now $A$ is infinite, so it is not $\{x_1,x_2,x_3\}$, so there exists an $x_4\in A$ such that $x_4\neq x_1$, $x_4\neq x_2$ and $x_4\neq x_3$... And so on.</p> <p>This way you can construct a sequence $(x_n)_{n\geq1}$ of elements of $A$ such that $x_i\neq x_j$ if $i\neq j$. If you define a function $f:\mathbb N\to A$ so that $f(i)=x_i$ for all $i\in\mathbb N$, then $f$ is injective.</p> <p>By definition, then, the cardinal of $A$ is greater than or equal to that of $\mathbb N$.</p> <p>Since $\mathbb N$ is itself infinite, this shows that the smallest infinite cardinal is $\aleph_0=|\mathbb N|$.</p>