geometry
<p>Peter Taylor pointed out at <a href="https://matheducators.stackexchange.com/a/14228/511">MathEduc</a> that some <a href="https://en.wikipedia.org/wiki/Bermudian_dollar" rel="noreferrer">BD</a>$1 coins from 1997 are <a href="https://en.wikipedia.org/wiki/Reuleaux_triangle" rel="noreferrer">Reuleaux triangles</a>: <hr /> &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <a href="https://i.sstatic.net/9u3cs.jpg" rel="noreferrer"><img src="https://i.sstatic.net/9u3cs.jpg" alt="Coin"></a> <br /> &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <sup> (Image from <a href="https://de.ucoin.net/coin/bermuda-1-dollar-1997/?tid=50025" rel="noreferrer">de.ucoin.net</a>.) </sup> <hr /> Does anyone know why they were shaped this way? Was there some pragmatic reason connected to its constant-width property? Or was it just a design/aesthetic decision?</p>
<blockquote> <p>The <a href="https://en.wikipedia.org/wiki/Blaschke%E2%80%93Lebesgue_theorem" rel="noreferrer">Blaschke-Lebesgue Theorem</a> states that among all planar convex domains of given constant width $B$ the Reuleaux triangle has minimal area.$^\dagger$</p> </blockquote> <p>The area of the <a href="https://en.wikipedia.org/wiki/Reuleaux_triangle" rel="noreferrer">Reuleaux triangle</a> of unit width is $\frac{\pi - \sqrt{3}}{2} \approx 0.705$, which is approximately $90\%$ of the area of the disk of unit diameter. Therefore, if one needs to mint (convex) coins of a given constant width and thickness, using Reuleaux triangles allows one to use approximately $10\%$ less metal.</p> <hr> <p>$\dagger$ Evans M. Harrell, <a href="https://arxiv.org/abs/math/0009137" rel="noreferrer">A direct proof of a theorem of Blaschke and Lebesgue</a>, September 2000. </p>
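The area figures above can be checked numerically; this is a small sketch using the closed-form expressions quoted in the answer:

```python
import math

width = 1.0  # the constant width of the coin

# Area of the Reuleaux triangle of width w: (pi - sqrt(3)) / 2 * w^2
reuleaux_area = (math.pi - math.sqrt(3)) / 2 * width**2

# Area of the disk of diameter w: pi * (w/2)^2
disk_area = math.pi * (width / 2) ** 2

print(f"Reuleaux area: {reuleaux_area:.4f}")              # 0.7048
print(f"Disk area:     {disk_area:.4f}")                  # 0.7854
print(f"Ratio:         {reuleaux_area / disk_area:.3f}")  # 0.897, i.e. ~10% less metal
```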
<p>The reason to not have a circle is likely purely aesthetic.</p> <p>Once you've decided to mint a coin which isn't a circle, it's pretty important to still make one of constant width so it doesn't get stuck in machines. Also, many machines use the width of the coins to sort them (see for instance <a href="https://www.youtube.com/watch?v=CasXSwXbm2Y" rel="noreferrer">this youtube video</a> showing one in action). That's a lot easier to do if you only have a single width for each coin instead of a range.</p>
logic
<p>I am just a high school student, and I haven't seen much of mathematics (calculus and abstract algebra).</p> <p>Mathematics is a system of axioms which you choose yourself for a set of undefined entities, such that those entities satisfy certain basic rules you laid down in the first place on your own.</p> <p>Now, using these laid-down rules and a set of other rules for a subject called logic which was established similarly, you define certain quantities, name them using the undefined entities, and then go on to prove certain statements called theorems.</p> <p>Now what is a proof exactly? Suppose in an exam I am asked to prove Pythagoras' theorem. Then I prove it using only one particular system of axioms and logic. It isn't proved in all the axiom systems in which it could possibly hold true, and what stops me from making another set of axioms that has Pythagoras' theorem as an axiom, and then just stating in my system/exam, &quot;this is an axiom, hence it can't be proven&quot;?</p> <p><strong>EDIT</strong>: How is the term &quot;wrong&quot; defined in mathematics then? You can say that proving Fermat's Last Theorem from the number theory axioms was a difficult task, but it can be taken as an axiom in another set of axioms.</p> <p>Is mathematics as rigorous and as thought-through as it is believed and expected to be? It seems to me that there are many loopholes, in problems as well as in the subject itself, but there is a false backbone of rigour that seems true until you start questioning the very fundamentals.</p>
<p>There are really two very different kinds of proofs:</p> <ul> <li><p><em>Informal proofs</em> are what mathematicians write on a daily basis to convince themselves and other mathematicians that particular statements are correct. These proofs are usually written in prose, although there are also geometrical constructions and "proofs without words". </p></li> <li><p><em>Formal proofs</em> are mathematical objects that model informal proofs. Formal proofs contain absolutely every logical step, with the result that even simple propositions have amazingly long formal proofs. Because of that, formal proofs are used mostly for theoretical purposes and for computer verification. Only a small percentage of mathematicians would be able to write down any formal proof whatsoever off the top of their head. </p></li> </ul> <p>With a little humor, I should say there is a third kind of proof: </p> <ul> <li><em>High-school proofs</em> are arguments that teachers force their students to reproduce in high school mathematics classes. These have to be written according to very specific rules described by the teacher, which are seemingly arbitrary and not shared by actual informal or formal proofs outside high-school mathematics. High-school proofs include the "two-column proofs" where the "steps" are listed on one side of a vertical line and the "reasons" on the other. The key thing to remember about high-school proofs is that they are only an imitation of "real" mathematical proofs.</li> </ul> <p>Most mathematicians learn about mathematical proofs by reading and writing them in classes. Students develop proof skills over the course of many years in the same way that children learn to speak - without learning the rules first. So, as with natural languages, there is no firm definition of "what is an informal proof", although there are certainly common patterns. 
</p> <p>If you want to learn about proofs, the best way is to read some real mathematics written at a level you find comfortable. There are many good sources, so I will point out only two: <a href="http://www.maa.org/pubs/mathmag.html">Mathematics Magazine</a> and <a href="http://www.maa.org/mathhorizons/">Math Horizons</a> both have well-written articles on many areas of mathematics. </p>
<p>Starting from the end, if you take Pythagoras' Theorem as an axiom, then proving it is very easy. A proof just consists of a single line, stating the axiom itself. The modern way of looking at axioms is not as things that can't be proven, but rather as those things that we explicitly state as things that hold. </p> <p>Now, exactly what a proof is depends on what you choose as the rules of inference in your logic. It is important to understand that a proof is a typographical entity. It is a list of symbols. There are certain rules of how to combine certain lists of symbols to extend an existing proof by one more line. These rules are called inference rules. </p> <p>Now, remembering that all of this happens just on a piece of paper - the proof consists just of marks on paper, where what you accept as a valid proof is anything that is obtained from the axioms by following the inference rules - we would somehow like to relate this to properties of actual mathematical objects. To understand that, another technicality is required. If we are to write a proof as symbols on a piece of paper, we had better have something telling us which symbols we are allowed to use, and how to combine them to obtain what are called terms. This is provided by the formal concept of a language. Now, to relate symbols on a piece of paper to mathematical objects we turn to semantics. First the language needs to be interpreted (another technical thing). Once the language is interpreted, each statement (a statement is a bunch of terms put together in a certain way that is trying to convey a property of the objects we are interested in) becomes either true or false. </p> <p>This is important: Before an interpretation was made, we could still prove things. A statement was either provable or not. Now, with an interpretation at hand, each statement is also either true or false (in that particular interpretation). So, now comes the question of whether or not the rules of inference are <em>sound</em>. 
That is to say, whether those things that are provable from the axioms are actually true in each and every interpretation where these axioms hold. Of course we absolutely must choose the inference rules so that they are sound. </p> <p>Another question is whether we have completeness. That is, if a statement is true under each and every interpretation where the axioms hold, does it follow that a proof exists? This is a very subtle question, since it relates semantics (a concept that is quite elusive) to provability (a concept that is completely mechanical). Typically, proving that a logical system is complete is quite hard. </p> <p>I hope this satisfies your curiosity, and thumbs up for your interest in these issues!</p>
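The "list of symbols obtained by inference rules" picture can be made concrete in a proof assistant. Here is a tiny illustration in Lean (a sketch only; `Nat.add_comm` plays the role of a previously established theorem in the library, and `rw` is an inference step that rewrites one term into another):

```lean
-- A one-line proof: the statement follows directly from a known lemma.
theorem sum_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A proof built from an explicit rewriting step rather than a single lemma:
example (a b c : Nat) : (a + b) + c = (b + a) + c := by
  rw [Nat.add_comm a b]
```

Every step the machine accepts must be justified by an inference rule, which is exactly the sense in which a formal proof is "completely mechanical".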
linear-algebra
<p>I'm TAing linear algebra next quarter, and it strikes me that I only know one example of an application I can present to my students. I'm looking for applications of elementary linear algebra outside of mathematics that I might talk about in discussion section.</p> <p>In our class, we cover the basics (linear transformations; matrices; subspaces of $\Bbb R^n$; rank-nullity), orthogonal matrices and the dot product (incl. least squares!), diagonalization, quadratic forms, and singular-value decomposition.</p> <p>Showing my ignorance, the only application of these I know is the one that was presented in the linear algebra class I took: representing dynamical systems as Markov processes, and diagonalizing the matrix involved to get a nice formula for the $n$th state of the system. But surely there are more than these.</p> <p>What are some applications of the linear algebra covered in a first course that can motivate the subject for students? </p>
<p>I was a teaching assistant in Linear Algebra last semester and I collected a few applications to present to my students. This is one of them:</p> <p><strong>Google's PageRank algorithm</strong></p> <p>This algorithm is the "heart" of the search engine and sorts documents of the World Wide Web by their "importance" in decreasing order. For the sake of simplicity, let us look at a system consisting of only four websites. We draw an arrow from $i$ to $j$ if there is a link from $i$ to $j$.</p> <p><img src="https://i.sstatic.net/mHgB7.png" alt=""></p> <p>The goal is to compute a vector $\underline{x} \in \mathbb{R}^4$, where each entry $x_i$ represents the website's importance. A bigger value means the website is more important. There are three criteria contributing to $x_i$:</p> <ol> <li>The more websites contain a link to $i$, the bigger $x_i$ gets.</li> <li>Links from more important websites carry more weight than those from less important websites.</li> <li>Links from a website which contains many links to other websites (outlinks) have less weight.</li> </ol> <p>Each website has exactly one "vote". This vote is distributed uniformly among the website's outlinks. This is known as <em>Web-Democracy</em>. It leads to a system of linear equations for $\underline{x}$. In our case, for</p> <p>$$P = \begin{pmatrix} 0&amp;0&amp;1&amp;1/2\\ 1/3&amp;0&amp;0&amp;0\\ 1/3&amp; 1/2&amp;0&amp;1/2\\ 1/3&amp;1/2&amp;0&amp;0 \end{pmatrix}$$</p> <p>the system of linear equations reads $\underline{x} = P \underline{x}$. The matrix $P$ is a stochastic matrix, hence $1$ is an eigenvalue of $P$. One of the corresponding eigenvectors is</p> <p>$$\underline{x} = \begin{pmatrix} 12\\4\\9\\6 \end{pmatrix},$$</p> <p>hence $x_1 &gt; x_3 &gt; x_4 &gt; x_2$. Let</p> <p>$$G = \alpha P + (1-\alpha)S,$$</p> <p>where $S$ is a matrix corresponding to purely randomised browsing without links, i.e. all entries are $\frac{1}{N}$ if there are $N$ websites. 
The matrix $G$ is called the <em>Google matrix</em>. The inventors of the PageRank algorithm, Sergey Brin and Larry Page, chose $\alpha = 0.85$. Note that $G$ is still a stochastic matrix. An eigenvector for the eigenvalue $1$ of $G$, i.e. a solution of $\underline{x} = G \underline{x}$, in our example would be (rounded)</p> <p>$$\underline{x} = \begin{pmatrix} 18\\7\\14\\10 \end{pmatrix},$$</p> <p>leading to the same ranking.</p>
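The example above can be reproduced in a few lines of NumPy; this sketch uses the 4-site link matrix $P$ and $\alpha = 0.85$ from the answer, and finds the dominant eigenvector by power iteration:

```python
import numpy as np

# Column-stochastic link matrix from the 4-website example.
P = np.array([
    [0,   0,   1, 1/2],
    [1/3, 0,   0, 0  ],
    [1/3, 1/2, 0, 1/2],
    [1/3, 1/2, 0, 0  ],
])

N = P.shape[0]
alpha = 0.85
S = np.full((N, N), 1 / N)        # purely randomised browsing
G = alpha * P + (1 - alpha) * S   # the Google matrix

# Power iteration: repeatedly applying G converges to an eigenvector
# for the dominant eigenvalue 1, since G is column-stochastic and positive.
x = np.full(N, 1 / N)
for _ in range(200):
    x = G @ x

ranking = np.argsort(-x)  # site indices by decreasing importance
print(x)                  # proportional to (18, 7, 14, 10)
print(ranking)            # [0 2 3 1], i.e. x_1 > x_3 > x_4 > x_2
```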
<p>Another very useful application of linear algebra is</p> <p><strong>Image Compression (Using the SVD)</strong></p> <p>Any real matrix $A$ can be written as</p> <p>$$A = U \Sigma V^T = \sum_{i=1}^{\operatorname{rank}(A)} u_i \sigma_i v_i^T,$$</p> <p>where $U$ and $V$ are orthogonal matrices and $\Sigma$ is a diagonal matrix. Every greyscale image can be represented as a matrix of the intensity values of its pixels, where each element of the matrix is a number between zero and one. For images of higher resolution, we have to store more numbers in the intensity matrix, e.g. for a 720p greyscale photo (1280 x 720) there are 921,600 elements in its intensity matrix. Instead of using up storage by saving all those elements, a truncated singular value decomposition of this matrix yields a low-rank approximation that requires much less storage.</p> <p>You can create a <em>rank $J$ approximation</em> of the original image by using the first $J$ singular values of its intensity matrix, i.e. only looking at</p> <p>$$\sum_{i=1}^J u_i \sigma_i v_i^T .$$</p> <p>This saves a large amount of disk space, but also causes the image to lose some of its visual clarity. Therefore, you must choose a number $J$ such that the loss of visual quality is minimal but the memory savings are significant. Example:</p> <p><img src="https://i.sstatic.net/JkUsU.png" alt=""></p> <p>The image on the RHS is an approximation of the image on the LHS obtained by keeping $\approx 10\%$ of the singular values. It takes up $\approx 18\%$ of the original image's storage. (<a href="http://www.math.uci.edu/icamp/courses/math77c/demos/SVD_compress.pdf" rel="noreferrer">Source</a>)</p>
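A minimal NumPy sketch of the rank-$J$ approximation (a random matrix stands in for a real intensity matrix, and the storage count assumes you keep the factors $u_i, \sigma_i, v_i$ rather than the full matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((64, 48))          # stand-in for a greyscale intensity matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)

J = 10                            # keep only the first J singular values
A_J = U[:, :J] @ np.diag(s[:J]) @ Vt[:J, :]

# Storage: A needs 64*48 numbers; the rank-J factors need J*(64 + 48 + 1).
full_storage = A.size
compressed = J * (A.shape[0] + A.shape[1] + 1)
print(compressed / full_storage)  # fraction of the original storage

# By the Eckart-Young theorem A_J is the best rank-J approximation in the
# Frobenius norm; the error equals sqrt of the sum of the dropped sigma_i^2.
err = np.linalg.norm(A - A_J)
print(err, np.sqrt(np.sum(s[J:] ** 2)))
```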
differentiation
<p>I've got a function <span class="math-container">$$g(x,y) = \| f(x,y) \|_2$$</span> and I want to calculate its derivatives with respect to <span class="math-container">$x$</span> and <span class="math-container">$y$</span>.</p> <p>Using Mathematica, differentiating w.r.t. <span class="math-container">$x$</span> gives me <span class="math-container">$ f'_x(x,y) \text{Norm}'( f(x,y))$</span>, where Norm is <span class="math-container">$\| \cdot \|$</span>.</p> <p>I read <a href="http://www.cs.berkeley.edu/%7Ewkahan/MathH110/NORMlite.pdf" rel="noreferrer">here</a> that</p> <p><span class="math-container">$$d\|{\bf x}\| = \frac{ {\bf x}^Td{\bf x}}{\|{\bf x}\|}$$</span></p> <p>at least for the <span class="math-container">$2$</span>-norm. The point is that, since the argument of the norm is a multivariate function, I'm still confused about how to calculate <span class="math-container">$ f'_x(x,y) \text{Norm}'( f(x,y))$</span>.</p> <p>I think it should be <span class="math-container">$f'_x(x,y) \frac{f(x,y)}{||f(x,y)||}$</span>, but some verification would be great :)</p>
<p>Suppose $f:\mathbb R^m \to \mathbb R^n$. Decompose into $f = (f_1, \ldots, f_n)$. Each $f_i$ is a real-valued function, <em>i.e.</em>, $f_i: \mathbb R^m \to \mathbb R$. Then $$ g(X) = \|f(X)\|_2 = \sqrt{\sum_{i=1}^n f_i(X)^2}. $$ Therefore, $$\nabla g(X) = \frac 12\left(\sum_{i=1}^n f_i(X)^2\right)^{-\frac 12}\left(\sum_{i=1}^n 2f_i(X)\nabla f_i(X)\right) = \frac{\sum_{i=1}^n f_i(X)\nabla f_i(X)}{\|f(X)\|_2}. $$ This matches your answer.</p> <p>If you want to write in terms of the Jacobian matrix of $f$ instead of components $f_i$, you can: $$ \nabla g(X) = \frac{J_f(X)^T f(X)}{\|f(X)\|_2}. $$</p>
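The formula $\nabla g = J_f^T f / \|f\|_2$ is easy to verify numerically. This is a sketch with a hypothetical $f:\mathbb R^2\to\mathbb R^3$ (chosen only for illustration), comparing the formula against a finite-difference gradient of $g$:

```python
import numpy as np

def f(X):
    # Hypothetical f : R^2 -> R^3, chosen only for illustration.
    x, y = X
    return np.array([x * y, np.sin(x), x + y**2])

def jacobian(X):
    # Analytic Jacobian of f at X, rows = components, columns = variables.
    x, y = X
    return np.array([[y,         x      ],
                     [np.cos(x), 0.0    ],
                     [1.0,       2 * y  ]])

def g(X):
    return np.linalg.norm(f(X))  # the 2-norm

X = np.array([0.7, -1.3])
grad_formula = jacobian(X).T @ f(X) / np.linalg.norm(f(X))

# Central-difference gradient of g for comparison.
h = 1e-6
grad_numeric = np.array([
    (g(X + np.array([h, 0])) - g(X - np.array([h, 0]))) / (2 * h),
    (g(X + np.array([0, h])) - g(X - np.array([0, h]))) / (2 * h),
])
print(np.max(np.abs(grad_formula - grad_numeric)))  # tiny (finite-difference error only)
```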
<p>To calculate the derivative of the squared norm <span class="math-container">$ g(x, y) = \|f(x, y)\|^2 $</span> with respect to <span class="math-container">$x$</span>, you can use the product rule on the dot product. Here's the calculation:</p> <p>Given <span class="math-container">$$g(x, y) = \|f(x, y)\|^2,$$</span> let's find <span class="math-container">$\dfrac{{\partial g}}{{\partial x}}$</span>. We have</p> <p><span class="math-container">$$ \begin{align*} g(x, y) &amp;= \|f(x, y)\|^2 \\ &amp;= [f(x, y) \cdot f(x, y)] \\ &amp;= [f(x, y)]^T \cdot f(x, y) \quad \text{ (assuming \(f\) is a column vector)} \\ \end{align*} $$</span> Now, we can differentiate both sides with respect to <span class="math-container">$x$</span>: <span class="math-container">$$ \begin{align*} \frac{{\partial g}}{{\partial x}} &amp;= \frac{{\partial}}{{\partial x}} \left([f(x, y)]^T \cdot f(x, y)\right) \\ &amp;= \left(\frac{{\partial}}{{\partial x}}[f(x, y)]^T\right) \cdot f(x, y) + [f(x, y)]^T \cdot \left(\frac{{\partial}}{{\partial x}} f(x, y)\right) \\ \end{align*} $$</span></p> <p>Now, the first term is the derivative of the transpose of <span class="math-container">$f$</span>, which is simply the transpose of the derivative of <span class="math-container">$f$</span> with respect to <span class="math-container">$x$</span>. 
So, it becomes <span class="math-container">$f_x^\prime(x, y)^T$</span>.</p> <p>The second term <span class="math-container">$$\frac{{\partial}}{{\partial x}} f(x, y)$$</span> is the derivative of <span class="math-container">$f$</span> with respect to <span class="math-container">$x$</span>, therefore the final result for <span class="math-container">$\dfrac{{\partial g}}{{\partial x}}$</span> is:</p> <p><span class="math-container">$$\frac{{\partial g}}{{\partial x}} = f_x'(x, y)^T \cdot f(x, y) + [f(x, y)]^T \cdot f_x'(x, y) = 2\,[f(x, y)]^T \cdot f_x'(x, y)$$</span></p> <p>(the two terms are the same scalar, hence the factor of <span class="math-container">$2$</span>). This expression accounts for the derivative of the dot product as well as the derivative of the function <span class="math-container">$f(x, y)$</span> with respect to <span class="math-container">$x$</span>.</p> <p>The approach to calculate <span class="math-container">$\dfrac{{\partial g}}{{\partial y}}$</span> is similar.</p>
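The product-rule result for the squared norm can also be checked numerically; this sketch uses a hypothetical vector-valued $f$ (chosen only for illustration) and compares $2\,f^T f_x'$ against a finite-difference derivative:

```python
import numpy as np

def f(x, y):
    # Hypothetical f : R^2 -> R^2, chosen only for illustration.
    return np.array([x**2 + y, np.cos(x * y)])

def f_x(x, y):
    # Partial derivative of each component of f with respect to x.
    return np.array([2 * x, -y * np.sin(x * y)])

def g(x, y):
    v = f(x, y)
    return v @ v  # squared 2-norm

x, y = 1.2, 0.4

# Product-rule result: dg/dx = f_x^T f + f^T f_x = 2 f . f_x
dg_dx_formula = 2 * f(x, y) @ f_x(x, y)

# Central-difference check.
h = 1e-6
dg_dx_numeric = (g(x + h, y) - g(x - h, y)) / (2 * h)
print(dg_dx_formula, dg_dx_numeric)  # the two values agree
```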
linear-algebra
<p>Quick question: Can I define some inner product on any arbitrary vector space such that it becomes an inner product space? If yes, how can I prove this? If no, what would be a counter example? Thanks a lot in advance.</p>
<p>I'm assuming the ground field is ${\mathbb R}$ or ${\mathbb C}$, because otherwise it's not clear what an "inner product space" is.</p> <p>Now any vector space $X$ over ${\mathbb R}$ or ${\mathbb C}$ has a so-called <em>Hamel basis</em>. This is a family $(e_\iota)_{\iota\in I}$ of vectors $e_\iota\in X$ such that any $x\in X$ can be written uniquely in the form $x=\sum_{\iota\in I} \xi_\iota\ e_\iota$, where only finitely many $\xi_\iota$ are $\ne 0$. Unfortunately you need the axiom of choice to obtain such a basis if $X$ is not finitely generated.</p> <p>Defining $\langle x, y\rangle :=\sum_{\iota\in I} \xi_\iota\ \bar\eta_\iota$ gives a sesquilinear (bilinear in the real case) "scalar product" on $X$ such that $\langle x, x\rangle&gt;0$ for any $x\ne0$. Note that in computing $\langle x,y\rangle$ no question of convergence arises, since only finitely many terms are nonzero. </p> <p>It follows that $\langle\ ,\ \rangle$ is an inner product on $X$, and adopting the norm $\|x\|^2:=\langle x,x\rangle$ turns $X$ into a metric space in the usual way.</p>
<p>How about vector spaces over <a href="http://en.wikipedia.org/wiki/Finite_field">finite fields</a>? Finite fields don't have an ordered subfield, and thus one cannot meaningfully define a positive-definite inner product on vector spaces over them.</p>
game-theory
<p>I am from a programming background but with very limited knowledge of maths.</p> <p>I am very eager to learn and apply <a href="http://www.britannica.com/EBchecked/topic/224893/game-theory" rel="noreferrer">game theory</a> to understand the dynamics of international politics and economics, but I am facing difficulties understanding the maths involved. </p> <p>So, </p> <ol> <li>What are the <strong>prerequisites for understanding the math in game theory</strong>?</li> <li>What are authoritative sources for understanding game theory mathematically, and <strong>its applications in international politics and economics</strong>? </li> </ol>
<p>As @JordanMahar mentions, Fudenberg and Tirole is the standard graduate-level text. But I would start with <strong><em>Game Theory for Applied Economists</em></strong> by <strong>Gibbons</strong>. It is very readable. </p> <p>Prerequisites for Gibbons are minimal. A little algebra and probability will do just fine. </p>
<p>In economics, the classic source for game theory is:</p> <p>Fudenberg, D. and Tirole, J. (1991). <a href="http://www.amazon.ca/Game-Theory-Drew-Fudenberg/dp/0262061414/ref=sr_1_1?ie=UTF8&amp;qid=1369889236&amp;sr=8-1&amp;keywords=game+theory+tirole" rel="nofollow">Game Theory</a></p> <p>At least with Game Theory applied to economics, you can begin with a minimal knowledge of mathematics (applied calculus and some set theory will usually suffice). </p>
logic
<p>In some areas of mathematics it is everyday practice to prove the existence of things by entirely non-constructive arguments that say nothing about the object in question other than it exists, e.g. the celebrated probabilistic method and many things found in this thread: <a href="https://math.stackexchange.com/questions/1452844/what-are-some-things-we-can-prove-they-must-exist-but-have-no-idea-what-they-ar">What are some things we can prove they must exist, but have no idea what they are?</a></p> <p><strong>Now what about proofs themselves?</strong> Is there some remote (or not so remote) area of mathematics or logic which contains a concrete theorem that is actually capable of being proved by showing that a proof must exist without actually giving such a proof?</p> <p>Naively, I imagine that this would require formalizing a proof in such a way that it can be regarded as a mathematical object itself and then maybe showing that the set of such is non-empty. The only thing in this direction that I've seen so far is the "category of proofs", but that's not the answer.</p> <p>This may sound like mathematical science fiction, but initially so does e.g. the idea of proving that some statement is unprovable in a framework, which has become standard in axiomatic set theory.</p> <p>Feel free to change my speculative tags.</p>
<p>There are various ways to interpret the question. One interesting class of examples consists of "speed up" theorems. These generally involve two formal systems, $T_1$ and $T_2$, and family of statements which are provable in both $T_1$ and $T_2$, but for which the shortest formal proofs in $T_1$ are much longer than the shortest formal proofs in $T_2$. </p> <p>One of the oldest such theorems is due to Gödel. He noticed that statements such as "This theorem cannot be proved in Peano Arithmetic in fewer than $10^{1000}$ symbols" are, in fact, provable in Peano Arithmetic. </p> <p>Knowing this, we know that we could make a formal proof by cases that examines every Peano Arithmetic formal proof with fewer than $10^{1000}$ symbols and checks that none of them proves the statement. So we can prove indirectly that a formal proof of the statement in Peano Arithmetic exists. </p> <p>But, because the statement is true, the shortest formal proof of the statement in Peano Arithmetic will in fact require more than $10^{1000}$ symbols. So nobody will be able to write out that formal proof completely. We can replace $10^{1000}$ with any number we wish, to obtain results whose shortest formal proof in Peano arithmetic must have at least that many symbols. </p> <p>Similarly, if we prefer another formal system such as ZFC, we can consider statements such as "This theorem cannot be proved in ZFC in fewer than $10^{1000}$ symbols". In this way each sufficiently strong formal system will have some results which we know are formally provable, but for which the shortest formal proof in that system is too long to write down. </p>
<p><a href="https://en.wikipedia.org/wiki/Leon_Henkin#The_completeness_proof">Henkin's completeness proof</a> is an example of what you seek: It demonstrates that for a certain statement there is a proof, but does not establish what that proof is.</p>
number-theory
<p>This conjecture is tested for all odd natural numbers less than $10^8$: </p> <p>If $n&gt;1$ is an odd natural number, then there are natural numbers $a,b$ such that $n=a+b$ and $a^2+b^2\in\mathbb P$. </p> <p>$\mathbb P$ is the set of prime numbers.</p> <p>I would welcome counterexamples, heuristics or a proof.</p> <hr> <p>Addendum: For odd $n$, $159&lt;n&lt;50,000$, there are $a,b\in\mathbb Z^+$ such that $n=a+b$ and both $a^2+b^2$ and $a^2+(b+2)^2$ are primes.</p> <hr> <p>As hinted by pisco125 in a comment, there is a weaker version of the conjecture:</p> <blockquote> <p>Every odd number can be written $x+y$ where $x+iy$ is a Gaussian prime.</p> </blockquote> <p>This gives rise to a function</p> <p>$g:\mathbb P_G\to\mathbb O'$, given by $g(x+iy)=x+y$, where $\mathbb O'$ is the odd integers with $0,\pm 2$ included. </p> <p>The weaker conjecture is then equivalent to the statement that $g$ is onto. </p> <p>The reason why the conjecture is weaker is that any prime of the form $p=4n-1$ is a Gaussian prime. The reason why $0,\pm 2$ must be added is that $\pm 1 \pm i$ is a Gaussian prime.</p>
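A small sketch for testing the conjecture on modest ranges (trial-division primality only, so this is nowhere near the $10^8$ search mentioned above):

```python
def is_prime(m):
    # Simple trial division; adequate for small m.
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

def witness(n):
    """Return (a, b) with a + b = n and a^2 + b^2 prime, or None."""
    for a in range(1, n // 2 + 1):
        b = n - a
        if is_prime(a * a + b * b):
            return a, b
    return None

# Every odd n with 1 < n < 1000 should have a witness.
failures = [n for n in range(3, 1000, 2) if witness(n) is None]
print(failures)  # expected: []
```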
<p>Here are some heuristics. As Hans Engler defines, let $k(n)$ be the number of pairs $(a,b)$ with $a&lt;b$ for which $a+b=n$ and $a^2+b^2$ is prime. In other words, $$ k(n) = \#\{ 1\le a &lt; \tfrac n2 \colon a^2 + (n-a)^2 = 2a^2 - 2an + n^2 \text{ is prime} \}. $$ Ignoring issues of uniformity in $n$, the <a href="https://en.wikipedia.org/wiki/Bateman%E2%80%93Horn_conjecture" rel="noreferrer">Bateman–Horn conjecture</a> predicts that the number of prime values of an irreducible polynomial $f(a)$ up to $x$ is asymptotic to $$ \frac x{\log x} \prod_p \bigg( 1-\frac1p \bigg)^{-1} \bigg( 1-\frac{\sigma(p)}p \bigg), $$ where $\log$ denotes the natural logarithm and $$ \sigma(p) = \#\{ 1\le t\le p\colon f(t) \equiv 0 \pmod p \}. $$</p> <p>We now calculate $\sigma(p)$ for $f(a) = 2a^2 - 2an + n^2$. Note that the discriminant of $f$ is $(-2n)^2 - 4\cdot2n^2 = -4n^2$. Therefore if $p$ does not divide $-4n^2$, the number of solutions is given by the Legendre symbol $$ \sigma(p) = 1 + \bigg (\frac{-4n^2}p\bigg) = 1 + \bigg (\frac{-1}p\bigg) = \begin{cases} 2, &amp;\text{if } p\equiv1\pmod 4, \\ 0, &amp;\text{if } p\equiv3\pmod 4. \end{cases} $$ Furthermore, we can check by hand that if $p=2$ then $\sigma(p)=0$, while if $p$ divides $n$ then $\sigma(p)=1$. Therefore our prediction becomes $$ k(n) \approx \frac{n/2}{\log(n/2)} \cdot 2 \prod_{\substack{p\equiv1\pmod 4 \\ p\nmid n}} \frac{p-2}{p-1} \prod_{\substack{p\equiv3\pmod 4 \\ p\nmid n}} \frac p{p-1}. $$ (We're abusing notation: those two products don't individually converge, but their product converges when the primes are taken in their natural order.) In principle that constant could be cleverly evaluated to several decimal places. 
But for the purposes of experiment, perhaps it's valuable to note that $k(n)$ should be approximately $n/\log n$, times some universal constant, times $$ \prod_{\substack{p\equiv1\pmod 4 \\ p\mid n}} \frac{p-1} {p-2}\prod_{\substack{p\equiv3\pmod 4 \\ p\mid n}} \frac {p-1}p; $$ and so the data can be normalized by that function of $n$ to test for consistency.</p>
<p>COMMENT.-This is another way, maybe interesting for some people, of stating the same problem.</p> <p>Given an odd natural number $2n + 1$, there are $n$ different ways to express it as the sum of two natural numbers: $$2n+1=(2n-k)+(k+1);\space k=0,1,\dots,n-1$$ Then the problem can be stated equivalently as follows: $$\text{ For all natural 2n+1 greater than 1}\text{ at least one of the n numbers}\\\begin{cases}M_1=4n^2+1\\M_2=(2n-1)^2+2^2\\M_3=(2n-2)^2+3^2\\...........\\...........\\M_n=(n+1)^2+n^2\end{cases}\\ \text{ is a prime}$$</p> <p><strong>NOTE</strong>.- <em>It is known that such a prime (if it exists) is necessarily of the form $p=4m+1$. Besides, each $M_k$ has a factorization of the form $$M_k=\prod p_i^{\alpha_i}\prod q_j^{2\beta_j}$$ where $\alpha_i,\space \beta_j$ are non-negative integers, the primes $p_i$ and $q_j$ being of the form $4m+1$ and $4m-1$ respectively.</em></p> <p><em>The larger $2n+1$ is, the more likely such a prime is to exist. It would seem that the conjecture is true.</em></p>
linear-algebra
<p>In the <strong>few</strong> linear algebra texts I have read, the determinant is introduced in the following manner;</p> <p>“Here is a formula for what we call <span class="math-container">$\det A$</span>. Here are some other formulas. And finally, here are some nice properties of the determinant.”</p> <p>For example, in very elementary textbooks it is introduced by giving the co-factor expansion formula. In Axler’s “Linear Algebra Done Right” it is defined, for <span class="math-container">$T\in L(V)$</span> to be <span class="math-container">${(-1)}^{\dim V}$</span> times the constant term in the characteristic polynomial of <span class="math-container">$T$</span>.</p> <p>However I find this somewhat unsatisfactory. It’s like the real definition of the determinant is hidden. Ideally, wouldn’t the determinant be defined in the following manner:</p> <p>“Given a matrix <span class="math-container">$A$</span>, let <span class="math-container">$\det A$</span> be an element of <span class="math-container">$\mathbb{F}$</span> such that <span class="math-container">$x$</span>, <span class="math-container">$y$</span> and <span class="math-container">$z$</span>.”</p> <p>Then one would proceed to prove that this element is unique, and derive the familiar formulae.</p> <p>So my question is: Does a definition of the latter type exist, is there some minimal set of properties sufficient to define what a determinant is? If not, can you explain why?</p>
<p>Let $V$ be a vector space of dimension $n$. For any $p$, the construction of the <a href="http://en.wikipedia.org/wiki/Exterior_algebra">exterior power</a> $\Lambda^p(V)$ is <a href="http://en.wikipedia.org/wiki/Functor">functorial</a> in $V$: it is the universal object for alternating multilinear functions out of $V^p$, that is, functions</p> <p>$$\phi : V^p \to W$$</p> <p>where $W$ is any other vector space satisfying $\phi(v_1, ... v_i + v, ... v_p) = \phi(v_1, ... v_i, ... v_p) + \phi(v_1, ... v_{i-1}, v, v_{i+1}, ... v_p)$ and $\phi(v_1, ... v_i, ... v_j, ... v_p) = - \phi(v_1, ... v_j, ... v_i, ... v_p)$. What this means is that there is a map $\psi : V^p \to \Lambda^p(V)$ (the exterior product) which is alternating and multilinear which is universal with respect to this property; that is, given any other map $\phi$ as above with the same properties, $\phi$ factors uniquely as $\phi = f \circ \psi$ where $f : \Lambda^p(V) \to W$ is linear.</p> <p>Intuitively, the universal map $\psi : V^p \to \Lambda^p(V)$ is the universal way to measure the oriented $p$-dimensional volumes of <a href="http://en.wikipedia.org/wiki/Parallelepiped#Parallelotope">paralleletopes</a> defined by $p$-tuples of vectors in $V$, the point being that for geometric reasons oriented $p$-dimensional volume is alternating and multilinear. (It is instructive to work out how this works when $n = 2, 3$ by explicitly drawing some diagrams.)</p> <p>Functoriality means the following: if $T : V \to W$ is any map between two vector spaces, then there is a natural map $\Lambda^p T : \Lambda^p V \to \Lambda^p W$ between their $p^{th}$ exterior powers satisfying certain natural conditions. This natural map comes in turn from the natural action $T(v_1, ... v_p) = (Tv_1, ... Tv_p)$ defining a map $T : V^p \to W^p$ which is compatible with the passing to the exterior powers.</p> <p>The top exterior power $\Lambda^n(V)$ turns out to be one-dimensional. 
We then define the <strong>determinant</strong> of $T : V \to V$ to be the scalar $\Lambda^n T : \Lambda^n(V) \to \Lambda^n(V)$ by which $T$ acts on the top exterior power. This is equivalent to the intuitive definition that $\det T$ is the constant by which $T$ multiplies oriented $n$-dimensional volumes. But it requires <em>no arbitrary choices</em>, and the standard properties of the determinant (for example that it is multiplicative, that it is equal to the product of the eigenvalues) are extremely easy to verify.</p> <p>In this definition of the determinant, all the work that would normally go into showing that the determinant is the unique function with such-and-such properties goes into showing that $\Lambda^n(V)$ is one-dimensional. If $e_1, ... e_n$ is a basis, then $\Lambda^n(V)$ is in fact spanned by $e_1 \wedge e_2 \wedge ... \wedge e_n$. This is not so hard to prove; it is essentially an exercise in row reduction.</p> <p>Note that this definition does not even require a definition of oriented $n$-dimensional volume as a number. Abstractly such a notion of volume is given by a choice of isomorphism $\Lambda^n(V) \to k$ where $k$ is the underlying field, but since $\Lambda^n(V)$ is one-dimensional its space of endomorphisms is already <em>canonically</em> isomorphic to $k$. </p> <p>Note also that just as the determinant describes the action of $T$ on the top exterior power $\Lambda^n(V)$, the $p \times p$ minors of $T$ describe the action of $T$ on the $p^{th}$ exterior power $\Lambda^p(V)$. In particular, the $(n-1) \times (n-1)$ minors (which form the matrix of cofactors) describe the action of $T$ on the second-to-top exterior power $\Lambda^{n-1}(V)$. 
This exterior power has the same dimension as $V$, and with the right extra data can be identified with $V$, and this leads to a quick and natural proof of the explicit formula for the inverse of a matrix.</p> <hr> <p>As an advance warning, the determinant is sometimes defined as an alternating multilinear function on $n$-tuples of vectors $v_1, ... v_n$ satisfying certain properties; this properly defines a linear transformation $\Lambda^n(V) \to k$, not a determinant of a linear transformation $T : V \to V$. If we fix a basis $e_1, ... e_n$, then this function can be thought of as the determinant of the linear transformation sending $e_i$ to $v_i$, but this definition is basis-dependent. </p>
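<p>The relation between minors and exterior powers is easy to check numerically. Below is a small NumPy sketch (the helper name <code>exterior_power</code> is my own, not standard) that builds the matrix of $\Lambda^p T$ out of $p \times p$ minors and verifies both functoriality, $\Lambda^p(AB) = (\Lambda^p A)(\Lambda^p B)$, and the fact that the top power acts by $\det T$:</p>

```python
import numpy as np
from itertools import combinations

def exterior_power(T, p):
    # Matrix of Λ^p T in the basis {e_I : I a p-subset of {0..n-1}}:
    # its (I, J) entry is the p x p minor of T on rows I and columns J.
    n = T.shape[0]
    idx = list(combinations(range(n), p))
    return np.array([[np.linalg.det(T[np.ix_(I, J)]) for J in idx] for I in idx])

A = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
B = np.array([[1., 0., 2.], [0., 1., 1.], [1., 1., 0.]])

# Functoriality (Cauchy-Binet): Λ^2(AB) = (Λ^2 A)(Λ^2 B)
assert np.allclose(exterior_power(A @ B, 2),
                   exterior_power(A, 2) @ exterior_power(B, 2))
# The top exterior power is one-dimensional, and Λ^n T acts by det T
assert np.allclose(exterior_power(A, 3), [[np.linalg.det(A)]])
```

<p>The $1 \times 1$ matrix returned for $p = n$ is exactly the scalar $\det T$ discussed above.</p>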
<p>Let $B$ be a basis of a vector space $E$ of dimension $n$ over $\Bbbk$. Then $\det_B$ is the unique $n$-alternating multilinear form with $\det_B(B) = 1$.</p> <p>An $n$-multilinear form is a map from $E^n$ to $\Bbbk$ which is linear in each variable.</p> <p>An $n$-alternating multilinear form is a multilinear form which satisfies, for all $i,j$, $$ f(x_1,x_2,\dots,x_i,\dots, x_j, \dots, x_n) = -f(x_1,x_2,\dots,x_j,\dots, x_i, \dots, x_n) $$ In plain English, the sign of the form changes whenever you swap two arguments. This is why the closed formula for the determinant is the big sum over permutations, with each term weighted by the sign of its permutation. </p>
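<p>As a concrete companion to the "big sum over permutations", here is a short Python sketch of the closed formula $\det M = \sum_{\sigma} \operatorname{sgn}(\sigma) \prod_i m_{i,\sigma(i)}$; the alternating property shows up as a sign flip when two rows are swapped:</p>

```python
from itertools import permutations
from math import prod

def perm_sign(p):
    # Sign of a permutation = (-1)^(number of inversions)
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

def det(M):
    # The closed formula: a big sum over all n! permutations
    n = len(M)
    return sum(perm_sign(p) * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

M = [[1, 2, 3], [0, 1, 4], [5, 6, 0]]
assert det(M) == 1
# Alternating: swapping two rows flips the sign
assert det([M[1], M[0], M[2]]) == -1
```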
linear-algebra
<p>I understand that a vector has direction and magnitude whereas a point doesn't.</p> <p>However, in the course notes that I am using, it is stated that a point is the same as a vector.</p> <p>Also, can you do cross product and dot product using two points instead of two vectors? I don't think so, but my roommate insists yes, and I'm kind of confused now.</p>
<p>Here's an answer without using symbols.</p> <p>The difference is precisely that between <em>location</em> and <em>displacement</em>.</p> <ul> <li>Points are <strong>locations in space</strong>.</li> <li>Vectors are <strong>displacements in space</strong>.</li> </ul> <p>An analogy with time works well.</p> <ul> <li>Times, (also called instants or datetimes) are <strong>locations in time</strong>.</li> <li>Durations are <strong>displacements in time</strong>.</li> </ul> <p>So, in time,</p> <ul> <li>4:00 p.m., noon, midnight, 12:20, 23:11, etc. are <em>times</em></li> <li>+3 hours, -2.5 hours, +17 seconds, etc., are <em>durations</em></li> </ul> <p>Notice how durations can be positive or negative; this gives them &quot;direction&quot; in addition to their pure scalar value. Now the best way to mentally distinguish times and durations is by the operations they support</p> <ul> <li>Given a time, you can add a duration to get a new time (3:00 + 2 hours = 5:00)</li> <li>You can subtract two times to get a duration (7:00 - 1:00 = 6 hours)</li> <li>You can add two durations (3 hrs, 20 min + 6 hrs, 50 min = 10 hrs, 10 min)</li> </ul> <p>But <em>you cannot add two times</em> (3:15 a.m. + noon = ???)</p> <p>Let's carry the analogy over to now talk about space:</p> <ul> <li><span class="math-container">$(3,5)$</span>, <span class="math-container">$(-2.25,7)$</span>, <span class="math-container">$(0,-1)$</span>, etc. 
are <em>points</em></li> <li><span class="math-container">$\langle 4,-5 \rangle$</span> is a <em>vector</em>, meaning 4 units east then 5 south, assuming north is up (sorry residents of southern hemisphere)</li> </ul> <p>Now we have exactly the same analogous operations in space as we did with time:</p> <ul> <li>You can add a point and a vector: Starting at <span class="math-container">$(4,5)$</span> and going <span class="math-container">$\langle -1,3 \rangle$</span> takes you to the point <span class="math-container">$(3,8)$</span></li> <li>You can subtract two points to get the displacement between them: <span class="math-container">$(10,10) - (3,1) = \langle 7,9 \rangle$</span>, which is the displacement you would take from the second location to get to the first</li> <li>You can add two displacements to get a compound displacement: <span class="math-container">$\langle 1,3 \rangle + \langle -5,8 \rangle = \langle -4,11 \rangle$</span>. That is, going 1 step north and 3 east, THEN going 5 south and 8 east is the same thing and just going 4 south and 11 east.</li> </ul> <p>But you cannot add two points.</p> <p>In more concrete terms: Moscow + <span class="math-container">$\langle\text{200 km north, 7000 km west}\rangle$</span> is another location (point) somewhere on earth. But Moscow + Los Angeles makes no sense.</p> <p>To summarize, a location is where (or when) you are, and a displacement is <em>how to get from one location to another</em>. Displacements have both magnitude (how far to go) and a direction (which in time, a one-dimensional space, is simply positive or negative). In space, locations are <strong>points</strong> and displacements are <strong>vectors</strong>. In time, locations are (points in) time, a.k.a. <strong>instants</strong> and displacements are <strong>durations</strong>.</p> <p><strong>EDIT 1</strong>: In response to some of the comments, I should point out that 4:00 p.m. 
is <em>NOT</em> a displacement, but &quot;+4 hours&quot; and &quot;-7 hours&quot; are. Sure you can get to 4:00 p.m. (an instant) by adding the displacement &quot;+16 hours&quot; to the instant midnight. You can also get to 4:00 p.m. by adding the diplacement &quot;-3 hours&quot; to 7:00 p.m. The source of the confusion between locations and displacements is that people mentally work in coordinate systems relative to some origin (whether <span class="math-container">$(0,0)$</span> or &quot;midnight&quot; or similar) and both of these concepts are represented as coordinates. I guess that was the point of the question.</p> <p><strong>EDIT 2</strong>: I added some text to make clear that durations actually have direction; I had written both -2.5 hours and +3 hours earlier, but some might have missed that the negative encapsulated a direction, and felt that a duration is &quot;only a scalar&quot; when in fact the adding of a <span class="math-container">$+$</span> or <span class="math-container">$-$</span> really does give it direction.</p> <p><strong>EDIT 3</strong>: A summary in table form:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">Concept</th> <th>SPACE</th> <th>TIME</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">LOCATION</td> <td>POINT</td> <td>TIME</td> </tr> <tr> <td style="text-align: left;">DISPLACEMENT</td> <td>VECTOR</td> <td>DURATION</td> </tr> <tr> <td style="text-align: left;"></td> <td></td> <td></td> </tr> <tr> <td style="text-align: left;">Loc - Loc = Disp</td> <td>Pt - Pt = Vec</td> <td>Time - Time = Dur</td> </tr> <tr> <td style="text-align: left;"></td> <td><span class="math-container">$(3,5)-(10,2) = \langle -7,3 \rangle$</span></td> <td>7:30 - 1:15 = 6hr15m</td> </tr> <tr> <td style="text-align: left;"></td> <td></td> <td></td> </tr> <tr> <td style="text-align: left;">Loc + Disp = Loc</td> <td>Pt + Vec = Pt</td> <td>Time + Dur = Time</td> </tr> <tr> <td style="text-align: 
left;"></td> <td><span class="math-container">$(10,2)+ \langle -7,3 \rangle = (3,5)$</span></td> <td>3:15 + 2hr = 5:15</td> </tr> <tr> <td style="text-align: left;"></td> <td></td> <td></td> </tr> <tr> <td style="text-align: left;">Disp + Disp = Disp</td> <td>Vec + Vec = Vec</td> <td>Dur + Dur = Dur</td> </tr> <tr> <td style="text-align: left;"></td> <td><span class="math-container">$\langle 8, -5 \rangle + \langle -7, 3 \rangle = \langle 1, -2 \rangle$</span></td> <td>3hr + 5hr = 8hr</td> </tr> </tbody> </table> </div>
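</td>">
<p>The table of allowed operations can even be enforced in code. In the sketch below (the type names <code>Point</code> and <code>Vec</code> are illustrative), Loc&nbsp;&minus;&nbsp;Loc, Loc&nbsp;+&nbsp;Disp, and Disp&nbsp;+&nbsp;Disp all work, while Loc&nbsp;+&nbsp;Loc raises a <code>TypeError</code>:</p>

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vec:                               # a displacement
    x: float
    y: float
    def __add__(self, other):            # Disp + Disp = Disp
        if isinstance(other, Vec):
            return Vec(self.x + other.x, self.y + other.y)
        return NotImplemented

@dataclass(frozen=True)
class Point:                             # a location
    x: float
    y: float
    def __add__(self, v):                # Loc + Disp = Loc
        if isinstance(v, Vec):
            return Point(self.x + v.x, self.y + v.y)
        return NotImplemented            # Loc + Loc is meaningless
    def __sub__(self, other):            # Loc - Loc = Disp
        if isinstance(other, Point):
            return Vec(self.x - other.x, self.y - other.y)
        return NotImplemented

assert Point(10, 10) - Point(3, 1) == Vec(7, 9)
assert Point(4, 5) + Vec(-1, 3) == Point(3, 8)
assert Vec(8, -5) + Vec(-7, 3) == Vec(1, -2)
try:
    Point(1, 2) + Point(3, 4)            # "Moscow + Los Angeles"
    assert False
except TypeError:
    pass                                 # rejected, as it should be
```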
<p>Points and vectors are not the same thing. Given two points in 3D space, we can make a vector from the first point to the second. And, given a vector and a point, we can start at the point and "follow" the vector to get another point.</p> <p>There is a nice fact, however: the points in 3D space (or $\mathbb{R}^n$, more generally) are in a very nice correspondence with the vectors that start at the point $(0,0,0)$. Essentially, the idea is that we can represent the vector with its ending point, and no information is lost. This is sometimes called putting the vector in "standard position".</p> <p>For a course like vector calculus, it is important to keep a good distinction between points and vectors. Points correspond to vectors that start at the origin, but we may need vectors that start at other points.</p> <p>For example, given three points $A$, $B$, and $C$ in 3D space, we may want to find the equation of the plane that contains them. If we just knew the normal vector $\vec n$ of the plane, we could write the equation directly as $\vec n \cdot (x,y,z) = \vec n \cdot A$. So we need to find that normal $\vec n$. To do that, we compute the cross product of the vectors $\vec {AB}$ and $\vec{AC}$. If we computed the cross product of $A$ and $C$ instead (pretending they are vectors in standard position), we would not get the right normal vector.</p> <p>For example, if $A = (1,0,0)$, $B = (0,1,0)$, and $C = (0,0,1)$, the normal vector of the corresponding plane would not be parallel to any coordinate axis. But if we take any two of $A$, $B$, and $C$ and compute a cross product, we will get a vector parallel to one of the coordinate axes.</p>
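<p>A quick NumPy check of the example: the cross product of the vectors $\vec{AB}$ and $\vec{AC}$ gives the correct normal $(1,1,1)$, while crossing the points themselves (treated as vectors in standard position) gives an axis-parallel vector that is not normal to the plane:</p>

```python
import numpy as np

A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
C = np.array([0.0, 0.0, 1.0])

n = np.cross(B - A, C - A)     # cross the *vectors* AB and AC
assert np.allclose(n, [1.0, 1.0, 1.0])          # not axis-parallel
# n is a genuine normal: every point of the plane has the same dot product
assert np.isclose(n @ A, n @ B) and np.isclose(n @ B, n @ C)

wrong = np.cross(A, C)         # crossing the *points* in standard position
assert np.allclose(wrong, [0.0, -1.0, 0.0])     # axis-parallel, not a normal
```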
probability
<h2>Does this reflect the real world and what is the empirical evidence behind this?</h2> <p><img src="https://upload.wikimedia.org/wikipedia/commons/f/f9/Largenumbers.svg" alt="Wikipedia illustration"></p> <p><strong><em>Layman here so please avoid abstract math in your response.</em></strong></p> <p>The Law of Large Numbers states that the <em>average</em> of the results from multiple trials will tend to converge to its expected value <em>(e.g. 0.5 in a coin toss experiment)</em> as the sample size increases. The way I understand it, while the first 10 coin tosses may result in an average closer to 0 or 1 rather than 0.5, after 1000 tosses a statistician would expect the average to be very close to 0.5 and definitely 0.5 with an infinite number of trials.</p> <p>Given that a coin has no memory and each coin toss is independent, what physical laws would determine that the average of all trials will eventually reach 0.5. More specifically, why does a statistician believe that a random event with 2 possible outcomes will have a close to equal amount of both outcomes over say 10,000 trials? What prevents the coin to fall 9900 times on heads instead of 5200?</p> <p>Finally, since gambling and insurance institutions rely on such expectations, are there any experiments that have conclusively shown the validity of the LLN in the real world?</p> <p><strong>EDIT:</strong> I do differentiate between the LLN and the Gambler's fallacy. 
My question is NOT if or why any specific outcome or series of outcomes become more likely with more trials--that's obviously false--but why <strong><em>the mean of all outcomes</em></strong> tends toward the expected value?</p> <p><strong>FURTHER EDIT:</strong> LLN seems to rely on two assumptions in order to work:</p> <ol> <li>The universe is <strong>indifferent</strong> towards the result of any one trial, because each outcome is equally likely</li> <li>The universe is <strong>NOT indifferent</strong> towards any one particular outcome coming up too frequently and dominating the rest.</li> </ol> <p>Obviously, we as humans would label 50/50 or a similar distribution of a coin toss experiment <strong><em>"random"</em></strong>, but if heads or tails turns out to be say 60-70% after thousands of trials, we would suspect there is something wrong with the coin and it isn't fair. Thus, if the universe is truly <em>indifferent</em> towards the average of large samples, there is no way we can have true randomness and consistent predictions--there will always be a suspicion of bias unless the total distribution is not somehow kept in check by something that preserves the relative frequencies.</p> <p><strong>Why is the universe NOT indifferent towards big samples of coin tosses? What is the objective reason for this phenomenon?</strong></p> <p><strong>NOTE:</strong> <em>A good explanation would not be circular: justifying probability with probabilistic assumptions (e.g. "it's just more likely"). Please check your answers, as most of them fall into this trap.</em></p>
<p>Reading between the lines, it sounds like you are committing the fallacy of the layman interpretation of the "law of averages": that if a coin comes up heads 10 times in a row, then it needs to come up tails <em>more</em> often from then on, in order to balance out that initial asymmetry.</p> <p>The real point is that no divine presence needs to take corrective action in order for the <em>average</em> to stabilize. The simple reason is attenuation: once you've tossed the coin another 1000 times, the effect of those initial 10 heads has been diluted to mean almost nothing. What used to look like 100% heads is now a small blip only strong enough to move the needle from 50% to 51%.</p> <p>Now combine this observation with the easily verified fact that 9900 out of 10000 heads is simply a less common combination than 5000 out of 10000. The reason for that is combinatorial: there is simply less freedom in hitting an extreme target than a moderate one.</p> <p>To take a tractable example, suppose I ask you to flip a coin 4 times and get 4 heads. If you flip tails even once, you've failed. But if instead I ask you to aim for 2 heads, you still have options (albeit slimmer) no matter how the first two flips turn out. Numerically we can see that 2 out of 4 can be achieved in 6 ways: HHTT, HTHT, HTTH, THHT, THTH, TTHH. But the 4 out of 4 goal can be achieved in only one way: HHHH. If you work out the numbers for 9900 out of 10000 versus 5000 out of 10000 (or any specific number in that neighbourhood), that disparity becomes truly immense.</p> <p>To summarize: it takes no conscious effort to get an empirical average to tend towards its expected value. In fact it would be fair to think in the exact opposite terms: the effect that requires conscious effort is forcing the empirical average to <em>stray</em> from its expectation.</p>
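<p>The combinatorial disparity is easy to quantify with Python's <code>math.comb</code>; the digit counts in the comments below are only approximate orders of magnitude:</p>

```python
from math import comb

# 2 heads out of 4 flips: six sequences (HHTT, HTHT, HTTH, THHT, THTH, TTHH)
assert comb(4, 2) == 6
# 4 heads out of 4 flips: only HHHH
assert comb(4, 4) == 1

# At n = 10000 the disparity is astronomical: compare the number of decimal
# digits in the two counts rather than the counts themselves
moderate = len(str(comb(10000, 5000)))   # roughly 3000 digits
extreme = len(str(comb(10000, 9900)))    # roughly 240 digits
assert moderate - extreme > 2500
```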
<p>Nice question! In the real world, we don't get to let $n \to \infty$, so the question of why LLN should be of any comfort is important. </p> <p>The short answer to your question is that we <em>cannot</em> empirically verify LLN since we can never perform an infinite number of experiments. It's a theoretical idea that is very well founded, but, like all applied mathematics, the question of whether or not a particular model or theory holds is a perennial concern.</p> <p>A more useful law from a statistical standpoint is the Central Limit Theorem and the various probability inequalities (Chebyshev, Markov, Chernoff, etc). These allow us to place bounds on or approximate the <em>probability</em> of our sample average being far from the true value for a <em>finite</em> sample.</p> <p>As for an actual experiment to test LLN: one can hardly do better than <a href="http://en.wikipedia.org/wiki/John_Edmund_Kerrich">John Kerrich's 10,000 coin flip experiment</a>-- he got 50.67% heads!!</p> <p>So, in general, I would say LLN is empirically well supported by the fact that scientists from all fields rely upon sample averages to estimate models, and this approach has been largely successful, so the sample averages appear to be converging nicely for finite, and <em>feasible</em>, sample sizes.</p> <p>There are "pathological" cases that one can construct (I'll spare you the details) where one needs astronomical sample sizes to get a reasonable probability of being close to the true mean. This is apparent if you are using the Central Limit Theorem, but the LLN is simply not informative enough to give me much comfort in day-to-day practice.</p> <p><strong>The physical basis for probability</strong></p> <p>It seems you still have an issue with <em>why</em> long-run averages exist in the real world, apart from the theory of probability regarding the behavior of these averages <em>assuming</em> long-run averages exist. 
Let me state a fact that may help you:</p> <p><strong>Fact</strong> Neither probability theory nor the existence of long-run averages <strong>requires randomness</strong>! </p> <p>The determinism vs. indeterminism debate is for philosophers, not mathematicians. The notion of probability as a physical observable comes from <em>ignorance</em> <strong>or</strong> <em>absence</em> of the detailed dynamics of what you are observing. You could just as easily apply probability theory to a boring ol' pendulum as to the stock market or coin flips...it's just that with pendulums we have a nice, detailed theory that allows us to make precise estimates of future observations. I have no doubt that a full physical analysis of a coin flip would allow us to predict what face would come up...but in reality, we will never know this! </p> <p>This isn't an issue though. We don't need to assume a guiding hand nor true indeterminism to apply probability theory. Let's say that coin flips are truly deterministic; then we can still apply probability theory meaningfully if we assume a couple of basic things:</p> <ol> <li>The underlying process is <em>ergodic</em>...okay, this is a bit technical, but it basically means that the process dynamics are stable over the long term (e.g., we are not flipping coins in a hurricane or where tornadoes pop in and out of the vicinity!). Note that I said nothing about randomness...this could be a totally deterministic, albeit very complex, process...all we need is that the dynamics are stable (i.e., we could write down a series of equations with specific parameters for the coin flips and they wouldn't change from flip to flip).</li> <li>The values the process can take on at any time are "well behaved". Basically, like I said earlier with respect to the Cauchy...the system should not produce values that consistently exceed $\approx n$ times the sum of all previous observations. 
It may happen once in a while, but it should become very rare, very fast (the precise definition is somewhat technical).</li> </ol> <p>With these two assumptions, we now have the physical basis for the existence of a long-run average of a physical process. Now, if it's complicated, then instead of using physics to model it exactly, we can apply probability theory to describe the statistical properties of this process (i.e., aggregated over many observations). </p> <p>Note that the above is independent of whether or not we have selected the correct probability model. Models are made to match reality...reality does not conform itself to our models. Therefore, it is the job of the <em>modeler</em>, not nature or divine providence, to ensure that the results of the model match the observed outcomes.</p> <p>Hope this helps clarify when and how probability applies to the real world.</p>
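<p>For a do-it-yourself version of Kerrich's experiment, a quick simulation (seed fixed for reproducibility) shows the running average settling near 0.5, with no guiding hand required; an extreme start of ten straight heads simply washes out:</p>

```python
import random

random.seed(0)                     # fixed seed for reproducibility
flips = [random.randint(0, 1) for _ in range(100_000)]

# Prepend ten straight heads and watch dilution drag the average to 1/2
walk = [1] * 10 + flips
for n in (10, 100, 10_000, 100_000):
    print(n, sum(walk[:n]) / n)

assert sum(walk[:10]) / 10 == 1.0                  # 100% heads early on
assert abs(sum(walk) / len(walk) - 0.5) < 0.01     # near 1/2 in the end
```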
linear-algebra
<p>If $A,B$ are $2 \times 2$ matrices of real or complex numbers, then</p> <p>$$AB = \left[ \begin{array}{cc} a_{11} &amp; a_{12} \\ a_{21} &amp; a_{22} \end{array} \right]\cdot \left[ \begin{array}{cc} b_{11} &amp; b_{12} \\ b_{21} &amp; b_{22} \end{array} \right] = \left[ \begin{array}{cc} a_{11}b_{11}+a_{12}b_{21} &amp; a_{11}b_{12}+a_{12}b_{22} \\ a_{21}b_{11}+a_{22}b_{21} &amp; a_{21}b_{12}+a_{22}b_{22} \end{array} \right] $$</p> <p>What if the entries $a_{ij}, b_{ij}$ are themselves $2 \times 2$ matrices? Does matrix multiplication hold in some sort of "block" form ?</p> <p>$$AB = \left[ \begin{array}{c|c} A_{11} &amp; A_{12} \\\hline A_{21} &amp; A_{22} \end{array} \right]\cdot \left[ \begin{array}{c|c} B_{11} &amp; B_{12} \\\hline B_{21} &amp; B_{22} \end{array} \right] = \left[ \begin{array}{c|c} A_{11}B_{11}+A_{12}B_{21} &amp; A_{11}B_{12}+A_{12}B_{22} \\\hline A_{21}B_{11}+A_{22}B_{21} &amp; A_{21}B_{12}+A_{22}B_{22} \end{array} \right] $$ This identity would be very useful in my research.</p>
<p>It depends on how you partition them; not all partitions work. For example, if you partition these two matrices </p> <p>$$\begin{bmatrix} a &amp; b &amp; c \\ d &amp; e &amp; f \\ g &amp; h &amp; i \end{bmatrix}, \begin{bmatrix} a' &amp; b' &amp; c' \\ d' &amp; e' &amp; f' \\ g' &amp; h' &amp; i' \end{bmatrix} $$ </p> <p>in this way </p> <p>$$ \left[\begin{array}{c|cc}a&amp;b&amp;c\\ d&amp;e&amp;f\\ \hline g&amp;h&amp;i \end{array}\right], \left[\begin{array}{c|cc}a'&amp;b'&amp;c'\\ d'&amp;e'&amp;f'\\ \hline g'&amp;h'&amp;i' \end{array}\right] $$</p> <p>and then multiply them, it won't work. But this would</p> <p>$$\left[\begin{array}{c|cc}a&amp;b&amp;c\\ \hline d&amp;e&amp;f\\ g&amp;h&amp;i \end{array}\right] ,\left[\begin{array}{c|cc}a'&amp;b'&amp;c'\\ \hline d'&amp;e'&amp;f'\\ g'&amp;h'&amp;i' \end{array}\right] $$</p> <p>What's the difference? Well, in the first case, not all of the submatrix products are defined; for example, $\begin{bmatrix} a \\ d \\ \end{bmatrix}$ cannot be multiplied with $\begin{bmatrix} a' \\ d' \\ \end{bmatrix}$</p> <p>So, what is the general rule? 
(Taken entirely from the Wiki page on <a href="https://en.wikipedia.org/wiki/Block_matrix" rel="noreferrer">Block matrix</a>)</p> <p>Given an $(m \times p)$ matrix $\mathbf{A}$ with $q$ row partitions and $s$ column partitions $$\begin{bmatrix} \mathbf{A}_{11} &amp; \mathbf{A}_{12} &amp; \cdots &amp;\mathbf{A}_{1s}\\ \mathbf{A}_{21} &amp; \mathbf{A}_{22} &amp; \cdots &amp;\mathbf{A}_{2s}\\ \vdots &amp; \vdots &amp; \ddots &amp;\vdots \\ \mathbf{A}_{q1} &amp; \mathbf{A}_{q2} &amp; \cdots &amp;\mathbf{A}_{qs}\end{bmatrix}$$</p> <p>and a $(p \times n)$ matrix $\mathbf{B}$ with $s$ row partitions and $r$ column partitions</p> <p>$$\begin{bmatrix} \mathbf{B}_{11} &amp; \mathbf{B}_{12} &amp; \cdots &amp;\mathbf{B}_{1r}\\ \mathbf{B}_{21} &amp; \mathbf{B}_{22} &amp; \cdots &amp;\mathbf{B}_{2r}\\ \vdots &amp; \vdots &amp; \ddots &amp;\vdots \\ \mathbf{B}_{s1} &amp; \mathbf{B}_{s2} &amp; \cdots &amp;\mathbf{B}_{sr}\end{bmatrix}$$</p> <p>that are compatible with the partitions of $\mathbf{A}$, the matrix product</p> <p>$ \mathbf{C}=\mathbf{A}\mathbf{B} $</p> <p>can be formed blockwise, yielding $\mathbf{C}$ as an $(m\times n)$ matrix with $q$ row partitions and $r$ column partitions.</p>
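<p>The blockwise identity from the question is easy to sanity-check numerically, e.g. for a random $4 \times 4$ matrix split into four $2 \times 2$ blocks:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Partition each matrix into four 2x2 blocks
A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]

# Blockwise product, exactly as in the question's formula
blockwise = np.block([[A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
                      [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22]])
assert np.allclose(blockwise, A @ B)
```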
<p>It is always suspect with a very late answer to a popular question, but I came here looking for what a compatible block partitioning is and did not find it:</p> <p>For <span class="math-container">$\mathbf{AB}$</span> to work by blocks the important part is that the partition along the columns of <span class="math-container">$\mathbf A$</span> must match the partition along the rows of <span class="math-container">$\mathbf{B}$</span>. This is analogous to how, when doing <span class="math-container">$\mathbf{AB}$</span> without blocks—which is of course just a partitioning into <span class="math-container">$1\times 1$</span> blocks—the number of columns in <span class="math-container">$\mathbf A$</span> must match the number of rows in <span class="math-container">$\mathbf B$</span>.</p> <p>Example: Let <span class="math-container">$\mathbf{M}_{mn}$</span> denote any matrix of <span class="math-container">$m$</span> rows and <span class="math-container">$n$</span> columns irrespective of contents. We know that <span class="math-container">$\mathbf{M}_{mn}\mathbf{M}_{nq}$</span> works and yields a matrix <span class="math-container">$\mathbf{M}_{mq}$</span>. Split <span class="math-container">$\mathbf A$</span> by columns into a block of size <span class="math-container">$a$</span> and a block of size <span class="math-container">$b$</span>, and do the same with <span class="math-container">$\mathbf B$</span> by rows. Then split <span class="math-container">$\mathbf A$</span> however you wish along its rows, same for <span class="math-container">$\mathbf B$</span> along its columns. 
Now we have <span class="math-container">$$ A = \begin{bmatrix} \mathbf{M}_{ra} &amp; \mathbf{M}_{rb} \\ \mathbf{M}_{sa} &amp; \mathbf{M}_{sb} \end{bmatrix}, B = \begin{bmatrix} \mathbf{M}_{at} &amp; \mathbf{M}_{au} \\ \mathbf{M}_{bt} &amp; \mathbf{M}_{bu} \end{bmatrix}, $$</span></p> <p>and <span class="math-container">$$ AB = \begin{bmatrix} \mathbf{M}_{ra}\mathbf{M}_{at} + \mathbf{M}_{rb}\mathbf{M}_{bt} &amp; \mathbf{M}_{ra}\mathbf{M}_{au} + \mathbf{M}_{rb}\mathbf{M}_{bu} \\ \mathbf{M}_{sa}\mathbf{M}_{at} + \mathbf{M}_{sb}\mathbf{M}_{bt} &amp; \mathbf{M}_{sa}\mathbf{M}_{au} + \mathbf{M}_{sb}\mathbf{M}_{bu} \end{bmatrix} = \begin{bmatrix} \mathbf{M}_{rt} &amp; \mathbf{M}_{ru} \\ \mathbf{M}_{st} &amp; \mathbf{M}_{su} \end{bmatrix}. $$</span></p> <p>All multiplications conform, all sums work out, and the resulting matrix is the size you'd expect. There is nothing special about splitting in two so long as you match any column split of <span class="math-container">$\mathbf A$</span> with a row split in <span class="math-container">$\mathbf B$</span> (try removing a block row from <span class="math-container">$\mathbf A$</span> or further splitting a block column of <span class="math-container">$\mathbf B$</span>).</p> <p>The nonworking example from the accepted answer is nonworking because the columns of <span class="math-container">$\mathbf A$</span> are split into <span class="math-container">$(1, 2)$</span> while the rows of <span class="math-container">$\mathbf B$</span> are split into <span class="math-container">$(2, 1)$</span>.</p>
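<p>The rule that a column split of $\mathbf A$ must match the row split of $\mathbf B$ can be seen directly in code: with matching splits the blockwise sum reproduces $\mathbf{AB}$, while a mismatched split does not even conform:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Columns of A split as (1, 2) and rows of B split as (1, 2): compatible
A1, A2 = A[:, :1], A[:, 1:]
B1, B2 = B[:1, :], B[1:, :]
assert np.allclose(A1 @ B1 + A2 @ B2, A @ B)

# Rows of B split as (2, 1) instead: the block products don't even conform
try:
    A1 @ B[:2, :]          # (3x1) times (2x3)
    conforms = True
except ValueError:
    conforms = False
assert not conforms
```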
probability
<blockquote> <p>Suppose that we simulated a random walk on $\mathbb Z$ starting at $0$. At each step, we transition from position $x$ to position $x-3,\,x-2,\,x-1,\,x+1,\,x+2,$ or $x+3$ with equal probability. If we ever move to a position we have been at before, we stop. What is the probability that we will eventually reach (and stop at) $0$ again?</p> </blockquote> <p>This is to say, we consider that walks like $0,2,4,3,2$ end at $2$ because they repeated and walks like $0,2,4,3,1,0$ reach $0$ again - and are interested in the probability of the latter case.</p> <p>I'm not sure what methods I can use to dispatch such a problem; if the step size were $1$ rather than $2$, the answer is obviously $\frac{1}2$, as we would need to immediately undo our first step, otherwise we'd never return. If the step size is $1$ or $2$, then we can solve the problem by noting that we can never leap over any pair of positions $(x,x+1)$ - which allows us to classify and count every possible walk (e.g. walks of the form $0,1,3,5,\ldots,2n+1,2n,2n-2,\ldots,0$ form one class). The probability of returning to $0$ is $\frac{7}{18}$ in this case. However, no such approach extends to the case above. By simulating all random walks of length $8$ or less, I have proven that the probability $p$ satisfies $.32&lt;p&lt;.39$. Running many random trials suggests that $p$ is very close to the lower bound.</p>
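<p>For concreteness, the kind of Monte Carlo trial described above can be written in a few lines (names and seed are arbitrary); with 200,000 trials the estimate lands near the lower end of the bracket $.32&lt;p&lt;.39$:</p>

```python
import random

def returns_to_zero(r=3, max_steps=10_000):
    # One walk: jumps uniform on {-r, ..., -1, 1, ..., r}; stop on any revisit
    jumps = [j for j in range(-r, r + 1) if j != 0]
    pos, visited = 0, {0}
    for _ in range(max_steps):
        pos += random.choice(jumps)
        if pos in visited:
            return pos == 0      # stopped: did we stop *at* 0?
        visited.add(pos)
    return False

random.seed(42)
trials = 200_000
p_hat = sum(returns_to_zero() for _ in range(trials)) / trials
print(p_hat)
assert 0.31 < p_hat < 0.34
```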
<p>First let's consider a closely related enumeration problem.</p> <blockquote> <p>Define the <em>range</em> of a cycle $\{x_0, x_1, x_2, \ldots, x_{k-1}, x_k=x_0\}$ to be $\max_i \left|x_{i+1}-x_i\right|$. How many cycles on $\mathbb{Z}$ are there with $k$ edges and range $\le r$? (Two cycles are equivalent if they differ by a translation or a rotation.)</p> </blockquote> <p>If $N_k^{(r)}$ is the number of cycles with $k$ edges and range no greater than $r$, then the probability of returning safely to $0$ by making random jumps of length $\le r$ is given by $$ P^{(r)}=\sum_{k=2}^{\infty}\frac{kN_{k}^{(r)}}{(2r)^{k}}. $$ The factor of $k$ is needed because the starting point ($0$) can be identified with any of the nodes in a given cycle; and the factor of $(2r)^{-k}$ is the probability of taking any specific sequence of $k$ jumps. For $r=1$, there is only one legal cycle (a jump to the right and then a jump to the left), and it has two edges, so $$ P^{(1)}=\frac{2 N_2^{(1)}}{(2r)^{2}}=\frac{2\cdot 1}{(2\cdot 1)^2}=\frac{1}{2}. $$ For $r=2$, there are two cycles for each $k\ge 2$ (depending on whether the first rightward jump is $+1$ or $+2$), so $$ P^{(2)}=\sum_{k=2}^{\infty}\frac{kN_k^{(2)}}{(2r)^k}=2 \sum_{k=2}^{\infty}k 4^{-k}=-2x\frac{d}{dx}\sum_{k=2}^{\infty}x^{-k}\bigg\vert_{x=4}=-2x\frac{d}{dx}\left(\frac{1}{x(x-1)}\right)\bigg\vert_{x=4}=\frac{2(2x-1)}{x(x-1)^2}\bigg\vert_{x=4}=\frac{2\cdot 7}{4\cdot 9}=\frac{7}{18}. $$ This recovers the two known results. For $r\ge 3$, the desired enumeration can be done using a transfer matrix method. 
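<p>Both known values can be recovered from the series numerically; here is a quick partial-sum check using $N_k^{(1)} = 1$ only for $k = 2$ and $N_k^{(2)} = 2$ for every $k \ge 2$, truncated where the tail is negligible:</p>

```python
# P^(1): the single two-edge cycle
P1 = 2 * 1 / (2 * 1) ** 2
assert P1 == 0.5

# P^(2): two cycles for every k >= 2; truncate at k = 200 (tail << 1e-15)
P2 = sum(k * 2 / 4 ** k for k in range(2, 200))
assert abs(P2 - 7 / 18) < 1e-12
```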
I'll make this answer "community wiki" in case someone (maybe me) has the time and energy to fill in the details for $r=3$.</p> <hr> <p>The gist of the transfer matrix approach is that we can list the "states" that a particular node in the cycle can have, figure out which state-to-state transitions are allowed moving from left to right on the number line, and express the number of ways to get from the left end of a cycle to the right end (i.e., the number of different cycles) in terms of a product of many identical <em>local</em> matrices. In this case the "state" consists of the set of jumps to, from, and over the node, along with the range remaining to each jump. Because we're counting cycles, each node will have exactly one incoming edge and one outgoing edge. The states for $r=2$ are simple:</p> <p><img src="https://i.sstatic.net/ZApXL.png" alt="enter image description here"></p> <p>A blue (red) line represents a jump to the right (left), and the number next to a line indicates its remaining range. A legal transition consists of these steps:</p> <ul> <li>Decrement all numbers on the right edge by the same value (at least $1$, but no more than the smallest number on the tile).</li> <li>Select a new tile with the same number of blue and red lines on its left edge as the current tile has on its right edge.</li> <li>Place the new tile to the right of the current tile and connect lines of the same color. Make sure that all lines that pass through the new tile maintain their (decremented) values.</li> </ul> <p>Legal transitions for $r=2$ are: $A$ to $(B,C,D)$ (and $A$ to $D$ can happen two ways), $B$ to $(C,D)$, and $C$ to $(B,D)$. The corresponding transfer matrix is $$ \left(\begin{matrix}0 &amp; 0 &amp; 0 &amp; 0 \\ 1 &amp; 0 &amp; 1 &amp; 0 \\ 1 &amp; 1 &amp; 0 &amp; 0 \\ 2 &amp; 1 &amp; 1 &amp; 0 \end{matrix}\right). 
$$ Now, the number of cycles with $k$ edges will be given by $$ N^{(2)}_k = \left(\begin{matrix}0 &amp; 0 &amp; 0 &amp; 1 \end{matrix}\right) \left(\begin{matrix}0 &amp; 0 &amp; 0 &amp; 0 \\ 1 &amp; 0 &amp; 1 &amp; 0 \\ 1 &amp; 1 &amp; 0 &amp; 0 \\ 2 &amp; 1 &amp; 1 &amp; 0 \end{matrix}\right)^{k-1} \left(\begin{matrix}1 \\ 0 \\ 0 \\ 0 \end{matrix}\right) = \left(\begin{matrix}0 &amp; 0 &amp; 1 \end{matrix}\right) \left(\begin{matrix} 0 &amp; 1 &amp; 0 \\ 1 &amp; 0 &amp; 0 \\ 1 &amp; 1 &amp; 0 \end{matrix}\right)^{k-2} \left(\begin{matrix} 1 \\ 1 \\ 2 \end{matrix}\right), $$ where we can eliminate the first row and column of the full transfer matrix because the initial state ($A$) never recurs (i.e., the first row is all zeroes). Note that we are counting the number of ways to go from the initial state to the final state, $D$, in $k-1$ steps. In this case, the vector on the right-hand side is an eigenvector of the (reduced) transfer matrix with eigenvalue $1$, so $$ N^{(2)}_k=\left(\begin{matrix}0 &amp; 0 &amp; 1 \end{matrix}\right) \left(\begin{matrix} 1 \\ 1 \\ 2 \end{matrix}\right) = 2 $$ for any $k \ge 2$. In general, though, we will have $N_k=\sum_i c_i \lambda_i^{k-2}$, where the $\lambda_i$ are eigenvalues of the reduced transfer matrix.</p> <hr> <p>The $r=2$ case can be made even simpler by noting that the set of states is symmetric under exchange of red and blue ($A$ and $D$ are themselves symmetric, and $B$ and $C$ are symmetric images of each other), so we can treat mirror pairs as single states. The transitions are then: $A$ to $D$ (two ways), $A$ to $B/C$ (two ways), $B/C$ to $B/C$, and $B/C$ to $D$. 
So $$ N^{(2)}_k = \left(\begin{matrix}0 &amp; 0 &amp; 1 \end{matrix}\right) \left(\begin{matrix}0 &amp; 0 &amp; 0 \\ 2 &amp; 1 &amp; 0 \\ 2 &amp; 1 &amp; 0 \end{matrix}\right)^{k-1} \left(\begin{matrix}1 \\ 0 \\ 0 \end{matrix}\right) = \left(\begin{matrix}0 &amp; 1 \end{matrix}\right) \left(\begin{matrix} 1 &amp; 0 \\ 1 &amp; 0 \end{matrix}\right)^{k-2} \left(\begin{matrix} 2 \\ 2 \end{matrix}\right) \\ = \left(\begin{matrix}0 &amp; 1 \end{matrix}\right)\left(\begin{matrix} 2 \\ 2 \end{matrix}\right) = 2. $$ This additional simplification, though hardly necessary for $r=2$, will come in handy for $r=3$.</p> <hr> <p>For a less trivial application of the transfer matrix approach (but still not the full $r=3$ case), I considered the <em>asymmetric</em> model where each jump is uniformly drawn from $\{-2,-1,1,2,3\}$. The states in this case are:</p> <p><img src="https://i.sstatic.net/M2Ndv.png" alt="enter image description here"></p> <p>The legal transitions are: $A$ to $(B,C,D,E,H)$ (and $A$ to $H$ can happen two ways), $B$ to $(C,D,H)$ (two ways from $B$ to $H$), $C$ to $(D,H)$, $D$ to $(B,H)$, $E$ to $(F,G)$, $F$ to $G$, and $G$ to $H$. So the number of cycles with $k$ edges is given by $$ \left(\begin{matrix}0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1 \end{matrix}\right)\left(\begin{matrix} 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0\\ 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0\\ 1 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 1 &amp; 1 &amp; 0 &amp; 0\\ 2 &amp; 1 &amp; 1 &amp; 0 &amp; 0 &amp; 1 &amp; 0\\ \end{matrix}\right)^{k-2} \left(\begin{matrix} 1 \\ 1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 2 \end{matrix}\right). 
$$ Using the Jordan decomposition of the reduced transfer matrix, this is $$ \left(\begin{matrix}1 &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 1 &amp; 1 \end{matrix}\right)\left(\begin{matrix} 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0\\ 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; \nu_1 &amp; 0 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; \nu_2 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; \nu_2^{*} \end{matrix}\right)^{k-2}\left(\begin{matrix} -1 \\ 0 \\ 1 \\ 1 \\ c_1 \\ c_2 \\ c_2^{*} \end{matrix}\right) \\ =c_1 \nu_1^{k-2} + c_2 \nu_2^{k-2} + c_2^{*} (\nu_2^{*})^{k-2} - \delta_{k,2} + \delta_{k,4} + \delta_{k,5}, $$ where $\nu_1=1.32472$, $\nu_2=-0.662359 - 0.56228 i$, $c_1=2.945975$, and $c_2=0.0270124 + 0.118443 i$. Then the probability of safe return to $0$ is given by $$ P^{(3,2)}=\sum_{k=2}^{\infty}\frac{kN_k^{(3,2)}}{5^k}=-\frac{2}{5^2}+\frac{4}{5^4}+\frac{5}{5^5}+\sum_{i} c_i \sum_{k=2}^{\infty}\frac{k \nu_i^{k-2}}{5^k} \\ = -\frac{9}{125} + \sum_i \frac{c_i(10-\nu_i)}{5(5-\nu_i)^2} = -0.072 + 0.378409 + 2\cdot 0.00289355 \\ = 0.3121961. $$ For comparison, in a simple simulation of $10^8$ trials (running each trial for at most $1000$ jumps), $31218647$ asymmetric jumpers returned safely to the origin. This gives an estimate of $0.31219 \pm 0.00006$ for the probability; i.e., the simulation agrees perfectly with the matrix calculation above.</p> <hr> <p>Finally, for $r=3$ we have the following set of states:</p> <p><img src="https://i.sstatic.net/Hi8SQ.png" alt="enter image description here"></p> <p>Several pairs of states are related by red-blue exchange symmetry (e.g., $B$ and $B'$). Using this symmetry, we can write the transfer matrix in terms of symmetric states only (e.g., $B/B'$), as described earlier for the $r=2$ case. 
The transitions out of unprimed states are:</p> <ul> <li>$A$ to $(B,B',C,C',D,H(\times 3))$</li> <li>$B$ to $(B',C,C',E,H(\times 2))$</li> <li>$C$ to $(B',H)$</li> <li>$D$ to $(F,F',G,G')$</li> <li>$E$ to $(F',G')$</li> <li>$F$ to $G$</li> <li>$G$ to $(C,H)$.</li> </ul> <p>So</p> <p>$$ N_k^{(3)} = \left(\begin{matrix}0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1 \end{matrix}\right)\left(\begin{matrix} 1 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0\\ 2 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0\\ 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0\\ 0 &amp; 0 &amp; 2 &amp; 1 &amp; 0 &amp; 0 &amp; 0\\ 0 &amp; 0 &amp; 2 &amp; 1 &amp; 1 &amp; 0 &amp; 0\\ 2 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0\\ \end{matrix}\right)^{k-2} \left(\begin{matrix} 2 \\ 2 \\ 1 \\ 0 \\ 0 \\ 0 \\ 3 \end{matrix}\right). $$ This sequence begins with $3,6,14,30,62,130,\ldots$. After diagonalizing the matrix, we arrive at $$ N_k^{(3)}=\delta_{k,2}+\left(\begin{matrix}1 &amp; 1 &amp; 1 &amp; 1 \end{matrix}\right)\left(\begin{matrix} \nu_1 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; \nu_2 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; \nu_3 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; \nu_3^{*} \end{matrix}\right)^{k-2}\left(\begin{matrix} c_1 \\ c_2 \\ c_3 \\ c_3^{*} \end{matrix}\right)=\delta_{k,2} + \sum_i c_i \nu_i^{k-2}, $$ where $$ \nu_1 = -0.716673 \\ \nu_2 = 2.10692 \\ \nu_3 = 0.304877 - 0.754529 i \\ c_1 = -0.188335 \\ c_2 = 3.136179 \\ c_3 = -0.4739206 - 0.3006352 i. $$ Finally, the desired probability is $$ P^{(3)} = \sum_{k=2}^{\infty}\frac{kN_k^{(3)}}{6^k}=\frac{1}{18}+\sum_i \frac{c_i(12-\nu_i)}{6(6-\nu_i)^2}=0.325873. $$ Checking by simulation for $10^8$ trials yields an estimate of $0.32587 \pm 0.00006$, wholly consistent with this result. </p> <hr> <p><strong>Automating the process</strong></p> <p>One can do all of the following by making a number of formalizations. 
It is advantageous to mechanize the process before proceeding to larger cases; in particular, we will represent a state as a triple $(X,Y,P)$ where $X$ is the set of remaining ranges of rightward (red) edges, $Y$ is the set of leftward (blue) edges, and $P$ is a partition of $X\cup Y$ representing which edges have been connected (noting that if $x=y$ then the edges represented by $x\in X$ and $y\in Y$ originated together and are clearly connected). We will say $(X',Y',P')$ is an $n$-translate of $(X,Y,P)$ if we might see the former after moving our "window" $n$ steps to the right. For convenience, define $$m=\min(X\cup Y)$$ $$S-n=\{s-n:s\in S\}$$ and, because the relations are not conducive to standard notation, I will leave to the reader the problem of relating $P'$ and $P$. Then, we can have the following situations.</p> <ul> <li><p><strong>Birth</strong>: This is the state where the node in question has two rightward edges passing from it. We require: $$X'=X-n\cup \{r\}$$ $$Y'=Y-n\cup \{r\}$$ We will consider $r$ to be its own connectedness class in $P'$, and otherwise just shift everything.</p></li> <li><p><strong>Right-Continuation</strong>: This is the state where the node receives a rightward edge and also outputs one; we need that for some $c\in X-n$ we have: $$X'=X-n\cup\{r\}\setminus\{c\}$$ $$Y'=Y-n$$ The partition will essentially shift and add $r$ to the equivalence class of $c$.</p></li> <li><p><strong>Left-Continuation</strong>: This is where a node receives and produces a leftward edge. Swap $X$ and $Y$ in the above definition.</p></li> <li><p><strong>Death</strong>: This is where a node is a local maximum, receiving two edges. We need that $$X'=X-n\setminus\{c_x\}$$ $$Y'=Y-n\setminus\{c_y\}$$ where $c_x$ and $c_y$ are appropriate members of $X-n$ and $Y-n$ respectively. Neither $X'$ nor $Y'$ may contain $0$. The equivalence classes representing $c_x$ and $c_y$ will be merged. 
If $c_x$ and $c_y$ were the last remaining representatives of their equivalence class, then the remaining edges were not connected to them; this transition is invalid unless $X'=Y'=\{\}$.</p></li> </ul> <p>Then, in the matrix, the number of ways to transition from one state to another is the number of $n$'s such that the latter is an $n$-translate of the former. My code works by generating every <em>conceivable</em> state - i.e. every pair $(X,Y,P)$ where $|X|=|Y|$ and $P$ is a partition of $X\cup Y$. Some of these are inaccessible from the starting state or cannot access the ending state, but I did not write code to prune such states away. I also did not take advantage of the symmetry $(X,Y,P)\rightarrow(Y,X,P)$. Either would likely lead to big increases in efficiency, allowing the computation of more values $P^{(r)}$.</p> <p>It is worthy of note that $$\sum_{k=2}^{\infty}a\cdot \frac{kM^{k-1}}{(2r)^k}\cdot b=a\cdot\left(\sum_{k=2}^{\infty}\frac{kM^{k-1}}{(2r)^k}\right)\cdot b$$ by linearity, meaning we can first use the identity that $$\sum_{k=2}^{\infty}\frac{kM^{k-1}}{(2r)^k}=\frac{1}{4r^2}(2I-\frac{1}{2r}M)\cdot M\cdot(\frac{1}{2r}M-I)^{-2}$$ which is easily derived by noting that $M$ acts, for these purposes, just like a real number with $|M|&lt;2r$, as $M$ must have no eigenvalues of absolute value $2r$ or more (given that there are only $(2r)^k$ paths of length $k$, whereas there are about $k\lambda^k$ cycles of length $k$ with identified starting point, where $\lambda$ is the largest eigenvalue of $M$). Then, we can take the transfer matrix, directly compute the infinite sum as above, and extract the desired coefficient. It should be noted that this will always yield a <em>rational</em> answer. 
In particular, we obtain: $$P^{(1)}=\frac{1}2$$ $$P^{(2)}=\frac{7}{18}$$ $$P^{(3)}=\frac{4368595}{13405842}\approx 0.325872$$ $$P^{(4)}=\frac{33373525946827013707062414432976}{117177232862732965149684560993569}\approx 0.284812$$</p> <p>Mathematica code may be found <a href="https://math.stackexchange.com/revisions/1270310/8">in this revision</a>. It may be noted that the methods may be modified easily to handle asymmetric cases or cases with non-uniform probability - essentially, instead of using our transfer matrix as a count, we can use it to measure the probability of a cycle occurring <em>given</em> that we start on a given point of it.</p>
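<p>To illustrate the last point, here is a short exact-arithmetic sketch in Python (my own helper code, not the Mathematica linked above). For the reduced matrices used in this answer the count is $N_k = a\,M^{k-2}v$, so the identity takes the equivalent form $P=\frac{1}{d^2}\,a\,(2I-X)(I-X)^{-2}\,v$ with $X=M/d$, where $d$ is the number of equally likely jumps ($4$, $5$, and $6$ in the cases below). It reproduces $P^{(2)}=\frac{7}{18}$ and $P^{(3)}=\frac{4368595}{13405842}$ as exact rationals, along with the asymmetric value $0.3121961$:</p>

```python
from fractions import Fraction as F

def mat_mul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

def solve(A, B):
    """Solve A Y = B over the rationals by Gauss-Jordan elimination."""
    n = len(A)
    M = [A[i][:] + B[i][:] for i in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)  # pivot row
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [x / piv for x in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [row[n:] for row in M]

def P_exact(M, v, a_index, d):
    """P = sum_{k>=2} k (e_a . M^(k-2) v) / d^k
         = (1/d^2) e_a . (2I - X)(I - X)^(-2) v,  with X = M/d."""
    n = len(M)
    X = [[F(m, d) for m in row] for row in M]
    I = [[F(int(i == j)) for j in range(n)] for i in range(n)]
    ImX = [[I[i][j] - X[i][j] for j in range(n)] for i in range(n)]
    Y = solve(mat_mul(ImX, ImX), [[F(c)] for c in v])  # (I - X)^(-2) v
    z = [sum((2 * I[i][j] - X[i][j]) * Y[j][0] for j in range(n)) for i in range(n)]
    return z[a_index] / (d * d)

def counts(M, v, a_index, terms):
    """First few N_k = e_a . M^(k-2) v, for k = 2, 3, ..."""
    out, x = [], v[:]
    for _ in range(terms):
        out.append(x[a_index])
        x = [sum(M[i][j] * x[j] for j in range(len(v))) for i in range(len(v))]
    return out

# r = 2, two-state symmetric reduction (N_k = 2 for every k >= 2):
P2 = P_exact([[1, 0], [1, 0]], [2, 2], 1, 4)
# Asymmetric jump set {-2,-1,1,2,3} (states B..H, final state H last):
MA = [[0, 0, 1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0, 0],
      [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 1, 1, 0, 0],
      [2, 1, 1, 0, 0, 1, 0]]
PA = P_exact(MA, [1, 1, 1, 1, 0, 0, 2], 6, 5)
# r = 3, seven-state symmetric reduction:
M3 = [[1, 1, 0, 0, 0, 0, 0], [2, 0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 0, 0],
      [1, 0, 0, 0, 0, 0, 0], [0, 0, 2, 1, 0, 0, 0], [0, 0, 2, 1, 1, 0, 0],
      [2, 1, 0, 0, 0, 1, 0]]
P3 = P_exact(M3, [2, 2, 1, 0, 0, 0, 3], 6, 6)
Ns3 = counts(M3, [2, 2, 1, 0, 0, 0, 3], 6, 6)  # should begin 3, 6, 14, 30, 62, 130
```

<p>The eigenvalue condition above guarantees $(I-X)$ is invertible, so the Gauss-Jordan solve never hits a zero pivot.</p>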
<p>Here is another approach, which involves a lot of elementary computations. The key is similarity. In the case of steps up to size two, this works in the following way: Let $P(i_1,i_2,\dots,i_k)$ denote the probability of stopping at zero after passing through $i_1,i_2,\dots,i_k$.</p> <p>Then the probability you seek is $$ P=\frac 14(P(-2)+P(-1)+P(1)+P(2)). $$ By symmetry (or the first type of similarity) you have $$ P=\frac 12(P(1)+P(2)). $$ Now $P(1)=\frac 14(P(1,-1)+1+P(1,2)+P(1,3))$ where $1=P(1,0)$.</p> <p>Next compute $$ P(1,-1)=\frac 14(P(1,-1,-3)+P(1,-1,-2)+1). $$ Clearly $P(1,-1,-2)=\frac 14$ since the only chance to stop at zero after that path is to go directly to zero, with probability $1/4$. But the path $P(1,-1,-3)$ is similar to $P(1,-1)$ and we see that $$ P(1,-1,-3)=\frac 14 P(1,-1). $$ We will prove the method of similarity only here, and then use it freely: There is a bijection between the paths going after $(1,-1)$ to zero and the paths going after $(1,-1,-3)$ to zero: the bijection assigns to the path $(1,-1,n_1,\dots,n_k)$ with $n_k=0$ the path $(1,-1,-3,n_1-2,\dots,n_k-2,0)$. Since the latter path occurs with probability $\frac 14$ times that of the former, we obtain the factor $\frac 14$ relating $P(1,-1,-3)$ and $P(1,-1)$.</p> <p>So we have $$ 4P(1,-1)=P(1,-1,-3)+P(1,-1,-2)+1=\frac 14 P(1,-1)+\frac 14+1, $$ from which it follows that $P(1,-1)=1/3$.</p> <p>Now $P(1,2)=\frac 14$ since the only chance to stop at zero after that path is to go directly to zero, with probability $1/4$, and $P(1,3)$ is similar to $P(1,-1)$ with $P(1,3)=\frac 14 P(1,-1)=1/{12}$.</p> <p>So $$ P(1)=\frac 14(P(1,-1)+1+P(1,2)+P(1,3))=\frac 14\left(\frac 13+1+\frac 14+\frac{1}{12}\right)=\frac{5}{12}. $$</p> <p>Finally compute $$ P(2)=\frac 14(1+P(2,1)+P(2,3)+P(2,4)). $$ Note that $P(2,3)=\frac 14 P(2,3,1)=\frac 14 P(2,1)$, since the only chance is to jump directly to 1; the second equality follows by similarity. 
We also have $$ P(2,4)= P(1,3)P(2,4,3,1)= P(1,3)P(2,1)=\frac{1}{12}P(2,1), $$ by similarity, where a path after $(2,4)$ is split into one path going to 1 (which gives by similarity $P(1,3)$) and the other part corresponds to a path going to zero after $(2,1)$.</p> <p>Hence $$ P(2)=\frac 14(1+P(2,1)+P(2,3)+P(2,4))=\frac 14\left(1+P(2,1)\left(1+\frac 14+\frac 1{12}\right)\right)=\frac 14\left(1+\frac 43 P(2,1)\right). $$ So we have to compute $P(2,1)=\frac 14(P(2,1,-1)+1+P(2,1,3))$. But $P(2,1,3)=0$ and by similarity $P(2,1,-1)=P(1,-1)=1/3$. So we arrive at $$ P(2,1)=\frac 14\left(\frac 13+1\right)=\frac 13, $$ and so $$ P(2)=\frac 14\left(1+\frac 43 P(2,1)\right)=\frac{13}{36}. $$ Finally $P=\frac 12(P(1)+P(2))=\frac 12(\frac{5}{12}+\frac{13}{36})=\frac{7}{18}$, which coincides with the value you have found.</p> <p>This method can be applied in the case where the steps are of maximal length three, but there are many more cases to consider. For example, we try to compute $P(-1,-2,1)$: $$ P(-1,-2,1)=\frac 16\left(1+P(-1,-2,1,2)+P(-1,-2,1,3)+P(-1,-2,1,4)\right). $$ By similarity we have $P(-1,-2,1,3)=\frac 16(1+P(-1,-2,1))$. We also have $$ P(-1,-2,1,2)=\frac 16\left(1+P(-1,-2,1,2,3)+P(-1,-2,1,2,4)+P(-1,-2,1,2,5)\right) $$ $$ =\frac 16\left(1+\frac 16+\frac 16 P(-1,-2,1)+P(-1,-2,1,2,5)\right). $$ We can expand $P(-1,-2,1,2,5)$ and the only term that causes trouble is $P(-1,-2,1,2,5,8)$. In general, we need a nice expression for all terms $P(-1,-2,1,2,5,8,\dots,2+3k)$. Similarly in the expansion of $P(-1,-2,1,4)$ we need a nice expression for the term $P(-1,-2,1,4,7,\dots, 1+3k)$. I think, if you express $P(-1,-2,1,4,7,\dots, 1+3k)$ in terms of $P(-1,-2,1,4,7,\dots, 1+3k,4+3k)$, and use that $\lim P(-1,-2,1,4,7,\dots, 1+3k)=0$, one could work out a formula for these terms and similarly for $P(-1,-2,1,2,5,8,\dots,2+3k)$.</p>
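<p>As an independent cross-check of $P=\frac{7}{18}\approx 0.389$, here is a quick Monte Carlo sketch in Python. It assumes the reading used in the computations above: the jumper succeeds on landing exactly on $0$ and fails as soon as it lands on a previously visited site (which is why, e.g., $P(1,2)=\frac 14$). The $1000$-step cutoff and the trial count are arbitrary choices of mine:</p>

```python
import random

def safe_return(max_steps=1000, rng=random):
    """One walk with jumps drawn uniformly from {-2,-1,1,2}; success means
    landing exactly on 0, failure means landing on an already-visited site
    (or not returning within max_steps)."""
    pos, visited = 0, set()
    for _ in range(max_steps):
        visited.add(pos)
        pos += rng.choice((-2, -1, 1, 2))
        if pos == 0:
            return True
        if pos in visited:
            return False
    return False

random.seed(20150505)
trials = 100_000
estimate = sum(safe_return() for _ in range(trials)) / trials
```

<p>With $10^5$ trials the standard error is about $0.0015$, so the estimate should land well within $0.01$ of $\frac{7}{18}$.</p>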
matrices
<p>Just wanted some input to see if my proof is satisfactory or if it needs some cleaning up.</p> <p>Here is what I have.</p> <hr> <p><strong>Proof</strong></p> <blockquote> <p>Suppose $A$ is a square matrix and invertible and, for the sake of contradiction, let $0$ be an eigenvalue. Consider $(A-\lambda I)\cdot v = 0$ with $\lambda=0 $ $$\Rightarrow (A- 0\cdot I)v=0$$<br> $$\Rightarrow(A-0)v=0$$<br> $$\Rightarrow Av=0$$</p> <p>We know $A$ is invertible, and in order for $Av = 0$ to hold we would need $v = 0$; but $v$ must be non-trivial in order that $\det(A-\lambda I) = 0$. Here lies our contradiction. Hence, $0$ cannot be an eigenvalue.</p> </blockquote> <p><strong>Revised Proof</strong></p> <blockquote> <p>Suppose $A$ is a square matrix and has an eigenvalue of $0$. For the sake of contradiction, let's assume $A$ is invertible. </p> <p>Consider $Av = \lambda v$: with $\lambda = 0$, this means there exists a non-zero $v$ such that $Av = 0$. This implies $Av = 0v \Rightarrow Av = 0$.</p> <p>For an invertible matrix $A$, $Av = 0$ implies $v = 0$. So, $Av = 0 = A\cdot 0$. Since $v$ cannot be $0$, this means $A$ must not have been one-to-one. Hence, our contradiction: $A$ must not be invertible.</p> </blockquote>
<p>Your proof is correct. In fact, a square matrix $A$ is invertible <strong>if and only if</strong> $0$ is not an eigenvalue of $A$. (You can replace all logical implications in your proof by logical equivalences.)</p> <p>Hope this helps!</p>
<p>This looks okay. Initially, we have that $Av=\lambda v$ for eigenvalues $\lambda$ of $A$. Since $\lambda=0$, we have that $Av=0$. Now assume that $A^{-1}$ exists. </p> <p>Now by multiplying on the left by $A^{-1}$, we get $v=0$. This is a contradiction, since $v$ cannot be the zero vector. So, $A^{-1}$ does not exist. </p>
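<p>For a concrete illustration of the equivalence (my own toy example, using a hand-rolled $2\times 2$ characteristic polynomial rather than a linear-algebra library): the product of the eigenvalues equals $\det A$, so $0$ is an eigenvalue exactly when $\det A=0$, i.e. exactly when $A$ is not invertible.</p>

```python
import math

def eigenvalues_2x2(A):
    """Eigenvalues of a 2x2 matrix as roots of t^2 - tr(A) t + det(A),
    returned together with det(A). Assumes the eigenvalues are real."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return ((tr - disc) / 2, (tr + disc) / 2), det

# Singular matrix: the rows are proportional, so det = 0 and 0 is an eigenvalue.
eigs_singular, det_singular = eigenvalues_2x2([[1, 2], [2, 4]])
# Invertible matrix: det != 0, so neither eigenvalue can be 0.
eigs_invertible, det_invertible = eigenvalues_2x2([[2, 1], [1, 2]])
```
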
combinatorics
<p>You are a student, assigned to work in the cafeteria today, and it is your duty to divide the available food between all students. The food today is a sausage of 1m length, and you need to cut it into as many pieces as students come for lunch, including yourself.</p> <p>The problem is, the knife is operated by the rotating door through which the students enter, so every time a student comes in, the knife comes down and you place the cut. There is no way for you to know if more students will come or not, so after each cut, the sausage should be cut into pieces of approximately equal length. </p> <p>So here is the question: is it possible to place the cuts in a manner that ensures the ratio of the largest and the smallest piece is always below 2?</p> <p>And if so, what is the smallest possible ratio?</p> <p>Example 1 (unit is cm):</p> <ul> <li>1st cut: 50 : 50 ratio: 1 </li> <li>2nd cut: 50 : 25 : 25 ratio: 2 - bad</li> </ul> <p>Example 2</p> <ul> <li>1st cut: 40 : 60 ratio: 1.5</li> <li>2nd cut: 40 : 30 : 30 ratio: 1.33</li> <li>3rd cut: 20 : 20 : 30 : 30 ratio: 1.5</li> <li>4th cut: 20 : 20 : 30 : 15 : 15 ratio: 2 - bad</li> </ul> <p>Sorry for the awful analogy, I think this is a math problem but I have no real idea how to formulate this in a proper mathematical way.</p>
<p>TLDR: $a_n=\log_2(1+1/n)$ works, and is the only smooth solution.</p> <p>This problem hints at a deeper mathematical question, as follows. As has been observed by Pongrácz, there is a great deal of possible variation in solutions to this problem. I would like to find a "best" solution, where the sequence of pieces is somehow as evenly distributed as possible, given the constraints.</p> <p>Let us fix the following strategy: at stage $n$ there are $n$ pieces, of lengths $a_n,\dots,a_{2n-1}$, ordered in decreasing length. You cut $a_n$ into two pieces, forming $a_{2n}$ and $a_{2n+1}$. We have the following constraints:</p> <p>$$a_1=1\qquad a_n=a_{2n}+a_{2n+1}\qquad a_n\ge a_{n+1}\qquad a_n&lt;2a_{2n-1}$$</p> <p>I would like to find a nice function $f(x)$ that interpolates all these $a_n$s (and possibly generalizes the relation $a_n=a_{2n}+a_{2n+1}$ as well).</p> <p>First, it is clear that the only degree of freedom is in the choice of cut, which is to say if we take any sequence $b_n\in (1/2,1)$ then we can define $a_{2n}=a_nb_n$ and $a_{2n+1}=a_n(1-b_n)$, and this will completely define the sequence $a_n$.</p> <p>Now we should expect that $a_n$ is asymptotic to $1/n$, since it drops by a factor of $2$ every time $n$ doubles. Thus one regularity condition we can impose is that $na_n$ converges. If we consider the "baseline solution" where every cut is at $1/2$, producing the sequence</p> <p>$$1,\frac12,\frac12,\frac14,\frac14,\frac14,\frac14,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\dots$$ (which is not technically a solution because of the strict inequality, but is on the boundary of solutions), then we see that $na_n$ in fact does <em>not</em> tend to a limit - it varies between $1$ and $2$.</p> <p>If we average this exponentially, by considering the function $g(x)=2^xa_{\lfloor 2^x\rfloor}$, then we get a function which gets closer and closer to being periodic with period $1$. 
That is, there is a function $h(x):[0,1]\to\Bbb R$ such that $g(x+n)\to h(x)$, and we need this function to be constant if we want $g(x)$ itself to have a limit.</p> <p>There is a very direct relation between $h(x)$ and the $b_n$s. If we increase $b_1$ while leaving everything else the same, then $h(x)$ will be scaled up on $[0,\log_2 (3/2)]$ and scaled down on $[\log_2 (3/2),1]$. None of the other $b_i$'s control this left-right balance - they make $h(x)$ larger in some subregion of one or the other of these intervals only, but preserving $\int_0^{\log_2(3/2)}h(x)\,dx$ and $\int_{\log_2(3/2)}^1h(x)\,dx$.</p> <p>Thus, to keep these balanced we should let $b_1=\log_2(3/2)$. More generally, each $b_n$ controls the balance of $h$ on the intervals $[\log_2(2n),\log_2(2n+1)]$ and $[\log_2(2n+1),\log_2(2n+2)]$ (reduced$\bmod 1$), so we must set them to $$b_n=\frac{\log_2(2n+1)-\log_2(2n)}{\log_2(2n+2)-\log_2(2n)}=\frac{\log(1+1/2n)}{\log(1+1/n)}.$$</p> <p>When we do this, a miracle occurs, and $a_n=\log_2(1+1/n)$ becomes analytically solvable: \begin{align} a_1&amp;=\log_2(1+1/1)=1\\ a_{2n}+a_{2n+1}&amp;=\log_2\Big(1+\frac1{2n}\Big)+\log_2\Big(1+\frac1{2n+1}\Big)\\ &amp;=\log_2\left[\Big(1+\frac1{2n}\Big)\Big(1+\frac1{2n+1}\Big)\right]\\ &amp;=\log_2\left[1+\frac{2n+(2n+1)+1}{2n(2n+1)}\right]\\ &amp;=\log_2\left[1+\frac1n\right]=a_n. \end{align}</p> <p>As a bonus, we obviously have that the $a_n$ sequence is decreasing, and if $m&lt;2n$, then \begin{align} 2a_m&amp;=2\log_2\Big(1+\frac1m\Big)=\log_2\Big(1+\frac1m\Big)^2=\log_2\Big(1+\frac2m+\frac1{m^2}\Big)\\ &amp;\ge\log_2\Big(1+\frac2m\Big)&gt;\log_2\Big(1+\frac2{2n}\Big)=a_n, \end{align}</p> <p>so this is indeed a proper solution, and we have also attained our smoothness goal &mdash; $na_n$ converges, to $\frac 1{\log 2}=\log_2e$. 
It is also worth noting that the ratio of the largest to the smallest piece has limit exactly $2$, which validates Henning Makholm's observation that you can't do better than $2$ in the limit.</p> <p>It looks like this (scaled to a total of $100$ and rounded, so the numbers may not add to $100$ exactly):</p> <ul> <li>$58:42$, ratio = $1.41$</li> <li>$42:32:26$, ratio = $1.58$</li> <li>$32:26:22:19$, ratio = $1.67$</li> <li>$26:22:19:17:15$, ratio = $1.73$</li> <li>$22:19:17:15:14:13$, ratio = $1.77$</li> </ul> <p>If you are working with a sequence of points treated $\bmod 1$, where the intervals between the points are the "sausages", then this sequence of segments is generated by $p_n=\log_2(2n+1)\bmod 1$. The result is beautifully uniform but with a noticeable sweep edge:</p> <p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href="https://i.sstatic.net/SCaaE.gif" rel="noreferrer"><img src="https://i.sstatic.net/SCaaE.gif" alt="sausages"></a></p> <p>A more concrete optimality condition that picks this solution uniquely is the following: we require that for any fraction $0\le x\le 1$, the sausage at the $x$ position (give or take a sausage) in the list, sorted in decreasing order, should be at most $c(x)$ times smaller than the largest at all times. This solution achieves $c(x)=x+1$ for all $0\le x\le 1$, and no solution can do better than that (in the limit) for any $x$.</p>
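<p>The closed form is easy to check numerically. The sketch below (Python, my own code) verifies the additivity identity $a_n=a_{2n}+a_{2n+1}$ and confirms that the max/min ratio $a_n/a_{2n-1}$ at stage $n$ stays below $2$ while creeping toward it:</p>

```python
import math

def a(n):
    """Length of piece n: a_n = log2(1 + 1/n)."""
    return math.log2(1 + 1 / n)

# Cutting the largest piece a_n produces exactly a_{2n} and a_{2n+1}:
additivity_error = max(abs(a(n) - (a(2 * n) + a(2 * n + 1))) for n in range(1, 2000))

# At stage n the pieces are a_n >= a_{n+1} >= ... >= a_{2n-1}, so the
# max/min ratio at stage n is a_n / a_{2n-1}.
ratios = [a(n) / a(2 * n - 1) for n in range(1, 2000)]
```

<p>The ratio at stage $2$ is the $58:42$ split ($\approx 1.41$), and by stage $1999$ the ratio has crept above $1.999$.</p>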
<p>YES, it is possible!</p> <p>You mustn't cut a piece in half, because eventually you have to cut one of the two halves, and then you violate the requirement. So in fact, you must never have two equal parts. Make the first cut so that the condition is not violated, say $60:40$. </p> <p>From now on, assume that the ratio of biggest over smallest is strictly less than $2$ in a given round, and no two pieces are equal. (This holds for the $60:40$ cut.) We construct a good cut that maintains this property.</p> <p>So at the next turn, pick the biggest piece, and cut it into two non-equal pieces in an $a:b$ ratio, but very close to equal (so $a/b\approx 1$). All you have to ensure is that </p> <ul> <li>$a/b$ is so close to $1$ that the two new pieces are both smaller than the smallest piece in the last round. </li> <li>$a/b$ is so close to $1$ that the smaller piece is bigger than half of the second biggest in the last round (which is going to become the biggest piece in this round). </li> </ul> <p>Then the condition is preserved. For example, from $60:40$ you can move to $25:35:40$, then cut the forty to obtain $19:21:25:35$, etc.</p>
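<p>This strategy can be played out mechanically. The sketch below is my own concretization of the two bullet points: it always splits the current biggest piece just off-centre, at an arbitrary point strictly inside the interval the two constraints allow, and checks that the max/min ratio stays below $2$ after every cut:</p>

```python
def play(rounds=50):
    """Repeatedly cut the biggest piece just off-centre, keeping both new
    pieces below the old minimum and the new smallest above half the new
    biggest; returns the final max/min ratio."""
    pieces = [60.0, 40.0]                     # a first cut satisfying the condition
    for _ in range(rounds):
        pieces.sort(reverse=True)
        big, second, smallest = pieces[0], pieces[1], pieces[-1]
        # The larger half x must lie strictly in (big/2, hi):
        hi = min(smallest, big - second / 2)
        assert hi > big / 2                   # holds while ratio < 2 and pieces are distinct
        x = big / 2 + 0.37 * (hi - big / 2)   # 0.37 is an arbitrary interior point
        pieces[0:1] = [x, big - x]
        assert max(pieces) / min(pieces) < 2
    return max(pieces) / min(pieces)

final_ratio = play()
```
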
logic
<p>I remember once hearing offhandedly that in set builder notation, there was a difference between using a colon versus a vertical line, e.g. $\{x: x \in A\}$ as opposed to $\{x\mid x \in A\}$. I've tried searching for the distinction, but have come up empty-handed.</p>
<p>There is no difference that I've ever heard of. I do strongly prefer "$\vert$" to "$\colon$", though, because I'm often interested in sets of maps, and e.g. $$\{f \mid f\colon \mathbb{R}\rightarrow\mathbb{C}\text{ with $f(6)=24$}\}$$ is easier to read than $$\{f: f\colon \mathbb{R}\rightarrow\mathbb{C}\text{ with $f(6)=24$}\}$$.</p> <p>EDIT: Note that as Mike Pierce's answer shows, sometimes "$:$" is clearer. At the end of the day, <em>use whichever notation is most clear for your context</em>. </p>
<p>There is no difference. The <em>bar</em> is just often easier to read than the <em>colon</em> (like in the example in Noah Schweber's answer). However in analysis and probability, the <em>bar</em> is used in other notation. In analysis it is used for absolute value (or distance or norms) and in probability it is used in conditional statements (the probability of $A$ given $B$ is $\operatorname{P}(A \mid B)$). So looking at <em>bar</em> versus <em>colon</em> in sets with these notations $$ \{x \in X \mid ||x|-|y_0||&lt;\varepsilon\} \quad\text{vs}\quad \{x \in X : ||x|-|y_0||&lt;\varepsilon\} $$ $$ \{A \subset X \mid \operatorname{P}(B \mid A) &gt; 0.42\} \quad\text{vs}\quad \{A \subset X : \operatorname{P}(B \mid A) &gt; 0.42\} $$ it can be better to use the <em>colon</em> just to avoid overloading the <em>bar</em>.</p>
geometry
<p>Given an infinite (in all directions), $n$-dimensional chess board $\mathbb Z^n$, and a black king. What is the minimum number of white rooks necessary that can guarantee a checkmate in a finite number of moves?</p> <p>To avoid trivial exceptions, assume the king starts a very large distance away from the nearest rook.</p> <p>Rooks can change one coordinate to anything. King can change any set of coordinates by one.</p> <p>And same problem with i) Bishops and ii) Queens, in place of rooks.</p>
<p>In 3-dimensional chess, it is possible to force checkmate starting with a finite number of rooks. As this fact still appears to be open, I'll post a method of forcing checkmate with 96 rooks, even though it should be clear that this is not optimal. You can remove some of the rooks from the method I'll give below, but I am aiming for a simple explanation of the method rather than the fewest possible number of rooks.</p> <p>First, we move all of the rooks far away in the <span class="math-container">$z$</span> direction, so that they cannot be threatened by the king. We also move each of the rooks so that they all have distinct <span class="math-container">$z$</span> coordinates. That way, they are free to move any number of steps in the <span class="math-container">$x$</span> and <span class="math-container">$y$</span> directions without blocking each other. The king will be in check whenever it has the same <span class="math-container">$(x,y)$</span>-coordinate as one of the rooks. We can project onto the <span class="math-container">$(x,y)$</span>-plane to reduce it to a 2-dimensional board. Looked at this way, each rook can move any number of places in the <span class="math-container">$x$</span> or <span class="math-container">$y$</span> direction (rooks can pass through each other, can pass through the king, and you can have multiple rooks in the same <span class="math-container">$(x,y)$</span>-square). The king is in check if it is on the same square as a rook.</p> <p>First, I'll describe the following "blocking move" to stop the king passing a given horizontal (or vertical) line.</p> <p><a href="https://i.sstatic.net/ZMZyA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZMZyA.png" alt="blocking move"></a> </p> <p>In the position above, the right-most 3 rooks are stopping the king moving past the red line on the next move. Then, once the king moves, do the following. 
(i) If the king's <span class="math-container">$x$</span>-coordinate does not change, do nothing. (ii) If the king's <span class="math-container">$x$</span>-coordinate increases by one, move the left-most rook so that it is to the right of the other three. Then you are back in the same position, just moved along by one step. (iii) If the king's <span class="math-container">$x$</span>-coordinate decreases by one step, do nothing. We are back in the same situation, except reflected (so, keep performing the same steps, but reflected in the <span class="math-container">$x$</span>-direction on subsequent moves).</p> <p>This way, we chase the king along the red line, but he can never cross it. Furthermore, if the king changes from going right to going left, we have a free move to do something elsewhere on the board. Actually, for this to work, if the king is in column <span class="math-container">$i$</span>, we just need three rooks at positions <span class="math-container">$i-1,i,i+1$</span> on the row above the red line, and one more at any other position on the row. Next, if we have 4 rooks stationed somewhere on the given horizontal row, how many moves does it take to move them into the blocking position? The answer is 6. You first move one rook to have the same <span class="math-container">$x$</span>-coordinate as the king (say, <span class="math-container">$x=i$</span>). After the king moves, by reflection we can assume he keeps the same <span class="math-container">$x$</span>-coordinate or moves one to the right. Then, move the next rook to position <span class="math-container">$i+2$</span>. Then, after the next move, move a rook to position <span class="math-container">$i-2$</span> or <span class="math-container">$i+4$</span> in such a way that we have three rooks on the row, with one space between each of them, and the king is in one of the 3 middle columns. 
Say, the rooks are at positions <span class="math-container">$j-2,j,j+2$</span> and the king is in column <span class="math-container">$j-1,j$</span> or <span class="math-container">$j+1$</span>. If the king moves to column <span class="math-container">$j-1$</span> or <span class="math-container">$j+1$</span> we just move the 4th rook to this position and we have attained the blocking position. If the king moves to column <span class="math-container">$j$</span>, we move the rook in position <span class="math-container">$j-2$</span> to position <span class="math-container">$j-1$</span> and, on the next move, we can move the 4th rook in to attain the blocking position. If the king moves to column <span class="math-container">$j+2$</span>, we move the rook in column <span class="math-container">$j-2$</span> to <span class="math-container">$j+4$</span>, then we are in the position above where there are rooks at positions <span class="math-container">$k-2,k,k+2$</span> and the king in position <span class="math-container">$k$</span>, so it takes 2 more moves to attain the blocking position.</p> <p>So, we just need to keep 4 rooks stationed along the row which we wish to block the king from crossing. Whenever he moves within 6 steps from this row, start moving the rooks into the blocking position, and he can never step into the given row.</p> <p>Now, choose a large rectangle surrounding the king, and position 15 rooks in each corner as below.</p> <p><a href="https://i.sstatic.net/GIe2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GIe2C.png" alt="rectangle"></a> </p> <p>Also, position 4 rooks in arbitrary positions along each edge of the rectangle. So, that's <span class="math-container">$4\times15+4\times4=76$</span> rooks used so far. I purposefully left some of the board blank in the diagram above. The point is to not specify exactly how big the rectangle is. 
It doesn't matter, just so long as it is large enough to be able to move the 76 rooks into position before the black king can get within 6 steps of any of the edges of the rectangle.</p> <p>Now, once we are in this position, then whenever the black king moves within one of the red rectangles, use the 4 rooks positioned along the adjacent edge to perform the blocking move as described above to stop the king crossing that edge. We can keep doing this, and imprison the black king within the big rectangle. Furthermore, we keep getting free moves to do something else whenever the king moves out of the red rectangles, or whenever he changes direction within a red rectangle. Also, if the king is in one of the inside corners of a red rectangle, there is already a rook in the corresponding position at the adjacent edge of the big rectangle, giving us a free move.</p> <p>Now, suppose that we have an extra 20 rooks. During the free moves we get while chasing the king around the edge of the big square, we can move these to any position we like. With 20 rooks, we can position 16 of them, one to the left of each of the 16 rooks near the right-hand corners of the big rectangle which have an empty square to their left. Also, position 4 rooks along the column one step to the left of the right-hand edge of the big rectangle. This way, we create a new rectangle one square smaller in the <span class="math-container">$x$</span>-direction. If the king ever enters the right-hand red rectangle or one step to the left of this, we use the new 4 blocking rooks to stop him from reaching the right-hand edge of the new big rectangle. If he is already within the red rectangle, and stays there, then, when we get a free move, we can move one of the new blocking rooks to the position one above or below the row in which the king is. Then we can bring the other 3 rooks in, blocking him out of this column. 
In this way, we create a new big rectangle one step smaller in the <span class="math-container">$x$</span>-direction and with the king still trapped inside. Similarly, we can reduce the height of the big rectangle by 1. Repeat this, enclosing the king in ever smaller rectangles until, eventually, he gets trapped in the single square within a 3x3 rectangle surrounded by 8 rooks. Then bring one of the other rooks in to cover this square, which is checkmate.</p>
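<p>The blocking invariant is easy to sanity-check by simulation. The sketch below uses a simplified, symmetric version of the rule (always re-centre the rook trio on the king's column with a single rook move per king move, rather than the exact four-rook procedure with a spare described above) against an arbitrarily moving king, and checks that every square of the guarded row the king could step to is always occupied:</p>

```python
import random

def chase(moves=2000, seed=7):
    """King random-walks below a guarded row; one rook move per king move
    keeps rooks (on that row) at columns {k-1, k, k+1} around king column k."""
    rng = random.Random(seed)
    king = 0
    rooks = {-1, 0, 1}                     # occupied columns on the guarded row
    for _ in range(moves):
        king += rng.choice((-1, 0, 1))     # the king's x-move this turn
        # Reply: slide the rook now two squares behind the king around to the front.
        if king - 1 not in rooks:
            rooks.remove(king + 2)
            rooks.add(king - 1)
        elif king + 1 not in rooks:
            rooks.remove(king - 2)
            rooks.add(king + 1)
        # Invariant: every square of the row the king could step to is occupied.
        assert {king - 1, king, king + 1} <= rooks
    return True

ok = chase()
```

<p>Since the rooks never leave the guarded row and only one rook moves per turn, this matches the turn alternation of the projected game, though it ignores the free-move bookkeeping the full strategy relies on.</p>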
<p>As the beginnings of an impossibility proof, let's consider the possibility of blocking off one direction of motion to the king in three dimensions. Consider the following game:</p> <pre><code>............. ........... ......... ....... ..... ... K </code></pre> <p>Each turn, you get to place one # on the top row, and then the king gets to move his K one space forward, either straight or diagonally, but cannot pass through a #. If the K reaches the back row, you lose. This game is actually winnable with this size grid or larger, but not with a smaller grid.</p> <p>How does this compare to the chess game? Well, if we just consider one direction of motion, we can project from 3 dimensions down to two. If the king is not in the same plane as any of your rooks, then each of your rooks can block off a single square in the projected game.</p> <p>Unfortunately, you can't really guarantee being able to place one rook per turn in the correct position (and don't forget in the real game, we are trying to block 3 directions of motion at once!) So now consider the same game, but the king gets to make <em>two</em> moves for every move you make.</p> <p>I don't think you can win this version of the game.</p> <p>Now, a better version of the game would be to let you place one # per turn anywhere, and the king gets two moves (or less, if he wants) per turn in any direction. I don't know if this game might be winnable; it's not quite the Conway Angel problem that was mentioned (jumping isn't allowed).</p> <p>But again, you're trying to bound the king in three directions at once. So you're playing three games at once, and on your turn you get to place one piece on any of the three boards, but your opponent gets two moves (or less if he likes) on all boards at once. It seems... 
unlikely that this game is winnable.</p> <p>So, in order to actually force a checkmate, you're going to have to do one of:</p> <ul> <li>Take some advantage of the initial arrangement of your rooks</li> <li>Find some way to make useful moves on two or three of the projected boards at once</li> <li>Get your rooks in the same plane as the king frequently enough to be meaningful</li> </ul> <p>It's not clear that this can be done.</p> <hr> <p>Update: you don't need to win in 3 directions at once, just two. More precisely, consider a game on an infinite two-dimensional grid where you have pieces # and your opponent has a piece K. You and your opponent alternate turns, and the rules are:</p> <ul> <li>You can move a # horizontally or vertically any distance.</li> <li>You are allowed to move a # onto the K.</li> <li>Your opponent can move one space in any of the 8 directions or remain still.</li> <li>Your opponent is not allowed to end his turn on a #.</li> </ul> <p>You win if your opponent has no legal moves. If you can win this game in N moves with M #'s, then you can force a checkmate as follows:</p> <ul> <li>Imagine the above two-dimensional grid as a projection of the three-dimensional chessboard.</li> <li>Move your rooks vertically 2M + N + lots more spaces away from the king, all with different vertical coordinates.</li> <li>Play the above game, where the #'s are rooks and the K is the king.</li> </ul> <p>If you can build walls quickly enough in just two dimensions, of course, that would give a winning strategy for the above game (given enough #'s): wall in the K, then move #'s to fill all of the interior squares.</p>
logic
<p>Because otherwise the sentence <em>"It is not possible for someone to find a counter-example"</em> would itself be a proof.</p> <p>I mean: are there hypotheses that are false, but whose counter-examples lie somewhere we cannot find them, even with super computers?</p> <p>Sorry if this is a silly question. </p> <p>Thanks a lot.</p>
<p>A standard example of this is <a href="http://en.wikipedia.org/wiki/Halting_problem">the halting problem</a>, which states essentially:</p> <blockquote> <p>There is no program which can always determine whether another program will eventually terminate. </p> </blockquote> <p>Thus there must be some program which does not terminate, but no proof that it does not terminate exists. Otherwise for any program, we could run the program and at the same time search for a proof that it does not terminate, and either the program would eventually terminate or we would find such a proof. </p> <p>To match the phrasing of your question, this means that the statement:</p> <blockquote> <p>If a program does not terminate, there is some proof of this fact.</p> </blockquote> <p>is false, but no counterexample can be found.</p>
<p>I actually like this one:</p> <p>There are uncountably many real numbers. However, given that all specifications of specific real numbers (be it by digits, by an algorithm, or even a description of the number in plain English) are ultimately given by a finite string of finitely many symbols, there are only countably many descriptions of real numbers.</p> <p>A straightforward formalization (but not the only possible, nor the most general one) of that idea is to model the descriptions as natural numbers (think e.g. of storing the description in a file, and then interpreting the file as a natural number), and then having a function from the natural numbers (that is, the descriptions) to subsets of the real numbers (namely the set of real numbers which fit the description). A description which uniquely describes a real number would, in this model, be a natural number which maps to a one-element subset of the real numbers; the single element of that subset is the real number described. Since there are only countably many natural numbers (by definition), they can only map to at most countably many one-element subsets, whose union therefore only contains countably many real numbers. Since there are uncountably many real numbers, there must be uncountably many numbers not in this set.</p> <p>Therefore in this formalization, for any given mapping, almost every real number cannot be individually specified by any description. Therefore there exist uncountably many counterexamples to the claim "you can uniquely specify any real number".</p> <p>Of course I cannot give a counterexample, because to give a counterexample, I'd have to specify an individual real number violating the claim, but if I could specify it, it would not violate the claim and therefore not be a counterexample.</p> <p>Note that in the first version, I omitted that possible formalization.
As I learned from the comments and especially <a href="https://mathoverflow.net/a/44129/1682">this MathOverflow post</a> linked from them, in the original generality the argument is wrong.</p>
linear-algebra
<p>What purposes do the Dot and Cross products serve? </p> <p>Do you have any clear examples of when you would use them?</p>
<p>When you deal with vectors, sometimes you say to yourself, "Darn I wish there was a function that..."</p> <ul> <li><p>was zero when two vectors are perpendicular, letting me test perpendicularness."</p> <p><strong>Dot Product</strong></p></li> <li><p>would let me find the angle between two vectors."</p> <p><strong>Dot Product</strong> (actually gives the cosine of the angle between two normalized vectors)</p></li> <li><p>would let me 'project' one vector onto another, or give the length of one vector in the direction of another."</p> <p><strong>Dot Product</strong></p></li> <li><p>could tell me how much force is actually helping the object move, when pushing at an angle."</p> <p><strong>Dot Product</strong></p></li> <li><p>could tell me how much a vector field is 'spreading out'."</p> <p><strong>Dot Product</strong> (the divergence, $\nabla \cdot F$, is built from it)</p></li> <li><p>could give me a vector that is perpendicular to two other vectors."</p> <p><strong>Cross Product</strong></p></li> <li><p>could tell me how much torque a force was applying to a rotating system."</p> <p><strong>Cross Product</strong></p></li> <li><p>could tell me how much this vector field is 'curling' up."</p> <p><strong>Cross Product</strong></p></li> </ul> <p>There are actually a lot more uses, but the more I study vectors, the more I run into situations where I need a function to do <em>exactly</em> something, and then realize that the cross/dot products already do exactly what I need!</p>
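<p>A few of these one-liners are easy to demonstrate concretely. The following is a minimal plain-Python sketch (the function names are mine, not from any library) of the perpendicularity test, the angle formula, and the perpendicular vector produced by the cross product:</p>

```python
import math

def dot(u, v):
    # Componentwise products, summed.
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    # 3D cross product: a vector perpendicular to both u and v.
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def angle(u, v):
    # Angle between u and v, recovered from the cosine the dot product gives.
    return math.acos(dot(u, v) / math.sqrt(dot(u, u) * dot(v, v)))

# Zero dot product <=> perpendicular.
assert dot((1, 0, 0), (0, 1, 0)) == 0

# Angle between (1,0,0) and (1,1,0) is 45 degrees.
print(math.degrees(angle((1, 0, 0), (1, 1, 0))))  # ~45.0

# The cross product is perpendicular to both of its inputs.
n = cross((1, 0, 0), (0, 1, 0))
assert dot(n, (1, 0, 0)) == 0 and dot(n, (0, 1, 0)) == 0
print(n)  # (0, 0, 1)
```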
<p>The dot product can be used to find the length of a vector or the angle between two vectors.</p> <p>The cross product is used to find a vector which is perpendicular to the plane spanned by two vectors.</p>
game-theory
<p>In the game <a href="https://en.wikipedia.org/wiki/Hex_(board_game)" rel="noreferrer">hex</a>, at least one player always wins because they can form a chain of hexagons across the board. This led me to wonder, what happens if we generalise to infinitely many points? </p> <p>Specifically, if every point in a unit square (including boundaries) is coloured red or blue, does there necessarily exist a continuous function $f: [0,1] \to [0,1]\times [0,1]$ such that $f(x)$ is either</p> <p>a) Always red$\space\space$ and $f(0)=(0,a), f(1)=(1,b)$ for some a,b</p> <p>b) Always blue and $f(0)=(a,0), f(1)=(b,1)$ for some a,b</p> <p>Furthermore, if there exists a function such that (a) is true, then does that necessarily mean there does not exist a function such that (b) is true?</p> <p>(In the example, red wins with the path shown and blue loses)</p> <p><a href="https://i.sstatic.net/IjM6e.png" rel="noreferrer"><img src="https://i.sstatic.net/IjM6e.png" alt="square filled with red and blue dots, with a green wavy line from left to right"></a></p> <p>My intuition tells me that this is true, but I have no idea how to begin proving it. My best idea was to colour the regions to the left and right of the square red. Then anything connected to this red region is marked green. If the other side is connected to this then we are done. Otherwise, take the points along the boundary of this green region. They must be blue, otherwise there would exist a point closer to the region that is blue (by definition of the green region). Hence this boundary reaches all the way down to the bottom and we are done. But I'm not sure if this green region is well-defined or anything and have no idea how to show that it is.</p> <p>(Also, I've got no idea what tag(s) to put on this, sorry)</p>
<p>No. In fact, it is possible to color the entire unit square red and blue so that there is <em>no</em> nonconstant continuous function $f:[0,1]\to[0,1]\times[0,1]$ whose image is entirely one color. The key observation is that the image of any such nonconstant function is a closed subset of $[0,1]\times[0,1]$ of cardinality $\mathfrak{c}$, and there are only $\mathfrak{c}$ such closed subsets. You can then construct a coloring by transfinite induction so that each such set has both a red point and a blue point.</p> <p>In detail, let $(X_\alpha)_{\alpha&lt;\mathfrak{c}}$ be an enumeration of all the closed subsets of $[0,1]\times[0,1]$ of cardinality $\mathfrak{c}$. We define sequences $(r_\alpha)_{\alpha&lt;\mathfrak{c}}$ and $(b_\alpha)_{\alpha&lt;\mathfrak{c}}$ by induction. Having defined $r_\beta$ and $b_\beta$ for all $\beta&lt;\alpha$, define $r_\alpha$ and $b_\alpha$ to be two distinct points of $X_\alpha$ which are not equal to $r_\beta$ or $b_\beta$ for any $\beta&lt;\alpha$. This is possible since we have chosen fewer than $\mathfrak{c}$ points so far, and $X_\alpha$ has cardinality $\mathfrak{c}$.</p> <p>We thus obtain two disjoint sets $R=\{r_\alpha\}_{\alpha&lt;\mathfrak{c}}$ and $B=\{b_\alpha\}_{\alpha&lt;\mathfrak{c}}$ which each intersect every $X_\alpha$. Color all the points in $R$ red, and all the points in $B$ blue, and all the points that are in neither $R$ nor $B$ however you want. Then neither the red points nor the blue points contain any $X_\alpha$, and thus neither contains the image of a nonconstant continuous function $f:[0,1]\to[0,1]\times[0,1]$.</p> <hr> <p>However, it is true that <em>at most</em> one of your conditions (a) and (b) is true. See <a href="https://math.stackexchange.com/a/1937554/86856">this answer to an earlier question</a> for a proof. (The proof there assumes the endpoints of the paths are the corners of the square, but a similar argument works in general.
Or, you can apply a homeomorphism of the square that sends the endpoints of the paths to the corners.)</p>
<p>Color $(x,y)\in[0,1]^2$ red if $x=0$ or $y=1/2\cdot\sin(1/x)$. Color everything else blue. There are no paths of either color connecting its respective edges. </p> <p><img src="https://i.sstatic.net/AOnkB.png" width="200" /></p> <p>Note that the red path does not "reach" the line $x=0$. See also this post and the counterexample in the answers: <a href="https://math.stackexchange.com/questions/1280823/intermediate-value-theorem-for-curves">&quot;Intermediate Value Theorem&quot; for curves</a></p>
matrices
<p>Do positive semidefinite matrices have to be symmetric? Can you have a non-symmetric matrix that is positive definite? I can't seem to figure out why you wouldn't be able to have such a matrix, but all my notes specify positive definite matrices as "symmetric <span class="math-container">$n \times n$</span> matrices."</p> <p>Can anyone help me with an example of a non-symmetric positive definite matrix, or some insight into a proof for why it would need to be symmetric should that be the case? Thanks!</p>
<p>No, they don't, but symmetric positive definite matrices have very nice properties, so that's why they appear often.</p> <p>An example of a non-symmetric positive definite matrix is $$M=\pmatrix{2&amp;0\\2&amp;2}.$$ Indeed, $$\pmatrix{x\\y}^T\pmatrix{2&amp;0\\2&amp;2}\pmatrix{x\\y} = (x+y)^2 + x^2 + y^2$$ which is strictly greater than $0$ whenever the vector is non-zero.</p>
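<p>A quick numerical sanity check of this example (a minimal plain-Python sketch, no linear-algebra library assumed; the helper name is mine):</p>

```python
import random

# The 2x2 example from the answer, stored as rows.
M = [[2, 0],
     [2, 2]]

def quad_form(M, v):
    # v^T M v for a 2x2 matrix M and vector v = (x, y).
    x, y = v
    return x * (M[0][0] * x + M[0][1] * y) + y * (M[1][0] * x + M[1][1] * y)

# Matches the algebraic identity (x+y)^2 + x^2 + y^2 from the answer.
x, y = 3.0, -5.0
assert quad_form(M, (x, y)) == (x + y) ** 2 + x ** 2 + y ** 2

# Spot-check strict positivity on random nonzero vectors.
random.seed(0)
for _ in range(1000):
    v = (random.uniform(-10, 10), random.uniform(-10, 10))
    assert quad_form(M, v) > 0
```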
<p>Let me just add that there exists a branch of optimization and variational analysis in which the notion of a nonnegative definite matrix that is not symmetric is fundamental. Consider a smooth convex function; by convexity its Hessian is nonnegative definite, while by Schwarz's theorem it is also symmetric. However, there exist "non-gradient operators", i.e. operators which do not integrate into a function but still possess a notion of convexity. This is the notion of a monotone operator; in the smooth case its Jacobian is a nonnegative definite matrix which may not be symmetric.</p>
geometry
<p>The <a href="http://en.wikipedia.org/wiki/Pythagorean_theorem" rel="nofollow noreferrer">Pythagorean Theorem</a> is one of the most popular to prove by mathematicians, and there are <a href="http://www.cut-the-knot.org/pythagoras/" rel="nofollow noreferrer">many proofs available</a> (including one from <a href="http://en.wikipedia.org/wiki/James_A._Garfield" rel="nofollow noreferrer">James Garfield</a>).</p> <p>What's are some of the most elegant proofs?</p> <p>My favorite is this graphical one:</p> <p><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/1/16/Pythagorean_Proof_%283%29.PNG/220px-Pythagorean_Proof_%283%29.PNG" alt="alt text" /></p> <p>According to cut-the-knot:</p> <blockquote> <p>Loomis (pp. 49-50) mentions that the proof &quot;was devised by Maurice Laisnez, a high school boy, in the Junior-Senior High School of South Bend, Ind., and sent to me, May 16, 1939, by his class teacher, Wilson Thornton.&quot;</p> <p>The proof has been published by Rufus Isaac in Mathematics Magazine, Vol. 48 (1975), p. 198.</p> </blockquote>
<p>I really like this one (image taken from <a href="http://www.cut-the-knot.org/pythagoras/" rel="noreferrer">cut-the-knot</a> #4)</p> <p><img src="https://i.sstatic.net/NRfbG.gif" alt="alt text"></p> <p>along with </p> <p>$$(a+b)^2 = 4\cdot\frac{1}{2}a b + c^2$$ $$\Leftrightarrow c^2 = a^2 + b^2$$</p> <p>It's so clear and easy ...</p> <p><em>Note: Just to make it clear, the side of the square is a+b</em></p>
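<p>The algebraic step behind the picture can be spot-checked numerically. This is only a sketch verifying that $(a+b)^2 - 4\cdot\frac{1}{2}ab$ equals $a^2+b^2$ identically, for random positive $a, b$:</p>

```python
import random

# The picture says (a+b)^2 = 4 * (ab/2) + c^2; solving for c^2 gives
# c^2 = (a+b)^2 - 2ab, which should equal a^2 + b^2 identically.
random.seed(1)
for _ in range(1000):
    a = random.uniform(0.1, 100.0)
    b = random.uniform(0.1, 100.0)
    c_squared = (a + b) ** 2 - 4 * (0.5 * a * b)
    assert abs(c_squared - (a ** 2 + b ** 2)) < 1e-8 * (a ** 2 + b ** 2)
```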
<p>The proof that uses the fact that shearing a parallelogram parallel to one of its sides preserves area is my favorite. Here's an animation from <a href="http://www.arseweb.com/rupe/pythag/proof.html">this site</a>:</p> <p><img src="https://i.sstatic.net/W9r8D.gif" alt="animated proof"></p>
logic
<p>I am looking for an undecidable problem that I could give as an easy example in a presentation to the general public. I mean easy in the sense that the mathematics behind it can be described, well, without mathematics, that is with analogies and intuition, avoiding technicalities.</p>
<p><strong>"Are these two real numbers</strong> (or functions, or grammars, or mathematical statements) <strong>equivalent?"</strong><br> <em>(See also <a href="http://en.wikipedia.org/wiki/Word_problem_%28mathematics%29" rel="noreferrer">word problem</a>)</em></p> <p><strong>"Does this statement follow from these axioms?"</strong><br> <em>(Hilbert's <a href="http://en.wikipedia.org/wiki/Entscheidungsproblem" rel="noreferrer">Entscheidungsproblem</a>)</em></p> <p><strong>"Does this computer program ever stop?"</strong><br> <strong>"Does this computer program have any security vulnerabilities?"</strong><br> <strong>"Does this computer program do &lt;any non-trivial statement>?"</strong><br> <em>(The <a href="http://en.wikipedia.org/wiki/Halting_problem" rel="noreferrer">halting-problem</a>, from which <a href="https://en.wikipedia.org/wiki/Rice%27s_theorem" rel="noreferrer">all semantic properties</a> can be reduced)</em></p> <p><strong>"Can this set of domino-like tiles tile the plane?"</strong><br> <em>(See <a href="http://en.wikipedia.org/wiki/Wang_tile" rel="noreferrer">Tiling Problem</a>)</em></p> <p><strong>"Does this <a href="http://en.wikipedia.org/wiki/Diophantine_equation" rel="noreferrer">Diophantine equation</a> have an integer solution?"</strong><br> <em>(See <a href="http://en.wikipedia.org/wiki/Hilbert%27s_tenth_problem" rel="noreferrer">Hilbert's Tenth Problem</a>)</em></p> <p><strong>"Given two lists of strings, is there a list of indices such that the concatenations from both lists are equal?"</strong><br> <em>(See <a href="http://www.loopycode.com/a-surprisingly-hard-problem-post-correspondence/" rel="noreferrer">Post correspondence problem</a>)</em></p> <hr> <p>There is also a large list on <a href="http://en.wikipedia.org/wiki/List_of_undecidable_problems" rel="noreferrer">wikipedia</a>.</p>
<p>I think the <a href="http://en.wikipedia.org/wiki/Post_correspondence_problem" rel="nofollow noreferrer">Post correspondence problem</a> is a very good example of a simple undecidable problem that is also relatively unknown.</p> <p>Given a finite set of string tuples, each with an index <span class="math-container">$i$</span>, a left string <span class="math-container">$l(i)$</span> and a right string <span class="math-container">$r(i)$</span>, the problem is to determine if there is a finite sequence of index values <span class="math-container">$i(1),i(2),\dots,i(n)$</span>, allowing for repetition, such that the concatenation of the left strings <span class="math-container">$l(i(1)),\dots,l(i(n))$</span> is equal to the concatenation of the corresponding right strings <span class="math-container">$r(i(1)),\dots,r(i(n))$</span>. For example, with three tuples with <span class="math-container">$(l(i), r(i)), i$</span> as follows:</p> <pre><code>(a , baa) X (ab, aa) Y (bba, bb) Z </code></pre> <p>we may use the index sequence <span class="math-container">$Z, Y, Z, X$</span>:</p> <pre><code>(bba, bb) Z (ab, aa) Y (bba, bb) Z (a, baa) X ------------ gives (bbaabbbaa, bbaabbbaa) </code></pre> <p>The only big issue I have with this problem is that the only undecidability proof I know of falls back on simulating a Turing machine --- it would be nice to find a more elementary alternate version.</p>
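<p>For small instances, the search for a matching index sequence can be brute-forced; the undecidability only bites in the general case, where no bound on the sequence length exists. A minimal sketch (helper name mine) using the tuples above:</p>

```python
from itertools import product

# The (left, right) tuples from the example, keyed by index.
tiles = {'X': ('a', 'baa'), 'Y': ('ab', 'aa'), 'Z': ('bba', 'bb')}

def find_match(tiles, max_len):
    # Try every index sequence up to max_len; return the first one whose
    # left and right concatenations agree.  (PCP is undecidable in general,
    # so any such search must cut off at some fixed length.)
    for n in range(1, max_len + 1):
        for seq in product(tiles, repeat=n):
            left = ''.join(tiles[i][0] for i in seq)
            right = ''.join(tiles[i][1] for i in seq)
            if left == right:
                return seq
    return None

print(find_match(tiles, 4))  # ('Z', 'Y', 'Z', 'X')
```

No shorter sequence works: the search returns nothing for lengths up to 3.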
combinatorics
<p>I had seen this problem a long time back and wasn't able to solve it. For some reason I was reminded of it and thought it might be interesting to the visitors here.</p> <p>Apparently, this problem is from a mathematics magazine of some university in the United States (sorry, no idea about either).</p> <p>So the problem is:</p> <p>Suppose $S \subset \mathbb{Z}$ (set of integers) such that </p> <p>1) $|S| = 15$</p> <p>2) $\forall ~s \in S, \exists ~a,b \in S$ such that $s = a+b$</p> <p>Show that for every such $S$, there is a non-empty subset $T$ of $S$ such that the sum of elements of $T$ is zero and $|T| \leq 7$.</p> <p><strong>Update</strong> (Sep 13)</p> <p>Here is an approach which seems promising; others might be able to take it further.</p> <p>If you look at the set as a vector $s$, then there is a matrix $A$ with the main diagonal being all $1$, each row containing exactly two $-1$'s (or a single $-2$) in the non-diagonal positions, such that $As = 0$.</p> <p>The problem becomes equivalent to proving that for any such matrix $A$ the row space of $A$ contains a vector with all zeroes except for a $1$ and $-1$ or a vector with all zeroes except $\leq 7$ ones.</p> <p>This implies that the numbers in the set $S$ themselves don't matter and we can perhaps replace them with elements from a different field (like say reals, or complex numbers).</p>
<p>A weaker statement, where we allow elements in $T$ to be repeated, can be proved as follows: </p> <p>Since we can replace $S$ by the set $\{-s \mid s\in S \}$, we may assume there are at most $7$ positive numbers in $S$. Let each positive number be a vertex; from each vertex $s$ we draw an arrow to a positive vertex $a$ such that $s=a+b$. Since $s>0$ forces one of $a,b$ to be positive, there is at least one arrow from every vertex. So there must be a cycle $s_1,\cdots,s_n=s_1$ with $n\leq 8$. We can let $T$ consist of $s_i-s_{i+1}$, $1\leq i\leq n-1$. </p>
<p>I started to write a short note devoted to the problem. Its current version is <a href="https://mega.nz/#!ohZilaxQ!gOnBanPN5f8r1niFya57VU3IbQb15x0m-wbM8iwp3pg" rel="nofollow noreferrer">here</a>. I introduced the following general framework for it.</p> <p>Let <span class="math-container">$S$</span> be a subset of an abelian group. The set <span class="math-container">$S$</span> is called <em>decomposable</em>, provided each of its elements <span class="math-container">$a$</span> is decomposable, that is, there exist <span class="math-container">$b,c\in S$</span> with <span class="math-container">$a=b+c$</span>. Clearly, <span class="math-container">$S$</span> is decomposable iff <span class="math-container">$S\subset S+S$</span>. Let <span class="math-container">$z(S)$</span> be the smallest size of a non-empty subset <span class="math-container">$T$</span> of <span class="math-container">$S$</span> such that <span class="math-container">$\sum T=0$</span>, and <span class="math-container">$z(S)=\infty$</span> if no such subset exists.
Given a natural number <span class="math-container">$n$</span> put <span class="math-container">$$z(n)=\sup\{z(S): S\subset S+S\subset\mathbb R, |S|=n\},$$</span> that is, <span class="math-container">$z(n)$</span> is the smallest number <span class="math-container">$m$</span> such that any decomposable set of <span class="math-container">$n$</span> real numbers has a non-empty subset <span class="math-container">$T$</span> of size at most <span class="math-container">$m$</span> such that <span class="math-container">$\sum T=0$</span>.</p> <p>In the above terms, your question asks to show that <span class="math-container">$z(S)\le 7$</span> for any decomposable set <span class="math-container">$S$</span> consisting of <span class="math-container">$15$</span> <em>integer</em> numbers, and Gjergji Zaimi’s <a href="https://mathoverflow.net/questions/16857/existence-of-a-zero-sum-subset">question</a> asks whether <span class="math-container">$z(n)$</span> is finite (in other words, whether <span class="math-container">$z(n)\le n$</span>) for each natural <span class="math-container">$n$</span>.</p> <p>I think that the proposed approaches were not successful because they don't fully exploit the structure of decomposable sets. This concerns even such promising approaches as the <a href="https://math.stackexchange.com/a/2447/71850">search for cycles</a> by curious and the <a href="https://mathoverflow.net/a/16871/43954">summation</a> by Hsien-Chih Chang 張顯之. In particular, these approaches don't ensure that the zero-sum sequences found consist of distinct elements. We shall study decomposable sets in the note. 
</p> <p>Nevertheless, our main result is the following</p> <p><strong>Proposition 3.</strong> For any <span class="math-container">$n\ge 2$</span>, <span class="math-container">$z(n)\ge \left\lfloor\tfrac n2\right\rfloor$</span>.</p> <p><em>Proof.</em> Given a natural number <span class="math-container">$k$</span> put <span class="math-container">$A=\{1,2,4,\dots, 2^{k-1}\}$</span> and <span class="math-container">$S=A\cup (A-(2^k-1))$</span>. Then <span class="math-container">$|S|=2k$</span>. The set <span class="math-container">$S$</span> is decomposable, because, clearly, each number but <span class="math-container">$1$</span> of the set <span class="math-container">$A$</span> is decomposable, <span class="math-container">$1=2^{k-1}+(2^{k-1}-(2^k-1))$</span>, <span class="math-container">$2^l-(2^k-1)=2^{l-1}+(2^{l-1}-(2^k-1))$</span> for each <span class="math-container">$l=1,\dots, k-1$</span>, and <span class="math-container">$1-(2^k-1)=(2^{k-1}-(2^k-1))+(2^{k-1}-(2^k-1))$</span>.</p> <p>Let <span class="math-container">$T$</span> be a subset of <span class="math-container">$S$</span> with <span class="math-container">$|T|\le k-1$</span> and <span class="math-container">$\sum T=0$</span>. Then clearly <span class="math-container">$k\ge 3$</span> and <span class="math-container">$T$</span> contains at most <span class="math-container">$k-2$</span> positive elements. Since all of them are distinct elements of <span class="math-container">$A$</span>, their sum is at most <span class="math-container">$2^k-2$</span>. On the other hand, the biggest negative element of the set <span class="math-container">$A-(2^k-1)$</span> is <span class="math-container">$-2^{k-1}+1=-(2^k-2)/2$</span>. Thus if <span class="math-container">$T$</span> contains at least two negative elements then <span class="math-container">$\sum T&lt;0$</span>. 
If <span class="math-container">$T$</span> contains exactly one negative element then <span class="math-container">$\sum T=0$</span> implies that we have a representation of <span class="math-container">$2^k-1$</span> as a sum of at most <span class="math-container">$k-1$</span> powers of <span class="math-container">$2$</span>, with at most one power used twice. This representation collapses to a sum of at most <span class="math-container">$k-1$</span> distinct powers of <span class="math-container">$2$</span>. If the representation contains a power <span class="math-container">$2^l$</span> with <span class="math-container">$l\ge k$</span> then it is bigger than <span class="math-container">$2^k-1$</span>. Otherwise the sum is at most <span class="math-container">$2^1+2^2+\dots +2^{k-1}=2^k-2&lt;2^k-1$</span>. Thus <span class="math-container">$z(S)\ge k$</span>.</p> <p>A set <span class="math-container">$\{-1,0,1\}$</span> witnesses that <span class="math-container">$z(3)\ge 1$</span>. To construct a decomposable set of size <span class="math-container">$2k+1$</span> for <span class="math-container">$k\ge 2$</span> put <span class="math-container">$S^+=S\cup \{2(-2^k+2)\}$</span>. Since the set <span class="math-container">$S$</span> is decomposable and <span class="math-container">$-2^k+2\in S$</span>, the set <span class="math-container">$S^+$</span> is decomposable too. Similarly to the above we can show that <span class="math-container">$z(S^+)\ge k$</span>. The new case is when <span class="math-container">$T$</span> contains exactly one negative element <span class="math-container">$2(-2^k+2)$</span>. Then <span class="math-container">$\sum T=0$</span> implies that we have a representation of <span class="math-container">$2(2^k-2)$</span> as a sum of at most <span class="math-container">$k-1$</span> powers of <span class="math-container">$2$</span> not bigger than <span class="math-container">$2^{k-1}$</span>, with at most one power used twice. 
This sum is at most <span class="math-container">$2^2+2^3+\dots +2^{k-1}+2^{k-1}=2^k-4+2^{k-1}&lt;2(2^k-2).$</span> <span class="math-container">$\square$</span></p> <p>We conjecture that the lower bound in Proposition 3 is tight. This conjecture is confirmed for small <span class="math-container">$n$</span> by the problem from your question, and in the note we proved it for <span class="math-container">$n\le 5$</span>. I’m going to finish my draft proofs there that the conjecture also holds for <span class="math-container">$n=6$</span> and <span class="math-container">$7$</span>. Of course, this is not a big deal, but Tao Te Ching teaches that <a href="https://en.wikipedia.org/wiki/A_journey_of_a_thousand_miles_begins_with_a_single_step" rel="nofollow noreferrer">a journey of a thousand miles begins with a single step</a>, so let's start. I hope that we’ll be able to continue the journey. </p> <p><strong>Update.</strong> Taras Banakh proved that any finite decomposable subset of an Abelian group contains two non-empty subsets <span class="math-container">$A$</span> and <span class="math-container">$B$</span> of <span class="math-container">$S$</span> such that <span class="math-container">$\sum A+\sum B=0$</span>. On the other hand, I found that a counterpart of Proposition 3 does not hold for Abelian groups. Namely, given a natural number <span class="math-container">$n$</span> let <span class="math-container">$S=\{1,2,4,\dots, 2^{n-1}\}$</span> be a subset of a group <span class="math-container">$\Bbb Z_{2^n-1}$</span>. Since <span class="math-container">$2^i+2^i=2^{i+1}$</span> for each <span class="math-container">$0\le i\le n-2$</span> and <span class="math-container">$2^{n-1}+2^{n-1}\equiv 1\pmod {2^n-1}$</span>, the set <span class="math-container">$S$</span> is decomposable. 
On the other hand, for any proper non-empty subset <span class="math-container">$T$</span> of <span class="math-container">$S$</span> we have <span class="math-container">$\sum T\equiv t\pmod {2^n-1}$</span> for some <span class="math-container">$0&lt;t&lt;2^n-1$</span>, so <span class="math-container">$\sum T\not\equiv 0\pmod {2^n-1}$</span>. These results are in our paper, which I am preparing for submission to arXiv. I’m going to provide a link to it here soon.</p>
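<p>The construction in Proposition 3 is easy to verify by brute force for small <span class="math-container">$k$</span>. A sketch (the helper names are mine, not from the note; here <span class="math-container">$A$</span> is the set of powers of two up to <span class="math-container">$2^{k-1}$</span>):</p>

```python
from itertools import combinations

def build_S(k):
    # A = {1, 2, 4, ..., 2^(k-1)} (powers of two), S = A union (A - (2^k - 1)).
    A = [2 ** i for i in range(k)]
    return sorted(set(A) | {a - (2 ** k - 1) for a in A})

def is_decomposable(S):
    # Every element a must split as a = b + c with b, c in S.
    s = set(S)
    return all(any(a - b in s for b in s) for a in s)

def min_zero_sum_size(S):
    # Smallest non-empty subset summing to zero (brute force), i.e. z(S).
    for m in range(1, len(S) + 1):
        for T in combinations(S, m):
            if sum(T) == 0:
                return m
    return None

for k in range(3, 7):
    S = build_S(k)
    assert len(S) == 2 * k and is_decomposable(S)
    assert min_zero_sum_size(S) >= k   # matches Proposition 3: z(2k) >= k
```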
probability
<p>I am confused about summing two random variables. Suppose $X$ and $Y$ are two random variables denoting how much is gained from each of two games. If the two games are played together, we can expect to gain $E[X] + E[Y]$ in total. I understand up to here. However, in many textbooks, the equation $E[X+Y]=E[X]+E[Y]$ is given as an explanation of the expectation of playing two games together. The explanation is more difficult than the result.</p> <p>What do $X+Y$ and $E[X+Y]$ mean? We define $E[X]=\sum X_ip_i$. So, do we define $E[X+Y]=\sum (X_i+Y_i)p_i$, where $p_i$ is the same for both random variables?</p> <p>What if $X$ denotes the equally likely outcomes $1, 2, 3$ and $Y$ denotes the equally likely outcomes $1, 2, 3, 4, 5$?</p>
<p>When you are talking about two random variables, you need to think about their <em>joint distribution</em> - so, rather than talking about $P(X=i)$, you need to talk about $P(X=i\text{ and }Y=j)$, or, as we usually write it, $P(X=i,Y=j)$.</p> <p>If it helps, think of it as randomly choosing a vector with two components - then calling the first component $X$ and the second component $Y$. You can think of $X$ and $Y$ as the separate outcomes of two experiments - which may or may not be related. So, $X$ could be how much you win in the first hand of poker, and $Y$ how much you win in the second. Then $X+Y$ is how much you won in the first two hands together.</p> <p>With this in hand, for a function $f(x,y)$, we can define (for variables that take discrete values), $$ \mathbb{E}[f(X,Y)]=\sum_{x,y}f(x,y)\cdot P(X=x, Y=y). $$ So, in your particular case, $$ \mathbb{E}[X+Y]=\sum_{x,y}(x+y)P(X=x,Y=y)=\sum_{x,y}xP(X=x,Y=y)+\sum_{x,y}yP(X=x,Y=y). $$ Consider the first of these sums. Note $$ \sum_{x,y}xP(X=x,Y=y)=\sum_{x}x\sum_{y}P(X=x,Y=y). $$ The inner sum here is precisely $P(X=x)$: the event "$X=x$" is the same as the event "$X=x$ and $Y$ takes any value", whose probability is exactly this sum. So, $$ \sum_{x,y}xP(X=x,Y=y)=\sum_{x}x\sum_{y}P(X=x,Y=y)=\sum_{x}xP(X=x)=\mathbb{E}[X]. $$ Similarly, $$ \sum_{x,y}yP(X=x,Y=y)=\mathbb{E}[Y], $$ and combining these gives the formula $$ \mathbb{E}[X+Y]=\mathbb{E}[X]+\mathbb{E}[Y]. $$</p>
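<p>The computation above can be mirrored numerically. A small sketch using the uniform variables from the question, under the added assumption that they are independent (any joint distribution would do; the second example makes $Y$ fully dependent on $X$):</p>

```python
from itertools import product

# X uniform on {1,2,3}, Y uniform on {1,2,3,4,5}; assuming independence,
# P(X=x, Y=y) = P(X=x) * P(Y=y).
xs, ys = [1, 2, 3], [1, 2, 3, 4, 5]
joint = {(x, y): (1 / 3) * (1 / 5) for x, y in product(xs, ys)}

E_X = sum(x * p for (x, y), p in joint.items())
E_Y = sum(y * p for (x, y), p in joint.items())
E_sum = sum((x + y) * p for (x, y), p in joint.items())
assert abs(E_sum - (E_X + E_Y)) < 1e-12   # E[X+Y] = E[X] + E[Y]

# Linearity needs no independence: take Y = X (fully dependent).
joint2 = {(x, x): 1 / 3 for x in xs}
E2 = sum((x + y) * p for (x, y), p in joint2.items())
assert abs(E2 - 2 * E_X) < 1e-9           # E[X + X] = 2 E[X]
```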
<p>Good answers from @nrpeterson and @Lord_Farin.</p> <p>For a practical demonstration consider an unbiased random number generator producing $i \in \{1,\dots,6\}$, more commonly known as a die.</p> <p>Let $X$ be the result of throwing this die once (an <a href="https://en.wikipedia.org/wiki/Event_%28probability_theory%29" rel="noreferrer">event</a>), and $Y$ be the result of a subsequent throw (another <em><a href="https://en.wikipedia.org/wiki/Independence_%28probability_theory%29" rel="noreferrer">independent</a></em> <a href="https://en.wikipedia.org/wiki/Event_%28probability_theory%29" rel="noreferrer">event</a>).</p> <p>If you consider the <a href="https://en.wikipedia.org/wiki/Probability_space" rel="noreferrer">sample space</a> of each event they are the same and are:</p> <p>$$\begin{array}{|c|c|c|c|c|c|c|} \hline \text{Result} &amp; 1 &amp; 2 &amp; 3 &amp; 4 &amp; 5 &amp; 6 \\ \hline \text{Probability} &amp; \frac{1}{6} &amp; \frac{1}{6}&amp; \frac{1}{6}&amp; \frac{1}{6}&amp; \frac{1}{6}&amp; \frac{1}{6} \\ \hline \end{array}$$</p> <p>As you say, $E[X]=\sum X_ip_i$, so it is easy to show that $E(X)=E(Y)=3.5$. Please note that the expected value is not actually a result that is achievable; this is not uncommon. For what this <em>means</em> you are moving into philosophy rather than mathematics, as discussed <a href="http://en.wikipedia.org/wiki/Probability_interpretations" rel="noreferrer">here</a>. Trivially $E(X)+E(Y)=7$.</p> <p>So what is $X+Y$? Clearly it is an integer $\in [2,12]$ but unlike $X$ and $Y$ not all outcomes are equally likely. 
Consider the table of values of $X+Y$ arising from the <a href="https://en.wikipedia.org/wiki/Joint_probability" rel="noreferrer">joint distribution</a> of $X$ and $Y$:</p> <p>$$\begin{array}{c|c|c|c|c|c|c|c|} &amp; X &amp; 1 &amp; 2 &amp; 3 &amp; 4 &amp; 5 &amp; 6 \\ \hline Y \\ \hline 1 &amp; &amp; 2 &amp; 3&amp;4&amp;5&amp;6&amp;7 \\ \hline 2&amp;&amp;3&amp;4&amp;5&amp;6&amp;7&amp;8 \\ \hline 3&amp;&amp;4&amp;5&amp;6&amp;7&amp;8&amp;9 \\ \hline 4&amp;&amp;5&amp;6&amp;7&amp;8&amp;9&amp;10 \\ \hline 5&amp;&amp;6&amp;7&amp;8&amp;9&amp;10&amp;11 \\ \hline 6&amp;&amp;7&amp;8&amp;9&amp;10&amp;11&amp;12 \\ \hline \end{array}$$</p> <p>There are 36 equally likely possibilities that give rise to the 11 possible outcomes, the most common being 7 with $p=\frac{6}{36}=\frac{1}{6}$ and the least common being 2 and 12, each with $p=\frac{1}{36}$. The expected value $E[X+Y]=\sum(X_i+Y_i)p_i$ can be calculated and is 7, so $E[X+Y]=E[X]+E[Y]$.</p>
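The whole table can be checked by brute force. A short Python sketch enumerating all $36$ outcomes with exact fractions:

```python
from fractions import Fraction
from collections import Counter

# All 36 equally likely outcomes of two fair dice.
outcomes = [(x, y) for x in range(1, 7) for y in range(1, 7)]
p = Fraction(1, 36)

E_X = sum(x * p for x, y in outcomes)
E_Y = sum(y * p for x, y in outcomes)
E_sum = sum((x + y) * p for x, y in outcomes)

# How often each value of X + Y occurs (out of 36).
dist = Counter(x + y for x, y in outcomes)

print(E_X, E_Y, E_sum)             # 7/2 7/2 7
print(dist[7], dist[2], dist[12])  # 6 1 1
```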
differentiation
<p>I want to calculate the derivative of a function not with respect to a variable, but with respect to another function. For example: $$g(x)=2f(x)+x+\log[f(x)]$$ I want to compute $$\frac{\mathrm dg(x)}{\mathrm df(x)}$$ Can I treat $f(x)$ as a variable and differentiate "blindly"? If so, I would get $$\frac{\mathrm dg(x)}{\mathrm df(x)}=2+\frac{1}{f(x)}$$ treating the bare $x$ as a parameter whose derivative is zero. Or should I apply other differentiation rules?</p>
<p>$$\frac{dg(x)}{df(x)} = \frac{dg(x)}{dx} \cdot \frac{1}{f'(x)} = \frac{g'(x)}{f'(x)}$$</p> <p>In your example,</p> <p>$$g'(x) = 2f'(x) + 1 + \frac{f'(x)}{f(x)}$$</p> <p>So:</p> <p>$$\frac{dg(x)}{df(x)} = \frac{2f'(x) + 1 + \frac{f'(x)}{f(x)}}{f'(x)} = 2 + \frac{1}{f'(x)} + \frac{1}{f(x)}$$</p>
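If you want to double-check this mechanically, here is a short SymPy sketch (assuming SymPy is available) that computes $\frac{g'(x)}{f'(x)}$ for the $g$ in the question and confirms it simplifies to $2+\frac{1}{f'(x)}+\frac{1}{f(x)}$:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

g = 2*f(x) + x + sp.log(f(x))

# dg/df(x) = g'(x) / f'(x)
dg_df = sp.diff(g, x) / sp.diff(f(x), x)

expected = 2 + 1/sp.diff(f(x), x) + 1/f(x)
print(sp.simplify(dg_df - expected))  # 0
```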
<p>You cannot. You have to differentiate $f(x)$ as a function.</p> <p>$g'(x) = 2f'(x) + 1 + {f'(x) \over f(x)}$</p> <p>EDIT: Sorry, that would give $dg(x) \over dx$; Deepak is right.</p>
linear-algebra
<p>I'm majoring in mathematics and currently enrolled in Linear Algebra. It's very different, but I like it (I think). My question is this: What doors does this course open? (I saw a post about Linear Algebra being the foundation for Applied Mathematics -- but I like doing math for the sake of math, not so much the applications.) Is this a stand-alone class, or will the new things I'm learning come into play later on? </p>
<p>Linear Algebra is indeed one of the foundations of modern mathematics. There are a lot of things which use the language and tools developed in linear algebra:</p> <ul> <li><p>Multidimensional Calculus, i.e. Analysis for functions of many variables, i.e. vectors (for example, the first derivative becomes a matrix)</p></li> <li><p>Differential Geometry, which investigates structures that look locally like a vector space, and functions on them.</p></li> <li><p>Functional Analysis, which is essentially linear algebra on infinite-dimensional vector spaces, and which is the foundation of quantum mechanics.</p></li> <li><p>Multivariate Statistics, which investigates vectors whose entries are random. For instance, to describe the relation between two components of such a random vector, one can calculate the correlation matrix. Furthermore, one can apply a technique called singular value decomposition (which is close to calculating the eigenvalues of a matrix) to find which components have the main influence on the data.</p></li> <li><p>Tagging on to Multivariate Statistics and multidimensional calculus, there are a number of Machine Learning techniques which require you to find a (local) minimum of a nonlinear function (the likelihood function), for example for neural nets. Generally speaking, one can try to find the parameters which maximize the likelihood, e.g. by applying the gradient descent method, which uses vector arithmetic. (Thanks, frogeyedpeas!)</p></li> <li><p>Control Theory and Dynamical Systems theory, which are mainly concerned with differential equations in which matrices appear as coefficients in front of the unknown functions.
It helps tremendously to know the eigenvalues of the matrices involved to predict how the system will behave and also how to change the matrices in front to make sure the system behaves like you want it to - in Control Theory, this is related to the poles and zeros of the transfer function, but in essence it's all about placing eigenvalues at the right place. This is not only relevant for mechanical systems, but also for electrical engineering. (Thanks, Look behind you!) </p></li> <li><p>Optimization in general and Linear Programming in particular are closely related to multidimensional calculus, namely about finding minima (or maxima) of functions, but you can use the structure of vector spaces to simplify your problems. (Thanks, Look behind you!) </p></li> </ul> <p>On top of that, there are a lot of applications in engineering and physics which use tools of linear algebra and the fields listed above to solve real-world problems (often calculating eigenvalues and solving differential equations). </p> <p>In essence, a lot of things in the mathematical toolbox in one variable can be lifted up to the multivariable case with the help of Linear Algebra.</p> <p>Edit: This list is by no means complete; these were just the topics which came to my mind at first thought. Not mentioning a particular field doesn't mean that this field is irrelevant, but just that I don't feel qualified to write a paragraph about it.</p>
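As a tiny illustration of the dynamical-systems point above (a sketch in NumPy; the matrix is made up for the example): for the linear system $x'(t) = Ax(t)$, the signs of the real parts of the eigenvalues of $A$ already tell you whether the system is stable.

```python
import numpy as np

# For the linear system x'(t) = A x(t), the eigenvalues of A govern the
# long-term behaviour: if every eigenvalue has negative real part, all
# trajectories decay to zero and the system is stable.
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])

eigenvalues = np.linalg.eigvals(A)
stable = bool(np.all(eigenvalues.real < 0))
print(eigenvalues, stable)  # eigenvalues -1 and -3, so the system is stable
```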
<p>By now, we can roughly say that all we <em>fully</em> understand in Mathematics is <strong>linear</strong>.</p> <p>So I guess Linear Algebra is a good topic to master.</p> <p>Both in Mathematics and in Physics one usually reduces a problem to a linear one and then solves it with techniques of Linear Algebra. This happens in Algebra with linear representations, in Functional Analysis with the study of Hilbert spaces, in Differential Geometry with tangent spaces, and so on in almost every field of Mathematics. Indeed, I think there should be at least 3 courses on Linear Algebra (undergraduate, graduate, advanced), each time with different insights into the subject.</p>
number-theory
<p>Can someone provide the proof of the special case of <a href="http://en.wikipedia.org/wiki/Fermat%27s_Last_Theorem" rel="noreferrer">Fermat's Last Theorem</a> for $n=3$, i.e., that $$ x^3 + y^3 = z^3, $$ has no positive integer solutions, as briefly as possible? </p> <p>I have seen some good proofs, but they are quite long (longer than a page) or use many variables. However, I would rather have an elementary long proof with many variables than a complex short proof.</p> <p><strong>Edit.</strong> Even if the bounty expires I will award one to someone if they have a satisfying answer.</p>
<p><strong>Main idea.</strong> The proof that follows is based on <a href="http://en.wikipedia.org/wiki/Proof_by_infinite_descent" rel="noreferrer">infinite descent</a>: we shall show that if $(x,y,z)$ is a solution, then there exists another triplet $(k,l,m)$ of smaller integers which is also a solution; iterating this would produce an infinite strictly decreasing sequence of positive integers, which is impossible.</p> <p>Assume that $x, y, z\in\mathbb Z\smallsetminus\{0\}$ satisfy (after replacing $z$ by $-z$) the equation $$x^3 + y^3 + z^3 = 0,$$ with $x, y$ and $z$ pairwise coprime. (Clearly at least one is negative.) One of them must be even, whereas the other two are odd. Assume $z$ to be even.</p> <p>Then $x$ and $y$ are odd. If $x = y$, then $2x^3 = −z^3$, and thus $x$ is also even, a contradiction. Hence $x\ne y$.</p> <p>As $x$ and $y$ are odd, $x+y$ and $x-y$ are both even. Let $$ 2u = x + y, \quad 2v = x − y, $$ where the non-zero integers $u$ and $v$ are also coprime and of different parity (one is even, the other odd), so that $$ x = u + v\quad \text{and}\quad y = u − v. $$ It follows that $$ −z^3 = (u + v)^3 + (u − v)^3 = 2u(u^2 + 3v^2). \tag{1} $$ Since $u$ and $v$ have different parity, $u^2 + 3v^2$ is an odd number. And since $z$ is even, $u$ is even and $v$ is odd. Since $u$ and $v$ are coprime, $$ {\mathrm{gcd}}\,(2u,u^2 + 3v^2)={\mathrm{gcd}}\,(2u,3v^2)\in\{1,3\}. $$</p> <p>Case I. $\,{\mathrm{gcd}}\,(2u,u^2 + 3v^2)=1$.</p> <p>In this case, the two factors of $−z^3$ in $(1)$ are coprime. This implies that $3\not\mid u$ and that the two factors are perfect cubes of two smaller numbers, $r$ and $s$: $$ 2u = r^3\quad\text{and}\quad u^2 + 3v^2 = s^3. $$ As $u^2 + 3v^2$ is odd, so is $s$.
We now need the following result: </p> <p><strong>Lemma.</strong> <em>If $\mathrm{gcd}\,(a,b)=1$, then every odd factor of $a^2 + 3b^2$ has this same form.</em></p> <p><em>Proof.</em> See <a href="http://fermatslasttheorem.blogspot.com/2005/05/fermats-last-theorem-n-3-a2-3b2.html" rel="noreferrer">here.</a> </p> <p>Thus, if $s$ is odd and satisfies $s^3 = u^2 + 3v^2$, then it can be written in terms of two coprime integers $e$ and $f$ as $$ s = e^2 + 3f^2, $$ so that $$ u = e ( e^2 − 9f^2) \quad\text{and}\quad v = 3f ( e^2 − f^2). $$ Since $u$ is even and $v$ odd, $e$ is even and $f$ is odd. Since $$ r^3 = 2u = 2e (e − 3f)(e + 3f), $$ the factors $2e$, $e − 3f$, and $e + 3f$ are coprime, because $3$ cannot divide $e$: if $3\mid e$, then $3\mid u$, violating the fact that $u$ and $v$ are coprime (note that $3\mid v$, as $v = 3f(e^2-f^2)$). Since the three factors on the right-hand side are coprime and their product is a cube, they must individually equal cubes of smaller integers: $$ −2e = k^3,\,\,\, e − 3f = l^3,\,\,\, e + 3f = m^3, $$ which yields a smaller solution $k^3 + l^3 + m^3= 0$. Therefore, by the argument of infinite descent, the original solution $(x, y, z)$ was impossible.</p> <p>Case II. $\,{\mathrm{gcd}}\,(2u,u^2 + 3v^2)=3$.</p> <p>In this case, the greatest common divisor of $2u$ and $u^2 + 3v^2$ is $3$. That implies that $3\mid u$, and one may express $u = 3w$ in terms of a smaller integer, $w$. Since $z$ is even, $8\mid z^3$; as $u^2+3v^2$ is odd, equation $(1)$ then gives $4\mid u$, hence $4\mid w$, and in particular $w$ is even. Since $u$ and $v$ are coprime, so are $v$ and $w$. Therefore, neither $3$ nor $4$ divides $v$.</p> <p>Substituting $u = 3w$ in $(1)$ we obtain $$ −z^3 = 6w(9w^2 + 3v^2) = 18w(3w^2 + v^2). $$ Because $v$ and $w$ are coprime, and because $3\not\mid v$, the numbers $18w$ and $3w^2 + v^2$ are also coprime. Therefore, since their product is a cube, they are each the cube of smaller integers, $r$ and $s$: $$ 18w = r^3 \quad\text{and}\quad 3w^2 + v^2 = s^3.
$$ By the same lemma, as $s$ is odd and equal to a number of the form $3w^2 + v^2$, it too can be expressed in terms of smaller coprime numbers, $e$ and $f$: $$ s = e^2 + 3f^2. $$ A straightforward calculation shows that $$ v = e (e^2 − 9f^2) \quad\text{and}\quad w = 3f (e^2 − f^2). $$ Thus, $e$ is odd and $f$ is even, because $v$ is odd. The expression for $18w$ then becomes $$ r^3 = 18w = 54f (e^2 − f^2) = 54f (e + f) (e − f) = 3^3 \times 2f (e + f) (e − f). $$ Since $3^3$ divides $r^3$ we have that $3$ divides $r$, so $(r /3)^3$ is an integer that equals $2f (e + f) (e − f)$. Since $e$ and $f$ are coprime, so are the three factors $2f$, $e+f$, and $e−f$; therefore, they are each the cube of smaller integers, $k$, $l$, and $m$: $$ −2f = k^3,\,\,\, e + f = l^3,\,\,\, f − e = m^3, $$ which yields a smaller solution $k^3 + l^3 + m^3= 0$. Therefore, by the argument of infinite descent, the original solution $(x, y, z)$ was impossible.</p> <p><strong>Note.</strong> See also <a href="http://fermatslasttheorem.blogspot.com/2005/05/fermats-last-theorem-n-3-a2-3b2.html" rel="noreferrer">here.</a></p>
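The two algebraic identities that drive both cases, namely $(u+v)^3+(u-v)^3=2u(u^2+3v^2)$ and the parametrisation $u^2+3v^2=s^3$ coming from the lemma, can be verified symbolically, e.g. with SymPy:

```python
import sympy as sp

u, v, e, f = sp.symbols('u v e f')

# Identity (1): (u + v)^3 + (u - v)^3 = 2u(u^2 + 3v^2)
lhs1 = sp.expand((u + v)**3 + (u - v)**3)
rhs1 = sp.expand(2*u*(u**2 + 3*v**2))

# Parametrisation from the lemma: if s = e^2 + 3f^2,
# u = e(e^2 - 9f^2) and v = 3f(e^2 - f^2), then u^2 + 3v^2 = s^3.
lhs2 = sp.expand((e*(e**2 - 9*f**2))**2 + 3*(3*f*(e**2 - f**2))**2)
rhs2 = sp.expand((e**2 + 3*f**2)**3)

print(lhs1 == rhs1, lhs2 == rhs2)  # True True
```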
<p>There’s a wonderful elementary (and fairly short) proof in <a href="https://www.jstor.org/stable/23249523" rel="nofollow noreferrer">this paper by S. Dolan</a>.</p>
linear-algebra
<p>I'm trying to build an intuitive geometric picture about diagonalization. Let me show what I got so far.</p> <p>Eigenvector of some linear operator signifies a direction in which operator just ''works'' like a stretching, in other words, operator preserves the direction of its eigenvector. Corresponding eigenvalue is just a value which tells us for <em>how much</em> operator stretches the eigenvector (negative stretches = flipping in the opposite direction). When we limit ourselves to real vector spaces, it's intuitively clear that rotations don't preserve direction of any non-zero vector. Actually, I'm thinking about 2D and 3D spaces as I write, so I talk about ''rotations''... for n-dimensional spaces it would be better to talk about ''operators which act like rotations on some 2D subspace''.</p> <p>But, there are non-diagonalizable matrices that aren't rotations - all non-zero nilpotent matrices. My intuitive view of nilpotent matrices is that they ''gradually collapse all dimensions/gradually lose all the information'' (if we use them over and over again), so it's clear to me why they can't be diagonalizable.</p> <p>But, again, there are non-diagonalizable matrices that aren't rotations nor nilpotent, for an example:</p> <p>$$ \begin{pmatrix} 1 &amp; 1 \\ 0 &amp; 1 \end{pmatrix} $$</p> <p>So, what's the deal with them? Is there any kind of intuitive geometric reasoning that would help me grasp why there are matrices like this one? What's their characteristic that stops them from being diagonalizable?</p>
<p>I think a very useful notion here is the idea of a "<strong>generalized eigenvector</strong>".</p> <p>An <strong>eigenvector</strong> of a matrix $A$ is a vector $v$ with associated value $\lambda$ such that $$ (A-\lambda I)v=0 $$ A <strong>generalized eigenvector</strong>, on the other hand, is a vector $w$ with the same associated value such that $$ (A-\lambda I)^kw=0 $$ That is, $(A-\lambda I)$ is nilpotent on $w$. Or, in other words: $$ (A - \lambda I)^{k-1}w=v $$ For some eigenvector $v$ with the same associated value.</p> <hr> <p>Now, let's see how this definition helps us with a non-diagonalizable matrix such as $$ A = \pmatrix{ 2 &amp; 1\\ 0 &amp; 2 } $$ For this matrix, we have $\lambda=2$ as a unique eigenvalue, and $v=\pmatrix{1\\0}$ as the associated eigenvector, which I will let you verify. $w=\pmatrix{0\\1}$ is our generalized eigenvector. Notice that $$ (A - 2I) = \pmatrix{ 0 &amp; 1\\ 0 &amp; 0} $$ Is a nilpotent matrix of order $2$. Note that $(A - 2I)v=0$, and $(A- 2I)w=v$ so that $(A-2I)^2w=0$. But what does this mean for what the matrix $A$ does? The behavior of $v$ is fairly obvious, but with $w$ we have $$ Aw = \pmatrix{1\\2}=2w + v $$ So $w$ behaves kind of like an eigenvector, but not really. In general, a generalized eigenvector, when acted upon by $A$, gives another vector in the generalized eigenspace.</p> <hr> <p>An important related notion is <a href="http://en.wikipedia.org/wiki/Jordan_normal_form">Jordan Normal Form</a>. That is, while we can't always diagonalize a matrix by finding a basis of eigenvectors, we can always put the matrix into Jordan normal form by finding a basis of generalized eigenvectors/eigenspaces.</p> <p>I hope that helps. I'd say that the most important thing to grasp from the idea of generalized eigenvectors is that every transformation can be related to the action of a nilpotent over some subspace. </p>
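To make the computations above concrete, here is a short NumPy sketch checking each claimed identity for this $A$, $v$ and $w$:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
N = A - 2*np.eye(2)        # A - lambda*I for lambda = 2

v = np.array([1.0, 0.0])   # eigenvector
w = np.array([0.0, 1.0])   # generalized eigenvector

print(N @ v)               # (A - 2I)v = 0
print(N @ w)               # (A - 2I)w = v
print(N @ N)               # (A - 2I)^2 = 0: nilpotent of order 2
print(A @ w)               # Aw = 2w + v
```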
<p><strong>Edit:</strong> The algebra I speak of here is <em>not</em> actually the Grassmann numbers at all -- they are <span class="math-container">$\mathbb{R}[X]/(X^n)$</span>, whose generators <em>don't</em> satisfy the anticommutativity relation even though they satisfy all the nilpotency relations. The dual-number stuff for 2 by 2 is still correct, just ignore my use of the word "Grassmann".</p> <hr> <p>Non-diagonalisable 2 by 2 matrices can be diagonalised over the <a href="https://en.wikipedia.org/wiki/Dual_number" rel="nofollow noreferrer">dual numbers</a> -- and the "weird cases" like the Galilean transformation are not fundamentally different from the nilpotent matrices.</p> <p>The intuition here is that the Galilean transformation is sort of a "boundary case" between real-diagonalisability (skews) and complex-diagonalisability (rotations) (which you can sort of think in terms of discriminants). In the case of the Galilean transformation <span class="math-container">$\left[\begin{array}{*{20}{c}}{1}&amp;{v}\\{0}&amp;{1}\end{array}\right]$</span>, it's a small perturbation away from being diagonalisable, i.e. it sort of has "repeated eigenvectors" (you can visualise this with <a href="https://shadanan.github.io/MatVis/" rel="nofollow noreferrer">MatVis</a>). So one may imagine that the two eigenvectors are only an "epsilon" away, where <span class="math-container">$\varepsilon$</span> is the unit dual satisfying <span class="math-container">$\varepsilon^2=0$</span> (called the "soul"). Indeed, its characteristic polynomial is:</p> <p><span class="math-container">$$(\lambda-1)^2=0$$</span></p> <p>Whose solutions among the dual numbers are <span class="math-container">$\lambda=1+k\varepsilon$</span> for real <span class="math-container">$k$</span>. 
So one may "diagonalise" the Galilean transformation over the dual numbers as e.g.:</p> <p><span class="math-container">$$\left[\begin{array}{*{20}{c}}{1}&amp;{0}\\{0}&amp;{1+v\varepsilon}\end{array}\right]$$</span></p> <p>Granted, this is not unique: it is formed from the change-of-basis matrix <span class="math-container">$\left[\begin{array}{*{20}{c}}{1}&amp;{1}\\{0}&amp;{\varepsilon}\end{array}\right]$</span>, but any vector of the form <span class="math-container">$(1,k\varepsilon)$</span> is a valid eigenvector. You could, if you like, consider this a canonical or "principal value" of the diagonalisation, and in general each diagonalisation corresponds to a limit you can take of real/complex-diagonalisable transformations. Another way of thinking about this is that there is an entire eigenspace spanned by <span class="math-container">$(1,0)$</span> and <span class="math-container">$(1,\varepsilon)$</span> in that little gap of multiplicity. In this sense, the geometric multiplicity is forced to be equal to the algebraic multiplicity*.</p> <p>Then a nilpotent matrix with characteristic polynomial <span class="math-container">$\lambda^2=0$</span> has solutions <span class="math-container">$\lambda=k\varepsilon$</span>, and is simply diagonalised as:</p> <p><span class="math-container">$$\left[\begin{array}{*{20}{c}}{0}&amp;{0}\\{0}&amp;{\varepsilon}\end{array}\right]$$</span></p> <p>(Think about this.) Indeed, the resulting matrix has minimal polynomial <span class="math-container">$\lambda^2=0$</span>, and the eigenvectors are as before.</p> <hr> <p>What about higher dimensional matrices? Consider:</p> <p><span class="math-container">$$\left[ {\begin{array}{*{20}{c}}0&amp;v&amp;0\\0&amp;0&amp;w\\0&amp;0&amp;0\end{array}} \right]$$</span></p> <p>This is a nilpotent matrix <span class="math-container">$A$</span> satisfying <span class="math-container">$A^3=0$</span> (but not <span class="math-container">$A^2=0$</span>).
The characteristic polynomial is <span class="math-container">$\lambda^3=0$</span>. Although <span class="math-container">$\varepsilon$</span> might seem like a sensible choice, it doesn't really do the trick -- if you try a diagonalisation of the form <span class="math-container">$\mathrm{diag}(0,v\varepsilon,w\varepsilon)$</span>, it has minimal polynomial <span class="math-container">$A^2=0$</span>, which is wrong. Indeed, you won't be able to find three linearly independent eigenvectors to diagonalise the matrix this way -- they'll all take the form <span class="math-container">$(a+b\varepsilon,0,0)$</span>.</p> <p>Instead, you need to consider a generalisation of the dual numbers, called the Grassmann numbers, with the soul satisfying <span class="math-container">$\epsilon^n=0$</span>. Then the diagonalisation takes for instance the form:</p> <p><span class="math-container">$$\left[ {\begin{array}{*{20}{c}}0&amp;0&amp;0\\0&amp;{v\epsilon}&amp;0\\0&amp;0&amp;{w\epsilon}\end{array}} \right]$$</span></p> <hr> <p>*Over the reals and complexes, when one defines algebraic multiplicity (as "the multiplicity of the corresponding factor in the characteristic polynomial"), there is a single eigenvalue corresponding to that factor. 
This is of course no longer true over the Grassmann numbers, because they are not a field, and <span class="math-container">$ab=0$</span> no longer implies "<span class="math-container">$a=0$</span> or <span class="math-container">$b=0$</span>".</p> <p>In general, if you want to prove things about these numbers, the way to formalise them is by constructing them as the quotient <span class="math-container">$\mathbb{R}[X]/(X^n)$</span>, so you actually have something clear to work with.</p> <p>(Perhaps relevant: <a href="https://math.stackexchange.com/questions/46078/grassmann-numbers-as-eigenvalues-of-nilpotent-operators">Grassmann numbers as eigenvalues of nilpotent operators?</a> -- discussing the fact that the Grassmann numbers are not a field).</p> <p>You might wonder if this sort of approach can be applicable to LTI differential equations with repeated roots -- after all, their characteristic matrices are exactly of this Grassmann form. As pointed out in the comments, however, this diagonalisation is still not via an invertible change-of-basis matrix, it's still only of the form <span class="math-container">$PD=AP$</span>, not <span class="math-container">$D=P^{-1}AP$</span>. I don't see any way to bypass this. See my posts <a href="https://thewindingnumber.blogspot.com/2019/02/all-matrices-can-be-diagonalised.html" rel="nofollow noreferrer">All matrices can be diagonalised</a> (a re-post of this answer) and <a href="https://thewindingnumber.blogspot.com/2018/03/repeated-roots-of-differential-equations.html" rel="nofollow noreferrer">Repeated roots of differential equations</a> for ideas, I guess.</p>
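The claimed relation for the 2 by 2 Galilean example can be checked mechanically: treat $\varepsilon$ as a formal symbol and discard every $\varepsilon^2$ term, which is exactly dual-number arithmetic. A small SymPy sketch:

```python
import sympy as sp

v, eps = sp.symbols('v epsilon')

A = sp.Matrix([[1, v], [0, 1]])          # Galilean transformation
P = sp.Matrix([[1, 1], [0, eps]])        # columns: the two "eigenvectors"
D = sp.Matrix([[1, 0], [0, 1 + v*eps]])  # proposed diagonal form

# Dual-number arithmetic: expand, then kill all eps^2 terms.
residual = (A*P - P*D).applyfunc(lambda t: sp.expand(t).subs(eps**2, 0))
print(residual)  # zero matrix, i.e. A P = P D over the dual numbers
```

Note that this only witnesses $AP = PD$: as remarked above, $P$ is not invertible over the dual numbers, so it is not a similarity transformation.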
logic
<p>I have read somewhere that there are some theorems that are shown to be "unprovable". It was a while ago and I don't remember the details, and I suspect that this question might be the result of a total misunderstanding. By the way, I assume that <em>unprovable theorems</em> do exist. Please correct me if I am wrong and skip reading the rest.</p> <p>As far as I know, mathematical statements are categorized into: undefined concepts, definitions, axioms, conjectures, lemmas and theorems. There might be some other types that I am not aware of as an amateur math learner. In this categorization, an axiom is something that cannot be built upon other things and is too obvious to be proved (is it?). So axioms are unprovable. A theorem or lemma is actually a conjecture that has been proved. So "a theorem that cannot be proved" sounds like a paradox.</p> <p>I know that there are some statements that cannot be proved simply because they are wrong. I am not addressing them because they are not <em>theorems</em>. So what does it mean that a theorem is unprovable? Does it mean that it cannot be proved by current mathematical tools and may be proved in the future by more advanced tools that have not been discovered yet? If so, why don't we call it a conjecture? If it cannot be proved at all, then it is better to call it an axiom.</p> <p>Another question is, <em>how can we be sure that a theorem cannot be proved</em>? I am assuming the explanation might involve some high-level logic that is way above my understanding, so I would appreciate it if you put it into simple words.</p> <p><strong>Edit.</strong> Thanks to a comment by @user21820 I just read two other interesting posts, <a href="https://math.stackexchange.com/a/1643073/301977">this</a> and <a href="https://math.stackexchange.com/a/1808558/301977">this</a> that are relevant to this question. I recommend everyone to take a look at them as well.</p>
<p>When we say that a statement is 'unprovable', we mean that it is unprovable from the axioms of a particular theory. </p> <p>Here's a nice concrete example. Euclid's <em>Elements</em>, the prototypical example of axiomatic mathematics, begins by stating the following five axioms:</p> <blockquote> <p>Any two points can be joined by a straight line</p> <p>Any finite straight line segment can be extended to form an infinite straight line.</p> <p>For any point <span class="math-container">$P$</span> and choice of radius <span class="math-container">$r$</span> we can form a circle centred at <span class="math-container">$P$</span> of radius <span class="math-container">$r$</span></p> <p>All right angles are equal to one another.</p> <p>[The parallel postulate:] If <span class="math-container">$L$</span> is a straight line and <span class="math-container">$P$</span> is a point not on the line <span class="math-container">$L$</span> then there is at most one line <span class="math-container">$L'$</span> that passes through <span class="math-container">$P$</span> and is parallel to <span class="math-container">$L$</span>.</p> </blockquote> <p>Euclid proceeds to derive much of classical plane geometry from these five axioms. This is an important point. After these axioms have been stated, Euclid makes no further appeal to our natural intuition for the concepts of 'line', 'point' and 'angle', but only gives proofs that can be deduced from the five axioms alone. </p> <p>It is conceivable that you could come up with your own theory with 'points' and 'lines' that do not resemble points and lines at all. But if you could show that your 'points' and 'lines' obey the five axioms of Euclid, then you could interpret all of his theorems in your new theory. </p> <p>In the two thousand years following the publication of the <em>Elements</em>, one major question that arose was: do we need the fifth axiom? 
The fifth axiom - known as the parallel postulate - seems less intuitively obvious than the other four: if we could find a way of deducing the fifth axiom from the first four then it would become superfluous and we could leave it out. </p> <p>Mathematicians tried for millennia to find a way of deducing the parallel postulate from the first four axioms (and I'm sure there are cranks who are still trying to do so now), but were unable to. Gradually, they started to get the feeling that it might be impossible to prove the parallel postulate from the first four axioms. But how do you prove that something is unprovable?</p> <p>The right approach was found independently by Lobachevsky and Bolyai (and possibly Gauss) in the nineteenth century. They took the first four axioms and replaced the fifth with the following:</p> <blockquote> <p>[Hyperbolic parallel postulate:] If <span class="math-container">$L$</span> is a straight line and <span class="math-container">$P$</span> is a point not on the line <span class="math-container">$L$</span> then <strong>there are at least two</strong> lines that pass through <span class="math-container">$P$</span> and are parallel to <span class="math-container">$L$</span>.</p> </blockquote> <p>This axiom is clearly incompatible with the original parallel postulate. The remarkable thing is that there is a geometrical theory in which the first four axioms and the modified parallel postulate are true. </p> <p>The theory is called <em>hyperbolic geometry</em> and it deals with points and lines inscribed on the surface of a <em>hyperboloid</em>:</p> <p><a href="https://i.sstatic.net/LmlxP.png" rel="noreferrer"><img src="https://i.sstatic.net/LmlxP.png" alt="Wikimedia image: a triangle and a pair of diverging parallel lines inscribed on a hyperboloid"></a></p> <p><em>In the bottom right of the image above, you can see a pair of hyperbolic parallel lines. 
Notice that they diverge from one another.</em></p> <p>The first four axioms hold (and you can check this), but now if <span class="math-container">$L$</span> is a line and <span class="math-container">$P$</span> is a point not on <span class="math-container">$L$</span> then there are <em>infinitely many</em> lines parallel to <span class="math-container">$L$</span> passing through <span class="math-container">$P$</span>. So the original parallel postulate does not hold.</p> <p>This now allows us to prove very quickly that it is impossible to prove the parallel postulate from the other four axioms: indeed, suppose there were such a proof. Since the first four axioms are true in hyperbolic geometry, our proof would induce a proof of the parallel postulate in the setting of hyperbolic geometry. But the parallel postulate is not true in hyperbolic geometry, so this is absurd. </p> <hr> <p>This is a major method for showing that statements are unprovable in various theories. Indeed, a theorem of Gödel (Gödel's completeness theorem) tells us that if a statement <span class="math-container">$s$</span> in the language of some axiomatic theory <span class="math-container">$\mathbb T$</span> is unprovable then there is <em>always</em> some structure that satisfies the axioms of <span class="math-container">$\mathbb T$</span> in which <span class="math-container">$s$</span> is false. So showing that <span class="math-container">$s$</span> is unprovable often amounts to finding such a structure.</p> <p>It is also possible to show that things are unprovable using a direct combinatorial argument on the axioms and deduction rules you are allowed in your logic. 
I won't go into that here.</p> <p>You're probably interested in things like Gödel's incompleteness theorem, that say that there are statements that are unprovable in a particular theory called ZFC set theory, which is often used as the foundation of <em>all mathematics</em> (note: there is in fact plenty of mathematics that cannot be expressed in ZFC, so <em>all</em> isn't really correct here). This situation is not at all different from the geometrical example I gave above: </p> <p>If a particular statement is neither provable nor disprovable from the axioms of <em>all mathematics</em> it means that there are two structures out there, both of which interpret the axioms of <em>all mathematics</em>, in one of which the statement is true and in the other of which the statement is false. </p> <p>Sometimes we have explicit examples: one important problem at the turn of the century was the <em>Continuum Hypothesis</em>. The problem was solved in two steps:</p> <ul> <li>Gödel gave a structure satisfying the axioms of ZFC set theory in which the Continuum Hypothesis was true.</li> <li>Later, Cohen gave a structure satisfying the axioms of ZFC set theory in which the Continuum Hypothesis was false.</li> </ul> <p>Between them, these results show that the Continuum Hypothesis is in fact neither provable nor disprovable in ZFC set theory. </p>
<p>First of all in the following answer I allowed myself (contrary to my general nature) to focus my efforts on simplicity, rather than formal correctness.</p> <p>In general, I think that the way we teach the concept of <em>axioms</em> is rather unfortunate. While traditionally axioms were thought of as statements that are - in some philosophical way - <em>obviously true</em> and <em>don't need further justifications</em>, this view has shifted a lot in the last century or so. Rather than thinking of axioms as <em>obvious truths</em> think of them as statements that we <em>declare to be true</em>. Let $\mathcal A$ be a set of axioms. We can now ask a bunch of questions about $\mathcal A$.</p> <ul> <li>Is $\mathcal A$ self-contradictory? I.e. does there exist a proof (&lt;- this needs to be formalized, but for the sake of simplicity just think of your informal notion of proofs) - starting from formulas in $\mathcal A$ that leads to a contradiction? If that's the case, then $\mathcal A$ was poorly chosen. If all the statements in $\mathcal A$ should be true (in a philosophical sense), then they cannot lead to a contradiction. So our first requirement is that $\mathcal A$ - should it represent a collection of true statements - is not self-contradictory.</li> <li>Does $\mathcal A$ prove interesting statements? Take for example $\mathcal A$ as the axioms of set theory (e.g. $\mathcal A = \operatorname{ZFC}$). In this case we can prove all sorts of interesting mathematical statements. In fact, it seems reasonable that every mathematical theorem that can be proved by the usual style of informal proofs, can be formally proved from $\mathcal A$. This is one of the reasons, the axioms of set theory have been so successful.</li> <li>Is $\mathcal A$ a <em>natural</em> set of axioms? ...</li> <li>...</li> <li>Is there a statement $\phi$ which $\mathcal A$ does not decide? I.e. 
is there a statement $\phi$ such that there is no proof of $\phi$ or $\neg \phi$ starting from $\mathcal A$?</li> </ul> <p>The last point is what we mean when we say that <em>$\phi$ is unprovable from $\mathcal A$</em>. And if $\mathcal A$ is our background theory, say $\mathcal A = \operatorname{ZFC}$, we just say that <em>$\phi$ is unprovable</em>. </p> <p>By a very general theorem of Kurt Gödel, any <em>natural</em> set of axioms $\mathcal A$ (more precisely: any consistent, effectively axiomatizable set of axioms strong enough to encode basic arithmetic) has statements that are unprovable from it. In fact, the statement "$\mathcal A$ is not self-contradictory" is not provable from $\mathcal A$. So, while natural sets of axioms $\mathcal A$ are not self-contradictory, they themselves cannot prove this fact. This is rather unfortunate and demonstrates that David Hilbert's program on the foundation of mathematics - in its original form - is impossible. The natural workaround is something contrary to the general nature of mathematics - a leap of faith: If $\mathcal A$ is a sufficiently natural set of axioms (or otherwise <em>certified</em>), we <em>believe</em> that it is consistent (or - if you're more like me - you <em>assume</em> it is consistent until you see a reason not to). </p> <p>This is - for example - the case for $\mathcal A = \operatorname{ZFC}$, and for the remainder of my answer I will restrict myself to this scenario. Now that we know that $\mathcal A$ does not decide all statements (and arguably does not prove some true statements - like its consistency), a new question arises:</p> <ul> <li>Does $\operatorname{ZFC}$ decide all <em>mathematical</em> statements? In other words: Is there a question about typical mathematical objects that $\operatorname{ZFC}$ does not answer?</li> </ul> <p>The - to some people unfortunate - answer is yes and the most famous example is</p> <blockquote> <p>$\operatorname{ZFC}$ does not decide how many real numbers there are.</p> </blockquote> <p>Actually proving this fact took mathematicians (logicians) many decades. 
At the end of this effort, however, we not only had a way to prove this single statement, but we actually obtained a very general method to prove the independence of many statements (the so-called <strong>method of forcing</strong>, introduced by Paul Cohen in 1963).</p> <p>The idea - roughly speaking - is as follows: Let $\phi$ be a statement, say </p> <blockquote> <p>$\phi \equiv$ "there is no infinity strictly between the infinity of $\mathbb N$ and of $\mathbb R$" </p> </blockquote> <p>Let $\mathcal M$ be a model of $\operatorname{ZFC}$. Starting from $\mathcal M$ we would like to construct new models $\mathcal M_{\phi}$ and $\mathcal M_{\neg \phi}$ of $\operatorname{ZFC}$ such that $\mathcal M_{\phi} \models \phi$ and $\mathcal M_{\neg \phi} \models \neg \phi$ (i.e. $\phi$ is true in $\mathcal M_{\phi}$ and $\phi$ is false in $\mathcal M_{\neg \phi}$). If this is possible, then this proves that $\phi$ is not decided by $\operatorname{ZFC}$. Why is that?</p> <p>Well, if it were decided by $\operatorname{ZFC}$, then there would be a proof of $\phi$ or a proof of $\neg \phi$. Let us say that $\phi$ has a proof (the other case is the same). Then, by <em>soundness</em> of our proofs, any model that satisfies $\operatorname{ZFC}$ must satisfy $\phi$, so there cannot be a model $\mathcal M_{\neg \phi}$ as above.</p>
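<p>The two-model method is easiest to see in a tiny setting. As a toy illustration (the example and code below are mine, not from the sources above): the group axioms do not decide the statement $\phi \equiv$ "multiplication is commutative", since $\mathbb{Z}/5\mathbb{Z}$ and the symmetric group $S_3$ both satisfy the group axioms, while $\phi$ holds in the first and fails in the second. A brute-force check:</p>

```python
# Toy "independence proof" by exhibiting two models (illustration only):
# the group axioms neither prove nor refute phi = "xy = yx", because one
# model of the axioms satisfies phi and another one falsifies it.
from itertools import permutations, product

def is_group(elems, op):
    """Brute-force check of closure, associativity, identity and inverses."""
    elems = list(elems)
    if any(op(a, b) not in elems for a, b in product(elems, repeat=2)):
        return False  # not closed
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(elems, repeat=3)):
        return False  # not associative
    ids = [e for e in elems if all(op(e, a) == a == op(a, e) for a in elems)]
    if len(ids) != 1:
        return False  # no (unique) identity element
    e = ids[0]
    return all(any(op(a, b) == e for b in elems) for a in elems)  # inverses

def is_abelian(elems, op):
    elems = list(elems)
    return all(op(a, b) == op(b, a) for a, b in product(elems, repeat=2))

# Model M_phi: Z/5Z under addition -- a group in which phi is true.
z5 = range(5)
add5 = lambda a, b: (a + b) % 5

# Model M_not_phi: S_3, permutations of {0, 1, 2} under composition.
s3 = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))

assert is_group(z5, add5) and is_abelian(z5, add5)
assert is_group(s3, compose) and not is_abelian(s3, compose)
```

<p>Forcing is of course vastly more sophisticated, but the logical shape is the same: two structures interpreting the same axioms that disagree about $\phi$.</p>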
combinatorics
<p>If I want to find how many possible ways there are to choose k out of n elements, I know you can use the simple formula below:</p> <p>$$ \binom{n}{k} = \frac{n!}{k!(n-k)!} .$$</p> <p>What if I want to go the other way around though?</p> <p>That is, I know I want to have $X$ possible combinations, and I want to find all the various pairs of $n$ and $k$ that will give me that number of combinations.</p> <p>For example, if the number of combinations I want is $3$, I want a formula/method to find that all the pairs that will result in that number of combinations are $(3,1)$ and $(3,2)$.</p> <p>I know I could test all the possible pairs, but this would be impractical for large numbers.</p> <p>But perhaps there's no easier way of doing this than the brute force approach?</p>
<p>If $X$ is only as large as $10^7$ then this is straightforward to do with computer assistance. First note the elementary inequalities $$\frac{n^k}{k!} \ge {n \choose k} \ge \frac{(n-k)^k}{k!}$$</p> <p>which are close to tight when $k$ is small. If $X = {n \choose k}$, then it follows that $$n \ge \sqrt[k]{k! X} \ge n-k$$</p> <p>hence that $$\sqrt[k]{k! X} + k \ge n \ge \sqrt[k]{k! X}$$</p> <p>so for fixed $k$ you only have to check at most $k+1$ possible values of $n$, which is manageable when $k$ is small. You can speed up this process by factoring $X$ if you want and applying <a href="http://en.wikipedia.org/wiki/Lucas%27_theorem#Variations_and_generalizations">Kummer's theorem</a> (the first bullet point in that section of the article), but computing binomial coefficients for $k$ small is straightforward so this probably isn't necessary. </p> <p>For larger $k$, note that you can always assume WLOG that $n \ge 2k$ since ${n \choose k} = {n \choose n-k}$, hence you can assume that $$X = {n \choose k} \ge {2k \choose k} &gt; \frac{4^k}{2k+1}$$</p> <p>(see <a href="http://en.wikipedia.org/wiki/Proof_of_Bertrand%27s_postulate">Erdős' proof of Bertrand's postulate</a> for details on that last inequality). Consequently you only have to check logarithmically many values of $k$ (as a function of $X$). For example, if $X \le 10^7$ you only have to check up to $k = 14$.</p> <p>In total, applying the above algorithm you only have to check $O(\log(X)^2)$ pairs $(n, k)$, and each check requires at worst $O(\log(X))$ multiplications of numbers at most as large as $X$, together with at worst a comparison of two numbers of size $O(X)$. So the above algorithm takes polynomial time in $\log(X)$. 
</p> <p><strong>Edit:</strong> It should be totally feasible to just factor $X$ at the sizes you're talking about, but if you want to apply the Kummer's theorem part of the algorithm to larger $X$, you don't actually have to completely factor $X$; you can probably do the Kummer's theorem comparisons on the fly by computing the greatest power of $2$ that goes into $X$, then $3$, then $5$, etc. and storing these as necessary. As a second step, if neither $X$ nor the particular binomial coefficient ${n_0 \choose k_0}$ you're testing are divisible by a given small prime $p$, you can appeal to Lucas' theorem. Of course, you have to decide at some point when to stop testing small primes and just test for actual equality. </p>
<p>Here's an implementation in code of Qiaochu's answer. The algorithm, to recap, is:</p> <ul> <li><p>Input <span class="math-container">$X$</span>. (We want to find all <span class="math-container">$(n, k)$</span> such that <span class="math-container">$\binom{n}{k} = X$</span>.)</p></li> <li><p>For each <span class="math-container">$k \ge 1$</span> such that <span class="math-container">$4^k/(2k + 1) &lt; X$</span>,</p> <ul> <li><p>Let <span class="math-container">$m$</span> be the smallest number such that <span class="math-container">$m^k \ge k!X$</span>.</p></li> <li><p>For each <span class="math-container">$n$</span> from <span class="math-container">$\max(m, 2k)$</span> to <span class="math-container">$m + k$</span> (inclusive),</p> <ul> <li>If <span class="math-container">$\binom{n}{k} = X$</span>, yield <span class="math-container">$(n, k)$</span> and <span class="math-container">$(n, n-k)$</span>.</li> </ul></li> </ul></li> </ul> <p>It is written in Python (chose this language for readability and native big integers, not for speed). 
It is careful to use only integer arithmetic, to avoid any errors due to floating-point precision.</p> <p>The version below is optimized to avoid recomputing <span class="math-container">$\binom{n+1}{k}$</span> from scratch after having computed <span class="math-container">$\binom{n}{k}$</span>; this speeds it up so that for instance for <span class="math-container">$\binom{1234}{567}$</span> (a <span class="math-container">$369$</span>-digit number) it takes (on my laptop) 0.4 seconds instead of the 50 seconds taken by the unoptimized version in the <a href="https://math.stackexchange.com/revisions/2381576/1">first revision</a> of this answer.</p> <pre><code>#!/usr/bin/env python
from __future__ import division
import math


def binom(n, k):
    """Returns n choose k, for nonnegative integer n and k"""
    assert k &gt;= 0
    assert n &gt;= 0
    assert k == int(k)
    assert n == int(n)
    k = min(k, n - k)
    ans = 1
    for i in range(k):
        ans *= n - i
        ans //= i + 1
    return ans


def first_over(k, c):
    """Binary search to find smallest value of n for which n^k &gt;= c"""
    n = 1
    while n ** k &lt; c:
        n *= 2
    # Invariant: lo**k &lt; c &lt;= hi**k
    lo = 1
    hi = n
    while hi - lo &gt; 1:
        mid = lo + (hi - lo) // 2
        if mid ** k &lt; c:
            lo = mid
        else:
            hi = mid
    assert hi ** k &gt;= c
    assert (hi - 1) ** k &lt; c
    return hi


def find_n_k(x):
    """Given x, yields all n and k such that binom(n, k) = x."""
    assert x == int(x)
    assert x &gt; 1
    k = 0
    while True:
        k += 1
        # https://math.stackexchange.com/a/103385/205
        if (2 * k + 1) * x &lt;= 4**k:
            break
        nmin = first_over(k, math.factorial(k) * x)
        nmax = nmin + k + 1
        nmin = max(nmin, 2 * k)
        choose = binom(nmin, k)
        for n in range(nmin, nmax):
            if choose == x:
                yield (n, k)
                if k &lt; n - k:
                    yield (n, n - k)
            choose *= (n + 1)
            choose //= (n + 1 - k)


if __name__ == '__main__':
    import sys
    if len(sys.argv) &lt; 2:
        print('Pass X in the command to see (n, k) such that (n choose k) = X.')
        sys.exit(1)
    x = int(sys.argv[1])
    if x == 0:
        print('(n, k) for any n and any k &gt; n, and (0, 0)')
        sys.exit(0)
    if x == 1:
        print('(n, 0) and (n, n) for any n, and (1, 1)')
        sys.exit(0)
    for (n, k) in find_n_k(x):
        print('%s %s' % (n, k))
</code></pre> <p>Example runs:</p> <pre><code>~$ ./mse_103377_binom.py 2
2 1
~$ ./mse_103377_binom.py 3
3 1
3 2
~$ ./mse_103377_binom.py 6
6 1
6 5
4 2
~$ ./mse_103377_binom.py 10
10 1
10 9
5 2
5 3
~$ ./mse_103377_binom.py 20
20 1
20 19
6 3
~$ ./mse_103377_binom.py 55
55 1
55 54
11 2
11 9
~$ ./mse_103377_binom.py 120
120 1
120 119
16 2
16 14
10 3
10 7
~$ ./mse_103377_binom.py 3003
3003 1
3003 3002
78 2
78 76
15 5
15 10
14 6
14 8
~$ ./mse_103377_binom.py 8966473191018617158916954970192684
8966473191018617158916954970192684 1
8966473191018617158916954970192684 8966473191018617158916954970192683
123 45
123 78
~$ ./mse_103377_binom.py 116682544286207712305570174244760883486876241791210077037133735047856714594324355733933738740204795317223528882568337264611289789138133946725471443924278812277695432803500115699090641248357468388106131543801393801287657125117557432072695477147403395443757359171876874010770591355653882772562301453205472707597435095925666815012707478996454360460481159339802667334477440
116682544286207712305570174244760883486876241791210077037133735047856714594324355733933738740204795317223528882568337264611289789138133946725471443924278812277695432803500115699090641248357468388106131543801393801287657125117557432072695477147403395443757359171876874010770591355653882772562301453205472707597435095925666815012707478996454360460481159339802667334477440 1
116682544286207712305570174244760883486876241791210077037133735047856714594324355733933738740204795317223528882568337264611289789138133946725471443924278812277695432803500115699090641248357468388106131543801393801287657125117557432072695477147403395443757359171876874010770591355653882772562301453205472707597435095925666815012707478996454360460481159339802667334477440 116682544286207712305570174244760883486876241791210077037133735047856714594324355733933738740204795317223528882568337264611289789138133946725471443924278812277695432803500115699090641248357468388106131543801393801287657125117557432072695477147403395443757359171876874010770591355653882772562301453205472707597435095925666815012707478996454360460481159339802667334477439
1234 567
1234 667
</code></pre>
logic
<p>Suppose we have a line of people that starts with person #1 and goes for a (finite or infinite) number of people behind him/her, and this property holds for every person in the line: </p> <blockquote> <p><em>If everyone in front of you is bald, then you are bald</em>.</p> </blockquote> <p>Without further assumptions, does this mean that the first person is necessarily bald? Does it say <em>anything</em> about the first person at all? </p> <p>In my opinion, it means: </p> <blockquote> <p><em>If there exist anyone in front of you and they're all bald, then you're bald.</em> </p> </blockquote> <p>Generally, for a statement that consists of a subject and a predicate, if the subject doesn't exist, then does the statement have a truth value? </p> <p>I think there's a convention in math that if the subject doesn't exist, then the statement is right.</p> <p>I don't have a problem with this convention (in the same way that I don't have a problem with the meaning of '<em>or</em>' in math). My question is whether it's a clear logical implication of the facts, or we have to <strong>define the truth value</strong> for these subject-less statements.</p> <hr> <p><strong>Addendum:</strong> </p> <p>You can read up on this matter <a href="https://en.wikipedia.org/wiki/Syllogism#Existential_import" rel="noreferrer">here</a> (too).</p>
<p>You can see what's going on by reformulating the assumption in its equivalent contrapositive form: </p> <blockquote> <p><em>If I'm not bald, then there is someone in front of me who is not bald.</em></p> </blockquote> <p>Now the first person in line finds himself thinking, "There is <em>no one</em> in front of me. So it's not true that there is someone in front of me who is not bald. So it's not true that I'm not bald. So I must be bald!"</p>
<p>Mathematical logic <em>defines</em> a statement about <a href="https://en.wikipedia.org/wiki/Universal_quantification#The_empty_set" rel="nofollow noreferrer">all elements of an empty set</a> to be true. This is called <a href="https://en.wikipedia.org/wiki/Vacuous_truth" rel="nofollow noreferrer">vacuous truth</a>. It may be somewhat confusing since it doesn't agree with common everyday usage, where making a statement tends to suggest that there is some object for which the statement actually holds (like the person in front of you in your example).</p> <p>But it is <em>exactly</em> the right thing to do in a formal setup, for several reasons. One reason is that logical statements don't <em>suggest</em> anything: you must not assume any meaning in excess of what's stated explicitly. Another reason is that it makes several conversions possible without special cases. For example,</p> <p>$$\forall x\in(A\cup B):P(x)\;\Leftrightarrow \forall x\in A:P(x)\;\wedge\;\forall x\in B:P(x)$$</p> <p>holds even if $A$ (or $B$) happens to be the empty set. Another example is the conversion between universal and existential quantification <a href="https://math.stackexchange.com/users/86747/barry-cipra">Barry Cipra</a> <a href="https://math.stackexchange.com/a/1669020/35416">used</a>:</p> <p>$$\forall x\in A:\neg P(x)\;\Leftrightarrow \neg\exists x\in A:P(x)$$</p> <p>If you are into programming, then the following pseudocode snippet may also help explain this:</p> <pre><code>bool universal(set, property) {
    for (element in set)
        if (not property(element))
            return false
    return true
}
</code></pre> <p>As you can see, the universally quantified statement is <em>only</em> false if there <em>exists</em> an element of the set for which it does not hold. 
Conversely, you could define</p> <pre><code>bool existential(set, property) {
    for (element in set)
        if (property(element))
            return true
    return false
}
</code></pre> <p>This is also similar to other empty-set definitions like</p> <p>$$\sum_{x\in\emptyset}f(x)=0\qquad\prod_{x\in\emptyset}f(x)=1$$</p> <blockquote> <p>If everyone in front of you is bald, then you are bald.</p> </blockquote> <p>Applying the above to the statement from your question: from</p> <p>$$\bigl(\forall y\in\text{People in front of }x: \operatorname{bald}(y) \bigr)\implies\operatorname{bald}(x)$$</p> <p>one can derive</p> <p>$$\emptyset=\text{People in front of }x\implies\operatorname{bald}(x)$$</p> <p>so <strong>yes, the first person must be bald</strong> because there is no one in front of him.</p> <p>Some formalisms prefer to write the “People in front of” as a pair of predicates instead of a set. In such a setup, you'd see fewer sets and more implications:</p> <p>$$\Bigl(\forall y: \bigl(\operatorname{person}(y)\wedge(y\operatorname{infrontof}x)\bigr)\implies\operatorname{bald}(y) \Bigr)\implies\operatorname{bald}(x)$$</p> <p>If there is no $y$ satisfying both predicates, then the left hand side of the first implication is always false, rendering the implication as a whole always true, thus allowing us to conclude the baldness of the first person. The fact that an implication with a false antecedent is always true is another form of vacuous truth.</p> <p>Note to self: <a href="https://math.stackexchange.com/questions/556117/good-math-bed-time-stories-for-children#comment1182925_556133">this comment</a> indicates that <em>Alice in Wonderland</em> was dealing with vacuous truth at some point. I should re-read that book and quote any interesting examples when I find the time.</p>
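<p>Python's built-in quantifiers follow exactly these conventions, so the empty-set cases can be checked directly (a small illustration of my own; <code>math.prod</code> assumes Python 3.8+):</p>

```python
# Vacuous truth in Python: a universally quantified statement over an
# empty collection is true, an existentially quantified one is false.
# (math.prod requires Python 3.8+.)
import math

people_in_front = []  # the first person in line: nobody in front of them

def bald(person):
    # Never actually evaluated for the first person -- the iterable is empty.
    return person["bald"]

# "Everyone in front of you is bald" is vacuously true ...
assert all(bald(p) for p in people_in_front)

# ... while "someone in front of you is bald" is false.
assert not any(bald(p) for p in people_in_front)

# The analogous empty-set conventions for sums and products:
assert sum(f_x for f_x in []) == 0  # empty sum
assert math.prod([]) == 1           # empty product
```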
geometry
<p>If so, what would be the most efficient algorithm for generating spheres with different number of hexagonal faces at whatever interval required to make them fit uniformly or how might you calculate how many hexagonal faces are required for each subdivision?</p>
<p>No, not even if we permit non-regular hexagonal faces. (We do, however, preclude hexagons that are not strictly convex—where interior angles can be <span class="math-container">$180$</span> degrees or more—since those permit degenerate tilings of the sort David K mentions in the comments.) The reason is more graph-theoretical than geometrical.</p> <p>We begin with Euler's formula, relating the number of faces <span class="math-container">$F$</span>, the number of vertices <span class="math-container">$V$</span>, and the number of edges <span class="math-container">$E$</span>:</p> <p><span class="math-container">$$ F+V-E = 2 $$</span></p> <p>Consider the faces meeting at a vertex. There must be at least three of them, since it is not possible in a solid for only two faces to meet at a vertex.* Thus, if we add up the six vertices for each hexagonal face, we will count each vertex <em>at least</em> three times. That is to say,</p> <p><span class="math-container">$$ V \leq \frac{6F}{3} = 2F $$</span></p> <p>On the other hand, if we add up the six edges for each hexagonal face, we will count each edge <em>exactly</em> twice, so that</p> <p><span class="math-container">$$ E = \frac{6F}{2} = 3F $$</span></p> <p>Substituting these into Euler's formula, we obtain</p> <p><span class="math-container">$$ F+V-E \leq F+2F-3F = 0 $$</span></p> <p>But if <span class="math-container">$F+V-E \leq 0$</span>, then it is impossible that <span class="math-container">$F+V-E = 2$</span>, so no solid can be composed solely of hexagonal faces, even if we permit non-regular hexagons.</p> <blockquote> <p>*ETA (2022-05-17): This is a concession to geometry; the argument isn't <em>strictly</em> graph-theoretical. If you permit situations where only two faces meet at a vertex, then it <em>is</em> possible to create a polyhedron with only hexagons. For instance, start with a cube, and describe a circuit around the faces. 
(One such circuit goes along the faces of an ordinary die in numerical order.) Then on the edge adjoining two faces in this circuit, add an additional vertex in the middle. Each face will have two new vertices, converting each square into a &quot;hexagon&quot; of sorts. But I think most people would agree this isn't a very interesting positive response to the original question.</p> </blockquote> <hr /> <p><em>If</em> we now restrict ourselves to regular faces, we can show an interesting fact: Any solid with faces made up of nothing other than regular hexagons and pentagons must have exactly <span class="math-container">$12$</span> pentagons on it (the limiting case being the hexagon-free dodecahedron).</p> <p>Again, we begin with Euler's formula:</p> <p><span class="math-container">$$ F+V-E = 2 $$</span></p> <p>Let <span class="math-container">$F_5$</span> be the number of pentagonal faces, and <span class="math-container">$F_6$</span> be the number of hexagonal faces. Then</p> <p><span class="math-container">$$ F = F_5+F_6 $$</span></p> <p>The only number of faces that can meet at a vertex is three; there isn't enough angular room for four faces to meet, and as before, solids can't have only two faces meet at a vertex. 
If we add up the five vertices of each pentagon and the six vertices of each hexagon, then we have counted each vertex three times:</p> <p><span class="math-container">$$ V = \frac{5F_5+6F_6}{3} $$</span></p> <p>Similarly, if we count up the five edges of each pentagon and the six edges of each hexagon, then we have counted each edge twice, so</p> <p><span class="math-container">$$ E = \frac{5F_5+6F_6}{2} $$</span></p> <p>Plugging these expressions back into Euler's formula, we obtain</p> <p><span class="math-container">$$ F_5+F_6+\frac{5F_5+6F_6}{3}-\frac{5F_5+6F_6}{2} = 2 $$</span></p> <p>The <span class="math-container">$F_6$</span> terms cancel out, leaving</p> <p><span class="math-container">$$ \frac{F_5}{6} = 2 $$</span></p> <p>or just <span class="math-container">$F_5 = 12$</span>.</p> <hr /> <p>I've heard tell that any number of hexagonal faces <span class="math-container">$F_6$</span> is permitted except <span class="math-container">$F_6 = 1$</span>, but I haven't confirmed that for myself. The basic line of reasoning for excluding <span class="math-container">$F_6 = 1$</span> may be as follows: Suppose a thirteen-sided polyhedron with one hexagonal face and twelve pentagonal faces exists. Consider the hexagonal face. It must be surrounded by six pentagonal faces; call these <span class="math-container">$A$</span> through <span class="math-container">$F$</span>. Those pentagonal faces describe, at their &quot;outer&quot; edge, a perimeter with twelve edges and twelve vertices, which must be shared by a further layer of six pentagonal faces; call these <span class="math-container">$G$</span> through <span class="math-container">$L$</span>.</p> <p>There cannot be fewer than this, because the twelve edges are arranged in a cycle of six successive pairs, each pair belonging to one of <span class="math-container">$A$</span> through <span class="math-container">$F$</span>. 
No two faces can share more than one edge, so the twelve edges must be shared amongst six faces <span class="math-container">$G$</span> through <span class="math-container">$L$</span>, but &quot;out of phase&quot; with <span class="math-container">$A$</span> through <span class="math-container">$F$</span>.</p> <p>However, these pentagonal faces <span class="math-container">$G$</span> through <span class="math-container">$L$</span> cannot terminate in a single vertex—they would have to be squares to do that. Hence, they must terminate in a second hexagon. Thus, a polyhedron of the type envisioned cannot exist.</p> <p>Likely the above approach could be made more rigorous, or perhaps there is a more clever demonstration.</p>
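<p>Both counting arguments are easy to verify numerically. The sketch below (my own illustration; the function name is made up) recomputes $F+V-E$ from the face counts, under the answer's assumption that exactly three faces meet at each vertex:</p>

```python
# Recomputing the Euler characteristic F + V - E from face counts, under
# the assumption that exactly three faces meet at each vertex.
from fractions import Fraction

def euler_characteristic(f5, f6):
    """F + V - E for f5 pentagons and f6 hexagons, three faces per vertex."""
    faces = f5 + f6
    vertices = Fraction(5 * f5 + 6 * f6, 3)  # each vertex counted 3 times
    edges = Fraction(5 * f5 + 6 * f6, 2)     # each edge counted twice
    return faces + vertices - edges

# Twelve pentagons give characteristic 2 regardless of the hexagon count:
assert all(euler_characteristic(12, f6) == 2 for f6 in range(100))

# Hexagons alone can never reach 2, matching the impossibility proof:
assert euler_characteristic(0, 20) == 0

# Sanity checks: dodecahedron (12, 0) and truncated icosahedron (12, 20).
assert euler_characteristic(12, 0) == 2 and euler_characteristic(12, 20) == 2
```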
<p>If a compromise is acceptable, the $2p, 4p, 8p, ... $ subdivisions of each side of an icosahedron in Buckminster Fuller domes leave behind a pentagon at each of the icosahedron's 12 vertices (12 pentagons in total). </p> <p>Alternatively, it is possible by means of a stereographic projection of a <em>flat regular hexagonal net</em> (one node of which touches the south pole, while the other nodes/junctions connect toward the north pole); the curvilinear hexagonal boundary cells shrink to zero size towards the north pole according to the standard stereographic scaling. They can be seen in the POV-Ray image provided by user PM 2Ring below. A part of a spiral has been traced, connecting opposite vertices of some hexagons.</p> <p><a href="https://i.sstatic.net/aGy4l.png" rel="noreferrer"><img src="https://i.sstatic.net/aGy4l.png" alt="Riemann_Hexa_Stereo"></a></p> <p>It is a collection of log spirals centered at the south pole that isogonally (i.e., conformally) project to rhumb lines (loxodromes of constant inclination $\pm \pi/6$ to the meridians), with the corresponding latitude circles drawn. I could make an image later if you wish to see it. Since these loxodromes are not overtly seen in the above image, some segments are indicated across some hexagon diameters.</p>
matrices
<p>In answering <a href="https://math.stackexchange.com/questions/71235/do-these-matrix-rings-have-non-zero-elements-that-are-neither-units-nor-zero-divi">Do these matrix rings have non-zero elements that are neither units nor zero divisors?</a> I was surprised how hard it was to find anything on the Web about the generalization of the following fact to commutative rings:</p> <blockquote> <p>A square matrix over a field has trivial kernel if and only if its determinant is non-zero.</p> </blockquote> <p>As Bill demonstrated in the above question, a related fact about fields generalizes directly to commutative rings:</p> <blockquote> <p>A square matrix over a commutative ring is invertible if and only if its determinant is invertible.</p> </blockquote> <p>However, the kernel being trivial and the matrix being invertible are not equivalent for general rings, so the question arises what the proper generalization of the first fact is. Since it took me quite a lot of searching to find the answer to this rather basic question, and it's <a href="http://blog.stackoverflow.com/2011/07/its-ok-to-ask-and-answer-your-own-questions/">explicitly encouraged</a> to write a question and answer it to document something that might be useful to others, I thought I'd write this up here in an accessible form.</p> <p>So my questions are: What is the relationship between the determinant of a square matrix over a commutative ring and the triviality of its kernel? Can the simple relationship that holds for fields be generalized? And (generalizing with a view to the answer) what is a necessary and sufficient condition for a (not necessarily square) matrix over a commutative ring to have trivial kernel?</p>
<p>I found the answer in <a href="http://web.archive.org/web/20150326123218/http://math.ucdenver.edu/%7Espayne/classnotes/09LinAlg.pdf" rel="nofollow noreferrer">A Second Semester of Linear Algebra</a> by S. E. Payne (in Section <span class="math-container">$6.4.14$</span>, “Determinants, Ranks and Linear Equations”). I'd tried using a similar Laplace expansion myself but was missing the idea of using the largest dimension at which the minors are not all annihilated by the same non-zero element. I'll try to summarize the argument in somewhat less formal terms, omitting the tangential material included in the book.</p> <p>Let <span class="math-container">$A$</span> be an <span class="math-container">$m\times n$</span> matrix over a commutative ring <span class="math-container">$R$</span>. We want to find a condition for the system of equations <span class="math-container">$Ax=0$</span> with <span class="math-container">$x\in R^n$</span> to have a non-trivial solution. If <span class="math-container">$R$</span> is a field, various definitions of the rank of <span class="math-container">$A$</span> <a href="http://en.wikipedia.org/wiki/Rank_%28linear_algebra%29#Alternative_definitions" rel="nofollow noreferrer">coincide</a>, including the column rank (the dimension of the column space), the row rank (the dimension of the row space) and the determinantal rank (the largest dimension of a non-zero minor). This is not the case for a general commutative ring. 
It turns out that for our present purposes a useful <em>generalization of rank</em> is the largest integer <span class="math-container">$k$</span> such that there is no non-zero element of <span class="math-container">$R$</span> that annihilates all minors of dimension <span class="math-container">$k$</span>, with <span class="math-container">$k=0$</span> if there is no such integer.</p> <blockquote> <p>We want to show that <span class="math-container">$Ax=0$</span> has a non-trivial solution if and only if <span class="math-container">$k\lt n$</span>.</p> </blockquote> <p>If <span class="math-container">$k=0$</span>, there is a non-zero element <span class="math-container">$r\in R$</span> which annihilates all matrix elements (the minors of dimension <span class="math-container">$1$</span>), so there is a non-trivial solution</p> <p><span class="math-container">$$A\pmatrix{r\\\vdots\\r}=0\;.$$</span></p> <p>Now assume <span class="math-container">$0\lt k\lt n$</span>. If <span class="math-container">$m\lt n$</span>, we can add rows of zeros to <span class="math-container">$A$</span> without changing <span class="math-container">$k$</span> or the solution set, so we can assume <span class="math-container">$k\lt n\le m$</span>. There is some non-zero element <span class="math-container">$r\in R$</span> that annihilates all minors of dimension <span class="math-container">$k+1$</span>, and there is a minor of dimension <span class="math-container">$k$</span> that isn't annihilated by <span class="math-container">$r$</span>. Without loss of generality, assume that this is the minor of the first <span class="math-container">$k$</span> rows and columns. 
Now consider the matrix formed of the first <span class="math-container">$k+1$</span> rows and columns of <span class="math-container">$A$</span>, and form a solution <span class="math-container">$x$</span> from the <span class="math-container">$(k+1)$</span>-th column of its <a href="http://en.wikipedia.org/wiki/Adjugate_matrix" rel="nofollow noreferrer">adjugate</a> by multiplying it by <span class="math-container">$r$</span> and padding it with zeros. By construction, the first <span class="math-container">$k$</span> entries of <span class="math-container">$Ax$</span> are determinants of a matrix with two equal rows, and thus vanish; the remaining entries are each <span class="math-container">$r$</span> times a minor of dimension <span class="math-container">$k+1$</span>, and thus also vanish. But the <span class="math-container">$(k+1)$</span>-th entry of this solution is non-zero, being <span class="math-container">$r$</span> times the minor of the first <span class="math-container">$k$</span> rows and columns, which isn't annihilated by <span class="math-container">$r$</span>. Thus we have constructed a non-trivial solution.</p> <p>In summary, if <span class="math-container">$k\lt n$</span>, there is a non-trivial solution to <span class="math-container">$Ax=0$</span>.</p> <p>Now assume conversely that there is such a solution <span class="math-container">$x$</span>. If <span class="math-container">$n\gt m$</span>, there are no minors of dimension <span class="math-container">$n$</span>, so <span class="math-container">$k\lt n$</span>. Thus we can assume <span class="math-container">$n\le m$</span>. The minors of dimension <span class="math-container">$n$</span> are the determinants of matrices <span class="math-container">$B$</span> formed by choosing any <span class="math-container">$n$</span> rows of <span class="math-container">$A$</span>. 
Since each row of <span class="math-container">$A$</span> times <span class="math-container">$x$</span> is <span class="math-container">$0$</span>, we have <span class="math-container">$Bx=0$</span>, and then multiplying by the adjugate of <span class="math-container">$B$</span> yields <span class="math-container">$\det B x=0$</span>. Since there is at least one non-zero entry in the non-trivial solution <span class="math-container">$x$</span>, there is at least one non-zero element of <span class="math-container">$R$</span> that annihilates all minors of size <span class="math-container">$n$</span>, and thus <span class="math-container">$k\lt n$</span>.</p> <p>Specializing to the case <span class="math-container">$m=n$</span> of square matrices, we can conclude:</p> <blockquote> <p>A system of linear equations <span class="math-container">$Ax=0$</span> with a square <span class="math-container">$n\times n$</span> matrix <span class="math-container">$A$</span> over a commutative ring <span class="math-container">$R$</span> has a non-trivial solution if and only if its determinant (its only minor of dimension <span class="math-container">$n$</span>) is annihilated by some non-zero element of <span class="math-container">$R$</span>, that is, if its determinant is a zero divisor or zero.</p> </blockquote>
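<p>As a concrete sanity check of this criterion (my own illustration, not from Payne's notes), one can brute-force all $2\times 2$ matrices over $R=\mathbb{Z}/6\mathbb{Z}$ and confirm that a non-trivial kernel occurs exactly when the determinant is zero or a zero divisor:</p>

```python
# Brute-force check over R = Z/6Z of the boxed statement above:
# a square matrix has a non-trivial kernel iff det is zero or a zero divisor.
from itertools import product

MOD = 6  # Z/6Z has zero divisors: 2 * 3 = 0

def det2(a, b, c, d):
    """Determinant of [[a, b], [c, d]] over Z/6Z."""
    return (a * d - b * c) % MOD

def has_nontrivial_kernel(a, b, c, d):
    """True iff some (x, y) != (0, 0) satisfies A(x, y) = 0 over Z/6Z."""
    return any(
        (a * x + b * y) % MOD == 0 and (c * x + d * y) % MOD == 0
        for x, y in product(range(MOD), repeat=2)
        if (x, y) != (0, 0)
    )

def annihilated(r):
    """True iff r is zero or a zero divisor in Z/6Z."""
    return any((r * s) % MOD == 0 for s in range(1, MOD))

# The equivalence holds for every 2x2 matrix over Z/6Z:
for a, b, c, d in product(range(MOD), repeat=4):
    assert has_nontrivial_kernel(a, b, c, d) == annihilated(det2(a, b, c, d))

# Example: det = 2 is a zero divisor (2 * 3 = 0), and indeed A(3, 0) = 0.
assert det2(2, 0, 0, 1) == 2 and has_nontrivial_kernel(2, 0, 0, 1)
```

<p>Note that the matrix in the example has a zero-divisor determinant and hence a non-trivial kernel, even though it is not invertible; over a field the two notions would coincide.</p>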
<p>See Section III.8.7, entitled <a href="http://books.google.com/books?id=STS9aZ6F204C&amp;lpg=PP1&amp;dq=intitle%3Aalgebra%20inauthor%3Abourbaki&amp;pg=PA534#v=onepage&amp;q&amp;f=false" rel="nofollow noreferrer">Application to Linear Equations</a>, of <strong>Algebra</strong>, by Nicolas Bourbaki.</p> <p><strong>EDIT 1.</strong> Let <span class="math-container">$R$</span> be a commutative ring, let <span class="math-container">$m$</span> and <span class="math-container">$n$</span> be positive integers, let <span class="math-container">$M$</span> be an <span class="math-container">$R$</span>-module, and let <span class="math-container">$A:R^n\to M$</span> be <span class="math-container">$R$</span>-linear. </p> <p>Identify the <span class="math-container">$n$</span>-th exterior power <span class="math-container">$\Lambda^n(R^n)$</span> of <span class="math-container">$R^n$</span> with <span class="math-container">$R$</span> in the obvious way, so that <span class="math-container">$\Lambda^n(A)$</span> is a map from <span class="math-container">$R$</span> to <span class="math-container">$\Lambda^n(M)$</span>. </p> <p>Put <span class="math-container">$v_i:=Ae_i$</span>, where <span class="math-container">$e_i$</span> is the <span class="math-container">$i$</span>-th vector of the canonical basis of <span class="math-container">$R^n$</span>. In particular we have <span class="math-container">$$ Ax=\sum_{i=1}^n\ x_i\ v_i,\quad\Lambda^n(A)\ r=r\ v_1\wedge\cdots\wedge v_n. $$</span> (where <span class="math-container">$x_i$</span> is the <span class="math-container">$i$</span>-th coordinate of <span class="math-container">$x$</span>, and <span class="math-container">$r$</span> denotes any element of <span class="math-container">$\Lambda^n\left(R^n\right) \cong R$</span>).</p> <blockquote> <p>If <span class="math-container">$\Lambda^n(A)$</span> is injective, so is <span class="math-container">$A$</span>. 
</p> </blockquote> <p>In other words: </p> <blockquote> <p>If the <span class="math-container">$v_i$</span> are linearly dependent, then <span class="math-container">$r\ v_1\wedge\cdots\wedge v_n=0$</span> for some nonzero <span class="math-container">$r$</span> in <span class="math-container">$R$</span>. </p> </blockquote> <p>Indeed, for <span class="math-container">$x$</span> in <span class="math-container">$\ker A$</span> we have <span class="math-container">$$ \Lambda^n(A)\ x_1=x_1\ v_1\wedge v_2\wedge\cdots\wedge v_n= -\sum_{i=2}^n\ x_i\ v_i\wedge v_2\wedge\cdots\wedge v_n=0, $$</span> and, similarly, <span class="math-container">$\Lambda^n(A)\ x_i=0$</span> for all <span class="math-container">$i$</span>. </p> <p>[<em>Edit</em>: Old version (before Georges's comment): Assume now that <span class="math-container">$M$</span> embeds into <span class="math-container">$R^m$</span>.] </p> <p>Assume now that there is an <span class="math-container">$R$</span>-linear injection <span class="math-container">$B:M\to R^m$</span> such that <span class="math-container">$$ \Lambda^n(B):\Lambda^n(M)\to\Lambda^n(R^m) $$</span> is injective. This is always the case (for a suitable <span class="math-container">$m$</span>) if <span class="math-container">$M$</span> is projective and finitely generated. </p> <blockquote> <p>If <span class="math-container">$A$</span> is injective, so is <span class="math-container">$\Lambda^n(A)$</span>. </p> </blockquote> <p>In other words: </p> <blockquote> <p>If <span class="math-container">$r\ v_1\wedge\cdots\wedge v_n=0$</span> for some nonzero <span class="math-container">$r$</span> in <span class="math-container">$R$</span>, then the <span class="math-container">$v_i$</span> are linearly dependent. </p> </blockquote> <p>The proof is given in joriki's nice answer. 
</p> <p>This is also proved as <a href="http://books.google.com/books?id=STS9aZ6F204C&amp;lpg=PP1&amp;dq=intitle%3Aalgebra%20inauthor%3Abourbaki&amp;pg=PA519#v=onepage&amp;q&amp;f=false" rel="nofollow noreferrer">Proposition 12</a> in Bourbaki's <strong>Algebra</strong> III.7.9 p. 519. Unfortunately, I don't understand Bourbaki's argument. I'd be most grateful to whoever would be kind and patient enough to explain it to me. </p> <p><strong>EDIT 2.</strong> According to the indications given by Tsit-Yuen Lam on <a href="http://books.google.com/books?id=LKMCTP4EecYC&amp;lpg=PA150&amp;ots=8BnyVUOYkL&amp;dq=mccoy%20rank%20ring%20matrix&amp;pg=PA150#v=onepage&amp;q=mccoy%20rank%20ring%20matrix&amp;f=false" rel="nofollow noreferrer">page 150</a> of his book <strong>Exercises in modules and rings</strong>, the theorem is due to N. H. McCoy, and appeared first, as Theorem 1 page 288, in </p> <ul> <li>N. H. McCoy, Remarks on Divisors of Zero, The American Mathematical Monthly Vol. 49, No. 5 (May, 1942), pp. 286-295, <a href="http://www.jstor.org/stable/2303094" rel="nofollow noreferrer">JSTOR</a>. </li> </ul> <p>Lam also says that </p> <ul> <li>N. H. McCoy, <a href="http://books.google.com/books?id=FHAGAQAAIAAJ" rel="nofollow noreferrer">Rings and ideals</a>, The Carus Mathematical Monographs, no. 8, The Mathematical Association of America, 1948, </li> </ul> <p>is an "excellent exposition" of the subject. See Theorem 51 page 159. </p> <p>McCoy's Theorem is also stated and proved in the following texts: </p> <ul> <li><p>Ex. 5.23.A(3) on <a href="http://books.google.com/books?id=LKMCTP4EecYC&amp;lpg=PA150&amp;ots=8BnyVUOYkL&amp;dq=mccoy%20rank%20ring%20matrix&amp;pg=PA149#v=onepage&amp;q=mccoy%20rank%20ring%20matrix&amp;f=false" rel="nofollow noreferrer">page 149</a> of Lam's <strong>Exercises in modules and rings</strong>. 
</p></li> <li><p>Theorem 2.2 page 3 in Anton Gerashenko's notes from Lam's Course: Math 274, Commutative Rings, Fall 2006: <a href="https://stacky.net/files/written/CommRings/CommRing.pdf" rel="nofollow noreferrer">PDF file</a>.</p></li> <li><p>Theorem 1.6 in Chapter 13, entitled "Various topics", of <a href="http://math.uchicago.edu/~amathew/downloads.html" rel="nofollow noreferrer">The CRing Project</a>. --- <a href="http://math.uchicago.edu/~amathew/chvarious.pdf" rel="nofollow noreferrer">PDF file for Chapter 13</a>. --- <a href="http://math.uchicago.edu/~amathew/CRing.pdf" rel="nofollow noreferrer">PDF file for the whole book</a>. </p></li> <li><p>Blocki, Zbigniew, <a href="http://www2.im.uj.edu.pl/actamath/PDF/30-215-218.pdf" rel="nofollow noreferrer">An elementary proof of the McCoy theorem</a>, J. Univ. Iagel. Acta Math.; N 30; 1993; 215-218. </p></li> <li><p>Theorem 6.4.16. page 101, <strong>A Second Semester of Linear Algebra</strong>, Math 5718, by Stan Payne. <a href="https://web.archive.org/web/20161207060453/http://math.ucdenver.edu/~spayne/classnotes/09LinAlg.pdf" rel="nofollow noreferrer">PDF file</a>.</p></li> </ul>
logic
<p>Suppose some number <span class="math-container">$n \in \mathbb{N}$</span> is divisible by <span class="math-container">$144$</span>.</p> <p><span class="math-container">$$\implies \frac{n}{144}=k, \space \space \space k \in \mathbb{Z} \\ \iff \frac{n}{36\cdot4}=k \iff \frac{n}{36}=4k$$</span></p> <p>Since any whole number times a whole number is still a whole number, it follows that <span class="math-container">$n$</span> must also be divisible by <span class="math-container">$36$</span>. However, what I think I have just shown is:</p> <p><span class="math-container">$$\text{A number }n \space \text{is divisble by} \space 144 \implies n \space \text{is divisible by} \space 36 \space (1)$$</span></p> <p>Is that the same as saying: <span class="math-container">$$\text{For a number to be divisible by 144 it has to be divisible by 36} \space (2)$$</span></p> <p>In other words, are statements (1) and (2) equivalent?</p>
<p>Yes, it's the same. <span class="math-container">$A\implies B$</span> is equivalent to "if we have <span class="math-container">$A$</span>, we must have <span class="math-container">$B$</span>".</p> <p>And your proof looks fine. Good job.</p> <p>If I were to offer some constructive criticism, it would be of the general kind: In number theory, even though we call the property "divisible", we usually avoid division whenever possible. Of the four basic arithmetic operations it is the only one which makes integers into non-integers. And number theory is all about integers.</p> <p>Therefore, "<span class="math-container">$n$</span> is divisible by <span class="math-container">$144$</span>", or "<span class="math-container">$144$</span> divides <span class="math-container">$n$</span>" as it's also called, is defined a bit backwards:</p> <blockquote> <p>There is an integer <span class="math-container">$k$</span> such that <span class="math-container">$n=144k$</span></p> </blockquote> <p>(This is defined for any number in place of <span class="math-container">$144$</span>, except <span class="math-container">$0$</span>.)</p> <p>Using that definition, your proof becomes something like this:</p> <p>If <span class="math-container">$n$</span> is divisible by <span class="math-container">$144$</span>, then there is an integer <span class="math-container">$k$</span> such that <span class="math-container">$n=144k$</span>. This gives <span class="math-container">$$ n=144k=(36\cdot4)k=36(4k) $$</span> Since <span class="math-container">$4k$</span> is an integer, this means <span class="math-container">$n$</span> is also divisible by <span class="math-container">$36$</span>.</p>
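<p>A quick mechanical check of both directions (my own addition, not part of the proof): every multiple of <span class="math-container">$144$</span> in a range is also a multiple of <span class="math-container">$36$</span>, while the converse fails.</p>

```python
# If 144 | n then n = 144k = 36*(4k), so 36 | n as well.
multiples_of_144 = [n for n in range(1, 10000) if n % 144 == 0]
assert all(n % 36 == 0 for n in multiples_of_144)

# The converse fails: 36 is divisible by 36 but not by 144.
print(36 % 36, 36 % 144)  # 0 36
```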
<p>Yes, that's correct. Or simply note that</p> <p><span class="math-container">$$n=144\cdot k= 36\cdot (4\cdot k),$$</span></p> <p>so <span class="math-container">$36$</span> divides <span class="math-container">$n$</span>. The converse fails: <span class="math-container">$n=36$</span> is divisible by <span class="math-container">$36$</span> but not by <span class="math-container">$144$</span>.</p>
matrices
<p>In my linear algebra class, we just talked about determinants. So far I’ve been understanding the material okay, but now I’m very confused. I get that when the determinant is zero, the matrix doesn’t have an inverse. I can find the determinant of a <span class="math-container">$2\times 2$</span> matrix by the formula. Our teacher showed us how to compute the determinant of an <span class="math-container">$n \times n$</span> matrix by breaking it up into the determinants of smaller matrices. Apparently there is a way by summing over a bunch of permutations. But the notation is really hard for me and I don’t really know what’s going on with them anymore. Can someone help me figure out what a determinant is, intuitively, and how all those definitions of it are related?</p>
<p>Your trouble with determinants is pretty common. They’re a hard thing to teach well, too, for two main reasons that I can see: the formulas you learn for computing them are messy and complicated, and there’s no “natural” way to interpret the value of the determinant, the way it’s easy to interpret the derivatives you do in calculus at first as the slope of the tangent line. It’s hard to believe things like the invertibility condition you’ve stated when it’s not even clear what the numbers mean and where they come from.</p> <p>Rather than show that the many usual definitions are all the same by comparing them to each other, I’m going to state some general properties of the determinant that I claim are enough to specify uniquely what number you should get when you put in a given matrix. Then it’s not too bad to check that all of the definitions for determinant that you’ve seen satisfy those properties I’ll state.</p> <p>The first thing to think about if you want an “abstract” definition of the determinant to unify all those others is that it’s not an array of numbers with bars on the side. What we’re really looking for is a function that takes N vectors (the N columns of the matrix) and returns a number. Let’s assume we’re working with real numbers for now.</p> <p>Remember how those operations you mentioned change the value of the determinant?</p> <ol> <li><p>Switching two rows or columns changes the sign.</p> </li> <li><p>Multiplying one row by a constant multiplies the whole determinant by that constant.</p> </li> <li><p>The general fact that number two draws from: the determinant is <em>linear in each row</em>. 
That is, if you think of it as a function <span class="math-container">$\det: \mathbb{R}^{n^2} \rightarrow \mathbb{R}$</span>, then <span class="math-container">$$ \det(a \vec v_1 +b \vec w_1 , \vec v_2 ,\ldots,\vec v_n ) = a \det(\vec v_1,\vec v_2,\ldots,\vec v_n) + b \det(\vec w_1, \vec v_2, \ldots,\vec v_n),$$</span> and the corresponding condition in each other slot.</p> </li> <li><p>The determinant of the identity matrix <span class="math-container">$I$</span> is <span class="math-container">$1$</span>.</p> </li> </ol> <p>I claim that these facts are enough to define a <em>unique function</em> that takes in N vectors (each of length N) and returns a real number, the determinant of the matrix given by those vectors. I won’t prove that, but I’ll show you how it helps with some other interpretations of the determinant.</p> <p>In particular, there’s a nice geometric way to think of a determinant. Consider the unit cube in N dimensional space: the set of N vectors of length 1 with coordinates 0 or 1 in each spot. The determinant of the linear transformation (matrix) T is the <em>signed volume of the region gotten by applying T to the unit cube</em>. (Don’t worry too much if you don’t know what the “signed” part means, for now).</p> <p>How does that follow from our abstract definition?</p> <p>Well, if you apply the identity to the unit cube, you get back the unit cube. And the volume of the unit cube is 1.</p> <p>If you stretch the cube by a constant factor in one direction only, the new volume is that constant. And if you stack two blocks together aligned on the same direction, their combined volume is the sum of their volumes: this all shows that the signed volume we have is linear in each coordinate when considered as a function of the input vectors.</p> <p>Finally, when you switch two of the vectors that define the unit cube, you flip the orientation. 
(Again, this is something to come back to later if you don’t know what that means).</p> <p>So there are ways to think about the determinant that aren’t symbol-pushing. If you’ve studied multivariable calculus, you could think about, with this geometric definition of determinant, why determinants (the Jacobian) pop up when we change coordinates doing integration. Hint: a derivative is a linear approximation of the associated function, and consider a “differential volume element” in your starting coordinate system.</p> <p>It’s not too much work to check that the area of the parallelogram formed by vectors <span class="math-container">$(a,b)$</span> and <span class="math-container">$(c,d)$</span> is <span class="math-container">$\Big|{}^{a\;b}_{c\;d}\Big|$</span> either: you might try that to get a sense for things.</p>
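<p>For readers who like to experiment, here is a small numeric sketch (my addition) checking, for <span class="math-container">$2\times 2$</span> determinants, two of the defining properties above together with the parallelogram-area interpretation:</p>

```python
def det2(v, w):
    """Determinant of the 2x2 matrix with rows v and w."""
    return v[0] * w[1] - v[1] * w[0]

v1, w1, v2 = (1.0, 2.0), (3.0, -1.0), (0.5, 4.0)
a, b = 2.0, -3.0

# Property 3: linearity in the first row (slot-by-slot)
lhs = det2((a * v1[0] + b * w1[0], a * v1[1] + b * w1[1]), v2)
rhs = a * det2(v1, v2) + b * det2(w1, v2)
assert abs(lhs - rhs) < 1e-12

# Property 1: swapping the two rows flips the sign
assert det2(v1, v2) == -det2(v2, v1)

# |det| is the area of the parallelogram spanned by the rows:
# the sheared unit square with edges (1,0) and (1,1) still has area 1.
assert det2((1, 0), (1, 1)) == 1
```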
<p>You could think of a determinant as a volume. Think of the columns of the matrix as vectors at the origin forming the edges of a skewed box. The determinant gives the volume of that box. For example, in 2 dimensions, the columns of the matrix are the edges of a parallelogram.</p> <p>You can derive the algebraic properties from this geometrical interpretation. For example, if two of the columns are linearly dependent, your box is missing a dimension and so it's been flattened to have zero volume.</p>
probability
<p>Say I have $X \sim \mathcal N(a, b)$ and $Y\sim \mathcal N(c, d)$. Is $XY$ also normally distributed?</p> <p>Is the answer any different if we know that $X$ and $Y$ are independent?</p>
<p>The product of two Gaussian random variables is distributed, in general, as a linear combination of two Chi-square random variables:</p> <p>$$ XY \,=\, \frac{1}{4} (X+Y)^2 - \frac{1}{4}(X-Y)^2$$ </p> <p>Now, $X+Y$ and $X-Y$ are Gaussian random variables, so that $(X+Y)^2$ and $(X-Y)^2$ are Chi-square distributed with 1 degree of freedom.</p> <p>If $X$ and $Y$ are both zero-mean, then</p> <p>$$ XY \sim c_1 Q - c_2 R$$</p> <p>where $c_1=\frac{Var(X+Y)}{4}$, $c_2 = \frac{Var(X-Y)}{4}$ and $Q, R \sim \chi^2_1$ are central.</p> <p>The variables $Q$ and $R$ are independent if and only if $Var(X) = Var(Y)$.</p> <p>In general, $Q$ and $R$ are noncentral and dependent.</p>
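<p>The decomposition above rests on the pointwise algebraic identity <span class="math-container">$xy = \tfrac14(x+y)^2 - \tfrac14(x-y)^2$</span>, which can be checked on simulated Gaussian samples (a sketch of mine; the parameters are arbitrary):</p>

```python
import random

random.seed(0)

# xy = ((x+y)^2 - (x-y)^2)/4 holds for every pair of reals,
# so in particular for jointly Gaussian samples.
for _ in range(1000):
    x = random.gauss(1.0, 2.0)    # X ~ N(1, 2^2), illustrative parameters
    y = random.gauss(-0.5, 1.5)   # Y ~ N(-0.5, 1.5^2)
    assert abs(x * y - ((x + y) ** 2 - (x - y) ** 2) / 4) < 1e-9
```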
<p>As @Yemon Choi showed in the first question, without any hypothesis the answer is negative since $P(X^2&lt;0)=0$ whereas $P(U&lt;0)\neq 0$ if $U$ is Gaussian.</p> <p>For the second question the answer is also no. Take $X$ and $Y$ two Gaussian random variables with mean $0$ and variance $1$. Since they have the same variance, $X-Y$ and $X+Y$ are independent Gaussian random variables. Put $Z:=\frac{X^2-Y^2}2=\frac{X-Y}{\sqrt 2}\frac{X+Y}{\sqrt 2}$. Then $Z$ is the product of two independent Gaussian, but the characteristic function of $Z$ is $\varphi_Z(t)=\frac 1{\sqrt{1+t^2}}$, which is not the characteristic function of a Gaussian.</p>
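<p>One can also see the non-normality empirically (my own check, using moments instead of the characteristic function): for independent standard normals, <span class="math-container">$E[(XY)^4]/E[(XY)^2]^2 = 9$</span>, far from the value <span class="math-container">$3$</span> that every Gaussian has.</p>

```python
import random

random.seed(42)
N = 200_000

# samples of Z = X*Y for independent standard normals X, Y
samples = [random.gauss(0, 1) * random.gauss(0, 1) for _ in range(N)]

m2 = sum(s ** 2 for s in samples) / N
m4 = sum(s ** 4 for s in samples) / N
kurtosis = m4 / m2 ** 2  # would be 3 if Z were Gaussian; theory predicts 9

print(kurtosis)
```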
combinatorics
<p>Aside from <span class="math-container">$1!\cdot n!=n!$</span> and <span class="math-container">$(n!-1)!\cdot n! = (n!)!$</span>, the only product of factorials known is <span class="math-container">$6!\cdot 7!=10!$</span>.</p> <p>One might naturally associate these numbers with the permutations on <span class="math-container">$6, 7,$</span> and <span class="math-container">$10$</span> objects, respectively, and hope that this result has some kind of connection to a sporadic relation between such permutations - numerical &quot;coincidences&quot; often have deep math behind them, like how <span class="math-container">$1^2+2^2+\ldots+24^2=70^2$</span> can be viewed as an ingredient that makes the Leech lattice work.</p> <p>The most natural thing to hope for would be a product structure on the groups <span class="math-container">$S_6$</span> and <span class="math-container">$S_7$</span> mapping to <span class="math-container">$S_{10}$</span>, but as <a href="https://mathoverflow.net/questions/324436/does-the-symmetric-group-s-10-factor-as-a-knit-product-of-symmetric-subgroup">this MathOverflow thread</a> shows, one cannot find disjoint copies of <span class="math-container">$S_6$</span> and <span class="math-container">$S_7$</span> living in <span class="math-container">$S_{10}$</span>, so a product structure seems unlikely.</p> <p>However, I'm holding out hope that some weaker kind of bijection can be found in a &quot;natural&quot; way. Obviously one <em>can</em> exhibit a bijection. For instance, identify the relative ordering of <span class="math-container">$1,2,\ldots 7$</span> in a permutation of size <span class="math-container">$10$</span>, and then biject <span class="math-container">$_{10}P_{3}=720$</span> with <span class="math-container">$S_6$</span> in some way. 
But I'd like to know if there is a way to define such a bijection which arises naturally from the permutation structures on these sets, and makes it clear why the construction does not extend to other orders.</p> <p>I tried doing something with orderings on polar axes of the dodecahedron (<span class="math-container">$10!$</span>) and orderings on polar axes of the icosahedron (<span class="math-container">$6!$</span>), in the hopes that the sporadic structure and symmetry of these Platonic solids would allow for interesting constructions that don't generalize, but ran into issues with the dodecahedron (sequences of dodecahedral axes aren't particularly nice objects) and the question of how to extract a permutation of length <span class="math-container">$7$</span>.</p> <p>I'm curious if someone can either devise a natural bijection between these sets or link to previous work on this question.</p>
<p>This family of bijections (of sets) <span class="math-container">$S_6\times S_7 \to S_{10}$</span> has already been suggested in comments and linked threads, but it is so pretty I wanted to spell it out:</p> <p>There are <span class="math-container">$10$</span> ways of partitioning the numbers <span class="math-container">$1,2,3,4,5,6$</span> into two (unordered) pieces of equal size: <span class="math-container">$P_1,P_2,\cdots,P_{10}$</span>. Thus we have a canonical embedding <span class="math-container">$S_6\hookrightarrow S_{10}$</span>, coming from the induced action on the <span class="math-container">$P_i$</span>.</p> <p>Any distinct pair <span class="math-container">$P_i,P_j$</span> will be related by a unique transposition. For example <span class="math-container">$\{\{1,2,3\},\{4,5,6\}\}$</span> (denoted hereafter <span class="math-container">$\left(\frac{123}{456}\right)$</span>) is related to <span class="math-container">$\left(\frac{126}{453}\right)$</span> via the transposition <span class="math-container">$(36)$</span>.</p> <p>There are two types of ordered (distinct) triples <span class="math-container">$P_i, P_j,P_k$</span>:</p> <ol> <li><p>They may be related pairwise via transpositions <span class="math-container">$(ab),(cd),(ef)$</span> with <span class="math-container">$a,b,c,d,e,f$</span> distinct and each of <span class="math-container">$\{a,b\}, \{c,d\},\{e,f\}$</span> not on the same side of any of <span class="math-container">$P_i, P_j,P_k$</span>:<span class="math-container">$$ \left(\frac{ace}{bdf}\right), \left(\frac{bce}{adf}\right), \left(\frac{ade}{bcf}\right).$$</span><br /> Here, there are <span class="math-container">$10$</span> choices for <span class="math-container">$P_i$</span>, <span class="math-container">$9$</span> choices for <span class="math-container">$P_j$</span> and <span class="math-container">$4$</span> choices for <span class="math-container">$P_k$</span>, giving <span class="math-container">$360$</span> 
triples in total.</p> </li> <li><p>They may be related pairwise via transpositions <span class="math-container">$(ab),(ca),(bc)$</span> with <span class="math-container">$a,b,c$</span> distinct: <span class="math-container">$$ \left(\frac{ace}{bdf}\right), \left(\frac{bce}{adf}\right), \left(\frac{abe}{cdf}\right).$$</span><br /> Again, there are <span class="math-container">$10$</span> choices for <span class="math-container">$P_i$</span>, <span class="math-container">$9$</span> choices for <span class="math-container">$P_j$</span> and <span class="math-container">$4$</span> choices for <span class="math-container">$P_k$</span>, giving <span class="math-container">$360$</span> triples in total.</p> </li> </ol> <p>An element of the stabiliser (in <span class="math-container">$S_6$</span>) of a type 1 ordered triple (written as above) must preserve the pairs <span class="math-container">$\{a,b\}, \{c,d\},\{e,f\}$</span>. Further if it swaps any of these pairs it must swap all of them, so the only non-trivial element of the stabiliser is an odd permutation: <span class="math-container">$(ab)(cd)(ef)$</span>.</p> <p>An element of the stabiliser (in <span class="math-container">$S_6$</span>) of a type 2 ordered triple (written as above) must preserve the sets <span class="math-container">$\{d,f\}, \{e\},\{a,c,b\}$</span>. Further it must fix each of <span class="math-container">$a,b,c$</span>. 
Thus the only non-trivial element of the stabiliser is an odd permutation: <span class="math-container">$(df)$</span>.</p> <p>As <span class="math-container">$|A_6|=360$</span>, in particular this means there is a unique element of <span class="math-container">$A_6$</span> taking the ordered triple <span class="math-container">$P_1,P_2,P_3$</span> to a specified ordered triple <span class="math-container">$P_i,P_j,P_k$</span> of the same type as <span class="math-container">$P_1,P_2,P_3$</span>.</p> <p>Fix <span class="math-container">$t\in S_{10}$</span>, a permutation taking <span class="math-container">$P_1,P_2,P_3$</span> to an ordered triple of the other type. Then there is a unique element in <span class="math-container">$A_6$</span> which, composed with <span class="math-container">$t$</span>, takes the ordered triple <span class="math-container">$P_1,P_2,P_3$</span> to a specified ordered triple <span class="math-container">$P_i,P_j,P_k$</span> of type opposite to that of <span class="math-container">$P_1,P_2,P_3$</span>.</p> <p>Let <span class="math-container">$S_7$</span> denote the group of permutations of <span class="math-container">$P_4,P_5,\cdots,P_{10}$</span>. 
Then any permutation in <span class="math-container">$S_{10}$</span> may be written uniquely as an element of <span class="math-container">$S_7$</span> followed by an element of <span class="math-container">$(A_6\sqcup tA_6)$</span>, where the latter is determined by where <span class="math-container">$P_1,P_2,P_3$</span> are mapped to.</p> <p>Thus we have established a bijection of sets <span class="math-container">$$S_{10}\to (A_6\sqcup tA_6)\times S_7.$$</span> Once we fix an odd permutation <span class="math-container">$t'\in S_6$</span>, we may identify the sets <span class="math-container">$$(A_6\sqcup t'A_6)\to S_6.$$</span> Composing we get: <span class="math-container">$$S_{10}\to (A_6\sqcup tA_6)\times S_7\to (A_6\sqcup t'A_6)\times S_7\to S_6\times S_7.$$</span></p> <p>That is for any choice of the permutations <span class="math-container">$t,t'$</span> we have the required bijection of sets.</p>
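<p>A tiny enumeration (my addition) confirms the counts this construction relies on: there are exactly ten half-partitions of <span class="math-container">$\{1,\dots,6\}$</span>, and the cardinalities match, <span class="math-container">$|S_6|\cdot|S_7| = |S_{10}|$</span>.</p>

```python
from itertools import combinations
from math import factorial

# Unordered partitions of {1,...,6} into two 3-element halves:
# count each partition once by insisting that the element 1 lies
# in the half we list, so the complement is never double-counted.
halves = [frozenset(c) for c in combinations(range(1, 7), 3) if 1 in c]

print(len(halves))                                   # 10
print(factorial(6) * factorial(7) == factorial(10))  # True
```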
<p>It may be connected with, of all things, the <span class="math-container">$3-4-5$</span> right triangle! This triangle and its multiples stand out as having the sides in arithmetic progression. Such an arithmetic progression leads to factorial expressions when the sides are multiplied together.</p> <p>As a preliminary step, consider a relatively unheralded property of right triangles: the diameter of the incircle plus the hypotenuse equals the sum of the other two sides. Suppose that the legs are <span class="math-container">$a$</span> and <span class="math-container">$b$</span>, and the hypotenuse is <span class="math-container">$c$</span> where <span class="math-container">$c^2=a^2+b^2$</span>. The diameter of the incircle is then <span class="math-container">$2ab/(a+b+c)$</span> while the Pythagorean relation implies <span class="math-container">$$(a+b+c)(a+b-c)=(a^2+2ab+b^2)-(a^2+b^2)=2ab$$</span> Thereby the diameter of the incircle reduces to <span class="math-container">$a+b-c$</span>. Should there be a right triangle whose sides are in arithmetic progression, then, the diameter of the incircle will join this progression, making it longer and thus perhaps generating a bigger factorial upon multiplication.</p> <p>In <a href="https://math.stackexchange.com/questions/3626718/triangle-construction-given-semiperimeter-and-radii-of-inscribed-and-circumscrib/3627358?r=SearchResults#3627358">this question</a> it is shown that the product of the sides of any triangle is half the product of the diameter of the circumcircle (circumdiameter), the diameter of the incircle (indiameter), and the perimeter. Let us see where that leads if we apply it to a right triangle having sides <span class="math-container">$3,4,5$</span>. 
Multiplying the sides together then gives</p> <p><span class="math-container">$3×4×5=\text{circumdiameter}×\text{indiameter}×\text{perimeter}/2$</span></p> <p>We double the sides of the triangle to clear the fraction on the right side:</p> <p><span class="math-container">$6×8×10=\text{circumdiameter}×\text{indiameter}×\text{perimeter}×4$</span></p> <p>The circumdiameter is the hypotenuse of the <span class="math-container">$3-4-5$</span> triangle, thus <span class="math-container">$5$</span>, which is in the aforementioned arithmetic progression. The indiameter is <span class="math-container">$2$</span> from the above lemma, which precedes <span class="math-container">$3,4,5$</span> in the arithmetic progression. And the perimeter of the triangle is three times the longer leg, again due to the arithmetic progression, thus <span class="math-container">$4×3$</span>. Substituting these results into the above product equality then gives</p> <p><span class="math-container">$6×8×10=5×2×(3×4)×4=5!×4$</span></p> <p>And there is our factorial. To make it cleaner we should multiply by <span class="math-container">$3/2$</span>, absorbing the dangling factor <span class="math-container">$4$</span> into the factorial. We then get three different three-term products on the left side, depending on which of the factors <span class="math-container">$6,8,10$</span> we increment:</p> <p><span class="math-container">$\color{blue}{8×9×10}=6×10×12=6×8×15=5×2×(3×4)×6=6!$</span></p> <p>And from the three-term product shown in blue, we have</p> <p><span class="math-container">$6!=10!/7!$</span></p> <p>Why is this uniquely chosen? We see that the sides of a right triangle being in arithmetic progression lead to the factorial on the right in two ways, by making the perimeter a simple multiple of one leg and by incorporating the circumdiameter into the arithmetic progression. 
Only the <span class="math-container">$3-4-5$</span> right triangle has these properties, and it leads specifically to <span class="math-container">$6!$</span> also being a factorial ratio.</p>
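<p>The arithmetic above is easy to verify directly (my addition):</p>

```python
from math import factorial

# product of the sides of the doubled 3-4-5 triangle
assert 6 * 8 * 10 == factorial(5) * 4  # 480 = 5! * 4

# after multiplying by 3/2, all three incremented products give 6!
assert 8 * 9 * 10 == 6 * 10 * 12 == 6 * 8 * 15 == factorial(6)

# and the blue product exhibits the sporadic relation 6! = 10!/7!
assert factorial(10) // factorial(7) == factorial(6)
print("all identities hold")
```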
geometry
<p>Imagine that you're a flatlander walking in your world. How could you be able to distinguish between your world being a sphere versus a torus? I can't see the difference from this point of view.</p> <p>If you are interested, this question arose while I was watching <a href="https://www.youtube.com/watch?v=j3BlLo1QfmU">this video about the shape of space</a> by Jeff Weeks.</p>
<p>Get a (two-dimensional) dog and a very long (one-dimensional) leash. Send your dog out exploring, letting the leash play out. When the dog returns, try to pull in the leash. (Meaning, you try to reel in the loop with you and the dog staying put.) On a sphere, the leash can always be pulled in; on a torus, sometimes it can't be.</p> <p>(See <a href="http://en.wikipedia.org/wiki/Homotopy">homotopy</a>.)</p>
<p>The Gaussian Curvature is an example of an intrinsic curvature, i.e. it is detectable by the "inhabitants" of the surface. The Gauss-Bonnet Theorem gives a connection between the Gaussian Curvature $K$ and the Euler Characteristic $\chi$. For a compact smooth surface $M$ without boundary: $$\int_M K~\mathrm{d}\mu = 2\pi \chi(M)$$ The Euler Characteristic and the genus of the surface are connected by $\chi(M) = 2-2g$. A sphere has genus zero and so $\chi(S^2) = 2$, while a torus has genus one and so $\chi(T)=0$.</p> <p>You could, as the Ordnance Survey people do, choose triangulation points on your surface, measure the Gaussian Curvature at those points and then use this to approximate the above integral. </p>
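<p>As a numerical illustration of Gauss-Bonnet (my addition), integrating the constant curvature $K = 1/r^2$ over a sphere of radius $r$ recovers $2\pi\chi(S^2) = 4\pi$ for every radius:</p>

```python
from math import pi, sin

def gauss_bonnet_sphere(r, n=4000):
    """Midpoint-rule approximation of the integral of K dA over a sphere
    of radius r, using K = 1/r^2 and dA = r^2 sin(theta) dtheta dphi."""
    K = 1.0 / r ** 2
    dtheta = pi / n
    polar = sum(K * r ** 2 * sin((i + 0.5) * dtheta) * dtheta for i in range(n))
    return 2 * pi * polar  # the phi integral contributes a factor of 2*pi

# the answer is 4*pi = 2*pi*chi(S^2), independent of the radius
for r in (0.5, 1.0, 3.0):
    assert abs(gauss_bonnet_sphere(r) - 4 * pi) < 1e-4
```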
linear-algebra
<p>Here's a cute problem that was frequently given by the late Herbert Wilf during his talks. </p> <p><strong>Problem:</strong> Let $A$ be an $n \times n$ matrix with entries from $\{0,1\}$ having all positive eigenvalues. Prove that all of the eigenvalues of $A$ are $1$.</p> <p><strong>Proof:</strong></p> <blockquote class="spoiler"> <p> Use the AM-GM inequality to relate the trace and determinant.</p> </blockquote> <p>Is there any other proof?</p>
<p>If one wants to use the AM-GM inequality, you could proceed as follows: Since $A$ has all $1$'s or $0$'s on the diagonal, it follows that $tr(A)\leq n$. Now calculating the determinant by expanding along any row/column, one can easily see that the determinant is an integer, since it is a sum of products of matrix entries (up to sign). Since all eigenvalues are positive, this integer must be positive. AM-GM inequality implies $$det(A)^{\frac{1}{n}}=\left(\prod_{i}\lambda_{i}\right)^{\frac{1}{n}}\leq \frac{1}{n}\sum_{i=1}^{n}\lambda_{i}\leq 1.$$ Since $det(A)\neq 0$, and $m^{\frac{1}{n}}&gt;1$ for $m&gt;1$, the above inequality forces $det(A)=1$. We therefore have equality which happens precisely when $\lambda_{i}=\lambda_{j}$ for all $i,j$. Combining this with the above equality gives the result.</p>
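<p>The AM-GM step and its equality case can be spot-checked numerically (my addition): the geometric mean never exceeds the arithmetic mean for positive tuples, with equality exactly when all entries coincide.</p>

```python
import random

random.seed(1)

def am(xs):
    """Arithmetic mean."""
    return sum(xs) / len(xs)

def gm(xs):
    """Geometric mean of positive numbers."""
    prod = 1.0
    for x in xs:
        prod *= x
    return prod ** (1.0 / len(xs))

# GM <= AM on random positive tuples (tolerance absorbs rounding)
for _ in range(1000):
    xs = [random.uniform(0.1, 10.0) for _ in range(5)]
    assert gm(xs) <= am(xs) + 1e-12

# equality holds exactly when all entries agree -- the step that
# forces every eigenvalue lambda_i to equal 1 in the proof above
assert abs(gm([2.0] * 5) - am([2.0] * 5)) < 1e-12
```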
<p>Suppose that A has a column with only zero entries; then we must have zero as an eigenvalue (e.g. expanding det(A-rI) using that column). So it must be true that in satisfying the OP's requirements we must have each column containing a 1. The same holds true for the rows by the same argument. Now suppose that we have a linear relationship between the rows; then there exists a linear combination of these rows that gives rise to a new matrix with a zero row. We've already seen that this is not allowed, so we must have linearly independent rows. I am trying to force the form of the permissible matrices to be restricted enough to give the result. Linear independence of the rows gives us a glimpse at the invertibility of such matrices but alas not their diagonalizability. The minimal polynomial $(1-r)^n$ results from upper or lower triangular matrices with ones along the diagonal, and I suspect that we may be able to complete this proof by looking at what happens to this polynomial when there are deviations from the triangular shape. The result linked by user1551 may be the key. Trying to gain some intuition about what the possibilities are leads one to match the binomial theorem with Newton's identities: <a href="http://en.wikipedia.org/wiki/Newton%27s_identities#Expressing_elementary_symmetric_polynomials_in_terms_of_power_sums" rel="nofollow">http://en.wikipedia.org/wiki/Newton%27s_identities#Expressing_elementary_symmetric_polynomials_in_terms_of_power_sums</a></p> <p>and the fact that the trace must be $n$ (diagonal ones) and the determinant 1. I would like to show that any deviation from this minimal polynomial must lead to a non-positive eigenvalue. Two aspects of the analysis are: a combinatorial argument to show what types of modifications (from triangular) are permissible while maintaining row/column independence, and looking at the geometrical effects of the non-dominant terms in the resulting characteristic polynomials. 
Maybe some type of induction argument will surface here. </p>
game-theory
<p>Three people A, B, C play the following game: the host picks a number between 0 and 100 uniformly at random, but does not reveal it. Each player guesses a number, and the player whose guess is closest wins. A chooses a number first and announces it; B then announces a different number, knowing A's choice; C announces yet another number, knowing both A's and B's choices. </p> <p>What numbers should A, B and C choose to maximize their respective probabilities of winning?</p>
<p>To temporarily avoid problems due to the discreteness of the set $\{0,1,\dots,100\}$, let's pretend that the three people are guessing a real number between 0 and 1. If the first two guesses are $a$ and $b$, say $0&lt;a&lt;b&lt;1$, then C will want to guess</p> <ul> <li>a tiny bit less than $a$, if $\max\{a,\frac{b-a}2,1-b\} = a$;</li> <li>anything between $a$ and $b$, if $\max\{a,\frac{b-a}2,1-b\} = \frac{b-a}2$;</li> <li>a tiny bit more than $b$, if $\max\{a,\frac{b-a}2,1-b\} = 1-b$.</li> </ul> <p>Then C's chance of winning will be precisely $\max\{a,\frac{b-a}2,1-b\}$.</p> <p>Unfortunately, the fact that C's winning move is not unique in one case makes B's strategy undefined: B needs to know how C will choose from among those choices, or what random distribution C will choose from.</p>
<p>Building upon Greg's answer, we can analyze what's best for $B$ then. </p> <p><strong>C's strategy</strong></p> <p>As pointed out, depending on $M = \max\{a, (b-a)/2, 1 - b\}$, $C$ will choose $c = a - \gamma$ if $a = M$, $c \in (a,b)$ if $(b-a)/2 = M$ and $c = b + \gamma$ if $1 - b = M$, for some small value $\gamma$. This is assuming $a &lt; b$, but of course an analogous result holds for $a &gt; b$.</p> <p><strong>B's strategy</strong></p> <p>If $A$ chooses $a = 1/2 - \alpha$ for some small $\alpha$, then $B$ can choose $b = 1/2 + \alpha + \beta$ for some small $\beta$, so that $C$'s optimal strategy would be to pick $c = 1/2 - \alpha - \gamma$ for some small $\gamma$. This gives $B$ a winning chance of almost $0.5$, which should be optimal, and $A$'s winning chances will be $(b - c)/2 = \alpha + \beta/2 + \gamma/2$, which will be small for small $\alpha, \beta, \gamma$.</p> <p>This holds for all sufficiently small $\alpha$, but when $\alpha = 1/4$ and $a = 1/4$ we get a new situation. Then with this strategy $(b - a)/2 = 1/4 + \beta/2$ exceeds both $1 - b = 1/4 - \beta$ and $a = 1/4$, hence $C$ would choose a number between $a$ and $b$ as pointed out by Greg. Then (for $B$'s worst case) if $C$ chooses $c$ close to $b$, $B$'s winning chances are reduced to less than $1/4$. Similarly, choosing $b &gt; 3/4$ allows $C$ to choose $c = b - \gamma$, further decreasing $B$'s worst-case chances, while $b &lt; 3/4$ will make $C$ choose $c = b + \gamma$, also giving $B$ odds of less than $1/4$. So assuming the worst-case scenario for $B$, $B$ is best off by taking $b = 3/4$ if $a = 1/4$. Then $A$'s worst-case winning chances are always at least $1/4$.</p> <p>Finally, if $A$ chooses $a &lt; 1/4$, then $B$ can choose $b = a + 2/3(1 - a) + \beta &lt; 3/4$, so that $C$ will choose $c$ between $a$ and $b$, and $B$'s chances are more than $1/4$. 
In $A$'s worst-case scenario, this will mean that $C$ chooses $c$ close to $a$, so that $A$'s chances will be less than $1/4$.</p> <p><strong>A's strategy</strong></p> <p>As we saw above, the worst-case chances of $A$ winning, assuming $B$ maximizes his worst-case chances, are $&lt; 1/4$ for $a \in (1/4, 1/2]$, $\geq 1/4$ for $a = 1/4$ and $&lt; 1/4$ for $a \in (0, 1/4)$. So the best worst-case strategy for $A$ is to pick $a = 1/4$, in which case $B$'s best worst-case strategy is to pick $b = 3/4$, after which $C$ can pick any value $c \in (1/4, 3/4)$.</p>
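<p>A Monte Carlo check of this equilibrium (a sketch; the seed, trial count, and C's particular choice of $c = 1/2$ are illustrative assumptions): with $a = 1/4$ and $b = 3/4$, player C wins about $1/4$ regardless of where in $(1/4, 3/4)$ he lands, while A and B each collect about $3/8$:</p>

```python
import random

def winner(target, guesses):
    """Index of the guess closest to the hidden target."""
    return min(range(len(guesses)), key=lambda i: abs(guesses[i] - target))

random.seed(1)
a, b, c = 0.25, 0.75, 0.5          # the equilibrium above, with c mid-interval
wins = [0, 0, 0]
trials = 200_000
for _ in range(trials):
    wins[winner(random.random(), (a, b, c))] += 1

pA, pB, pC = (w / trials for w in wins)
# A covers [0, 3/8), B covers (5/8, 1], C covers the middle quarter.
assert abs(pA - 0.375) < 0.01
assert abs(pB - 0.375) < 0.01
assert abs(pC - 0.250) < 0.01
```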
probability
<p>I made a program to find out the number of primes within a certain range, for example between $1$ and $10000$ I found $1229$ primes, I then increased my range to $20000$ and then I found $2262$ primes, after doing it for $1$ to $30000$, I found $3245$ primes.</p> <p>Now a curious thing to notice is that each time, The probability of finding a prime in between $2$ multiples of $10000$ is decreasing, i.e it was $$\frac{2262-1229}{10000}=0.1033$$ between $10000$ and $20000$, and $$\frac{3245-2262}{10000}=0.0983$$ between $20000$ and $30000$, </p> <p>So from this can we infer that there will exist two numbers separated by a gap of $10000$ such that no number in between them is prime? If so how to determine the first two numbers with which this happens? Also I took $10000$ just as a reference here, what about if the gap between them in general is $x$, can we do something for this in generality?</p> <p>Thanks!</p>
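<p>The counts in the question are easy to reproduce with a short sieve (a sketch of the kind of program described; the function name is mine):</p>

```python
def sieve_count(limit):
    """Count the primes <= limit with a sieve of Eratosthenes."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = [False] * len(is_prime[p * p :: p])
    return sum(is_prime)

counts = [sieve_count(k * 10_000) for k in (1, 2, 3)]
assert counts == [1229, 2262, 3245]          # the values from the question
assert counts[1] - counts[0] == 1033         # primes in (10000, 20000]
assert counts[2] - counts[1] == 983          # primes in (20000, 30000]
```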
<blockquote> <p>Can we infer that there exist two numbers separated by a gap of $10000$, such that no number in between them is prime?</p> </blockquote> <p>We can infer this regardless of what you wrote.</p> <p>For every gap $n\in\mathbb{N}$ that you can think of, I can give you a sequence of $n-1$ consecutive numbers, none of which is prime.</p> <p>There you go: $n!+2,n!+3,\dots,n!+n$.</p> <p>So there is no finite bound on the gap between two consecutive primes.</p>
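<p>The construction is easy to check directly (a small sketch; the choice $n = 9$ and the trial-division helper are illustrative): $n! + k$ is divisible by $k$ for $2 \leq k \leq n$, so none of these $n-1$ consecutive numbers is prime:</p>

```python
from math import factorial

def is_prime(m):
    """Trial division, good enough for small m."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

n = 9
run = [factorial(n) + k for k in range(2, n + 1)]       # n - 1 numbers
assert all((factorial(n) + k) % k == 0 for k in range(2, n + 1))
assert not any(is_prime(m) for m in run)                # all composite
```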
<p>The <a href="https://en.wikipedia.org/wiki/Prime_number_theorem">Prime Number Theorem</a> states that the number of primes $\pi(x)$ up to a given $x$ satisfies $$\pi(x) \sim \frac{x}{\log(x)},$$ which means that the probability that a number below $x$ is prime decreases as $x$ grows. So yes, for every $n$ there exists a gap of $n$ consecutive numbers, none of which is prime. </p> <p>Finding the first such gap for a given $n$ essentially requires a computer search, since the exact distribution of the prime numbers is only approximated by $\frac{x}{\log(x)}$.</p> <p><strong>EDIT:</strong> To see that the PNT implies a gap of every size $n$, consider what would happen otherwise: if gaps of size $n$ never occurred past some point $x$, then every block of $n$ consecutive integers beyond $x$ would contain a prime, so the density of primes up to $m$ would stay at least roughly $1/n$ as $m \to \infty$. This contradicts $\pi(m)/m \to 0$, which follows from the PNT. </p>
number-theory
<blockquote> <p><span class="math-container">$\pi$</span> Pi</p> <p>Pi is an infinite, nonrepeating <span class="math-container">$($</span>sic<span class="math-container">$)$</span> decimal - meaning that every possible number combination exists somewhere in pi. Converted into ASCII text, somewhere in that infinite string of digits is the name of every person you will ever love, the date, time and manner of your death, and the answers to all the great questions of the universe.</p> </blockquote> <p>Is this true? Does it make any sense ?</p>
<p>It is not true that an infinite, non-repeating decimal must contain ‘every possible number combination’. The decimal $0.011000111100000111111\dots$ is an easy counterexample. However, if the decimal expansion of $\pi$ contains every possible finite string of digits, which seems quite likely, then the rest of the statement is indeed correct. Of course, in that case it also contains numerical equivalents of every book that will <strong>never</strong> be written, among other things.</p>
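<p>The counterexample can be made concrete (a sketch; the helper name and the number of runs generated are mine). The run lengths $1, 2, 3, \dots$ grow without bound, so the expansion never becomes periodic, yet the digit $2$ never appears, nor even the three-digit block $101$:</p>

```python
def counterexample_digits(num_runs):
    """Runs of growing length: one 0, two 1s, three 0s, four 1s, ..."""
    return "".join(("0" if k % 2 else "1") * k for k in range(1, num_runs + 1))

s = counterexample_digits(200)
assert s.startswith("011000111100000111111")   # matches the expansion above
assert "2" not in s                            # the digit 2 never occurs
assert "101" not in s                          # neither does this short block
```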
<p>Let me summarize the things that have been said which are true and add one more thing.</p> <ol> <li>$\pi$ is not known to have this property, but it is expected to be true.</li> <li>This property does not follow from the fact that the decimal expansion of $\pi$ is infinite and does not repeat.</li> </ol> <p>The one more thing is the following. The assertion that the answer to every question you could possibly want to ask is contained somewhere in the digits of $\pi$ may be true, but it's useless. Here is a string which may make this point clearer: just string together every possible sentence in English, first by length and then by alphabetical order. The resulting string contains the answer to every question you could possibly want to ask, but</p> <ul> <li>most of what it contains is garbage, </li> <li>you have no way of knowing what is and isn't garbage <em>a priori</em>, and</li> <li>the only way to refer to a part of the string that isn't garbage is to describe its position in the string, and the bits required to do this themselves constitute a (terrible) encoding of the string. So finding this location is exactly as hard as finding the string itself (that is, finding the answer to whatever question you wanted to ask).</li> </ul> <p>In other words, a string which contains everything contains nothing. Useful communication is useful because of what it does <em>not</em> contain.</p> <p>You should keep all of the above in mind and then read Jorge Luis Borges' <em><a href="http://en.wikipedia.org/wiki/The_Library_of_Babel">The Library of Babel</a></em>. (A library which contains every book contains no books.) </p>
probability
<p>I gave the following problem to students:</p> <blockquote> <p>Two $n\times n$ matrices $A$ and $B$ are <em>similar</em> if there exists a nonsingular matrix $P$ such that $A=P^{-1}BP$.</p> <ol> <li><p>Prove that if $A$ and $B$ are two similar $n\times n$ matrices, then they have the same determinant and the same trace.</p></li> <li><p>Give an example of two $2\times 2$ matrices $A$ and $B$ with same determinant, same trace but that are not similar.</p></li> </ol> </blockquote> <p>Most of the ~20 students got the first question right. However, almost none of them found a correct example to the second question. Most of them gave examples of matrices that have same determinant and same trace. </p> <p>But computations show that their examples are similar matrices. They didn't bother to check that though, so they just tried <em>random</em> matrices with same trace and same determinant, hoping it would be a correct example.</p> <p><strong>Question</strong>: how to explain that none of the random trial gave non similar matrices?</p> <p>Any answer based on density or measure theory is fine. In particular, you can assume any reasonable distribution on the entries of the matrix. If it matters, the course is about matrices with real coefficients, but you can assume integer coefficients, since when choosing numbers <em>at random</em>, most people will choose integers.</p>
<p>If <span class="math-container">$A$</span> is a <span class="math-container">$2\times 2$</span> matrix with determinant <span class="math-container">$d$</span> and trace <span class="math-container">$t$</span>, then the characteristic polynomial of <span class="math-container">$A$</span> is <span class="math-container">$x^2-tx+d$</span>. If this polynomial has distinct roots (over <span class="math-container">$\mathbb{C}$</span>), then <span class="math-container">$A$</span> has distinct eigenvalues and hence is diagonalizable (over <span class="math-container">$\mathbb{C}$</span>). In particular, if <span class="math-container">$d$</span> and <span class="math-container">$t$</span> are such that the characteristic polynomial has distinct roots, then any other <span class="math-container">$B$</span> with the same determinant and trace is similar to <span class="math-container">$A$</span>, since they are diagonalizable with the same eigenvalues.</p> <p>So to give a correct example in part (2), you need <span class="math-container">$x^2-tx+d$</span> to have a double root, which happens only when the discriminant <span class="math-container">$t^2-4d$</span> is <span class="math-container">$0$</span>. If you choose the matrix <span class="math-container">$A$</span> (or the values of <span class="math-container">$t$</span> and <span class="math-container">$d$</span>) &quot;at random&quot; in any reasonable way, then <span class="math-container">$t^2-4d$</span> will usually not be <span class="math-container">$0$</span>. (For instance, if you choose <span class="math-container">$A$</span>'s entries uniformly from some interval, then <span class="math-container">$t^2-4d$</span> will be nonzero with probability <span class="math-container">$1$</span>, since the vanishing set in <span class="math-container">$\mathbb{R}^n$</span> of any nonzero polynomial in <span class="math-container">$n$</span> variables has Lebesgue measure <span class="math-container">$0$</span>.) 
Assuming that students did something like pick <span class="math-container">$A$</span> &quot;at random&quot; and then built <span class="math-container">$B$</span> to have the same trace and determinant, this would explain why none of them found a correct example.</p> <p>Note that this is very much special to <span class="math-container">$2\times 2$</span> matrices. In higher dimensions, the determinant and trace do not determine the characteristic polynomial (they just give two of the coefficients), and so if you pick two matrices with the same determinant and trace they will typically have different characteristic polynomials and not be similar.</p>
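<p>This is easy to observe empirically even over the integers (a sketch; the entry range, seed, and the $2\%$ threshold are choices of this illustration): the discriminant $t^2-4d$ of a "random" small-integer $2\times2$ matrix is almost never zero:</p>

```python
import random

random.seed(0)
trials = 100_000
zero_disc = 0
for _ in range(trials):
    a, b, c, d = (random.randint(-9, 9) for _ in range(4))
    t, det = a + d, a * d - b * c          # trace and determinant
    if t * t - 4 * det == 0:               # repeated eigenvalue
        zero_disc += 1

# Only a tiny fraction of random integer matrices have a repeated
# eigenvalue, so blind trial essentially never produces a valid example.
assert zero_disc / trials < 0.02
```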
<p>As Eric points out, such $2\times2$ matrices are special. In fact, there are only two such pairs of matrices. The number depends on how you count, but the point is that such matrices have a <em>very</em> special form.</p> <p>Eric proved that the two matrices must have a double eigenvalue. Let the eigenvalue be $\lambda$. It is a little exercise<sup>1</sup> to show that $2\times2$ matrices with double eigenvalue $\lambda$ are similar to a matrix of the form $$ C_{\lambda,\mu} = \begin{pmatrix} \lambda&amp;\mu\\ 0&amp;\lambda \end{pmatrix}. $$ Using suitable diagonal matrices shows that $C_{\lambda,\mu}$ is similar to $C_{\lambda,1}$ if $\mu\neq0$. On the other hand, $C_{\lambda,0}$ and $C_{\lambda,1}$ are not similar; one is a scaling and the other one is not.</p> <p>Therefore, up to similarity transformations, the only possible example is $A=C_{\lambda,0}$ and $B=C_{\lambda,1}$ (or vice versa). Since scaling doesn't really change anything, <strong>the only examples</strong> (up to similarity, scaling, and swapping the two matrices) are $$ A = \begin{pmatrix} 1&amp;0\\ 0&amp;1 \end{pmatrix}, \quad B = \begin{pmatrix} 1&amp;1\\ 0&amp;1 \end{pmatrix} $$ and $$ A = \begin{pmatrix} 0&amp;0\\ 0&amp;0 \end{pmatrix}, \quad B = \begin{pmatrix} 0&amp;1\\ 0&amp;0 \end{pmatrix}. $$ If adding multiples of the identity is added to the list of symmetries (then scaling can be removed), then there is only one matrix pair up to the symmetries.</p> <p>If you are familiar with the <a href="https://en.wikipedia.org/wiki/Jordan_normal_form" rel="noreferrer">Jordan normal form</a>, it gives a different way to see it. Once the eigenvalues are fixed to be equal, the only free property (up to similarity) is whether there are one or two blocks in the normal form. 
The Jordan normal form is invariant under similarity transformations, so it gives a very quick way to solve problems like this.</p> <hr> <p><sup>1</sup> You only need to show that any matrix is similar to an upper triangular matrix. The eigenvalues (which now coincide) are on the diagonal. You can skip this exercise if you have Jordan normal forms at your disposal.</p>
probability
<p>There's an 80% probability of a certain outcome, we get some new information that means that outcome is 4 times more likely to occur.</p> <p>What's the new probability as a percentage and how do you work it out?</p> <p>As I remember it the question was posed like so:</p> <blockquote> <p>Suppose there's a student, Tom W, if you were asked to estimate the probability that Tom is a student of computer science. Without any other information you would only have the base rate to go by (percentage of total students enrolled on computer science) suppose this base rate is 80%.</p> <p>Then you are given a description of Tom W's personality, suppose from this description you estimate that Tom W is 4 times more likely to be enrolled on computer science.</p> <p>What is the new probability that Tom W is enrolled on computer science.</p> </blockquote> <p>The answer given in the book is 94.1% but I couldn't work out how to calculate it!</p> <p>Another example in the book is with a base rate of 3%, 4 times more likely than this is stated as 11%.</p>
<p>The most reasonable way to match the answer in the book would be to define the likelihood to be the ratio of success over failure (aka odds): $$ q=\frac{p}{1-p} $$ then the probability as a function of the odds is $$ p=\frac{q}{1+q} $$ In your case the odds are $4:1$ so $4$ times as likely would be $16:1$ odds which has a probability of $$ \frac{16}{17}=94.1176470588235\% $$ This matches the $3\%$ to $11.0091743119266\%$ transformation, as well.</p> <hr> <p><strong>Bayes' Rule</strong></p> <p><a href="http://en.wikipedia.org/wiki/Bayes%27_rule#Single_event">Bayes' Rule for a single event</a> says that $$ O(A\mid B)=\frac{P(B\mid A)}{P(B\mid\neg A)}\,O(A) $$ where the odds of $X$ is defined as earlier $$ O(X)=\frac{P(X)}{P(\neg X)}=\frac{P(X)}{1-P(X)} $$ This is exactly what is being talked about in the later addition to the question, where it is given that $$ \frac{P(B\mid A)}{P(B\mid\neg A)}=4 $$</p>
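<p>In code, the odds-based update reads as follows (a minimal sketch; the function names are mine):</p>

```python
def odds(p):
    """Convert a probability to odds of success over failure."""
    return p / (1 - p)

def prob(q):
    """Convert odds back to a probability."""
    return q / (1 + q)

def update(p, factor):
    """Multiply the odds of p by `factor`, then convert back."""
    return prob(factor * odds(p))

assert abs(update(0.80, 4) - 16 / 17) < 1e-12      # 94.1176...%
assert abs(update(0.03, 4) - 12 / 109) < 1e-12     # 11.0091...%
```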
<p>Daniel Kahneman's book mentions Bayesian reasoning. An answer using Bayesian reasoning is as follows:</p> <p>Let $C$ be the event that Tom is compsci, $N$ be the event that he has a "nerdy" personality.</p> <p>We are given $P(N|C)/P(N|\neg C)= 4$, which implies that $P(N|\neg C) = P(N|C)/4$.</p> <p>By Bayes Theorem (and using the theorem of total probability to expand the denominator)</p> <p>$$\begin{eqnarray*} P(C|N) &amp;=&amp; \frac{P(N|C) P(C)}{ P(N)} \\ &amp;=&amp; \frac{P(N|C) P(C)}{P(N|C)P(C) + P(N|\neg C) P(\neg C)} \\ &amp;=&amp; \frac{P(N|C) P(C)}{P(N|C)P(C) + 0.25 P(N|C)P(\neg C)} \\ &amp;=&amp; \frac{P(C)}{P(C) + 0.25 P(\neg C)} \\ &amp;=&amp; \frac{0.8}{0.8 + 0.25 \times 0.2} \\ &amp;\approx&amp; 0.9411765 \end{eqnarray*}$$</p> <p>Similar reasoning in the 3% case leads to $P(C|N) = 0.03 / (0.03 + .25*.97) \approx 0.1100917$.</p>
game-theory
<p>Let's say we have a cable of unit length, which is damaged at one unknown point, the location of which is uniformly distributed. You are allowed to cut the cable at any point, and after a cut, you'd know which piece is damaged and which is not. You can do this as many times as you want, and you want to maximize the <em>expected length of the biggest undamaged piece</em> after all the cutting. What is the best you can do? The strategy need not be deterministic.</p> <p>I currently have a lower bound of $\frac12$ and upper bound of $\frac34$. The lower bound comes from just cutting the cable in half once.</p> <p>For the upper bound, notice that if the fault is at $\frac12 \pm x$, then you cannot do better than $\frac12 +x$. So taking the expected value, $$ \int_0^{\frac12} \left(\frac12+x\right)2 dx = \frac34$$</p> <p>I initially thought that an optimal strategy would have to be applied recursively to the damaged piece, but now I'm no longer convinced of this. If you've already obtained an undamaged piece of length $l$, then there is no point in cutting a damaged piece into two pieces of length $\leq l$.</p> <p>Any reference to an existing treatment is also welcome.</p>
<p>An &quot;algorithm&quot; (though it might possibly run indefinitely, depending on our strategy) necessarily goes like this:</p> <blockquote> <p><strong>Step 0.</strong> Let <span class="math-container">$k\leftarrow1$</span>.</p> <p><strong>Step 1.</strong> <em>[Now we have <span class="math-container">$k$</span> parts of cable, i.e. subintervals of <span class="math-container">$[0,1]$</span>, exactly one of which is defective]</em> If the defective part is strictly longer than all good parts, go to step 2. Otherwise there exists some good part at least as long as the defective part and further subdivisions cannot improve the result; pick a longest good part as result and terminate.</p> <p><strong>Step 2.</strong> <em>[Now the defective part <span class="math-container">$[a,b]$</span> is strictly longer than all other parts]</em> Pick a suitable point of cutting the defective part, i.e. pick some <span class="math-container">$x_k\in(a,b)$</span>, thus replacing <span class="math-container">$[a,b]$</span> with <span class="math-container">$[a,x_k]$</span> and <span class="math-container">$[x_k,b]$</span>. <em>[The probability that <span class="math-container">$x_k$</span> equals the point of defect is zero and can be ignored]</em> Let <span class="math-container">$k\leftarrow k+1$</span> and go to step 1.</p> </blockquote> <p>Thus a strategy consists of a sequence of cutting points <span class="math-container">$x_i\in[0,1]$</span> for all <span class="math-container">$i\in N$</span> with <span class="math-container">$N=\mathbb N$</span> or <span class="math-container">$N=\{1,2,\ldots,n\}$</span> (i.e. the sequence might be finite or infinite). We may assume wlog. that <span class="math-container">$N$</span> is not unnecessarily big, i.e. if step 2 will never be entered with some value <span class="math-container">$k$</span>, no matter where the actual defect is in the cable, we can as well assume that <span class="math-container">$N\subseteq\{1,\ldots,k-1\}$</span>. 
In other words, if <span class="math-container">$k\in N$</span>, then there exists a point <span class="math-container">$x\in[0,1]$</span> such that a defect at <span class="math-container">$x$</span> will make the algorithm above make use of the cutting point <span class="math-container">$x_k$</span>.</p> <p>By symmetry, we may assume in step 2 that <span class="math-container">$$\tag1x_k-a\ge b-x_k,$$</span> i.e. the new left part is always at least as long as the new right part. Then we can show by induction, that the interval <span class="math-container">$[a,b]$</span> in step 2 is always <span class="math-container">$[0,x_{k-1}]$</span> (formally letting <span class="math-container">$x_0=1$</span>). In fact this is trivially true when <span class="math-container">$k=1$</span>. If the claim is true for some <span class="math-container">$k$</span>, then in step 2 interval <span class="math-container">$[0,x_{k-1}]$</span> is divided into <span class="math-container">$[0,x_k]$</span> and <span class="math-container">$[x_k,x_{k-1}]$</span>. By <span class="math-container">$(1)$</span> the interval <span class="math-container">$[0,x_k]$</span> is at least as long as <span class="math-container">$[x_k,x_{k-1}]$</span> (i.e. after <span class="math-container">$k\leftarrow k+1$</span>, interval <span class="math-container">$[0,x_{k-1}]$</span> is at least as long as <span class="math-container">$[x_{k-1},x_{k-2}]$</span>). Thus in step 1, we either terminate or continue with step 2 and <span class="math-container">$[a,b]=[0,x_{k-1}]$</span> as was to be shown.</p> <p>In other words, the sequence <span class="math-container">$(x_i)_{i\in N}$</span> is strictly decreasing and more precisely <span class="math-container">$$\tag2 \frac {x_{k-1}}2\le x_k&lt;x_{k-1}\qquad\text{for all }k\in N.$$</span> As the sequence is also bounded from below, <span class="math-container">$L:=\max\{\,x_{k-1}-x_{k}\mid k\in N\,\}$</span> exists. 
Note that we may assume <span class="math-container">$x_k\ge L$</span> for all <span class="math-container">$k\in N$</span> as otherwise by <span class="math-container">$(2)$</span> neither of the pieces <span class="math-container">$[0,x_k]$</span> or <span class="math-container">$[x_k,x_{k-1}]$</span> can be an improvement over <span class="math-container">$L$</span>.</p> <p>Let <span class="math-container">$f(x)$</span> denote the length of longest good cable obtained by employing the strategy <span class="math-container">$(x_n)_{n\in N}$</span> if the actual point of defect is at <span class="math-container">$x\in[0,1]$</span>. If <span class="math-container">$x_k&lt; x&lt; x_{k-1}$</span> for some <span class="math-container">$k\in N$</span>, then we will stop after performing the cut at <span class="math-container">$x_k$</span> and the longest part will be <span class="math-container">$[0,x_k]$</span>, so <span class="math-container">$f(x)=x_k$</span> for <span class="math-container">$x\in(x_k,x_{k-1})$</span>. If <span class="math-container">$x&lt;x_k$</span> for all <span class="math-container">$k\in N$</span>, then <span class="math-container">$f(x)=L$</span>. The expected length is simply <span class="math-container">$\int_0^1f(x)\,\mathrm dx$</span>. The graph of <span class="math-container">$f$</span> is bounded from above by the diagonal, except for the triangle on the lower left with vertices <span class="math-container">$(0,0) (0,L), (L,L)$</span>. On the other hand, a triangle of the same size is &quot;missing&quot; over an interval <span class="math-container">$[x_k,x_{k-1}]$</span> with <span class="math-container">$x_{k-1}-x_k=L$</span>. 
<img src="https://i.sstatic.net/IDpWG.png" alt="Proof without words of (3) for an example function f" /></p> <p>We conclude that <span class="math-container">$$\tag3 \int_0^1f(x)\,\mathrm dx\le \int_0^1x\,\mathrm dx=\frac12.$$</span></p> <p>The estimate <span class="math-container">$(3)$</span> also applies to mixed strategies (i.e. a strategy <span class="math-container">$(x_n)_{n\in N}$</span> is picked randomly according to some distribution). On the other hand, inequality <span class="math-container">$(3)$</span> is sharp: a strategy that does indeed attain the expected value <span class="math-container">$\frac12$</span> is given by <span class="math-container">$N=\{1\}$</span>, <span class="math-container">$x_1=\frac12$</span> (as already noted by the OP).</p>
<p>A strategy consists of snipping off pieces of length $\ell_1,\ell_2,\ldots$, subject to the condition $\ell_1+\ell_2+\cdots = 1-M$, where $M=\max\{\ell_1,\ell_2,\ldots\}$, and stopping if and when the cut-off piece is found to contain the bad spot. The probability that the bad spot lies in the $i$th snippet is $\ell_i$, and the probability it lies in none of them is $M$. Therefore the expected longest piece when the process ends is</p> <p>$$\ell_1(\ell_2+\ell_3+\cdots+M)+\ell_2(\ell_3+\ell_4+\cdots+M)+\cdots+M^2$$</p> <p>It's easy to see that this sum is unchanged if any $\ell_i$ and $\ell_{i+1}$ are interchanged:</p> <p>$$\begin{align} &amp;\cdots+\ell_{i-1}(\ell_i+\ell_{i+1}+\ell_{i+2}+\cdots+M)+\ell_i(\ell_{i+1}+\ell_{i+2}+\cdots+M)+\ell_{i+1}(\ell_{i+2}+\cdots+M)+\cdots\\ &amp;=\cdots+\ell_{i-1}(\ell_{i+1}+\ell_i+\ell_{i+2}+\cdots+M)+\ell_{i+1}(\ell_i+\ell_{i+2}+\cdots+M)+\ell_i(\ell_{i+2}+\cdots+M)+\cdots\\ \end{align}$$</p> <p>Consequently, any strategy is equivalent to one in which $\ell_1\ge\ell_2\ge\ell_3\ge\cdots$, for which $M=\ell_1$. But this leaves us in the case that kaine earlier analyzed: As soon as you are guaranteed a piece of length $M$ and plan to make all subsequent cuts no longer than $M$, you can only improve the final result by making the subsequent cuts infinitesimally short. So the best you can get, as kaine showed, is an expected value of $1/2$.</p>
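<p>A quick simulation supports the conclusion (a sketch; the cut sequences, seed and tolerances are illustrative, and the helper evaluates the outcome of a decreasing sequence of cuts as in the first answer, in its general form): the single cut at $1/2$ attains expected value $1/2$, while the multi-cut strategies tried here do strictly worse:</p>

```python
import random

def longest_good(defect, cuts):
    """Cut the damaged interval [0, prev] at the points in `cuts` (a strictly
    decreasing sequence) until the defect is isolated; return the length of
    the longest undamaged piece obtained."""
    prev, best = 1.0, 0.0
    for x in cuts:
        if defect > x:                    # defect lies in (x, prev): keep [0, x]
            return max(best, x)
        best = max(best, prev - x)        # the good snippet (x, prev]
        prev = x
    return best

random.seed(2)
trials = 200_000

def expected(cuts):
    return sum(longest_good(random.random(), cuts) for _ in range(trials)) / trials

assert abs(expected([0.5]) - 0.5) < 0.005   # one cut at 1/2 achieves exactly 1/2
assert expected([0.6, 0.3]) < 0.499         # extra cutting only loses ground
assert expected([0.7, 0.4, 0.2]) < 0.499
```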
linear-algebra
<p>Consider square matrices over a field $K$. I don't think additional assumptions about $K$ like algebraically closed or characteristic $0$ are pertinent, but feel free to make them for comfort. For any such matrix $A$, the set $K[A]$ of polynomials in $A$ is a commutative subalgebra of $M_n(K)$; the question is whether for any pair of commuting matrices $X,Y$ at least one such commutative subalgebra can be found that contains both $X$ and $Y$.</p> <p>I was asking myself this in connection with frequently recurring requests to completely characterise commuting pairs of matrices, like <a href="https://math.stackexchange.com/questions/323011/find-all-a-b-such-that-ab-ba-o">this one</a>. While providing a useful characterisation seems impossible, a positive anwer to the current question would at least provide <em>some</em> answer.</p> <p>Note that in many rather likely situations one can in fact take $A$ to be one of the matrices $X,Y$, for instance when one of the matrices <a href="https://math.stackexchange.com/questions/65012/if-matrices-a-and-b-commute-a-with-distinct-eigenvalues-then-b-is-a-po">has distinct eigenvalues</a>, or more generally if its <a href="https://math.stackexchange.com/questions/57308/commuting-matrices">minimal polynomial has degree $n$</a> (so coincides with the characteristic polynomial). However this is not always possible, as can be easily seen for instance for diagonal matrices $X=\operatorname{diag}(0,0,1)$ and $Y=\operatorname{diag}(0,1,1)$. However in that case both will be polynomials in $A=\operatorname{diag}(x,y,z)$ for any <em>distinct</em> values $x,y,z$ (then $K[A]$ consists of all diagonal matrices); although in the example in <a href="https://math.stackexchange.com/a/83977/18880">this answer</a> the matrices are not both diagonalisable, an appropriate $A$ can be found there as well. 
</p> <p>I thought for some time that any maximal commutative subalgebra of $M_n(K)$ was of the form $K[A]$ (which would imply a positive answer) for some $A$ with minimal polynomial of degree$~n$, and that a positive answer to my question was in fact instrumental in proving this. However I was wrong on both counts: there exist (for $n\geq 4$) commutative subalgebras of dimension${}&gt;n$ (whereas $\dim_KK[A]\leq n$ for all $A\in M_n(K)$) as shown in <a href="https://mathoverflow.net/questions/29087/commutative-subalgebras-of-m-n/29089#29089">this MathOverflow answer</a>, and I was forced to correct <a href="https://math.stackexchange.com/a/301972/18880">an answer I gave here</a> in the light of this; however it seems (at least in the cases I looked at) that many (all?) pairs of matrices $X,Y$ in such a subalgebra still admit a matrix $A$ (which in general is <em>not in the subalgebra</em>) such that $X,Y\in K[A]$. This indicates that a positive answer to my question would not contradict the existence of such large commutative subalgebras: it would just mean that to obtain a subalgebra of maximal dimension containing $X,Y$ one should in general <em>avoid</em> throwing in an $A$ with $X,Y\in K[A]$. I do think these large subalgebras easily show that the analogue of my question for three commuting matrices has a negative answer.</p> <p>Finally I note that <a href="https://mathoverflow.net/questions/29087/commutative-subalgebras-of-m-n/30931#30931">this other answer</a> to the cited MO question mentions a result by Gerstenhaber stating that the dimension of the subalgebra generated by two commuting matrices in $M_n(K)$ cannot exceed$~n$. 
This unfortunately does not settle my question (had $X,Y$ generated a subalgebra of dimension${}&gt;n$, that would have proved a negative answer); it just might be that the mentioned result is true because of the existence of such an $A$ (I don't have access to a proof right now, but given the formulation it seems unlikely that it was done this way).</p> <p>OK, I've tried to build up the suspense. Honesty demands that I say that I do know the answer to my question, since a colleague of mine provided a convincing one. I will however not give this answer right away, but post it once there has been some time for gathering answers here; who knows, somebody may prove a different answer than the one I have (heaven forbid), or at least give the same answer with a different justification.</p>
<p>As promised I will answer my own question. The answer is negative, it can happen that $X$ and $Y$ cannot be written as polynomials of any one matrix $A\in M_n(K)$.</p> <p>Following the comment by Martin Brandenburg, this can even happen when $X$ and $Y$ are both diagonal, if the field $K$ is too small. Indeed if $K$ is a finite field of $q$ elements then for any <em>diagonalisable</em> matrix $A$ one has $\dim K[A]\leq q$ because $q$ limits the degree of split polynomials without multiple roots, and the minimal polynomial of $A$ must be of this kind. This means that when $n&gt;q$ the dimension $n$ of the subalgebra $D$ of all diagonal matrices is too large for it to be of the form $K[A]$ for one of its members. And as long as $n\leq q^2$ one can find diagonal matrices $X,Y$ that generate all of $D$, while ensuring that all matrices$~A$ commuting with both $X$ and $Y$ must be diagonal; this excludes finding such $A$ with $X,Y\in K[A]$. Indeed one can arbitrarily label each of the $n$ standard basis vectors with distinct elements of $K\times K$, and define $X,Y$ so that each such basis vector is a common eigenvector, with respective eigenvalues given by the two components of the label; one then easily realises each projection on the $1$-dimensional space generated by one of the standard basis vectors as a polynomial in $X,Y$, forcing any matrix commuting with both $X$ and $Y$ (and therefore with these projections) to be diagonal.</p> <p>But for examples valid in arbitrary field, it is better to focus attention on nilpotent matrices, avoiding the "regular" ones with minimal polynomial $X^n$ (and hence with a single Jordan block). 
The counterexample that I had in mind was the pair of $4\times4$ matrices each of Jordan type $(2,2)$: $$ X=\begin{pmatrix}0&amp;1&amp;0&amp;0\\0&amp;0&amp;0&amp;0\\0&amp;0&amp;0&amp;1\\0&amp;0&amp;0&amp;0\end{pmatrix} ,\qquad Y=\begin{pmatrix}0&amp;0&amp;1&amp;0\\0&amp;0&amp;0&amp;1\\0&amp;0&amp;0&amp;0\\0&amp;0&amp;0&amp;0\end{pmatrix} ,\qquad \text{which have } XY=YX=\begin{pmatrix}0&amp;0&amp;0&amp;1\\0&amp;0&amp;0&amp;0\\0&amp;0&amp;0&amp;0\\0&amp;0&amp;0&amp;0\end{pmatrix} $$ while $X^2=Y^2=0$. Then the algebra $K[X,Y]$ has dimension $4$, and is contained in the subalgebra of upper triangular matrices $A$ with identical diagonal entries and also $A_{2,3}=0$. For any $A$ in this subalgebra, and therefore in particular for $A\in K[X,Y]$, one has $\dim K[A]\leq3$, since $(A-\lambda I)^3=0$ where $\lambda$ is the common diagonal entry of $A$; in particular $K[A]\not\supseteq K[X,Y]$ for such$~A$. But one also has $\dim K[A]\leq4$ for <em>any</em> $A\in M_4(K)$, and this shows that one cannot have $K[A]\supseteq K[X,Y]$ <em>unless</em> $A\in K[X,Y]$, and one therefore cannot have it at all.</p> <p>The <a href="https://mathoverflow.net/questions/34314/when-is-an-algebra-of-commuting-matrices-contained-in-one-generated-by-a-single/35024#35024">answer in the MO thread</a> indicated in the other answer indicates that there are counterexamples even for $n=3$; indeed it suffices to strip either the first row and column or the last row and column off the matrices $X,Y$ given above. The argument is similar but even simpler for them. 
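The identities this example rests on ($X^2=Y^2=0$, $XY=YX$, and $(A-\lambda I)^3=0$ for every $A$ in the subalgebra described) can be machine-checked. A small sketch in plain Python over the integers (the helper names are mine):

```python
def matmul(A, B):
    # product of two square matrices given as lists of rows
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Z = [[0] * 4 for _ in range(4)]                       # zero matrix
X = [[0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0]]
Y = [[0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]]
E = [[0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]   # XY = YX

assert matmul(X, X) == Z and matmul(Y, Y) == Z
assert matmul(X, Y) == E and matmul(Y, X) == E

# a generic element of K[X,Y] minus its diagonal part: N = bX + cY + d(XY);
# (A - lambda*I)^3 = N^3 must vanish
b, c, d = 2, 3, 5
N = [[b * X[i][j] + c * Y[i][j] + d * E[i][j] for j in range(4)]
     for i in range(4)]
assert matmul(N, matmul(N, N)) == Z
```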
The resulting matrices generate the full subalgebra of matrices of the form<br> $$ \begin{pmatrix}\lambda&amp;a&amp;b\\0&amp;\lambda&amp;0\\0&amp;0&amp;\lambda \end{pmatrix} \quad \text{respectively of those of form}\quad \begin{pmatrix}\lambda&amp;0&amp;b\\0&amp;\lambda&amp;a\\0&amp;0&amp;\lambda \end{pmatrix}; $$ having dimension $3$ it can only be contained in $K[A]$ if $A$ is in the subalgebra, but then the minimal polynomial of $A$ has degree at most $2$ so the algebra it generates cannot be all of that subalgebra.</p> <p>These two subalgebras are analogues of the commutative subalgebra of dimension${}&gt;n$ that I mentioned in the question, but for the case $n=3$ where its dimension is only equal to $n$. If I had bothered to look at this more modest case rather than go directly for the "excessive" subalgebra for $n\geq4$ right away, then I might have found this example myself; I guess one should never neglect the small cases.</p>
<p>The answer is given in the accepted answer of <a href="https://mathoverflow.net/questions/34314">MO/34314</a>. Don't click if you want to keep the suspense.</p>
combinatorics
<p>In attempting to answer <a href="https://math.stackexchange.com/questions/487957/how-prove-this-nice-limit-lim-n-to-infty-fraca-nn-frac12-log432/">this question</a>, I reduced it to a seemingly simple generating functions question, but after days of work was unable to construct a proof. Since I do not have experience trying to do asymptotics with generating functions, I would like to know if a proof is salvageable from these methods.</p> <p>The problem introduces the sequence <span class="math-container">$a_n$</span>, defined by <span class="math-container">$a_0 = 1$</span> and <span class="math-container">$$ a_{n}=a_{\left\lfloor n/2\right\rfloor}+a_{\left\lfloor n/3 \right\rfloor}+a_{\left\lfloor n/6\right\rfloor} $$</span> and asks for a proof that <span class="math-container">$$ \lim_{n\to\infty}\dfrac{a_{n}}{n}=\dfrac{12}{\log{432}}. $$</span> Writing the generating function <span class="math-container">$\displaystyle A(x) = \sum_{n \ge 0} a_n x^n$</span>, this translates to <span class="math-container">$$ A(x) = (1 + x)A(x^2) + (1 + x + x^2) A(x^3) + (1 + x + x^2 + \cdots + x^5)A(x^6) - 2 $$</span></p> <p>Even better, let <span class="math-container">$b_0 = a_0$</span> and <span class="math-container">$b_n = a_n - a_{n-1}$</span> for all <span class="math-container">$n \ge 1$</span>, and define the generating function <span class="math-container">$\displaystyle B(x) = \sum_{n \ge 0} b_n x^n = (1 - x)A(x)$</span>. Multiplying the above by <span class="math-container">$(1-x)$</span> gives <span class="math-container">$$ (1 - x)A(x) = (1 - x^2)A(x^2) + (1 - x^3)A(x^3) + (1 - x^6)A(x^6) + 2x - 2 $$</span> i.e. 
<span class="math-container">$$ B(x) = B(x^2) + B(x^3) + B(x^6) + 2x - 2 \tag{1} $$</span></p> <p>After unsuccessfully trying to do asymptotics with the above elegant formula, I used it to find an explicit representation of <span class="math-container">$B$</span>, using the <a href="https://mathworld.wolfram.com/DelannoyNumber.html" rel="nofollow noreferrer">Delannoy Numbers</a>:</p> <p><span class="math-container">$$ B(x) = 1 + 2 \sum_{l, m \ge 0} \sum_{d \ge 0} 2^d {l \choose d}{m \choose d} x^{2^l 3^m} $$</span></p> <p>It follows that in fact <span class="math-container">\begin{align*} b_n&amp;= \begin{cases} 1 &amp;n=0 \\ 2 \sum_{d \ge 0} 2^d \binom{l}{d} \binom{m}{d} &amp;n =2^l3^m \\ 0 &amp;\text{otherwise} \end{cases} \\[10pt] a_n&amp;=1+2\sum_{d\ge0}2^d\sum_{\begin{matrix}l,m\ge0\\2^l 3^m \le n\end{matrix}}{l \choose d}{m \choose d} \tag{2} \end{align*}</span></p> <p>One can do naive bounds on the sum in (2) - replacing the condition <span class="math-container">$2^l 3^m \le n$</span> with <span class="math-container">$2^l 2^m \le n$</span> and <span class="math-container">$3^l 3^m \le n$</span> for upper and lower bounds, respectively. But this isn't good enough; it gives (after algebra and combinatorial work) approximately <span class="math-container">$$ \frac{n^{\log_3(1 + \sqrt{2}) - 1}}{2} &lt; \frac{a_n}{n} &lt; \frac{n^{\log_2(1 + \sqrt{2}) - 1}}{2} $$</span></p> <p>This seems to suggest trying to approximate (2) with the condition <span class="math-container">$(1 + \sqrt{2})^l (1 + \sqrt{2})^m \le n$</span>, but I have no idea how to justify that.</p> <p>At any rate, I've made too much of what feels like progress to give up on the problem, and if anyone can think of a way to use (2) to get a solution or else to use (1) and find the asymptotics directly, I'd be very thankful.</p>
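As a sanity check on the closed form (2), the coefficient $b_n=a_n-a_{n-1}$ computed from the recurrence can be compared against the Delannoy-weighted sum, which should vanish unless $n=2^l3^m$. A short Python sketch (the helper names are mine):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def a(n):
    # a_0 = 1,  a_n = a_{floor(n/2)} + a_{floor(n/3)} + a_{floor(n/6)}
    return 1 if n == 0 else a(n // 2) + a(n // 3) + a(n // 6)

def b_closed(n):
    # b_n = 2 * sum_d 2^d C(l,d) C(m,d) when n = 2^l 3^m, else 0
    if n == 0:
        return 1
    l = m = 0
    while n % 2 == 0:
        n //= 2; l += 1
    while n % 3 == 0:
        n //= 3; m += 1
    if n != 1:
        return 0
    return 2 * sum(2**d * comb(l, d) * comb(m, d) for d in range(min(l, m) + 1))

assert all(a(n) - a(n - 1) == b_closed(n) for n in range(1, 2000))
print(a(10**6) / 10**6)   # the problem claims this tends to 12/log(432) ≈ 1.9775
```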
<p>The <a href="https://citeseerx.ist.psu.edu/doc_view/pid/513611099c8ba4c5b8b878af143641175fe351b0" rel="nofollow noreferrer" title="Tom Leighton: Notes on Better Master Theorems for Divide-and-Conquer Recurrences">&quot;master theorem&quot; by Leighton</a> is applicable. Consider a recurrence <span class="math-container">$T(z) = g(z) + \sum_{1 \le k \le n} a_k T(b_k z + h_k(z))$</span> for <span class="math-container">$z \ge 0$</span>, such that: there are sufficient base cases; all <span class="math-container">$a_k &gt; 0$</span> and <span class="math-container">$0 &lt; b_k &lt; 1$</span>; there is a constant <span class="math-container">$c$</span> such that <span class="math-container">$\lvert g(z) \rvert = O(c^z)$</span>; and all <span class="math-container">$\lvert h_k(z)\rvert = O(z /(\log z)^2)$</span>. Then for the <span class="math-container">$p$</span> such that <span class="math-container">$\sum_{1 \le k \le n} a_k b_k^p = 1$</span>, the solution satisfies:</p> <p><span class="math-container">$$ T(z) = \Theta \left( z^p \left( 1 + \int_1^z \frac{g(u)}{u^{p + 1}} \, \mathrm{d} u \right) \right) $$</span></p> <p>The <span class="math-container">$h_k$</span> are fudge factors; they cover cases like the difference caused by taking floors and ceilings.</p> <p>Here we have <span class="math-container">$g(z) = 0$</span>, <span class="math-container">$a_1 = a_2 = a_3 = 1$</span>, <span class="math-container">$b_1 = 1/2$</span>, <span class="math-container">$b_2 = 1/3$</span>, and <span class="math-container">$b_3 = 1/6$</span>, so that <span class="math-container">$p = 1$</span>. With <span class="math-container">$g(z) = 0$</span>, the theorem tells you that <span class="math-container">$a_n = \Theta(n)$</span> (it gives the growth order, though not the constant <span class="math-container">$12/\log 432$</span>).</p>
<p>A complete asymptotic solution to exactly this kind of recurrences is derived by Erdös et al <a href="https://www-users.cse.umn.edu/%7Eodlyzko/doc/arch/sequence.family.pdf" rel="nofollow noreferrer">&quot;The Asymptotic Behavior of a Family of Sequences&quot;</a> Pacific Journal of Mathematics 126:2 (1987), pp 227-241. They use this exact recurrence as an example, and show that:</p> <p><span class="math-container">$$ a_n \sim \frac{12}{\log 432} n $$</span></p>
differentiation
<p>I've got this task I'm not able to solve. So I need to find the 100th derivative of $$f(x)=e^{x}\cos(x)$$ where $x=\pi$.</p> <p>I've tried using Leibniz's formula but it got me nowhere, and induction doesn't seem to help either, so if you could just give me a hint, I'd be very grateful.</p> <p>Many thanks!</p>
<p>HINT:</p> <p>$e^x\cos x$ is the real part of $y=e^{(1+i)x}$</p> <p>As $1+i=\sqrt2e^{i\pi/4}$</p> <p>$y_n=(1+i)^ne^{(1+i)x}=2^{n/2}e^x\cdot e^{i(n\pi/4+x)}$</p> <p>Can you take it from here?</p>
<p>Find a few lower-order derivatives first:</p> <p>\begin{align} f'(x)&amp;=&amp;e^x (\cos x -\sin x)&amp;\longleftarrow&amp;\\ f''(x)&amp;=&amp;e^x(\cos x -\sin x -\sin x -\cos x) \\ &amp;=&amp; -2e^x\sin x&amp;\longleftarrow&amp;\\ f'''(x)&amp;=&amp;-2e^x(\sin x + \cos x)&amp;\longleftarrow&amp;\\ f''''(x)&amp;=&amp; -2e^x(\sin x + \cos x + \cos x -\sin x)\\ &amp;=&amp; -4e^x \cos x \\ &amp;=&amp; -4f(x)&amp;\longleftarrow&amp;\\ &amp;...&amp;\\ \therefore f^{(100)}(\pi)&amp;=&amp;-4^{25} f(\pi) \end{align}</p>
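Both answers are easy to cross-check numerically via the identity $f^{(n)}(x)=\operatorname{Re}\big((1+i)^n e^{(1+i)x}\big)$. A short Python sketch (the function name is mine):

```python
import cmath
import math

def deriv(n, x):
    # n-th derivative of e^x cos x, via Re((1+i)^n e^{(1+i)x})
    return ((1 + 1j) ** n * cmath.exp((1 + 1j) * x)).real

# f^{(100)}(pi) = -4^25 f(pi) = -4^25 * (-e^pi) = 2^50 e^pi
expected = 2**50 * math.exp(math.pi)
assert math.isclose(deriv(100, math.pi), expected, rel_tol=1e-9)

# and the pattern f'''' = -4f from the other answer, at an arbitrary point
assert math.isclose(deriv(4, 1.0), -4 * math.exp(1.0) * math.cos(1.0), rel_tol=1e-9)
```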
differentiation
<p>I am a Software Engineering student, and this year I learned about how CPUs work. It turns out that electronic engineers (and, as I often see, people in my field too) use derivatives with discontinuous functions, for instance in order to calculate the optimal amount of ripple adders so as to minimise the execution time of the addition process:</p> <p><span class="math-container">$$\text{ExecutionTime}(n, k) = \Delta(4k+\frac{2n}{k}-4)$$</span> <span class="math-container">$$\frac{d\,\text{ExecutionTime}(n, k)}{dk}=4\Delta-\frac{2n\Delta}{k^2}=0$$</span> <span class="math-container">$$k= \sqrt{\frac{n}{2}}$$</span></p> <p>where <span class="math-container">$n$</span> is the number of bits in the numbers to add, <span class="math-container">$k$</span> is the number of adders in the ripple chain and <span class="math-container">$\Delta$</span> is the &quot;delta gate&quot; (the time it takes a gate to operate).</p> <p>Clearly you can see that the execution time function is not continuous at all, because <span class="math-container">$k$</span> is a natural number and so is <span class="math-container">$n$</span>. This is driving me crazy, because on the one hand I understand that I can analyse the function as a continuous one and get results in that way, and indeed I think that's what we do (&quot;I think&quot;, that's why I am asking), but my intuition and knowledge about mathematical analysis tell me that this is completely wrong: the function is not continuous and never will be, and because of that, the derivative with respect to <span class="math-container">$k$</span> or <span class="math-container">$n$</span> does not exist, since there is no rate of change.</p> <p>If someone could explain to me whether my first guess is correct or not and why, I'd appreciate it a lot. Thanks for reading and helping!</p>
<p>In general, computing the extrema of a continuous function and rounding them to integers does <em>not</em> yield the extrema of the restriction of that function to the integers. It is not hard to construct examples.</p> <p>However, your particular function is <em>convex</em> on the domain <span class="math-container">$k&gt;0$</span>. In this case the extremum is at one or both of the two integers nearest to the <em>unique</em> extremum of the continuous function.</p> <p>It would have been nice to explicitly state this fact when determining the minimum by this method, as it is really not obvious, but unfortunately such subtleties are often forgotten (or never known in the first place) in such applied fields. So I commend you for noticing the problem and asking!</p>
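To illustrate the "test the two nearest integers" recipe on the question's objective, here is a sketch with $\Delta$ factored out (it only rescales the function); the function names are mine:

```python
import math

def exec_time(n, k):
    # ExecutionTime / Delta = 4k + 2n/k - 4
    return 4 * k + 2 * n / k - 4

def best_k(n):
    # continuous minimizer is k* = sqrt(n/2); by convexity it suffices
    # to test its two integer neighbours (clamped to k >= 1)
    k_star = math.sqrt(n / 2)
    candidates = {max(1, math.floor(k_star)), math.ceil(k_star)}
    return min(candidates, key=lambda k: exec_time(n, k))

# brute force agrees over a range of word sizes
for n in range(1, 200):
    brute = min(range(1, n + 1), key=lambda k: exec_time(n, k))
    assert exec_time(n, best_k(n)) == exec_time(n, brute)
```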
<p>The main question here seems to be &quot;why can we differentiate a function only defined on integers?&quot;. The proper answer, as divined by the OP, is that we can't--there is no unique way to define such a derivative, because we can interpolate the function in many different ways. However, in the cases that you are seeing, what we are really interested in is not the derivative of the function, per se, but rather the extrema of the function. The derivative is just a tool used to find the extrema.</p> <p>So what's really going on here is that we start out with a function <span class="math-container">$f:\mathbb{N}\rightarrow \mathbb{R}$</span> defined only on positive integers, and we implicitly <em>extend</em> <span class="math-container">$f$</span> to another function <span class="math-container">$\tilde{f}:\mathbb{R}\rightarrow\mathbb{R}$</span> defined on all real numbers. By &quot;extend&quot; we mean that values of <span class="math-container">$\tilde{f}$</span> coincide with those of <span class="math-container">$f$</span> on the integers. Now, here's the crux of the matter: If we can show that there is some integer <span class="math-container">$n$</span> such that <span class="math-container">$\tilde{f}(n)\geq \tilde{f}(m)$</span> for all integers <span class="math-container">$m$</span>, i.e. <span class="math-container">$n$</span> is a maximum of <span class="math-container">$\tilde{f}$</span> <em>over the integers</em>, then we know the same is true for <span class="math-container">$f$</span>, our original function. The advantage of doing this is that now can use calculus and derivatives to analyze <span class="math-container">$\tilde{f}$</span>. 
It doesn't matter how we extend <span class="math-container">$f$</span> to <span class="math-container">$\tilde{f}$</span>, because at the end of the day we're are only using <span class="math-container">$\tilde{f}$</span> as a tool to find properties of <span class="math-container">$f$</span>, like maxima.</p> <p>In many cases, there is a natural way to extend <span class="math-container">$f$</span> to <span class="math-container">$\tilde{f}$</span>. In your case, <span class="math-container">$f=\text{ExecutionTime}$</span>, and to extend it you just take the formula <span class="math-container">$\Delta \left(4k + \frac{2n}{k} - 4\right)$</span> and allow <span class="math-container">$n$</span> and <span class="math-container">$k$</span> to be real-valued instead of integer-valued. You could have extended it a different way--e.g. <span class="math-container">$\Delta \left(4k + \frac{2n}{k} - 4\right) + \sin(2\pi k)$</span> is also a valid extension of <span class="math-container">$\text{ExecutionTime}(n,k)$</span>, but this is not as convenient. And all we are trying to do is find a convenient way to analyze the original, integer-valued function, so if there's a straightforward way to do it we might as well use it.</p> <hr /> <p>As an illustrative example, an interesting (and non-trivial) case of this idea of extending an integer-valued function to a continuous-valued one is the <a href="https://en.wikipedia.org/wiki/Gamma_function" rel="noreferrer">gamma function</a> <span class="math-container">$\Gamma$</span>, which is a continuous extension of the integer-valued factorial function. <span class="math-container">$\Gamma$</span> is not the only way to extend the factorial function, but it is for most purposes (in fact, all purposes that I know of) the most convenient.</p>
matrices
<p>It is well known that for invertible matrices $A,B$ of the same size we have $$(AB)^{-1}=B^{-1}A^{-1} $$ and a nice way for me to remember this is the following sentence:</p> <blockquote> <p>The opposite of putting on socks and shoes is taking the shoes off, followed by taking the socks off.</p> </blockquote> <p>Now, a similar law holds for the transpose, namely:</p> <p>$$(AB)^T=B^TA^T $$</p> <p>for matrices $A,B$ such that the product $AB$ is defined. My question is: is there any intuitive reason as to why the order of the factors is reversed in this case? </p> <p>[Note that I'm aware of several proofs of this equality, and a proof is not what I'm after]</p> <p>Thank you!</p>
<p>One of my best college math professor always said:</p> <blockquote> <p>Make a drawing first.</p> </blockquote> <p><img src="https://i.sstatic.net/uGxff.gif" alt="enter image description here"></p> <p>Although, he couldn't have made this one on the blackboard.</p>
<p>By dualizing $AB: V_1\stackrel{B}{\longrightarrow} V_2\stackrel{A}{\longrightarrow}V_3$, we have $(AB)^T: V_3^*\stackrel{A^T}{\longrightarrow}V_2^*\stackrel{B^T}{\longrightarrow}V_1^*$. </p> <p>Edit: $V^*$ is the dual space $\text{Hom}(V, \mathbb{F})$, the vector space of linear transformations from $V$ to its ground field, and if $A: V_1\to V_2$ is a linear transformation, then $A^T: V_2^*\to V_1^*$ is its dual defined by $A^T(f)=f\circ A$. By abuse of notation, if $A$ is the matrix representation with respect to bases $\mathcal{B}_1$ of $V_1$ and $\mathcal{B}_2$ of $V_2$, then $A^T$ is the matrix representation of the dual map with respect to the dual bases $\mathcal{B}_1^*$ and $\mathcal{B}_2^*$.</p>
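A down-to-earth companion to this: with rectangular matrices, the shape bookkeeping alone forces the reversal, since $A^TB^T$ generally has the wrong shape while $B^TA^T$ always matches $(AB)^T$. A quick check in plain Python (the helper names are mine):

```python
def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    # (i, j) entry is the dot product of row i of A with column j of B
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2, 3],
     [4, 5, 6]]            # 2 x 3
B = [[7, 8],
     [9, 10],
     [11, 12]]             # 3 x 2

assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))
# A^T B^T, by contrast, is a 3 x 3 matrix; it cannot equal the 2 x 2 (AB)^T
assert len(matmul(transpose(A), transpose(B))) == 3
```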
game-theory
<p>I thought a lot about this question — and initially, I intended to ask this on <a href="https://gamedev.stackexchange.com/">gamedev.stackexchange.com</a> — but due to its rather theoretical aspects, I think it might be more appropriate to address a rather mathematical spectrum of readers.</p> <p>Imagine you had to design a dungeon in a game that comprises many puzzles that are mutually dependent on one another. If you lack an example, just think back to the times you've played Zelda or any other video game of this sort:</p> <p><img src="https://i.sstatic.net/FzVLu.png" alt="A Zelda dungeon"></p> <p><strong>Rules</strong>:</p> <p>The player may freely wander around the dungeon; some rooms are locked, some aren't. The main goal is to solve puzzles in rooms $\{R_0, ..., R_n\}$, either to obtain keys that let the player enter a locked room, or to obtain an item that allows the player to solve a puzzle in some room $R_k$ that wasn't solvable before without that item's ability.</p> <p>Every key works with every door. Once a key is used, the door is unlocked forever and the player loses the key. In the end, the player needs to reach a specific room $R_b$ where he can fight the dungeon's end boss in order to finish the dungeon. 
The dungeon is <strong>solvable</strong> iff the player:</p> <ul> <li>can reach rooms $R_0$ to $R_n$, regardless of the order chosen </li> <li>has collected all items $I_0$ to $I_m$ that he needs to fight the dungeon's end boss once the player enters $R_b$</li> </ul> <p><strong>The goal</strong>:</p> <p>Your goal is to design the dungeon in a way that guarantees maximum <strong><a href="http://en.wikipedia.org/wiki/Nonlinear_gameplay" rel="nofollow noreferrer">nonlinearity</a></strong>, i.e., the player should, at any point of time in the dungeon, be able to try to solve as many puzzles as possible and thus be able to explore as many regions as possible without harming the solvability of the dungeon.</p> <p><strong>The problem</strong>:</p> <p>For example, imagine the player has two keys, but then utilizes both of them in order to get through two consecutive doors. The player then finds himself in a puzzle room whose solution requires some item $I_\alpha$ that he should actually have gotten using the two keys that he now does not possess anymore. Unfortunately, the player finds himself in a gridlock — the dungeon is now unsolvable.</p> <p>Can you guarantee that your dungeon design is free of gridlocks?</p> <p><strong>Disclaimer</strong></p> <p>I am not asking for any <em>implementation</em>, <em>code</em> or any of the like. This is a <strong>strictly theoretical question</strong> that I want to consider from a <strong>strictly mathematical</strong> point of view. Any suggestions that exceed these limitations (by providing any material such as the previously mentioned) are — albeit highly appreciated — explicitly not asked for.</p>
<p>This might not be exactly what you're looking for, but I think the safest (and probably easiest) way to accomplish this might be to design the whole dungeon to be (at least mostly) linear at first, then adding in some choices for the player, so it doesn't seem as linear.</p> <p>Say you come up with some plan: Visiting the rooms in the order 1,4,5,3,2 will allow the player to succeed, so make sure the necessary items are placed to allow the player to do that. Then you could conceivably move some of the necessary items around - maybe they would find the item that allows them to open room 3 in the first room as well, but they wouldn't know if they should head to room 4 or 3 first, for a little non-linearity.</p> <p>Alternatively, if the dungeon is fairly large, you could break it up into a few underlying linear pieces: Let's say it's got 4 groups of 5 rooms, call the groups $A,B,C$, and $D$. From $A$, you could complete either $B$ or $C$ first, while both $B$ and $C$ would need to be explored in order to get everything to work on the $D$ group of rooms. And from within $A$, perhaps you could start exploring the $C$ group, but couldn't finish until you'd exhausted $A$, just to layer in some additional complexity.</p> <p>A mathematical object that would help seems fairly tough. I imagine either <a href="http://en.wikipedia.org/wiki/Partially_ordered_set" rel="nofollow">partially ordered sets</a> or <a href="http://en.wikipedia.org/wiki/Directed_graph" rel="nofollow">directed graphs</a> could help you map out the dependencies. For directed graphs, the issue that seems difficult is that you've got two different things to consider - rooms and items. Either way, if the dungeon is complicated enough, then either of the above might grow too complicated to be of much help. 
I think partially ordered sets would be the most promising, and I can draw an example of how I picture it helping, if you're interested.</p> <p>I wish I knew of a great mathematical tool that could help you, so my most promising idea seems to be chunking the dungeon up into semi-linear groups. That way you can focus on a collection of smaller problems individually and make sure the dependencies all work out, and add in some complexity safely.</p>
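To make the gridlock check concrete: for small dungeons one can brute-force the state space of "locked doors opened so far" and demand that <em>every</em> reachable state still admits a winning continuation. The encoding below is invented for illustration (rooms grant keys on first visit, every key opens any door, items are omitted):

```python
def reachable_rooms(opened, doors, start=0):
    # rooms reachable from `start` through unlocked or already-opened doors
    seen, stack = {start}, [start]
    while stack:
        r = stack.pop()
        for i, (a, b, locked) in enumerate(doors):
            if locked and i not in opened:
                continue
            for u, v in ((a, b), (b, a)):
                if u == r and v not in seen:
                    seen.add(v)
                    stack.append(v)
    return frozenset(seen)

def gridlock_free(doors, key_rooms, boss):
    # True iff no reachable "set of opened doors" state strands the player
    memo = {}

    def openable(opened):
        rooms = reachable_rooms(opened, doors)
        keys = sum(key_rooms.get(r, 0) for r in rooms) - len(opened)
        if keys <= 0:
            return rooms, []
        return rooms, [i for i, (a, b, locked) in enumerate(doors)
                       if locked and i not in opened and (a in rooms or b in rooms)]

    def solvable(opened):
        # does SOME sequence of further openings reach the boss?
        if opened not in memo:
            rooms, nxt = openable(opened)
            memo[opened] = boss in rooms or any(solvable(opened | {i}) for i in nxt)
        return memo[opened]

    seen, stack = set(), [frozenset()]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        if not solvable(s):
            return False
        stack.extend(s | {i} for i in openable(s)[1])
    return True

# toy dungeon: doors are (room_a, room_b, locked); room 0 is the start,
# key_rooms says how many keys a room grants on first visit
doors_bad = [(0, 1, True), (1, 3, True), (0, 2, True)]   # door to room 2 wastes a key
doors_ok = [(0, 1, True), (1, 3, True)]
keys = {0: 1, 1: 1}
assert gridlock_free(doors_bad, keys, 3) is False   # spending the key on 0-2 strands you
assert gridlock_free(doors_ok, keys, 3) is True
```

The state space grows exponentially in the number of locked doors, so this is only a design-time check for small dungeons, but it captures exactly the "no reachable dead state" property the question asks about.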
<p>The maximum non-linearity is achieved by allowing the player to tackle all of the puzzles in any order. This means that all of the puzzles can be accessed without using any keys, and that the keys must all be used in a linear corridor which separates the area containing the puzzles from $R_b$.</p> <p>However, I suspect this isn't what you're looking for, so you may need to refine the question.</p>
combinatorics
<p>This picture was in my friend's math book: </p> <p><a href="https://i.sstatic.net/7ngrZ.jpg"><img src="https://i.sstatic.net/7ngrZ.jpg" alt=""></a> </p> <p>Below the picture it says: </p> <blockquote> <p>There are $3072$ ways to draw this flower, starting from the center of the petals, without lifting the pen.</p> </blockquote> <p>I know it's based on combinatorics, but I don't know how to show that there are actually $3072$ ways to do this. I'd be glad if someone showed how to show that there are exactly $3072$ ways to draw this flower, starting from the center of the petals, without lifting the pen (assuming that $3072$ is the correct amount).</p>
<p>First you have to draw the petals. There are $4!=24$ ways to choose the order of the petals and $2^4=16$ ways to choose the direction you go around each petal. Then you go down the stem to the leaves. There are $2! \cdot 2^2=8$ ways to draw the leaves. Finally you draw the lower stem. $24 \cdot 16 \cdot 8=3072$</p>
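Since the count factors into independent choices, it can also be confirmed by brute enumeration of (petal order, petal directions, leaf order, leaf directions); a small Python sketch:

```python
from itertools import permutations, product

drawings = [
    (petal_order, petal_dirs, leaf_order, leaf_dirs)
    for petal_order in permutations(range(4))      # which petal next: 4!
    for petal_dirs in product("LR", repeat=4)      # loop direction:   2^4
    for leaf_order in permutations(range(2))       # which leaf next:  2!
    for leaf_dirs in product("LR", repeat=2)       # loop direction:   2^2
]
assert len(drawings) == 3072                       # 24 * 16 * 8
```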
<p>At the start you can set off in 8 different ways, then 6, then 4, then 2; at the bottom of the picture you can first go 4 different ways and then 2. $8\cdot6\cdot4\cdot2\cdot4\cdot2 = 3072$</p>
probability
<p>I recently <a href="https://twitter.com/willcole/status/999276319061041153" rel="noreferrer">posted a tweet</a> claiming I had encountered a real life <a href="https://en.wikipedia.org/wiki/Monty_Hall_problem" rel="noreferrer">Monty Hall dilemma</a>. Based on the resulting discussion, I'm not sure I have. <hr></p> <p><strong>The Scenario</strong> </p> <ul> <li><p>I have 3 tacos (A,B,C) where tacos A and C are filled with beans, and taco B is filled with steak.</p></li> <li><p>I have no foreknowledge of the filling of any tacos.</p></li> <li><p>My wife <em>only</em> knows that taco C is filled with beans.</p></li> <li><p>My wife and I both know that I want steak.</p></li> <li><p>After I pick taco A, my wife informs me taco C is filled with beans. </p></li> <li><p>I switch my pick from taco A to taco B, thinking the logic behind the Monty Hall problem is relevant to my choice. <hr></p></li> </ul> <p><strong>Edit for clarity</strong></p> <ul> <li><p>Timing: The contents of taco C were not revealed to me until after I had made my selection of taco A.</p></li> <li><p>My knowledge of what my wife knew: When she told me the contents of taco C, I knew that she had previously opened taco C. I also knew that she had no other knowledge of the contents of the other tacos.</p></li> </ul> <p><strong>Questions</strong></p> <ol> <li>Even though my wife does not know the fillings of all the tacos, does her revealing that taco C is definitively not the taco I want after I've made my initial selection satisfy the logic for me switching (from A to B) if I thought it would give me a 66.6% chance of getting the steak taco?</li> <li>If this is not a Monty Hall situation, is there any benefit in me switching?</li> </ol>
<p>No, this is not a Monty Hall problem. If your wife only knew the contents of #3, and was going to reveal it regardless, then the odds were always 50/50/0. The information never changed. It was just delayed until after your original choice. Essentially, you NEVER had the chance to pick #3, as she would have immediately told you it was wrong. (In this case, she is on your team, and essentially part of the player). #3 would be eliminated regardless: "No, not that one!"</p> <p>Imagine you had picked #3. Monty Hall never said, "You picked a goat. Want to switch?" </p> <p>If he did, the odds would immediately become 50/50, which is what we have here.</p> <p>Monty always reveals the worst half of the 2/3 you didn't select, leaving the player at 33/67/0.</p>
<p><strong>tl;dr: Switching in this case has no effect, unlike the Monty Hall problem, where switching doubles your odds.</strong></p> <p>The reason this is different is that Will's wife knew the content of one, and only one taco, and that it was a bean one, and Will <em>knows</em> that his wife only knew the content of that one bean taco. (Monty is different because he knew <em>all</em> doors.)</p> <p>Here's why:</p> <p>Unlike the MH problem, which has one car, and two goats that can be treated as identical, the Will's Wife Problem has one steak, one <em>known</em> bean, and one <em>unknown</em> bean, so the beans need to be considered differently.</p> <p>That's important because MH's reveal gave players a strong incentive to switch, but gave them NO new information by revealing a goat in an unpicked door - whatever the player picked, he could show a goat, so no new info is provided by the reveal. But that's not the case for Will's wife:</p> <p>Since she only knows the content of the <em>known</em> bean taco, her revealing that it's not the one you picked actually changes what you know about your odds. Because she would have behaved differently if you'd picked the known bean one - she'd have said, "the one you picked is bean". </p> <p><em>Without</em> her info, you only had a 1/3 chance of having the steak, but once she shows you that you <em>didn't</em> pick the only bean taco she knew about, you <em>already</em> know you have a 50% chance of having the steak. </p> <p>And since you <em>also</em> have a 50% chance of having the <em>unknown</em> bean taco, it's irrelevant if you switch or not.</p> <p>In the MH problem, the key fact is this: Since the reveal in no way changes what you know about your odds, there's only a 1/3 chance that you START with the car. 
And since you:</p> <ul> <li>Win by staying in all cases where you started with the car, and</li> <li>Win by switching in all cases where you <em>didn't</em> start with the car...</li> </ul> <p>In the MH problem, switching doubles your odds (from 1/3 to 2/3), but in this case, switching has no impact (since it's 50% either way).</p>
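A quick Monte Carlo check of both claims (the encoding is mine; the "wife" simulation keeps only the runs in which she can reveal an unpicked bean taco, since she only ever knows taco 2):

```python
import random

random.seed(0)

def monty_hall(trials=100_000):
    # host knows every taco and always reveals an unpicked bean taco
    stay = switch = 0
    for _ in range(trials):
        steak, pick = random.randrange(3), 0      # by symmetry, always pick taco 0
        opened = next(t for t in (1, 2) if t != steak)
        other = next(t for t in (1, 2) if t != opened)
        stay += pick == steak
        switch += other == steak
    return stay / trials, switch / trials

def wife_variant(trials=100_000):
    # the wife only ever knows taco 2 is beans; keep the runs where she can
    # reveal it as an unpicked bean taco (steak elsewhere, pick elsewhere)
    stay = switch = kept = 0
    while kept < trials:
        steak, pick = random.randrange(3), random.randrange(3)
        if steak == 2 or pick == 2:
            continue
        kept += 1
        other = 1 - pick                          # the remaining unrevealed taco
        stay += pick == steak
        switch += other == steak
    return stay / trials, switch / trials

print(monty_hall())    # close to (1/3, 2/3): switching doubles the odds
print(wife_variant())  # close to (1/2, 1/2): switching changes nothing
```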
logic
<p>I'm sure there are easy ways of proving things using, well... any other method besides this! But still, I'm curious to know whether it would be acceptable/if it has been done before?</p>
<p>There is a disappointing way of answering your question affirmatively: If $\phi$ is a statement such that First order Peano Arithmetic $\mathsf{PA}$ proves "$\phi$ is provable", then in fact $\mathsf{PA}$ also proves $\phi$. You can replace here $\mathsf{PA}$ with $\mathsf{ZF}$ (Zermelo Fraenkel set theory) or your usual or favorite first order formalization of mathematics. In a sense, this is exactly what you were asking: If we can prove that there is a proof, then there is a proof. On the other hand, this is actually unsatisfactory because there are no known natural examples of statements $\phi$ for which it is actually easier to prove that there is a proof rather than actually finding it. </p> <p>(The above has a neat formal counterpart, <a href="http://en.wikipedia.org/wiki/L%C3%B6b%27s_theorem" rel="nofollow noreferrer">Löb's theorem</a>, that states that if $\mathsf{PA}$ can prove "If $\phi$ is provable, then $\phi$", then in fact $\mathsf{PA}$ can prove $\phi$.)</p> <p>There are other ways of answering affirmatively your question. For example, it is a theorem of $\mathsf{ZF}$ that if $\phi$ is a $\Pi^0_1$ statement and $\mathsf{PA}$ does not prove its negation, then $\phi$ is true. To be $\Pi^0_1$ means that $\phi$ is of the form "For all natural numbers $n$, $R(n)$", where $R$ is a recursive statement (that is, there is an algorithm that, for each input $n$, returns in a finite amount of time whether $R(n)$ is true or false). Many natural and interesting statements are $\Pi^0_1$: The Riemann hypothesis, the Goldbach conjecture, etc. It would be fantastic to verify some such $\phi$ this way. On the other hand, there is no scenario for achieving anything like this.</p> <p>The key to the results above is that $\mathsf{PA}$, and $\mathsf{ZF}$, and any reasonable formalization of mathematics, are <em>arithmetically sound</em>, meaning that their theorems about natural numbers are actually true in the standard model of arithmetic. 
The first paragraph is a consequence of arithmetic soundness. The third paragraph is a consequence of the fact that $\mathsf{PA}$ proves all true $\Sigma^0_1$-statements. (Much less than $\mathsf{PA}$ suffices here, usually one refers to Robinson's arithmetic <a href="http://en.wikipedia.org/wiki/Robinson_arithmetic" rel="nofollow noreferrer">$Q$</a>.) I do not recall whether this property has a standard name. </p> <p>Here are two related posts on MO: </p> <ol> <li><a href="https://mathoverflow.net/q/49943/6085">$\mathrm{Provable}(P)\Rightarrow \mathrm{provable}(\mathrm{provable}(P))$?</a></li> <li><a href="https://mathoverflow.net/q/127322/6085">When does $ZFC \vdash\ ' ZFC \vdash \varphi\ '$ imply $ZFC \vdash \varphi$?</a></li> </ol>
<p>I'd say the model-theoretic proof of the Ax-Grothendieck theorem falls into this category. There may be other ways of proving it, but this is the only proof I saw in grad school, and it's pretty natural if you know model theory.</p> <p>The theorem states that for any polynomial map $f:\mathbb{C}^n \to\mathbb{C}^n$, if $f$ is injective (one-to-one), then it is surjective (onto). The theorem uses several results in model theory, and the argument goes roughly as follows. </p> <p>Let $ACL_p$ denote the theory of algebraically closed fields of characteristic $p$. $ACL_0$ is axiomatized by the axioms of an algebraically closed field and the axiom scheme $\psi_2, \psi_3, \psi_4,\ldots$, where $\psi_k$ is the statement "for all $x \neq 0$, $k x \neq 0$". Note that all $\psi_k$ are also proved by $ACL_p$, if $p$ does not divide $k$.</p> <ol> <li>The theorem is true in $ACL_p$, $p&gt;0$. This can be easily shown by contradiction: assume a counterexample, then take the finite field generated by the elements in the counterexample, call that finite field $F_0$. Since $F_0^n\subseteq F^n$ is finite, and the map is injective, it must be surjective as well.</li> <li>The theory of algebraically closed fields in characteristic $p$ is complete (i.e. the standard axioms prove or disprove all statements expressible in the first order language of rings).</li> <li>For each degree $d$ and dimension $n$, restrict Ax-Grothendieck to a statement $\phi_{d,n}$, which is expressible as a statement in the first order language of rings. Then $\phi_{d,n}$ is provable in $ACL_p$ for all characteristics $p &gt; 0$.</li> <li>Assume $\phi_{d,n}$ is false for $p=0$. Then by completeness, there is a proof $P$ of $\neg \phi_{d,n}$ in $ACL_0$. By the finiteness of proofs, there exists a finite subset of axioms for $ACL_0$ which are used in this proof. If none of the $\psi_k$ are used in $P$, then $\neg \phi_{d,n}$ is true of all algebraically closed fields, which cannot be the case by (2). 
Let $k_0,\ldots, k_m$ be the collection of indices of $\psi_k$ used in $P$. Pick a prime $p_0$ which does not divide any of $k_0,\ldots,k_m$. Then all of the axioms used in $P$ are also theorems of $ACL_{p_0}$. Thus $ACL_{p_0}$ also proves $\neg \phi_{d,n}$, again contradicting (1). Therefore there is a proof of $\phi_{d,n}$ in $ACL_0$.</li> </ol> <p>So the proof is actually along the lines of "for each degree $d$ and dimension $n$ there is a proof of the Ax-Grothendieck theorem restricted to that degree and dimension." What any of those proofs are, I have no clue.</p>
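Step (1) above is ultimately just the pigeonhole principle, and it is easy to see concretely over a small finite field. The sketch below uses a toy map of my own choosing, $f(x,y)=(x+y^2,\,y)$ over $\mathbb{F}_7$, not anything taken from the proof itself:

```python
# Over a finite field, an injective polynomial map F^n -> F^n is
# automatically surjective: an injection of a finite set into itself
# hits every element (pigeonhole).
p = 7
points = [(x, y) for x in range(p) for y in range(p)]  # the 49 points of F_7^2

def f(v):
    # a polynomial map F_7^2 -> F_7^2 (a hypothetical example)
    x, y = v
    return ((x + y * y) % p, y)

image = {f(v) for v in points}
assert len(image) == len(points)  # injective: no two points collide
assert image == set(points)       # hence surjective: every point is hit
```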
geometry
<ol> <li><p>The volume of an $n$-dimensional ball of radius $1$ is given by the classical formula $$V_n=\frac{\pi^{n/2}}{\Gamma(n/2+1)}.$$ For small values of $n$, we have $$V_1=2\qquad$$ $$V_2\approx 3.14$$ $$V_3\approx 4.18$$ $$V_4\approx 4.93$$ $$V_5\approx 5.26$$ $$V_6\approx 5.16$$ $$V_7\approx 4.72$$ It is not difficult to prove that $V_n$ assumes its maximal value when $n=5$. </p> <p><strong>Question.</strong> Is there any non-analytic (i.e. geometric, probabilistic, combinatorial...) demonstration of this fact? What is so special about $n=5$?</p></li> <li><p>I also have a similar question concerning the $n$-dimensional volume $S_n$ ("surface area") of a unit $n$-sphere. Why is the maximum of $S_n$ attained at $n=7$ from a geometric point of view?</p></li> </ol> <p><strong>note</strong>: the question has also been <a href="https://mathoverflow.net/questions/53119/volumes-of-n-balls-what-is-so-special-about-n5">asked on MathOverflow</a> for those curious to other answers. </p>
<p>If you compare the volume of the sphere to that of its enclosing hyper-cube you will find that this ratio continually diminishes. The enclosing hyper-cube is 2 units in length per side if $R=1$. Then we have:</p> <p>$$V_1/2=1\qquad$$ $$V_2/4\approx 0.785$$ $$V_3/8\approx 0.524$$ $$V_4/16\approx 0.308$$ $$V_5/32\approx 0.164$$</p> <p>The reason for this behavior is how we build hyper-spheres from low dimensions to high dimensions. Think, for example, of extending $S_1$ to $S_2$. We begin with a segment extending from $-1$ to $+1$ on the $x$ axis. We build a 2-sphere by sweeping this segment out along the $y$ axis using the scaling factor $\sqrt{1-y^2}$. Compare this to the process of sweeping out the respective cube, where the scale factor is $1$. So now we only occupy approximately $3/4$ of the enclosing cube (i.e. square for $n=2$). Likewise for $n=3$, we sweep the circle along the $z$ axis using the scaling factor, losing even more volume compared to the cylinder we would get if we did not scale the circle as it was swept. So as we extend $S_{n-1}$ to get $S_n$ we start with the already diminished volume and lose even more as we sweep out into the $n$-th dimension.</p> <p>It would be easier to explain with figures, but hopefully you can work through how this works for lower dimensions and extend to higher ones.</p>
<p>At some point, the factorial must overtake the power function. This happens at five dimensions for the volume of the sphere, and at seven for its surface. The actual race involved is between an alternation of $2$ and $\pi$ on one side and $n/2$ on the other. At $n=5$, $\frac{5}{2}&gt;2$, but $\frac{6}{2}&lt;\pi$. </p> <p>That means that five is the last dimension in which the sphere's volume still increases relative to the prism-product of the radius. </p> <p>From 19 dimensions on, the surface of the sphere is less than the prism-product of the radius. </p>
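Both maxima are easy to confirm directly from the formula $V_n=\pi^{n/2}/\Gamma(n/2+1)$. The sketch below also adopts the convention $S_n=2\pi^{n/2}/\Gamma(n/2)$ for the surface area of the unit sphere in $\mathbb{R}^n$, which is the convention under which the maximum falls at $n=7$:

```python
import math

def V(n):
    # volume of the unit ball in R^n
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def S(n):
    # surface area of the unit sphere in R^n (one common convention)
    return 2 * math.pi ** (n / 2) / math.gamma(n / 2)

print(max(range(1, 51), key=V))  # 5
print(max(range(1, 51), key=S))  # 7
```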
probability
<p><strong>Question:</strong> Suppose we have one hundred seats, numbered 1 through 100. We randomly select 25 of these seats. What is the expected number of selected pairs of seats that are consecutive? (To clarify: we would count two consecutive selected seats as a single pair.)</p> <p>For example, if the selected seats are all consecutive (eg 1-25), then we have 24 consecutive pairs (eg 1&amp;2, 2&amp;3, 3&amp;4, ..., 24&amp;25). There are 76 such runs of 25 consecutive seats, so the probability of this happening is 76/($_{100}C_{25}$). So this contributes $24\cdot 76/(_{100}C_{25}$) to the expected number of consecutive pairs. </p> <p><strong>Motivation</strong>: I teach. Near the end of an exam, when most of the students have left, I notice that there are still many pairs of students next to each other. I want to know whether the number that remain is to be expected or not. </p>
<p>If you're just interested in the <em>expectation</em>, you can use the fact that expectation is additive to compute</p> <ul> <li>The expected number of consecutive integers among $\{1,2\}$, plus</li> <li>The expected number of consecutive integers among $\{2,3\}$, plus</li> <li>....</li> <li>plus the expected number of consecutive integers among $\{99,100\}$.</li> </ul> <p>Each of these 99 expectations is simply the probability that $n$ and $n+1$ are both chosen, which is $\frac{25}{100}\frac{24}{99}$.</p> <p>So the expected number of pairs is $99\frac{25}{100}\frac{24}{99} = 6$.</p>
<p>Let me present the approach proposed by Henning in another way. </p> <p>We have $99$ possible pairs: $\{(1,2),(2,3),\ldots,(99,100)\}$. Let's define $X_i$ as</p> <p>$$ X_i = \left \{ \begin{array}{ll} 1 &amp; i\text{-th pair is chosen}\\ 0 &amp; \text{otherwise}\\ \end{array} \right . $$</p> <p>The $i$-th pair is denoted as $(i, i+1)$. That pair is chosen when the integer $i$ is chosen, with probability $25/100$, <em>and</em> the integer $i+1$ is also chosen, with probability $24/99$. Then,</p> <p>$$E[X_i] = P(X_i = 1) = \frac{25}{100}\frac{24}{99},$$</p> <p>and this holds for $i = 1,2,\ldots,99$. The total number of chosen pairs is then given as</p> <p>$$X = X_1 + X_2 + \ldots + X_{99},$$</p> <p>and using the linearity of the expectation, we get</p> <p>\begin{align} E[X] &amp;= E[X_1] + E[X_2] + \cdots +E[X_{99}]\\ &amp;= 99E[X_1]\\ &amp;= 99\frac{25}{100}\frac{24}{99} = 6 \end{align}</p>
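The linearity-of-expectation answer is easy to corroborate with a quick Monte Carlo simulation (the seat and sample counts mirror the question; the seed and trial count are incidental choices):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
seats, chosen_count, trials = 100, 25, 20000

total_pairs = 0
for _ in range(trials):
    chosen = set(random.sample(range(1, seats + 1), chosen_count))
    # count occupied adjacent pairs (i, i+1)
    total_pairs += sum(1 for i in range(1, seats)
                       if i in chosen and i + 1 in chosen)

estimate = total_pairs / trials
print(round(estimate, 2))  # close to the exact value 99 * (25/100) * (24/99) = 6
```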
matrices
<p>Let $A$ be an $n\times m$ matrix. Prove that $\operatorname{rank} (A) = 1$ if and only if there exist column vectors $v \in \mathbb{R}^n$ and $w \in \mathbb{R}^m$ such that $A=vw^t$.</p> <hr> <p>Progress: I'm going back and forth between using the definitions of rank: $\operatorname{rank} (A) = \dim(\operatorname{col}(A)) = \dim(\operatorname{row}(A))$ or using the rank theorem that says $ \operatorname{rank}(A)+\operatorname{nullity}(A) = m$. So in the second case I have to prove that $\operatorname{nullity}(A)=m-1$.</p>
<p><strong>Hints:</strong> </p> <p>$A=\mathbf v\mathbf w^T\implies\operatorname{rank}A=1$ should be pretty easy to prove directly. Multiply a vector in $\mathbb R^m$ by $A$ and see what you get. </p> <p>For the other direction, think about what $A$ does to the basis vectors of $\mathbb R^m$ and what this means about the columns of $A$. </p> <hr> <p><strong>Solution</strong> </p> <p>Suppose $A=\mathbf v\mathbf w^T$. If $\mathbf u\in\mathbb R^m$, then $A\mathbf u=\mathbf v\mathbf w^T\mathbf u=(\mathbf u\cdot\mathbf w)\mathbf v$. Thus, $A$ maps every vector in $\mathbb R^m$ to a scalar multiple of $\mathbf v$, hence $\operatorname{rank}A=\dim\operatorname{im}A=1$. </p> <p>Now, assume $\operatorname{rank}A=1$, so the image of $A$ is spanned by a single fixed vector $\mathbf v\in\mathbb R^n$. Then for every $\mathbf u\in\mathbb R^m$ we have $A\mathbf u=k\mathbf v$ for some scalar $k$ depending on $\mathbf u$. In particular, this is true for the basis vectors of $\mathbb R^m$, so every column of $A$ is a multiple of $\mathbf v$. That is, $$ A=\pmatrix{w_1\mathbf v &amp; w_2\mathbf v &amp; \cdots &amp; w_m\mathbf v}=\mathbf v\pmatrix{w_1&amp;w_2&amp;\cdots&amp;w_m}=\mathbf v\mathbf w^T. $$</p>
<p>Suppose that $A$ has rank one. Then its image is one dimensional, so there is some nonzero $v$ that generates it. Moreover, for any other $w$, we can write $$Aw = \lambda(w)v$$</p> <p>for some scalar $\lambda(w)$ that depends linearly on $w$ by virtue of $A$ being linear and $v$ being a basis of the image of $A$. This then defines a nonzero functional $\mathbb R^m \longrightarrow \mathbb R$, which must be given by taking the dot product with some $w_0$; say $\lambda(w) =\langle w_0,w\rangle$. It follows then that </p> <p>$$ A(w) = \langle w_0,w\rangle v$$ for every $w$, or, what is the same, that $A= vw_0^t$. </p>
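The forward direction can also be sanity-checked numerically: a matrix of the form $\mathbf v\mathbf w^T$ has every $2\times2$ minor equal to zero, which is equivalent to rank at most $1$. A small pure-Python sketch with arbitrary sample vectors:

```python
v = [2, -1, 3]        # v in R^3
w = [1, 4, 0, -2]     # w in R^4
A = [[vi * wj for wj in w] for vi in v]   # A = v w^T, a 3x4 matrix

# every 2x2 minor of v w^T vanishes, since
# (v_i w_k)(v_j w_l) - (v_i w_l)(v_j w_k) = 0
minors = [A[i][k] * A[j][l] - A[i][l] * A[j][k]
          for i in range(3) for j in range(i + 1, 3)
          for k in range(4) for l in range(k + 1, 4)]
assert all(m == 0 for m in minors)

# and A is not the zero matrix, so its rank is exactly 1
assert any(x != 0 for row in A for x in row)
```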
matrices
<p>I am a programmer, so to me $[x] \neq x$&mdash;a scalar in some sort of container is not equal to the scalar. However, I just read in a math book that for $1 \times 1$ matrices, the brackets are often dropped. This strikes me as very sloppy notation if $1 \times 1$ matrices are not at least <em>functionally equivalent</em> to scalars. As I began to think about the matrix operations I am familiar with, I could not think of any (tho I am weak on matrices) in which a $1 \times 1$ matrix would not act the same way as a scalar would when the corresponding scalar operations were applied to it. So, is $[x]$ functionally equivalent to $x$? And can we then say $[x] = x$? (And are those two different questions, or are entities in mathematics "duck typed" as we would say in the coding world?)</p>
<p>No. To give a concrete example, you can multiply a 2x2 matrix by a scalar, but you can't multiply a 2x2 matrix by a 1x1 matrix.</p> <p>It is sloppy notation.</p>
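In code this distinction shows up as a shape error. A minimal sketch with a hand-rolled `matmul` helper (my own illustration, not a library function):

```python
def matmul(A, B):
    # multiply an m x n matrix by an n x p matrix, both as nested lists
    if len(A[0]) != len(B):
        raise ValueError("inner dimensions must agree")
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

M = [[1, 2], [3, 4]]

# multiplying by the scalar 3 is always defined:
assert [[3 * x for x in row] for row in M] == [[3, 6], [9, 12]]

# but M times the 1x1 matrix [[3]] is not even well-typed:
try:
    matmul(M, [[3]])
    raise AssertionError("should not be reachable")
except ValueError:
    pass  # inner dimensions 2 and 1 disagree
```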
<h2>Some Background</h2> <p>There are three basic kinds of mathematical objects at play in the original question: scalars, vectors, and matrices. In order to answer the question, it is perhaps useful to develop a better understanding of what each of these objects is, and how it should be interpreted.</p> <ol> <li><p>In this context, a <strong>scalar</strong> is an element of a field. Without getting into too much detail, a field is an object made from three ingredients $(k,+,\cdot)$, where $k$ is a set, and $+$ and $\cdot$ are two binary operations (addition and multiplication). These operations are associative and commutative, there are distinguished identity elements (0 and 1) for each operation, every element has an additive inverse, every nonzero element has a multiplicative inverse, and multiplication distributes over addition.</p> <p>Examples of fields include (among others) the real numbers, as well as the rationals, the complex numbers, and (if you are feeling a little more esoteric) the $p$-adic numbers.</p></li> <li><p>A <strong>vector</strong> is an element of a vector space. A vector space is an object made from four ingredients $(V,k,+,\cdot)$, where $V$ is a set, $k$ is a field (the <em>base field</em>), $+$ is a binary operation which acts on two vectors, and $\cdot$ is a binary operation (called <em>scalar multiplication</em>) which acts on a scalar from $k$ and a vector from $V$. The addition is associative and commutative, there is a distinguished 0 vector (the additive identity), and every vector has an additive inverse. Scalar multiplication allows scalars to "act on" vectors; a vector can be multiplied by a scalar, and this multiplication "plays nice" with the addition (e.g. 
$a(b\mathbf{v}) = (ab)\mathbf{v}$ and $a(\mathbf{u}+\mathbf{v}) = a\mathbf{u} + a\mathbf{v}$ for scalars $a$ and $b$, and vectors $\mathbf{u}$ and $\mathbf{v}$).</p> <p>Examples of vector spaces include $\mathbb{R}^n$ (the base field is $k = \mathbb{R}$), $\mathbb{C}^n$ (the base field is $\mathbb{C}$), and the space of continuous functions from $[0,1]$ to $\mathbb{C}$ (i.e. the space $C([0,1])$–the base field here is $\mathbb{C}$ again).</p></li> <li><p>Finally, a <strong>matrix</strong> is an element of a specific kind of algebra over a field. An algebra has two ingredients: $(V,\times)$, where $V$ is a vector space and $\times$ is a binary operation called a <em>bilinear product</em>. This product is both left- and right-distributive over the addition in $V$, and plays nice with scalar multiplication. In detail, if $a$ and $b$ are scalars and $\mathbf{u}$, $\mathbf{v}$, and $\mathbf{w}$ are vectors, then</p> <ul> <li>$\mathbf{u} \times (\mathbf{v} + \mathbf{w}) = (\mathbf{u}\times \mathbf{v}) + (\mathbf{u}\times \mathbf{w})$, </li> <li>$(\mathbf{u} + \mathbf{v}) \times \mathbf{w} = (\mathbf{u}\times \mathbf{w}) + (\mathbf{v}\times \mathbf{w})$, and</li> <li>$(a\mathbf{u})\times (b\mathbf{v}) = (ab)(\mathbf{u}\times \mathbf{v}).$</li> </ul> <p>Note that the underlying vector space (with its underlying field) is still running around, so there is a <em>ton</em> of structure here to play around with. </p> <p>Examples of algebras include $\mathbb{R}^3$ with the usual cross product, and matrix algebras such as the space of $n\times n$ matrices over $\mathbb{R}$ with the usual matrix multiplication. 
The structure of an algebra is actually more general than this, and there are much more interesting examples (such as $C^{\ast}$-algebras), but these are not really germane to this question.</p></li> </ol> <hr> <h2>The Question Itself</h2> <p>Now to attempt to answer the question:</p> <p>If $k$ is any field, then we can regard $k$ as a (one-dimensional) vector space over itself by taking the scalar multiplication to coincide with the multiplication in the field. That is, if we let $a$ denote a scalar and $\langle a \rangle$ denote a vector, then $$ a \cdot \langle b \rangle := \langle a\cdot b \rangle. $$ In a mathematically meaningful sense, this is a way of identifying the field $(k,+,\cdot)$ with the vector space $(V,k,+,\cdot)$ (though something is lost in the translation–it doesn't make sense to multiply two vectors, though it does make sense to multiply a vector by a scalar). This is a bit outside my area of expertise, but I think that the right language (from category theory) is that there is a <em>faithful functor</em> from the category of fields to the category of vector spaces. In more plain language, this says that fields and (certain) vector spaces are equivalent to each other. Specifically, we can sometimes regard a one-dimensional vector as a scalar–while they are not quite the same objects, they can, in the right context, be <em>treated</em> as though they are the same. In this sense, a scalar is "functionally equivalent" to a one-dimensional vector.</p> <p>By a similar argument, we can regard a field $k$ as an algebra over itself, with the underlying vector space being that obtained by viewing $k$ as a vector space, and the bilinear product being the product obtained by making the identification $$ [a]\times [b] = [a\cdot b]. $$ In this case, the algebra obtained is the algebra of $1\times 1$ matrices over $k$. Hence there is a faithful functor identifying any field with an algebra of $1\times 1$ matrices. 
Again, this gives us a way of identifying scalars with matrices, so (again) we may meaningfully assert that scalars can be identified with matrices. In contrast to the previous example, we really don't lose anything in the translation. Even more surprising, the bilinear product ends up being a commutative operation, which is a neat property for an algebra to have.</p> <p>It is worth observing that, in this setting, $[a]$ is not "the scalar $a$ in some kind of container." The notation $[a]$ denotes a $1\times 1$ matrix with entry $a$. The brackets denote a great deal of structure–more than is implied by the simple statement "a scalar in a container."</p> <blockquote> <p><strong>Long Answer Short:</strong> A $1\times 1$ matrix is not a scalar–it is an element of a matrix algebra. However, there is sometimes a meaningful way of treating a $1\times 1$ matrix as though it were a scalar, hence in many contexts it is useful to treat such matrices as being "functionally equivalent" to scalars. It might be a little sloppy to do so, but a little bit of sloppiness is forgivable if it introduces no confusion or ambiguity, and if it aids brevity or clarity of exposition.</p> </blockquote>
matrices
<p>Let $A=\begin{bmatrix}a &amp; b\\ c &amp; d\end{bmatrix}$.</p> <p>How could we show that $ad-bc$ is the area of a parallelogram with vertices $(0, 0),\ (a, b),\ (c, d),\ (a+c, b+d)$?</p> <p>Are the areas of the following parallelograms the same? </p> <p>$(1)$ parallelogram with vertices $(0, 0),\ (a, b),\ (c, d),\ (a+c, b+d)$.</p> <p>$(2)$ parallelogram with vertices $(0, 0),\ (a, c),\ (b, d),\ (a+b, c+d)$.</p> <p>$(3)$ parallelogram with vertices $(0, 0),\ (a, b),\ (c, d),\ (a+d, b+c)$.</p> <p>$(4)$ parallelogram with vertices $(0, 0),\ (a, c),\ (b, d),\ (a+d, b+c)$.</p> <p>Thank you very much.</p>
<p>Spend a little time with this figure due to <a href="http://en.wikipedia.org/wiki/Solomon_W._Golomb" rel="noreferrer">Solomon W. Golomb</a> and enlightenment is not far off:</p> <p><img src="https://i.sstatic.net/gCaz3.png" alt="enter image description here" /></p> <p>(Appeared in <em>Mathematics Magazine</em>, March 1985.)</p>
<p><img src="https://i.sstatic.net/PFTa4.png" alt="enter image description here"></p> <p>I know I'm extremely late with my answer, but there's a pretty straightforward geometrical approach to explaining it. I'm surprised no one has mentioned it. It does have a shortcoming, though: it does not explain why the area flips sign, because there's no such thing as negative area in geometry, just like you can't have a negative amount of apples (unless you are an economics major).</p> <p>It's basically:</p> <pre><code> Parallelogram = Rectangle - Extra Stuff. </code></pre> <p>If you simplify $(c+a)(b+d)-2bc-cd-ab$ you will get $ad-bc$.</p> <p>It's also interesting to note that if you swap the vectors' places then you get the negative (the opposite of what $ad-bc$ would produce) area, which is basically:</p> <pre><code> -Parallelogram = Rectangle - (2*Rectangle - Extra Stuff) </code></pre> <p>Or more concretely:</p> <p>$(c+a)(b+d) - [2*(c+a)(b+d) - (2bc+cd+ab)]$</p> <p>Also it's $bc-ad$, when simplified.</p> <p>The sad thing is that there's no good geometrical reason why the sign flips; you will have to turn to linear algebra to understand that. </p> <p>As others have noted, the determinant is the scale factor of a linear transformation, so a negative scale factor indicates a reflection.</p>
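With the row vectors $(a,b)$ and $(c,d)$ as the parallelogram's edges, the rectangle-minus-extra-stuff bookkeeping can be spot-checked on sample numbers. The specific values, and which triangles and corner rectangles appear, are assumptions about the figure:

```python
# edge vectors u = (a, b) and v = (c, d), both in the first quadrant,
# chosen with d/c > b/a so that u lies clockwise of v (an assumption
# matching one standard version of the picture)
a, b = 5, 1
c, d = 2, 4

bounding_rectangle = (a + c) * (b + d)
# extra stuff: two right triangles with legs (a, b), two with legs (c, d),
# and two b-by-c rectangles in the corners
extra = a * b + c * d + 2 * b * c

assert bounding_rectangle - extra == a * d - b * c  # the determinant, 18 here
```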
probability
<p>What are some good books on probability and measure theory? I already know basic probability, but I'm interested in sigma-algebras, filtrations, stopping times, etc., with possibly some examples of "real life" situations where they would be used.</p> <p>Thanks</p>
<p>I'd recommend Klenke's <a href="http://books.google.ch/books?id=tcm3y5UJxDsC&amp;printsec=frontcover&amp;hl=de#v=onepage&amp;q&amp;f=false"><em>Probability Theory</em></a>.</p> <p>It gives a good overview of the basic ideas in probability theory. In the beginning it builds up the basics of measure theory and set functions.</p> <p>There are also some examples of applications of probability theory.</p>
<p>I think Chung's <a href="http://rads.stackoverflow.com/amzn/click/0121741516" rel="noreferrer">A Course in Probability Theory</a> is a good one that is rigorous. Also Sid Resnick's <a href="http://rads.stackoverflow.com/amzn/click/081764055X" rel="noreferrer">A Probability Path</a> is advanced but easy to read.</p>
linear-algebra
<p>In answering <a href="https://math.stackexchange.com/questions/71235/do-these-matrix-rings-have-non-zero-elements-that-are-neither-units-nor-zero-divi">Do these matrix rings have non-zero elements that are neither units nor zero divisors?</a> I was surprised how hard it was to find anything on the Web about the generalization of the following fact to commutative rings:</p> <blockquote> <p>A square matrix over a field has trivial kernel if and only if its determinant is non-zero.</p> </blockquote> <p>As Bill demonstrated in the above question, a related fact about fields generalizes directly to commutative rings:</p> <blockquote> <p>A square matrix over a commutative ring is invertible if and only if its determinant is invertible.</p> </blockquote> <p>However, the kernel being trivial and the matrix being invertible are not equivalent for general rings, so the question arises what the proper generalization of the first fact is. Since it took me quite a lot of searching to find the answer to this rather basic question, and it's <a href="http://blog.stackoverflow.com/2011/07/its-ok-to-ask-and-answer-your-own-questions/">explicitly encouraged</a> to write a question and answer it to document something that might be useful to others, I thought I'd write this up here in an accessible form.</p> <p>So my questions are: What is the relationship between the determinant of a square matrix over a commutative ring and the triviality of its kernel? Can the simple relationship that holds for fields be generalized? And (generalizing with a view to the answer) what is a necessary and sufficient condition for a (not necessarily square) matrix over a commutative ring to have trivial kernel?</p>
<p>I found the answer in <a href="http://web.archive.org/web/20150326123218/http://math.ucdenver.edu/%7Espayne/classnotes/09LinAlg.pdf" rel="nofollow noreferrer">A Second Semester of Linear Algebra</a> by S. E. Payne (in Section <span class="math-container">$6.4.14$</span>, “Determinants, Ranks and Linear Equations”). I'd tried using a similar Laplace expansion myself but was missing the idea of using the largest dimension at which the minors are not all annihilated by the same non-zero element. I'll try to summarize the argument in somewhat less formal terms, omitting the tangential material included in the book.</p> <p>Let <span class="math-container">$A$</span> be an <span class="math-container">$m\times n$</span> matrix over a commutative ring <span class="math-container">$R$</span>. We want to find a condition for the system of equations <span class="math-container">$Ax=0$</span> with <span class="math-container">$x\in R^n$</span> to have a non-trivial solution. If <span class="math-container">$R$</span> is a field, various definitions of the rank of <span class="math-container">$A$</span> <a href="http://en.wikipedia.org/wiki/Rank_%28linear_algebra%29#Alternative_definitions" rel="nofollow noreferrer">coincide</a>, including the column rank (the dimension of the column space), the row rank (the dimension of the row space) and the determinantal rank (the largest dimension of a non-vanishing minor). This is not the case for a general commutative ring. 
It turns out that for our present purposes a useful <em>generalization of rank</em> is the largest integer <span class="math-container">$k$</span> such that there is no non-zero element of <span class="math-container">$R$</span> that annihilates all minors of dimension <span class="math-container">$k$</span>, with <span class="math-container">$k=0$</span> if there is no such integer.</p> <blockquote> <p>We want to show that <span class="math-container">$Ax=0$</span> has a non-trivial solution if and only if <span class="math-container">$k\lt n$</span>.</p> </blockquote> <p>If <span class="math-container">$k=0$</span>, there is a non-zero element <span class="math-container">$r\in R$</span> which annihilates all matrix elements (the minors of dimension <span class="math-container">$1$</span>), so there is a non-trivial solution</p> <p><span class="math-container">$$A\pmatrix{r\\\vdots\\r}=0\;.$$</span></p> <p>Now assume <span class="math-container">$0\lt k\lt n$</span>. If <span class="math-container">$m\lt n$</span>, we can add rows of zeros to <span class="math-container">$A$</span> without changing <span class="math-container">$k$</span> or the solution set, so we can assume <span class="math-container">$k\lt n\le m$</span>. There is some non-zero element <span class="math-container">$r\in R$</span> that annihilates all minors of dimension <span class="math-container">$k+1$</span>, and there is a minor of dimension <span class="math-container">$k$</span> that isn't annihilated by <span class="math-container">$r$</span>. Without loss of generality, assume that this is the minor of the first <span class="math-container">$k$</span> rows and columns. 
Now consider the matrix formed of the first <span class="math-container">$k+1$</span> rows and columns of <span class="math-container">$A$</span>, and form a solution <span class="math-container">$x$</span> from the <span class="math-container">$(k+1)$</span>-th column of its <a href="http://en.wikipedia.org/wiki/Adjugate_matrix" rel="nofollow noreferrer">adjugate</a> by multiplying it by <span class="math-container">$r$</span> and padding it with zeros. By construction, the first <span class="math-container">$k$</span> entries of <span class="math-container">$Ax$</span> are determinants of a matrix with two equal rows, and thus vanish; the remaining entries are each <span class="math-container">$r$</span> times a minor of dimension <span class="math-container">$k+1$</span>, and thus also vanish. But the <span class="math-container">$(k+1)$</span>-th entry of this solution is non-zero, being <span class="math-container">$r$</span> times the minor of the first <span class="math-container">$k$</span> rows and columns, which isn't annihilated by <span class="math-container">$r$</span>. Thus we have constructed a non-trivial solution.</p> <p>In summary, if <span class="math-container">$k\lt n$</span>, there is a non-trivial solution to <span class="math-container">$Ax=0$</span>.</p> <p>Now assume conversely that there is such a solution <span class="math-container">$x$</span>. If <span class="math-container">$n\gt m$</span>, there are no minors of dimension <span class="math-container">$n$</span>, so <span class="math-container">$k\lt n$</span>. Thus we can assume <span class="math-container">$n\le m$</span>. The minors of dimension <span class="math-container">$n$</span> are the determinants of matrices <span class="math-container">$B$</span> formed by choosing any <span class="math-container">$n$</span> rows of <span class="math-container">$A$</span>. 
Since each row of <span class="math-container">$A$</span> times <span class="math-container">$x$</span> is <span class="math-container">$0$</span>, we have <span class="math-container">$Bx=0$</span>, and then multiplying by the adjugate of <span class="math-container">$B$</span> yields <span class="math-container">$\det B x=0$</span>. Since there is at least one non-zero entry in the non-trivial solution <span class="math-container">$x$</span>, there is at least one non-zero element of <span class="math-container">$R$</span> that annihilates all minors of size <span class="math-container">$n$</span>, and thus <span class="math-container">$k\lt n$</span>.</p> <p>Specializing to the case <span class="math-container">$m=n$</span> of square matrices, we can conclude:</p> <blockquote> <p>A system of linear equations <span class="math-container">$Ax=0$</span> with a square <span class="math-container">$n\times n$</span> matrix <span class="math-container">$A$</span> over a commutative ring <span class="math-container">$R$</span> has a non-trivial solution if and only if its determinant (its only minor of dimension <span class="math-container">$n$</span>) is annihilated by some non-zero element of <span class="math-container">$R$</span>, that is, if its determinant is a zero divisor or zero.</p> </blockquote>
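The square-matrix statement in the box is small enough to verify exhaustively in a concrete ring with zero divisors. The sketch below checks every $2\times2$ matrix over $\mathbb Z/6\mathbb Z$: the system $Ax=0$ has a non-trivial solution exactly when $\det A$ is zero or a zero divisor.

```python
from itertools import product

n = 6  # work in Z/6Z, which has zero divisors: 2 * 3 = 0
R = range(n)

for a, b, c, d in product(R, repeat=4):
    det = (a * d - b * c) % n
    # det is zero or a zero divisor iff some nonzero r annihilates it
    det_annihilated = any(r * det % n == 0 for r in range(1, n))
    # brute-force search for a non-trivial solution of Ax = 0
    has_kernel = any((a * x + b * y) % n == 0 and (c * x + d * y) % n == 0
                     for x, y in product(R, repeat=2) if (x, y) != (0, 0))
    assert det_annihilated == has_kernel
```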
<p>See Section III.8.7, entitled <a href="http://books.google.com/books?id=STS9aZ6F204C&amp;lpg=PP1&amp;dq=intitle%3Aalgebra%20inauthor%3Abourbaki&amp;pg=PA534#v=onepage&amp;q&amp;f=false" rel="nofollow noreferrer">Application to Linear Equations</a>, of <strong>Algebra</strong>, by Nicolas Bourbaki.</p> <p><strong>EDIT 1.</strong> Let <span class="math-container">$R$</span> be a commutative ring, let <span class="math-container">$m$</span> and <span class="math-container">$n$</span> be positive integers, let <span class="math-container">$M$</span> be an <span class="math-container">$R$</span>-module, and let <span class="math-container">$A:R^n\to M$</span> be <span class="math-container">$R$</span>-linear. </p> <p>Identify the <span class="math-container">$n$</span> th exterior power <span class="math-container">$\Lambda^n(R^n)$</span> of <span class="math-container">$R^n$</span> to <span class="math-container">$R$</span> in the obvious way, so that <span class="math-container">$\Lambda^n(A)$</span> is a map from <span class="math-container">$R$</span> to <span class="math-container">$\Lambda^n(M)$</span>. </p> <p>Put <span class="math-container">$v_i:=Ae_i$</span>, where <span class="math-container">$e_i$</span> is the <span class="math-container">$i$</span> th vector of the canonical basis of <span class="math-container">$R^n$</span>. In particular we have <span class="math-container">$$ Ax=\sum_{i=1}^n\ x_i\ v_i,\quad\Lambda^n(A)\ r=r\ v_1\wedge\cdots\wedge v_n. $$</span> (where <span class="math-container">$x_i$</span> is the <span class="math-container">$i$</span>-th coordinate of <span class="math-container">$x$</span>, and <span class="math-container">$r$</span> denotes any element of <span class="math-container">$\Lambda^n\left(R^n\right) \cong R$</span>).</p> <blockquote> <p>If <span class="math-container">$\Lambda^n(A)$</span> is injective, so is <span class="math-container">$A$</span>. 
</p> </blockquote> <p>In other words: </p> <blockquote> <p>If the <span class="math-container">$v_i$</span> are linearly dependent, then <span class="math-container">$r\ v_1\wedge\cdots\wedge v_n=0$</span> for some nonzero <span class="math-container">$r$</span> in <span class="math-container">$R$</span>. </p> </blockquote> <p>Indeed, for <span class="math-container">$x$</span> in <span class="math-container">$\ker A$</span> we have <span class="math-container">$$ \Lambda^n(A)\ x_1=x_1\ v_1\wedge v_2\wedge\cdots\wedge v_n= -\sum_{i=2}^n\ x_i\ v_i\wedge v_2\wedge\cdots\wedge v_n=0, $$</span> and, similarly, <span class="math-container">$\Lambda^n(A)\ x_i=0$</span> for all <span class="math-container">$i$</span>. </p> <p>[<em>Edit</em>: Old version (before Georges's comment): Assume now that <span class="math-container">$M$</span> embeds into <span class="math-container">$R^m$</span>.] </p> <p>Assume now that there is an <span class="math-container">$R$</span>-linear injection <span class="math-container">$B:M\to R^m$</span> such that <span class="math-container">$$ \Lambda^n(B):\Lambda^n(M)\to\Lambda^n(R^m) $$</span> is injective. This is always the case (for a suitable <span class="math-container">$m$</span>) if <span class="math-container">$M$</span> is projective and finitely generated. </p> <blockquote> <p>If <span class="math-container">$A$</span> is injective, so is <span class="math-container">$\Lambda^n(A)$</span>. </p> </blockquote> <p>In other words: </p> <blockquote> <p>If <span class="math-container">$r\ v_1\wedge\cdots\wedge v_n=0$</span> for some nonzero <span class="math-container">$r$</span> in <span class="math-container">$R$</span>, then the <span class="math-container">$v_i$</span> are linearly dependent. </p> </blockquote> <p>The proof is given in joriki's nice answer. 
</p> <p>This is also proved as <a href="http://books.google.com/books?id=STS9aZ6F204C&amp;lpg=PP1&amp;dq=intitle%3Aalgebra%20inauthor%3Abourbaki&amp;pg=PA519#v=onepage&amp;q&amp;f=false" rel="nofollow noreferrer">Proposition 12</a> in Bourbaki's <strong>Algebra</strong> III.7.9 p. 519. Unfortunately, I don't understand Bourbaki's argument. I'd be most grateful to whoever would be kind and patient enough to explain it to me. </p> <p><strong>EDIT 2.</strong> According to the indications given by Tsit-Yuen Lam on <a href="http://books.google.com/books?id=LKMCTP4EecYC&amp;lpg=PA150&amp;ots=8BnyVUOYkL&amp;dq=mccoy%20rank%20ring%20matrix&amp;pg=PA150#v=onepage&amp;q=mccoy%20rank%20ring%20matrix&amp;f=false" rel="nofollow noreferrer">page 150</a> of his book <strong>Exercises in modules and rings</strong>, the theorem is due to N. H. McCoy, and appeared first, as Theorem 1 page 288, in </p> <ul> <li>N. H. McCoy, Remarks on Divisors of Zero, The American Mathematical Monthly Vol. 49, No. 5 (May, 1942), pp. 286-295, <a href="http://www.jstor.org/stable/2303094" rel="nofollow noreferrer">JSTOR</a>. </li> </ul> <p>Lam also says that </p> <ul> <li>N. H. McCoy, <a href="http://books.google.com/books?id=FHAGAQAAIAAJ" rel="nofollow noreferrer">Rings and ideals</a>, The Carus Mathematical Monographs, no. 8, The Mathematical Association of America, 1948, </li> </ul> <p>is an "excellent exposition" of the subject. See Theorem 51 page 159. </p> <p>McCoy's Theorem is also stated and proved in the following texts: </p> <ul> <li><p>Ex. 5.23.A(3) on <a href="http://books.google.com/books?id=LKMCTP4EecYC&amp;lpg=PA150&amp;ots=8BnyVUOYkL&amp;dq=mccoy%20rank%20ring%20matrix&amp;pg=PA149#v=onepage&amp;q=mccoy%20rank%20ring%20matrix&amp;f=false" rel="nofollow noreferrer">page 149</a> of Lam's <strong>Exercises in modules and rings</strong>. 
</p></li> <li><p>Theorem 2.2 page 3 in Anton Gerashenko's notes from Lam's Course: Math 274, Commutative Rings, Fall 2006: <a href="https://stacky.net/files/written/CommRings/CommRing.pdf" rel="nofollow noreferrer">PDF file</a>.</p></li> <li><p>Theorem 1.6 in Chapter 13, entitled "Various topics", of <a href="http://math.uchicago.edu/~amathew/downloads.html" rel="nofollow noreferrer">The CRing Project</a>. --- <a href="http://math.uchicago.edu/~amathew/chvarious.pdf" rel="nofollow noreferrer">PDF file for Chapter 13</a>. --- <a href="http://math.uchicago.edu/~amathew/CRing.pdf" rel="nofollow noreferrer">PDF file for the whole book</a>. </p></li> <li><p>Blocki, Zbigniew, <a href="http://www2.im.uj.edu.pl/actamath/PDF/30-215-218.pdf" rel="nofollow noreferrer">An elementary proof of the McCoy theorem</a>, J. Univ. Iagel. Acta Math.; N 30; 1993; 215-218. </p></li> <li><p>Theorem 6.4.16. page 101, <strong>A Second Semester of Linear Algebra</strong>, Math 5718, by Stan Payne. <a href="https://web.archive.org/web/20161207060453/http://math.ucdenver.edu/~spayne/classnotes/09LinAlg.pdf" rel="nofollow noreferrer">PDF file</a>.</p></li> </ul>
probability
<p>In <em>Finite Mathematics</em> by Lial et al. (10th ed.), problem 8.3.34 says:</p> <blockquote> <p>On National Public Radio, the <em>Weekend Edition</em> program posed the following probability problem: Given a certain number of balls, of which some are blue, pick 5 at random. The probability that all 5 are blue is 1/2. Determine the original number of balls and decide how many were blue.</p> </blockquote> <p>If there are $n$ balls, of which $m$ are blue, then the probability that 5 randomly chosen balls are all blue is $\binom{m}{5} / \binom{n}{5}$. We want this to be $1/2$, so $\binom{n}{5} = 2\binom{m}{5}$; equivalently, $n(n-1)(n-2)(n-3)(n-4) = 2 m(m-1)(m-2)(m-3)(m-4)$. I'll denote these quantities as $[n]_5$ and $2 [m]_5$ (this is a notation for the so-called "falling factorial.")</p> <p>A little fooling around will show that $[m+1]_5 = \frac{m+1}{m-4}[m]_5$. Solving $\frac{m+1}{m-4} = 2$ shows that the only solution with $n = m + 1$ has $m = 9$, $n = 10$.</p> <p><strong>Is this the only solution?</strong></p> <p>You can check that $n = m + 2$ doesn't yield any integer solutions, by using the quadratic formula to solve $(m + 2)(m +1) = 2(m - 3)(m - 4)$. I have ruled out $n = m + 3$ or $n = m + 4$ with similar checks. For $n \geq m + 5$, solutions would satisfy a quintic equation, which of course has no general formula to find solutions.</p> <p>Note that, as $n$ gets bigger, the ratio of successive values of $\binom{n}{5}$ gets smaller; $\binom{n+1}{5} = \frac{n+1}{n-4}\binom{n}{5}$ and $\frac{n+1}{n-4}$ is less than 2—in fact, it approaches 1. So it seems possible that, for some $k$, $\binom{n+k}{5}$ could be $2 \binom{n}{5}$.</p> <p>This is now <a href="https://mathoverflow.net/questions/128036/solutions-to-binomn5-2-binomm5">a question at MathOverflow</a>.</p>
<p>Many Diophantine equations are solved using modern algebraic geometry. For an informal survey how this works, see</p> <p>M. Stoll, <em>How to solve a Diophantine equation</em>, <a href="http://arxiv.org/pdf/1002.4344.pdf">arXiv</a>.</p> <p>The most prominent example is Fermat's equation. But there are also interesting binomial equations. It has been shown very recently that $\binom{x}{5}=\binom{y}{2}$ has exactly $20$ integer solutions:</p> <p>Y. Bugeaud, M. Mignotte, S. Siksek, M. Stoll, Sz. Tengely, <em>Integral Points on Hyperelliptic Curves</em>, <a href="http://arxiv.org/pdf/0801.4459v4.pdf">arXiv</a></p> <p>I don't know if this can be proven by elementary means. And I don't know the situation for $\binom{x}{5}=2 \binom{y}{5}$. I just want to warn you that it <em>might be</em> a waste of time to look for elementary solutions, and that instead more sophisticated methods are necessary. On the other hand, this equation arises as a problem from a book, so I am not sure ...</p>
<p>I am putting some results that may or may not be part of an answer here, as a community wiki post, rather that cluttering the question with them. Perhaps they will help lead someone to a complete answer. If you have similar potentially useful information or partial answers, please feel free to add it here.</p> <hr> <p>Let's see what can be gleaned by looking at $\binom{n}{r} = 2 \binom{n-k}{r}$ for other values of $r$. There is an interesting duality: $\binom{n}{r} = 2 \binom{n-k}{r} \iff \binom{n}{k} = 2 \binom{n-r}{k}$. So we can find solutions to the original problem by finding solutions to $\binom{n}{k} = 2\binom{n-5}{k}$. For any $r$, there is a "standard" solution, $\binom{2r}{r} = 2\binom{2r-1}{r}$, and several "trivial" solutions, $\binom{i}{r} = 2\binom{j}{r}$ whenever $0 \leq i,j \leq r - 1$.</p> <p>For $r = 1$, there are infinitely many solutions; for any $k$, we have $\binom{2k}{1} = 2 \binom{k}{1}$. Under the duality, these correspond to the "standard" solutions $\binom{2k}{k} = 2 \binom{2k - 1}{k}$.</p> <p>For $r = 2$, there are also infinitely many solutions. It's fun to see why. A solution satisfies the equation $n(n-1) = 2 (n-k)(n-k-1)$ or $$\begin{equation}\tag{1}\label{eq:qud}n^2 - (4k+1)n + (2k^2 + 2k) = 0.\end{equation}$$ Thus $n = \frac{4k + 1 \pm \sqrt{8k^2 + 1}}{2}$. Since $8k^2 + 1$ is odd, this either has two integer solutions if $8k^2 + 1$ is a perfect square, or none at all. Thus, whenever there is one solution $n$ for a given difference $k$, <em>there must be a second.</em> The second key ingredient is that since we are multiplying evenly many terms, we can change the signs of all the terms without changing the result.</p> <p>So, start with the standard solution: $$ 4 \cdot 3 = 2 (3 \cdot 2). $$ Then it's also true that $$ 4 \cdot 3 = 2 (-2 \cdot -3), $$ in other words, $[4]_2 = 2[-2]_2 = 2[4-6]_2$, a solution with $k = 6$. So there must be another. 
When $k = 6$, equation $\eqref{eq:qud}$ is $n^2 - 25n + 84 = 0$, and we know we can factor out $(n - 4)$, which leaves $(n - 21)$. And indeed, $\binom{21}{2} = 2 \binom{15}{2}$. The duality gives $\binom{21}{6} = 2 \binom{19}{6}$. Repeating the process, we get $[21]_2 = 2 [-14]_2$ with a difference $k = 35$; factoring $(n - 21)$ from equation $\eqref{eq:qud}$ leaves $(n - 120)$, and indeed $\binom{120}{2} = 2 \binom{85}{2}$. Dually, $\binom{120}{35} = 2 \binom{118}{35}$. We can keep going up this "staircase" forever; if $k$ gives a solution, then so does $3k + \sqrt{8k^2 + 1}$. This is OEIS <a href="http://oeis.org/A001109">A001109</a>.</p> <p>Of course, this doesn't help for our case because $5$ doesn't appear. But it does show that solutions exist, besides the standard and trivial ones, for $r &gt; 2$.</p> <p>Now consider $r = 3$. We need to solve $[n]_3 = 2[n-k]_3$ or $$\begin{equation}\tag{2}\label{eq:cub}n^3 - (6k + 3)n^2 + (6k^2 + 12k + 2)n - (2k^3 + 6k^2 + 4k) = 0.\end{equation}$$ With $k = 0$ this factors as expected as $n(n-1)(n-2)$. With $k = 1$ it again gives us the known trivial and standard solutions, $(n-1)(n-2)(n-6)$. With $k = 2$, $\eqref{eq:cub}$ is $n^3 - 15n^2 + 50n - 48 = 0$, from which we can factor the known trivial solution: $(n-2)(n^2 - 13n + 24) = 0$. But the other factor has no integer solutions. For general $k$, we can apply the <a href="http://en.wikipedia.org/wiki/Cubic_formula">Cubic formula</a>. We find that the discriminant $\Delta$ is $4(-27k^6 + 108k^4 + 18k^2 + 1)$, which is positive for $k = 0,1,2$ and negative for $k \geq 3$, so that it has three real roots for $k \leq 2$ but only one real root for $k \geq 3$. This root turns out to be $$ 2k + 1 - \sqrt[3]{-3k^3 + \sqrt{k^6 - 4k^4 - \frac{2}{3}k^2 - \frac{1}{27}}} - \sqrt[3]{-3k^3 - \sqrt{k^6 - 4k^4 - \frac{2}{3}k^2 - \frac{1}{27}}}. $$ I don't think that this can ever be an integer, but I don't know how to prove this. 
Unfortunately it's not sufficient to show that the contents of the cube roots cannot be perfect cubes, since even if neither is an integer, their sum still could be. However, no solutions exist for $n$ up to 10,000 (checked in the <a href="http://oeis.org/A000292/b000292.txt">table</a> provided at <a href="http://oeis.org/A000292">OEIS A000292</a>.)</p> <p>We can at least check if any answers to our original question have the difference $n - m = 3$ by looking at the above root, with $k = 5$. It's about 25.268, not an integer, so any extra solutions have the difference at least 4.</p> <p>For $r = 4$, a similar "negation" trick should work as for $r = 2$. Unfortunately, it doesn't seem to yield extra positive solutions. The equation $[n]_4 = 2[n-k]_4$ expands to $$\begin{equation}\tag{3}\label{eq:qrt}n^4 - (8k + 6)n^3 + (12k^2 + 36k + 11)n^2 - (8k^3 + 36k^2 + 44k + 6)n + (2k^4 + 12k^3 + 22k^2 + 12k) = 0.\end{equation}$$ Starting from the standard solution, $[8]_4 = 2[7]_4$, we have another solution, $[8]_4 = 2[-4]_4$, with difference $k = 12$. And indeed, we can factor $(n - 8)$ out of equation $\eqref{eq:qrt}$ with $k = 12$, $$ n^4 - 102n^3 + 2171n^2 - 19542n + 65520 = (n-8)(n^3 - 94n^2 + 1419n - 8190). $$ But the other factor is a cubic with one real root, which isn't an integer. There are no solutions (besides the trivial and standard) up to $n=1002$ (checked in the <a href="http://oeis.org/A000332/b000332.txt">table</a> provided at <a href="http://oeis.org/A000332">OEIS A000332</a>.)</p> <p>When $k=5$, $\eqref{eq:qrt}$ becomes $n^4 - 46n^3 + 491n^2 - 2126n + 3360$, which (<a href="http://www.wolframalpha.com/input/?i=n%5E4+-+46n%5E3+%2B+491n%5E2+-+2126n+%2B+3360+%3D+0">according to Wolfram Alpha</a>) has no integer solutions. 
So for the original problem, $n - m &gt; 4$.</p> <hr> <p>Related OEIS sequences: </p> <ul> <li><a href="http://oeis.org/A000389">A000389: $\binom{n}{5}$</a> and <a href="http://oeis.org/A000389/b000389.txt">table up to $n=1000$</a></li> <li><a href="http://oeis.org/A001109">A001109</a> are values of $k$ such that $\binom{n}{2} = 2\binom{n-k}{2}$, or equivalently $\binom{n}{k} = 2\binom{n-2}{k}$, has a solution $n = \bigl(4k + 1 + \sqrt{8k^2 + 1}\bigr)/2$.</li> <li><a href="http://oeis.org/A082291">A082291</a> are the values of $n-2$ in the above, offset by one. That is, A001109 begins 0,1,6 and A082291 begins 2, 19, 118. $\binom{2+2}{1} = 2\binom{2}{1}$, and $\binom{19+2}{6} = 2\binom{19}{6}$.</li> </ul>
probability
<p>I have two red points, $r_1$ and $r_2$, and two blue points, $b_1$ and $b_2$. They are all placed randomly and uniformly in $[0,1]^2$. </p> <p>Each dot points to the closest dot of the other colour; "closest" is defined with respect to the Euclidean distance. We use $x \to y$ to indicate that dot $x$ points to dot $y$.</p> <p>If $r_1 \to b_1$, what is the probability that $r_2 \to b_1$ too?</p> <p>NOTE: it must be larger than 1/2, because $r_1 \to b_1$ tells us in a way that $b_1$ is likely to have a central location, and thus it is likely to be closer to $r_2$ than $b_2$ is.</p>
<p>Starting off in the same manner as Yanior Weg, assume <span class="math-container">$b_1$</span> and <span class="math-container">$b_2$</span> are fixed. Then <span class="math-container">$P(r_2 \to b_1|r_1 \to b_1) = \frac{P(r_2 \to b_1 \bigcap r_1 \to b_1)}{P(r_1 \to b_1)} = \frac{P(r_2 \to b_1 \bigcap r_1 \to b_1)}{\frac{1}{2}} = 2 \cdot P(r_2 \to b_1 \bigcap r_1 \to b_1) = 2(P(r_1 \to b_1))^2$</span></p> <p>In <a href="https://www.desmos.com/calculator/liamb6uvca" rel="nofollow noreferrer">this Desmos plot</a>, the depiction of this can be seen (I'll be referring to this later, so it may be useful to have it open). <span class="math-container">$P(r_1 \to b_1)$</span> is the area of the shaded region inside the square. By limiting <span class="math-container">$x_2, y_2$</span> such that <span class="math-container">$x_1 &lt; x_2 &lt; 1$</span>, <span class="math-container">$y_1 &lt; y_2 &lt; 1$</span>, later calculations can be simplified. The final answer is then <span class="math-container">$$\int_0^1\int_0^1\int_{x_1}^1\int_{y_1}^14\cdot2A^2 dy_2dx_2dy_1dx_1 = 8\int_0^1\int_0^1\int_{x_1}^1\int_{y_1}^1A^2 dy_2dx_2dy_1dx_1$$</span>, where <span class="math-container">$A$</span> is the area of the shaded region inside the square. 
<span class="math-container">$2A^2$</span> is multiplied by <span class="math-container">$4$</span> to account for <span class="math-container">$x_2&lt;x_1$</span> and <span class="math-container">$y_2&lt;y_1$</span>.</p> <p>The line separating the shaded region has equation <span class="math-container">$$y = f\left(x\right)=-\frac{x_{1}-x_{2}}{y_{1}-y_{2}}x+\frac{\left(x_{1}-x_{2}\right)\left(x_{1}+x_{2}\right)}{2\left(y_{1}-y_{2}\right)}+\frac{y_{1}+y_{2}}{2}$$</span></p> <p>This intersects <span class="math-container">$y = 1$</span> at <span class="math-container">$$x = I_1 = \frac{\left(y_{1}-y_{2}\right)\left(y_{1}+y_{2}-2\right)}{2\left(x_{1}-x_{2}\right)}+\frac{x_{1}+x_{2}}{2}$$</span></p> <p>and <span class="math-container">$y = 0$</span> at <span class="math-container">$$x = I_2 = \frac{\left(y_{1}-y_{2}\right)\left(y_{2}+y_{1}\right)}{2\left(x_{1}-x_{2}\right)}+\frac{x_{1}+x_{2}}{2}$$</span></p> <p>There are now four cases: <span class="math-container">$(1)\ I_1 &lt; 0, I_2 &lt; 1, \ (2)\ I_1 &lt; 0, I_2 &gt; 1, \ (3)\ I_1 &gt; 0, I_2 &lt; 1$</span>, and <span class="math-container">$(4)\ I_1 &gt; 0, I_2 &gt; 1$</span>.</p> <p>Letting <span class="math-container">$$F(x) = \int_0^x f(t)dt = -\frac{x_{1}-x_{2}}{2\left(y_{1}-y_{2}\right)}x^{2}+\left(\frac{\left(x_{1}-x_{2}\right)\left(x_{1}+x_{2}\right)}{2\left(y_{1}-y_{2}\right)}+\frac{y_{1}+y_{2}}{2}\right)x$$</span> here <span class="math-container">$A_i$</span> represents the area of case <span class="math-container">$i$</span>:</p> <p><span class="math-container">$A_1 = F(I_2)$</span></p> <p><span class="math-container">$A_2 = F(1)$</span></p> <p><span class="math-container">$A_3 = I_1 - F(I_1) + F(I_2)$</span></p> <p><span class="math-container">$A_4 = I_1 - F(I_1) + F(1)$</span></p> <p>From here, <span class="math-container">$$J = 
\underbrace{\int_{(1)}A_1^2dy_2dx_2dy_1dx_1}_{J_1}+\underbrace{\int_{(2)}A_2^2dy_2dx_2dy_1dx_1}_{J_2}+\underbrace{\int_{(3)}A_3^2dy_2dx_2dy_1dx_1}_{J_3}+\underbrace{\int_{(4)}A_4^2dy_2dx_2dy_1dx_1}_{J_4}$$</span>, where <span class="math-container">$(1)$</span> is the region in <span class="math-container">$x_1, y_1, x_2, y_2$</span> such that case 1 happens (restricted to <span class="math-container">$0 &lt; x1 &lt; x2 &lt; 1$</span> and <span class="math-container">$0 &lt; y1 &lt; y2 &lt; 1$</span>), etc.</p> <p>To simplify, making the substitution <span class="math-container">$x_s = x_2 + x_1, x_d = x_2 - x_1$</span> and <span class="math-container">$y_s = y_2 + y_1, y_d = y_2 - y_1$</span> helps a lot. The integral would then need to be multiplied by the Jacobian of <span class="math-container">$\frac{1}{4}$</span>.</p> <p>For case <span class="math-container">$1$</span>, the integral can be written out as <span class="math-container">$$J_1 = \frac{1}{4}\int_0^1 \int_{x_d}^{2-x_d} \left(\int_{1-\sqrt{1-x_{d}x_{s}}}^{x_{d}}\int_{0}^{-\frac{x_{d}x_{s}}{y_{d}}+2}A_1^2dy_{s}dy_{d}+\int_{x_{d}}^{\sqrt{2x_{d}-x_{d}x_{s}}}\int_{0}^{\frac{2x_{d}-x_{d}x_{s}}{y_{d}}}A_1^{2}dy_{s}dy_{d}-\int_{1-\sqrt{1-x_{d}x_{s}}}^{\sqrt{2x_{d}-x_{d}x_{s}}}\int_{0}^{y_{d}}A_1^{2}dy_{s}dy_{d}\right)dx_s dx_d = \frac{1}{32}-\frac{1}{24}\ln(2)$$</span></p> <p>For case <span class="math-container">$2$</span>: <span class="math-container">$$J_2 = \frac{1}{4}\int_0^1 \int_{x_d}^{2-x_d} \left(\int_{x_{d}}^{\sqrt{x_{s}x_{d}}}\int_{0}^{2-\frac{x_{s}x_{d}}{y_{d}}}A_2^{2}dy_{s}dy_{d}+\int_{\sqrt{x_{s}x_{d}}}^{1}\int_{0}^{2-y_{d}}A_2^{2}dy_{s}dy_{d}-\int_{x_{d}}^{\sqrt{2x_{d}-x_{d}x_{s}}}\int_{0}^{\frac{2x_{d}-x_{d}x_{s}}{y_{d}}}A_2^{2}dy_{s}dy_{d}-\int_{\sqrt{2x_{d}-x_{d}x_{s}}}^{1}\int_{0}^{y_{d}}A_2^{2}dy_{s}dy_{d}\right) dx_s dx_d = \frac{2501}{14400}-\frac{3}{80}\pi-\frac{2}{45}\ln\left(2\right)$$</span></p> <p>For case <span class="math-container">$3$</span>: <span class="math-container">$$J_3 
= \frac{1}{4}\int_0^1 \int_{y_d}^{2-y_d} \left(\int_{y_{d}}^{\sqrt{y_{d}y_{s}}}\int_{0}^{2-\frac{y_{d}y_{s}}{x_{d}}}A_{3}^{2}dx_{s}dx_{d}+\int_{\sqrt{y_{d}y_{s}}}^{1}\int_{0}^{2-x_{d}}A_{3}^{2}dx_{s}dx_{d}-\int_{y_{d}}^{\sqrt{y_{d}\left(2-y_{s}\right)}}\int_{0}^{\frac{y_{d}\left(2-y_{s}\right)}{x_{d}}}A_{3}^{2}dx_{s}dx_{d}-\int_{\sqrt{y_{d}\left(2-y_{s}\right)}}^{1}\int_{0}^{x_{d}}A_{3}^{2}dx_{s}dx_{d}\right) dy_s dy_d = \frac{2501}{14400}-\frac{3}{80}\pi-\frac{2}{45}\ln\left(2\right)$$</span></p> <p>For case <span class="math-container">$4$</span>: <span class="math-container">$$J_4 = \frac{1}{4}\int_{0}^{1}\int_{1-\sqrt{1-x_{d}^{2}}}^{x_{d}}\int_{2+\frac{y_{d}^{2}-2y_{d}}{x_{d}}}^{2-x_{d}}\int_{\frac{2x_{d}-x_{d}x_{s}}{y_{d}}}^{2-y_{d}}A_{4}^{2}dy_{s}dx_{s}dy_{d}dx_{d}+\frac{1}{4}\int_{0}^{1}\int_{x_{d}}^{\sqrt{2x_{d}-x_{d}^{2}}}\int_{\frac{y_{d}^{2}}{x_{d}}}^{2-x_{d}}\int_{2-\frac{x_{d}x_{s}}{y_{d}}}^{2-y_{d}}A_{4}^{2}dy_{s}dx_{s}dy_{d}dx_{d} = -\frac{95}{288}+\frac{1}{12}\pi+\frac{1}{8}\ln(2)$$</span></p> <p>Adding these up yields that <span class="math-container">$J = \frac{39}{800}+\frac{\pi}{120}-\frac{\ln(2)}{180}$</span>. Multiplying by <span class="math-container">$8$</span> gives the final answer as <span class="math-container">$$\frac{39}{100}+\frac{\pi}{15}-\frac{2\ln(2)}{45} \approx 0.569$$</span>, which is in the simulated range Arthur mentioned in the comments.</p>
<p>From what you have written, I assume that all random dots in the question are distributed independently. </p> <p>Now, suppose <span class="math-container">$b_1 = (x_1, y_1)$</span> and <span class="math-container">$b_2 = (x_2, y_2)$</span> are fixed. Then for a random dot <span class="math-container">$r_1$</span>, uniformly distributed on <span class="math-container">$[0; 1]^2$</span>, the probability that it lies closer to <span class="math-container">$b_1$</span> than to <span class="math-container">$b_2$</span> is <span class="math-container">$\mu(\{(x, y) \in [0;1]^2| (x_1 + x_2 - 2x)(x_2 -x_1) + (y_1 + y_2 - 2y)(y_2 -y_1)&gt;0\})$</span>, where <span class="math-container">$\mu$</span> stands for Lebesgue measure. Now, as <span class="math-container">$r_1$</span> and <span class="math-container">$r_2$</span> are independent, the probability that both <span class="math-container">$r_1$</span> and <span class="math-container">$r_2$</span> lie closer to <span class="math-container">$b_1$</span> than to <span class="math-container">$b_2$</span> is <span class="math-container">$(\mu(\{(x, y) \in [0;1]^2| (x_1 + x_2 - 2x)(x_2 -x_1) + (y_1 + y_2 - 2y)(y_2 -y_1)&gt;0\}))^2$</span>.</p> <p>Now, as <span class="math-container">$b_1$</span> and <span class="math-container">$b_2$</span> are also independent and uniformly distributed, we can conclude that in our initial problem <span class="math-container">$$P(r_2 \to b_1|r_1 \to b_1) = \frac{\int_0^1 \int_0^1 \int_0^1 \int_0^1 (\mu(\{(x, y) \in [0;1]^2| (x_1 + x_2 - 2x)(x_2 -x_1) + (y_1 + y_2 - 2y)(y_2 -y_1)&gt;0\}))^2dx_1dy_1dx_2dy_2}{\int_0^1 \int_0^1 \int_0^1 \int_0^1 \mu(\{(x, y) \in [0;1]^2| (x_1 + x_2 - 2x)(x_2 -x_1) + (y_1 + y_2 - 2y)(y_2 -y_1)&gt;0\})dx_1dy_1dx_2dy_2}$$</span></p>
geometry
<p>I believe that many of you know about the moving sofa problem; if not you can find the description of the problem <a href="https://en.wikipedia.org/wiki/Moving_sofa_problem" rel="noreferrer">here</a>. </p> <p><a href="https://i.sstatic.net/ihxu9.gif" rel="noreferrer"><img src="https://i.sstatic.net/ihxu9.gif" alt="From wikipedia"></a></p> <p><br>In this question I am going to rotate the L shaped hall instead of moving a sofa around the corner. By rotating the hall $180^{\circ}$ what remains between the walls will give the shape of the sofa. Like this: <br><br><br> <a href="https://i.sstatic.net/23dbk.gif" rel="noreferrer"><img src="https://i.sstatic.net/23dbk.gif" alt="enter image description here"></a></p> <p>The points on the hall have the following properties:</p> <p>\begin{eqnarray} A &amp; = &amp; \left( r\cos { \alpha } ,t\sin { \alpha } \right) \\ { A }' &amp; = &amp; \left( r\cos { \alpha } +\sqrt { 2 } \cos { \left( \frac { \pi }{ 4 } +\frac { \alpha }{ 2 } \right) } ,t\sin { \alpha } +\sqrt { 2 } \sin { \left( \frac { \pi }{ 4 } +\frac { \alpha }{ 2 } \right) } \right) \\ { B } &amp; = &amp; \left( r\cos { \alpha } -\frac { t\sin { \alpha } }{ \tan { \left( \frac { \alpha }{ 2 } \right) } } ,0 \right) \\ { B }' &amp; = &amp; \left( r\cos { \alpha } -\frac { t\sin { \alpha } }{ \tan { \left( \frac { \alpha }{ 2 } \right) } } -\frac { 1 }{ \sin { \left( \frac { \alpha }{ 2 } \right) } } ,0 \right) \\ C &amp; = &amp; \left( r\cos { \alpha } +t\sin { \alpha } \tan { \left( \frac { \alpha }{ 2 } \right) } ,0 \right) \\ { C }' &amp; = &amp; \left( r\cos { \alpha } +t\sin { \alpha } \tan { \left( \frac { \alpha }{ 2 } \right) } +\frac { 1 }{ \cos { \left( \frac { \alpha }{ 2 } \right) } } ,0 \right) \end{eqnarray}</p> <p>Attention: $\alpha$ is not the angle of $AOC$, it is some angle $ADC$ where $D$ changes location on $x$ axis for $r\neq t$. I am saying this because images can create confusion. 
Anyway, I will change them as soon as possible.</p> <p>I could consider $r=f(\alpha)$ and $t=g(\alpha)$, but for this question I am going to take $r$ and $t$ as constants. If they were functions of $\alpha$, some interesting shapes would appear. I experimented with different functions; however, the areas are more difficult to calculate, which is why I am not going to share them. Maybe in the future.</p> <p>We rotate the hall for $r=t$ in the example above: <br>In this case:</p> <ol> <li>Point A moves on a semicircle.</li> <li>The envelope of lines between A' and C' is a circular arc. One has to prove this, but I assume that it is true for $r=t$.</li> </ol> <p>If my second assumption is correct, the area of the sofa is $A= 2r-\frac { \pi r^{ 2 } }{ 2 } +\frac { \pi }{ 2 } $. The maximum area is reached when $r = 2/\pi$, and its value is: $$A = 2/\pi+\pi/2 = 2.207416099$$</p> <p>which matches Hammersley's sofa. The shape is also similar, or the same:</p> <p><a href="https://i.sstatic.net/v0Llb.gif" rel="noreferrer"><img src="https://i.sstatic.net/v0Llb.gif" alt="enter image description here"></a></p> <p>Now I am going to increase $t$ with respect to $r$. For $r=2/\pi$ and $t=0.77$:</p> <p><a href="https://i.sstatic.net/6dzwq.gif" rel="noreferrer"><img src="https://i.sstatic.net/6dzwq.gif" alt="enter image description here"></a></p> <p>Well, this looks like <a href="https://i.sstatic.net/FBlms.png" rel="noreferrer">Gerver's sofa</a>. </p> <p>I believe the area can be maximized by finding the equations of the envelopes above and below the sofa. Look at <a href="https://math.stackexchange.com/questions/1775749/right-triangle-on-an-ellipse-find-the-area">this question</a> where @Aretino has computed the area below $ABC$. </p> <p>I don't know enough to find equations for envelopes. I am afraid that I will make mistakes.
I considered calculating the area by counting the number of pixels in it, but this is not a good idea, because optimizing the area would require creating many images.</p> <p>I will give a bounty of 200 to whoever calculates the maximum area. As I said, the most difficult part of the problem is to find the equations of the envelopes. @Aretino did it.</p> <p><strong>PLUS:</strong> Could the following be the longest sofa, where $(r,t)=((\sqrt 5+1)/2,1)$? <a href="https://i.sstatic.net/ojAxc.gif" rel="noreferrer"><img src="https://i.sstatic.net/ojAxc.gif" alt="enter image description here"></a></p> <p>If you want to investigate further or use the animation for educational purposes, here is the Geogebra file: <a href="http://ggbm.at/vemEtGyj" rel="noreferrer">http://ggbm.at/vemEtGyj</a></p> <hr> <p>OK, I had some free time, so I counted the number of pixels in the sofa, and I am sure that I have something bigger than Hammersley's constant.</p> <p>First, I made a simulation for Hammersley's sofa, where $r=t=2/\pi$, exported the image to PNG at 300 dpi (6484x3342 pixels), and using Gimp counted the number of pixels which have exactly the same value. For Hammersley I got $3039086$ pixels. </p> <p>For the second case, $r=0.59$ and $t=0.66$, I got $3052780$ pixels. To calculate the area for this case:</p> <p>$$\frac{3052780}{3039086}(2/\pi + \pi/2)=2.217362628$$</p> <p>which is slightly less than Gerver's constant, which is $2.2195$. Here is the sofa:</p> <p><a href="https://i.sstatic.net/PhvTB.jpg" rel="noreferrer"><img src="https://i.sstatic.net/PhvTB.jpg" alt="enter image description here"></a></p>
<p>WARNING: this answer uses the new parameterization of points introduced by the OP: <span class="math-container">\begin{eqnarray} A &amp; = &amp; \left( r\cos { \alpha } ,t\sin { \alpha } \right) \\ { A }' &amp; = &amp; \left( r\cos { \alpha } +\sqrt { 2 } \cos { \left( \frac { \pi }{ 4 } +\frac { \alpha }{ 2 } \right) } ,t\sin { \alpha } +\sqrt { 2 } \sin { \left( \frac { \pi }{ 4 } +\frac { \alpha }{ 2 } \right) } \right) \\ C &amp; = &amp; \left( r\cos { \alpha } +t\sin { \alpha } \tan { \left( \frac { \alpha }{ 2 } \right) } ,0 \right) \\ { C }' &amp; = &amp; \left( r\cos { \alpha } +t\sin { \alpha } \tan { \left( \frac { \alpha }{ 2 } \right) } +\frac { 1 }{ \cos { \left( \frac { \alpha }{ 2 } \right) } } ,0 \right) \end{eqnarray}</span></p> <p>Another parameterization, which also appeared in an earlier version of this question, was used in a <a href="https://math.stackexchange.com/questions/1775749/right-triangle-on-an-ellipse-find-the-area/1776770#1776770">previous answer</a> to a related question.</p> <p>The inner shape of the sofa is formed by the ellipse of semiaxes <span class="math-container">$r$</span>, <span class="math-container">$t$</span> and by the envelope of lines <span class="math-container">$AC$</span> (here and in the following I'll consider only that part of the sofa in the <span class="math-container">$x\ge0$</span> half-plane).</p> <p>The equations of lines <span class="math-container">$AC$</span> can be expressed as a function of <span class="math-container">$\alpha$</span> (<span class="math-container">$0\le\alpha\le\pi$</span>) as <span class="math-container">$F(x,y,\alpha)=0$</span>, where: <span class="math-container">$$ F(x,y,\alpha)= -t y \sin\alpha \tan{\alpha\over2} - t \sin\alpha \left(x - r \cos\alpha - t \sin\alpha \tan{\alpha\over2}\right).
$$</span> The equation of the envelope can be found from: <span class="math-container">$$ F(x,y,\alpha)={\partial\over\partial\alpha}F(x,y,\alpha)=0, $$</span> giving the parametric equations for the envelope: <span class="math-container">$$ \begin{align} x_{inner}=&amp; (r-t) \cos\alpha+\frac{1}{2}(t-r) \cos2\alpha+\frac{1}{2}(r+t),\\ y_{inner}=&amp; 4 (t-r) \sin\frac{\alpha}{2}\, \cos^3\frac{\alpha}{2}.\\ \end{align} $$</span></p> <p>We need not consider this envelope if <span class="math-container">$t&lt;r$</span>, because in that case <span class="math-container">$y_{inner}&lt;0$</span>. If <span class="math-container">$t&gt;r$</span> the envelope meets the ellipse at a point <span class="math-container">$P$</span>: the corresponding value of <span class="math-container">$\alpha$</span> can be found from the equation <span class="math-container">$(x_{inner}/r)^2+(y_{inner}/t)^2=1$</span>, whose solution <span class="math-container">$\alpha=\bar\alpha$</span> is given by: <span class="math-container">$$ \begin{cases} \displaystyle\bar\alpha= 2\arccos\sqrt{t\over{t+r}}, &amp;\text{for $t\le3r$;}\\ \displaystyle\bar\alpha= \arccos\sqrt{t\over{2(t-r)}}, &amp;\text{for $t\ge3r$.}\\ \end{cases} $$</span></p> <p>The corresponding values <span class="math-container">$\bar\theta$</span> for the parameter of the ellipse can be easily computed from: <span class="math-container">$\bar\theta=\arcsin(y_{inner}(\bar\alpha)/t)$</span>: <span class="math-container">$$ \begin{cases} \displaystyle\bar\theta= \arcsin\frac{4 \sqrt{rt} (t-r)}{(r+t)^2}, &amp;\text{for $t\le3r$;}\\ \displaystyle\bar\theta= \arcsin\frac{\sqrt{t(t-2 r)}}{t-r}, &amp;\text{for $t\ge3r$.}\\ \end{cases} $$</span></p> <p>For <span class="math-container">$t\ge r$</span> we can then represent half the area under the inner shape of the sofa as an integral: <span class="math-container">$$ {1\over2}Area_{inner}=\int_0^{2t-r} y\,dx= \int_{\pi/2}^{\bar\theta}t\sin\theta{d\over d\theta}(r\cos\theta)\,d\theta+ 
\int_{\bar\alpha}^{\pi} y_{inner}{dx_{inner}\over d\alpha}\,d\alpha. $$</span></p> <p>This can be computed explicitly, here's for instance the result for <span class="math-container">$r&lt;t&lt;3r$</span>: <span class="math-container">$$ \begin{align} {1\over2}Area_{inner}= {\pi\over4}(r^2-rt+t^2) +\frac{1}{48} (t-r)^2 \left[-24 \cos ^{-1}\frac{\sqrt{t}}{\sqrt{r+t}} +12 \sin \left(2 \cos^{-1}\frac{\sqrt{t}}{\sqrt{r+t}}\right)\\ +12 \sin \left(4 \cos^{-1}\frac{\sqrt{t}}{\sqrt{r+t}}\right) -4 \sin \left(6 \cos^{-1}\frac{\sqrt{t}}{\sqrt{r+t}}\right) -3 \sin \left(8 \cos^{-1}\frac{\sqrt{t}}{\sqrt{r+t}}\right) \right]\\ -2 r t {\sqrt{rt} |r^2-6 r t+t^2|\over(r+t)^4} -{1\over4} r t \sin ^{-1}\frac{4 \sqrt{rt} (t-r)}{(r+t)^2}\\ \end{align} $$</span></p> <p>The outer shape of the sofa is formed by line <span class="math-container">$y=1$</span> and by the envelope of lines <span class="math-container">$A'C'$</span>. By repeating the same steps as above one can find the parametric equations of the outer envelope: <span class="math-container">$$ \begin{align} x_{outer}&amp;= (r-t) \left(\cos\alpha-{1\over2}\cos2\alpha\right) +\cos\frac{\alpha}{2}+{1\over2}(r+t)\\ y_{outer}&amp;= \sin\frac{\alpha}{2} \left(-3 (r-t) \cos\frac{\alpha}{2} +(t-r) \cos\frac{3 \alpha}{2}+1\right)\\ \end{align} $$</span> This curve meets line <span class="math-container">$y=1$</span> for <span class="math-container">$\alpha=\pi$</span> if <span class="math-container">$t-r\le\bar x$</span>, where <span class="math-container">$\bar x=\frac{1}{432} \left(17 \sqrt{26 \left(11-\sqrt{13}\right)}-29 \sqrt{2 \left(11-\sqrt{13}\right)}\right)\approx 0.287482$</span>. 
In that case the intersection point has coordinates <span class="math-container">$(2t-r,1)$</span> and the area under the outer shape of the sofa can be readily found: <span class="math-container">$$ {1\over2}Area_{outer}={1\over3}(r+2t)+{\pi\over4}(1-(t-r)^2) $$</span> If, on the other hand, <span class="math-container">$t-r&gt;\bar x$</span> then one must find the value of parameter <span class="math-container">$\alpha$</span> at which the envelope meets the line, by solving the equation <span class="math-container">$y_{outer}=1$</span> and looking for the smallest positive solution. This has to be done, in general, by some numerical method.</p> <p>The area of the sofa can then be found as <span class="math-container">$Area_{tot}=Area_{outer}-Area_{inner}$</span>. I used Mathematica to draw a contour plot of this area, as a function of <span class="math-container">$r$</span> (horizontal axis) and <span class="math-container">$t$</span> (vertical axis):</p> <p><a href="https://i.sstatic.net/Z2Ypa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z2Ypa.png" alt="enter image description here" /></a></p> <p>There is a clear maximum in the region around <span class="math-container">$r = 0.6$</span> and <span class="math-container">$t = 0.7$</span>. In this region one can use the simple expressions for <span class="math-container">$Area_{inner}$</span> and <span class="math-container">$Area_{outer}$</span> given above, to find the exact value of the maximum. A numerical search gives <span class="math-container">$2.217856997942074266$</span> for the maximum area, reached for <span class="math-container">$r=0.605513519698965$</span> and <span class="math-container">$t=0.6678342468712839$</span>.</p>
<p>This is not an answer to the stated question, just an outline and an example of how to compute the envelope (and thus the area) numerically, pretty efficiently.</p> <p>The code seems to work, but it is not tested, nor anywhere near optimal at the algorithmic level; it is just a rough initial sketch to explore the problem at hand.</p> <p>The sofa is symmetric with respect to the $x$ axis, so we only need to consider the (positive $x$) half plane. If we use $\alpha$ for the rotation, $\alpha \in [0, \pi]$, the initially vertical walls (on the right side) are the only ones we need to consider. For simplicity, I'll use $r_x$ and $r_y$ for the radii (OP used $r$ and $t$, respectively).</p> <p>The equation for the points that form the near side wall ($t \ge 0$) is $$\vec{p}_{nw}(t, \alpha) = \begin{cases} x_{nw}(t, \alpha) = r_x \cos(\alpha) + t \sin(\alpha/2)\\ y_{nw}(t, \alpha) = r_y \sin(\alpha) - t \cos(\alpha/2)\end{cases}$$</p> <p>Setting $x_{nw}(t, \alpha) = x$, solving for $t$, and substituting into $y_{nw}(t, \alpha)$ yields $$y_n(x, \alpha) = r_y \sin(\alpha) + \frac{r_x \cos(\alpha) - x}{\tan(\alpha/2)}$$ Because the near wall starts at angle $\alpha$, we must only consider $x \ge x_{nw}(0,\alpha)$. We can do that in practice by defining $$\alpha_0(x) = \left\lbrace\begin{matrix} \arccos\left(\frac{x}{r_x}\right),&amp;x \lt r_x\\ 0,&amp;x \ge r_x\end{matrix}\right.$$ and only considering $\alpha_0 \le \alpha \le \pi$ when evaluating $y_n(x,\alpha)$. It reaches its maximum when its derivative, $$\frac{d y_n(x,\alpha)}{d \alpha} = \frac{x - r_x}{1 - \cos(\alpha)} - (r_x - r_y)\cos(\alpha)$$ is zero.
There may be two real roots, $$\begin{align}\alpha_1(x) &amp;= \arccos\left(\frac{\sqrt{ (r_x - r_y)(5 r_x - r_y - 4 x)} + (r_x - r_y)}{2 ( r_x - r_y )}\right)\\ \alpha_2(x) &amp;= \pi - \arccos\left(\frac{\sqrt{ (r_x - r_y)(5 r_x - r_y - 4 x)} - (r_x - r_y)}{2 ( r_x - r_y )}\right)\end{align}$$ In summary, the near wall is the maximum of one, two, or three values: $y_n(x,\alpha_0)$; $y_n(x,\alpha_1)$ if $\alpha_0 \lt \alpha_1 \lt \pi$; and $y_n(x,\alpha_2)$ if $\alpha_0 \lt \alpha_2 \lt \pi$.</p> <p>For the far side wall, the points are $$\vec{p}_f(t, \alpha) = \begin{cases} x_f(t, \alpha) = r_x \cos(\alpha) + \cos(\alpha/2) + \sin(\alpha/2) + t \sin(\alpha/2)\\ y_f(t, \alpha) = r_y \sin(\alpha) + \sin(\alpha/2) - \cos(\alpha/2) - t \cos(\alpha/2)\end{cases}$$ The first added term represents the corridor width, and the second the corridor height, both $1$. Setting $x_f(t, \alpha) = x$, solving for $t$, and substituting into $y_f(t, \alpha)$ yields $$y_f(x, \alpha) = \frac{(r_x + r_y - 2x)\cos(\alpha/2) + (r_x - r_y)\cos(3\alpha/2) + 2 }{2 \sin(\alpha/2)}$$ Its derivative is $$\frac{d y_f(x, \alpha)}{d \alpha} = \frac{r_x - x + \cos(\alpha/2)}{\cos(\alpha) - 1} - \cos(\alpha)(r_x - r_y)$$ It can have up to four real roots (the roots of $4(r_x-r_y)\chi^4 - 6(r_x-r_y)\chi^2 - \chi + (r_x - r_y) + (x - r_y) = 0$, where $\chi = \cos(\alpha/2)$). While it does have analytical solutions, they are nasty, so I prefer to use a binary search instead. I utilize the fact that the sign (and zeros) of the derivative are the same as those of the simpler function $$d_f(x, \alpha) = \cos(\alpha)\left(\cos(\alpha)-1\right)(r_x - r_y) - \cos(\alpha/2) - r_x + x$$ which does not have poles at $\alpha=0$ or $\alpha=\pi$.</p> <p>Here is an example implementation in C:</p> <pre><code>#include &lt;stdlib.h&gt; #include &lt;string.h&gt; #include &lt;stdio.h&gt; #include &lt;math.h&gt; #define PI 3.14159265358979323846 static double near_y(const double x, const double xradius, const double yradius) { double y = (x &lt; xradius) ?
yradius * sqrt(1.0 - (x/xradius)*(x/xradius)) : 0.0; if (xradius != yradius) { const double a0 = (x &lt; xradius) ? acos(x/xradius) : 0.0; const double s = (xradius - yradius)*(5*xradius - yradius - 4*x); if (s &gt;= 0.0) { const double r = 0.5 * sqrt(s) / (xradius - yradius); if (r &gt; -1.5 &amp;&amp; r &lt; 0.5) { const double a1 = acos(r + 0.5); if (a1 &gt; a0 &amp;&amp; a1 &lt; PI) { const double y1 = yradius * sin(a1) + (xradius * cos(a1) - x) / tan(0.5 * a1); if (y &lt; y1) y = y1; } } if (r &gt; -0.5 &amp;&amp; r &lt; 1.5) { const double a2 = PI - acos(r - 0.5); if (a2 &gt; a0 &amp;&amp; a2 &lt; PI) { const double y2 = yradius * sin(a2) + (xradius * cos(a2) - x) / tan(0.5 * a2); if (y &lt; y2) y = y2; } } } } return y; } </code></pre> <p>Above, <code>near_y()</code> finds the maximum $y$ coordinate the near wall reaches at point $x$. </p> <pre><code>static double far_y(const double x, const double xradius, const double yradius) { const double rxy = xradius - yradius; const double rx = xradius - x; double retval = 1.0; double anext = 0.0; double dnext = x - 1.0 - xradius; double acurr, dcurr, y; /* Outer curve starts at min(1+xradius, 2*yradius-xradius). */ if (x &lt; 1.0 + xradius &amp;&amp; x &lt; yradius + yradius - xradius) return 1.0; while (1) { acurr = anext; dcurr = dnext; anext += PI/1024.0; if (anext &gt;= PI) break; dnext = cos(anext)*(cos(anext) - 1.0)*rxy - cos(anext*0.5) - rx; if ((dcurr &lt; 0.0 &amp;&amp; dnext &gt; 0.0) || (dcurr &gt; 0.0 &amp;&amp; dnext &lt; 0.0)) { double amin = (dcurr &lt; 0.0) ? acurr : anext; double amax = (dcurr &lt; 0.0) ? 
anext : acurr; double a, d; do { a = 0.5 * (amin + amax); d = cos(a)*(cos(a)-1.0)*rxy - cos(a*0.5) - rx; if (d &lt; 0.0) amin = a; else if (d &gt; 0.0) amax = a; else break; } while (amax &gt; amin &amp;&amp; a != amin &amp;&amp; a != amax); y = (cos(0.5*a)*(0.5*(xradius+yradius)-x) + cos(1.5*a)*rxy*0.5 + 1.0) / sin(0.5*a); if (retval &gt; y) { retval = y; if (y &lt;= 0.0) return 0.0; } } else if (dcurr == 0.0) { y = (cos(0.5*acurr)*(0.5*(xradius+yradius)-x) + cos(1.5*acurr)*rxy*0.5 + 1.0) / sin(0.5*acurr); if (retval &gt; y) { retval = y; if (y &lt;= 0.0) return 0.0; } } } return retval; } </code></pre> <p>Above, <code>far_y()</code> finds the minimum $y$ coordinate the far wall reaches at point $x$. It calculates the sign of the derivative for 1024 values of $\alpha$, and uses a binary search to find the root (and the extremum $y$) whenever the derivative spans zero.</p> <p>With the above two functions, we only need to divide the full sofa width ($1 + r_x$) into slices, evaluate the $y$ coordinates for each slice, and multiply the $y$ coordinate difference by twice the slice width (since we only calculate one half of the sofa) to obtain an estimate for the sofa area (using the midpoint rule for the integral):</p> <pre><code>double sofa_area(const unsigned int xsamples, const double xradius, const double yradius) { if (xradius &gt; 0.0 &amp;&amp; yradius &gt; 0.0) { const double dx = (1.0 + xradius) / xsamples; double area = 0.0; unsigned int i; for (i = 0; i &lt; xsamples; i++) { const double x = dx * (0.5 + i); const double ymin = near_y(x, xradius, yradius); const double ymax = far_y(x, xradius, yradius); if (ymin &lt; ymax) area += ymax - ymin; } return 2*dx*area; } else return 0.0; } </code></pre> <p>As far as I have found, the best result is <code>sofa_area(N, 0.6055, 0.6678) = 2.21785</code> (with <code>N ≥ 5000</code>, larger <code>N</code> yields more precise estimates; I checked up to <code>N = 1,000,000</code>).</p> <p>The curve 
the inner corner makes ($(x_{nw}(0,\alpha), y_{nw}(0,\alpha))$, $0 \le \alpha \le \pi$) is baked into the <code>near_y()</code> and <code>far_y()</code> functions. However, it would be possible to replace $y_{nw}(0,\alpha)$ with a more complicated function (perhaps a polynomial scaling $r_y$, so that it is $1$ at $\alpha = 0, \pi$?), if one re-evaluates the functions above. I personally use Maple or Mathematica for the math, so the hard part, really, is to think of a suitable function that would allow "deforming" the elliptic path in interesting ways, without making the above equations too hard or slow to implement.</p> <p>The C code itself could also be optimized. (I don't mean micro-optimizations; I mean things like using the trapezoid rule for the integral, a better root-finding approach for <code>far_y()</code>, and so on.)</p>
combinatorics
<p>I'm studying graphs in algorithms and complexity (but I'm not very good at math). As in the title:</p> <blockquote> <p>Why does a complete graph have $\frac{n(n-1)}{2}$ edges?</p> </blockquote> <p>And how is this related to combinatorics?</p>
<p>A simpler answer without binomials: A complete graph means that every vertex is connected with every other vertex. If you take one vertex of your graph, you therefore have $n-1$ outgoing edges from that particular vertex. </p> <p>Now, you have $n$ vertices in total, so you might be tempted to say that there are $n(n-1)$ edges in total, $n-1$ for every vertex in your graph. But this method counts every edge twice, because every edge going out from one vertex is an edge going into another vertex. Hence, you have to divide your result by 2. This leaves you with $n(n-1)/2$.</p>
<p>A complete graph has an edge between any two vertices. You can get an edge by picking any two vertices.</p> <p>So if there are $n$ vertices, there are $n$ choose $2$ = ${n \choose 2} = n(n-1)/2$ edges.</p> <p>Does that help?</p>
number-theory
<p>The following appeared in the problems section of the March 2015 issue of the <em>American Mathematical Monthly</em>.</p> <blockquote> <p>Show that there are infinitely many rational triples $(a, b, c)$ such that $a + b + c = abc = 6$.</p> </blockquote> <p>For example, here are two solutions $(1,2,3)$ and $(25/21,54/35,49/15)$. </p> <p>The deadline for submitting solutions was July 31 2015, so it is now safe to ask: is there a simple solution? One that doesn't involve elliptic curves, for instance? </p>
<p>(<strong><em>Edit at the bottom</em></strong>.) Here is an <strong>elementary</strong> way (known to Fermat) to find an infinite number of rational points. From $a+b+c = abc = 6$, we need to solve the equation,</p> <p>$$ab(6-a-b) = 6\tag1$$</p> <p>Solving $(1)$ as a quadratic in $b$, its discriminant $D$ must be made a square,</p> <p>$$D := a^4-12a^3+36a^2-24a = z^2$$</p> <p>Using <strong><em>any</em></strong> non-zero solution $a_0$, do the transformation,</p> <p>$$a=x+a_0\tag2$$</p> <p>For this curve, let $a_0=2$, and we get,</p> <p>$$x^4-4x^3-12x^2+8x+16$$</p> <p>Assume it to be a square, </p> <p>$$x^4-4x^3-12x^2+8x+16 = (px^2+qx+r)^2$$</p> <p>Expand, then collect powers of $x$ to get the form,</p> <p>$$p_4x^4+p_3x^3+p_2x^2+p_1x+p_0 = 0$$</p> <p>where the $p_i$ are polynomials in $p,q,r$. Then solve the system of <strong>three</strong> equations $p_2 = p_1 = p_0 = 0$ using the <strong>three</strong> unknowns $p,q,r$. One ends up with,</p> <p>$$105/64\,x^4+3/4\,x^3=0$$</p> <p>Thus, $x =-16/35$ or,</p> <p>$$a = x+a_0 = -16/35+2 = 54/35$$</p> <p>and you have a new rational point, </p> <p>$$a_1 = 54/35 = 6\times 3^{\color{red}2}/35$$ </p> <p>Use this on $(2)$ as $x = y+54/35$ and repeat the procedure. One gets,</p> <p>$$a_2 = 6\times 4286835^{\color{red}2}/37065988023371$$</p> <p>Again using this on $(2)$, we eventually have,</p> <p>$$\small {a_3 = 6\times 11838631447160215184123872719289314446636565357654770746958595}^{\color{red}2} /d\quad$$</p> <p>where the denominator $d$ is a large integer too tedious to write. </p> <p><strong>Conclusion:</strong> Starting with a "seed" solution, just a few iterations of this procedure have yielded $a_i$ with a similar form $6n^{\color{red}2}/d$ that grow rapidly in "height". Heuristically, this suggests an <strong><em>infinite</em></strong> sequence of distinct rational $a_i$ that grow in height with each iteration. </p> <p>$\color{blue}{Edit}$: Courtesy of Aretino's remark below, another piece of the puzzle was found. 
We can translate his recursion into an identity. If,</p> <p>$$a^4-12a^3+36a^2-24a = z^2$$</p> <p>then subsequent ones are,</p> <p>$$v^4-12v^3+36v^2-24v = \left(\frac{12\,e\,g\,(e^2+3f^2)}{(e^2-f^2)^2}\right)^2$$</p> <p>where,</p> <p>$$\begin{aligned} v &amp;=\frac{-6g^2}{e^2-f^2}\\ \text{and,}\\ e &amp;=\frac{a^3-3a^2+3}{3a}\\ f &amp;=\frac{a^3-6a^2+9a-6}{z}\\ g &amp;=\frac{a^3-6a^2+12a-6}{z} \end{aligned}$$</p> <p>Starting with $a_0=2$, this leads to $v_1 = 6\times 3^2/35$, then $v_2 = 6\times 4286835^2/37065988023371$, <em>ad infinitum</em>. Thus, this is an <strong><em>elementary</em></strong> demonstration that there is an infinite sequence of rational $a_i = v_i$ without appealing to elliptic curves.</p>
<p>More generally, suppose for some <span class="math-container">$S,P$</span> we're given a rational solution <span class="math-container">$(a_0,b_0,c_0)$</span> of the Diophantine equation <span class="math-container">$$ E = E_{S,P}: \quad a+b+c = S, \ \ abc = P. $$</span> Then, as long as <span class="math-container">$a,b,c$</span> are pairwise distinct, we can obtain a new solution <span class="math-container">$(a_1,b_1,c_1)$</span> by applying the transformation <span class="math-container">$$ T\bigl((a,b,c)\bigr) = \left( -\frac{a(b-c)^2}{(a-b)(a-c)} \, , -\frac{b(c-a)^2}{(b-c)(b-a)} \, , -\frac{c(a-b)^2}{(c-a)(c-b)} \right). $$</span> Indeed, it is easy to see that the coordinates of <span class="math-container">$T(a,b,c)$</span> multiply to <span class="math-container">$abc$</span>; that they also sum to <span class="math-container">$a+b+c$</span> takes only a bit of algebra. (This transformation was obtained by regarding <span class="math-container">$abc=P$</span> as a cubic curve in the plane <span class="math-container">$a+b+c=S$</span>, finding the tangent at <span class="math-container">$(a_0,b_0,c_0)$</span>, and computing its third point of intersection with <span class="math-container">$abc=P$</span>; see picture and further comments below.) We can then repeat the procedure, computing <span class="math-container">$$ (a_2,b_2,c_2) = T\bigl((a_1,b_1,c_1)\bigr), \quad (a_3,b_3,c_3) = T\bigl((a_2,b_2,c_2)\bigr), $$</span> etc., as long as each <span class="math-container">$a_i,b_i,c_i$</span> are again pairwise distinct. 
In our case <span class="math-container">$S=P=6$</span> and we start from <span class="math-container">$(a_0,b_0,c_0) = (1,2,3)$</span>, finding <span class="math-container">$(a_1,b_1,c_1) = (-1/2, 8, -3/2)$</span>, <span class="math-container">$(a_2,b_2,c_2) = (-361/68, -32/323, 867/76)$</span>, <span class="math-container">$$ (a_3,b_3,c_3) = \left( \frac{79790995729}{9885577384}\, ,\ -\frac{14927155328}{32322537971}\, ,\ -\frac{39280614987}{24403407416}\, \right), $$</span> "etcetera".</p> <p>As with the recursive construction given by <strong>Tito Piezas III</strong>, the construction of <span class="math-container">$T$</span> via tangents to a cubic is an example of a classical technique that has been incorporated into the modern theory of elliptic curves but does not require explicit delving into this theory. Also as with <strong>TPIII</strong>'s construction, completing the proof requires showing that the iteration does not eventually cycle. We do this by showing that (as suggested by the first three steps) the solutions <span class="math-container">$(a_i,b_i,c_i)$</span> get ever more complicated as <span class="math-container">$i$</span> increases.</p> <p>We measure the complexity of a rational number by writing it as <span class="math-container">$m/n$</span> <em>in lowest terms</em> and defining a "height" <span class="math-container">$H$</span> by <span class="math-container">$H(m/n) = \sqrt{m^2+n^2}$</span>. Using the defining equations of <span class="math-container">$E_{S,P}$</span>, we eliminate <span class="math-container">$b,c$</span> from the formula for the first coordinate of <span class="math-container">$T\bigl( (a,b,c) \bigr)$</span>, and likewise for each of the other two coordinates, finding that <span class="math-container">$$ T\bigl( (a,b,c) \bigr) = (t(a),t(b),t(c)) $$</span> where <span class="math-container">$$ t(x) := -\frac{x^2(x-S)^2 - 4Px}{x^2(2x-S)+P}. 
$$</span> We find that the numerator and denominator are relatively prime as polynomials in <span class="math-container">$x$</span>, unless <span class="math-container">$P=0$</span> or <span class="math-container">$P=(S/3)^3$</span>, when <span class="math-container">$E_{S,P}$</span> is degenerate (obviously so if <span class="math-container">$P=0$</span>, and with an isolated double point at <span class="math-container">$a=b=c=S/3$</span> if <span class="math-container">$P=(S/3)^3$</span>). Thus <span class="math-container">$t$</span> is a rational function of degree <span class="math-container">$4$</span>, meaning that <span class="math-container">$$ t(m/n) = \frac{N(m,n)}{D(m,n)} $$</span> for some homogeneous polynomials <span class="math-container">$N,D$</span> of degree <span class="math-container">$4$</span> <em>without common factor</em>. We claim:</p> <p><strong>Proposition.</strong> <em>If <span class="math-container">$f=N/D$</span> is a rational function of degree <span class="math-container">$d$</span> then there exists <span class="math-container">$c&gt;0$</span> such that <span class="math-container">$H(f(x)) \geq c H(x)^d$</span> for all <span class="math-container">$x$</span>.</em></p> <p><strong>Corollary</strong>: <em>If <span class="math-container">$d&gt;1$</span> and a sequence <span class="math-container">$x_0,x_1,x_2,x_3,\ldots$</span> is defined inductively by <span class="math-container">$x_{i+1} = f(x_i)$</span>, then <span class="math-container">$H(x_i) \rightarrow \infty$</span> as <span class="math-container">$i \rightarrow \infty$</span> provided some <span class="math-container">$H(x_i)$</span> is large enough, namely <span class="math-container">$H(x_i) &gt; c^{-1/(d-1)}$</span>.</em></p> <p><em>Proof</em> of Proposition: This would be clear if we knew that the fraction <span class="math-container">$f(m/n) = N(m,n)/D(m,n)$</span> must be in lowest terms, because then we could take <span class="math-container">$$ c = c_0 :=
\min_{m^2+n^2 = 1} \sqrt{N(m,n)^2 + D(m,n)^2}. $$</span> (Note that <span class="math-container">$c_0$</span> is strictly positive, because it is the minimum value of a continuous positive function on the unit circle, and the unit circle is compact.) In general <span class="math-container">$N(m,n)$</span> and <span class="math-container">$D(m,n)$</span> need not be relatively prime, but their gcd is bounded above: because <span class="math-container">$N,D$</span> have no common factor, they have nonzero linear combinations of the form <span class="math-container">$R_1 m^{2d}$</span> and <span class="math-container">$R_2 n^{2d}$</span>, and since <span class="math-container">$\gcd(m^{2d},n^{2d}) = \gcd(m,n)^{2d} = 1$</span> we have <span class="math-container">$$ \gcd(N(m,n), D(m,n)) \leq R := \text{lcm} (R_1,R_2). $$</span> (In fact <span class="math-container">$R = \pm R_1 = \pm R_2$</span>, the common value being <span class="math-container">$\pm$</span> the <em>resultant</em> of <span class="math-container">$N$</span> and <span class="math-container">$D$</span>; but we do not need this.) Thus we may take <span class="math-container">$c = c_0/R$</span>, <strong>QED</strong>.</p> <p>For our degree-<span class="math-container">$4$</span> functions <span class="math-container">$t$</span> associated with <span class="math-container">$E_{S,P}$</span> we compute <span class="math-container">$R = P^2 (27P-S^3)^2$</span>, which is <span class="math-container">$18^4$</span> for our <span class="math-container">$S=P=6$</span>; and we calculate <span class="math-container">$c_0 &gt; 1/12$</span> (the minimum occurs near <span class="math-container">$(.955,.3)$</span>). Hence the sequence of solutions <span class="math-container">$(a_i,b_i,c_i)$</span> is guaranteed not to cycle once some coordinate has height at least <span class="math-container">$(12 \cdot 18^4)^{1/3} = 108$</span>. 
This already happens for <span class="math-container">$i=2$</span>, so we have proved that <span class="math-container">$E_{6,6}$</span> has infinitely many rational solutions. <span class="math-container">$\Box$</span></p> <p>The same technique works with <strong>TPIII</strong>'s recursion, which has <span class="math-container">$d=9$</span>.</p> <p>The following Sage plot shows:</p> <p>in <span class="math-container">$\color{blue}{\text{blue}}$</span>, the curve <span class="math-container">$E_{6,6}$</span>, projected to the <span class="math-container">$(a,b)$</span> plane (with both coordinates in <span class="math-container">$[-6,12]$</span>);</p> <p>in <span class="math-container">$\color{gray}{\text{gray}}$</span>, the asymptotes <span class="math-container">$a=0$</span>, <span class="math-container">$b=0$</span>, and <span class="math-container">$c=0$</span>;</p> <p>and in <span class="math-container">$\color{orange}{\text{orange}}$</span>, <span class="math-container">$\color{red}{\text{red}}$</span>, and <span class="math-container">$\color{brown}{\text{brown}}$</span>, the tangents to the curve at <span class="math-container">$(a_i,b_i,c_i)$</span> that meet the curve again at <span class="math-container">$(a_{i+1},b_{i+1},c_{i+1})$</span>, for <span class="math-container">$i=0,1,2$</span>:</p> <p><a href="https://i.sstatic.net/tf2eK.png" rel="noreferrer"><img src="https://i.sstatic.net/tf2eK.png" alt=""></a><br> <sub>(source: <a href="http://math.harvard.edu/~elkies/mx1384653.png" rel="noreferrer">harvard.edu</a>)</sub> </p> <p>Further solutions can be obtained intersecting <span class="math-container">$E$</span> with the line joining two non-consecutive points; this illustrated by the dotted <span class="math-container">$\color{green}{\text{green}}$</span> line, which connects the <span class="math-container">$i=0$</span> to the <span class="math-container">$i=2$</span> point, and meets <span class="math-container">$E$</span> again in a point <span 
class="math-container">$(20449/8023, 25538/10153, 15123/16159)$</span> with all coordinates positive.</p> <p>In the modern theory of elliptic curves, the rational points (including any rational "points at infinity", here the asymptotes) form an additive group, with three points adding to zero <strong>iff</strong> they are the intersection of <span class="math-container">$E$</span> with a line (counted with multiplicity). Hence if we denote our initial point <span class="math-container">$(1,2,3)=(a_0,b_0,c_0)$</span> by <span class="math-container">$P$</span>, the map <span class="math-container">$T$</span> is multiplication by <span class="math-container">$-2$</span> in the group law, so the <span class="math-container">$i$</span>-th iterate is <span class="math-container">$(-2)^i P$</span>, and <span class="math-container">$(20449/8023, 25538/10153, 15123/16159)$</span> is <span class="math-container">$-(P+4P) = -5P$</span>. Cyclic permutations of the coordinates is translation by a 3-torsion point (indeed an elliptic curve has a rational 3-torsion point <strong>iff</strong> it is isomorphic with <span class="math-container">$E_{S,P}$</span> for some <span class="math-container">$S$</span> and <span class="math-container">$P$</span>), and switching two coordinates is multiplication by <span class="math-container">$-1$</span>. The iteration constructed by <strong>Tito Piezas III</strong> is multiplication by <span class="math-container">$\pm 3$</span> in the group law; in general, multiplication by <span class="math-container">$k$</span> is a rational function of degree <span class="math-container">$k^2$</span>.</p>
linear-algebra
<p>First of all, I am very comfortable with the tensor product of vector spaces. I am also very familiar with the well-known generalizations, in particular the theory of monoidal categories. I have gained quite some intuition for tensor products and can work with them. Therefore, my question is not about the definition of tensor products, nor is it about its properties. It is rather about the mental images. My intuition for tensor products was never really <strong>geometric</strong>. Well, except for the tensor product of commutative algebras, which corresponds to the fiber product of the corresponding affine schemes. But let's just stick to real vector spaces here, for which I have some geometric intuition, for example from classical analytic geometry. </p> <p>The direct product of two (or more) vector spaces is quite easy to imagine: There are two (or more) "directions" or "dimensions" in which we "insert" the vectors of the individual vector spaces. For example, the direct product of a line with a plane is a three-dimensional space.</p> <p>The exterior algebra of a vector space consists of "blades", as is nicely explained in the <a href="http://en.wikipedia.org/wiki/Exterior_algebra" rel="noreferrer">Wikipedia article</a>.</p> <p>Now what about the tensor product of two finite-dimensional real vector spaces $V,W$? Of course $V \otimes W$ is a direct product of $\dim(V)$ copies of $W$, but this description is not intrinsic, and also it doesn't really incorporate the symmetry $V \otimes W \cong W \otimes V$. How can we describe $V \otimes W$ geometrically in terms of $V$ and $W$? This description should be intrinsic and symmetric.</p> <p>Note that <a href="https://math.stackexchange.com/questions/115630">SE/115630</a> basically asked the same, but received no actual answer. 
The answer given at <a href="https://math.stackexchange.com/questions/309838">SE/309838</a> discusses where tensor products are used in differential geometry for more abstract notions such as tensor fields and tensor bundles, but this doesn't answer the question either. (Even if my question gets closed as a duplicate, I hope that the other questions receive more attention and answers.)</p> <p>More generally, I would like to ask for a geometric picture of the tensor product of two vector bundles on nice topological spaces. For example, tensoring with a line bundle is some kind of twisting. But this is still somewhat vague. For example, consider the Möbius strip on the circle $S^1$, and pull it back to the torus $S^1 \times S^1$ along the first projection. Do the same with the second projection, and then tensor both. We get a line bundle on the torus, okay, but what does it look like geometrically?</p> <p>Perhaps the following related question is easier to answer: Assume we have a geometric understanding of two linear maps $f : \mathbb{R}^n \to \mathbb{R}^m$, $g : \mathbb{R}^{n'} \to \mathbb{R}^{m'}$. Then, how can we imagine their tensor product $f \otimes g : \mathbb{R}^n \otimes \mathbb{R}^{n'} \to \mathbb{R}^m \otimes \mathbb{R}^{m'}$ or the corresponding linear map $\mathbb{R}^{n n'} \to \mathbb{R}^{m m'}$ geometrically? This is connected to the question about vector bundles via their cocycle description.</p>
<p>Well, this may not qualify as "geometric intuition for the tensor product", but I can offer some insight into the tensor product of line bundles.</p> <p>A line bundle is a very simple thing -- all that you can "do" with a line is flip it over, which means that in some basic sense, the Möbius strip is the only really nontrivial line bundle. If you want to understand a line bundle, all you need to understand is where the Möbius strips are.</p> <p>More precisely, if $X$ is a line bundle over a base space $B$, and $C$ is a closed curve in $B$, then the preimage of $C$ in $X$ is a line bundle over a circle, and is therefore either a cylinder or a Möbius strip. Thus, a line bundle defines a function $$ \varphi\colon \;\pi_1(B)\; \to \;\{-1,+1\} $$ where $\varphi$ maps a loop to $-1$ if its preimage is a Möbius strip, and maps a loop to $+1$ if its preimage is a cylinder.</p> <p>It's not too hard to see that $\varphi$ is actually a homomorphism, where $\{-1,+1\}$ forms a group under multiplication. This homomorphism completely determines the line bundle, and there are no restrictions on the function $\varphi$ beyond the fact that it must be a homomorphism. This makes it easy to classify line bundles on a given space.</p> <p>Now, if $\varphi$ and $\psi$ are the homomorphisms corresponding to two line bundles, then the tensor product of the bundles corresponds to the <em>algebraic product of $\varphi$ and $\psi$</em>, i.e. the homomorphism $\varphi\psi$ defined by $$ (\varphi\psi)(\alpha) \;=\; \varphi(\alpha)\,\psi(\alpha). 
$$ Thus, the tensor product of two bundles only "flips" the line along the curve $C$ if exactly one of $\varphi$ and $\psi$ flip the line (since $-1\times+1 = -1$).</p> <p>In the example you give involving the torus, one of the pullbacks flips the line as you go around in the longitudinal direction, and the other flips the line as you around in the meridional direction:</p> <p><img src="https://i.sstatic.net/iEOgb.png" alt="enter image description here"> <img src="https://i.sstatic.net/SKGy1.png" alt="enter image description here"></p> <p>Therefore, the tensor product will flip the line when you go around in <em>either</em> direction:</p> <p><img src="https://i.sstatic.net/tmKVQ.png" alt="enter image description here"></p> <p>So this gives a geometric picture of the tensor product in this case.</p> <p>Incidentally, it turns out that the following things are all really the same:</p> <ol> <li><p>Line bundles over a space $B$</p></li> <li><p>Homomorphisms from $\pi_1(X)$ to $\mathbb{Z}/2$.</p></li> <li><p>Elements of $H^1(B,\mathbb{Z}/2)$.</p></li> </ol> <p>In particular, every line bundle corresponds to an element of $H^1(B,\mathbb{Z}/2)$. This is called the <a href="https://en.wikipedia.org/wiki/Stiefel%E2%80%93Whitney_class" rel="noreferrer">Stiefel-Whitney class</a> for the line bundle, and is a simple example of a <a href="https://en.wikipedia.org/wiki/Characteristic_class" rel="noreferrer">characteristic class</a>.</p> <p><strong>Edit:</strong> As Martin Brandenburg points out, the above classification of line bundles does not work for arbitrary spaces $B$, but does work in the case where $B$ is a CW complex.</p>
<p>Good question. My personal feeling is that we gain true geometric intuition of vector spaces only once norms/inner products/metrics are introduced. Thus, it probably makes sense to consider tensor products in the category of, say, Hilbert spaces (maybe finite-dimensional ones at first). My geometric intuition is still mute at this point, but I know that (for completed tensor products) we have an isometric isomorphism $$ L^2(Z_1) \otimes L^2(Z_2) \cong L^2(Z_1 \times Z_2) $$<br> where $Z_i$'s are measure spaces. In the finite-dimensional setting one, of course, just uses counting measures on finite sets. From this point, one can at least rely upon analytic intuition for the tensor product (Fubini theorem and computation of double integrals as iterated integrals, etc.).</p>
linear-algebra
<p><span class="math-container">$$\det(A^T) = \det(A)$$</span></p> <p>Using the geometric definition of the determinant as the area spanned by the <em>columns</em>, could someone give a geometric interpretation of the property?</p>
<p><em>A geometric interpretation in four intuitive steps....</em></p> <p><strong>The Determinant is the Volume Change Factor</strong></p> <p>Think of the matrix as a geometric transformation, mapping points (column vectors) to points: $x \mapsto Mx$. The determinant $\mbox{det}(M)$ gives the factor by which volumes change under this mapping.</p> <p>For example, in the question you define the determinant as the volume of the parallelepiped whose edges are given by the matrix columns. This is exactly what the unit cube maps to, so again, the determinant is the factor by which the volume changes.</p> <p><strong>A Matrix Maps a Sphere to an Ellipsoid</strong></p> <p>Being a linear transformation, a matrix maps a sphere to an ellipsoid. The singular value decomposition makes this especially clear.</p> <p>If you consider the principal axes of the ellipsoid (and their preimage in the sphere), the singular value decomposition expresses the matrix as a product of (1) a rotation that aligns the principal axes with the coordinate axes, (2) scalings in the coordinate axis directions to obtain the ellipsoidal shape, and (3) another rotation into the final position.</p> <p><strong>The Transpose Inverts the Rotation but Keeps the Scaling</strong></p> <p>The transpose of the matrix is very closely related, since the transpose of a product is the reversed product of the transposes, and the transpose of a rotation is its inverse. In this case, we see that the transpose is given by the inverse of rotation (3), the <em>same</em> scaling (2), and finally the inverse of rotation (1).</p> <p>(This is almost the same as the inverse of the matrix, except the inverse naturally uses the <em>inverse</em> of the original scaling (2).)</p> <p><strong>The Transpose has the Same Determinant</strong></p> <p>Anyway, the rotations don't change the volume -- only the scaling step (2) changes the volume. Since this step is exactly the same for $M$ and $M^\top$, the determinants are the same.</p>
<p>This is more-or-less a reformulation of Matt's answer. He relies on the existence of the SVD decomposition; I show that <span class="math-container">$\det(A)=\det(A^T)$</span> can be stated in a slightly different way.</p> <p>Every square matrix can be represented as the product of an orthogonal matrix (representing an isometry) and an upper triangular matrix (<a href="http://en.wikipedia.org/wiki/QR_decomposition" rel="noreferrer">QR decomposition</a>), where the determinant of an upper (or lower) triangular matrix is just the product of the elements along the diagonal (which stay in their place under transposition), so, by the Binet formula, <span class="math-container">$A=QR$</span> gives: <span class="math-container">$$\det(A^T)=\det(R^T Q^T)=\det(R)\det(Q^T)=\det(R)\det(Q^{-1}),$$</span> <span class="math-container">$$\det(A^T)=\frac{\det{R}}{\det{Q}}=\det(Q)\det(R)=\det(QR)=\det(A),$$</span> where we used that the transpose of an orthogonal matrix is its inverse, and the determinant of an orthogonal matrix belongs to <span class="math-container">$\{-1,1\}$</span> - since an orthogonal matrix represents an isometry.</p> <hr /> <p>You can also consider that <span class="math-container">$(*)$</span> the determinant of a matrix is preserved under Gauss-row-moves (replacing a row with the sum of that row with a linear combination of the others) and Gauss-column-moves, too, since the volume spanned by <span class="math-container">$(v_1,\ldots,v_n)$</span> is the same as the volume spanned by <span class="math-container">$(v_1+\alpha_2 v_2+\ldots,v_2,\ldots,v_n)$</span>. 
By Gauss-row-moves you can put <span class="math-container">$A$</span> in upper triangular form <span class="math-container">$R$</span>, then have <span class="math-container">$\det A=\prod R_{ii}.$</span> If you apply the same moves as column moves on <span class="math-container">$A^T$</span>, you end up with <span class="math-container">$R^T$</span>, which is lower triangular and obviously has the same determinant as <span class="math-container">$R$</span>. So, in order to provide a &quot;really geometric&quot; proof that <span class="math-container">$\det(A)=\det(A^T)$</span>, we only need to provide a &quot;really geometric&quot; interpretation of <span class="math-container">$(*)$</span>. An intuition is that the volume of the parallelepiped originally spanned by the columns of <span class="math-container">$A$</span> is the same if we change, for instance, the basis of our vector space by sending <span class="math-container">$(e_1,\ldots,e_n)$</span> to <span class="math-container">$(e_1,\ldots,e_{i-1},e_i+\alpha\, e_j,e_{i+1},\ldots,e_n)\,$</span> with <span class="math-container">$i\neq j$</span>, since the geometric object is the same, and we are only changing its &quot;description&quot;.</p>
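<p>The QR route is just as easy to verify numerically. A short NumPy sketch (one random matrix; <code>np.linalg.qr</code> returns the orthogonal and upper triangular factors):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

Q, R = np.linalg.qr(A)  # Q orthogonal, R upper triangular

# det(Q) is +-1 (an isometry); det(R) is the product of its diagonal entries.
assert np.isclose(abs(np.linalg.det(Q)), 1.0)
assert np.isclose(np.linalg.det(R), np.prod(np.diag(R)))

# Transposing swaps Q for its inverse and R for a lower triangular matrix
# with the same diagonal, so the determinant is unchanged.
assert np.isclose(np.linalg.det(A), np.linalg.det(A.T))
assert np.isclose(np.linalg.det(A), np.linalg.det(Q) * np.prod(np.diag(R)))
```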
combinatorics
<p>I have recently played the game <a href="http://gabrielecirulli.github.io/2048/">2048</a>, created by Gabriele Cirulli, which is fun. I suggest trying if you have not. But my brother posed this question to me about the game: </p> <p>If he were to write a script that made random moves in the game 2048, what is the probability that it would win the game? </p> <p>Combinatorics is not my area, so I did not even attempt to answer this, knowing this seems like a difficult question to answer. But I thought someone here might have a good idea. </p> <p>Also, since we are not concerned with time, just with winning, we can assume that every random move actually results in a tile moving. </p> <p><strong>Addendum</strong></p> <p>While the answers below shed light on the problem, only BoZenKhaa came close to providing a probability, even if it was an upper bound. So I would like to modify the question to: </p> <p>Can we find decent upper and lower bounds for this probability?</p>
<p>I implemented a simulation of 2048, because I wanted to analyze different strategies.</p> <p>Unsurprisingly, the result is that moving at random is a really bad strategy. <img src="https://i.sstatic.net/iGHwk.png" alt="enter image description here"> Above you can see the scores of $1000000$ random games (edit: <em>updated after bugfix, thanks to misof</em>). The score is defined as the sum of all numbers generated by merges. It can be viewed as a measure of how far you make it in the game. For a win you need a score of at least $16384$. You can see that most games end in a region below $2000$, that is, they generate at most a 128-tile and lose subsequently. The heap on the right at $2500$ represents those games that manage to generate a 256-tile; those games are rather rare. No game made it to the 1024-tile.</p> <p>Upon request, here is the plot of the highest number on a tile: <img src="https://i.sstatic.net/YXZgk.png" alt="enter image description here"> When it comes to "dumb strategies", you get better results cycling moves deterministically: move up, right, up, left and repeat. This improves the expected highest number by one tile.</p> <p>You can do your own experiments using the code <a href="https://github.com/bheuer/MSE/blob/master/minimalRandomGame.py" rel="noreferrer">here</a> and <a href="https://github.com/bheuer/MSE/blob/master/RandomGame.py" rel="noreferrer">here</a>.</p>
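<p>For readers who want to reproduce the experiment without fetching the linked scripts, here is a minimal, self-contained sketch of the game logic (my own helper names, not the linked code; the merge rule is the standard one: each tile merges at most once per move, and each spawn is a 2 with probability 0.9, else a 4):</p>

```python
import random

def slide_left(row):
    """Slide one row to the left, merging equal neighbours; return (new_row, gained)."""
    tiles = [t for t in row if t]
    out, gained, i = [], 0, 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            out.append(tiles[i] * 2)
            gained += tiles[i] * 2
            i += 2                      # a merged tile cannot merge again this move
        else:
            out.append(tiles[i])
            i += 1
    return out + [0] * (len(row) - len(out)), gained

def move(board, d):
    """Apply move d (0=left, 1=right, 2=up, 3=down); return a new board and the score gained."""
    if d in (2, 3):
        board = [list(c) for c in zip(*board)]        # work on columns as rows
    rows, gained = [], 0
    for row in board:
        r, g = slide_left(row[::-1]) if d in (1, 3) else slide_left(row)
        rows.append(r[::-1] if d in (1, 3) else r)
        gained += g
    if d in (2, 3):
        rows = [list(c) for c in zip(*rows)]
    return rows, gained

def spawn(board, rng):
    """Place a 2 (90%) or a 4 (10%) on a random empty cell."""
    empty = [(i, j) for i in range(4) for j in range(4) if board[i][j] == 0]
    i, j = rng.choice(empty)
    board[i][j] = 2 if rng.random() < 0.9 else 4

def play_random(seed=0):
    """Play one game with uniformly random legal moves; return the final score."""
    rng = random.Random(seed)
    board = [[0] * 4 for _ in range(4)]
    spawn(board, rng); spawn(board, rng)
    score = 0
    while True:
        legal = [d for d in range(4) if move(board, d)[0] != board]
        if not legal:
            return score                # no move changes the board: game over
        board, gained = move(board, rng.choice(legal))
        score += gained
        spawn(board, rng)
```

Averaging <code>play_random</code> over many seeds reproduces the qualitative picture above: scores mostly well below 2000.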
<p>Instead of trying to get an exact answer, let me give you a very rough, intuition-based estimate built on a few observations about the game and a related question on SO:</p> <ul> <li><p>Before you make it to the 2048 tile, you will need to have at least 10 tiles of different values on the board: $2,4,8,16,32,64,128,256,512$ and $1024$.</p></li> <li><p>You will have to make at least $520$ moves to get to the 2048 tile (each time you make a move, the sum of tiles on the board increases by at most 4).</p></li> <li><p>This is the awesome post on SO concerning the same game: <a href="https://stackoverflow.com/questions/22342854/what-is-the-optimal-algorithm-for-the-game-2048">https://stackoverflow.com/questions/22342854/what-is-the-optimal-algorithm-for-the-game-2048</a> . It is noteworthy that the best algorithm mentioned in one of the answers has a claimed success rate of around 90%, i.e. it does not get to win every time.</p></li> <li><p>In the abovementioned post, it is suggested that a good winning strategy is to select one side, say the top one, and then try not to ever move your highest number away from that side. It is also suggested that if you have to move the high numbers away from this side of choice, it can be hard to save the day and still win.</p></li> </ul> <p>Now for the sake of giving a pseudo-rudimentary estimate, let us entertain the idea that the last bullet point is right about the winning strategy and that this strategy covers most of the winning strategies. </p> <p>Next, imagine that our Random AlgoriThm (RAT) made it to the stage where half the board is covered with different numbers, meaning there are 8 different numbers on the board: $2,4,8,16,32,64,128, 256$. This means we are at most around move number $256 \approx \frac{1}{2}{\sum_{k=1}^{8} 2^k}$. </p> <p>Also, our RAT miraculously made it this far and managed to keep its high numbers by the top side of the board, as in the last bullet point. 
For the final assumption, assume that if the RAT presses the bottom arrow, it will always lose the game (because it is so random, it will not be able to salvage the situation).</p> <p>Now, the chance of our RAT winning after move 256 is certainly smaller than the chance of the RAT never pressing the bottom arrow. There is a $3/4$ chance of the RAT not pressing the down arrow on any given move, and there are at least 256 moves to be made before the RAT can get the 2048 tile. Thus the chance of the RAT winning in our simplified scenario is smaller than $\left(\frac{3}{4}\right)^{256} \leq \frac{1}{2^{32}}$. </p> <p>$P=\frac{1}{2^{32}}$ makes for a rather rare occurrence. As per N. Owad's comment, this chance is MUCH smaller than that of picking one specific second since the beginning of the universe. This should give you some intuition as to how unlikely a random win in this game actually is. </p> <p><strong>Disclaimer</strong>: I do not pretend that $P$ is a bound of any sort for the actual probability of a random win, due to the nature of the simplifications made. It just illustrates a number which is likely larger than the chance of randomly winning. </p>
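<p>The final inequality is easy to confirm with exact rational arithmetic; a short Python sketch:</p>

```python
from fractions import Fraction

# (3/4)^256: the chance of never pressing "down" in 256 uniformly random moves.
p = Fraction(3, 4) ** 256
assert p <= Fraction(1, 2 ** 32)   # the bound claimed above, checked exactly
print(float(p))                    # roughly 1e-32
```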
geometry
<p>For example, the square can be described with the equation $|x| + |y| = 1$. So is there a general equation that can describe a regular polygon (in the 2D Cartesian plane?), given the number of sides required?</p> <p>Using the Wolfram Alpha site, this input gave an almost-square: <code>PolarPlot(0.75 + ArcSin(Sin(2x+Pi/2))/(Sin(2x+Pi/2)*(Pi/4))) (x from 0 to 2Pi)</code></p> <p>This input gave an almost-octagon: <code>PolarPlot(0.75 + ArcSin(Sin(4x+Pi/2))/(Sin(4x+Pi/2)*Pi^2)) (x from 0 to 2Pi)</code></p> <p>The idea is that as the number of sides in a regular polygon goes to infinity, the regular polygon approaches a circle. Since a circle can be described by an equation, can a regular polygon be described by one too? For our purposes, this is a regular convex polygon (triangle, square, pentagon, hexagon and so on).</p> <p>It can be assumed that the centre of the regular polygon is at the origin $(0,0)$, and the radius is $1$ unit.</p> <p>If there's no such equation, can the non-existence be proven? If there <em>are</em> equations, but only for certain polygons (for example, only for $n &lt; 7$ or something), can those equations be provided?</p>
<p>Any polygon (regular or not) can be described by an equation involving only absolute values and polynomials. Here is a small explanation of how to do that.</p> <p>Let's say that a curve $C$ is given by the equation $f$ if we have $C = \{(x,y) \in \mathbb{R}^2, \, f(x,y) = 0\}$.</p> <ul> <li><p>If $C_1$ and $C_2$ are given by $f_1$ and $f_2$ respectively, then $C_1 \cup C_2$ is given by $f_1 . f_2$ and $C_1 \cap C_2$ is given by $f_1^2 + f_2^2$ (or $|f_1| + |f_2|$). So if $C_1$ and $C_2$ can be described by an equation involving absolute values and polynomials, then so do $C_1 \cup C_2$ and $C_1 \cap C_2$.</p></li> <li><p>If $C = \{(x,y) \in \mathbb{R}^2, \, f(x,y) \ge 0\}$, then $C$ is given by the equation $|f|-f$.</p></li> </ul> <p>Now, any segment $S$ can be described as $S = \{(x,y) \in \mathbb{R}^2, \, a x + b y = c, \, x_0 \le x \le x_1, \, y_0 \le y \le y_1\}$, which is given by a single equation by the above principles. And since union of segments also are given by an equation, you get the result.</p> <p>EDIT : For the specific case of the octagon of radius $r$, if you denote $s = \sin(\pi/8)$, $c = \cos(\pi/8)$, then one segment is given by $|y| \le rs$ and $x = rc$, for which an equation is</p> <p>$$f(x, y) = \left||rs - |y|| - (rs - |y|)\right| + |x-rc| = 0$$</p> <p>So I think the octagon is given by</p> <p>$$f(|x|,|y|) \ f(|y|,|x|) \ f\left(\frac{|x|+|y|}{\sqrt{2}}, \frac{|x|-|y|}{\sqrt{2}}\right) = 0$$ </p> <p>To get a general formula for a regular polygon of radius $r$ with $n$ sides, denote $c_n = \cos(\pi/n)$, $s_n = \sin(\pi/n)$ and</p> <p>$$f_n(x+iy) = \left||rs_n - |y|| - (rs_n - |y|)\right| + |x-rc_n|$$</p> <p>then your polygon is given by</p> <p>$$\prod_{k = 0}^{n-1} f_n\left(e^{-\frac{2 i k \pi}{n}} (x+iy)\right) = 0$$</p> <p>Depending on $n$, you can use symmetries to lower the degree a bit (as was done with $n = 8$).</p>
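<p>A quick numerical sanity check of the general product formula (a sketch; <code>f</code> below is the single-segment equation from the answer, and the function names are mine):</p>

```python
import cmath
import math

def ngon_equation(z, n, r=1.0):
    """Product formula: 0 exactly on the regular n-gon boundary of circumradius r, > 0 off it."""
    c, s = math.cos(math.pi / n), math.sin(math.pi / n)

    def f(w):  # one edge: x = r*c with |y| <= r*s
        x, y = w.real, w.imag
        return abs(abs(r * s - abs(y)) - (r * s - abs(y))) + abs(x - r * c)

    prod = 1.0
    for k in range(n):
        prod *= f(cmath.exp(-2j * k * math.pi / n) * z)
    return prod

n = 6
# The midpoint of the edge centred on the positive x-axis lies on the hexagon...
assert ngon_equation(complex(math.cos(math.pi / n), 0), n) == 0.0
# ...and so does a vertex (up to floating-point noise):
assert ngon_equation(cmath.exp(1j * math.pi / n), n) < 1e-9
# The centre does not:
assert ngon_equation(0j, n) > 0.1
```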
<p>Here's a parametric equation I have made for a regular <span class="math-container">$n$</span>-gon, coded in R:</p> <pre><code>n=5; theta=(0:999)/1000; r=cos(pi/n)/cos(2*pi*(n*theta)%%1/n-pi/n); plot(r*cos(2*pi*theta),r*sin(2*pi*theta),asp=1,xlab=&quot;X&quot;,ylab=&quot;Y&quot;, main=paste(&quot;Regular &quot;,n,&quot;-gon&quot;,sep=&quot;&quot;)); </code></pre> <p>And picture:</p> <p><img src="https://i.sstatic.net/bUhKk.png" alt="5-gon" /></p> <p>The formula I used is</p> <p><span class="math-container">$$\displaystyle r=\frac{\cos\left(\frac{\pi}{n}\right)}{\cos\left(\left(\theta \mod \frac{2\pi}{n}\right) -\frac{\pi}{n}\right)} \; .$$</span></p> <p>This equation is actually just the polar equation for the line through the point <span class="math-container">$(1,0)$</span> and <span class="math-container">$(\cos(2\pi/n),\sin(2\pi/n))$</span> which contains one of the edges. By restricting the range of the variable <span class="math-container">$\theta$</span> to the interval <span class="math-container">$[0,2\pi/n[$</span>, you will in fact just get that edge. Now, we want to replicate that edge by rotating it repeatedly through an angle <span class="math-container">$2\pi/n$</span> to get the full polygon. But this can also be achieved by using the modulus function and reducing all angles to the interval <span class="math-container">$[0,2\pi/n[$</span>. This way, you get the polar equation I propose.</p> <p>So, using polar plots and the modulo function, it's pretty easy to make regular <span class="math-container">$n$</span>-gons.</p>
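<p>The same formula is easy to check outside of R as well; a small Python sketch (my translation of the snippet above): at a vertex angle the radius should be $1$, and at an edge midpoint it should equal the apothem $\cos(\pi/n)$.</p>

```python
import math

def r_ngon(theta, n):
    """Polar radius of the regular n-gon with circumradius 1 and a vertex at angle 0."""
    return math.cos(math.pi / n) / math.cos(theta % (2 * math.pi / n) - math.pi / n)

n = 5
assert math.isclose(r_ngon(0.0, n), 1.0)                            # vertex
assert math.isclose(r_ngon(math.pi / n, n), math.cos(math.pi / n))  # edge midpoint
assert math.isclose(r_ngon(2 * math.pi / n, n), 1.0)                # next vertex
```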
linear-algebra
<p>Matrices such as</p> <p>$$ \begin{bmatrix} \cos\theta &amp; \sin\theta \\ \sin\theta &amp; -\cos\theta \end{bmatrix} \text{ or } \begin{bmatrix} \cos\theta &amp; i\sin\theta \\ -i\sin\theta &amp; -\cos\theta \end{bmatrix} \text{ or } \begin{bmatrix} \pm 1 &amp; 0 \\ 0 &amp; \pm 1 \end{bmatrix} $$</p> <p>are both <a href="http://en.wikipedia.org/wiki/Unitary_matrix">unitary</a> and <a href="http://en.wikipedia.org/wiki/Hermitian_matrix">Hermitian</a> (for $0 \le \theta \le 2\pi$). I call the latter type <strong>trivial</strong>, since its columns equal to plus/minus columns of the identity matrix.</p> <blockquote> <p>Do such matrices have any significance (in theory or practice)?</p> </blockquote> <p>In the <a href="http://wiki.answers.com/Q/Every_unitary_matrix_is_hermitian">answer to this question</a>, it is said that "for every Hilbert space except $\mathbb{C}^2$, a unitary matrix cannot be Hermitian and vice versa." It was <a href="http://wiki.answers.com/Q/Discuss%3aEvery_unitary_matrix_is_hermitian">commented</a> that identity matrices are always both unitary and Hermitian, and so this rule is not true. In fact, all <strong>trivial</strong> matrices (as defined above) have this property. Moreover, matrices such as</p> <p>$$ \begin{bmatrix} \sqrt {0.5} &amp; 0 &amp; \sqrt {0.5} \\ 0 &amp; 1 &amp; 0 \\ \sqrt {0.5} &amp; 0 &amp; -\sqrt {0.5} \end{bmatrix} $$</p> <p>are both unitary and Hermitian.</p> <p>So, the general rule in the aforementioned question seems to be pointless.</p> <blockquote> <p>It seems that, for any $n &gt; 1$, infinitely many matrices over the Hilbert space $\mathbb{C}^n$ are simultaneously unitary and Hermitian, right?</p> </blockquote>
<p>Unitary matrices are precisely the matrices admitting a complete set of orthonormal eigenvectors such that the corresponding eigenvalues are on the unit circle. Hermitian matrices are precisely the matrices admitting a complete set of orthonormal eigenvectors such that the corresponding eigenvalues are real. So unitary Hermitian matrices are precisely the matrices admitting a complete set of orthonormal eigenvectors such that the corresponding eigenvalues are $\pm 1$.</p> <p>This is a very strong condition. As George Lowther says, any such matrix $M$ has the property that $P = \frac{M+1}{2}$ admits a complete set of orthonormal eigenvectors such that the corresponding eigenvalues are $0, 1$; thus $P$ is a Hermitian <a href="http://en.wikipedia.org/wiki/Idempotence">idempotent</a>, or as George Lowther says an orthogonal projection. Of course such matrices are interesting and appear naturally in mathematics, but it seems to me that in general it's more natural to start from the idempotence condition.</p> <p>I suppose one could say that Hermitian unitary matrices precisely describe unitary representations of the cyclic group $C_2$, but from this perspective the fact that such matrices happen to be Hermitian is an accident coming from the fact that $2$ is too small. </p>
<p>A matrix $M$ is unitary and Hermitian if and only if $M=2P-1$ for an <a href="http://en.wikipedia.org/wiki/Orthogonal_projection#Orthogonal_projections">orthogonal projection</a> $P$. That is, $P$ is Hermitian and $P^2=P$.</p>
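<p>This characterisation is easy to test numerically; a sketch with NumPy, building a random orthogonal projection $P$ from $k$ orthonormal columns (real case, so Hermitian means symmetric):</p>

```python
import numpy as np

rng = np.random.default_rng(42)
n, k = 5, 2

# Orthonormal basis of a random k-dimensional subspace.
V, _ = np.linalg.qr(rng.standard_normal((n, k)))
P = V @ V.T                      # orthogonal projection: P = P.T and P @ P = P
M = 2 * P - np.eye(n)            # the claimed unitary Hermitian matrix

assert np.allclose(P, P.T) and np.allclose(P @ P, P)
assert np.allclose(M, M.T)                        # Hermitian (symmetric here)
assert np.allclose(M @ M.T, np.eye(n))            # unitary (orthogonal)
assert np.allclose(np.linalg.eigvalsh(M), [-1, -1, -1, 1, 1])  # eigenvalues +-1
```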
probability
<p>Not sure if this is a question for math.se or stats.se, but here we go:</p> <p>Our MUD (Multi-User-Dungeon, a sort of textbased world of warcraft) has a casino where players can play a simple roulette.</p> <p>My friend has devised this algorithm, which he himself calls genius:</p> <ul> <li>Bet 1 gold</li> <li>If you win, bet 1 gold again</li> <li>If you lose, bet double what you bet before. Continue doubling until you win.</li> </ul> <p>He claimed you will always win exactly 1 gold using this system, since even if you lose say 8 times, you lost 1+2+4+8+16+32+64+128 gold, but then won 256 gold, which still makes you win 1 gold.</p> <p>He programmed this algorithm in his favorite MUD client, let it run for the night. When he woke up the morning, he was broke. </p> <p>Why did he lose? What is the fault in his reasoning?</p>
<p>Suppose, for simplicity, that the probability of winning one round of this game is $\frac{1}{2}$, and the probability of losing is also $\frac{1}{2}$. (Roulette in real life is not such a game, unfortunately.) Let $X_0$ be the initial wealth of the player, and write $X_t$ for the wealth of the player at time $t$. Assuming that the outcome of each round of the game is independent and identically distributed, $(X_0, X_1, X_2, \ldots)$ forms what is known as a <a href="http://en.wikipedia.org/wiki/Martingale_%28probability_theory%29">martingale</a> in probability theory. Indeed, using the bet-doubling strategy outlined, at any time $t$, the expected wealth of the player at time $t + 1$ is $$\mathbb{E} \left[ X_{t+1} \middle| X_0, X_1, \ldots, X_t \right] = X_t$$ because the player wins or loses an equal amount with probability $\frac{1}{2}$ in each case, and $$\mathbb{E} \left[ \left| X_t \right| \right] &lt; \infty$$ because there are only finitely many different outcomes at each stage.</p> <p>Now, let $T$ be the first time the player either wins or goes bankrupt. This is a random variable depending on the complete history of the game, but we can say a few things about it. For instance, $$X_T = \begin{cases} 0 &amp; \text{ if the player goes bankrupt before winning once} \\ X_0 + 1 &amp; \text{ if the player wins at least once} \end{cases}$$ so by linearity of expectation, $$\mathbb{E} \left[ X_T \right] = (X_0 + 1) \mathbb{P} \left[ \text{the player wins at least once} \right]$$ and therefore we may compute the probability of winning as follows: $$\mathbb{P} \left[ \text{the player wins at least once} \right] = \frac{\mathbb{E} \left[ X_T \right]}{X_0 + 1}$$ But how do we compute $\mathbb{E} \left[ X_T \right]$? For this, we need to know that $T$ is almost surely finite. This is clear by case analysis: if the player wins at least once, then $T$ is finite; but the player cannot have an infinite losing streak before going bankrupt either. 
Thus we may apply the <a href="http://en.wikipedia.org/wiki/Optional_stopping_theorem">optional stopping theorem</a> to conclude: $$\mathbb{E} \left[ X_T \right] = X_0$$ $$\mathbb{P} \left[ \text{the player wins at least once} \right] = \frac{X_0}{X_0 + 1}$$ In other words, the probability of this betting strategy turning a profit is positively correlated with the amount $X_0$ of starting capital – no surprises there! </p> <hr> <p>Now let's do this repeatedly. The remarkable thing is that we get <em>another</em> martingale! Indeed, if $Y_n$ is the player's wealth after playing $n$ series of this game, then $$\mathbb{E} \left[ Y_{n+1} \middle| Y_0, Y_1, \ldots, Y_n \right] = 0 \cdot \frac{1}{Y_n + 1} + (Y_n + 1) \cdot \frac{Y_n}{Y_n + 1} = Y_n$$ by linearity of expectation, and obviously $$\mathbb{E} \left[ \left| Y_n \right| \right] \le Y_0 + n &lt; \infty$$ because $Y_n$ is either $0$ or $Y_{n-1} + 1$.</p> <p>Let $T_k$ be the first time the player either earns a profit of $k$ or goes bankrupt. So, $$Y_{T_k} = \begin{cases} 0 &amp;&amp; \text{ if the player goes bankrupt} \\ Y_0 + k &amp;&amp; \text{ if the player earns a profit of } k \end{cases}$$ and again we can apply the same analysis to determine that $$\mathbb{P} \left[ \text{the player earns a profit of $k$ before going bankrupt } \right] = \frac{Y_0}{Y_0 + k}$$ which is not too surprising – if the player is greedy and wants to earn a larger profit, then the player has to play more series of games, thereby increasing his chances of going bankrupt.</p> <p>But what we really want to compute is the probability of going bankrupt at all. I claim this happens with probability $1$. 
Indeed, if the player loses even once, then he is already bankrupt, so the only way the player could avoid going bankrupt is if he has an infinite winning streak; the probability of this happening is $$\frac{Y_0}{Y_0 + 1} \cdot \frac{Y_0 + 1}{Y_0 + 2} \cdot \frac{Y_0 + 2}{Y_0 + 3} \cdot \cdots = \lim_{n \to \infty} \frac{Y_0}{Y_0 + n} = 0$$ as claimed. So this strategy <a href="http://en.wikipedia.org/wiki/Almost_surely">almost surely</a> leads to ruin.</p>
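<p>The formula $\mathbb{P}[\text{win a series}] = \frac{X_0}{X_0+1}$ is easy to check by simulation. A Monte Carlo sketch (assumption: the player is ruined exactly when the next doubled bet can no longer be covered; choosing $X_0 = 2^m - 1$ makes this coincide with hitting wealth $0$, as in the analysis above):</p>

```python
import random

def play_series(wealth, rng):
    """Double bets on a fair coin until one win or ruin; return the final wealth."""
    bet = 1
    while wealth >= bet:
        wealth -= bet
        if rng.random() < 0.5:       # win: get back 2 * bet and stop
            return wealth + 2 * bet
        bet *= 2                     # lose: double the stake and try again
    return wealth                    # cannot cover the next bet: ruined

rng = random.Random(0)
x0, trials = 7, 200_000              # 7 = 2**3 - 1, so ruin means exactly 0 gold
wins = sum(play_series(x0, rng) == x0 + 1 for _ in range(trials))
print(wins / trials)                 # close to 7/8 = 0.875, as predicted
```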
<p>This betting strategy is very smart if you have access to infinite wealth or can go into infinite debt. In reality, however, you will eventually lose all or most of your money.</p> <p>Say your friend had $k$ gold at the beginning. I assume that this simple roulette has a probability of both win and loss equal to $0.5$.</p> <p>First, let's see how many times you need to lose in a row in order to lose all your wealth.</p> <p>\begin{align} 1 + 2 + 2^2 + 2^3 + \cdots + 2^n &amp;\geq k \\ 2^{n+1} - 1 &amp;\geq k \\ n &amp;\geq \log_{2}(k+1) - 1 \end{align}</p> <p>So even if you start with $10000$ gold, after $13$ lost bets you have lost $2^{13}-1 = 8191$ gold and cannot cover the next doubled bet of $8192$ without going into debt. Continuing this example, the probability of this happening in a one-shot game is a mere $2^{-13} \approx 0.01$%. However, if you keep the algorithm running all night for $8$ hours betting every $5$ seconds, your chances of having a <a href="http://www.sbrforum.com/betting-tools/streak-calculator/">losing streak</a> of $13$ in a row go up to $29.61$%.</p> <p>Assuming that you cannot go into debt and $12$ losses is the most you can handle, then with the same data the chance of losing most of your money goes up to $50.5$%.</p>
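<p>The streak probabilities quoted above can be reproduced without the linked calculator by a small dynamic program (a sketch; the state tracked is the length of the current losing run):</p>

```python
def prob_losing_streak(n_bets, streak, p_loss=0.5):
    """Probability of at least `streak` consecutive losses somewhere in n_bets fair bets."""
    state = [0.0] * streak   # state[i] = P(current run length == i, no streak yet)
    state[0] = 1.0
    hit = 0.0                # P(the streak has already happened)
    for _ in range(n_bets):
        new = [0.0] * streak
        new[0] = (1 - p_loss) * sum(state)     # a win resets the run
        for i in range(streak - 1):
            new[i + 1] = p_loss * state[i]     # a loss extends the run
        hit += p_loss * state[streak - 1]      # a loss completes the streak
        state = new
    return hit

# 8 hours of betting every 5 seconds = 5760 bets.
p13 = prob_losing_streak(8 * 60 * 60 // 5, 13)
p12 = prob_losing_streak(8 * 60 * 60 // 5, 12)
print(round(p13, 4), round(p12, 4))   # ~0.296 and ~0.505, matching the figures above
```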
game-theory
<p>I have a fairly simple question. How many legal states of chess exists? "Legal" as in allowed by the rules and "state" as an unique configuration of the pieces.</p> <p>I'm <em>not</em> asking for the number of possible chess games. I'm asking for the number of possible <em>legal states</em>, or chess board configurations, the rules of chess allows.</p> <p>For example, a white pawn can't be on [A-H]1 and <strike>a king can't be in check by two different pieces at the same time</strike>. Such states are obviously not allowed by the rules and illegal.</p>
<p>You should clarify whether you wish to differentiate between positions based on en passant, castling, and whose side it is to move. François Labelle and others have used the term "chess position" to indicate a board state including the above information, and "chess diagram" to indicate a board state not including the above information, i.e. just what pieces are on the board and where. In neither case is information for drawing rules like the 50-move rule or the triple repetition rule included.</p> <p>The best upper bound found for the number of chess positions is 7728772977965919677164873487685453137329736522, or about $7.7 * 10^{45}$, based on a complicated program by John Tromp; according to him, better documentation is required in order for the program to be considered verifiable. He also has a much simpler program that gives an upper bound of $4.5 * 10^{46}$.</p> <p>For chess diagrams, Tromp's simpler program gives an upper bound of about $2.2 * 10^{46}$; he does not say what bound is obtained by the complicated program, but it is probably a little less than half (since side to move doubles the bound, whereas castling and en passant add relatively little), so likely about $3.8 * 10^{45}$. More information at <a href="https://tromp.github.io/chess/chess.html" rel="noreferrer">Tromp's website</a>.</p> <p>The best bound published in a journal was obtained by Shirish Chinchalkar in "An Upper Bound for the Number of Reachable Positions". I do not have access to this paper, but according to Tromp it is about $10^{46.25}$. Although it refers to "positions", it could very easily be a bound for the number of diagrams.</p> <p>As for lower bounds, they are much more difficult, since a given position could be illegal for very subtle reasons. 
Wikipedia has claimed that the number of positions is "between $10^{43}$ and $10^{47}$", but I think it is unlikely that the lower bound has been proven.</p> <p>I would guess that the actual number of diagrams is between $10^{44}$ and $10^{45}$, but this is purely speculation.</p>
<p>Since you want all the states "that the rules allow," it is very unlikely that you can get an exact answer with current computers, but perhaps you can get upper and lower bounds. One hand-wavy proof for why this should be is that in order to count states, you would almost certainly have to enumerate them (or enumerate equivalence classes of them where you have a counting formula for each equivalence class). But in computer-based chess strategy, enumeration of possible future states (and understanding which state can lead to which state) is how virtually all computer chess players play, and yet they can't enumerate everything to the point of finding an initial winning strategy (or proving that no winning strategy exists). In fact, even super-computers can't even come close.</p>
combinatorics
<blockquote> <p>One needs to choose six real numbers <span class="math-container">$x_1,x_2,\cdots,x_6$</span> such that the product of any five of them is equal to the other number. The number of such choices is</p> <p>A) <span class="math-container">$3$</span></p> <p>B) <span class="math-container">$33$</span></p> <p>C) <span class="math-container">$63$</span></p> <p>D) <span class="math-container">$93$</span></p> </blockquote> <p>I believe this is not so hard a problem, but I have no clue how to proceed. Here is my work so far.</p> <p>Say the numbers are $a,b,c,d,e, abcde$. Then $b\cdot c\cdot d\cdot e\cdot abcde=a$, hence $bcde= \pm 1$ (assuming $a \neq 0$).</p> <p>Basically I couldn't even find a single occasion where such things occur except all of these numbers being either $1$ or $-1$. Are these all the cases?</p>
<p>Here's an argument which extends to <span class="math-container">$n$</span> real numbers quite nicely! We can start by noting that <span class="math-container">$$x_1 (x_2x_3x_4x_5x_6) = x_1^2$$</span> And by commutativity, we get <span class="math-container">$x_i^2 = x_j^2$</span> which implies that all the magnitudes are equal.</p> <p>Now if <span class="math-container">$L$</span> is the magnitude, we also must have <span class="math-container">$L = L^5$</span>, and from this we conclude that <span class="math-container">$L=0$</span> or <span class="math-container">$1$</span>.</p> <p>Now we just have to go through the possibilities. If <span class="math-container">$L = 0$</span>, then we have all <span class="math-container">$0$</span>'s</p> <p>If <span class="math-container">$L = 1$</span>, then we have all <span class="math-container">$-1$</span>'s and <span class="math-container">$1$</span>'s. This configuration works iff the number of <span class="math-container">$-1$</span>'s is even, as this would imply that <span class="math-container">$$\frac{\prod_{i=1}^6 x_i}{1} = 1 \text{ and } \frac{\prod_{i=1}^6 x_i}{-1} = -1$$</span> Now we can count configurations. There will be <span class="math-container">$$\sum_{i=0}^3 \binom{6}{2i} = 2^5$$</span> possibilities. And finally, we have <span class="math-container">$1+32 = 33$</span>.</p>
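<p>Since the argument shows every solution has entries in $\{-1,0,1\}$, the count of $33$ can be confirmed by brute force; a short Python sketch:</p>

```python
from itertools import product
from math import prod

def valid(t):
    # Each entry must equal the product of the other five.
    return all(prod(t[:i] + t[i + 1:]) == t[i] for i in range(6))

count = sum(valid(t) for t in product((-1, 0, 1), repeat=6))
print(count)  # 33
```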
<p>Well, if all the entries are <span class="math-container">$1$</span>'s and <span class="math-container">$-1$</span>'s with an even number of <span class="math-container">$-1$</span>'s, then the property holds, so that's <span class="math-container">$$\binom{6}{0}+ \binom{6}{2}+ \binom{6}{4}+ \binom{6}{6}=32$$</span> And then there's the case that they're all <strong>zero</strong>. Thus, there are <span class="math-container">$33$</span> total cases.</p>
matrices
<p>Can someone point me to a paper, or show here, why symmetric matrices have orthogonal eigenvectors? In particular, I'd like to see proof that for a symmetric matrix $A$ there exists decomposition $A = Q\Lambda Q^{-1} = Q\Lambda Q^{T}$ where $\Lambda$ is diagonal.</p>
<p>For any real matrix $A$ and any vectors $\mathbf{x}$ and $\mathbf{y}$, we have $$\langle A\mathbf{x},\mathbf{y}\rangle = \langle\mathbf{x},A^T\mathbf{y}\rangle.$$ Now assume that $A$ is symmetric, and $\mathbf{x}$ and $\mathbf{y}$ are eigenvectors of $A$ corresponding to distinct eigenvalues $\lambda$ and $\mu$. Then $$\lambda\langle\mathbf{x},\mathbf{y}\rangle = \langle\lambda\mathbf{x},\mathbf{y}\rangle = \langle A\mathbf{x},\mathbf{y}\rangle = \langle\mathbf{x},A^T\mathbf{y}\rangle = \langle\mathbf{x},A\mathbf{y}\rangle = \langle\mathbf{x},\mu\mathbf{y}\rangle = \mu\langle\mathbf{x},\mathbf{y}\rangle.$$ Therefore, $(\lambda-\mu)\langle\mathbf{x},\mathbf{y}\rangle = 0$. Since $\lambda-\mu\neq 0$, then $\langle\mathbf{x},\mathbf{y}\rangle = 0$, i.e., $\mathbf{x}\perp\mathbf{y}$.</p> <p>Now find an orthonormal basis for each eigenspace; since the eigenspaces are mutually orthogonal, these vectors together give an orthonormal subset of $\mathbb{R}^n$. Finally, since symmetric matrices are diagonalizable, this set will be a basis (just count dimensions). The result you want now follows.</p>
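<p>NumPy's <code>eigh</code> returns exactly this orthonormal eigenbasis, so the decomposition is easy to verify for a random symmetric matrix (a sketch, real case):</p>

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                    # a random real symmetric matrix

w, Q = np.linalg.eigh(A)             # eigenvalues w, orthonormal eigenvectors as columns of Q

assert np.allclose(Q.T @ Q, np.eye(4))        # the columns are orthonormal, so Q^{-1} = Q^T
assert np.allclose(A, Q @ np.diag(w) @ Q.T)   # A = Q Lambda Q^T = Q Lambda Q^{-1}
```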
<p>Since being symmetric is the property of an operator, not just its associated matrix, let me use <span class="math-container">$\mathcal{A}$</span> for the linear operator whose associated matrix in the standard basis is <span class="math-container">$A$</span>. Arturo and Will proved that a real symmetric operator <span class="math-container">$\mathcal{A}$</span> has real eigenvalues (thus real eigenvectors) and that eigenvectors corresponding to different eigenvalues are orthogonal. <em>One question still stands: how do we know that there are no generalized eigenvectors of rank more than 1, that is, all Jordan blocks are one-dimensional?</em> Indeed, by referencing the theorem that any symmetric matrix is diagonalizable, Arturo effectively threw the baby out with the bathwater: showing that a matrix is diagonalizable is tautologically equivalent to showing that it has a full set of eigenvectors. Assuming this as a given dismisses half of the question: we were asked to show that <span class="math-container">$\Lambda$</span> is diagonal, and not just a generic Jordan form. Here I will untangle this bit of circular logic.</p> <p>We prove by induction in the number of eigenvectors, namely it turns out that finding an eigenvector (and at least one exists for any matrix) of a symmetric matrix always allows us to generate another eigenvector. So we will run out of dimensions before we run out of eigenvectors, making the matrix diagonalizable.</p> <p>Suppose <span class="math-container">$\lambda_1$</span> is an eigenvalue of <span class="math-container">$A$</span> and there exists at least one eigenvector <span class="math-container">$\boldsymbol{v}_1$</span> such that <span class="math-container">$A\boldsymbol{v}_1=\lambda_1 \boldsymbol{v}_1$</span>. Choose an orthonormal basis <span class="math-container">$\boldsymbol{e}_i$</span> so that <span class="math-container">$\boldsymbol{e}_1=\boldsymbol{v}_1$</span>. 
The change of basis is represented by an orthogonal matrix <span class="math-container">$V$</span>. In this new basis the matrix associated with <span class="math-container">$\mathcal{A}$</span> is <span class="math-container">$$A_1=V^TAV.$$</span> It is easy to check that <span class="math-container">$\left(A_1\right)_{11}=\lambda_1$</span> and all the rest of the numbers <span class="math-container">$\left(A_1\right)_{1i}$</span> and <span class="math-container">$\left(A_1\right)_{i1}$</span> are zero. In other words, <span class="math-container">$A_1$</span> looks like this: <span class="math-container">$$\left( \begin{array}{c|ccc} \lambda_1 &amp; \\ \hline &amp; &amp; \\ &amp; &amp; B_1 &amp; \\ &amp; &amp; \end{array} \right)$$</span> Thus the operator <span class="math-container">$\mathcal{A}$</span> breaks down into a direct sum of two operators: <span class="math-container">$\lambda_1$</span> in the subspace <span class="math-container">$\mathcal{L}\left(\boldsymbol{v}_1\right)$</span> (<span class="math-container">$\mathcal{L}$</span> stands for linear span) and a symmetric operator <span class="math-container">$\mathcal{A}_1=\mathcal{A}\mid_{\mathcal{L}\left(\boldsymbol{v}_1\right)^{\bot}}$</span> whose associated <span class="math-container">$(n-1)\times (n-1)$</span> matrix is <span class="math-container">$B_1=\left(A_1\right)_{i &gt; 1,j &gt; 1}$</span>. <span class="math-container">$B_1$</span> is symmetric thus it has an eigenvector <span class="math-container">$\boldsymbol{v}_2$</span> which has to be orthogonal to <span class="math-container">$\boldsymbol{v}_1$</span> and the same procedure applies: change the basis again so that <span class="math-container">$\boldsymbol{e}_1=\boldsymbol{v}_1$</span> and <span class="math-container">$\boldsymbol{e}_2=\boldsymbol{v}_2$</span> and consider <span class="math-container">$\mathcal{A}_2=\mathcal{A}\mid_{\mathcal{L}\left(\boldsymbol{v}_1,\boldsymbol{v}_2\right)^{\bot}}$</span>, etc. 
After <span class="math-container">$n$</span> steps we will get a diagonal matrix <span class="math-container">$A_n$</span>.</p> <p>There is a slightly more elegant proof that does not involve the associated matrices: let <span class="math-container">$\boldsymbol{v}_1$</span> be an eigenvector of <span class="math-container">$\mathcal{A}$</span> and <span class="math-container">$\boldsymbol{v}$</span> be any vector such that <span class="math-container">$\boldsymbol{v}_1\bot \boldsymbol{v}$</span>. Then <span class="math-container">$$\left(\mathcal{A}\boldsymbol{v},\boldsymbol{v}_1\right)=\left(\boldsymbol{v},\mathcal{A}\boldsymbol{v}_1\right)=\lambda_1\left(\boldsymbol{v},\boldsymbol{v}_1\right)=0.$$</span> This means that the restriction <span class="math-container">$\mathcal{A}_1=\mathcal{A}\mid_{\mathcal{L}\left(\boldsymbol{v}_1\right)^{\bot}}$</span> is an operator on the <span class="math-container">$(n-1)$</span>-dimensional space <span class="math-container">${\mathcal{L}\left(\boldsymbol{v}_1\right)^{\bot}}$</span>, which it maps into itself. <span class="math-container">$\mathcal{A}_1$</span> is symmetric for obvious reasons and thus has an eigenvector <span class="math-container">$\boldsymbol{v}_2$</span> which will be orthogonal to <span class="math-container">$\boldsymbol{v}_1$</span>.</p>
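<p>To make the deflation argument above concrete, here is a small numerical sketch (my own addition, plain Python, all names mine): find one eigenvector of a specific symmetric matrix by power iteration, extend it to an orthonormal basis by Gram-Schmidt, and check that the matrix in the new basis has the block form displayed above.</p>

```python
# Sketch of the deflation step for a concrete symmetric matrix A
# (eigenvalues 4, 2, 1). Power iteration finds the dominant eigenvector v1;
# Gram-Schmidt extends it to an orthonormal basis; the matrix of the operator
# in that basis then has lambda_1 in the corner and zeros beside it.

def matvec(A, v):
    return [sum(aij * vj for aij, vj in zip(row, v)) for row in A]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def normalize(v):
    n = dot(v, v) ** 0.5
    return [x / n for x in v]

A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]

# Power iteration: converges to the eigenvector of the largest eigenvalue (4).
v1 = normalize([1.0, 1.0, 1.0])
for _ in range(200):
    v1 = normalize(matvec(A, v1))

# Gram-Schmidt: extend v1 to an orthonormal basis e1 = v1, e2, e3.
basis = [v1]
for k in range(3):
    w = [1.0 if i == k else 0.0 for i in range(3)]
    for b in basis:
        c = dot(w, b)
        w = [wi - c * bi for wi, bi in zip(w, b)]
    if dot(w, w) > 1e-12:           # skip the standard vector that is dependent
        basis.append(normalize(w))

# Matrix of the operator in the new basis: (A1)_{ij} = e_i . A e_j.
A1 = [[dot(basis[i], matvec(A, basis[j])) for j in range(3)]
      for i in range(3)]

print(A1[0][0])               # ~4.0, the eigenvalue lambda_1
print(A1[0][1], A1[0][2])     # ~0: first row is (lambda_1, 0, 0)
print(A1[1][0], A1[2][0])     # ~0: first column likewise
```

The lower-right <code>2&times;2</code> block of <code>A1</code> is the matrix of the restricted operator, and the same step can be repeated on it.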
combinatorics
<p>A somewhat information theoretical paradox occurred to me, and I was wondering if anyone could resolve it.</p> <p>Let <span class="math-container">$p(x) = x^n + c_{n-1} x^{n-1} + \cdots + c_0 = (x - r_0) \cdots (x - r_{n-1})$</span> be a degree <span class="math-container">$n$</span> polynomial with leading coefficient <span class="math-container">$1$</span>. Clearly, the polynomial can be specified exactly by its <span class="math-container">$n$</span> coefficients <span class="math-container">$c=\{c_{n-1}, \ldots, c_0\}$</span> <strong>OR</strong> by its <span class="math-container">$n$</span> roots <span class="math-container">$r=\{r_{n-1}, \ldots, r_0\}$</span>.</p> <p>So the roots and the coefficients contain the same information. However, it takes less information to specify the roots, because their <strong>order doesn't matter</strong>. (i.e. the roots of the polynomial require <span class="math-container">$\lg(n!)$</span> bits less information to specify than the coefficients).</p> <p>Isn't this a paradox? Or is my logic off somewhere?</p> <p>Edit: To clarify, all values belong to any <a href="https://en.wikipedia.org/wiki/Algebraically_closed_field" rel="noreferrer">algebraically closed field</a> (such as the complex numbers). And note that the leading coefficient is specified to be 1, meaning that there is absolutely a one-to-one correspondence between the <span class="math-container">$n$</span> remaining coefficients <span class="math-container">$c$</span> and the <span class="math-container">$n$</span> roots <span class="math-container">$r$</span>.</p>
<p>What is happening here is just a consequence of the fact that an infinite set and a proper subset of it can be in bijective correspondence. That's a well-known fact about infinite sets. And it is a paradox in the sense that it is counter-intuitive, but not in the sense that it leads to a contradiction.</p>
<p>The ordering of the roots doesn't give you any new information about the polynomial. You have a map <span class="math-container">$$ {\mathbb C}^n \ni (r_1,r_2,\dots r_n) \mapsto (c_0,c_1\dots c_{n-1}) \in \mathbb{C}^n$$</span> This map is surjective, but not injective. That kind of thing can happen because <span class="math-container">$\mathbb{C}^n$</span> is an infinite set. It is also true for other algebraically closed fields: <a href="https://math.stackexchange.com/questions/56397/do-finite-algebraically-closed-fields-exist">no algebraically closed field is finite</a>.</p>
linear-algebra
<p>$T(av_1 + bv_2) = aT(v_1) + bT(v_2)$</p> <p>Why is this called linear? $f(x) =ax + b$, the simplest linear equation does not satisfy $f(x_1 + x_2) = f(x_1) + f(x_2)$.</p> <p>Thank you.</p>
<p>Yep, in high school we say $f(x) = 3x + 4$ is a linear function, but as you point out, in linear algebra it's not. It's irritating.</p> <p>The word for $x \mapsto ax + b$ is <a href="https://en.wikipedia.org/wiki/Affine_transformation">affine transformation</a>, and this is the term you'll hear elsewhere in higher math.</p>
<p><a href="https://hsm.stackexchange.com/questions/2490/why-do-we-call-a-linear-mapping-linear-mapping">https://hsm.stackexchange.com/questions/2490/why-do-we-call-a-linear-mapping-linear-mapping</a></p> <p>As <a href="https://hsm.stackexchange.com/a/2492">explained there</a>, the term <em>linear mapping</em> was coined by Hermann Gra&szlig;mann. It describes mappings which preserve the <em>linear structure</em> of a space, meaning the way scaling the length of a vector parameterizes a <em>line</em>. If you apply a linear mapping, the image will still be a line.</p> <p>Now, that's actually true for affine maps as well, so it could be argued that the high school term, using <em>linear</em> to mean functions of the form $x \mapsto a\cdot x + b$, is actually more meaningful. But alas, sometimes sub-optimal terminology sticks. There has by now been so much written about <em>linear maps</em> meaning functions that fulfill $f(\mu\cdot \mathbf v + \nu\cdot \mathbf w) = \mu\cdot f(\mathbf v) + \nu\cdot f (\mathbf w)$, that it would mostly cause confusion to use it for anything else.</p>
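<p>The distinction is easy to check numerically; a tiny illustration (mine, not part of either answer):</p>

```python
# The high-school "linear" function f(x) = 3x + 4 is affine, not linear in the
# linear-algebra sense: it fails additivity. The map g(x) = 3x is linear.

f = lambda x: 3 * x + 4   # affine: fails f(x1 + x2) = f(x1) + f(x2)
g = lambda x: 3 * x       # linear: preserves sums and scalar multiples

print(f(1 + 2), f(1) + f(2))   # 13 vs 17 -- not equal
print(g(1 + 2), g(1) + g(2))   # 9 vs 9 -- equal
```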
linear-algebra
<p>I understand that a vector has direction and magnitude whereas a point doesn't.</p> <p>However, in the course notes that I am using, it is stated that a point is the same as a vector.</p> <p>Also, can you do cross product and dot product using two points instead of two vectors? I don't think so, but my roommate insists yes, and I'm kind of confused now.</p>
<p>Here's an answer without using symbols.</p> <p>The difference is precisely that between <em>location</em> and <em>displacement</em>.</p> <ul> <li>Points are <strong>locations in space</strong>.</li> <li>Vectors are <strong>displacements in space</strong>.</li> </ul> <p>An analogy with time works well.</p> <ul> <li>Times, (also called instants or datetimes) are <strong>locations in time</strong>.</li> <li>Durations are <strong>displacements in time</strong>.</li> </ul> <p>So, in time,</p> <ul> <li>4:00 p.m., noon, midnight, 12:20, 23:11, etc. are <em>times</em></li> <li>+3 hours, -2.5 hours, +17 seconds, etc., are <em>durations</em></li> </ul> <p>Notice how durations can be positive or negative; this gives them &quot;direction&quot; in addition to their pure scalar value. Now the best way to mentally distinguish times and durations is by the operations they support</p> <ul> <li>Given a time, you can add a duration to get a new time (3:00 + 2 hours = 5:00)</li> <li>You can subtract two times to get a duration (7:00 - 1:00 = 6 hours)</li> <li>You can add two durations (3 hrs, 20 min + 6 hrs, 50 min = 10 hrs, 10 min)</li> </ul> <p>But <em>you cannot add two times</em> (3:15 a.m. + noon = ???)</p> <p>Let's carry the analogy over to now talk about space:</p> <ul> <li><span class="math-container">$(3,5)$</span>, <span class="math-container">$(-2.25,7)$</span>, <span class="math-container">$(0,-1)$</span>, etc. 
are <em>points</em></li> <li><span class="math-container">$\langle 4,-5 \rangle$</span> is a <em>vector</em>, meaning 4 units east then 5 south, assuming north is up (sorry residents of southern hemisphere)</li> </ul> <p>Now we have exactly the same analogous operations in space as we did with time:</p> <ul> <li>You can add a point and a vector: Starting at <span class="math-container">$(4,5)$</span> and going <span class="math-container">$\langle -1,3 \rangle$</span> takes you to the point <span class="math-container">$(3,8)$</span></li> <li>You can subtract two points to get the displacement between them: <span class="math-container">$(10,10) - (3,1) = \langle 7,9 \rangle$</span>, which is the displacement you would take from the second location to get to the first</li> <li>You can add two displacements to get a compound displacement: <span class="math-container">$\langle 1,3 \rangle + \langle -5,8 \rangle = \langle -4,11 \rangle$</span>. That is, going 1 step north and 3 east, THEN going 5 south and 8 east is the same thing and just going 4 south and 11 east.</li> </ul> <p>But you cannot add two points.</p> <p>In more concrete terms: Moscow + <span class="math-container">$\langle\text{200 km north, 7000 km west}\rangle$</span> is another location (point) somewhere on earth. But Moscow + Los Angeles makes no sense.</p> <p>To summarize, a location is where (or when) you are, and a displacement is <em>how to get from one location to another</em>. Displacements have both magnitude (how far to go) and a direction (which in time, a one-dimensional space, is simply positive or negative). In space, locations are <strong>points</strong> and displacements are <strong>vectors</strong>. In time, locations are (points in) time, a.k.a. <strong>instants</strong> and displacements are <strong>durations</strong>.</p> <p><strong>EDIT 1</strong>: In response to some of the comments, I should point out that 4:00 p.m. 
is <em>NOT</em> a displacement, but &quot;+4 hours&quot; and &quot;-7 hours&quot; are. Sure you can get to 4:00 p.m. (an instant) by adding the displacement &quot;+16 hours&quot; to the instant midnight. You can also get to 4:00 p.m. by adding the diplacement &quot;-3 hours&quot; to 7:00 p.m. The source of the confusion between locations and displacements is that people mentally work in coordinate systems relative to some origin (whether <span class="math-container">$(0,0)$</span> or &quot;midnight&quot; or similar) and both of these concepts are represented as coordinates. I guess that was the point of the question.</p> <p><strong>EDIT 2</strong>: I added some text to make clear that durations actually have direction; I had written both -2.5 hours and +3 hours earlier, but some might have missed that the negative encapsulated a direction, and felt that a duration is &quot;only a scalar&quot; when in fact the adding of a <span class="math-container">$+$</span> or <span class="math-container">$-$</span> really does give it direction.</p> <p><strong>EDIT 3</strong>: A summary in table form:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">Concept</th> <th>SPACE</th> <th>TIME</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">LOCATION</td> <td>POINT</td> <td>TIME</td> </tr> <tr> <td style="text-align: left;">DISPLACEMENT</td> <td>VECTOR</td> <td>DURATION</td> </tr> <tr> <td style="text-align: left;"></td> <td></td> <td></td> </tr> <tr> <td style="text-align: left;">Loc - Loc = Disp</td> <td>Pt - Pt = Vec</td> <td>Time - Time = Dur</td> </tr> <tr> <td style="text-align: left;"></td> <td><span class="math-container">$(3,5)-(10,2) = \langle -7,3 \rangle$</span></td> <td>7:30 - 1:15 = 6hr15m</td> </tr> <tr> <td style="text-align: left;"></td> <td></td> <td></td> </tr> <tr> <td style="text-align: left;">Loc + Disp = Loc</td> <td>Pt + Vec = Pt</td> <td>Time + Dur = Time</td> </tr> <tr> <td style="text-align: 
left;"></td> <td><span class="math-container">$(10,2)+ \langle -7,3 \rangle = (3,5)$</span></td> <td>3:15 + 2hr = 5:15</td> </tr> <tr> <td style="text-align: left;"></td> <td></td> <td></td> </tr> <tr> <td style="text-align: left;">Disp + Disp = Disp</td> <td>Vec + Vec = Vec</td> <td>Dur + Dur = Dur</td> </tr> <tr> <td style="text-align: left;"></td> <td><span class="math-container">$\langle 8, -5 \rangle + \langle -7, 3 \rangle = \langle 1, -2 \rangle$</span></td> <td>3hr + 5hr = 8hr</td> </tr> </tbody> </table> </div>
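<p>The operation table above translates directly into code. A sketch (my own, with hypothetical <code>Point</code> and <code>Vector</code> classes) in which the types enforce exactly the allowed combinations:</p>

```python
# Locations (Point) and displacements (Vector) as distinct types:
# Point - Point = Vector, Point + Vector = Point, Vector + Vector = Vector,
# while Point + Point is rejected, matching the table above.

from dataclasses import dataclass

@dataclass(frozen=True)
class Vector:
    x: float
    y: float
    def __add__(self, other):               # Disp + Disp = Disp
        return Vector(self.x + other.x, self.y + other.y)

@dataclass(frozen=True)
class Point:
    x: float
    y: float
    def __sub__(self, other):               # Loc - Loc = Disp
        return Vector(self.x - other.x, self.y - other.y)
    def __add__(self, disp):                # Loc + Disp = Loc
        if not isinstance(disp, Vector):
            raise TypeError("can only add a displacement (Vector) to a Point")
        return Point(self.x + disp.x, self.y + disp.y)

print(Point(3, 5) - Point(10, 2))      # Vector(x=-7, y=3), as in the table
print(Point(10, 2) + Vector(-7, 3))    # Point(x=3, y=5)
```

Adding two <code>Point</code>s raises a <code>TypeError</code>, just as "Moscow + Los Angeles" makes no sense.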
<p>Points and vectors are not the same thing. Given two points in 3D space, we can make a vector from the first point to the second. And, given a vector and a point, we can start at the point and "follow" the vector to get another point.</p> <p>There is a nice fact, however: the points in 3D space (or $\mathbb{R}^n$, more generally) are in a very nice correspondence with the vectors that start at the point $(0,0,0)$. Essentially, the idea is that we can represent the vector with its ending point, and no information is lost. This is sometimes called putting the vector in "standard position".</p> <p>For a course like vector calculus, it is important to keep a good distinction between points and vectors. Points correspond to vectors that start at the origin, but we may need vectors that start at other points.</p> <p>For example, given three points $A$, $B$, and $C$ in 3D space, we may want to find the equation of the plane through them. If we just knew the normal vector $\vec n$ of the plane, we could write the equation directly as $\vec n \cdot (x,y,z) = \vec n \cdot A$. So we need to find that normal $\vec n$. To do that, we compute the cross product of the vectors $\vec {AB}$ and $\vec{AC}$. If we computed the cross product of $A$ and $C$ instead (pretending they are vectors in standard position), we would not get the right normal vector.</p> <p>For example, if $A = (1,0,0)$, $B = (0,1,0)$, and $C = (0,0,1)$, the normal vector of the corresponding plane would not be parallel to any coordinate axis. But if we take any two of $A$, $B$, and $C$ and compute a cross product, we will get a vector parallel to one of the coordinate axes.</p>
number-theory
<p>Good evening! I am very new to this site. I would like to present the following material from Prof. Gandhi's notebook together with my observations. Of course it is a little long, with several questions. But, with good faith in this site, I am posting it in hope of good solutions/answers.</p> <p>If we exclude the primes $2$, $5$ and $11$, every prime can be written as $x + y + z$, where $x$, $y$ and $z$ are some positive numbers. Interestingly, $x \times y \times z = c^3$, where $c$ is again some positive number. Let us see the magic for the primes $3,7,13,31,43,73$: $$ \begin{align} 3 = 1 + 1 + 1 &amp;\Longrightarrow 1 \times 1 \times 1 = 1^3\\ 7 = 1 + 2 + 4 &amp;\Longrightarrow 1 \times 2 \times 4 = 2^3\\ 13 = 1 + 3 + 9 &amp;\Longrightarrow 1 \times 3 \times 9 = 3^3\\ 31 = 1 + 5 + 25 &amp;\Longrightarrow 1 \times 5 \times 25 = 5^3\\ 43 = 1 + 6 + 36 &amp;\Longrightarrow 1 \times 6 \times 36 = 6^3\\ 73 = 1 + 8 + 64 &amp;\Longrightarrow 1 \times 8 \times 64 = 8^3\\ \end{align} $$ Can you justify the above pattern? How can the above statement be generalized, either mathematically or by computer?</p> <p>I have observed that it is true for primes less than $9500$. Can you provide a computational algorithm to check this?</p> <p>Also, we conjecture that except for $1, 2, 3, 5, 6, 7, 11, 13, 14, 15, 17, 22, 23$, every positive number can be written as a sum of four positive numbers whose product is a fourth power. Now, can we generalize this? 
Also, I want to know: are there numbers that can be written as a sum of $n$ integers whose product is an $n$-th power?</p> <p>Thank you so much.</p> <p><strong>edit</strong></p> <p>Concerning this cubic property:</p> <p>Notice that this can be extended to hold for almost all squarefree positive integers $&gt; 2$, not just the primes.</p> <p>For instance: we know for the prime $7$ that $7=1+2+4$, so we also get $7A = 1A + 2A + 4A$, and $1A \cdot 2A \cdot 4A$ is simply equal to $8A^3$.</p> <p>In fact this can be extended to all odd positive integers $&gt;11$ if $25,121$ have a solution.</p> <p>Hence I am interested in this and I placed a bounty.</p> <p>I edited the question because it's too much for a comment and certainly not an answer.</p> <p>Btw, I'm curious about this Gandhi person, though info about that does not get the bounty, naturally.</p> <p>I would like to recall David Speyer's comment: every prime that is $1 \bmod 3$ is of the form $a^2+ab+b^2$, so that covers half the primes immediately.</p> <p>So that might be a line of attack.</p>
<p>To add pessimism about finding a prime $p$ that cannot be written as a sum of $3$ co-divisors of a cube, I've drawn some images:</p> <p><strong>Number of ways to write an integer number $p$ as the sum $x+y+z$, where $x\cdot y\cdot z=c^3$.</strong></p> <p>($\color{red}{\bf{red}}$ dots $-$ composite numbers, $\bf{black}$ dots $-$ prime numbers)</p> <hr> <p>$$p\le 500$$ <img src="https://i.sstatic.net/zuTWW.png" alt="p=x+y+z, xyz=c^3, p&lt;500"></p> <hr> <p>$$p\le 5\;000$$ <img src="https://i.sstatic.net/vx6KG.png" alt="p=x+y+z, xyz=c^3, p&lt;5 000"></p> <hr> <p>$$p\le 50\;000$$ <img src="https://i.sstatic.net/KT00u.png" alt="p=x+y+z, xyz=c^3, p&lt;50 000"></p> <hr> <p>$$p\le 500\;000$$ <img src="https://i.sstatic.net/4i5ws.png" alt="p=x+y+z, xyz=c^3, p&lt;500 000"></p> <hr> <p>$$p\le 5\;000\;000$$ <img src="https://i.sstatic.net/TUR4N.png" alt="p=x+y+z, xyz=c^3, p&lt;5 000 000"></p> <hr> <p>Control points: <br> $p=486$ (composite): $1$ way: $486 = 162+162+162$; <br> $p=2048$ (composite): $2$ ways: $2048 = 128+720+1200 = 224+256+1568$; <br> $p=6656$ (composite): $3$ ways: $\small {6656 = 416+2340+3900 = 512+1536+4608 = 728+832+5096}$; <br> $p=7559$ (prime): $4$ ways: $\scriptsize{7559 = 9+50+7500 = 114+225+7220 = 135+1024+6400 = 722+2809+4028}$; <br> $p=26624$ (composite): $5$ ways; <br> $p=58757$ (prime): $10$ ways; <br> $p=80429$ (prime): $13$ ways; <br> $p=111611$ (prime): $15$ ways; <br> $...$</p> <hr> <p>These images were made in the style of the ones <a href="http://en.wikipedia.org/wiki/Goldbach%27s_conjecture#Heuristic_justification" rel="noreferrer">here</a>.<br> (Since I've seen the images for Goldbach's conjecture, I have lost any hope of finding a counterexample to it.)</p> <hr> <p>Method of search:</p> <p>build an array $a[3N]$ for storing the number of ways;</p> <p>$a[p]$ is the number of ways to write $p$ as such a sum (initially, each $a[j]=0$);</p> <p>for $c = 1 ... N$<br> $\quad$ create the list of prime divisors of $c^3$;<br> $\quad$ (it is enough to know the prime decomposition of $c$)<br> $\quad$ for each pair $x,y$ of divisors of $c^3$, find $z=\dfrac{c^3}{xy}$;<br> $\quad$ if $z\in\mathbb{Z}$ and $x\le y\le z$, then increase $a[x+y+z]$.</p> <p>(if $c&gt;N$, then by AM-GM the sum of co-divisors of $c^3$ is greater than $3N$).</p>
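<p>The search procedure above is short to implement. A Python sketch (my own naming, not the original author's code), using the closing remark that the sum of co-divisors of $c^3$ is at least $3c$:</p>

```python
# Count, for every p <= limit, the number of ways p = x + y + z with
# x <= y <= z and x*y*z = c^3 a perfect cube (the quantity plotted above).

def divisors(n):
    """Positive divisors of n by trial division (fine for small n)."""
    divs = []
    i = 1
    while i * i <= n:
        if n % i == 0:
            divs.append(i)
            if i * i != n:
                divs.append(n // i)
        i += 1
    return sorted(divs)

def count_ways(limit):
    ways = [0] * (limit + 1)
    # By AM-GM, x + y + z >= 3*(xyz)^(1/3) = 3c, so c > limit/3 contributes nothing.
    for c in range(1, limit // 3 + 1):
        cube = c ** 3
        divs = divisors(cube)
        for x in divs:
            if 3 * x > limit:      # x is the smallest part, so x + y + z >= 3x
                break
            for y in divs:
                if y < x:
                    continue
                if cube % (x * y):
                    continue
                z = cube // (x * y)
                if z >= y and x + y + z <= limit:
                    ways[x + y + z] += 1
    return ways

ways = count_ways(100)
# Reproduces the examples in the question: one way each for 3, 7, 13, 31, 43, 73.
print([p for p in (3, 7, 13, 31, 43, 73) if ways[p] >= 1])
```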
<p>Two comments on positivity in David Speyer's method.</p> <p>First, we can take $x,y &gt; 0$ in $p = x^2 + x y + y^2,$ which we can do when $p \equiv 1 \pmod 3.$ Suppose we are given $p = u^2 + u v + v^2$ with $u&gt;0$ but $v &lt; 0.$ If $u+v &gt; 0,$ we can use $p = (u+v)^2 + (u+v)(-v) + (-v)^2$ to get everything positive, as both $u+v, -v &gt; 0.$ If $u+v &lt;0,$ switch to $u^2 + u(-u-v)+ (-u-v)^2.$ </p> <p>Next, we get another good one with an indefinite form $$\color{blue}{x^2 + 4 x y + 2 y^2.}$$</p> <p><strong>Lemma</strong>: every (positive) prime $p \equiv \pm 1 \pmod 8 $ can be written as $p = u^2 - 2 v^2$ with $u &gt; 2v.$</p> <p><strong>Proof</strong>. Take the representation $p = u^2 - 2 v^2$ such that $v$ has the smallest possible positive value, with $u&gt;0.$ Note that $u &gt; v \sqrt 2,$ also $u$ is odd. We get a slightly different representation with $(u,-v).$ Next, we apply the generator of the automorphism group of $x^2 - 2 y^2$ to get another representation, $$ (3u-4v, 2u-3v). $$</p> <p>Now, if $2u - 3v \leq 0, $ by minimality of $|v|$ we also get $2u - 3 v \leq -v,$ so $2u \leq 2v$ and $u \leq v.$ This contradicts $u &gt; v \sqrt 2.$</p> <p>Therefore, $2u - 3 v &gt; 0,$ and minimality again says $2u-3v &gt; v.$ This gives us $2u&gt;4v$ and $u &gt; 2v.$</p> <p>So we have $u &gt; 2 v.$ Define $$ x = u - 2 v, \; \; y = v. $$ Then $$ p = x^2 + 4 x y + 2 y^2 = u^2 - 2 v^2, $$ with both $x,y &gt; 0.$</p> <p>EDIT, Thursday, August 21. The remaining primes are $5,11 \pmod {24}.$ These are all represented by $2x^2 + 3 y^2,$ but the misfortune is that this does not give any cubes; furthermore, it appears that no $SL_2 \mathbb Z$-equivalent form has a product of the three coefficients a cube. So, the best I have come up with is $$ 14 x^2 + 36 xy + 147 y^2. $$ The good part is the cubes. 
There are two bad parts: it represents only primes $5,11 \pmod {24},$ but the additional constraint is that the prime not be a quadratic residue $\bmod {17}.$ Furthermore, with a positivity constraint, as in the original problem, it represents only about half of those. So, all told, it gives only about one fourth of the remaining primes, such as $$ 197, 677, 1907, 1979, 2213, 2237, 3083, 3803, 4091, 6011, 7349, 8429, 10139, 10781, 11213, \ldots $$</p> <p>So, not bad but not great, about 13/16 of the primes so far. </p> <p>EDIT, Friday, August 22.</p> <p>It appears that all primes $p &gt; 0$ with $(p|5) = 1$ and $(p|11) = 1$ are represented by $$ x^2 + 25 xy + 5 y^2 $$ with $x,y &gt; 0.$ I think I have the proof; it starts with an elementary argument that the same can be accomplished for $x^2 + 23 xy- 19 y^2,$ then some fiddling with improper automorphs similar to the proof for $x^2 + 4 x y + 2 y^2$ but with more steps. </p> <p>EEEDDDIIIITTTTTTT, Saturday, August 23:</p> <p>THEOREM: given a prime $p &gt; 30$ with $(p|5)=1$ and $(p|11)=1,$ we can write $ p = x^2 + 25 x y + 5 y^2 $ with positive integers $x,y.$</p> <p>Take prime $p &gt; 0$ and integers $s,t &gt; 0$ with $$ p = s^2 - 605 t^2. $$ Note that $s &gt; t \sqrt {605} \approx 24.5967 t.$ Then we can write $$ p = x^2 + 23 x y - 19 y^2, $$ with $x = s - 23 t, \; \; y = 2 t,$ so that $x,y &gt;0.$ </p> <p>Now, find the smallest positive integer $v$ such that there is another positive integer $u$ with $$ p = u^2 + 23 u v - 19 v^2. $$</p> <p>Lemma: given $ p = x^2 + 23 x y - 19 y^2 $ with $y &lt; 0,$ then $y \leq -v.$</p> <p>Proof: if $x &lt; 0,$ we get a representation in positive integers $(-x,-y),$ so $-y \geq v.$ If $x &gt; 0,$ note that $$ x &gt; \frac{\sqrt {605} + 23}{2} |y| \approx 23.798 |y|. 
$$ Therefore the new representation $(x + 23 y, -y)$ is again in positives and $-y \geq v.$</p> <p>Corollary: if $ p = x^2 + 23 x y - 19 y^2 $ with $x &lt;0, y &gt;0,$ then $y \geq v.$</p> <p>Back to the minimum $(u,v),$ both positive. Note $$ u &gt; \frac{\sqrt {605} - 23}{2} v \approx 0.798 v. $$ </p> <p>There is another representation, $$(4u-3v,5u-4v)$$ We know $5u-4v \neq 0$ as $p$ is not a square. If $5u-4v &lt; 0,$ then $5u-4v \leq -v,$ thus $5u &lt; 3 v.$ But this gives $u &lt; 0.6 v,$ while in fact, $u &gt; 0.798 v. $</p> <p>Therefore, $5u-4v &gt; 0,$ whereupon $5u - 4 v \geq v.$ Thus $5 u \geq 5 v$ and $u \geq v.$ Finally, as $u,v$ are relatively prime, they are not equal unless both are actually $1,$ giving the prime $5.$ Thus, for $p &gt; 5,$ we get $u &gt; v.$</p> <p>Finally, finally, finally, we can write $$ \color{blue}{ p = x^2 + 25 x y + 5 y^2} $$ with $$ x = u - v, y = v $$ both positive. </p>
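<p>It may be worth spelling out why the form $x^2 + 4xy + 2y^2$ answers the original question: its three terms multiply to $x^2 \cdot 4xy \cdot 2y^2 = 8x^3y^3 = (2xy)^3$, a perfect cube. A brute-force sketch (my own) checking the Lemma's consequence for small primes:</p>

```python
# Check numerically: every prime p = +-1 (mod 8) below the bound admits
# x, y > 0 with p = x^2 + 4xy + 2y^2, writing p as a sum of three parts
# x^2, 4xy, 2y^2 whose product (2xy)^3 is a perfect cube.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def represent(p):
    """Positive (x, y) with x^2 + 4xy + 2y^2 = p, or None."""
    y = 1
    while 2 * y * y < p:
        x = 1
        v = x * x + 4 * x * y + 2 * y * y
        while v <= p:
            if v == p:
                return (x, y)
            x += 1
            v = x * x + 4 * x * y + 2 * y * y
        y += 1
    return None

for p in range(3, 2000):
    if is_prime(p) and p % 8 in (1, 7):
        rep = represent(p)
        assert rep is not None, p
        x, y = rep
        parts = (x * x, 4 * x * y, 2 * y * y)
        assert sum(parts) == p
        assert parts[0] * parts[1] * parts[2] == (2 * x * y) ** 3
print("checked all primes = +-1 (mod 8) below 2000")
```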
matrices
<p>I am helping to design a course module that teaches basic Python programming to applied math undergraduates. As a result, I'm looking for examples of <em>mathematically interesting</em> computations involving matrices. </p> <p>Preferably these examples would be easy to implement in a computer program. </p> <p>For instance, suppose</p> <p>$$\begin{eqnarray} F_0&amp;=&amp;0\\ F_1&amp;=&amp;1\\ F_{n+1}&amp;=&amp;F_n+F_{n-1}, \end{eqnarray}$$ so that $F_n$ is the $n^{th}$ term in the Fibonacci sequence. If we set</p> <p>$$A=\begin{pmatrix} 1 &amp; 1 \\ 1 &amp; 0 \end{pmatrix}$$</p> <p>we see that</p> <p>$$A^1=\begin{pmatrix} 1 &amp; 1 \\ 1 &amp; 0 \end{pmatrix} = \begin{pmatrix} F_2 &amp; F_1 \\ F_1 &amp; F_0 \end{pmatrix},$$</p> <p>and it can be shown that</p> <p>$$ A^n = \begin{pmatrix} F_{n+1} &amp; F_{n} \\ F_{n} &amp; F_{n-1} \end{pmatrix}.$$</p> <p>This example is "interesting" in that it provides a novel way to compute the Fibonacci sequence. It is also relatively easy to implement a simple program to verify the above.</p> <p>Other examples like this will be much appreciated. </p>
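<p>For what it is worth, the verification mentioned in the question is indeed short. One possible Python sketch (my own), using exponentiation by squaring so that $A^n$ costs only $O(\log n)$ matrix products:</p>

```python
# Verify A^n = [[F_{n+1}, F_n], [F_n, F_{n-1}]] and use it to compute F_n.

def mat_mult(X, Y):
    """Product of two 2x2 integer matrices."""
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def mat_pow(X, n):
    """X**n by repeated squaring: O(log n) multiplications."""
    result = [[1, 0], [0, 1]]   # 2x2 identity
    while n > 0:
        if n % 2 == 1:
            result = mat_mult(result, X)
        X = mat_mult(X, X)
        n //= 2
    return result

def fib(n):
    """F_n read off the top-right entry of A^n."""
    return mat_pow([[1, 1], [1, 0]], n)[0][1]

print([fib(n) for n in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Comparing this against a naive recursive Fibonacci is itself a nice classroom exercise in algorithmic complexity.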
<p>If $(a,b,c)$ is a <em>Pythagorean triple</em> (i.e. positive integers such that $a^2+b^2=c^2$), then $$\underset{:=A}{\underbrace{\begin{pmatrix} 1 &amp; -2 &amp; 2\\ 2 &amp; -1 &amp; 2\\ 2 &amp; -2 &amp; 3 \end{pmatrix}}}\begin{pmatrix} a\\ b\\ c \end{pmatrix}$$ is also a Pythagorean triple. In addition, if the initial triple is <em>primitive</em> (i.e. $a$, $b$ and $c$ share no common divisor), then so is the result of the multiplication.</p> <p>The same is true if we replace $A$ by one of the following matrices:</p> <p>$$B:=\begin{pmatrix} 1 &amp; 2 &amp; 2\\ 2 &amp; 1 &amp; 2\\ 2 &amp; 2 &amp; 3 \end{pmatrix} \quad \text{or}\quad C:=\begin{pmatrix} -1 &amp; 2 &amp; 2\\ -2 &amp; 1 &amp; 2\\ -2 &amp; 2 &amp; 3 \end{pmatrix}. $$</p> <p>Taking $x=(3,4,5)$ as initial triple, we can use the matrices $A$, $B$ and $C$ to construct a tree with all primitive Pythagorean triples (without repetition) as follows:</p> <p>$$x\left\{\begin{matrix} Ax\left\{\begin{matrix} AAx\cdots\\ BAx\cdots\\ CAx\cdots \end{matrix}\right.\\ \\ Bx\left\{\begin{matrix} ABx\cdots\\ BBx\cdots\\ CBx\cdots \end{matrix}\right.\\ \\ Cx\left\{\begin{matrix} ACx\cdots\\ BCx\cdots\\ CCx\cdots \end{matrix}\right. \end{matrix}\right.$$</p> <p><strong>Source:</strong> Wikipedia's page <a href="https://en.wikipedia.org/wiki/Tree_of_primitive_Pythagorean_triples" rel="noreferrer"><em>Tree of primitive Pythagorean triples</em>.</a></p>
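<p>Generating the first levels of this tree is exactly the kind of exercise the question asks for. A sketch (my own):</p>

```python
# Breadth-first generation of primitive Pythagorean triples: start from
# (3, 4, 5) and repeatedly apply the three matrices A, B, C from above.

from math import gcd

A = [[1, -2, 2], [2, -1, 2], [2, -2, 3]]
B = [[1,  2, 2], [2,  1, 2], [2,  2, 3]]
C = [[-1, 2, 2], [-2, 1, 2], [-2, 2, 3]]

def apply(M, t):
    """3x3 matrix times the triple t viewed as a column vector."""
    return tuple(sum(M[i][j] * t[j] for j in range(3)) for i in range(3))

def triples(depth):
    """All triples reachable from (3, 4, 5) in at most `depth` steps."""
    level = [(3, 4, 5)]
    found = list(level)
    for _ in range(depth):
        level = [apply(M, t) for t in level for M in (A, B, C)]
        found.extend(level)
    return found

for a, b, c in triples(3):
    # every generated triple is Pythagorean and primitive
    assert a * a + b * b == c * c
    assert gcd(gcd(a, b), c) == 1
print(triples(1))   # [(3, 4, 5), (5, 12, 13), (21, 20, 29), (15, 8, 17)]
```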
<p><em>(Just my two cents.)</em> While this has not much to do with <em>numerical</em> computations, IMHO, a very important example is the modelling of complex numbers by $2\times2$ matrices, i.e. the identification of $\mathbb C$ with a sub-algebra of $M_2(\mathbb R)$.</p> <p>Students who are first exposed to complex numbers often ask <em>"$-1$ has two square roots. Which one is $i$ and which one is $-i$?"</em> In some popular models of $-i$, such as $(0,-1)$ on the Argand plane or $-x+\langle x^2+1\rangle$ in $\mathbb R[x]/(x^2+1)$, a student may get a false impression that there is a natural way to identify one square root of $-1$ with $i$ and the other one with $-i$. In other words, they may wrongly believe that the choice should be somehow related to the ordering of real numbers. In the matrix model, however, it is clear that one can perfectly identify $\pmatrix{0&amp;-1\\ 1&amp;0}$ with $i$ or $-i$. The choices are completely symmetric and arbitrary. Neither one is more natural than the other.</p>
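<p>A quick numerical companion (my own illustration, plain Python so it runs anywhere): under the identification above, matrix arithmetic reproduces complex arithmetic, and the two square roots of $-I$ really are on an equal footing:</p>

```python
# Identify a + bi with the 2x2 real matrix [[a, -b], [b, a]] and check that
# matrix multiplication agrees with complex multiplication; J and -J are
# completely symmetric square roots of -I.

def as_matrix(z):
    return [[z.real, -z.imag], [z.imag, z.real]]

def mul(X, Y):
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

J = as_matrix(1j)                       # [[0.0, -1.0], [1.0, 0.0]]
neg_J = [[-x for x in row] for row in J]
minus_I = [[-1.0, 0.0], [0.0, -1.0]]

print(mul(J, J) == minus_I)             # True: J^2 = -I
print(mul(neg_J, neg_J) == minus_I)     # True: (-J)^2 = -I as well

z, w = 2 + 3j, -1 + 4j
print(mul(as_matrix(z), as_matrix(w)) == as_matrix(z * w))   # True
```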
probability
<p>I just learned about the Monty Hall problem and found it quite amazing. So I thought about extending the problem a bit to understand more about it.<hr> In this modification of the Monty Hall Problem, instead of three doors, we have four (or maybe $n$) doors, one with a car and the other three (or $n-1$) with a goat each (I want the car).</p> <p>We need to choose any one of the doors. After we have chosen the door, Monty deliberately reveals one of the doors that has a goat and asks us if we wish to change our choice.</p> <p>So should we switch the door we have chosen, or does it not matter if we switch or stay with our choice?</p> <p>It would be even better if we knew the probability of winning upon switching given that Monty opens $k$ doors.</p>
<p><em>I decided to make an answer out of <a href="https://math.stackexchange.com/questions/608957/monty-hall-problem-extended/609552#comment1283624_608977">my comment</a>, just for the heck of it.</em></p> <hr> <h2>$n$ doors, $k$ revealed</h2> <p>Suppose we have $n$ doors, with a car behind $1$ of them. The probability of choosing the door with the car behind it on your first pick, is $\frac{1}{n}$. </p> <p>Monty then opens $k$ doors, where $0\leq k\leq n-2$ (he has to leave your original door and at least one other door closed).</p> <p>The probability of picking the car if you choose a different door, is the chance of not having picked the car in the first place, which is $\frac{n-1}{n}$, times the probability of picking it <em>now</em>, which is $\frac{1}{n-k-1}$. This gives us a total probability of $$ \frac{n-1}{n}\cdot \frac{1}{n-k-1} = \frac{1}{n} \cdot \frac{n-1}{n-k-1} \geq \frac{1}{n} $$</p> <p><strong>No doors revealed</strong><br> If Monty opens no doors, $k = 0$ and that reduces to $\frac{1}{n}$, which means your odds remain the same.</p> <p><strong>At least one door revealed</strong><br> For all $k &gt; 0$, $\frac{n-1}{n-k-1} &gt; 1$ and so the probabilty of picking the car on your second guess is greater than $\frac{1}{n}$.</p> <p><strong>Maximum number of doors revealed</strong><br> If $k$ is at its maximum value of $n-2$, the probability of picking a car after switching becomes $$\frac{1}{n}\cdot \frac{n-1}{n-(n-2)-1} = \frac{1}{n}\cdot \frac{n-1}{1} = \frac{n-1}{n}$$ For $n=3$, this is the solution to the original Monty Hall problem.</p> <p>Switch.</p>
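<p>The formula is easy to sanity-check by simulation. A Monte Carlo sketch (my own; I assume Monty opens a uniformly random set of $k$ goat doors, which does not change the totals):</p>

```python
# Estimate the probability that switching wins with n doors and k revealed,
# to compare against (n-1) / (n * (n-k-1)) derived above.

import random

def switch_wins(n, k, trials=100_000, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(n)
        pick = rng.randrange(n)
        # Monty opens k goat doors, never the contestant's pick and never the car.
        goats = [d for d in range(n) if d != pick and d != car]
        opened = set(rng.sample(goats, k))
        # Switch uniformly at random to one of the remaining closed doors.
        remaining = [d for d in range(n) if d != pick and d not in opened]
        wins += rng.choice(remaining) == car
    return wins / trials

print(switch_wins(3, 1))   # ~ 2/3, the classic problem
print(switch_wins(4, 1))   # ~ 3/8 = 0.375, matching the formula
print(switch_wins(4, 2))   # ~ 3/4, the maximal k = n - 2 case
```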
<p>By not switching, you win a car if and only if you chose correctly initially. This happens with probability $\frac{1}{4}$. If you switch, you win a car if and only if you chose <em>incorrectly</em> initially, and then of the remaining two doors, you choose correctly. This happens with probability $\frac{3}{4}\times\frac{1}{2}=\frac{3}{8}$. So if you choose to switch, you are more likely to win a car than if you do not switch.</p> <p>You never told me whether you'd prefer to win a car or a goat though, so I can't tell you what to do. </p> <p><a href="http://xkcd.com/1282" rel="noreferrer"><img src="https://imgs.xkcd.com/comics/monty_hall.png" alt="xkcd #1282" title="A few minutes later, the goat from behind door C drives away in the car."></a></p>
probability
<p>Hypothetically, if I have a 0.00048% chance of dying when I blink, and I blink once a second, what chance do I have of dying in a single day? </p> <p>I tried $1-0.0000048^{86400}$ but no calculator I could find would support this. How would I work this out manually?</p>
<p>When $n$ is large, $p$ is small and $np&lt;10$, the Poisson approximation is very good. In that case, the answer is approximately $$P = 1 - e^{-\lambda} = 1 - 0.6605 = 0.3395,$$ where $\lambda = np = 0.41472$.</p>
<p>As @Saketh and @dxiv indicate, you want to take a large power: $(1 - p)^{86400}$, where $p$ is tiny. Calculators don't do well at this. But if you use the rule that $$ a^b = \exp(b \log a) $$ then you can compute $$ b \log a \approx 86400 \log .9999952 \approx -0.41472099533 $$ and compute $e$ to that power to get approximately $0.6605...$, and hence your probability of dying is 1 minus that, or about 34%. </p> <p>The key step is in using the logarithm to compute the exponent, for your calculator's built-in log function (perhaps called "ln") is very accurate near 1, and exponentiation is pretty accurate for numbers like $e$ (a little less than $3$) with exponents between $0$ and about $5$. </p>
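<p>All three computations (the direct power, the log trick, and the Poisson approximation from the other answer) are easy to compare in code:</p>

```python
# Exact answer, the log trick from this answer, and the Poisson approximation
# from the other answer, side by side.

import math

p = 0.0000048            # per-blink death probability (0.00048%)
n = 86400                # one blink per second for a day

exact = 1 - (1 - p) ** n                      # direct computation
via_logs = 1 - math.exp(n * math.log(1 - p))  # a^b = exp(b log a)
poisson = 1 - math.exp(-n * p)                # 1 - e^{-lambda}, lambda = np

print(n * math.log(1 - p))          # ~ -0.41472
print(exact, via_logs, poisson)     # all ~ 0.3395, about a 34% chance
```

(For even smaller $p$, <code>math.log1p(-p)</code> avoids the loss of precision in forming <code>1 - p</code>.)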
geometry
<p>I have</p> <ul> <li>a circle of radius r</li> <li>a square of length l</li> </ul> <p>The centre of the square is currently rotating around the circle in a path described by a circle of radius <span class="math-container">$(r + \frac{l}{2})$</span></p> <p><a href="https://i.sstatic.net/fHk2U.gif" rel="noreferrer"><img src="https://i.sstatic.net/fHk2U.gif" alt="enter image description here"></a></p> <p>However, the square overlaps the circle at e.g. 45°. I do not want the square to overlap with the circle at all, and still smoothly move around the circle. The square must not rotate, and should remain in contact with the circle i.e. the distance of the square from the circle should fluctuate as the square moves around the circle.</p> <p>Is there a formula or algorithm for the path of the square?</p> <p>[edit]</p> <p>Thanks for all your help! I've implemented the movement based on <a href="https://math.stackexchange.com/a/3511270/742405">Yves</a> solution with help from <a href="https://gist.github.com/goedel-gang/1feea15f289e2892a90bf5ec66963455" rel="noreferrer">Izaak's source code</a>:</p> <p><a href="https://i.sstatic.net/vlw45.gif" rel="noreferrer"><img src="https://i.sstatic.net/vlw45.gif" alt="enter image description here"></a></p> <p>I drew a diagram to help me visualise the movement as well (the square moves along the red track):</p> <p><a href="https://i.sstatic.net/CxuQT.png" rel="noreferrer"><img src="https://i.sstatic.net/CxuQT.png" alt="enter image description here"></a></p>
<p>As said elsewhere, the trajectory of the center is made of four line segments and four circular arcs obtained by shifting the quarters of the original circle. </p> <p><a href="https://i.sstatic.net/sttsR.png" rel="noreferrer"><img src="https://i.sstatic.net/sttsR.png" alt="enter image description here"></a></p> <p>But it is not enough to know the shape of the trajectory; we also need to know the distance as a function of time, assuming that the center rotates at constant angular speed (not the contact point).</p> <p>For convenience, the square is of side <span class="math-container">$2l$</span>. For a contact on the <em>left side</em> of the square, we intersect the line</p> <p><span class="math-container">$$d\,(\cos\theta,\sin\theta)$$</span> where <span class="math-container">$\theta=\omega t$</span>, with the straight part of the trajectory,</p> <p><span class="math-container">$$x=r+l$$</span></p> <p>and we obtain the point</p> <p><span class="math-container">$$(r+l,\tan\theta(r+l))$$</span> and the distance <span class="math-container">$$\color{green}{d=\frac{r+l}{\cos\theta}}.$$</span></p> <p>For a contact on the <em>bottom left corner</em>, we intersect the same line with the shifted circle</p> <p><span class="math-container">$$\left(x-l\right)^2+\left(y-l\right)^2=r^2$$</span></p> <p>and this gives us the quadratic equation in <span class="math-container">$d$</span></p> <p><span class="math-container">$$\color{green}{d^2-2ld(\cos\theta+\sin\theta)+2l^2-r^2=0}.$$</span></p> <p>You need to repeat the reasoning for the other sides and corners of the square.</p> <p>Finally, the <em>switch</em> from contact by the side to contact by the corner occurs at an angle <span class="math-container">$\theta$</span> such that</p> <p><span class="math-container">$$\begin{cases}d\cos\theta=r+l,\\d\sin\theta=l,\end{cases}$$</span></p> <p>i.e.</p> <p><span class="math-container">$$\color{green}{\tan\theta=\frac l{r+l}}.$$</span></p>
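The two cases above can be packaged into a short Python sketch of the distance function (the helper name is made up; the square has side $2l$ as in the answer, and symmetry reduces everything to one octant):

```python
import math

r, l = 2.0, 0.5          # circle radius and HALF the square side (square side = 2l)

def center_distance(theta):
    """Distance from the circle's centre to the square's centre at polar angle theta."""
    t = theta % (math.pi / 2)        # the path repeats every quarter turn...
    if t > math.pi / 4:              # ...and is mirror-symmetric inside each quarter
        t = math.pi / 2 - t
    if math.tan(t) <= l / (r + l):   # side contact: square side rests on x = r + l
        return (r + l) / math.cos(t)
    # corner contact: larger root of d^2 - 2 l d (cos t + sin t) + 2 l^2 - r^2 = 0
    b = l * (math.cos(t) + math.sin(t))
    return b + math.sqrt(b * b - (2 * l * l - r * r))
```

At $\theta=0$ this gives $r+l$ (side touching) and at $\theta=\pi/4$ it gives $r+\sqrt2\,l$ (corner touching), as expected, and the two branches agree at the switch angle $\tan\theta = l/(r+l)$.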
<p>At first it appears that the inner vertex of the square traces a circle of radius <span class="math-container">$r$</span> and the outer vertex a circle of radius <span class="math-container">$ r+ \sqrt2 l $</span>.</p> <p>However the constraint of parallel displacement forces the square to move horizontally or vertically, maintaining <em>sliding contact at four points</em> <span class="math-container">$$(r,0), (0,r),(-r,0), (0,-r) $$</span> with the square side sliding on the circle of radius <span class="math-container">$r$</span> at these points. It also defines an outer <em>square envelope</em> of side <span class="math-container">$(r+l)$</span> serving as the outer boundary.</p> <p>A rough sketch (sorry about the hand-drawn Paint diagram) indicates what is happening at the outer boundary. </p> <p>If <span class="math-container">$\tan β=\dfrac{l}{l+r}$</span> then the eight polar angles marking the change-over to the flat square boundary are given by:</p> <p><span class="math-container">$$ θ= β, π/2-β,π/2+β, π-β, π+β, 3π/2-β, 3π/2+β,-β$$</span></p> <p><a href="https://i.sstatic.net/56XO1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/56XO1.png" alt="How do I rotate.."></a> <em>Mathematica</em> has the <a href="https://reference.wolfram.com/language/ref/RegionPlot.html" rel="nofollow noreferrer"><code>RegionPlot</code></a> command for plotting the swept area:</p> <pre><code>a = 2; l = 1; (* a plays the role of r *)
RegionPlot[{x^2 + y^2 &gt; a^2, x^2 + y^2 &lt; (a + Sqrt[2] l)^2,
  Abs[x] &lt; a + l, Abs[y] &lt; a + l}, {x, -4, 4}, {y, -4, 4}]
</code></pre>
probability
<p>I understand how to define conditional expectation and how to prove that it exists.</p> <p>Further, I think I understand what conditional expectation means intuitively. I can also prove the tower property, that is if $X$ and $Y$ are random variables (or $Y$ a $\sigma$-field) then we have that</p> <p>$$\mathbb E[X] = \mathbb{E}[\mathbb E [X | Y]].$$</p> <p>My question is: What is the intuitive meaning of this? It seems quite puzzling to me.</p> <p>(I could find similar questions but not this one.)</p>
<p>First, recall that in <span class="math-container">$E[X|Y]$</span> we are taking the expectation with respect to <span class="math-container">$X$</span>, and so it can be written as <span class="math-container">$E[X|Y]=E_X[X|Y]=g(Y)$</span> . Because it's a function of <span class="math-container">$Y$</span>, it's a random variable, and hence we can take its expectation (with respect to <span class="math-container">$Y$</span> now). So the double expectation should be read as <span class="math-container">$E_Y[E_X[X|Y]]$</span>.</p> <p>About the intuitive meaning, there are several approaches. I like to think of the expectation as a kind of <strong>predictor/guess</strong> (indeed, it's the predictor that minimizes the mean squared error).</p> <p>Suppose for example that <span class="math-container">$X, Y$</span> are two (positively) correlated variables, say the weight and height of persons from a given population. The expectation of the weight <span class="math-container">$E(X)$</span> would be my best guess of the weight of an unknown person: I'd bet on this value, if not given more data (my <strong>uninformed bet</strong> is constant). Instead, if I know the height, I'd bet on <span class="math-container">$E(X | Y)$</span> : that means that for different persons I'd bet a different value, and my <strong>informed bet</strong> would not be constant: sometimes I'd bet more than the &quot;uninformed bet&quot; <span class="math-container">$E(X)$</span> (for tall persons), sometimes less. The natural question arises: can I say something about my informed bet <strong>on average</strong>? Well, the tower property answers: on average, you'll bet the same.</p> <hr /> <p>Added : I agree (ten years later) with @Did 's comment below. My notation here is misleading: an expectation is defined in itself; it makes little or no sense to specify &quot;with respect to <span class="math-container">$Y$</span>&quot;. 
In <a href="http://math.stackexchange.com/questions/4049293/">my answer here</a> I try to clarify this, and reconcile this fact with the (many) examples where one qualifies (subscripts) the expectation (<a href="https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm#Description" rel="nofollow noreferrer">with respect of ...</a>).</p>
<p>For simple discrete situations from which one obtains most basic intuitions, the meaning is clear.</p> <p>I have a large bag of biased coins. Suppose that half of them favour heads, probability of head $0.7$. Two-fifths of them favour heads, probability of head $0.8$. And the rest favour heads, probability of head $0.9$.</p> <p>Pick a coin at random, toss it, say once. To find the expected number of heads, calculate the expectations, <strong>given</strong> the various biasing possibilities. Then average the answers, taking into consideration the proportions of the various types of coin. </p> <p>It is intuitively clear that this formal procedure "should" give about the same answer as the highly informal process of say repeating the experiment $1000$ times, and dividing by $1000$. For if we do that, in about $500$ cases we will get the first type of coin, and out of these $500$ we will get about $350$ heads, and so on. The informal arithmetic mirrors exactly the more formal process described in the preceding paragraph. </p> <p>If it is more persuasive, we can imagine tossing the chosen coin $12$ times.</p>
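This coin-bag example is easy to check by simulation; the following Python sketch compares the formal average of the conditional expectations with the informal "repeat the experiment many times" estimate:

```python
import random

random.seed(1)
biases  = [0.7, 0.8, 0.9]    # P(head) for the three coin types
weights = [0.5, 0.4, 0.1]    # proportions of each type in the bag

# E[E[X|Y]]: average the conditional expectations over the coin types
tower = sum(w * b for w, b in zip(weights, biases))

# E[X] estimated informally: pick a coin at random, toss it once, repeat
n = 200_000
heads = sum(random.random() < random.choices(biases, weights)[0]
            for _ in range(n))
print(tower, heads / n)      # both are close to 0.76
```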
game-theory
<p>An extension of <a href="https://math.stackexchange.com/questions/382822/you-are-johnny-depp">this question</a> repeated below.</p> <blockquote> <p>A band of 9 pirates have just finished their latest conquest - looting, killing and sinking a ship. The loot amounts to 1000 gold coins.</p> <p>Arriving on a deserted island, they now have to split up the loot. You, as the captain of the band, have to propose a distribution plan (who gets what). What's your proposal?</p> <p>Consider that this bunch is a democratic lot. If your proposal is accepted by half of the group, then everybody adheres to it. However, if folks feel you are getting greedy, and less than half of the band agrees to your proposal, then they kill you, and then your First Mate gets to make a proposal. And so it goes in decreasing order of hierarchy/seniority.</p> </blockquote> <p>These pirates are unhappy with the poor definition of democratic voting and now insist that the vote must be carried by a clear majority and voting is compulsory! These are bloodthirsty pirates so life is cheap. Specifically, it is worth 1 coin, so the pirates' criteria for accepting a proposal are (in order):</p> <ol> <li>They do not die</li> <li>It makes them the most money</li> <li>It allows them to kill the most pirates</li> </ol> <p>How does this change the outcome?</p> <p>For $n$ pirates, let $P_1$ be the last pirate, $P_2$ be the next to last and so on up to $P_n$ who is the (temporary?) leader.</p> <ol> <li>For $n=1$, $P_1$ takes all the money.</li> <li>For $n=2$, $P_2$ is a dead man since $P_1$ can vote no, kill $P_2$ AND get all the money.</li> <li>For $n=3$, $P_3$ can count on $P_2$ since if he votes no he is going to die.</li> <li>For $n=4$?</li> </ol> <p>I will post an answer after the weekend if no one else has.</p>
<p><strong>Assumptions:</strong></p> <ol> <li>Pirates are rational, greedy, and bloodthirsty.</li> <li>If the head pirate has multiple possible proposals in which he maximizes his profit, he will choose between them randomly, with a uniform probability distribution.</li> <li>The first two items on this list are common knowledge among pirates.</li> </ol> <p><strong>Notation:</strong></p> <p>Let $G$ be the number of gold coins. A proposal in which $P_i$ receives $g_i$ gold coins will be denoted by $(g_n,\ldots,g_1),$ where $P_n$ is the most senior pirate.</p> <p><strong>Case $n=4$:</strong></p> <p>$P_4$ must garner the support of two other pirates to obtain a clear majority. For the case $n=3$, $P_1$ and $P_2$ receive nothing, so a proposal of $(G-2,0,1,1)$ is accepted.</p> <p><strong>Case $n\ge 5$</strong></p> <p>A proposal of $(G+1-2\lfloor \frac{n}{2}\rfloor,0,1,g_{n-3},g_{n-4},\ldots,g_1)$ is accepted, where for $i=1,2,\ldots,n-3$, exactly $\lfloor\frac{n}{2}\rfloor-1$ of the $g_i$'s are $2$, and the rest are $0$. First, notice that by assumption $2$, for $i=1,2,\ldots,n-3$, the expected value of the proposal for $P_i$ is</p> <p>$$E_n=\frac{2\lfloor\frac{n}{2}\rfloor-2}{n-3}=\begin{cases}1&amp;\text{if $n$ is odd}\\[.1in]\frac{n-2}{n-3}&amp;\text{if $n$ is even}\end{cases}$$</p> <p>What is most important is that $1\le E_n&lt;2$ for $n\ge 5$.</p> <p>We prove that this proposal is accepted by induction. For the case $n=5$, the two possible proposals are $(G-3,0,1,2,0)$ and $(G-3,0,1,0,2)$. Comparing with case $n=4$, we see that $P_5$, $P_3,$ and $P_2$ will vote for the first proposal, while $P_5$, $P_3,$ and $P_1$ will vote for the second proposal.</p> <p>Now assume that the proposal holds for $n-1$ pirates, and that there are $n$ pirates. $P_{n-2}$ receives $0$ gold coins in the case of $n-1$ pirates, so he will vote for the proposal. 
Since $1\le E_n&lt;2$, any pirate from $\{P_1,\ldots,P_{n-3}\}$ will vote for a proposal in which he receives at least $2$ coins, and will vote against a proposal in which he receives less than $2$ coins. Thus, exactly $\lfloor\frac{n}{2}\rfloor-1$ of the pirates $P_1,\ldots,P_{n-3}$ vote for the proposal.</p> <p>Remembering that $P_n$ will vote for the proposal, that makes $1+1+\lfloor\frac{n}{2}\rfloor-1=1+\lfloor\frac{n}{2}\rfloor$ votes for the proposal, so it passes.</p> <p>For the case of $n=9$ and $G=1000$, $P_9$ receives $993$ coins.</p> <p>Comparing this with the result of the first question, it seems that the new voting system requires that $P_n$ give up exactly $2\lfloor\frac{n}{2}\rfloor-\lfloor\frac{n-1}{2}\rfloor-1$ extra gold coins ($n\ge 5$).</p> <p><strong>Edit</strong></p> <p>If, instead of assumption $2$, we assume that the head pirate chooses between multiple possible proposals that maximize his profit by rewarding seniority, then a proposal of $(G-1-\lfloor\frac{n}{2}\rfloor,0,1,2,0,1,0,1,\ldots,\frac{1+(-1)^n}{2})$ will be accepted. The proof is similar to the induction above. In the case of $n=9$ and $G=1000$, we see that $P_9$ receives $995$ gold coins.</p>
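The head pirate's payoff under the proposal derived above can be sketched in a couple of lines of Python (the function name is just for illustration):

```python
def captain_share(n, G):
    """Coins kept by the most senior pirate under the accepted proposal, n >= 4."""
    if n == 4:
        return G - 2                 # proposal (G-2, 0, 1, 1)
    return G + 1 - 2 * (n // 2)      # proposal (G+1-2*floor(n/2), 0, 1, ...)

print([captain_share(n, 1000) for n in range(4, 10)])
```

For $n=9$ and $G=1000$ this gives $993$, agreeing with the case worked out above.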
<p>In case of 1 pirate, there's no debate.</p> <p>If there are 2, then n1 gets all the money, n2 gets none. Because if n2 proposes anything less, n1 will not agree, hence the proposal won't get a clear majority, hence n2 will die and n1 will get everything nonetheless. (1000,0)</p> <p>If there are 3 pirates, then n3 just has to give n2 1 coin to buy his loyalty. n1 goes penniless. (0,1,999)</p> <p>If there are 4 pirates, then n4 has to buy the loyalty of 2 pirates. He can give 1 coin to n1. That's easy. Next, he can give 2 coins to n2 and nothing to n3 (1,2,0,997)</p> <p>5 pirates (2,0,1,0,997)</p> <p>6 pirates (0,1,2,1,0,996)</p> <p>7 pirates (1,2,0,0,1,0,996) OR (1,0,0,2,1,0,996) (yup, it gets messy)</p> <p>8 pirates: if the 7-pirate arrangement was (1,2,0,0,1,0,996) then (2,0,1,1,0,1,0,995) OR (0,0,1,1,2,1,0,995)<br> if the 7-pirate arrangement was (1,0,0,2,1,0,996) then (2,1,1,0,0,1,0,995) OR (0,1,1,0,2,1,0,995)</p> <p>9 pirates: if the 8-pirate arrangement was (2,0,1,1,0,1,0,995) then (0,1,2,0,1,0,1,0,995) OR (0,1,0,2,1,0,1,0,995) OR (0,1,0,0,1,2,1,0,995)<br> if the 8-pirate arrangement was (0,0,1,1,2,1,0,995) then (1,1,2,0,0,0,1,0,995) OR (1,1,0,2,0,0,1,0,995) OR (1,1,0,0,0,2,1,0,995)<br> if the 8-pirate arrangement was (2,1,1,0,0,1,0,995) then (0,2,0,1,1,0,1,0,995) OR (0,0,2,1,1,0,1,0,995) OR (0,0,0,1,1,2,1,0,995)<br> if the 8-pirate arrangement was (0,1,1,0,2,1,0,995) then (1,2,0,1,0,0,1,0,995) OR (1,0,2,1,0,0,1,0,995) OR (1,0,0,1,0,2,1,0,995)</p> <p>So there are actually multiple proposals (permutations) that work (the question asks that the solver lay down the proposal, not merely calculate the number of coins the captain gets).</p> <p>But if we look a little more closely, what does this mean?</p> <p>I'm glad you asked.</p> <p>The 7-pirate step is crucial. For the first time, it gives us 2 proposals that may pass.</p> <p>If there are 8 pirates, then they don't know which of the two proposals will be made if they kill their captain. 
Even if they believe that the First Mate will randomly make one of the two proposals, their possible outcomes can be expressed as expected values: (1,1,0,1,1,0,996)</p> <p>In fact, there is an argument to be made that both n2 and n4 are optimistic and each sees 2 coins as the 7-pirate outcome to be bettered.</p> <p>Meaning the outcomes to be bettered in an 8-pirate group are (1,2,0,2,1,0,994)</p> <p>So that's what the Captain needs to propose: (2,0,1,0,2,1,0,994)</p> <p>Which means that, in the case of 9 pirates, the distribution will be (0,1,2,1,0,0,1,0,995) OR (0,1,0,1,0,2,1,0,995)</p>
linear-algebra
<p><a href="http://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors">Wikipedia</a> defines an eigenvector like this:</p> <p><em>An eigenvector of a square matrix is a non-zero vector that, when multiplied by the matrix, yields a vector that differs from the original vector at most by a multiplicative scalar.</em></p> <p>So basically in layman language: An eigenvector is a vector that when you multiply it by a square matrix, you get the same vector or the same vector multiplied by a scalar.</p> <p>There are a lot of terms which are related to this like eigenspaces and eigenvalues and eigenbases and such, which I don't quite understand, in fact, I don't understand at all.</p> <p>Can someone give an explanation connecting these terms? So that it is clear what they are and why they are related.</p>
<p>Eigenvectors are those vectors that exhibit especially simple behaviour under a linear transformation: Loosely speaking, they don't bend and rotate, they simply grow (or shrink) in length (though a different interpretation of growth/shrinkage may apply if the ground field is not $\mathbb R$). If it is possible to express any other vector as a linear combination of eigenvectors (preferably if you can in fact find a whole basis made of eigenvectors) then applying the - otherwise complicated - linear transformation suddenly becomes easy because with respect to a basis of eigenvectors the linear transformation is given simply by a diagonal matrix.</p> <p>Especially when one wants to investigate higher powers of a linear transformation, this is practically only possible for eigenvectors: If $Av=\lambda v$, then $A^nv=\lambda^nv$, and even exponentials become easy for eigenvectors: $\exp(A)v:=\sum\frac1{n!}A^n v=e^\lambda v$. By the way, the exponential functions $x\mapsto e^{cx}$ are eigenvectors of a famous linear transformation: differentiation, i.e. mapping a function $f$ to its derivative $f'$. That's precisely why exponentials play an important role as base solutions for linear differential equations (or even their discrete counterpart, linear recurrences like the Fibonacci numbers).</p> <p>All other terminology is based on this notion: A (nonzero) <strong>eigenvector</strong> $v$ such that $Av$ is a multiple of $v$ determines its <strong>eigenvalue</strong> $\lambda$ as the scalar factor such that $Av=\lambda v$. Given an eigenvalue $\lambda$, the set of eigenvectors with that eigenvalue is in fact a subspace (i.e. sums and multiples of eigenvectors with the same(!) eigenvalue are again eigen), called the <strong>eigenspace</strong> for $\lambda$. If we find a basis consisting of eigenvectors, then we may obviously call it an <strong>eigenbasis</strong>. 
If the vectors of our vector space are not mere number tuples (such as in $\mathbb R^3$) but are also functions and our linear transformation is an operator (such as differentiation), it is often convenient to call the eigenvectors <strong>eigenfunctions</strong> instead; for example, $x\mapsto e^{3x}$ is an eigenfunction of the differentiation operator with eigenvalue $3$ (because the derivative of it is $x\mapsto 3e^{3x}$).</p>
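The point about powers is easy to see numerically (plain Python, with a small made-up example): for the symmetric matrix below, $v=(1,1)$ is an eigenvector with eigenvalue $3$, so applying the matrix five times just multiplies $v$ by $3^5$:

```python
A = [[2.0, 1.0],
     [1.0, 2.0]]     # eigenpairs: eigenvalue 3 with (1,1), eigenvalue 1 with (1,-1)

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

v, lam = [1.0, 1.0], 3.0
w = v
for _ in range(5):           # w = A^5 v
    w = matvec(A, w)
print(w, [lam**5 * vi for vi in v])   # both are (243, 243)
```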
<p>As far as I understand it, the 'eigen' in words like eigenvalue, eigenvector etc. means something like 'own', or a better translation in English would perhaps be 'characteristic'.</p> <p>Each square matrix has some special scalars and vectors associated with it. The eigenvectors are the vectors which the matrix preserves (up to scalar multiplication). As you probably know, an $n\times n$ matrix acts as a linear transformation on an $n$-dimensional space, say $F^n$. A vector and its scalar multiples form a line through the origin in $F^n$, and so you can think of the eigenvectors as indicating lines through the origin preserved by the linear transformation corresponding to the matrix.</p> <p><strong>Defn</strong> Let $A$ be an $n\times n$ matrix over a field $F$. A vector $v\in F^n$ is an <em>eigenvector</em> of $A$ if $Av = \lambda v$ for some $\lambda$ in $F$. A scalar $\lambda\in F$ is an <em>eigenvalue</em> of $A$ if $Av = \lambda v$ for some $v\in F^n$.</p> <p>The eigenvalues are then the factors by which these special lines through the origin are either stretched or contracted.</p>
geometry
<p>Is this really possible? Is there any other example of this other than the Koch Snowflake? If so can you prove that example to be true?</p>
<p>One can have a bounded region in the plane with finite area and infinite perimeter, and this (and not the reverse) is true for (the inside of) the <a href="http://en.wikipedia.org/wiki/Koch_snowflake" rel="noreferrer">Koch Snowflake</a>.</p> <p>On the other hand, the <a href="http://en.wikipedia.org/wiki/Isoperimetric_inequality" rel="noreferrer">Isoperimetric Inequality</a> says that if a bounded region has area $A$ and perimeter $L$, then $$4 \pi A \leq L^2,$$ and in particular, finite perimeter implies finite area. In fact, equality holds here if and only if the region is a disk (that is, if its boundary is a circle). See <a href="http://www.math.utah.edu/~treiberg/isoperim/isop.pdf" rel="noreferrer">these notes</a> (pdf) for much more about this inequality, including a few proofs.</p> <p>As Peter LeFanu Lumsdaine observes in the comments below, proving this inequality in its full generality is technically demanding, but to answer the question of whether there's a bounded region with infinite area but finite perimeter, it's enough to know that there is <em>some</em> positive constant $\lambda$ for which $$A \leq \lambda L^2,$$ and it's easy to see this intuitively: Any closed, simple curve of length $L$ must be contained in the disc of radius $\frac{L}{2}$ centered at any point on the curve, and so the area of the region the curve encloses is smaller than the area of the disk, that is, $$A \leq \frac{\pi}{4} L^2.$$</p> <p>NB that the Isoperimetric Inequality is not true, however, if one allows general surfaces (roughly, 2-dimensional shapes not contained in the plane). For example, if one starts with a disk and "pushes the inside out" without changing the circular boundary of the disk, then one can make a region with a given perimeter (the circumference of the boundary circle) but (finite) surface area as large as one likes.</p>
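As a numeric illustration of the snowflake direction (finite area, infinite perimeter), one can iterate the Koch construction starting from a unit equilateral triangle; each step multiplies the perimeter by $4/3$ while adding one small triangle per edge:

```python
import math

tri = math.sqrt(3) / 4           # area of a unit-side equilateral triangle
edges, side = 3, 1.0
perimeter, area = 3.0, tri
for _ in range(40):
    side /= 3                       # new edges are a third as long
    area += edges * tri * side**2   # one new triangle per existing edge
    edges *= 4
    perimeter *= 4 / 3
print(perimeter, area)           # perimeter diverges; area tends to 2*sqrt(3)/5
```

The area converges to $\frac{2\sqrt3}{5}\approx 0.693$ while the perimeter grows without bound, and (consistent with the Isoperimetric Inequality) $4\pi A \le L^2$ holds at every step.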
<p>It's a bit of a matter of semantics.</p> <p>What's a "shape" but a subset of the plane separated from the rest by a curve? But <em>which subset</em>?</p> <p>A circumference (for example) is a finite closed curve (with finite perimeter) that separates and defines two subsets of the plane - we conventionally pick the one with finite area and we call it "circle". But the circumference also defines the subset with infinite area that lays "outside" (which is a conventional concept). That other "outside shape" would be an example of a finite-perimeter curve with an infinite area.</p> <p>That sounds like cheating and playing with words. But think about it: what else could possibly an infinite area delimited by a finite curve look like? If you only allow yourself to look at the "inside" of any closed curve, it couldn't have an infinite area because you can always define a circumference "around it" whose circle would necessarily fully contain the first shape and also be of finite area. Any possible shape with infinite area and finite perimeter would have to be the "outside" delimited by a closed curve.</p> <p>So the answer to your question depends on whether you're interested in considering the "outside" of a closed curve (in which case all closed curves delimit such shapes), or whether you're not (in which case there cannot be any such shape).</p>
differentiation
<p>I want to calculate the derivative of a function with respect to, not a variable, but respect to another function. For example: $$g(x)=2f(x)+x+\log[f(x)]$$ I want to compute $$\frac{\mathrm dg(x)}{\mathrm df(x)}$$ Can I treat $f(x)$ as a variable and derive "blindly"? If so, I would get $$\frac{\mathrm dg(x)}{\mathrm df(x)}=2+\frac{1}{f(x)}$$ and treat the simple $x$ as a parameter which derivative is zero. Or I should consider other derivation rules?</p>
<p>$$\frac{dg(x)}{df(x)} = \frac{dg(x)}{dx} \cdot \frac{1}{f'(x)} = \frac{g'(x)}{f'(x)}$$</p> <p>In your example,</p> <p>$$g'(x) = 2f'(x) + 1 + \frac{f'(x)}{f(x)}$$</p> <p>So:</p> <p>$$\frac{dg(x)}{df(x)} = \frac{2f'(x) + 1 + \frac{f'(x)}{f(x)}}{f'(x)} = 2 + \frac{1}{f'(x)} + \frac{1}{f(x)}$$</p>
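A quick numerical check of this (Python, with an arbitrary illustrative choice $f(x)=e^x$; `deriv` is a simple central-difference helper):

```python
import math

def f(x):  return math.exp(x)                 # an illustrative choice of f
def g(x):  return 2 * f(x) + x + math.log(f(x))

def deriv(fn, x, h=1e-6):                     # central finite difference
    return (fn(x + h) - fn(x - h)) / (2 * h)

x0 = 0.7
chain   = deriv(g, x0) / deriv(f, x0)         # dg/df = g'(x)/f'(x)
formula = 2 + 1 / deriv(f, x0) + 1 / f(x0)    # the closed form above
print(chain, formula)
```

For this choice of $f$ both expressions reduce to $2 + 2e^{-x}$, and the two printed numbers agree to high precision.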
<p>You cannot. You have to differentiate $f(x)$ as a function.</p> <p>$g'(x) = 2f'(x) + 1 + {f'(x) \over f(x)}$</p> <p>EDIT: Sorry, that would give ${dg(x) \over dx}$; Deepak is right.</p>
differentiation
<p><strong>Does anyone know anything about the following &quot;super-derivative&quot; operation?</strong> I just made this up so I don't know where to look, but it appears to have very meaningful properties. An answer to this question could be a reference and explanation, or known similar idea/name, or just any interesting properties or corollaries you can see from the definition here? Is there perhaps a better definition than the one I am using? What is your intuition for what the operator is doing (i.e. is it still in any sense a gradient)? Is there a way to separate the log part out, or remove it? Or is that an essential feature?</p> <p><strong>Definition:</strong> I'm using the word &quot;super-derivative&quot; but that is a made-up name. Define the &quot;super-derivative&quot;, operator <span class="math-container">$S_x^{\alpha}$</span>, about <span class="math-container">$\alpha$</span>, using the derivative type limit equation on the fractional derivative operator <span class="math-container">$D_x^\alpha$</span> <span class="math-container">$$ S_x^{\alpha} = \lim_{h \to 0} \frac{D^{\alpha+h}_x-D^{\alpha}_x}{h} $$</span> then for a function <span class="math-container">$$ S_x^{\alpha} f(x) = \lim_{h \to 0} \frac{D^{\alpha+h}_xf(x)-D^{\alpha}_x f(x)}{h} $$</span> for example, the [Riemann-Liouville, see appendix] fractional derivative of a power function is <span class="math-container">$$ D_x^\alpha x^k = \frac{\Gamma(k+1)}{\Gamma(k-\alpha+1)}x^{k-\alpha} $$</span> and apparently <span class="math-container">$$ S_x^{\alpha} x^k = \frac{\Gamma (k+1) x^{k-\alpha} (\psi ^{(0)}(-\alpha+k+1) - \log (x))}{\Gamma (-\alpha+k+1)} = (\psi ^{(0)}(-\alpha+k+1) - \log (x)) D_x^\alpha x^k $$</span> a nice example of this, the super-derivative of <span class="math-container">$x$</span> at <span class="math-container">$\alpha=1$</span> is <span class="math-container">$-\gamma - \log(x)$</span>, which turns up commonly. 
I'm wondering if this could be used to describe the series expansions of certain functions that have log or <span class="math-container">$\gamma$</span> terms, e.g. BesselK functions, or the Gamma function.</p> <p><strong>Potential relation to Bessel functions</strong>: For example, a fundamental function with this kind of series, (the inverse Mellin transform of <span class="math-container">$\Gamma(s)^2$</span>), is <span class="math-container">$2 K_0(2 \sqrt{x})$</span> with <span class="math-container">$$ 2 K_0(2 \sqrt{x}) = (-\log (x)-2 \gamma )+x (-\log (x)-2 \gamma +2)+\frac{1}{4} x^2 (-\log (x)-2 \gamma +3)+\\ +\frac{1}{108} x^3 (-3 \log (x)-6 \gamma +11)+\frac{x^4 (-6 \log (x)-12 \gamma +25)}{3456}+O\left(x^5\right) $$</span> in the end, taking the super-derivative of polynomials and matching coefficients we find <span class="math-container">$$ S_x^1[2 \sqrt{x}I_1(2\sqrt{x})] + I_0(2 \sqrt{x})\log(x) = 2K_0(2 \sqrt{x}) $$</span> which can also potentially be written in terms of linear operators as <span class="math-container">$$ [2 S_x x D_x + \log(x)]I_0(2 \sqrt{x}) = 2K_0(2 \sqrt{x}) $$</span> likewise <span class="math-container">$$ [2 S_x x D_x - \log(x)]J_0(2 \sqrt{x}) = \pi Y_0(2 \sqrt{x}) $$</span> I like this because it's similar to an eigensystem, but the eigenfunctions swap over.</p> <p><strong>Gamma Function:</strong> We can potentially define higher-order derivatives, for example <span class="math-container">$$ (S_x^{\alpha})^2 = \lim_{h \to 0} \frac{D^{\alpha+h}_x-2 D^{\alpha}_x + D^{\alpha-h}_x}{h^2} $$</span> and <span class="math-container">$$ (S_x^{\alpha})^3 = \lim_{h \to 0} \frac{D^{\alpha+3h}_x-3 D^{\alpha+2h}_x + 3 D^{\alpha+h}_x - D^{\alpha}_x}{h^3} $$</span></p> <p>this would be needed if there was any hope of explaining the series <span class="math-container">$$ \Gamma(x) = \frac{1}{x}-\gamma +\frac{1}{12} \left(6 \gamma ^2+\pi ^2\right) x+\frac{1}{6} x^2 \left(-\gamma ^3-\frac{\gamma \pi ^2}{2}+\psi ^{(2)}(1)\right)+ \\+\frac{1}{24} 
x^3 \left(\gamma ^4+\gamma ^2 \pi ^2+\frac{3 \pi ^4}{20}-4 \gamma \psi ^{(2)}(1)\right)+O\left(x^4\right) $$</span> using the 'super-derivative'. This appears to be <span class="math-container">$$ \Gamma(x) = [(S^1_x)^0 x]_{x=1} x^{-1} + [(S^1_x)^1 x]_{x=1} x + \frac{1}{2}[(S^1_x)^2 x]_{x=1} x^2 + \frac{1}{6} [(S^1_x)^3 x]_{x=1} x^3 + \cdots $$</span> so one could postulate <span class="math-container">$$ \Gamma(x) = \frac{1}{x}\sum_{k=0}^\infty \frac{1}{k!}[(S^1_x)^k x]_{x=1} x^{k} $$</span> which I think is quite beautiful.</p> <p><strong>Appendix:</strong> I used the following definition for the fractional derivative: <span class="math-container">$$ D_x^\alpha f(x) = \frac{1}{\Gamma(-\alpha)}\int_0^x (x-t)^{-\alpha-1} f(t) \; dt $$</span> implemented for example by the Wolfram Mathematica code found <a href="https://community.wolfram.com/groups/-/m/t/1313893" rel="noreferrer">here</a></p> <pre><code>FractionalD[\[Alpha]_, f_, x_, opts___] := Integrate[(x - t)^(-\[Alpha] - 1) (f /. x -&gt; t), {t, 0, x}, opts, GenerateConditions -&gt; False]/Gamma[-\[Alpha]] FractionalD[\[Alpha]_?Positive, f_, x_, opts___] := Module[ {m = Ceiling[\[Alpha]]}, If[\[Alpha] \[Element] Integers, D[f, {x, \[Alpha]}], D[FractionalD[-(m - \[Alpha]), f, x, opts], {x, m}] ] ] </code></pre> <p>I'm happy to hear more about other definitions for the fractional operators, and whether they are more suitable.</p>
<p>I've thought about this for a few days now, I didn't originally intend to answer my own question but it seems best to write this as an answer rather than add to the question. I think there is nice interpretation in the following: <span class="math-container">$$ f(x) = \lim_{h \to 0} \frac{e^{h f(x)}-1}{h} $$</span> also consider the Abel shift operator <span class="math-container">$$ e^{h D_x}f(x) = f(x+h) $$</span> from the limit form of the derivative we have (in the sense of an operator) <span class="math-container">$$ D_x = \lim_{h \to 0} \frac{e^{h D_x}-e^{0 D_x}}{h} = \lim_{h \to 0} \frac{e^{h D_x}-1}{h} $$</span> now we can also manipulate the first equation to get <span class="math-container">$$ \log f(x) = \lim_{h \to 0} \frac{f^h(x)-1}{h} $$</span> so by (a very fuzzy) extrapolation, we might have <span class="math-container">$$ \log(D_x) = \lim_{h \to 0} \frac{D_x^h-1}{h} $$</span> and applying that to a <em>function</em> we now get <span class="math-container">$$ \log(D_x) f(x) = \lim_{h \to 0} \frac{D_x^h f(x)-f(x)}{h} $$</span> which is the <span class="math-container">$\alpha = 0$</span> case of the 'super-derivative'. So <strong>one interpretation</strong> of this case is the logarithm of the derivative? If we apply the log-derivative to a fractional derivative then we have <span class="math-container">$$ \log(D_x) D^\alpha_x f(x) = \lim_{h \to 0} \frac{D_x^h D^\alpha_x f(x)-D^\alpha_x f(x)}{h} $$</span> there might be a question of the validity of <span class="math-container">$D_x^h D^\alpha_x = D_x^{\alpha+h}$</span> which I believe isn't always true for fractional derivatives.</p> <p>This interpretation would explain the <span class="math-container">$\log(x)$</span> type terms arising in the series above. I'd be interested to see if anyone has any comments on this? I'd love to see other similar interpretations or developments on this. What are the eigenfunctions for the <span class="math-container">$\log D_x$</span> operator for example? 
Can we form meaningful differential equations?</p> <p><strong>Edit:</strong> For some functions I have tried we do have the expected property <span class="math-container">$$ n \log(D_x) f(x) = \log(D_x^n) f(x) $$</span> with <span class="math-container">$$ \log(D_x^n) f(x) = \lim_{h \to 0} \frac{D_x^{n h} f(x)-f(x)}{h} $$</span></p>
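For what it's worth, the closed form $S_x^1 x = -\gamma-\log x$ quoted in the question can be checked numerically straight from the limit definition, using the power-rule formula for $D_x^\alpha x^k$ (a small Python sketch; the helper names are made up):

```python
import math

EULER_GAMMA = 0.5772156649015329

def frac_D(k, alpha, x):
    """Riemann-Liouville fractional derivative of x^k (power rule from the question)."""
    return math.gamma(k + 1) / math.gamma(k - alpha + 1) * x**(k - alpha)

def super_D(k, alpha, x, h=1e-6):
    # the limit defining S_x^alpha, approximated by a central difference in alpha
    return (frac_D(k, alpha + h, x) - frac_D(k, alpha - h, x)) / (2 * h)

x = 0.5
approx = super_D(1, 1.0, x)          # S_x^1 applied to f(x) = x
exact  = -EULER_GAMMA - math.log(x)  # the question's closed form
print(approx, exact)
```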
<p>Seems like you have happened upon some relations similar to ones I've written about over several years. Try for starters the <a href="https://math.stackexchange.com/questions/125343/lie-group-heuristics-for-a-raising-operator-for-1n-fracdnd-betan-fra">MSE-Q&amp;A</a> &quot;Lie group heuristics for a raising operator for <span class="math-container">$(-1)^n \frac{d^n}{d\beta^n}\frac{x^\beta}{\beta!}|_{\beta=0}$</span>.&quot; There are several posts on my blog (see my user page) on this topic, logarithm of the derivative operator (see also <a href="https://oeis.org/A238363" rel="nofollow noreferrer">A238363</a> and links therein, a new one will be added soon, my latest blog post), and fractional differ-integral calculus.</p>
differentiation
<p>Does there exist a function $f: \mathbb{R} \to \mathbb{R}$ having the following properties?</p> <ul> <li>$f(x) = 0$ for all $x \le 0$.</li> <li>$f(x) = 1$ for all $x \ge 1$.</li> <li>For $0 &lt; x &lt; 1$, $f$ is strictly increasing.</li> <li>$f$ is everywhere $C^\infty$.</li> <li>The sequence of $L^\infty$ norms $\langle \left\lVert f \right\rVert_\infty, \left\lVert f' \right\rVert_\infty, \left\lVert f'' \right\rVert_\infty, \dots \rangle$ is bounded.</li> </ul> <p>If we impose only the first four conditions, there is <a href="https://math.stackexchange.com/questions/328868/how-to-build-a-smooth-transition-function-explicitly">a well-known answer</a>: for $0 &lt; x &lt; 1$, define $f(x)$ by $$ f(x) = \frac{e^{-1/x}}{e^{-1/x} + e^{-1/(1-x)}} = \frac{1}{1 + e^{1/x - 1/(1-x)}} $$ However, the derivatives of this function appear to grow quite rapidly. (I'm not sure how to verify this, but it seems at least exponential to me.)</p> <p>If such a function does not exist, what is the smallest order of asymptotic growth that the sequence $\langle \left\lVert f \right\rVert_\infty, \left\lVert f' \right\rVert_\infty, \left\lVert f'' \right\rVert_\infty, \dots \rangle$ can have?</p>
<p>Suppose $f$ satisfies your first four hypotheses. By the mean value theorem, there is some $x_1\in (0,1)$ such that $f'(x_1)=1$. There is then some $x_2\in(0,x_1)$ such that $f''(x_2)=1/x_1$, and then there is some $x_3\in(0,x_2)$ such that $f'''(x_3)=1/x_1x_2&gt;1/x_1^2$, and so on. By induction, we see that we have points $x_n$ such that $f^{(n)}(x_n) \ge 1/x_1^{n-1}$ for all $n \ge 1$. Thus the norms $\|f^{(n)}\|_\infty$ must grow at least exponentially in $n$.</p> <p>By a more careful argument, you can show that in fact the derivatives must grow factorially. More precisely, if $\|f^{(n)}\|_\infty=M$, we get that for any $x$, $|f^{(n-1)}(x)|\leq M|x|$, and then we get $|f^{(n-2)}(x)|\leq M|x|^2/2$, and so on, up to $|f(x)|\leq M|x|^n/n!$. It follows that if there exists a constant $c&gt;0$ such that $\|f^{(n)}\|_\infty&lt;c^nn!$ for all $n$, then $f(x)=0$ for $|x|&lt;1/c$. But then replacing $f(x)$ with $g(x)=f(x\pm 1/c)$, we get that $f(x)=0$ for all $|x|&lt;2/c$, and iterating this we get that $f$ vanishes everywhere. So the norms $\|f^{(n)}\|_\infty$ must grow faster than $c^nn!$ for all $c&gt;0$.</p>
<p>If the sequence of infinity norms $\{\|f^{(n)}\|_{\infty}\}_{n = 0}^{\infty}$ is bounded, i.e. there exists a constant $M$ such that for all $n \in \mathbb{N}_0$, $|f^{(n)}(x)| \le M$ for all $x \in \mathbb{R}$, then by <a href="https://en.wikipedia.org/wiki/Analytic_function#Alternative_characterizations">this theorem</a> $f$ is analytic. </p> <p>However, if an analytic function is $0$ on any interval (such as $(-\infty,0)$), then its Taylor series about any point in that interval is identically $0$. Then, since $f$ is analytic, the Taylor series converges to $f$ everywhere, i.e. $f \equiv 0$.</p> <p>Therefore, no such function exists.</p>
differentiation
<p>In calculus of one variable I read:</p> <p>A function <span class="math-container">$f$</span> is differentiable on an <strong>open interval</strong> if it is differentiable at every number of the interval.</p> <p>I wonder why in the definition we suppose that the interval is open. This is the case in Rolle theorem, Mean value theorem,etc.</p> <blockquote> <p>Do we have a notion of a function being differentiable on a closed interval <span class="math-container">$[a,b]$</span>?</p> </blockquote>
<p>The problem is one of <em>consistent definitions</em>. Intuitively we can make sense of differentiable on a closed interval: but it requires a slightly more careful phrasing of the definition of "differentiable at a point". I don't know which book you are using, but I am betting that it contains (some version of the) following (naive) definition:</p> <blockquote> <p><strong>Definition</strong> A function <span class="math-container">$f$</span> is differentiable at <span class="math-container">$x$</span> if <span class="math-container">$\lim_{y\to x} \frac{f(y) - f(x)}{y-x}$</span> exists and is finite. </p> </blockquote> <p>To make sense of the limit, often times the textbook will explicitly require that <span class="math-container">$f$</span> be <em>defined on an open interval containing <span class="math-container">$x$</span></em>. And if the definition of differentiability at a point requires <span class="math-container">$f$</span> to be defined on an open interval of the point, the definition of differentiability on a set can only be stated for sets for which every point is contained in an open interval. To illustrate, consider a function <span class="math-container">$f$</span> defined <em>only</em> on <span class="math-container">$[0,1]$</span>. Now you try to determine whether <span class="math-container">$f$</span> is differentiable at <span class="math-container">$0$</span> by naively applying the above definition. But since <span class="math-container">$f(y)$</span> is undefined if <span class="math-container">$y&lt;0$</span>, the limit</p> <p><span class="math-container">$$ \lim_{y\to 0^-} \frac{f(y) - f(0)}{y} $$</span></p> <p>is <em>undefined</em>, and hence the derivative cannot exist at <span class="math-container">$0$</span> using one particular reading of the above definition. 
</p> <p>For this purpose, some people use the notion of <a href="http://en.wikipedia.org/wiki/Semi-differentiability" rel="noreferrer">semi-derivatives or one-sided derivatives</a> when dealing with boundary points. Other people just make the convention that when speaking of <em>closed</em> intervals, on the boundary the derivative is necessarily defined using a one-sided limit. </p> <hr> <p>Your textbook is not just being pedantic, however. If one wishes to study multivariable calculus, the definition of differentiability which requires taking limits in all directions is much more robust, compared to one-sided limits: the main problem being that in one dimension, given a boundary point, there is clearly a "left" and a "right", and each occupies "half" the available directions. This is no longer the case for domains in higher dimensions. Consider the domain</p> <p><span class="math-container">$$ \Omega = \{ y \leq \sqrt{|x|} \} \subsetneq \mathbb{R}^2$$</span></p> <p><span class="math-container">$\hspace{5cm}$</span><a href="https://i.sstatic.net/3gAKo.png" rel="noreferrer"><img src="https://i.sstatic.net/3gAKo.png" alt="enter image description here"></a></p> <p>A particular boundary point of <span class="math-container">$\Omega$</span> is the origin. However, from the origin, almost all directions point into <span class="math-container">$\Omega$</span> (the only one that doesn't is the one that points straight up, in the positive <span class="math-container">$y$</span> direction). So the <a href="http://en.wikipedia.org/wiki/Total_derivative" rel="noreferrer">total derivative</a> cannot be defined at the origin if a function <span class="math-container">$f$</span> is only defined on <span class="math-container">$\Omega$</span>. But if you try to loosen the definitions and allow considering only those "defined" directional derivatives, they may not patch together nicely at all. 
(A canonical example is the function <span class="math-container">$$f(x,y) = \begin{cases} 0 &amp; y \leq 0 \\ \text{sgn}(x) y^{3/2} &amp; y &gt; 0\end{cases}$$</span> where <span class="math-container">$\text{sgn}$</span> returns <span class="math-container">$+1$</span> if <span class="math-container">$x &gt; 0$</span>, <span class="math-container">$-1$</span> if <span class="math-container">$x &lt; 0$</span>, and <span class="math-container">$0$</span> if <span class="math-container">$x = 0$</span>. Its graph looks like what happens when you tear a piece of paper.) </p> <p><span class="math-container">$\hspace{4cm}$</span><a href="https://i.sstatic.net/kEqHq.png" rel="noreferrer"><img src="https://i.sstatic.net/kEqHq.png" alt="enter image description here"></a></p> <hr> <p>But note that this is mainly a <em>failure</em> of the original naive definition of differentiability (which, however, may be pedagogically more convenient). A much more general notion of differentiability can be defined:</p> <blockquote> <p><strong>Definition</strong> Let <span class="math-container">$S\subseteq \mathbb{R}$</span>, and <span class="math-container">$f$</span> an <span class="math-container">$\mathbb{R}$</span>-valued function defined over <span class="math-container">$S$</span>. Let <span class="math-container">$x\in S$</span> be a <a href="http://en.wikipedia.org/wiki/Limit_point" rel="noreferrer">limit point</a> of <span class="math-container">$S$</span>. 
Then we say that <span class="math-container">$f$</span> is differentiable at <span class="math-container">$x$</span> if there exists a linear function <span class="math-container">$L$</span> such that for every sequence of points <span class="math-container">$x_n\in S$</span> different from <span class="math-container">$x$</span> but converging to <span class="math-container">$x$</span>, we have that <span class="math-container">$$ \lim_{n\to\infty} \frac{f(x_n) - f(x) - L(x_n-x)}{|x_n - x|} = 0 $$</span></p> </blockquote> <p>This definition is a mouthful (and rather hard to teach in an introductory calculus course), but it has several advantages:</p> <ol> <li>It readily includes the case of closed intervals.</li> <li>It doesn't even need intervals. For example, you can let <span class="math-container">$S$</span> be the set <span class="math-container">$\{0\} \cup \{1/n\}$</span> where <span class="math-container">$n$</span> ranges over all positive integers. Then <span class="math-container">$0$</span> is a limit point, and so you can consider whether a function defined on this set is differentiable at the origin.</li> <li>It easily generalises to higher dimensions, and vector valued functions. Just let <span class="math-container">$f$</span> take values in <span class="math-container">$\mathbb{R}^n$</span>, and let the domain <span class="math-container">$S\subseteq \mathbb{R}^d$</span>. The rest of the definition remains unchanged. </li> <li>It captures, geometrically, the essence of differentiation, which is "approximation by tangent planes". 
</li> </ol> <p>For this definition, you can easily add</p> <blockquote> <p><strong>Definition</strong> If <span class="math-container">$S\subseteq \mathbb{R}$</span> is such that every point <span class="math-container">$x\in S$</span> is a limit point of <span class="math-container">$S$</span>, and <span class="math-container">$f$</span> is a real valued function on <span class="math-container">$S$</span>, we say that <span class="math-container">$f$</span> is differentiable on <span class="math-container">$S$</span> if <span class="math-container">$f$</span> is differentiable at all points <span class="math-container">$x\in S$</span>. </p> </blockquote> <p>Note how this looks very much like the statement you quoted in your question. In the definition of pointwise differentiability we replaced the condition "<span class="math-container">$x$</span> is contained in an open neighborhood" by "<span class="math-container">$x$</span> is a limit point". And in the definition of differentiability on a set we just replaced the condition "every point has an open neighborhood" by "every point is a limit point". (This is what I meant by consistency: however you define pointwise differentiability necessarily affects how you define set differentiability.)</p> <hr> <p>If you go on to study differential geometry, this issue manifests itself behind the definitions for "manifolds", "manifolds with boundaries", and "manifolds with corners". </p>
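Advantage 2 above can be made concrete numerically. A small sketch (the choices <span class="math-container">$f(x)=x^2$</span> and candidate derivative <span class="math-container">$0$</span> at the origin are mine, for illustration):

```python
# Differentiability at a limit point, per the sequence definition:
# S = {0} ∪ {1/n}, f(x) = x^2, candidate linear map L(h) = 0*h.
# The quotients (f(x_n) - f(0) - L(x_n - 0)) / |x_n - 0| must tend
# to 0 along the only sequences available, x_n = 1/n.
f = lambda x: x * x
L = lambda h: 0.0 * h

quotients = [(f(1 / n) - f(0) - L(1 / n)) / abs(1 / n)
             for n in range(1, 10001)]
print(quotients[-1])   # equals 1/10000, heading to 0
```

So this <span class="math-container">$f$</span> is differentiable at <span class="math-container">$0$</span> on <span class="math-container">$S$</span> with derivative <span class="math-container">$0$</span>, even though <span class="math-container">$S$</span> contains no interval at all.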
<p>A function is differentiable on a set $S$ if it is differentiable at every point of $S$. This is the definition that I have seen in the beginning/classic calculus texts, and this mirrors the definition of continuity on a set. </p> <p>So $S$ could be an open interval, closed interval, a finite set, in fact, it could be any set you want.</p> <p>So yes, we do have a notion of a function being differentiable on a closed interval.</p> <p>The reason Rolle's theorem talks about differentiability on the open interval $(a,b)$ is that it is a <em>weaker</em> assumption than requiring differentiability on $[a,b]$.</p> <p>Normally, theorems might try to make the assumptions as weak as possible, to be more generally applicable.</p> <p>For instance, the function:</p> <p>$$f(x) = x \sin \frac{1}{x}, x \gt 0$$ $$f(0) = 0$$</p> <p>is continuous at $0$, and differentiable everywhere except at $0$.</p> <p>You can still apply Rolle's theorem to this function on say the interval $(0,\frac{1}{\pi})$. If the statement of Rolle's theorem required the use of the closed interval, then you could not apply it to this function.</p>
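The point promised by Rolle's theorem in this example can even be located numerically. A sketch using plain bisection on $f'$ (the bracketing interval below is my choice, found by checking signs of $f'$; it is not part of the answer above):

```python
import math

# f(x) = x*sin(1/x) vanishes at both ends of [0, 1/pi], so Rolle's
# theorem promises a c in (0, 1/pi) with f'(c) = 0. For x > 0,
# f'(x) = sin(1/x) - cos(1/x)/x. The bracket below works because
# f' < 0 at x = 2/(3*pi) and f' > 0 at x = 1/pi.
def fprime(x):
    return math.sin(1.0 / x) - math.cos(1.0 / x) / x

lo, hi = 2.0 / (3.0 * math.pi), 1.0 / math.pi
for _ in range(60):                 # plain bisection on f'
    mid = 0.5 * (lo + hi)
    if fprime(mid) < 0:
        lo = mid
    else:
        hi = mid
c = 0.5 * (lo + hi)
print(c, fprime(c))
```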
matrices
<p>I'm TAing linear algebra next quarter, and it strikes me that I only know one example of an application I can present to my students. I'm looking for applications of elementary linear algebra outside of mathematics that I might talk about in discussion section.</p> <p>In our class, we cover the basics (linear transformations; matrices; subspaces of $\Bbb R^n$; rank-nullity), orthogonal matrices and the dot product (incl. least squares!), diagonalization, quadratic forms, and singular-value decomposition.</p> <p>Showing my ignorance, the only application of these I know is the one that was presented in the linear algebra class I took: representing dynamical systems as Markov processes, and diagonalizing the matrix involved to get a nice formula for the $n$th state of the system. But surely there are more than these.</p> <p>What are some applications of the linear algebra covered in a first course that can motivate the subject for students? </p>
<p>I was a teaching assistant in Linear Algebra the previous semester, and I collected a few applications to present to my students. This is one of them:</p> <p><strong>Google's PageRank algorithm</strong></p> <p>This algorithm is the "heart" of the search engine and sorts documents of the world-wide-web by their "importance" in decreasing order. For the sake of simplicity, let us look at a system consisting of only four different websites. We draw an arrow from $i$ to $j$ if there is a link from $i$ to $j$.</p> <p><img src="https://i.sstatic.net/mHgB7.png" alt=""></p> <p>The goal is to compute a vector $\underline{x} \in \mathbb{R}^4$, where each entry $x_i$ represents the website's importance. A bigger value means the website is more important. There are three criteria contributing to the $x_i$:</p> <ol> <li>The more websites contain a link to $i$, the bigger $x_i$ gets.</li> <li>Links from more important websites have a more relevant weight than those of less important websites.</li> <li>Links from a website which contains many links to other websites (outlinks) have less weight.</li> </ol> <p>Each website has exactly one "vote". This vote is distributed uniformly to each of the website's outlinks. This is known as <em>Web-Democracy</em>. It leads to a system of linear equations for $\underline{x}$. In our case, for</p> <p>$$P = \begin{pmatrix} 0&amp;0&amp;1&amp;1/2\\ 1/3&amp;0&amp;0&amp;0\\ 1/3&amp; 1/2&amp;0&amp;1/2\\ 1/3&amp;1/2&amp;0&amp;0 \end{pmatrix}$$</p> <p>the system of linear equations reads $\underline{x} = P \underline{x}$. The matrix $P$ is a stochastic matrix, hence $1$ is an eigenvalue of $P$. One of the corresponding eigenvectors is</p> <p>$$\underline{x} = \begin{pmatrix} 12\\4\\9\\6 \end{pmatrix},$$</p> <p>hence $x_1 &gt; x_3 &gt; x_4 &gt; x_2$. Let</p> <p>$$G = \alpha P + (1-\alpha)S,$$</p> <p>where $S$ is a matrix corresponding to purely randomised browsing without links, i.e. all entries are $\frac{1}{N}$ if there are $N$ websites. 
The matrix $G$ is called the <em>Google-matrix</em>. The inventors of the PageRank algorithm, Sergey Brin and Larry Page, chose $\alpha = 0.85$. Note that $G$ is still a stochastic matrix. An eigenvector for the eigenvalue $1$, i.e. a solution of $\underline{x} = G \underline{x}$, in our example would be (rounded)</p> <p>$$\underline{x} = \begin{pmatrix} 18\\7\\14\\10 \end{pmatrix},$$</p> <p>leading to the same ranking.</p>
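The ranking can be reproduced with power iteration, i.e. repeatedly applying $G$ to a probability vector. A minimal NumPy sketch (the iteration count and starting vector are arbitrary choices):

```python
import numpy as np

# Column j of P holds site j's outlink weights (its one "vote"
# split evenly), matching the 4-site example above.
P = np.array([[0,   0,   1, 1/2],
              [1/3, 0,   0, 0  ],
              [1/3, 1/2, 0, 1/2],
              [1/3, 1/2, 0, 0  ]])

alpha = 0.85
N = P.shape[0]
S = np.full((N, N), 1.0 / N)       # purely randomised browsing
G = alpha * P + (1 - alpha) * S    # the Google matrix

# Power iteration: repeatedly applying G drives any probability
# vector toward the eigenvector of the dominant eigenvalue 1.
x = np.full(N, 1.0 / N)
for _ in range(100):
    x = G @ x
print(x)   # ranking: site 1 > site 3 > site 4 > site 2
```

Since $G$ is stochastic with all entries positive, the iteration converges for any starting probability vector, which is what makes the method practical at web scale.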
<p>Another very useful application of Linear algebra is</p> <p><strong>Image Compression (Using the SVD)</strong></p> <p>Any real matrix $A$ can be written as</p> <p>$$A = U \Sigma V^T = \sum_{i=1}^{\operatorname{rank}(A)} u_i \sigma_i v_i^T,$$</p> <p>where $U$ and $V$ are orthogonal matrices and $\Sigma$ is a diagonal matrix. Every greyscale image can be represented as a matrix of the intensity values of its pixels, where each element of the matrix is a number between zero and one. For images of higher resolution, we have to store more numbers in the intensity matrix: e.g. for a 720p greyscale photo (1280 x 720) there are 921,600 elements in its intensity matrix. Instead of using up storage by saving all those elements, the singular value decomposition of this matrix leads to a low-rank approximation that requires much less storage.</p> <p>You can create a <em>rank $J$ approximation</em> of the original image by using the first $J$ singular values of its intensity matrix, i.e. only looking at</p> <p>$$\sum_{i=1}^J u_i \sigma_i v_i^T .$$</p> <p>This saves a large amount of disk space, but also causes the image to lose some of its visual clarity. Therefore, you must choose a number $J$ such that the loss of visual quality is minimal but there are significant memory savings. Example:</p> <p><img src="https://i.sstatic.net/JkUsU.png" alt=""></p> <p>The image on the RHS is an approximation of the image on the LHS by keeping $\approx 10\%$ of the singular values. It takes up $\approx 18\%$ of the original image's storage. (<a href="http://www.math.uci.edu/icamp/courses/math77c/demos/SVD_compress.pdf" rel="noreferrer">Source</a>)</p>
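The rank-$J$ approximation is short to write on top of `numpy.linalg.svd`. A sketch with a synthetic stand-in for an intensity matrix (chosen as a sum of two outer products, so its rank is at most $2$ and the rank-$2$ approximation is essentially exact):

```python
import numpy as np

def rank_j_approx(A, J):
    # Keep only the first J singular triplets of A.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :J] @ np.diag(s[:J]) @ Vt[:J, :]

# Toy 64x64 "intensity matrix" of rank at most 2.
t = np.linspace(0, 1, 64)
A = np.outer(np.sin(2 * np.pi * t), t) + 0.5 * np.outer(t, t ** 2)

A2 = rank_j_approx(A, 2)
err = np.linalg.norm(A - A2) / np.linalg.norm(A)
print(err)   # relative error at machine-precision level
```

For an $m \times n$ image, storing the rank-$J$ factors costs $J(m+n+1)$ numbers instead of $mn$, which is where the memory savings come from.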
probability
<p><strong>Question :</strong> What is the difference between Average and Expected value?</p> <hr /> <p>I have been going through the definition of expected value on <a href="http://en.wikipedia.org/wiki/Expected_value" rel="noreferrer">Wikipedia</a> beneath all that jargon it seems that the expected value of a distribution is the average value of the distribution. Did I get it right ?</p> <p>If yes, then what is the point of introducing a new term ? Why not just stick with the average value of the distribution ?</p>
<p>The concept of expectation value or expected value may be understood from the following example. Let $X$ represent the outcome of a roll of an unbiased six-sided die. The possible values for $X$ are 1, 2, 3, 4, 5, and 6, each having the probability of occurrence of 1/6. The expectation value (or expected value) of $X$ is then given by </p> <p>$(X)\text{expected} = 1\cdot(1/6)+2\cdot(1/6)+3\cdot(1/6)+4\cdot(1/6)+5\cdot(1/6)+6\cdot(1/6) = 21/6 = 3.5$</p> <p>Suppose that in a sequence of ten rolls of the die the outcomes are 5, 2, 6, 2, 2, 1, 2, 3, 6, 1. Then the average (arithmetic mean) of the results is given by</p> <p>$(X)\text{average} = (5+2+6+2+2+1+2+3+6+1)/10 = 3.0$</p> <p>The average value here is 3.0, at a distance of 0.5 from the expectation value of 3.5. If we roll the die $N$ times, where $N$ is very large, then the average will converge to the expected value, i.e., $(X)\text{average}=(X)\text{expected}$. This is because, when $N$ is very large, each possible value of $X$ (i.e. 1 to 6) occurs with relative frequency close to its probability of 1/6, driving the average toward the expectation value.</p>
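The convergence of the sample average to the expectation can be simulated directly. A sketch (the seed and sample sizes are arbitrary choices):

```python
import random

random.seed(0)

expected = sum(face * (1 / 6) for face in range(1, 7))   # 21/6 = 3.5

# Small samples can sit noticeably far from 3.5; as N grows the
# sample average settles near the expected value (law of large numbers).
for N in (10, 1000, 100000):
    rolls = [random.randint(1, 6) for _ in range(N)]
    print(N, sum(rolls) / N)
```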
<p>The expected value, or mean $\mu_X =E_X[X]$, is a parameter associated with the distribution of a random variable $X$.</p> <p>The average $\overline X_n$ is a computation performed on a sample of size $n$ from that distribution. It can also be regarded as an unbiased estimator of the mean, meaning that if each $X_i\sim X$, then $E_X[\overline X_n] = \mu_X$.</p>
matrices
<p>I wrote an answer to <a href="https://math.stackexchange.com/questions/854154/when-is-r-a-1-rt-invertible/854160#854160">this</a> question based on determinants, but subsequently deleted it because the OP is interested in non-square matrices, which effectively blocks the use of determinants and thereby undermined the entire answer. However, it can be salvaged if there exists a function $\det$ defined on <strong>all</strong> real-valued matrices (not just the square ones) having the following properties.</p> <ol> <li>$\det$ is real-valued</li> <li>$\det$ has its usual value for square matrices</li> <li>$\det(AB)$ always equals $\det(A)\det(B)$ whenever the product $AB$ is defined.</li> <li>$\det(A) \neq 0$ iff $\det(A^\top) \neq 0$</li> </ol> <p>Does such a function exist?</p>
<p>Such a function cannot exist. Let $A = \begin{pmatrix} 1 &amp; 0 \\ 0 &amp; 1 \\ 0 &amp; 0\end{pmatrix}$ and $B = \begin{pmatrix} 1 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 0 \end{pmatrix}$. Then, since both $AB$ and $BA$ are square, if there existed a function $D$ with properties 1-3 as stated, there would hold \begin{align} \begin{split} 1 &amp;= \det \begin{pmatrix} 1 &amp; 0 \\ 0 &amp; 1 \end{pmatrix} = \det(BA) = D(BA) = D(B)D(A) \\ &amp;= D(A)D(B) = D(AB) = \det(AB) = \det \begin{pmatrix} 1 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 0 \end{pmatrix} = 0. \end{split} \end{align}</p>
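A quick numerical restatement of the counterexample (NumPy is used only to evaluate the two determinants):

```python
import numpy as np

A = np.array([[1, 0],
              [0, 1],
              [0, 0]])
B = np.array([[1, 0, 0],
              [0, 1, 0]])

# BA is the 2x2 identity while AB is a singular 3x3 matrix, so no
# multiplicative D can satisfy D(A)D(B) = det(BA) = 1 and
# D(A)D(B) = det(AB) = 0 simultaneously.
print(np.linalg.det(B @ A))   # 1.0
print(np.linalg.det(A @ B))   # 0.0
```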
<p>This extension of determinants has all 4 properties if $A$ is a square matrix (except that it loses the sign), and retains some attributes of determinants otherwise.</p> <p>If $A$ has more rows than columns, then <span class="math-container">$$|A|^2=|A^{T}A|$$</span> If $A$ has more columns than rows, then <span class="math-container">$$|A|^2=|AA^{T}|$$</span></p> <p>This has a valid and useful geometric interpretation. Given a transformation $A$, this quantity still measures how the transformation scales volumes, limited to the smaller of the dimensions of the input and output spaces.</p> <p>You may take this to be the absolute value of the determinant. It is always nonnegative because, when looking at a space embedded in a higher-dimensional space, a positive area can become a negative area when looked at from behind.</p>
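A sketch of this extended determinant (the helper name `pseudo_det` is mine, not a standard library function):

```python
import numpy as np

# sqrt(det(A^T A)) for tall A, sqrt(det(A A^T)) for wide A;
# for square A this equals |det(A)| (the sign is lost).
def pseudo_det(A):
    m, n = A.shape
    G = A.T @ A if m >= n else A @ A.T
    return np.sqrt(np.linalg.det(G))

# A maps R^2 into a plane inside R^3 and scales areas by 2.
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])
print(pseudo_det(A))   # 2.0: the unit square maps to a 1x2 rectangle
```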
linear-algebra
<p>Recently, I answered <a href="https://math.stackexchange.com/q/1378132/80762">this question about matrix invertibility</a> using a solution technique I called a &quot;<strong>miracle method</strong>.&quot; The question and answer are reproduced below:</p> <blockquote> <p><strong>Problem:</strong> Let <span class="math-container">$A$</span> be a matrix satisfying <span class="math-container">$A^3 = 2I$</span>. Show that <span class="math-container">$B = A^2 - 2A + 2I$</span> is invertible.</p> <p><strong>Solution:</strong> Suspend your disbelief for a moment and suppose <span class="math-container">$A$</span> and <span class="math-container">$B$</span> were scalars, not matrices. Then, by power series expansion, we would simply be looking for <span class="math-container">$$ \frac{1}{B} = \frac{1}{A^2 - 2A + 2} = \frac{1}{2}+\frac{A}{2}+\frac{A^2}{4}-\frac{A^4}{8}-\frac{A^5}{8} + \cdots$$</span> where the coefficient of <span class="math-container">$A^n$</span> is <span class="math-container">$$ c_n = \frac{1+i}{2^{n+2}} \left((1-i)^n-i (1+i)^n\right). $$</span> But we know that <span class="math-container">$A^3 = 2$</span>, so <span class="math-container">$$ \frac{1}{2}+\frac{A}{2}+\frac{A^2}{4}-\frac{A^4}{8}-\frac{A^5}{8} + \cdots = \frac{1}{2}+\frac{A}{2}+\frac{A^2}{4}-\frac{A}{4}-\frac{A^2}{4} + \cdots $$</span> and by summing the resulting coefficients on <span class="math-container">$1$</span>, <span class="math-container">$A$</span>, and <span class="math-container">$A^2$</span>, we find that <span class="math-container">$$ \frac{1}{B} = \frac{2}{5} + \frac{3}{10}A + \frac{1}{10}A^2. $$</span> Now, what we've just done should be total nonsense if <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are really matrices, not scalars. 
But try setting <span class="math-container">$B^{-1} = \frac{2}{5}I + \frac{3}{10}A + \frac{1}{10}A^2$</span>, compute the product <span class="math-container">$BB^{-1}$</span>, and you'll find that, <strong>miraculously</strong>, this answer works!</p> </blockquote> <p>I discovered this solution technique some time ago while exploring a similar problem in Wolfram <em>Mathematica</em>. However, I have no idea why any of these manipulations should produce a meaningful answer when scalar and matrix inversion are such different operations. <strong>Why does this method work?</strong> Is there something deeper going on here than a serendipitous coincidence in series expansion coefficients?</p>
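The check at the end can be run on a genuinely non-scalar example. A sketch (the particular matrix $A$, namely $\sqrt[3]{2}$ times a $120^\circ$ rotation, is my choice for illustration; any $A$ with $A^3 = 2I$ would do):

```python
import numpy as np

# One concrete non-scalar A with A^3 = 2I: the cube root of 2 times
# a rotation by 120 degrees (the rotation cubes to the identity).
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
A = 2 ** (1 / 3) * np.array([[c, -s],
                             [s,  c]])
I = np.eye(2)

B = A @ A - 2 * A + 2 * I
B_inv = (2 / 5) * I + (3 / 10) * A + (1 / 10) * (A @ A)
print(np.allclose(A @ A @ A, 2 * I))   # True: A^3 = 2I holds
print(np.allclose(B @ B_inv, I))       # True: the series answer inverts B
```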
<p>The real answer is the set of $n\times n$ matrices forms a Banach algebra - that is, a Banach space with a multiplication that distributes the right way. In the reals, the multiplication is the same as scaling, so the distinction doesn't matter and we don't think about it. But with matrices, scaling and multiplying matrices is different. The point is that there is no miracle. Rather, the argument you gave only uses tools from Banach algebras (notably, you didn't use commutativity). So it generalizes nicely. </p> <p>This kind of trick is used all the time to great effect. One classic example is proving that when $\|A\|&lt;1$ there is an inverse of $1-A$. One takes the argument about geometric series from real analysis, checks that everything works in a Banach algebra, and then you're done. </p>
<p>Think about how you derive the finite version of the geometric series formula for scalars. You write:</p> <p>$$x \sum_{n=0}^N x^n = \sum_{n=1}^{N+1} x^n = \sum_{n=0}^N x^n + x^{N+1} - 1.$$</p> <p>This can be written as $xS=S+x^{N+1}-1$. So you move the $S$ over, and you get $(x-1)S=x^{N+1}-1$. Thus $S=(x-1)^{-1}(x^{N+1}-1)$.</p> <p>There is only one point in this calculation where you needed to be careful about commutativity of multiplication, and that is in the step where you multiply both sides by $(x-1)^{-1}$. In the above I was careful to write this on the <em>left</em>, because $xS$ originally multiplied $x$ and $S$ with $x$ on the left. Thus, provided we do this one multiplication step on the left, everything we did works when $x$ is a member of any ring with identity such that $x-1$ has a multiplicative inverse. </p> <p>As a result, if $A-I$ is invertible, then </p> <p>$$\sum_{n=0}^N A^n = (A-I)^{-1}(A^{N+1}-I).$$</p> <p>Moreover, if $\| A \| &lt; 1$ (in any operator norm), then the $A^{N+1}$ term decays as $N \to \infty$. As a result, the partial sums are Cauchy, and so if the ring in question is also complete with respect to this norm, you obtain</p> <p>$$\sum_{n=0}^\infty A^n = (I-A)^{-1}.$$</p> <p>In particular, in this situation we recover the converse: if $\| A \| &lt; 1$ then $I-A$ is invertible.</p>
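The final convergence statement can be watched numerically. A sketch with an arbitrary matrix of spectral norm below $1$:

```python
import numpy as np

# For ||A|| < 1 the partial sums of the geometric series converge
# to (I - A)^{-1}; the matrix here is an arbitrary small-norm example.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
I = np.eye(2)

S, term = np.zeros_like(A), I.copy()
for _ in range(200):       # S accumulates sum_{n=0}^{199} A^n
    S += term
    term = term @ A

print(np.linalg.norm(A, 2) < 1)              # True: spectral norm check
print(np.allclose(S, np.linalg.inv(I - A)))  # True
```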
logic
<p>I wonder about the foundations of set theory and my question can be stated in some related forms:</p> <ul> <li><p>If we base Zermelo–Fraenkel set theory on first order logic, does that mean first order logic is not allowed to contain the notion of sets?</p></li> <li><p>The axioms of Zermelo–Fraenkel set theory seem to already expect the notion of a set to be defined. Is there are pre-definition of what we are dealing with? And where?</p></li> <li><p>In set theory, if a <em>function</em> is defined as a set using tuples, why or how does first order logic and the axioms of Zermelo–Fraenkel set theory contain parameter dependend properties $\psi(u_1,u_2,q,...)$, which basically are functions?</p></li> </ul>
<p>(1) This is actually not a problem in the form you have stated it -- the rules of what is a valid proof in first-order logic can be stated without any reference to sets, such as by speaking purely about operations on concrete strings of symbols, or by arithmetization with Gödel numbers. </p> <p>However, if you want to do <em>model theory</em> on your first-order theory you need sets. And even if you take the syntactical viewpoint and say that it is all just strings, that just pushes the fundamental problem down a level, because how can we then formalize reasoning about natural numbers (or symbol strings) if first-order logic itself "depends on" natural numbers (or symbol strings)?</p> <p>The answer to that is that is just how it is -- the formalization of first-order logic is <em>not</em> really the ultimate basis for all of mathematics, but a <em>mathematical model</em> of mathematical reasoning itself. The model is not the thing, and mathematical reasoning is ultimately not really a formal theory, but something we do because we <strong>intuitively believe</strong> that it works.</p> <p>(2) This is a misunderstanding. In axiomatic set theory, the axioms themselves <em>are</em> the definition of the notion of a set: A set is whatever behaves like the axioms say sets behave.</p> <p>(3) What you quote is how functions usually are <em>modeled in set theory</em>. Again, the model is not the thing, and just because we can create a model of our abstract concept of functional relation in set theory, it doesn't mean that our abstract concept <em>an sich</em> is necessarily a creature of set theory. Logic has its own way of modeling functional relations, namely by writing down syntactic rules for how they must behave -- this is less expressive but sufficient for logic's need, and is <em>no less valid</em> as a model of functional relations than the set-theoretic model is.</p>
<p>For those concerned with secure foundations for mathematics, the problem is basically this: In order to even state the axioms of ZF set theory, we need to have developed first-order logic; but in order to recognize conditions under which first-order logical formulas can be regarded as "true" or "valid", we need to give a model-theoretic semantics for first-order logic, and this in turn requires certain notions from set theory. At first glance, this seems troublingly circular.</p> <p>Obviously, this is an extremely subtle area. But as someone who has occasionally been troubled by these kinds of questions, let me offer a few personal thoughts, for whatever they may be worth. </p> <p>My own view goes something like this. Let's first distinguish carefully between logic and metalogic. </p> <p>In logic, which we regard as a pre-mathematical discipline, all we can do is give grammatical rules for constructing well-formed statements and derivation rules for constructing "proofs" (or perhaps, so as not to beg the question, we should call them "persuasive demonstrations"); these should be based purely on the grammatical forms of the various statements involved. There is, at this point, no technical or mathematical theory of how to assign meanings to these formulas; instead, we regard them merely as symbolic abbreviations for the underlying natural language utterances, with whatever informal psychological interpretations these already have attached to them. We "justify" the derivation rules informally by persuading ourselves through illustrative examples and informal semantical arguments that they embody a reasonable facsimile of some naturally occurring reasoning patterns we find in common use. What we don't do, at this level, is attempt to specify any formal semantics. So we can't yet prove or even make sense of the classical results from metalogic such as the soundness and/or completeness of a particular system of derivation rules. 
For us, "soundness" means only that each derivation rule represents a commonly accepted and intuitively legitimate pattern of linguistic inference; "completeness" means basically nothing. We think of first-order logic as merely a transcription into symbolic notation of certain relatively uncontroversial aspects of our pre-critical reasoning habits. We insist on this symbolic representation for two reasons: (i) regimentation, precisely because it severely limits expressive creativity, is a boon to precision; and (ii) without such a symbolic representation, there would be no hope of eventually formalizing anything.</p> <p>Now we take our first step into true formalization by setting down criteria (in a natural language) for what should constitute formal expression in the first place. This involves taking as primitives (i) the notion of a token (i.e., an abstract stand-in for a "symbol"), (ii) a binary relation of distinctness between tokens, (iii) the notion of a string, and (iv) a ternary relation of "comprisal" between strings, subject to certain axioms. The axioms governing these primitives would go something like this. First, each token is a string; these strings are not comprised of any other strings. By definition, two strings that are both tokens are distinct if the two tokens are distinct, and any string that is a token is distinct from any string that is not a token. Secondly, for any token t and any string S, there is a unique string U comprised of t and S (we can denote this U by 'tS'; this is merely a notation for the string U, just as 't' is merely a notation for the abstract token involved --- it should not be thought of as being the string itself). By definition, tS and t'S' are distinct strings if either t and t' are distinct tokens or S and S' are distinct strings (this is intuitively a legitimate recursive definition). Thirdly, nothing is a string except by virtue of the first two axioms. 
We can define the concatenation UV of strings U and V recursively: if U is a token t, define UV as the string tV; otherwise, U must be tS for some token t and string S, and we define UV as the string tW, where W is the (already defined) concatenation SV. This operation can be proven associative by induction. All this is part of intuitive, rather than formal, mathematics. There is simply no way to get mathematics off the ground without understanding that some portion of it is just part of the linguistic activity we routinely take part in as communicative animals. This said, it is worth the effort to formulate things so that this part of it becomes as small as possible.</p> <p>Toward that end, we can insist that any further language used in mathematics be formal (or at least capable of formal representation). This means requiring that some distinguishable symbol be set aside and agreed upon for each distinct token we propose to take for granted, and that everything asserted of the tokens can ultimately be formulated in terms of the notions of distinctness, strings, comprisal, and concatenation discussed above. Notice that there is no requirement to have a finite set of tokens, nor do we have recourse within a formal language to any notion of a set or a number at all.</p> <p>Coming back to our symbolic logic, we can now prove as a theorem of intuitive (informal) mathematics that, with the symbolic abbreviations (the connectives, quantifiers, etc.) mapped onto distinct tokens, we can formulate the grammatical and derivational rules of logic strictly in terms of strings and the associated machinery of formal language theory. This brings one foot of symbolic logic within the realm of formal mathematics, giving us a language in which to express further developments; but one foot must remain outside the formal world, connecting our symbolic language to the intuitions that gave birth to it.
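<p>A purely illustrative sketch of the axioms above: here I model tokens as single-character Python strings and a string comprised of t and S as a nested pair, neither of which the axioms require; the function names are my own. The recursion for concatenation is exactly the one in the text, and associativity (provable by induction) can be checked on instances.</p>

```python
# Illustrative model of the string axioms (representations and names are mine):
# a "token" is a leaf; a "string" is either a token or a pair (token, string).

def is_token(s):
    # We model tokens as 1-character Python strings.
    return isinstance(s, str) and len(s) == 1

def comprise(t, s):
    # The unique string 'tS' comprised of token t and string s.
    assert is_token(t)
    return (t, s)

def concat(u, v):
    # Concatenation UV, by recursion on U as in the text:
    # if U is a token t, UV = tV; if U = tS, UV = t(SV).
    if is_token(u):
        return comprise(u, v)
    t, s = u
    return comprise(t, concat(s, v))

def flatten(s):
    # Readable rendering of a string (not part of the axioms).
    return s if is_token(s) else s[0] + flatten(s[1])

a = comprise('a', comprise('b', 'c'))   # the string 'abc'
d = comprise('d', 'e')                  # the string 'de'
f = 'f'                                 # a lone token

# Associativity, checked on an instance:
assert concat(concat(a, d), f) == concat(a, concat(d, f))
assert flatten(concat(a, d)) == 'abcde'
```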
The point is that we have an excellent, though completely extra-mathematical, reason for using this particular formal language and its associated deduction system: namely, we believe it captures as accurately as possible the intuitive notions of correct reasoning that were originally only written in shorthand. This is the best we can hope to do; there must be a leap of faith at some point. </p> <p>With our technically uninterpreted symbolic/notational view of logic agreed upon (and not everyone comes along even this far, obviously -- the intuitionists, for example), we can proceed to formulate the axioms of ZF set theory in the usual way. This adds sets and membership to the primitive notions already taken for granted (tokens, strings, etc.). The motivating considerations for each new axiom, for instance the desire to avoid Russell's paradox, make sense within the intuitive understanding of first-order logic, and these considerations are at any rate never considered part of the formal development of set theory, on any approach to foundational questions.</p> <p>Once we have basic set theory, we can go back and complete our picture of formal reasoning by defining model-theoretic semantics for first-order logic (or indeed, higher-order logic) in terms of sets and set-theoretic ideas like inclusion and n-tuples. 
The purpose of this is twofold: first, to deepen our intuitive understanding of correct reasoning by embedding all of our isolated instincts about valid patterns of deduction within a single rigid structure, as well as to provide an independent check on those instincts; and secondly, to enable the development of the mathematical theory of logic, i.e., metalogic, where beautiful results about reasoning systems in their own right can be formulated and proven.</p> <p>Once we have a precise understanding of the semantics (and metatheory) of first-order logic, we can of course go back to the development of set theory and double-check that any logical deductions we used in proving theorems there were indeed genuinely valid in the formal sense (and they all turn out to be, of course). This doesn't count as a technical justification of one by means of the other, but only shows that the two together have a comforting sort of coherence. I think of their relation as somehow analogous to the way in which the two concepts of "term" and "formula" in logical grammar cannot be defined in isolation from one another, but must instead be defined by a simultaneous recursion. That's obviously just a vague analogy, but it's one I find helpful and comforting.</p> <p>I hope this made sense and was somewhat helpful. My apologies for being so wordy. Cheers. - Joseph</p>
probability
<p>A game is played where a standard six-sided die is rolled until a $6$ is rolled, and the sum of all of the rolls up to and including the $6$ is taken. What is the probability that this sum is even?</p> <p>I know that this is a geometric distribution and the expected number of rolls is $\frac1{1/6} = 6$ rolls until a $6$ occurs, along with the expected value being $21$ ($6$ rolls times expected value of $3.5$ per roll), but I'm not certain how to proceed from there. Would the expected range (if it is even relevant) be from $11 = 1·5+6$ to $31 = 5·5+6$? The answer is supposedly $\frac47$. I'm also curious about how this question would change if the stopping number was anything else, say a $3$ stopping the sequence rather than a $6$. Thank you in advance!</p>
<p>Let $p$ be the desired probability, and consider the first roll. It is either a $6$, in which case we're done and the sum is even, a $2$ or $4$, in which case we want the sum of the rest of the terms to be even, or a $1,3$, or $5$, in which case we want the sum of the rest to be odd.</p> <p>Thus $$p = \frac{1}{6}+ \frac{1}{3}p+\frac{1}{2}(1-p)$$ which simplifies to $p=\frac{4}{7}$.</p>
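<p>The linear equation above can be solved in exact arithmetic, e.g. with Python's <code>Fraction</code> (a quick sketch):</p>

```python
from fractions import Fraction

# Solve p = 1/6 + (1/3)p + (1/2)(1 - p) exactly:
# collecting p-terms gives (1 - 1/3 + 1/2) p = 1/6 + 1/2, i.e. (7/6) p = 2/3.
const = Fraction(1, 6) + Fraction(1, 2)       # constant terms
coeff = 1 - Fraction(1, 3) + Fraction(1, 2)   # coefficient of p
p = const / coeff
assert p == Fraction(4, 7)
print(p)  # 4/7
```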
<p>We need only consider the rolls <em>before</em> a $6$ is obtained, because the final $6$ is even and so doesn't change the parity of our total. Let $p_n$ represent the probability that the sum of $n$ rolls, not including any $6$, is even. Then we have: $p_{n+1}=\frac25p_n + \frac35(1-p_n)$, because a $2$ or a $4$ keeps a previous even total even, while a $1$, $3$ or $5$ makes a previous odd total into an even total. This simplifies to: $p_{n+1}=\frac35-\frac15p_n$. We also have $p_0=1$. We can solve this recurrence, and find that</p> <p>$$p_n=\frac12\left(1+\left(-\frac15\right)^n\right)$$</p> <p>Now, let $x_n$ represent the probability of rolling $n$ non-6's before the first $6$, so $x_n=\frac16\left(\frac56\right)^n$. The number we need is:</p> <p>$$\begin{align} \sum\limits_{n=0}^\infty x_np_n &amp;= \sum\limits_{n=0}^\infty \left[\frac16\left(\frac56\right)^n\cdot\frac12\left(1+\left(-\frac15\right)^n\right)\right]\\ &amp;=\frac1{12}\sum\limits_{n=0}^\infty \left[\left(\frac56\right)^n + \left(-\frac16\right)^n\right]\\ &amp;=\frac1{12}\left(6 + \frac67\right) = \frac47 \end{align}$$</p> <p>That said, @carmichael561's answer is much, much nicer.</p>
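<p>Both derivations can be checked empirically with a short Monte Carlo sketch (function name, seed, and trial count are my own choices). Conditioning on the first roll also answers the follow-up about stopping on a $3$: since $3$ is odd, the recursion becomes $p = 0\cdot\frac16 + \frac12 p + \frac13(1-p)$, giving $p = \frac25$, and the simulation agrees.</p>

```python
import random

def play(stop, rng):
    # Roll a fair die until `stop` appears; return the sum of all rolls,
    # including the final stopping roll.
    total = 0
    while True:
        r = rng.randint(1, 6)
        total += r
        if r == stop:
            return total

rng = random.Random(12345)
trials = 200_000

p6 = sum(play(6, rng) % 2 == 0 for _ in range(trials)) / trials
assert abs(p6 - 4/7) < 0.01     # 4/7 ≈ 0.5714

# Follow-up: stopping on an odd 3 gives p = 0*(1/6) + (1/2)p + (1/3)(1-p),
# i.e. p = 2/5.
p3 = sum(play(3, rng) % 2 == 0 for _ in range(trials)) / trials
assert abs(p3 - 2/5) < 0.01
```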
differentiation
<p>Are $\sin$ and $\cos$ the only functions that satisfy the following relationship: $$ x'(t) = -y(t)$$ and $$ y'(t) = x(t) $$</p>
<p>The relationships $x'(t) = -y(t)$ and $y'(t) = x(t)$ imply $$x''(t) = -y'(t) = -x(t)$$ i.e. $$x''(t) = -x(t)$$ which only has solutions $x(t) = A \cos t + B \sin t$ for some constants $A$, $B$. For a given choice of the constants we then get $y(t) = -x'(t) = A \sin t - B \cos t$.</p>
<p>Basically, yes, they are. More precisely: if $x,y\colon\mathbb{R}\longrightarrow\mathbb{R}$ are differentiable functions such that $x'=-y$ and that $y'=x$, then there are numbers $k$ and $\omega$ such that$$(\forall t\in\mathbb{R}):x(t)=k\cos(t+\omega)\text{ and }y(t)=k\sin(t+\omega).$$</p>
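<p>As a numerical sanity check (a sketch; the step size, horizon, and tolerance are arbitrary choices of mine): integrating $x'=-y$, $y'=x$ with a classical fourth-order Runge-Kutta step from initial data $(x_0,y_0)$ should reproduce $x(t)=x_0\cos t - y_0\sin t$, $y(t)=y_0\cos t + x_0\sin t$, which is the $k\cos(t+\omega)$, $k\sin(t+\omega)$ family rewritten in terms of the initial values.</p>

```python
import math

def rk4_step(x, y, h):
    # One classical Runge-Kutta step for the system x' = -y, y' = x.
    def f(x, y):
        return -y, x
    k1x, k1y = f(x, y)
    k2x, k2y = f(x + h*k1x/2, y + h*k1y/2)
    k3x, k3y = f(x + h*k2x/2, y + h*k2y/2)
    k4x, k4y = f(x + h*k3x, y + h*k3y)
    return (x + h*(k1x + 2*k2x + 2*k3x + k4x)/6,
            y + h*(k1y + 2*k2y + 2*k3y + k4y)/6)

x0, y0 = 0.3, -1.2            # arbitrary initial data
x, y, h = x0, y0, 0.001
for _ in range(2000):         # integrate up to t = 2
    x, y = rk4_step(x, y, h)

t = 2.0
assert abs(x - (x0*math.cos(t) - y0*math.sin(t))) < 1e-8
assert abs(y - (y0*math.cos(t) + x0*math.sin(t))) < 1e-8
```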
probability
<p>There are many descriptions of the "birthday problem" on this site — the problem of finding the probability that in a group of $n$ people there will be any (= at least 2) sharing a birthday.</p> <p>I am wondering how to find instead the expected number of people sharing a birthday in a group of $n$ people. I remember that expectation means the weighted sum of the probabilities of each outcome:</p> <p>$$E[X]=\sum_{i=0}^{n-1}x_ip_i$$</p> <p>And here $x$ must mean the number of collisions involving $i+1$ people, which is $n\choose i$. All $n$ people born on different days means no collisions, $i=0$; two people born on the same day means $n$ collisions, $i=1$; all $n$ people born on the same day means $n$ collisions, $i=n-1$.</p> <p>Since the probabilities of three or more people with the same birthday are vanishingly small compared to two people with the same birthday, and decreases faster than $x$ increases, is it correct to say that this expectation can be approximated by</p> <p>$$E[X]\approx {n\choose 0}p_{no\ collisions}+{n\choose 1}p_{one\ collision}$$</p> <p>This doesn't look right to me and I'd appreciate some guidance.</p> <hr> <p>Sorry - edited to change ${n\choose 1}$ to ${n\choose 0}$ in second equation. Sloppy of me.</p>
<p>The probability person $B$ shares person $A$'s birthday is $1/N$, where $N$ is the number of equally possible birthdays, </p> <p>so the probability $B$ does not share person $A$'s birthday is $1-1/N$, </p> <p>so the probability $n-1$ other people do not share $A$'s birthday is $(1-1/N)^{n-1}$, </p> <p>so the expected number of people who do not have others sharing their birthday is $n(1-1/N)^{n-1}$, </p> <p>so the expected number of people who share birthdays with somebody is $n\left(1-(1-1/N)^{n-1}\right)$.</p>
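<p>A Monte Carlo sketch of this formula (function name, seed, and trial count are my own choices), assuming uniform independent birthdays:</p>

```python
import random
from collections import Counter

def expected_sharers(n, N, trials, rng):
    # Monte Carlo estimate of the expected number of people whose
    # birthday is shared by at least one other person in the group.
    total = 0
    for _ in range(trials):
        counts = Counter(rng.randrange(N) for _ in range(n))
        total += sum(c for c in counts.values() if c >= 2)
    return total / trials

n, N = 23, 365
estimate = expected_sharers(n, N, 50_000, random.Random(1))
closed_form = n * (1 - (1 - 1/N)**(n - 1))   # the formula derived above
assert abs(estimate - closed_form) < 0.05    # both ≈ 1.35
```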
<p>I will try to get control of the most standard interpretation of our question by using (at first) very informal language. Let us call someone <em>unhappy</em> if one or more people share his/her "birthday." We want to find the "expected number" of unhappy people.</p> <p>Define the random variable $X$ by saying that $X$ is the number of unhappy people. We want to find $\text{E}(X)$. Let $p_i$ be the probability that $X=i$. Then $$\text{E}(X)=\sum_{i=0}^{n} i\,p_i$$ That is roughly the approach that you took. That approach is correct, and a very reasonable thing to try. Indeed, you have been <em>trained</em> to use this approach, since that's exactly how you solved the exercises that followed the definition of expectation. </p> <p>Unfortunately, in this problem, finding the $p_i$ is very difficult. One could, as you did, decide that for a good approximation, only the first few $p_i$ really matter. That is sometimes true, but depends quite a bit on the value $N$ of "days in the year" and the number $n$ of people.</p> <p>Fortunately, in this problem, and many others like it, there is an alternative <em>very</em> effective approach. It involves a bit of theory, but the payoff is considerable.</p> <p>Line the people up in a row. Define the random variables $U_1,U_2,U_3,\dots,U_n$ by saying that $U_k=1$ if the $k$-th person is unhappy, and $U_k=0$ if the $k$-th person is not unhappy. The crucial observation is that $$X=U_1+U_2+U_3+\cdots + U_n$$ </p> <p>One way to interpret this is that you, the observer, go down the line of people, making a tick mark on your tally sheet if the person is unhappy, and making no mark if the person is not unhappy. The number of tick marks is $X$, the number of unhappy people. It is also the sum of the $U_k$.</p> <p>We next use the following very important theorem: <strong>The expectation of a sum is the sum of the expectations</strong>. This theorem holds "always." The random variables you are summing <em>need not be independent</em>. 
In our situation, the $U_k$ are not independent, but, for expectation of a sum, that does not matter. So we have $$\text{E}(X)=\text{E}(U_1) + \text{E}(U_2)+ \text{E}(U_3)+\cdots +\text{E}(U_n)$$</p> <p>Finally, note that the probability that $U_k=1$ is, as carefully explained by @Henry, equal to $p$, where $$p=1-(1-1/N)^{n-1}$$ It follows that $\text{E}(U_k)=p$ for any $k$, and therefore $\text{E}(X)=np$.</p>
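<p>For small parameters, $\text{E}(X)=np$ can be verified exactly by averaging over all $N^n$ equally likely birthday assignments (a brute-force sketch, assuming uniform independent birthdays; the function name is mine):</p>

```python
from itertools import product

def exact_expected_unhappy(n, N):
    # Brute force: average the number of "unhappy" people (those whose
    # birthday is shared) over all N**n equally likely assignments.
    total = 0
    for bdays in product(range(N), repeat=n):
        total += sum(1 for b in bdays if bdays.count(b) >= 2)
    return total / N**n

n, N = 4, 6
# Agrees with E(X) = n(1 - (1 - 1/N)**(n-1)) up to floating-point error:
assert abs(exact_expected_unhappy(n, N) - n * (1 - (1 - 1/N)**(n - 1))) < 1e-12
```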