linear-algebra
<p>How can I understand that $A^TA$ is invertible if $A$ has independent columns? I found a similar <a href="https://math.stackexchange.com/questions/1181271/if-ata-is-invertible-then-a-has-linearly-independent-column-vectors">question</a>, phrased the other way around, so I tried to use the theorem</p> <p>$$ \operatorname{rank}(A^TA) \le \min(\operatorname{rank}(A^T),\operatorname{rank}(A)) $$</p> <p>Given $\operatorname{rank}(A) = \operatorname{rank}(A^T) = n$ and $A^TA$ produces an $n\times n$ matrix, I can't seem to prove that $\operatorname{rank}(A^TA)$ is actually $n$.</p> <p>I also tried to look at the question another way with the matrices</p> <p>$$ A^TA = \begin{bmatrix}a_1^T \\ a_2^T \\ \vdots \\ a_n^T \end{bmatrix} \begin{bmatrix}a_1 &amp; a_2 &amp; \ldots &amp; a_n \end{bmatrix} = \begin{bmatrix}A^Ta_1 &amp; A^Ta_2 &amp; \ldots &amp; A^Ta_n\end{bmatrix} $$</p> <p>But I still can't seem to show that $A^TA$ is invertible. So, how should I get a better understanding of why $A^TA$ is invertible if $A$ has independent columns?</p>
<p>Consider the following: $$A^TAx=\mathbf 0$$ Here, $Ax$, an element in the range of $A$, is in the null space of $A^T$. However, the null space of $A^T$ and the range of $A$ are orthogonal complements, so $Ax=\mathbf 0$.</p> <p>If $A$ has linearly independent columns, then $Ax=\mathbf 0 \implies x=\mathbf 0$, so the null space of $A^TA=\{\mathbf 0\}$. Since $A^TA$ is a square matrix, this means $A^TA$ is invertible.</p>
<p>If $A $ is a real $m \times n $ matrix then $A $ and $A^T A $ have the same null space. Proof: $A^TA x =0\implies x^T A^T Ax =0 \implies (Ax)^TAx=0 \implies \|Ax\|^2 = 0 \implies Ax = 0 $. </p>
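Neither answer needs it, but as a quick numerical sanity check of the claim (a sketch I'm adding; the matrix size and random seed are arbitrary), one can verify in NumPy that a full-column-rank $A$ yields an invertible Gram matrix $A^TA$:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tall matrix; its 3 Gaussian columns are linearly independent (full column rank).
A = rng.standard_normal((5, 3))
assert np.linalg.matrix_rank(A) == 3

G = A.T @ A                     # the 3x3 Gram matrix A^T A

# Same null space as A (see the proof above): full rank, hence invertible.
assert np.linalg.matrix_rank(G) == 3
assert np.allclose(G @ np.linalg.inv(G), np.eye(3))
```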
combinatorics
<p>Suppose a biased coin (probability of head being $p$) was flipped $n$ times. I would like to find the probability that the length of the longest run of heads, say $\ell_n$, exceeds a given number $m$, i.e. $\mathbb{P}(\ell_n &gt; m)$. </p> <p>It suffices to find the probability that the length of some run of heads exceeds $m$. I was trying to approach the problem by fixing a run of $m+1$ heads and counting the number of such configurations, but did not get anywhere.</p> <p>It is easy to simulate it:</p> <p><img src="https://i.sstatic.net/jPJxb.png" alt="Distribution of the length of the longest run of heads in a sequence of 1000 Bernoulli trials with 60% chance of getting a head"></p> <p>I would appreciate any advice on how to solve this problem analytically, i.e. express an answer in terms of a sum or an integral.</p> <p>Thank you.</p>
<p>This problem was solved using generating functions by de Moivre in 1738. The formula you want is $$\mathbb{P}(\ell_n \geq m)=\sum_{j=1}^{\lfloor n/m\rfloor} (-1)^{j+1}\left(p+\left({n-jm+1\over j}\right)(1-p)\right){n-jm\choose j-1}p^{jm}(1-p)^{j-1}.$$</p> <p><strong>References</strong></p> <ol> <li><p>Section 14.1 <em>Problems and Snapshots from the World of Probability</em> by Blom, Holst, and Sandell</p></li> <li><p>Chapter V, Section 3 <em>Introduction to Mathematical Probability</em> by Uspensky</p></li> <li><p>Section 22.6 <em>A History of Probability and Statistics and Their Applications before 1750</em> by Hald gives solutions by de Moivre (1738), Simpson (1740), Laplace (1812), and Todhunter (1865) </p></li> </ol> <hr> <p><strong>Added:</strong> The combinatorial class of all coin toss sequences without a run of $ m $ heads in a row is $$\sum_{k\geq 0}(\mbox{seq}_{&lt; m }(H)\,T)^k \,\mbox{seq}_{&lt; m }(H), $$ with corresponding counting generating function $$H(h,t)={\sum_{0\leq j&lt; m }h^j\over 1-(\sum_{0\leq j&lt; m }h^j)t}={1-h^ m \over 1-h-(1-h^ m )t}.$$ We introduce probability by replacing $h$ with $ps$ and $t$ by $qs$, where $q=1-p$: $$G(s)={1-p^ m s^ m \over1-s+p^ m s^{ m +1}q}.$$ The coefficient of $s^n$ in $G(s)$ is $\mathbb{P}(\ell_n&lt;m).$</p> <p>The function $1/(1-s(1-p^ m s^ m q ))$ can be rewritten as \begin{eqnarray*} \sum_{k\geq 0}s^k(1-p^ m s^ m q )^k &amp;=&amp;\sum_{k\geq 0}\sum_{j\geq 0} {k\choose j} (-p^ m q)^js^{k+j m }\\ %&amp;=&amp;\sum_{j\geq 0}\sum_{k\geq 0} {k\choose j} (-p^ m q )^js^{k+j m }. \end{eqnarray*} The coefficient of $s^n$ in this function is $c(n)=\sum_{j\geq 0}{n-j m \choose j}(-p^ m q)^j$. 
Therefore the coefficient of $s^n$ in $G(s)$ is $c(n)-p^ m c(n- m ).$ Finally, \begin{eqnarray*} \mathbb{P}(\ell_n\geq m)&amp;=&amp;1-\mathbb{P}(\ell_n&lt;m)\\[8pt] &amp;=&amp;p^ m c(n- m )+1-c(n)\\[8pt] &amp;=&amp;p^ m \sum_{j\geq 0}(-1)^j{n-(j+1) m \choose j}(p^ m q)^j+\sum_{j\geq 1}(-1)^{j+1}{n-j m \choose j}(p^ m q)^j\\[8pt] &amp;=&amp;p^ m \sum_{j\geq 1}(-1)^{j-1}{n-j m \choose j-1}(p^m q)^{j-1}+\sum_{j\geq 1}(-1)^{j+1}{n-j m \choose j}(p^mq )^j\\[8pt] &amp;=&amp;\sum_{j\geq 1}(-1)^{j+1} \left[{n-j m \choose j-1}+{n-j m \choose j}q\right]p^{ jm } q^{j-1}\\[8pt] &amp;=&amp;\sum_{j\geq 1}(-1)^{j+1} \left[{n-j m \choose j-1}p+{n-j m \choose j-1}q+{n-j m \choose j}q\right]p^{ jm } q^{j-1}\\[8pt] &amp;=&amp;\sum_{j\geq 1}(-1)^{j+1} \left[{n-j m \choose j-1}p+{n-j m +1\choose j}q \right]p^{ jm} q^{j-1}\\[8pt] &amp;=&amp;\sum_{j\geq 1}(-1)^{j+1} \left[p+{n-j m +1\over j}\, q\right] {n-j m \choose j-1}\,p^{ jm} q^{j-1}. \end{eqnarray*}</p>
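As a sanity check on the closed form (a sketch I'm adding, not part of the original answer), one can compare de Moivre's formula against a direct dynamic program over the current head-run length:

```python
from math import comb

def p_run_at_least(n, m, p):
    # de Moivre's formula for P(longest head run >= m), as stated above.
    q = 1 - p
    total = 0.0
    for j in range(1, n // m + 1):
        total += ((-1) ** (j + 1)
                  * (p + (n - j * m + 1) / j * q)
                  * comb(n - j * m, j - 1)
                  * p ** (j * m) * q ** (j - 1))
    return total

def p_run_at_least_dp(n, m, p):
    # State r = length of the current run of heads (0..m-1);
    # 'done' accumulates the probability that a run of m has occurred.
    state = [1.0] + [0.0] * (m - 1)
    done = 0.0
    for _ in range(n):
        new = [0.0] * m
        for r, pr in enumerate(state):
            new[0] += pr * (1 - p)      # tails resets the run
            if r + 1 == m:
                done += pr * p          # heads completes a run of length m
            else:
                new[r + 1] += pr * p    # heads extends the run
        state = new
    return done

# n = m: only the all-heads sequence works, so the probability is p^n.
assert abs(p_run_at_least(3, 3, 0.5) - 0.125) < 1e-12
assert abs(p_run_at_least(20, 3, 0.6) - p_run_at_least_dp(20, 3, 0.6)) < 1e-12
```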
<p>Define a Markov chain with states $0, 1, \ldots m$ so that with probability $1$ the chain moves from $m$ to $m$ and for $i&lt;m$ with probability $p$ the chain moves from $i$ to $i+1$ and with probability $1-p$ the chain moves from $i$ to $0$. If you look at the $n$th power of the transition matrix for this chain you can read off the probability that in $n$ flips you have a sequence of at least $m$ consecutive heads.</p>
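The chain described above can be sketched in a few lines of NumPy (my own transcription of the answer's construction):

```python
import numpy as np

def longest_run_prob(n, m, p):
    # States 0..m: state i = current head-run length; state m is absorbing.
    P = np.zeros((m + 1, m + 1))
    P[m, m] = 1.0                 # once a run of m occurs, stay there
    for i in range(m):
        P[i, i + 1] = p           # heads extends the run
        P[i, 0] = 1 - p           # tails resets it
    Pn = np.linalg.matrix_power(P, n)
    # Starting from run length 0, read off the absorbed probability mass.
    return Pn[0, m]
```

For example, `longest_run_prob(3, 3, 0.5)` recovers $p^n = 1/8$, and for $m = 1$ the result agrees with the elementary formula $1-(1-p)^n$.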
logic
<p>Given two statements, $P$ and $Q$, and the logical connective $\implies$, the truth table for $P \implies Q$ is:</p> <p>$$\begin{array}{ c | c || c | } P &amp; Q &amp; P\Rightarrow Q \\ \hline \text T &amp; \text T &amp; \text T \\ \text T &amp; \text F &amp; \text F \\ \text F &amp; \text T &amp; \text T \\ \text F &amp; \text F &amp; \text T \end{array}$$</p> <p>Lines one and two are quite clear. The ambiguity lies in lines three and four.</p> <p>One explanation as to why $P \implies Q$ is true when $P$ is false, provided by Velleman, goes:</p> <blockquote> <p>Let $P(x)$ be the statement $x&gt;2$ and $Q(x)$ the statement $x^2 &gt; 4$. When $x=3$, $P(x)$ is true, and since $3^2 = 9$, $Q(x)$ is true as well. When $x=1$, we have $1^2 = 1$, so $P(x)$ and $Q(x)$ are both false. When $x=-3$, $P(x)$ is false, yet since $(-3)^2 = 9$, $Q(x)$ is true. In every case the conditional $P(x) \implies Q(x)$ holds.</p> </blockquote> <p>This explanation was quite unsatisfactory to me. Looking at Enderton, we have:</p> <blockquote> <p>For example, we might translate the English sentence, ”If you're telling the truth then I'm a monkey's uncle,” by the formula ($V \implies M$). We assign this formula the value $T$ whenever you are fibbing. In assigning the value $T$, we are certainly not assigning any causal connection between your veracity and any simian features of my nephews or nieces. The sentence in question is a conditional statement. It makes an assertion about my relatives provided a certain condition — that you are telling the truth — is met. Whenever that condition fails, the statement is vacuously true.</p> <p>Very roughly, we can think of a conditional formula ($p\implies q$) as expressing a promise that if a certain condition is met (viz., that $p$ is true), then $q$ is true. 
If the condition $p$ turns out not to be met, then the promise stands unbroken, regardless of $q$.</p> </blockquote> <p>Though a significant improvement over the Velleman explanation, I still feel uncomfortable with it.</p> <p>Really, it seems we can conjure up as many silly counter-examples as we like, such as:</p> <blockquote> <p>If pigs can fly, then I can walk on water.</p> </blockquote> <p>Yet, following the above truth table, this implication comes out true.</p> <p>After considering it, it seems to me that $\implies$ is only meaningful when $P$ is true; then we can look at it in relation to $Q$. However, if $P$ is false, then we actually know nothing about the relationship between $P$ and $Q$. This would give a truth table of:</p> <p>$$\begin{array}{ c | c || c | } P &amp; Q &amp; P\Rightarrow Q \\ \hline \text T &amp; \text T &amp; \text T \\ \text T &amp; \text F &amp; \text F \\ \text F &amp; \text T &amp; \text ? \\ \text F &amp; \text F &amp; \text ? \end{array}$$</p> <p>where the $?$ denotes that, given that $P$ is false, we actually don't know anything about $P \implies Q$.</p> <p>Thus, one way to clear this up would be to assume that $? = T$. The vacuous truth is then a "definition of convenience" in a sense.</p> <p>The above is my take on the matter.</p> <p>Could someone provide some clarification on the conditional logical connective? </p>
<p>One can, correctly, assign the truth-value of <strong>true</strong> to the statement $P\implies Q$ whenever $P$ is false, or whenever $Q$ is true. $P\implies Q\,$ is false if and only if both $P$ is true <strong>and</strong> $Q$ is false. That covers all the cases. So we <strong>can say</strong> that $P\implies Q$ is true, unless "proven false", by which I mean to say:</p> <blockquote> <p>$P \implies Q$ is true if and only if it is <strong>not</strong> the case that both $P$ is true <strong>and</strong> $Q$ is false.</p> </blockquote> <p>What we can also say is that in classical logic and in math, it is a mistake to attribute any sort of causal relationship between $P$ and $Q$ when writing or reading an implication $P\implies Q$. Put differently, $P\implies Q$, by itself, does <strong>not</strong> imply any causal relationship between $P$ and $Q$: It is defined to convey nothing more, and nothing less, than is conveyed by the statement: $\;\lnot P \lor Q$, or if you prefer, it tells us nothing more (and nothing less) than what is conveyed by the statement: $\;\lnot(P \land \lnot Q)$. </p> <hr> <p>Your concern is not trivial, nor are you alone in being "bothered" by that lack of some stronger relationship between $P$ and $Q$. There are logics, such as <strong><a href="http://en.wikipedia.org/wiki/Relevance_logic">relevance logic</a></strong> which aim to capture aspects of implication that are ignored by the "material implication" operator in classical truth-functional logic, requiring some sort of relevance between antecedent and conditional of a true implication. See also the Wikipedia entry entitled: <strong><a href="http://en.wikipedia.org/wiki/Paradoxes_of_material_implication">Paradoxes of material implication</a></strong> for more on "alternate" non-classical logics.</p>
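The equivalences stated above are purely truth-functional, so they can be checked mechanically by enumerating all four rows (a sketch I'm adding, not part of the original answer):

```python
from itertools import product

rows = []
for P, Q in product([True, False], repeat=2):
    implies = (not P) or Q           # material conditional, as ¬P ∨ Q
    also    = not (P and not Q)      # the equivalent form ¬(P ∧ ¬Q)
    assert implies == also           # the two definitions agree on every row
    rows.append((P, Q, implies))

# The only row where P ⇒ Q is false is P true, Q false.
assert [r[2] for r in rows] == [True, False, True, True]
```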
<p>"... the vacuous truth is a "definition of convenience" in a sense."? No, not mere convenience. There are strong pressures that push towards making this choice of truth-values in lines 3 and 4 of the truth-table for the conditional. Here's one that others haven't mentioned. </p> <p>One thing mathematicians need to be very clear about is the use of <em>statements of generality</em> and especially <em>statements of multiple generality</em> – you know the kind of thing, e.g. the definition of continuity that starts <em>for any $\epsilon$ ... there is a $\delta$ ...</em> And the quantifier-variable notation serves mathematicians brilliantly to regiment statements of multiple generality and make them utterly unambiguous and transparent. </p> <p>Quantifiers matter to mathematicians, then: that's uncontentious. OK, so now think about <em>restricted</em> quantifiers that talk about only some of a domain (e.g. talk not about all numbers but just about all the even ones). How might we render Goldbach's Conjecture, say? As a first step, we might write</p> <blockquote> <p>$(\forall n \in \mathbb{N})$(if $n$ is even and greater than 2, then $n$ is the sum of two primes)</p> </blockquote> <p>We restrict the quantifier here by using a conditional. So <em>now think about the embedded conditional here</em>. What if $n$ is odd, so the antecedent of the conditional is false??? If we say this instance of the conditional lacks a truth-value, or may be false, then the quantification would have non-true instances and so would not be true! But of course we can't refute Goldbach's Conjecture by looking at odd numbers!! So, if the quantified conditional is indeed to come out true when Goldbach is right, then <em>we'll have to say that the irrelevant instances of the conditional with a false antecedent come out true by default</em>. 
In other words, the embedded conditional will have to be treated as a material conditional.</p> <p>So: to put it a bit tendentiously and over-briefly, if mathematicians are to deal nicely with expressions of generality using the quantifier-variable notation they have come to know and love, they will have to get used to using material conditionals too.</p>
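The point about restricted quantifiers can be made concrete (a sketch I'm adding; the helper names are my own). Encoding Goldbach's Conjecture for small $n$ with a material conditional, the odd $n$ contribute only vacuously true instances and cannot refute the universally quantified statement:

```python
def is_prime(k):
    return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))

def goldbach_holds(n):
    # Is n a sum of two primes?
    return any(is_prime(a) and is_prime(n - a) for a in range(2, n // 2 + 1))

# "For all n: if n is even and > 2, then n is a sum of two primes."
# With the material conditional, odd n make the antecedent false, so those
# instances come out (vacuously) true and do not refute the claim.
assert all((not (n % 2 == 0 and n > 2)) or goldbach_holds(n)
           for n in range(1, 101))
```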
combinatorics
<p>This was asked on sci.math ages ago, and never got a satisfactory answer.</p> <blockquote> <p>Given a number of sticks of integral length $ \ge n$ whose lengths add to $n(n+1)/2$. Can these always be broken (by cuts) into sticks of lengths $1,2,3, \ldots ,n$?</p> </blockquote> <p>You are not allowed to glue sticks back together. Assume you have an accurate measuring device.</p> <p>More formally, is the following conjecture true? (Taken from iwriteiam link below).</p> <blockquote> <p><strong>Cutting Sticks Conjecture</strong>: For all natural numbers $n$, and any given sequence $a_1, .., a_k$ of natural numbers greater or equal $n$ of which the sum equals $n(n+1)/2$, there exists a partitioning $(P_1, .., P_k)$ of $\{1, .., n\}$ such that sum of the numbers in $P_i$ equals $a_i$, for all $1 \leq i \leq k$.</p> </blockquote> <p>Some links which discuss this problem:</p> <ul> <li><a href="http://www.iwriteiam.nl/cutsticks.html">http://www.iwriteiam.nl/cutsticks.html</a></li> </ul>
<p>This is not a solution, just something I found that might be relevant.</p> <p>On the page linked to in the question, a reduction and various strategies are considered. I'll briefly reproduce the reduction, both because I think it's the most useful part of that page and perhaps not everyone will want to read that entire page, and also because I need it to say what I found.</p> <p>Let a counterexample with minimal $n$ be given. If one of the sticks were of length $n$, we could use that stick as the target stick of length $n$ and cut the remaining sticks into lengths $1$ through $n-1$, since otherwise they would form a smaller counterexample. Likewise, if one of the sticks had length greater than $2n-2$, we could cut off a stick of length $n$ and the remaining sticks would all be of length $\ge n-1$, so again we could cut them into lengths $1$ through $n-1$ because otherwise they would form a smaller counterexample. Thus,</p> <blockquote> <p>the lengths of the sticks in a counterexample with minimal $n$ must be $\gt n$ and $\lt 2n-1$.</p> </blockquote> <p>Problem instances that satisfy these conditions for a potential minimal counterexample are called "hard" on that page; I suggest we adopt that terminology here.</p> <p>The strategies discussed on that page include various ways of forming the target sticks in order of decreasing length. 
It was found that there are counterexamples both for the strategy of always cutting the next-longest target stick from the shortest possible remaining stick (counterexample $\langle11,12,16,16\rangle$) and for the strategy of always cutting the next-longest target stick from the longest remaining stick unless it already exists (counterexample $\langle10,10,12,13\rangle$), whereas if the stick to cut from was randomized, it was always possible to form the desired sticks up to $n=23$.</p> <p>I've checked that all hard problem instances up to $n=30$ are solvable, and I found that they remain solvable independent of which stick we cut the target stick of length $n$ from. This is equivalent to saying that a problem instance for $n-1$ can always be solved if all stick lengths except one are $\gt n$ and $\lt 2n-1$ and one is $\lt n-1$, since all of these instances can result from cutting a stick of length $n$ from a hard problem instance for $n$.</p> <p>I thought that this might be generalized to the solvability of an instance being entirely determined by whether the sticks of length $\le n$ can be cut to form distinct integers, but that's not the case, since it's possible to leave only a few holes below $n$ such that the few remaining sticks above $n$ can't fill them.</p>
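The exhaustive checks described above can be reproduced in miniature with a brute-force solver (my own sketch, not code from the linked page; the instance notation follows the answer):

```python
def solvable(sticks, n):
    """Can sticks with lengths summing to n(n+1)/2 be cut into pieces 1..n?
    Equivalently: partition {1,..,n} so that part i sums to sticks[i]."""
    assert sum(sticks) == n * (n + 1) // 2
    remaining = list(sticks)

    def place(piece):
        # Assign target pieces n, n-1, ..., 1 to sticks with enough length left.
        if piece == 0:
            return True
        tried = set()
        for i, room in enumerate(remaining):
            if room >= piece and room not in tried:  # skip symmetric choices
                tried.add(room)
                remaining[i] -= piece
                if place(piece - 1):
                    return True
                remaining[i] += piece
        return False

    return place(n)

# The two greedy-strategy counterexamples from the answer are still solvable:
assert solvable([11, 12, 16, 16], 10)
assert solvable([10, 10, 12, 13], 9)
# With sticks shorter than n an instance can fail (piece n fits nowhere):
assert not solvable([2, 2, 2], 3)
```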
<p>I have implemented the suggestion I made in a comment to joriki's answer. For $3 \le n \le 18$, I have generated a list of subsets $S \subset \{1,2,...,n-1\}$ with the property that if a set of sticks with total length $n(n+1)/2$ takes all the lengths in $S$, together with any other lengths ≥n, then the sticks can always be cut into sticks of length $1,2,...,n$. It is available at <a href="http://www.megafileupload.com/en/file/326981/Sticks-txt.html">this link</a> (it's about 900K). </p> <p>I stared at it for a while, but nothing jumped out at me.</p> <p><strong>Edited to add:</strong> I have changed the program to output the sets in a more human-friendly order: <a href="http://pastebin.com/0yL3rvnJ">part 1 (n = 1 to 17)</a> and <a href="http://pastebin.com/LzwUSUDS">part 2 (n = 18)</a>.</p>
logic
<p>Though I've understood the logic behind's Russell's paradox for long enough, I have to admit I've never really understood why mathematicians and mathematical historians thought it so important. Most of them mark its formulation as an epochal moment in mathematics, it seems to me. To my uneducated self, Russell's paradox seems almost like a trick question, like a child asking what the largest organism on Earth is, and getting back an answer talking about <a href="http://en.wikipedia.org/wiki/Armillaria_solidipes">a giant fungus.</a> </p> <p>"<em>Well, OK</em>..."</p> <p>The set of all sets admits all sorts of unmathematical objects right? Chickens, spoons, 2 $\times$ 4s, what have you...I can't imagine a situation where a mathematician solving a concrete problem would invoke a set like that. The typical sets one sees in everyday mathematics are pretty well defined and limited in scope: $\mathbb{Z}$, $\mathbb{R}$, $\mathbb{C}$, $\mathbb{R}^n, \mathbb{Q}_p$, subsets of those things. They are all built off each other in a nice way.</p> <p>But clearly important people think the result significant. It prompted Gottlob Frege, a smart and accomplished man, who Wikipedia tells me extended the study of logic in non-trivial ways, to say that the "foundations of [his] edifice were shaken." So I suppose Russell's paradox is "deep" somehow? Is there a practical example in mathematics that showcases this, where one might be led horribly astray if one doesn't place certain restrictions on sets, in say, dealing with PDEs, elliptic curves, functional equations? Why isn't Russell's paradox just a "gotcha"?</p>
<p>Russell's paradox means that you can't just take some formula and talk about all the sets which satisfy the formula.</p> <p>This <em>is</em> a big deal. It means that some collections cannot be sets, which in the universe of set theory means these collections are not elements of the universe.</p> <p>Russell's paradox is a form of diagonalization. Namely, we create some diagonal function in order to show some property. The most well-known proofs to feature diagonalization are the fact that the universe of Set Theory is not a set (Cantor's paradox), and Cantor's theorem about the cardinality of the power set.</p> <p>The point, eventually, is that some collections are not sets. This is a big deal when your universe is made out of sets, but I think that one really has to study some set theory in order to truly understand why this is a problem.</p>
<p>My dad likes to tell of a quotation he once read in a book on philosophy of mathematics. He does not remember which book it was, and I have never tried to track it down; this is really hearsay to the fourth degree, so it may not even be true. But I think it is pretty apt. The quote describes a castle on a cliff where, after each storm finally dies down, the spiders come out and run around frantically rebuilding their spiderwebs, afraid that if they don't get them up quickly enough the castle will fall down. </p> <p>The interesting thing about the quote was that it was attributed to a book on the logical foundations of mathematics.</p> <p>First, note that you are looking at the problem from the perspective of someone who "grew up" with sets that were, in some way, <em>carefully</em> "built off each other in a nice way." This was not always the case. Mathematicians were not always very careful with their foundations: and when they started working with infinite sets/collections, they <em>were</em> not being particularly careful. Dedekind does not start from the Axiom of Infinity to construct the naturals and eventually get to the reals; and moreover, when he gives his construction it is precisely to try to answer the question of just what <em>is</em> a real number! </p> <p>In some ways, Russell's paradox was a storm that sent the spiders running around to reconstruct the spiderwebs. Mathematicians hadn't been working with infinite collections/sets for very long, at least not as "completed infinities". The work of Dedekind on the reals, and even on algebraic number theory with the definitions of ideals and modules, was not without its critics. </p> <p><em>Some</em> mathematicians had become interested in the issues of foundations; one such mathematician was Hilbert, both through his work on the Dirichlet Principle (justifying the work of Riemann), and his work on Geometry (with the problems that had become so glaring in the "unspoken assumptions" of Euclid). 
Hilbert was such a towering figure at the time that his interest was itself interesting, of course, but there weren't that many mathematicians working on the foundations of mathematics.</p> <p>I would think, like Sebastian, that most "working mathematicians" didn't worry too much about Russell's paradox; much like they didn't worry too much about the fact that Calculus was not, originally, on a solid logical foundation. Mathematics clearly <em>worked</em>, and the occasional antinomy or paradox was likely not a matter of interest or concern.</p> <p>On the other hand, the 19th century had highlighted a <em>lot</em> of issues with mathematics. During this century all sorts of tacit assumptions that mathematicians had been making had been exploded. Turns out, functions can be <em>very</em> discontinuous, not just at a few isolated points; they can be continuous but nowhere differentiable; you can have a curve that fills up a square; the Dirichlet Principle need not hold; there are geometries where there are no parallels, and geometries where there are an infinite number of parallels to a given line and through a point outside of it; etc. While it was clear that mathematics <em>worked</em>, there was a general "feeling" that it would be a good idea to clear up these issues.</p> <p>So some people began to study foundations specifically, and try to build a solid foundation (perhaps like Weierstrass had given a solid foundation to calculus). Frege was one such.</p> <p>And to people who were very interested in logic and foundations, like Frege, Russell's paradox <em>was</em> a big deal because it pinpointed that one particular tool that was very widely used led to serious problems. This tool was unrestricted comprehension: any "collection" you can name was an object that could be manipulated and played with.</p> <p>You might say, "well, but Russell's paradox arises in a very artificial context, it would never show up with a "real" mathematical collection." 
But then, one might say that functions that are continuous everywhere and nowhere differentiable are "very artificial, and would never show up in a 'real' mathematical problem". True: but it means that certain results that had been taken for granted no longer can be taken for granted, and need to be restricted, checked, or justified anew, if you want to claim that the argument is valid.</p> <p>In context, Russell's paradox showed an entirely new thing: there can be collections that are not sets, that are not objects that can be dealt with mathematically. This is a very big deal if you don't even have that concept to begin with! Think about finding out that a "function" doesn't have to be "essentially continuous" and can be an utter mess: an entirely new concept or idea; an entirely new possibility that has to be taken into account when thinking about functions. So it was with Russell's: an entirely new idea that needs to be taken into account when thinking about collections and sets. All the work that had been done before which tacitly assumed that just because you could name a collection it was an object that could be mathematically manipulated was now, in some sense, "up in the air" (as much as those castle walls are "up in the air" until the spiders rebuild their webs, perhaps, or perhaps more so). </p> <p>If nothing else, Russell's paradox creates an entirely new category of things that did not exist before: <strong>not-sets</strong>. Now you think, "oh, piffle; I could have told them that", but that's because you grew up in a mathematical world where the notion that there are such things as "not-sets" is taken for granted. At the time, it was the exact opposite that was taken for granted, and Russell's paradox essentially tells everyone that something they all thought was true just isn't true. Today we are so used to the idea that it seems like an observation that is not worth that much, but that's because we grew up in a world that already knew it. 
</p> <p>I would say that Russell's paradox was a big deal and wasn't a big deal. It was a big deal for anyone who was concerned with foundations, because it said "you need to go further back: you need to figure out what <em>is</em> and what is <em>not</em> a collection you can work with." It undermined all of Frege's attempt at setting up a foundation for mathematics (which is why Frege found it so important: he certainly had invested a lot of himself into efforts that were not only cast into doubt, but were essentially demolished before they got off the ground). It was such a big deal that it completely infuses our worldview today, when we simply <em>take for granted</em> that some things are not sets.</p> <p>On the other hand, it did not have a major impact on things like calculus of variations, differential equations, etc., because those fields do not really rely very heavily on their foundations, but only on the properties of the objects they are working with; just like most people don't care about the Kuratowski definition of an ordered pair; it's kind of nice to know it's there, but most will treat the ordered pair as a black box. I would expect most of them to think "Oh, okay; get back to me when you sort that out." Perhaps like the servants living on the castle not worrying too much about whether the spiders are done building their webs or not. Also, much like the way that after Weierstrass introduced the notion of $\epsilon$-$\delta$ definitions and limits into calculus, and then re-established what everyone was using <em>anyway</em> when they were using calculus, it had little impact in terms of applications of calculus.</p> <p>That rambles a bit, perhaps. And I'm not a very learned student of history, so my impressions may be off anyway.</p>
matrices
<p>In a scientific paper, I've seen the following</p> <p><span class="math-container">$$\frac{\delta K^{-1}}{\delta p} = -K^{-1}\frac{\delta K}{\delta p}K^{-1}$$</span></p> <p>where <span class="math-container">$K$</span> is a <span class="math-container">$n \times n$</span> matrix that depends on <span class="math-container">$p$</span>. In my calculations I would have done the following</p> <p><span class="math-container">$$\frac{\delta K^{-1}}{\delta p} = -K^{-2}\frac{\delta K}{\delta p}=-K^{-T}K^{-1}\frac{\delta K}{\delta p}$$</span></p> <p>Is my calculation wrong?</p> <p>Note: I think <span class="math-container">$K$</span> is symmetric.</p>
<p>The major trouble in matrix calculus is that things no longer commute, but one tends to use formulae from scalar calculus like $(x(t)^{-1})'=-x(t)^{-2}x'(t)$, replacing $x$ with the matrix $K$. <strong>One has to be more careful here and pay attention to the order</strong>. The easiest way to get the derivative of the inverse is to differentiate the identity $I=KK^{-1}$ <em>respecting the order</em> $$ \underbrace{(I)'}_{=0}=(KK^{-1})'=K'K^{-1}+K(K^{-1})'. $$ Solving this equation with respect to $(K^{-1})'$ (again paying attention to the order (!)) will give $$ K(K^{-1})'=-K'K^{-1}\qquad\Rightarrow\qquad (K^{-1})'=-K^{-1}K'K^{-1}. $$</p>
<p>Yes, your calculation is wrong, note that $K$ may not commute with $\frac{\partial K}{\partial p}$, hence you must apply the chain rule correctly. The derivative of $\def\inv{\mathrm{inv}}\inv \colon \def\G{\mathord{\rm GL}}\G_n \to \G_n$ is <strong>not</strong> given by $\inv'(A)B = -A^{-2}B$, but by $\inv'(A)B = -A^{-1}BA^{-1}$. To see that, note that for small enough $B$ we have \begin{align*} \inv(A + B) &amp;= (A + B)^{-1}\\ &amp;= (\def\I{\mathord{\rm Id}}\I + A^{-1}B)^{-1}A^{-1}\\ &amp;= \sum_k (-1)^k (A^{-1}B)^kA^{-1}\\ &amp;= A^{-1} - A^{-1}BA^{-1} + o(\|B\|) \end{align*} Hence, $\inv'(A)B= -A^{-1}BA^{-1}$, and therefore, by the chain rule $$ \partial_p (\inv \circ K) = \inv'(K)\,(\partial_p K) = -K^{-1}(\partial_p K) K^{-1} $$ </p>
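Both answers can be verified numerically with a central finite difference (a sketch I'm adding; the particular non-symmetric $K(p)$ below is my own arbitrary test case, not the paper's matrix):

```python
import numpy as np

# K(p): a smooth, non-symmetric matrix function of a scalar p.
K  = lambda p: np.array([[2.0 + p, p], [1.0, 3.0 * p + 4.0]])
dK = np.array([[1.0, 1.0], [0.0, 3.0]])          # exact K'(p)

p, h = 0.3, 1e-6
# Central difference approximation of d(K^{-1})/dp.
d_inv_numeric = (np.linalg.inv(K(p + h)) - np.linalg.inv(K(p - h))) / (2 * h)

Kinv = np.linalg.inv(K(p))
d_inv_formula = -Kinv @ dK @ Kinv                # -K^{-1} K' K^{-1}
wrong_order   = -Kinv @ Kinv @ dK                # the scalar-calculus guess -K^{-2} K'

assert np.allclose(d_inv_numeric, d_inv_formula, atol=1e-6)
assert not np.allclose(d_inv_numeric, wrong_order, atol=1e-6)
```

Because $K^{-1}$ and $K'$ do not commute here, the order-respecting formula matches the finite difference while the naive one does not.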
differentiation
<p>I am sort of confused regarding differentiable functions, continuous derivatives, and continuous functions. And I just want to make sure I'm thinking about this correctly.</p> <p>(1) If you have a function that's continuous everywhere, then this doesn't necessarily mean its derivative exists everywhere, correct? e.g., <span class="math-container">$$f(x) = |x|$$</span> has an undefined derivative at <span class="math-container">$x=0$</span></p> <p>(2) So the function above, even though it's continuous, does not have a continuous derivative? </p> <p>(3) Now say you have a derivative that's continuous everywhere, then this doesn't necessarily mean the underlying function is continuous everywhere, correct? For example, consider <span class="math-container">$$ f(x) = \begin{cases} 1 - x \ \ \ \ \ x&lt;0 \\ 2 - x \ \ \ \ \ x \geq 0 \end{cases} $$</span> So its derivative is $-1$ everywhere, hence continuous, but the function itself is not continuous? </p> <p>So what does a function with a continuous derivative say about the underlying function? </p>
<p>A function may or may not be continuous.</p> <p>If it is continuous, it may or may not be differentiable. <span class="math-container">$f(x) = |x|$</span> is a standard example of a function which is continuous, but not (everywhere) differentiable. However, any differentiable function is necessarily continuous.</p> <p>If a function is differentiable, its derivative may or may not be continuous. This is a bit more subtle, and the standard example of a differentiable function with discontinuous derivative is a bit more complicated: <span class="math-container">$$ f(x) = \cases{x^2\sin(1/x) &amp; if $x\neq 0$\\ 0 &amp; if $x = 0$} $$</span> It is differentiable everywhere, <span class="math-container">$f'(0) = 0$</span>, but <span class="math-container">$f'(x)$</span> oscillates wildly between (a little less than) <span class="math-container">$-1$</span> and (a little more than) <span class="math-container">$1$</span> as <span class="math-container">$x$</span> comes closer and closer to <span class="math-container">$0$</span>, so it isn't continuous.</p>
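The oscillation can be seen numerically (a sketch I'm adding; only the formula $f'(x) = 2x\sin(1/x) - \cos(1/x)$ for $x \neq 0$ comes from the example above):

```python
import math

def f(x):
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

# Difference quotient at 0: (f(h) - f(0))/h = h*sin(1/h) -> 0, so f'(0) = 0.
for h in [1e-3, 1e-5, 1e-7]:
    assert abs(f(h) / h) <= abs(h)

def fprime(x):
    # For x != 0, the product and chain rules give this derivative.
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

# Sampling ever closer to 0 yields values near -1 and near +1,
# so f' has no limit at 0 and is not continuous there.
for k in range(10, 15):
    near_minus1 = fprime(1 / (2 * math.pi * k))        # cos term is ~1 here
    near_plus1  = fprime(1 / (math.pi * (2 * k + 1)))  # cos term is ~-1 here
    assert near_minus1 < -0.99 and near_plus1 > 0.99
```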
<ol> <li>Indeed.</li> <li>Indeed, the derivative does not even exist at <span class="math-container">$x=0$</span> as you argued in (1).</li> <li>Continuity is a necessary condition for differentiability (but it is not sufficient, as you have found in (1)). Your example is not differentiable at <span class="math-container">$0$</span>, since it is discontinuous there. In particular, you can see this directly since <span class="math-container">$$ \lim_{h\to0^+} \frac{f(h)-f(0)}{h} = \lim_{h\to0^+} \frac{(2-h)-2}{h} = -1, $$</span> whereas <span class="math-container">$$\lim_{h\to0^-} \frac{f(h)-f(0)}{h} = \lim_{h\to0^-} \frac{(1-h)-2}{h} = \lim_{h\to0^-} \frac{-1-h}{h}$$</span> does not even exist.</li> </ol>
logic
<p>I am looking for an undecidable problem that I could give as an easy example in a presentation to the general public. I mean easy in the sense that the mathematics behind it can be described, well, without mathematics, that is with analogies and intuition, avoiding technicalities.</p>
<p><strong>"Are these two real numbers</strong> (or functions, or grammars, or mathematical statements) <strong>equivalent?"</strong><br> <em>(See also <a href="http://en.wikipedia.org/wiki/Word_problem_%28mathematics%29" rel="noreferrer">word problem</a>)</em></p> <p><strong>"Does this statement follow from these axioms?"</strong><br> <em>(Hilbert's <a href="http://en.wikipedia.org/wiki/Entscheidungsproblem" rel="noreferrer">Entscheidungsproblem</a>)</em></p> <p><strong>"Does this computer program ever stop?"</strong><br> <strong>"Does this computer program have any security vulnerabilities?"</strong><br> <strong>"Does this computer program do &lt;any non-trivial statement>?"</strong><br> <em>(The <a href="http://en.wikipedia.org/wiki/Halting_problem" rel="noreferrer">halting-problem</a>, from which <a href="https://en.wikipedia.org/wiki/Rice%27s_theorem" rel="noreferrer">all semantic properties</a> can be reduced)</em></p> <p><strong>"Can this set of domino-like tiles tile the plane?"</strong><br> <em>(See <a href="http://en.wikipedia.org/wiki/Wang_tile" rel="noreferrer">Tiling Problem</a>)</em></p> <p><strong>"Does this <a href="http://en.wikipedia.org/wiki/Diophantine_equation" rel="noreferrer">Diophantine equation</a> have an integer solution?"</strong><br> <em>(See <a href="http://en.wikipedia.org/wiki/Hilbert%27s_tenth_problem" rel="noreferrer">Hilbert's Tenth Problem</a>)</em></p> <p><strong>"Given two lists of strings, is there a list of indices such that the concatenations from both lists are equal?"</strong><br> <em>(See <a href="http://www.loopycode.com/a-surprisingly-hard-problem-post-correspondence/" rel="noreferrer">Post correspondence problem</a>)</em></p> <hr> <p>There is also a large list on <a href="http://en.wikipedia.org/wiki/List_of_undecidable_problems" rel="noreferrer">wikipedia</a>.</p>
<p>I think the <a href="http://en.wikipedia.org/wiki/Post_correspondence_problem" rel="nofollow noreferrer">Post correspondence problem</a> is a very good example of a simple undecidable problem that is also relatively unknown.</p> <p>Given a finite set of string tuples, each with an index <span class="math-container">$i$</span>, a left string <span class="math-container">$l(i)$</span> and a right string <span class="math-container">$r(i)$</span>, the problem is to determine if there is a finite sequence of index values <span class="math-container">$i(1),i(2),\dots,i(n)$</span>, allowing for repetition, such that the concatenation of the left strings <span class="math-container">$l(i(1)),\dots,l(i(n))$</span> is equal to the concatenation of the corresponding right strings <span class="math-container">$r(i(1)),\dots,r(i(n))$</span>. For example, with three tuples with <span class="math-container">$(l(i), r(i)), i$</span> as follows:</p> <pre><code>(a , baa) X
(ab, aa)  Y
(bba, bb) Z
</code></pre> <p>we may use the index sequence <span class="math-container">$Z, Y, Z, X$</span>:</p> <pre><code>(bba, bb) Z
(ab, aa)  Y
(bba, bb) Z
(a, baa)  X
------------
gives (bbaabbbaa, bbaabbbaa)
</code></pre> <p>The only big issue I have with this problem is that the only undecidability proof I know of falls back on simulating a Turing machine --- it would be nice to find a more elementary alternate version.</p>
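Undecidability means no single algorithm handles every instance, but nothing stops us from searching small instances by brute force. A Python sketch (mine, not the answerer's) that recovers the solution in the example above:

```python
from itertools import product

# The example instance: index -> (left string, right string).
pairs = {'X': ('a', 'baa'), 'Y': ('ab', 'aa'), 'Z': ('bba', 'bb')}

def find_match(pairs, max_len=6):
    """Enumerate index sequences up to length max_len and return the
    first one whose left and right concatenations agree.  This is only
    a semi-decision procedure: if no sequence within the bound works,
    we learn nothing about longer sequences."""
    for n in range(1, max_len + 1):
        for seq in product(pairs, repeat=n):
            left = ''.join(pairs[i][0] for i in seq)
            right = ''.join(pairs[i][1] for i in seq)
            if left == right:
                return seq, left
    return None

seq, word = find_match(pairs)
print(seq, word)  # ('Z', 'Y', 'Z', 'X') bbaabbbaa
```

The search is exponential in the sequence length, which is the honest price of the problem's undecidability: there is no computable bound on how long a matching sequence may need to be.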
game-theory
<p>I'm looking for good books/lecture notes/etc. to learn game theory. I do not fear the math, so I'm not looking for a "non-mathematical intro" or something like that. Any suggestions are welcome. Just put here any references you've seen, with a brief description and/or review. Thanks.</p> <p>Edit: I'm not constrained to any particular subject. I just want to get a feeling for the type of books out there; then I can decide what to read. I would like to see here a long list of books on the subject and its applications, together with reviews or opinions of those books.</p>
<p>Game theory is a very broad subject, so you should also know what you intend to use this knowledge for. Anyway, the link below contains video lectures from Yale professor Benjamin Polak. It is given as an introductory game theory course and contains very good material. Hope this helps, cheers. <a href="http://academicearth.org/speakers/benjamin-polak">http://academicearth.org/speakers/benjamin-polak</a></p>
<p>If you are not afraid of math, then I recommend: Osborne, Rubinstein, <em>A Course in Game Theory</em>.</p> <p>If you need more examples and a more introductory style: Osborne, <em>An Introduction to Game Theory</em>.</p> <p>These books do not cover combinatorial game theory or differential games.</p>
probability
<p>Not sure if this is a question for math.se or stats.se, but here we go:</p> <p>Our MUD (Multi-User-Dungeon, a sort of textbased world of warcraft) has a casino where players can play a simple roulette.</p> <p>My friend has devised this algorithm, which he himself calls genius:</p> <ul> <li>Bet 1 gold</li> <li>If you win, bet 1 gold again</li> <li>If you lose, bet double what you bet before. Continue doubling until you win.</li> </ul> <p>He claimed you will always win exactly 1 gold using this system, since even if you lose say 8 times, you lost 1+2+4+8+16+32+64+128 gold, but then won 256 gold, which still makes you win 1 gold.</p> <p>He programmed this algorithm in his favorite MUD client, let it run for the night. When he woke up the morning, he was broke. </p> <p>Why did he lose? What is the fault in his reasoning?</p>
<p>Suppose, for simplicity, that the probability of winning one round of this game is $\frac{1}{2}$, and the probability of losing is also $\frac{1}{2}$. (Roulette in real life is not such a game, unfortunately.) Let $X_0$ be the initial wealth of the player, and write $X_t$ for the wealth of the player at time $t$. Assuming that the outcome of each round of the game is independent and identically distributed, $(X_0, X_1, X_2, \ldots)$ forms what is known as a <a href="http://en.wikipedia.org/wiki/Martingale_%28probability_theory%29">martingale</a> in probability theory. Indeed, using the bet-doubling strategy outlined, at any time $t$, the expected wealth of the player at time $t + 1$ is $$\mathbb{E} \left[ X_{t+1} \middle| X_0, X_1, \ldots, X_t \right] = X_t$$ because the player wins or loses an equal amount with probability $\frac{1}{2}$ in each case, and $$\mathbb{E} \left[ \left| X_t \right| \right] &lt; \infty$$ because there are only finitely many different outcomes at each stage.</p> <p>Now, let $T$ be the first time the player either wins or goes bankrupt. This is a random variable depending on the complete history of the game, but we can say a few things about it. For instance, $$X_T = \begin{cases} 0 &amp; \text{ if the player goes bankrupt before winning once} \\ X_0 + 1 &amp; \text{ if the player wins at least once} \end{cases}$$ so by linearity of expectation, $$\mathbb{E} \left[ X_T \right] = (X_0 + 1) \mathbb{P} \left[ \text{the player wins at least once} \right]$$ and therefore we may compute the probability of winning as follows: $$\mathbb{P} \left[ \text{the player wins at least once} \right] = \frac{\mathbb{E} \left[ X_T \right]}{X_0 + 1}$$ But how do we compute $\mathbb{E} \left[ X_T \right]$? For this, we need to know that $T$ is almost surely finite. This is clear by case analysis: if the player wins at least once, then $T$ is finite; but the player cannot have an infinite losing streak before going bankrupt either. 
Thus we may apply the <a href="http://en.wikipedia.org/wiki/Optional_stopping_theorem">optional stopping theorem</a> to conclude: $$\mathbb{E} \left[ X_T \right] = X_0$$ $$\mathbb{P} \left[ \text{the player wins at least once} \right] = \frac{X_0}{X_0 + 1}$$ In other words, the probability of this betting strategy turning a profit is positively correlated with the amount $X_0$ of starting capital – no surprises there! </p> <hr> <p>Now let's do this repeatedly. The remarkable thing is that we get <em>another</em> martingale! Indeed, if $Y_n$ is the player's wealth after playing $n$ series of this game, then $$\mathbb{E} \left[ Y_{n+1} \middle| Y_0, Y_1, \ldots, Y_n \right] = 0 \cdot \frac{1}{Y_n + 1} + (Y_n + 1) \cdot \frac{Y_n}{Y_n + 1} = Y_n$$ by linearity of expectation, and obviously $$\mathbb{E} \left[ \left| Y_n \right| \right] \le Y_0 + n &lt; \infty$$ because $Y_n$ is either $0$ or $Y_{n-1} + 1$.</p> <p>Let $T_k$ be the first time the player either earns a profit of $k$ or goes bankrupt. So, $$Y_{T_k} = \begin{cases} 0 &amp;&amp; \text{ if the player goes bankrupt} \\ Y_0 + k &amp;&amp; \text{ if the player earns a profit of } k \end{cases}$$ and again we can apply the same analysis to determine that $$\mathbb{P} \left[ \text{the player earns a profit of $k$ before going bankrupt } \right] = \frac{Y_0}{Y_0 + k}$$ which is not too surprising – if the player is greedy and wants to earn a larger profit, then the player has to play more series of games, thereby increasing his chances of going bankrupt.</p> <p>But what we really want to compute is the probability of going bankrupt at all. I claim this happens with probability $1$. 
Indeed, if the player loses even once, then he is already bankrupt, so the only way the player could avoid going bankrupt is if he has an infinite winning streak; the probability of this happening is $$\frac{Y_0}{Y_0 + 1} \cdot \frac{Y_0 + 1}{Y_0 + 2} \cdot \frac{Y_0 + 2}{Y_0 + 3} \cdot \cdots = \lim_{n \to \infty} \frac{Y_0}{Y_0 + n} = 0$$ as claimed. So this strategy <a href="http://en.wikipedia.org/wiki/Almost_surely">almost surely</a> leads to ruin.</p>
<p>This betting strategy is very smart if you have access to infinite wealth or can go into infinite debt. In reality however, you will eventually lose all or most of your money.</p> <p>Say your friend had $k$ gold at the beginning. I assume that this simple roulette has a probability of both win and loss equal to $0.5$.</p> <p>First, let's see how many times you need to lose in a row in order to lose all your wealth.</p> <p>\begin{align} 1 + 2 + 2^2 + 2^3 + ... + 2^n &amp;\geq k \\ 2^{n+1} - 1 &amp;\geq k \\ n &amp;\geq \log_{2}(k+1) - 1 \end{align}</p> <p>So even if you start with $10000$ gold, after $13$ lost bets you are broke and in debt. Continuing this example, the probability of this happening in a one shot-game is a mere $0.02$%. However, if you keep the algorithm running all night for $8$ hours betting every $5$ seconds, your chances of having a <a href="http://www.sbrforum.com/betting-tools/streak-calculator/">losing streak</a> of $13$ in a row go up to $29.61$%.</p> <p>Assuming that you cannot go into debt and $12$ losses is the most you can handle, then with the same data the chance of losing most of your money goes up to $50.5$%.</p>
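Both computations above are easy to check numerically. The sketch below (in Python, with an assumed fair coin, and with "ruin" meaning the next doubled bet can no longer be covered) reproduces the count of $13$ losses for a $10000$-gold bankroll and shows that an all-night session almost always ends in ruin:

```python
import random

def bets_to_bust(k):
    # Count consecutive losses until the doubling bettor, starting
    # with k gold, cannot cover the next bet: bets are 1, 2, 4, ...
    total, bet, n = 0, 1, 0
    while total + bet <= k:
        total += bet
        bet *= 2
        n += 1
    return n

print(bets_to_bust(10000))  # 13: bets 1..2^12 sum to 8191, and the
                            # 14th bet of 8192 cannot be covered

def play_all_night(k, rounds, rng):
    # Martingale with a fair coin (p = 1/2) and a finite bankroll k.
    wealth = k
    for _ in range(rounds):
        bet = 1
        while True:
            if bet > wealth:
                return 0            # cannot cover the next bet: ruined
            if rng.random() < 0.5:  # win: recover losses plus 1 gold
                wealth += bet
                break
            wealth -= bet           # lose: double the next bet
            bet *= 2
    return wealth

rng = random.Random(0)
nights = 200
ruined = sum(play_all_night(100, 5000, rng) == 0 for _ in range(nights))
print(ruined / nights)  # close to 1: almost every night ends in ruin
```

With a bankroll of $100$ and $5000$ rounds per night, only a small fraction of simulated nights survive, in line with the $X_0/(X_0+k)$ analysis in the accepted answer.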
geometry
<p>What's the analogue to spherical coordinates in $n$-dimensions? For example, for $n=2$ the analogue are polar coordinates $r,\theta$, which are related to the Cartesian coordinates $x_1,x_2$ by</p> <p>$$x_1=r \cos \theta$$ $$x_2=r \sin \theta$$</p> <p>For $n=3$, the analogue would be the ordinary spherical coordinates $r,\theta ,\varphi$, related to the Cartesian coordinates $x_1,x_2,x_3$ by</p> <p>$$x_1=r \sin \theta \cos \varphi$$ $$x_2=r \sin \theta \sin \varphi$$ $$x_3=r \cos \theta$$</p> <p>So these are my questions: Is there an analogue, or several, to spherical coordinates in $n$-dimensions for $n&gt;3$? If there are such analogues, what are they and how are they related to the Cartesian coordinates? Thanks.</p>
<p>These are <a href="http://en.wikipedia.org/wiki/N-sphere#Spherical_coordinates" rel="noreferrer">hyperspherical coordinates</a>. You can see an example of them being put to use in <a href="https://math.stackexchange.com/questions/50953/volume-of-region-in-5d-space/51406#51406">this answer</a>.</p>
<p>I was trying to answer exercise 9 of <span class="math-container">$ I.5. $</span> from Einstein gravity in a nutshell by A. Zee that I saw this question so what I am going to say is from this question. It is said that the d-dimensional unit sphere <span class="math-container">$S^d$</span> is embedded into <span class="math-container">$E^{d+1}$</span> by usual Pythagorean relation<span class="math-container">$$(X^1)^2+(X^2)^2+.....+(X^{d+1})^2=1$$</span>. Thus <span class="math-container">$S^1$</span> is the circle and <span class="math-container">$S^2$</span> the sphere. A. Zee says we can generalize what we know about polar and spherical coordinates to higher dimensions by defining<br /> <br /> <span class="math-container">$X^1=\cos\theta_1\quad X^2=\sin\theta_1 \cos\theta_2,\ldots $</span><br /> <br /> <span class="math-container">$X^d=\sin\theta_1 \ldots \sin\theta_{d-1} \cos\theta_d,$</span><br /> <br /> <span class="math-container">$X^{d+1}=\sin\theta_1 \ldots \sin\theta_{d-1} \sin\theta_d$</span><br /> <br /> where <span class="math-container">$0\leq\theta_{i}\lt \pi \,$</span> for <span class="math-container">$ 1\leq i\lt d $</span> but <span class="math-container">$ 0\leq \theta_d \lt 2\pi $</span>.<br /> <br /> So for <span class="math-container">$S^1$</span> we just have (<span class="math-container">$\theta_1$</span>):<br /> <br /> <span class="math-container">$X^1=\cos\theta_1,\quad X^2=\sin\theta_1$</span><br /> <br /> <span class="math-container">$S^1$</span> is embedded into <span class="math-container">$E^2$</span> and for the metric on <span class="math-container">$S^1$</span> we have: <span class="math-container">$$ds_1^2=\sum_1^2(dX^i)^2=d\theta_1^2$$</span><br /> for <span class="math-container">$S^2$</span> we have (<span class="math-container">$\theta_1, \theta_2$</span>) so for Cartesian coordinates we have:<br /> <br /> <span class="math-container">$X^1=\cos\theta_1,\quad X^2=\sin\theta_1\cos\theta_2,$</span> <span 
class="math-container">$\quad X^3=\sin\theta_1\sin\theta_2$</span><br /> <br /> and for its metric: <span class="math-container">$$ds_2^2=\sum_1^3(dX^i)^2=d\theta_1^2+\sin^2\theta_1 d\theta_2^2$$</span><br /> for <span class="math-container">$S^3$</span> which is embedded into <span class="math-container">$E^4$</span> we have (<span class="math-container">$ \theta_1,\theta_2,\theta_3 $</span>):</p> <p><br /> <span class="math-container">$X^1=\cos\theta_1,\quad X^2=\sin\theta_1\cos\theta_2,$</span> <span class="math-container">$\quad X^3=\sin\theta_1\sin\theta_2\cos\theta_3$</span> <span class="math-container">$\quad X^4=\sin\theta_1\sin\theta_2\sin\theta_3 $</span><br /> <span class="math-container">$$ds_3^2=\sum_{i=1}^4(dX^i)^2=d\theta_1^2+\sin^2\theta_1 d\theta_2^2+\sin^2\theta_1\sin^2\theta_2\,d\theta_3^2$$</span><br /> Finally, it is not difficult to show the metric on <span class="math-container">$S^d$</span> will be:</p> <p><span class="math-container">$$ds_d^2=\sum_{i=1}^{d+1}(dX^i)^2=d\theta_1^2+\sin^2\theta_1 d\theta_2^2+\sin^2\theta_1\sin^2\theta_2\,d\theta_3^2+\cdots+\sin^2\theta_1\cdots\sin^2\theta_{d-1}\,d\theta_d^2$$</span></p>
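As a sanity check (a Python sketch, not part of the original answer), the four coordinates of the $S^3$ parametrization should satisfy $\sum_i (X^i)^2 = 1$ for arbitrary angles:

```python
import math

def embed_s3(t1, t2, t3):
    # (X^1, X^2, X^3, X^4) from the parametrization of S^3 in E^4
    s1, s2 = math.sin(t1), math.sin(t2)
    return (math.cos(t1),
            s1 * math.cos(t2),
            s1 * s2 * math.cos(t3),
            s1 * s2 * math.sin(t3))

norms = []
for angles in [(0.3, 1.1, 2.0), (1.5, 0.7, 4.2), (2.9, 2.2, 0.1)]:
    X = embed_s3(*angles)
    norms.append(sum(x * x for x in X))
print(norms)  # each entry is 1.0 up to rounding
```

This is just the telescoping identity $\cos^2\theta_1+\sin^2\theta_1(\cos^2\theta_2+\sin^2\theta_2(\cos^2\theta_3+\sin^2\theta_3))=1$ checked in floating point.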
probability
<p>Wikipedia says:</p> <blockquote> <p>The probability density function is nonnegative everywhere, and its integral over the entire space is equal to one.</p> </blockquote> <p>and it also says.</p> <blockquote> <p>Unlike a probability, a probability density function can take on values greater than one; for example, the uniform distribution on the interval $[0, \frac{1}{2}]$ has probability density $f(x) = 2$ for $0 ≤ x ≤ \frac{1}{2}$ and $f(x) = 0$ elsewhere.</p> </blockquote> <p>How are these two things compatible?</p>
<p>Consider the uniform distribution on the interval from $0$ to $1/2$. The value of the density is $2$ on that interval, and $0$ elsewhere. The area under the graph is the area of a rectangle. The length of the base is $1/2$, and the height is $2$: $$ \int\text{density} = \text{area of rectangle} = \text{base} \cdot\text{height} = \frac 12\cdot 2 = 1. $$</p> <p>More generally, if the density has a large value over a small region, then the probability is comparable to the value times the size of the region. (I say "comparable to" rather than "equal to" because the value may not be the same at all points in the region.) The probability within the region must not exceed $1$. A large number---much larger than $1$---multiplied by a small number (the size of the region) can be less than $1$ if the latter number is small enough.</p>
<p>Remember that the 'PD' in PDF stands for "probability density", not probability. Density means probability per unit value of the random variable. That can easily exceed $1$. What has to be true is that the integral of this density function over all values of the random variable must be exactly $1$.</p> <p>If we know a PDF (e.g. the normal distribution) and want to know the "probability" of a given value, say $x=1$, what do people usually do? To find the probability that the output of a random event falls within some range, they integrate the PDF over that range.</p> <p>Also see <a href="http://www.mathworks.com/matlabcentral/newsreader/view_thread/248797" rel="noreferrer">Why mvnpdf give probablity larger than 1?</a></p>
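To make the point of both answers concrete, here is a small Python sketch that integrates the density $f(x)=2$ on $[0,1/2]$ with a Riemann sum; the density exceeds $1$ everywhere on its support, yet the total probability is $1$:

```python
def f(x):
    # density of the uniform distribution on [0, 1/2]
    return 2.0 if 0.0 <= x <= 0.5 else 0.0

# Midpoint Riemann sum of f over [0, 1].  The density takes the
# value 2 > 1, but the area under the graph is base * height = 1.
n = 100000
dx = 1.0 / n
total = sum(f((i + 0.5) * dx) * dx for i in range(n))
print(total)  # very close to 1.0
```

Shrinking the interval further (say to $[0, 1/100]$ with density $100$) makes the density arbitrarily large while the integral stays $1$.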
probability
<p>I found this problem in a contest of years ago, but I'm not very good at probability, so I prefer to see how you do it:</p> <blockquote> <p>A man gets drunk half of the days of a month. To open his house, he has a set of keys with <span class="math-container">$5$</span> keys that are all very similar, and only one key lets him enter his house. Even when he arrives sober he doesn't know which key is the correct one, so he tries them one by one until he chooses the correct key. When he's drunk, he also tries the keys one by one, but he can't distinguish which keys he has tried before, so he may repeat the same key.</p> <p>One day we saw that he opened the door on his third try.</p> <p>What is the probability that he was drunk that day?</p> </blockquote>
<p>The key thing here is this: let $T$ be the number of tries it takes him to open the door. Let $D$ be the event that the man is drunk. Then $$ P(D\mid T=3)=\frac{P(T=3, D)}{P(T=3)}. $$ Now, the event that it takes three tries to open the door can be decomposed as $$ P(T=3)=P(T=3\mid D)\cdot P(D)+P(T=3\mid \neg D)\cdot P(\neg D). $$ By assumption, $P(D)=P(\neg D)=\frac{1}{2}$. So, we just need to compute the probability of requiring three attempts when drunk and when sober.</p> <p>When he's sober, it takes three tries precisely when he chooses a wrong key, followed by a different wrong key, followed by the right key; the probability of doing this is $$ P(T=3\mid \neg D)=\frac{4}{5}\cdot\frac{3}{4}\cdot\frac{1}{3}=\frac{1}{5}. $$</p> <p>When he's drunk, it is $$ P(T=3\mid D)=\frac{4}{5}\cdot\frac{4}{5}\cdot\frac{1}{5}=\frac{16}{125}. $$</p> <p>So, all told, $$ P(T=3)=\frac{16}{125}\cdot\frac{1}{2}+\frac{1}{5}\cdot\frac{1}{2}=\frac{41}{250}. $$ Finally, $$ P(T=3, D)=P(T=3\mid D)\cdot P(D)=\frac{16}{125}\cdot\frac{1}{2}=\frac{16}{250} $$ (intentionally left unsimplified). So, we get $$ P(D\mid T=3)=\frac{\frac{16}{250}}{\frac{41}{250}}=\frac{16}{41}. $$</p>
<p>Let's first compute the probability that he wins on the third try in each of the two cases:</p> <p>Sober: The key has to be one of the (ordered) five, with equal probability for each, so $p_{sober}=p_s=\frac 15$.</p> <p>Drunk: Success on any trial has probability $\frac 15$. To win on the third means he fails twice then succeeds, so $p_{drunk}=p_d=\frac 45\times \frac 45 \times \frac 15 = \frac {16}{125}$</p> <p>Since our prior was $\frac 12$ the new estimate for the probability is $$\frac {.5\times p_d}{.5p_d+.5p_s}=\frac {16}{41}=.\overline {39024}$$</p>
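The arithmetic in both answers can be checked exactly with Python's `fractions` module (a quick sketch):

```python
from fractions import Fraction

half = Fraction(1, 2)  # prior: drunk on half of the days

# P(opens on 3rd try | sober): two distinct wrong keys, then the right one
p_sober = Fraction(4, 5) * Fraction(3, 4) * Fraction(1, 3)

# P(opens on 3rd try | drunk): keys tried with replacement
p_drunk = Fraction(4, 5) * Fraction(4, 5) * Fraction(1, 5)

# Bayes' rule: P(drunk | opened on the 3rd try)
posterior = (half * p_drunk) / (half * p_drunk + half * p_sober)
print(p_sober, p_drunk, posterior)  # 1/5 16/125 16/41
```

Exact rational arithmetic avoids any rounding questions: the posterior is exactly $16/41 \approx 0.39$.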
linear-algebra
<p>What is the importance of eigenvalues/eigenvectors? </p>
<h3>Short Answer</h3> <p><em>Eigenvectors make understanding linear transformations easy</em>. They are the "axes" (directions) along which a linear transformation acts simply by "stretching/compressing" and/or "flipping"; eigenvalues give you the factors by which this compression occurs. </p> <p>The more directions you have along which you understand the behavior of a linear transformation, the easier it is to understand the linear transformation; so you want to have as many linearly independent eigenvectors as possible associated to a single linear transformation.</p> <hr> <h3>Slightly Longer Answer</h3> <p>There are a lot of problems that can be modeled with linear transformations, and the eigenvectors give very simply solutions. For example, consider the system of linear differential equations \begin{align*} \frac{dx}{dt} &amp;= ax + by\\\ \frac{dy}{dt} &amp;= cx + dy. \end{align*} This kind of system arises when you describe, for example, the growth of population of two species that affect one another. For example, you might have that species $x$ is a predator on species $y$; the more $x$ you have, the fewer $y$ will be around to reproduce; but the fewer $y$ that are around, the less food there is for $x$, so fewer $x$s will reproduce; but then fewer $x$s are around so that takes pressure off $y$, which increases; but then there is more food for $x$, so $x$ increases; and so on and so forth. It also arises when you have certain physical phenomena, such a particle on a moving fluid, where the velocity vector depends on the position along the fluid.</p> <p>Solving this system directly is complicated. 
But suppose that you could do a change of variable so that instead of working with $x$ and $y$, you could work with $z$ and $w$ (which depend linearly on $x$ and also $y$; that is, $z=\alpha x+\beta y$ for some constants $\alpha$ and $\beta$, and $w=\gamma x + \delta y$, for some constants $\gamma$ and $\delta$) and the system were transformed into something like \begin{align*} \frac{dz}{dt} &amp;= \kappa z\\\ \frac{dw}{dt} &amp;= \lambda w \end{align*} that is, you can "decouple" the system, so that now you are dealing with two <em>independent</em> functions. Then solving this problem becomes rather easy: $z=Ae^{\kappa t}$, and $w=Be^{\lambda t}$. Then you can use the formulas for $z$ and $w$ to find expressions for $x$ and $y$.</p> <p>Can this be done? Well, it amounts <em>precisely</em> to finding two linearly independent eigenvectors for the matrix $\left(\begin{array}{cc}a &amp; b\\c &amp; d\end{array}\right)$! $z$ and $w$ correspond to the eigenvectors, and $\kappa$ and $\lambda$ to the eigenvalues. By taking an expression that "mixes" $x$ and $y$, and "decoupling it" into one that acts independently on two different functions, the problem becomes a lot easier. </p> <p>That is the essence of what one hopes to do with the eigenvectors and eigenvalues: "decouple" the ways in which the linear transformation acts into a number of independent actions along separate "directions", that can be dealt with independently. A lot of problems come down to figuring out these "lines of independent action", and understanding them can really help you figure out what the matrix/linear transformation is "really" doing. </p>
<h3>A short explanation</h3> <p>Consider a matrix <span class="math-container">$A$</span>, for example one representing a physical transformation (e.g. a rotation). When this matrix is used to transform a given vector <span class="math-container">$x$</span>, the result is <span class="math-container">$y = A x$</span>.</p> <p>Now an interesting question is</p> <blockquote> <p>Are there any vectors <span class="math-container">$x$</span> which do not change their direction under this transformation, but only have their magnitude scaled by some scalar <span class="math-container">$ \lambda $</span>?</p> </blockquote> <p>Such a question is of the form <span class="math-container">$$A x = \lambda x $$</span></p> <p>So, such special <span class="math-container">$x$</span> are called <em>eigenvector</em>(s), and the scaling factor is the <em>eigenvalue</em> <span class="math-container">$ \lambda $</span>.</p>
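Both answers are easy to illustrate numerically. A short sketch assuming NumPy, with a matrix chosen purely for illustration:

```python
import numpy as np

# A symmetric matrix chosen purely for illustration.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)  # the two eigenvalues (here 3 and -1, in some order)

# Each eigenvector column v satisfies A v = lambda v: along these
# directions the matrix acts by pure scaling.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)

# Changing basis to the eigenvectors diagonalizes A; for the system
# dx/dt = A x this is exactly the "decoupling" described above.
P = eigvecs
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))  # diagonal matrix with the eigenvalues on it
```

The off-diagonal entries of $P^{-1}AP$ vanish, which is the matrix form of replacing the coupled system in $(x,y)$ by independent equations in $(z,w)$.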
matrices
<p>I've noticed that $\mathrm{GL}_n(\mathbb R)$ is not a connected space, because if it were, then $\det(\mathrm{GL}_n(\mathbb R))$ (where $\det$ is the function ascribing to each $n\times n$ matrix its determinant) would be a connected space too, since $\det$ is a continuous function. But $\det(\mathrm{GL}_n(\mathbb R))=\mathbb R\setminus\{0\},$ which is not connected.</p> <p>I started wondering whether I could prove that $\det^{-1}((-\infty,0))$ and $\det^{-1}((0,\infty))$ are connected. But I don't know how to prove that. I'm reading my notes from the topology course I took last year and I see nothing about proving connectedness...</p>
<p>Your suspicion is correct: $GL_n$ has two components, and $\det$ may be used to show there are at least two of them. The other direction is slightly more involved and requires linear algebra rather than topology. Here is a sketch of how to do this:</p> <p>i) If $b$ is any nonzero vector, let $R_b$ denote the reflection through the hyperplane perpendicular to $b$; every reflection arises this way. Any two reflections $R_a, R_b$ with $a, b$ linearly independent can be joined by a path consisting of reflections, namely $R_{ta+ (1-t)b}, t\in[0,1]$. </p> <p>ii) Any $X\in O^+(n)$ (orthogonal matrices with positive determinant) is the product of an even number of reflections. Since matrix multiplication is continuous as a map $O(n)\times O(n) \rightarrow O(n)$, and by i) you can join any product $R_a R_b$ to $R_a R_a = Id$, it follows that $O^+(n)$ is connected.</p> <p>iii) $\det$ shows $O(n)$ is not connected.</p> <p>iv) $O^-(n) = R O^+ (n)$ for any reflection $R$. Hence $O^-(n)$ is connected.</p> <p>v) Any $ X\in GL_n$ is the product $AO$ of a positive definite symmetric matrix $A$ and $O \in O(n)$ (polar decomposition). Now you only need to show that the positive definite matrices are connected, which can be shown again using the convex combination with $Id$. This proves the claim.</p>
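The polar decomposition used in step v) can be computed from the SVD. A numerical sketch (assuming NumPy; the matrix is random, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3))  # a generic (invertible) matrix

# Polar decomposition X = A O via the SVD X = U S V^T:
# O = U V^T is orthogonal, and A = U S U^T is symmetric positive definite.
U, S, Vt = np.linalg.svd(X)
O = U @ Vt
A = U @ np.diag(S) @ U.T

assert np.allclose(A @ O, X)              # X = A O
assert np.allclose(O @ O.T, np.eye(3))    # O is orthogonal
assert np.all(np.linalg.eigvalsh(A) > 0)  # A is positive definite

# Since det(A) > 0, the sign of det(X) equals the sign of det(O),
# which is how the two components of GL_n mirror those of O(n).
print(np.sign(np.linalg.det(X)) == np.sign(np.linalg.det(O)))  # True
```

This makes step v) concrete: connecting $A$ to $Id$ through positive definite matrices slides $X$ onto the orthogonal group without ever crossing the determinant-zero wall.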
<p>Here's another proof. First, by <a href="http://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process">Gram-Schmidt</a>, any element of $\text{GL}_n(\mathbb{R})$ may be connected by a path to an element of $\text{O}(n)$. Second, by the <a href="http://en.wikipedia.org/wiki/Spectral_theorem">spectral theorem</a>, any element of $\text{SO}(n)$ is connected to the identity by a <a href="http://en.wikipedia.org/wiki/One-parameter_group">one-parameter group</a>. Multiplying by an element of $\text{O}(n)$ not in $\text{SO}(n)$, the conclusion follows.</p> <p>The first part of the proof can actually be augmented to say much stronger: it turns out that Gram-Schmidt shows that $\text{GL}_n(\mathbb{R})$ <a href="http://en.wikipedia.org/wiki/Deformation_retract">deformation retracts</a> onto $\text{O}(n)$, so not only do they have the same number of connected components, but they are homotopy equivalent. </p> <p>Note that $\text{GL}_n(\mathbb{R})$ is a <a href="http://en.wikipedia.org/wiki/Manifold">manifold</a>, hence <a href="http://en.wikipedia.org/wiki/Locally_connected_space#Components_and_path_components">locally path-connected</a>, so its components and path components coincide. </p>
probability
<p>Some years ago I was interested in the following Markov chain whose state space is the positive integers. The chain begins at state "1", and from state "n" the chain next jumps to a state uniformly selected from {n+1,n+2,...,2n}.</p> <p>As time goes on, this chain goes to infinity, with occasional large jumps. In any case, the chain is quite unlikely to hit any particular large n.</p> <p>If you define p(n) to be the probability that this chain visits state "n", then p(n) goes to zero like c/n for some constant c. In fact, </p> <p>$$ np(n) \to c = {1\over 2\log(2)-1} = 2.588699. \tag1$$</p> <p>In order to prove this convergence, I recast it as an analytic problem. Using the Markov property, you can see that the sequence satisfies: </p> <p>$$ p(1)=1\quad\mbox{ and }\quad p(n)=\sum_{\lceil n/2\rceil}^{n-1} {p(j)\over j}\mbox{ for }n&gt;1. \tag2$$</p> <p>For some weeks, using generating functions etc. I tried and failed to find an analytic proof of the convergence in (1). Finally, at a conference in 2003 Tom Mountford showed me a (non-trivial) probabilistic proof.</p> <p>So the result is true, but since then I've continued to wonder if I missed something obvious. Perhaps there is a standard technique for showing that (2) implies (1).</p> <p><strong>Question:</strong> Is there a direct (short?, analytic?) proof of (1)? </p> <p>Perhaps someone who understands sequences better than I do could take a shot at this.</p> <p><strong>Update:</strong> I'm digging through my old notes on this. I now remember that I had a proof (using generating functions) that <em>if</em> $\ np(n)$ converges, then the limit is $1\over{2\log (2)-1}$. It was the convergence that eluded me.</p> <p>I also found some curiosities like: $\sum_{n=1}^\infty {p(n)\over n(2n+1)}={1\over 2}.$ </p> <p><strong>Another update</strong>: Here is the conditional result mentioned above. 
</p> <p>As in Qiaochu's answer, define $Q$ to be the generating function of $p(n)/n$, that is, $Q(t)=\sum_{n=1}^\infty {p(n)\over n} t^n$ for $0\leq t&lt;1$. Differentiating gives $$Q^\prime(t)=1+{Q(t)-Q(t^2)\over 1-t}.$$ This is slightly different from Qiaochu's expression because $p(n)\neq \sum_{j=\lceil n/2\rceil}^{n-1} {p(j)\over j}$ when $n=1$, so that $p(1)$ has to be treated separately.</p> <p>Differentiating again and multiplying by $1-t$, we get $$(1-t)Q^{\prime\prime}(t)=-1+2\left[Q^\prime(t)-t Q^\prime(t^2)\right],$$ that is, $$(1-t)\sum_{j=0}^\infty (j+1) p(j+2) t^j = -1+2\left[\sum_{j=1}^\infty (jp(j)) {t^j-t^{2j}\over j}\right].$$</p> <p>Assume that $\lim_n np(n)=c$ exists. Letting $t\to 1$ above the left hand side gives $c$, while the right hand side is $-1+2c\log(2)$ and hence $c={1\over 2\log(2)-1}$.</p> <p>Note: $\sum_{j=1}^\infty {t^j-t^{2j}\over j}=\log(1+t).$ </p> <p><strong>New update: (Sept. 2)</strong> </p> <p>Here's an alternative proof of the conditional result that my colleague Terry Gannon showed me in 2003.</p> <p>Start with the sum $\sum_{n=2}^{2N}\ p(n)$, substitute the formula in the title, exchange the variables $j$ and $n$, and rearrange to establish the identity:</p> <p>$${1\over 2}=\sum_{j=N+1}^{2N} {j-N\over j}\ {p(j)}.$$</p> <p>If $jp(j)\to c$, then $1/2=\lim_{N\to\infty} \sum_{j=N+1}^{2N} {j-N\over j^2}\ c=(\log(2)-1/2)\ c,$ so that $c={1\over 2\log(2)-1}$.</p> <p><strong>New update: (Sept. 8)</strong> Despite the nice answers and interesting discussion below, I am still holding out for an (nice?, short?) analytic proof of convergence. Basic Tauberian theory is allowed :) </p> <p><strong>New update: (Sept 13)</strong> I have posted a sketch of the probabilistic proof of convergence under "A fun and frustrating recurrence sequence" in the "Publications" section of my homepage.</p> <p><strong>Final Update:</strong> (Sept 15th) The deadline is approaching, so I have decided to award the bounty to T.. 
Modulo the details(!), it seems that the probabilistic approach is the most likely to lead to a proof.</p> <p>My sincere thanks to everyone who worked on the problem, including those who tried it but didn't post anything. </p> <p>In a sense, I <em>did</em> get an answer to my question: there doesn't seem to be an easy, or standard proof to handle this particular sequence. </p>
<p>Update: the following probabilistic argument I had posted earlier shows only that $p(1) + p(2) + \dots + p(n) = (c + o(1)) \log(n)$ and not, as originally claimed, the convergence $np(n) \to c$. Until a complete proof is available [edit: George has provided one in another answer] it is not clear whether $np(n)$ converges or has some oscillation, and at the moment there is evidence in both directions. Log-periodic or other slow oscillation is known to occur in some problems where the recursion accesses many previous terms. Actually, everything I can calculate about $np(n)$ is consistent with, and in some ways suggestive of, log-periodic fluctuations, with convergence being the special case where the bounds could somehow be strengthened and the fluctuation scale thus squeezed down to zero.</p> <hr> <p>$p(n) \sim c/n$ is [edit: only in average] equivalent to $p(1) + p(2) + \dots + p(n)$ being asymptotic to $c \log(n)$. The sum up to $p(n)$ is the expected time the walk spends in the interval [1,n]. For this quantity there is a simple probabilistic argument that explains (and can rigorously demonstrate) the asymptotics.</p> <p>This Markov chain is a discrete approximation to a log-normal random walk. If $X$ is the position of the particle, $\log X$ behaves like a simple random walk with steps $\mu \pm \sigma$ where $\mu = 2 \log 2 - 1 = 1/c$ and $\sigma^2 = (1- \mu)^2/2$. This is true because <em>the Markov chain is bounded between two easily analyzed random walks with continuous steps</em>.</p> <p>(Let X be the position of the particle and $n$ the number of steps; the walk starts at X=1, n=1.)</p> <p>Lower bound walk $L$: at each step, multiply X by a uniform random number in [1,2] and replace n by (n+1). $\log L$ increases, on average, by $\int_1^2 log(t) dt = 2 \log(2) - 1$ at each step.</p> <p>Upper bound walk $U$: at each step, jump from X to uniform random number in [X+1,2X+1] and replace n by (n+1). 
</p> <p>$L$ and $U$ have means and variances that are the same within $O(1/n)$, where the $O()$ constants can be made explicit. Steps of $L$ are i.i.d. and steps of $U$ are independent and asymptotically identically distributed. Thus, the Central Limit theorem shows that $\log X$ after $n$ steps is approximately a Gaussian with mean $n\mu + O(\log n)$ and variance $n\sigma^2 + O(\log n)$. </p> <p>The number of steps for the particle to escape the interval $[1,t]$ is therefore $({\log t})/\mu$ with fluctuations of size $A \sqrt{\log t}$ having probability that decays rapidly in $A$ (bounded by $|A|^p \exp(-qA^2)$ for suitable constants). Thus, the sum $p(1) + p(2) + \dots + p(n)$ is asymptotically equivalent to $(\log n)/(2\log (2)-1)$.</p> <p>Maybe this is equivalent to the 2003 argument from the conference. If the goal is to get a proof from the generating function, it suggests that dividing by $(1-x)$ may be useful for smoothing the $p(n)$'s.</p>
<p>After getting some insight by looking at some numerical calculations to see what $np(n)-c$ looks like for large $n$, I can now describe the convergence in some detail. First, the results of the simulations strongly suggested the following.</p> <ul> <li>for $n$ odd, $np(n)$ converges to $c$ from below at rate $O(1/n)$.</li> <li>for $n$ a multiple of 4, $np(n)$ converges to $c$ from above at rate $O(1/n^2)$.</li> <li>for $n$ a multiple of 2, but not 4, $np(n)$ converges to $c$ from below at rate $O(1/n^2)$.</li> </ul> <p>This suggests the following asymptotic form for $p(n)$. $$ p(n)=\frac{c}{n}\left(1+\frac{a_1(n)}{n}+\frac{a_2(n)}{n^2}\right)+o\left(\frac{1}{n^3}\right) $$ where $a_1(n)$ has period 2, with $a_1(0)=0$, $a_1(1) &lt; 0$ and $a_2(n)$ has period 4 with $a_2(0) &gt; 0$ and $a_2(2) &lt; 0$. In fact, we can expand $p(n)$ to arbitrary order [Edit: Actually, not quite true. See below] $$ \begin{array} {}p(n)=c\sum_{r=0}^s\frac{a_r(n)}{n^{r+1}}+O(n^{-s-2})&amp;&amp;(1) \end{array} $$ and the term $a_r(n)$ is periodic in $n$, with period $2^r$. Here, I have normalized $a_0(n)$ to be equal to 1.</p> <p>We can compute all of the coefficients in (1). As $p$ satisfies the recurrence relation $$ \begin{array} \displaystyle p(n+1)=(1+1/n)p(n)-1_{\lbrace n\textrm{ even}\rbrace}\frac2np(n/2) -1_{\lbrace n=1\rbrace}.&amp;&amp;(2) \end{array} $$ we can simply plug (1) into this, expand out the terms $(n+1)^{-r}=\sum_s\binom{-r}{s}n^{-r-s}$ on the left hand side, and compare coefficients of $1/n$. $$ \begin{array} \displaystyle a_r(k+1)-a_r(k)=a_{r-1}(k)-\sum_{s=0}^{r-1}\binom{-s-1}{r-s}a_{s}(k+1)-1_{\lbrace k\textrm{ even}\rbrace}2^{r+1}a_{r-1}(k/2).&amp;&amp;(3) \end{array} $$ Letting $\bar a_r$ be the average of $a_r(k)$ as $k$ varies, we can average (3) over $k$ to get a recurrence relation for $\bar a_r$. 
Alternatively, the function $f(n)=\sum_r\bar a_rn^{-r-1}$ must satisfy $f(n+1)=(1+1/n)f(n)-f(n/2)/n$ which is solved by $f(n)=1/(n+1)=\sum_{r\ge0}(-1)^rn^{-r-1}$, so we get $\bar a_r=(-1)^r$. Then, (3) can be applied iteratively to obtain $a_r(k+1)-a_r(k)$ in terms of $a_s(k)$ for $s &lt; r$. Together with $\bar a_r$, this gives $a_r(k)$ and it can be seen that the period of $a_r(k)$ doubles at each step. Doing this gives $a_r\equiv(a_r(0),\ldots,a_r(2^r-1))$ as follows $$ \begin{align} &amp; a_0=(1),\\\\ &amp; a_1=(0,-2),\\\\ &amp; a_2=(7,-3,-9,13)/2 \end{align} $$ These agree with the numerical simulation mentioned above.</p> <hr> <p>Update: I tried another numerical simulation to check these asymptotics, by successively subtracting out the leading order terms. You can see it converges beautifully to the levels $a_0$, $a_1$, $a_2$ but, then...</p> <p><img src="https://i.sstatic.net/ZDNzb.gif" alt="convergence to asymptotics"></p> <p>... it seems that after the $a_2n^{-3}$ part, there is an oscillating term! I wasn't expecting that, but there is an asymptotic of the form $cn^{-r}\sin(a\log n+\theta)$, where you can solve to leading order to obtain $r\approx3.54536$, $a\approx10.7539$.</p> <hr> <p>Update 2: I was re-thinking this question a few days ago, and it suddenly occurred to me how you can not only prove it using analytic methods, but give a full asymptotic expansion to arbitrary order. The idea involves some very cool maths! (if I may say so myself). Apologies that this answer has grown and grown, but I think it's worth it. It is a very interesting question and I've certainly learned a lot by thinking about it. The idea is that, instead of using a power series generating function as in Qiaochu's answer, you use a Dirichlet series which can be inverted with <a href="http://en.wikipedia.org/wiki/Perron%27s_formula" rel="nofollow noreferrer">Perron's formula</a>.
First, the expansion is as follows, $$ \begin{array}{ccc} \displaystyle p(n)=\sum_{\Re[r]+k\le \alpha}a_{r,k}(n)n^{-r-k}+o(n^{-\alpha}),&amp;&amp;(4) \end{array} $$ for any real $\alpha$. The sum is over nonnegative integers $k$ and complex numbers $r$ with real part at least 1 and satisfying $r+1=2^r$ (the leading term being $r=1$), and $a_{r,k}(n)$ is a periodic function of $n$, with period $2^k$. The reason for such exponents is that the difference equation (2) has the continuous-time limit $p^\prime(x)=p(x)/x-p(x/2)/x$ which has solutions $p(x)=x^{-r}$ for precisely such exponents. Splitting into real and imaginary parts $r=u+iv$, all solutions to $r+1=2^r$ lie on the line $(1+u)^2+v^2=4^u$ and, other than the leading term $u=1$, $v=0$, there is precisely one complex solution with imaginary part $2n\pi\le v\log2\le2n\pi+\pi/2$ (positive integer $n$) and, together with the complex conjugates, this lists all possible exponents $r$. Then, $a_{r,k}(n)$ is determined (as a multiple of $a_{r,0}$) for $k &gt; 0$ by substituting this expansion back into the difference equation as I did above. I arrived at this expansion after running the simulations plotted above (and T..'s mention of complex exponents of n helped). Then, the Dirichlet series idea nailed it.</p> <p>Define the Dirichlet series with coefficients $p(n)/n$ $$ L(s)=\sum_{n=1}^\infty p(n)n^{-1-s}, $$ which converges absolutely for the real part of $s$ large enough (greater than 0, since $p$ is bounded by 1). It can be seen that $L(s)-1$ is of order $2^{-1-s}$ in the limit as the real part of $s$ goes to infinity. 
Multiplying (2) by $n^{-s}$, summing and expanding $n^{-s}$ in terms of $(n+1)^{-s}$ on the LHS gives the functional equation $$ \begin{array} \displaystyle (s-1+2^{-s})L(s)=s+\sum_{k=1}^\infty(-1)^k\binom{-s}{k+1}(L(s+k)-1).&amp;&amp;(5) \end{array} $$ This extends $L$ to a meromorphic function on the whole complex plane with simple poles precisely at the points $-r$ with real part at least one and $r+1 = 2^r$, and then at $-r-k$ for nonnegative integers $k$. The pole with largest real component is at $s = -1$ and has residue $$ {\rm Res}(L,-1)={\rm Res}(s/(s-1+2^{-s}),-1)=\frac{1}{2\log2-1}. $$ If we <em>define</em> $p^\prime(n)$ by the truncated expansion (4) for some suitably large $\alpha$, then it will satisfy the recurrence relation (2) up to an error term of size $O(n^{-\alpha-1})$ and its Dirichlet series will satisfy the functional equation (5) up to an error term which will be an analytic function over $\Re[s] &gt; -\alpha$ (being a Dirichlet series with coefficients $o(n^{-\alpha-1})$). It follows that $p^\prime$ has simple poles in the same places as $p$ on the domain $\Re[s] &gt; -\alpha$ and, by choosing $a_{r,0}$ appropriately, it will have the same residues. Then, the Dirichlet series associated with $q(n)= p^\prime(n)-p(n)$ will be analytic on $\Re[s] &gt; -\alpha$. We can use <a href="http://en.wikipedia.org/wiki/Perron%27s_formula" rel="nofollow noreferrer">Perron's formula</a> to show that $q(n)$ is of size $O(n^\beta)$ for any $\beta &gt; -\alpha$ and, by making $\alpha$ as large as we like, this will prove the asymptotic expansion (4). Differentiated, Perron's formula gives $$ dQ(x)/dx = \frac{1}{2\pi i}\int_{\beta-i\infty}^{\beta+i\infty}x^sL(s)\,ds $$ where $Q(x)=\sum_{n\le x}q(n)$ and $\beta &gt; -\alpha$. This expression is intended to be taken in the sense of distributions (i.e., multiply both sides by a smooth function with compact support in $(0,\infty)$ and integrate).
If $\theta$ is a smooth function of compact support in $(0,\infty)$ then $$ \begin{array} \displaystyle\sum_{n=1}^\infty q(n)\theta(n/x)/x&amp;\displaystyle=\int_0^\infty\theta(y)\dot Q(xy)\,dy\\\\ &amp;\displaystyle=\frac{1}{2\pi i}\int_{\beta-i\infty}^{\beta+i\infty}x^sL(s)\int\theta(y)y^s\,dy\,ds=O(x^\beta)\ \ (6) \end{array} $$ We obtain the final bound because, by integration by parts, the integral of $\theta(y)y^s$ can be shown to go to zero faster than any power of $1/s$, so the integrand is indeed integrable and the $x^\beta$ term can be pulled outside. This is enough to show that $q(n)$ is itself of $O(n^\beta)$. Trying to finish this answer off without too much further detail, the argument is as follows. If $q(n)n^{-\beta}$ were unbounded then it would keep exceeding its previous maximum and, by the recurrence relation (2), it would take time at least <a href="http://en.wikipedia.org/wiki/Big-omega_notation" rel="nofollow noreferrer">&Omega;(n)</a> to get back close to zero. So, if $\theta$ has support in $[1,1+\epsilon]$ for small enough $\epsilon$, the integral $\int\theta(y)\dot Q(ny)\,dy$ will be of order $q(n)$ at such times and, as this happens infinitely often, it would contradict (6). Phew! I knew that this could be done, but it took some work! Possibly not as simple or direct as you were asking for, but Dirichlet series are quite standard (more commonly in analytic number theory, in my experience). However, maybe not really more difficult than the probabilistic method and you do get a whole lot more. This approach should also work for other types of recurrence relations and differential equations too.</p> <p>Finally, I added a much more detailed writeup on my blog, fleshing out some of the details which I skimmed over here. See <a href="http://almostsure.wordpress.com/2010/10/06/asymptotic-expansions-of-a-recurrence-relation/" rel="nofollow noreferrer">Asymptotic Expansions of a Recurrence Relation</a>.</p>
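<p>Both the constant $c=1/(2\log2-1)$ and the oscillation exponent quoted above can be checked directly. The sketch below iterates recurrence (2) (taking the initial condition $p(1)=1$, which is not stated explicitly in the answer but is the natural normalization making the recurrence match the values discussed) and then Newton-solves $2^r=r+1$ for the first complex exponent:</p>

```python
import math

# iterate recurrence (2): p(n+1) = (1 + 1/n) p(n) - [n even](2/n) p(n/2) - [n = 1]
N = 200000
p = [0.0] * (N + 1)
p[1] = 1.0
for n in range(1, N):
    p[n + 1] = (1 + 1 / n) * p[n] - (n % 2 == 0) * (2 / n) * p[n // 2] - (n == 1)

c = 1 / (2 * math.log(2) - 1)
# n even: n p(n) = c + O(1/n^2); n odd: n p(n) = c (1 - 2/n) + O(1/n^2)
print(N * p[N], (N - 1) * p[N - 1], c)

# first complex solution of 2^r = r + 1 (Newton's method), which drives the
# oscillating term n^{-Re r} sin((Im r) log n + theta)
r = complex(3.5, 10.75)
for _ in range(60):
    r -= (2 ** r - r - 1) / (math.log(2) * 2 ** r - 1)
print(r)  # about 3.54536 + 10.75397j
```

<p>The real and imaginary parts of the root reproduce the numbers $r\approx3.54536$, $a\approx10.7539$ observed in the simulation.</p>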
probability
<p>I've read the proof for why <span class="math-container">$\int_0^\infty P(X &gt;x)dx=E[X]$</span> for nonnegative random variables (located <a href="https://en.wikipedia.org/wiki/Expected_value#Formula_for_non-negative_random_variables" rel="noreferrer">here</a>) and understand its mechanics, but I'm having trouble understanding the intuition behind this formula or why it should be the case at all. Does anyone have any insight on this? I bet I'm missing something obvious.</p>
<p>For the discrete case, and if $X$ is nonnegative, $E[X] = \sum_{x=0}^\infty x P(X = x)$. That means we're adding up $P(X = 0)$ zero times, $P(X = 1)$ once, $P(X = 2)$ twice, etc. This can be represented in array form, where we're adding column-by-column:</p> <p>$$\begin{matrix} P(X=1) &amp; P(X = 2) &amp; P(X = 3) &amp; P(X = 4) &amp; P(X = 5) &amp; \cdots \\ &amp; P(X = 2) &amp; P(X = 3) &amp; P(X = 4) &amp; P(X = 5) &amp; \cdots \\ &amp; &amp; P(X = 3) &amp; P(X = 4) &amp; P(X = 5) &amp; \cdots \\ &amp; &amp; &amp; P(X = 4) &amp; P(X = 5) &amp; \cdots \\ &amp; &amp; &amp; &amp; P(X = 5) &amp; \cdots\end{matrix}.$$</p> <p>We could also add up these numbers row-by-row, though, and get the same result. The first row has everything but $P(X = 0)$ and so sums to $P(X &gt; 0)$. The second row has everything but $P(X =0)$ and $P(X = 1)$ and so sums to $P(X &gt; 1)$. In general, the sum of row $x+1$ is $P(X &gt; x)$, and so adding the numbers row-by-row gives us $\sum_{x = 0}^{\infty} P(X &gt; x)$, which thus must also be equal to $\sum_{x=0}^\infty x P(X = x) = E[X].$</p> <p>The continuous case is analogous.</p> <p>In general, switching the order of summation (as in the proof the OP links to) can always be interpreted as adding row-by-row vs. column-by-column.</p>
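<p>The row-by-row versus column-by-column bookkeeping is easy to verify mechanically for any nonnegative distribution with finite support. A small sketch in exact rational arithmetic (the particular pmf is just an arbitrary example):</p>

```python
from fractions import Fraction

# an arbitrary nonnegative distribution with finite support
pmf = {0: Fraction(1, 6), 1: Fraction(1, 6), 2: Fraction(1, 3), 5: Fraction(1, 3)}
assert sum(pmf.values()) == 1

# column-by-column: E[X] = sum_x x P(X = x)
mean = sum(x * prob for x, prob in pmf.items())

# row-by-row: sum_x P(X > x)
tail_sum = sum(
    sum(prob for y, prob in pmf.items() if y > x) for x in range(max(pmf))
)

print(mean, tail_sum)  # both 5/2
```

<p>Because the support is finite, both sums are finite and the equality is exact, with no limiting argument needed.</p>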
<p>A hint and a proof. </p> <p><em>Hint:</em> if $X=x$ with full probability, the integral is the integral of $1$ on $(0,x)$, hence the LHS and the RHS are both $x$. </p> <p><em>Proof:</em> apply (Tonelli-)Fubini to the function $(\omega,x)\mapsto\mathbf 1_{X(\omega)&gt;x}$ and to the sigma-finite measure $P\otimes\mathrm{Leb}$ on $\Omega\times\mathbb R_+$. One gets $$ \int_\Omega\int_{\mathbb R_+}\mathbf 1_{X(\omega)&gt;x}\mathrm dx\mathrm dP(\omega)=\int_\Omega\int_0^{X(\omega)}\mathrm dx\mathrm dP(\omega)=\int_\Omega X(\omega)\mathrm dP(\omega)=E(X), $$ while, using the shorthand $A_x=\{\omega\in\Omega\mid X(\omega)&gt;x\}$, $$ \int_{\mathbb R_+}\int_\Omega\mathbf 1_{X(\omega)&gt;x}\mathrm dP(\omega)\mathrm dx=\int_{\mathbb R_+}\int_\Omega\mathbf 1_{\omega\in A_x}\mathrm dP(\omega)\mathrm dx=\int_{\mathbb R_+}P(A_x)\mathrm dx=\int_{\mathbb R_+}P(X&gt;x)\mathrm dx. $$</p>
logic
<p>In fact I don't understand the meaning of the word "metamathematics". I just want to know, for example, why can we use mathematical induction in the proof of logical theorems, like the Deduction Theorem, or even some more fundamental proposition like "every formula has equal numbers of left and right brackets"?</p> <p>What exactly can we use when talking about metamathematics? If induction is OK, then how about the axiom of choice/determinacy? Can I use the axiom of choice on a collection of sets of formulas? (Of course it may be meaningless. By the way, I don't understand why we can talk about a "set" of formulas either.)</p> <p>I have asked one of my classmates about these, and he told me he had stopped thinking about this kind of stuff. I feel like giving up too......</p>
<p>This is not an uncommon confusion for students that are introduced to formal logic for the first time. It shows that you have a slightly wrong expectations about what metamathematics is for and what you'll get out of it.</p> <p>You're probably expecting that it <em>ought to</em> go more or less like in first-year real analysis, which started with the lecturer saying something like</p> <blockquote> <p>In high school, your teacher demanded that you take a lot of facts about the real numbers on faith. Here is where we stop taking those facts on faith and instead prove from first principles that they're true.</p> </blockquote> <p>This led to a lot of talk about axioms and painstaking quasi-formal proofs of things you already knew, and at the end of the month you were able to reduce everything to a small set of axioms including something like the supremum principle. Then, if you were lucky, Dedekind cuts or Cauchy sequences were invoked to convince you that if you believe in the counting numbers and a bit of set theory, you should also believe that there is <em>something</em> out there that satisfies the axioms of the real line.</p> <p>This makes it natural to expect that formal logic will work in the same way:</p> <blockquote> <p>As undergraduates, your teachers demanded that you take a lot of proof techniques (such as induction) on faith. Here is where we stop taking them on faith and instead prove from first principles that they're valid.</p> </blockquote> <p>But <em><strong>that is not how it goes</strong></em>. 
You're <em>still</em> expected to believe in ordinary mathematical reasoning for whichever reason you already did -- whether that's because they make intuitive sense to you, or because you find that the conclusions they lead to usually work in practice when you have a chance to verify them, or simply because authority says so.</p> <p>Instead, metamathematics is a quest to be precise about <em>what it is</em> you already believe in, such that we can use <em>ordinary mathematical reasoning</em> <strong>about</strong> those principles to get to know interesting things about the limits of what one can hope to prove and how different choices of what to take on faith lead to different things you can prove.</p> <p>Or, in other words, the task is to use ordinary mathematical reasoning to build a <strong>mathematical model</strong> of ordinary mathematical reasoning itself, which we can use to study it.</p> <p>Since metamathematicians are interested in knowing <em>how much</em> taken-on-faith foundation is necessary for this-or-that ordinary-mathematical argument to be made, they also tend to apply this interest to <em>their own</em> reasoning about the mathematical model. This means they are more likely to try to avoid high-powered reasoning techniques (such as general set theory) when they can -- not because such methods are <em>forbidden</em>, but because it is an interesting fact that they <em>can</em> be avoided for such-and-such purpose.</p> <p>Ultimately though, it is recognized that there are <em>some</em> principles that are so fundamental that we can't really do anything without them. Induction of the natural numbers is one of these. That's not a <em>problem</em>: it is just an interesting (empirical) fact, and after we note down that fact, we go on to use it when building our model of ordinary-mathematical-reasoning.</p> <p>After all, ordinary mathematical reasoning <em>already exists</em> -- and did so for thousands of years before formal logic was invented. 
We're not trying to <em>build</em> it here (the model is not the thing itself), just to better understand the thing we already have.</p> <hr /> <p>To answer your concrete question: Yes, you can (&quot;are allowed to&quot;) use the axiom of choice if you need to. It is good form to keep track of the fact that you <em>have</em> used it, such that you have an answer if you're later asked, &quot;the metamathematical argument you have just carried out, can that itself be formalized in such-and-such system?&quot; Formalizing metamathematical arguments within your model has proved to be a very powerful (though also confusing) way of establishing certain kinds of results.</p> <p>You can use the axiom of determinacy too, if that floats your boat -- so long as you're aware that doing so is not really &quot;ordinary mathematical reasoning&quot;, so it becomes doubly important to disclose faithfully that you've done so when you present your result (lest someone tries to combine it with something <em>they</em> found using AC instead, and get nonsense out of the combination).</p>
<p>This is not at all intended to be an answer to your question. (I like Henning Makholm's answer above.) But I thought you might be interested to hear <a href="https://en.wikipedia.org/wiki/Thoralf_Skolem" rel="noreferrer">Thoralf Skolem's</a> remarks on this issue, because they are quite pertinent—in particular one of his points goes exactly to your question about proving that every formula has equal numbers of left and right brackets—but they are much too long to put in a comment.</p> <blockquote> <p>Set-theoreticians are usually of the opinion that the notion of integer should be defined and that the principle of mathematical induction should be proved. But it is clear that we cannot define or prove ad infinitum; sooner or later we come to something that is not definable or provable. Our only concern, then, should be that the initial foundations be something immediately clear, natural, and not open to question. This condition is satisfied by the notion of integer and by inductive inferences, but it is decidedly not satisfied by <a href="https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory" rel="noreferrer">set-theoretic axioms of the type of Zermelo's</a> or anything else of that kind; if we were to accept the reduction of the former notions to the latter, the set-theoretic notions would have to be simpler than mathematical induction, and reasoning with them less open to question, but this runs entirely counter to the actual state of affairs.</p> <p>In a paper (1922) <a href="https://en.wikipedia.org/wiki/David_Hilbert" rel="noreferrer">Hilbert</a> makes the following remark about <a href="https://en.wikipedia.org/wiki/Henri_Poincar%C3%A9" rel="noreferrer">Poincaré's</a> assertion that the principle of mathematical induction is not provable: “His objection that this principle could not be proved in any way other than by mathematical induction itself is unjustified and is refuted by my theory.” But then the big question is whether we can prove this 
principle by means of simpler principles and <em>without using any property of finite expressions or formulas that in turn rests upon mathematical induction or is equivalent to it</em>. It seems to me that this latter point was not sufficiently taken into consideration by Hilbert. For example, there is in his paper (bottom of page 170), for a lemma, a proof in which he makes use of the fact that in any arithmetic proof in which a certain sign occurs that sign must necessarily occur for a first time. Evident though this property may be on the basis of our perceptual intuition of finite expressions, a formal proof of it can surely be given only by means of mathematical induction. In set theory, at any rate, we go to the trouble of proving that every ordered finite set is well-ordered, that is, that every subset has a first element. Now why should we carefully prove this last proposition, but not the one above, which asserts that the corresponding property holds of finite arithmetic expressions occurring in proofs? Or is the use of this property not equivalent to an inductive inference?</p> <p>I do not go into Hilbert's paper in more detail, especially since I have seen only his first communication. I just want to add the following remark: It is odd to see that, since the attempt to find a foundation for arithmetic in set theory has not been very successful because of logical difficulties inherent in the latter, attempts, and indeed very contrived ones, are now being made to find a different foundation for it—as if arithmetic had not already an adequate foundation in inductive inferences and recursive definitions.</p> </blockquote> <p>(Source: Thoralf Skolem, “Some remarks on axiomatized set theory”, address to the Fifth Congress of Scandinavian Mathematicians, August 1922. English translation in <em>From Frege to Gödel</em>, p299–300. 
Jean van Heijenoort (ed.), Harvard University Press, 1967.)</p> <p>I think it is interesting to read this in the light of Henning Makholm's excellent answer, which I think is in harmony with Skolem's concerns. I don't know if Hilbert replied to Skolem.</p>
logic
<p><em>I'm sorry if this is a duplicate in any way. I doubt it's an original question. Due to my ignorance, it's difficult for me to search for appropriate things.</em></p> <h2>Motivation.</h2> <p>This question is inspired by Exercise 1.2.16 of <a href="http://www.math.wisc.edu/%7Emiller/old/m571-08/simpson.pdf" rel="nofollow noreferrer">these logic notes</a> by S. G. Simpson. Here is a shortened version of that exercise for convenience.</p> <blockquote> <p>Brown, Jones, and Smith are suspected of a crime. They testify as follows:</p> <p>Brown: Jones is guilty and Smith is innocent.</p> <p>Jones: If Brown is guilty then so is Smith.</p> <p>Smith: I’m innocent, but at least one of the others is guilty.</p> <p>a) Are the three testimonies consistent?</p> <p>b) The testimony of one of the suspects follows from that of another. Which from which?</p> <p>c) Assuming everybody is innocent, who committed perjury?</p> <p>d) Assuming all testimony is true, who is innocent and who is guilty?</p> <p>e) Assuming that the innocent told the truth and the guilty told lies, who is innocent and who is guilty?</p> </blockquote> <p>I like to challenge friends and family with similar problems. It's fun to make up scenarios and the solutions are fairly easy to those familiar with basic mathematical logic. <strong>I can vary the testimonies, the number of suspects, the questions about the testimonies, etc.</strong>; it's good stuff.</p> <p>Lately, though, I've wondered what it would mean to have (at least countably) <strong>infinitely many suspects</strong>. 
To make the problem tractable, the <strong>testimonies would need some sort of defining rule</strong> and the <strong>questions ought to address appropriate groups of suspects</strong>.</p> <h2>The Question.</h2> <p>With this in mind, here's my scenario.</p> <blockquote> <p>On the morning of the first night at <a href="http://en.wikipedia.org/wiki/Hilbert%27s_paradox_of_the_Grand_Hotel" rel="nofollow noreferrer">Hilbert's hotel</a>, when all the rooms were taken, the receptionist was found dead at his desk; it looked extremely suspicious. Was he murdered? The police interviewed all the tenants and staff, and concluded that the staff couldn't possibly have been involved in the death. However, the tenants had some interesting testimonies which amounted to the following.</p> <p><span class="math-container">$[\dots ]$</span></p> </blockquote> <p>Okay, so I've given this some thought and I suspect that <strong>the original setup is at least similar to</strong> letting <span class="math-container">$$\begin{align} \text{Brown}&amp;\mapsto [1]_3:=\{n\in\mathbb{N}\mid n\equiv 1\pmod{3}\}, \\ \text{Jones}&amp;\mapsto [2]_3, \\ \text{Smith}&amp;\mapsto [3]_3, \end{align}$$</span> where each number <span class="math-container">$n$</span> represents the tenant in Room <span class="math-container">$n$</span>, then <strong>changing the testimonies accordingly</strong>. (I'll leave that as an exercise for the reader (ha!): this is too long already.)</p> <p>Immediately, I'm reminded of the notion of <a href="http://en.wikipedia.org/wiki/Presentation_of_a_monoid" rel="nofollow noreferrer">presentations</a> and <a href="http://en.wikipedia.org/wiki/Free_object" rel="nofollow noreferrer">freeness</a>. The above <em>smells like</em> a presentation (or perhaps some kind of <a href="http://en.wikipedia.org/wiki/Homomorphism" rel="nofollow noreferrer">homomorphism</a>). I suppose my main bunch of questions here are:</p> <blockquote> <p><strong>What is this thing?
What's a better, more formal way of describing the mathematics behind this scenario? What similar things have been done before?</strong></p> </blockquote> <p>The reason why I included the (number-theory) tag is that I'm curious now as to <strong>what number theoretic problems, if any, can be phrased this way</strong>. (Does that make any sense?)</p> <h2>Thoughts and Clarification.</h2> <p><em>This is based on the comments.</em></p> <p>It's <strong>more about the maps</strong> between the infinite and finite cases.</p> <p>A thorough answer would include a mathematical description of what the infinite case is, a mathematical description of <strong>how the infinite case relates to finite <em>cases</em></strong>, details on what similar things have been done before, and perhaps a number-theoretic problem phrased using the above.</p> <p>One has to take into account negations in the infinite case in such a way that the <strong>structure of the given finite case is preserved</strong>.</p> <p>I suspect that they're just <strong>different models of the same theory</strong>, where the infinite case is in some sense &quot;free&quot;; that maps like the one given above are <strong>somehow related to the notion of a presentation</strong>; and that at least some trivial Number Theory problems can be stated this way.</p> <p>:)</p>
<p>Well, let's look at the structure of the problem:</p> <p>There is a set $S$ of suspects (three in the original problem, a countably infinite number of them in Hilbert's hotel).</p> <p>There's a subset $G\subset S$ of guilty suspects.</p> <p>And there's a mapping $f:S\to P(P(S))$, where $P(S)$ is the power set (set of subsets) of $S$, such that $M\in f(s)$ means "If $s$ says the truth, it is possible that $G=M$". $f(s)$ is specified by a logical form $L_s$, that is, $f(s) = \{M\in P(S): L_s(M)\}$.</p> <p>Now we can formulate the separate questions as follows:</p> <p>a) Is $\bigcap_{s\in S} f(s) \ne \emptyset$?</p> <p>b) For which pairs $(s,t)\in S\times S$ do we have $f(s)\subseteq f(t)$?</p> <p>c) Assuming $G=\emptyset$, what is $\{s\in S: G\notin f(s)\}$?</p> <p>d) What is $\bigcap_{s\in S} f(s)$? (Actually, the question as formulated already presumes that this set has exactly one element; in particular it assumes that the answer to (a) is "yes").</p> <p>e) For which $G$ do we have $G\in\bigcap_{s\in G} (P(S)\setminus f(s)) \cap \bigcap_{s\in S\setminus G} f(s)$?</p> <p>So to generalize the problem to Hilbert's hotel, you have to find a function $f(n)$ specified by a logical formula dependent on $n$ such that $\bigcap_{n\in\mathbb N} f(n)$ has exactly one element, and which (to be a generalization of the original problem) reduces to the original problem when restricted to the set $\{0,1,2\}$.</p> <p>Let's look closer at the original testimonies:</p> <ul> <li>Brown gives an explicit list of who's guilty or innocent.</li> <li>Jones gives an implication connecting the other two.</li> <li>Smith makes a testimony about himself, and the claim that someone is guilty.</li> </ul> <p>Associating $0$ with Brown, $1$ with Jones and $2$ with Smith, we could write those as follows in the formalism derived above, with $S=\{0,1,2\}$: $$\begin{align} f(0) &amp;= \{M\in P(S): 1 \in M \land 2\notin M\}\\ f(1) &amp;= \{M\in P(S): 0 \in M \implies 2\in M\}\\ f(2) &amp;= \{M\in P(S): 2 \notin M \land M\ne\emptyset\}
\end{align}$$</p> <p>There are of course many ways to generalize that, but let's try the following: $$f(n) = \begin{cases} \{M\in P(\mathbb N): \forall m &gt; n, m\in M\iff m \equiv 1\ (\mod 2)\} &amp; n \equiv 0\ (\mod 3)\\ \{M\in P(\mathbb N): \forall i &lt; n, \forall k &gt; n, i\in M\implies k\in M\} &amp; n \equiv 1\ (\mod 3)\\ \{M\in P(\mathbb N): n\notin M\land M\ne\emptyset\} &amp; n\equiv 2\ (\mod 3) \end{cases}$$ However, this gives an inconsistent set of conditions (i.e. $\bigcap_{n\in\mathbb N} f(n)=\emptyset$), since from $f(0)$, one concludes $5\in G$, but from $f(5)$ one concludes $5\notin G$. This is a deviation from the original problem where the statements are indeed consistent.</p> <p>I'm not going to spend the time to actually find a proper generalization now (I already spent far more time on this answer than originally planned), but I think the mathematical concepts involved should now be clear.</p>
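<p>Since the original $S=\{0,1,2\}$ is finite, every one of the questions (a)–(e) reduces to a finite computation over $P(S)$, so the formalism above can be sanity-checked by brute force. A sketch (the three predicates are the ones listed for $f(0)$, $f(1)$, $f(2)$):</p>

```python
from itertools import combinations

S = (0, 1, 2)  # 0 = Brown, 1 = Jones, 2 = Smith
subsets = [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

# f(s) = set of guilty sets M compatible with suspect s's testimony
f = {
    0: {M for M in subsets if 1 in M and 2 not in M},
    1: {M for M in subsets if 0 not in M or 2 in M},
    2: {M for M in subsets if 2 not in M and M},
}

# (a)/(d) consistency: the guilty sets satisfying every testimony
common = f[0] & f[1] & f[2]

# (b) whose testimony entails whose
entails = [(s, t) for s in S for t in S if s != t and f[s] <= f[t]]

# (c) perjurers if nobody is guilty
perjurers = [s for s in S if frozenset() not in f[s]]

# (e) guilty sets G with every innocent truthful and every guilty suspect lying
liars_consistent = [
    G for G in subsets if all((G in f[s]) == (s not in G) for s in S)
]

print(common, entails, perjurers, liars_consistent)
```

<p>This reports $\{1\}$ as the only consistent guilty set (Jones is guilty if all testimony is true), Brown's testimony entailing Smith's, Brown and Smith as the perjurers when everyone is innocent, and $\{0,2\}$ (Brown and Smith guilty, Jones innocent) for part (e), matching the usual answers to the exercise.</p>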
<p>It sounds to me like you are asking about infinitary logic. I've pondered this idea myself a fair bit. For instance, we can make sense of the 'limit object' of this sequence $$ a_1 \wedge a_2, (a_1 \wedge a_2) \wedge a_3, ((a_1 \wedge a_2) \wedge a_3) \wedge a_4, \ldots$$ where $\wedge $ denotes logical and. In this case the limit object has a value of true if and only if every $ a_n $ is true, otherwise it is false. But what if we replace those and's with logical implication, $\Rightarrow $?</p>
linear-algebra
<p>Some days ago, I was thinking on a problem, which states that <span class="math-container">$$AB-BA=I$$</span> does not have a solution in <span class="math-container">$M_{n\times n}(\mathbb R)$</span> and <span class="math-container">$M_{n\times n}(\mathbb C)$</span>. (Here <span class="math-container">$M_{n\times n}(\mathbb F)$</span> denotes the set of all <span class="math-container">$n\times n$</span> matrices with entries from the field <span class="math-container">$\mathbb F$</span> and <span class="math-container">$I$</span> is the identity matrix.)</p> <p>Although I couldn't solve the problem, I came up with this problem:</p> <blockquote> <p>Does there exist a field <span class="math-container">$\mathbb F$</span> for which that equation <span class="math-container">$AB-BA=I$</span> has a solution in <span class="math-container">$M_{n\times n}(\mathbb F)$</span>?</p> </blockquote> <p>I'd really appreciate your help.</p>
<p>Let $k$ be a field. The first Weyl algebra $A_1(k)$ is the free associative $k$-algebra generated by two letters $x$ and $y$ subject to the relation $$xy-yx=1,$$ which is usually called the Heisenberg or Weyl commutation relation. This is an extremely important example of a non-commutative ring which appears in many places, from the algebraic theory of differential operators to quantum physics (the equation above <em>is</em> Heisenberg's indeterminacy principle, in a sense) to the pinnacles of Lie theory to combinatorics to pretty much anything else.</p> <p>For us right now, this algebra shows up because </p> <blockquote> <p>$A_1(k)$-modules are essentially the same thing as solutions to the equation $PQ-QP=I$ with $P$ and $Q$ endomorphisms of a vector space. </p> </blockquote> <p>Indeed:</p> <ul> <li><p>if $M$ is a left $A_1(k)$-module then $M$ is in particular a $k$-vector space and there is a homomorphism of $k$-algebras $\phi_M:A_1(k)\to\hom_k(M,M)$ to the endomorphism algebra of $M$ viewed as a vector space. Since $x$ and $y$ generate the algebra $A_1(k)$, $\phi_M$ is completely determined by the two endomorphisms $P=\phi_M(x)$ and $Q=\phi_M(y)$; moreover, since $\phi_M$ is an algebra homomorphism, we have $PQ-QP=\phi_M(xy-yx)=\phi_M(1_{A_1(k)})=\mathrm{id}_M$. We thus see that $P$ and $Q$ are endomorphisms of the vector space $M$ which satisfy our desired relation.</p></li> <li><p>Conversely, if $M$ is a vector space and $P$, $Q:M\to M$ are two linear endomorphisms, then one can show more or less automatically that there is a unique algebra morphism $\phi_M:A_1(k)\to\hom_k(M,M)$ such that $\phi_M(x)=P$ and $\phi_M(y)=Q$.
This homomorphism turns $M$ into a left $A_1(k)$-module.</p></li> <li><p>These two constructions, one going from an $A_1(k)$-module to a pair $(P,Q)$ of endomorphisms of a vector space $M$ such that $PQ-QP=\mathrm{id}_M$, and the other going the other way, are mutually inverse.</p></li> </ul> <p>A conclusion we get from this is that your question </p> <blockquote> <p>for what fields $k$ do there exist $n\geq1$ and matrices $A$, $B\in M_n(k)$ such that $AB-BA=I$?</p> </blockquote> <p>is essentially equivalent to</p> <blockquote> <p>for what fields $k$ does $A_1(k)$ have finite dimensional modules?</p> </blockquote> <p>Now, it is very easy to see that $A_1(k)$ is an infinite dimensional algebra, and that in fact the set $\{x^iy^j:i,j\geq0\}$ of monomials is a $k$-basis. Two of the key properties of $A_1(k)$ are the following:</p> <blockquote> <p><strong>Theorem.</strong> If $k$ is a field of characteristic zero, then $A_1(k)$ is a simple algebra—that is, $A_1(k)$ does not have any non-zero proper bilateral ideals. Its center is trivial: it is simply the $1$-dimensional subspace spanned by the unit element.</p> </blockquote> <p>An immediate corollary of this is the following</p> <blockquote> <p><strong>Proposition.</strong> If $k$ is a field of characteristic zero, then $A_1(k)$ does not have any non-zero finite dimensional modules. Equivalently, there do not exist $n\geq1$ and a pair of matrices $P$, $Q\in M_n(k)$ such that $PQ-QP=I$.</p> </blockquote> <p><em>Proof.</em> Suppose $M$ is a finite dimensional $A_1(k)$-module. Then we have an algebra homomorphism $\phi:A_1(k)\to\hom_k(M,M)$ such that $\phi(a)(m)=am$ for all $a\in A_1(k)$ and all $m\in M$. Since $A_1(k)$ is infinite dimensional and $\hom_k(M,M)$ is finite dimensional (because $M$ is finite dimensional!) the kernel $I=\ker\phi$ cannot be zero —in fact, it must have finite codimension. Now $I$ is a bilateral ideal, so the theorem implies that it must be equal to $A_1(k)$. 
But then $M$ must be zero dimensional, for $1\in A_1(k)$ acts on it at the same time as the identity and as zero. $\Box$</p> <p>This proposition can also be proved by taking traces, as everyone else has observed on this page, but the fact that $A_1(k)$ is simple is an immensely more powerful piece of knowledge (there are examples of algebras which do not have finite dimensional modules and which are not simple, by the way :) )</p> <p><em>Now let us suppose that $k$ is of characteristic $p&gt;0$.</em> What changes in terms of the algebra? The most significant change is </p> <blockquote> <p><strong>Observation.</strong> The algebra $A_1(k)$ is not simple. Its center $Z$ is generated by the elements $x^p$ and $y^p$, which are algebraically independent, so that $Z$ is in fact isomorphic to a polynomial ring in two variables. We can write $Z=k[x^p,y^p]$.</p> </blockquote> <p>In fact, once we notice that $x^p$ and $y^p$ are central elements —and this is proved by a straightforward computation— it is easy to write down non-trivial bilateral ideals. For example, $(x^p)$ works; the key point in showing this is the fact that since $x^p$ is central, the <em>left</em> ideal which it generates coincides with the <em>bilateral</em> ideal, and it is very easy to see that the <em>left</em> ideal is proper and non-zero.</p> <p>Moreover, a little playing with this will give us the following. Not only does $A_1(k)$ have bilateral ideals: it has bilateral ideals of <em>finite codimension</em>. For example, the ideal $(x^p,y^p)$ is easily seen to have codimension $p^2$; more generally, we can pick two scalars $a$, $b\in k$ and consider the ideal $I_{a,b}=(x^p-a,y^p-b)$, which has the same codimension $p^2$. Now this got rid of the obstruction to finding finite-dimensional modules that we had in the characteristic zero case, so we can hope for finite dimensional modules now!</p> <p>More: this actually gives us a method to produce pairs of matrices satisfying the Heisenberg relation. 
We can just pick a proper bilateral ideal $I\subseteq A_1(k)$ of finite codimension, consider the finite dimensional $k$-algebra $B=A_1(k)/I$ and look for finitely generated $B$-modules: every such module provides us with a finite dimensional $A_1(k)$-module and the observations above produce from it pairs of matrices which are related in the way we want.</p> <p>So let us do this explicitly in the simplest case: let us suppose that $k$ is algebraically closed, let $a$, $b\in k$ and let $I=I_{a,b}=(x^p-a,y^p-b)$. The algebra $B=A_1(k)/I$ has dimension $p^2$, with $\{x^iy^j:0\leq i,j&lt;p\}$ as a basis. The exact same proof that the Weyl algebra is simple when the ground field is of characteristic zero proves that $B$ is simple, and in the same way the same proof that proves that the center of the Weyl algebra is trivial in characteristic zero shows that the center of $B$ is $k$; going from $A_1(k)$ to $B$ we have modded out the obstruction to carrying out these proofs. In other words, the algebra $B$ is what's called a (finite dimensional) central simple algebra. Wedderburn's theorem now implies that in fact $B\cong M_p(k)$, as this is the only semisimple algebra of dimension $p^2$ with trivial center. A consequence of this is that there is a unique (up to isomorphism) simple $B$-module $S$, of dimension $p$, and that all other finite dimensional $B$-modules are direct sums of copies of $S$. </p> <p>Now, since $k$ is algebraically closed (much less would suffice) there is an $\alpha\in k$ such that $\alpha^p=-a$. 
Let $V=k^p$ and consider the $p\times p$-matrices $$Q=\begin{pmatrix}0&amp;&amp;&amp;&amp;b\\1&amp;0\\&amp;1&amp;0\\&amp;&amp;1&amp;0\\&amp;&amp;&amp;\ddots&amp;\ddots\end{pmatrix}$$ which is all zeroes except for $1$s in the first subdiagonal and a $b$ on the top right corner, and $$P=\begin{pmatrix}-\alpha&amp;1\\&amp;-\alpha&amp;2\\&amp;&amp;-\alpha&amp;3\\&amp;&amp;&amp;\ddots&amp;\ddots\\&amp;&amp;&amp;&amp;-\alpha&amp;p-1\\&amp;&amp;&amp;&amp;&amp;-\alpha\end{pmatrix}.$$ One can show that $P^p=aI$, $Q^p=bI$ and that $PQ-QP=I$, so they provide us with a morphism of algebras $B\to\hom_k(k^p,k^p)$, that is, they turn $k^p$ into a $B$-module. It <em>must</em> be isomorphic to $S$, because the two have the same dimension and there is only one module of that dimension; this determines <em>all</em> finite dimensional modules, which are direct sums of copies of $S$, as we said above.</p> <p>This generalizes the example Henning gave, and in fact one can show that <em>all</em> $p$-dimensional $A_1(k)$-modules can be constructed by this procedure, from quotients by ideals of the form $I_{a,b}$. Doing direct sums for various choices of $a$ and $b$, this gives us lots of finite dimensional $A_1(k)$-modules and, in turn, lots of pairs of matrices satisfying the Heisenberg relation. I think we obtain in this way all the semisimple finite dimensional $A_1(k)$-modules but I would need to think a bit before claiming it for certain.</p> <p>Of course, this only deals with the simplest case. The algebra $A_1(k)$ has non-semisimple finite-dimensional quotients, which are rather complicated (and I think there are plenty of wild algebras among them...) so one can get many, many more examples of modules and of pairs of matrices.</p>
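<p>For the skeptical reader, the relations $PQ-QP=I$, $P^p=aI$, $Q^p=bI$ can be checked mechanically over the prime field $\mathbb F_p$ (a Python sketch with hand-rolled mod-$p$ arithmetic; note that $P^p=aI$ requires $\alpha^p=-a$, and inside $\mathbb F_p$ itself the choice $\alpha=-a$ works, since $\alpha^p=\alpha$ by Fermat's little theorem; all names below are mine):</p>

```python
p, a, b = 5, 2, 3
# alpha with alpha^p = -a: in F_p, alpha = -a works (Fermat).
alpha = (-a) % p

def mat(f):                     # build a p x p matrix over F_p
    return [[f(i, j) % p for j in range(p)] for i in range(p)]

def mul(A, B):                  # matrix product mod p
    return [[sum(A[i][k] * B[k][j] for k in range(p)) % p
             for j in range(p)] for i in range(p)]

def mpow(A, e):                 # e-th power mod p
    R = mat(lambda i, j: int(i == j))
    for _ in range(e):
        R = mul(R, A)
    return R

# P: -alpha on the diagonal, 1, 2, ..., p-1 on the superdiagonal.
P = mat(lambda i, j: -alpha if i == j else (j if j == i + 1 else 0))
# Q: ones on the subdiagonal, b in the top right corner.
Q = mat(lambda i, j: b if (i, j) == (0, p - 1) else int(i == j + 1))
I = mat(lambda i, j: int(i == j))

PQ, QP = mul(P, Q), mul(Q, P)
assert mat(lambda i, j: PQ[i][j] - QP[i][j]) == I    # PQ - QP = I
assert mpow(P, p) == mat(lambda i, j: a * (i == j))  # P^p = a I
assert mpow(Q, p) == mat(lambda i, j: b * (i == j))  # Q^p = b I
```

<p>The same check goes through for other choices of $p$, $a$ and $b$ in the prime field.</p>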
<p>As noted in the comments, this is impossible in characteristic 0.</p> <p>But $M_{2\times 2}(\mathbb F_2)$ contains the example $\pmatrix{0&amp;1\\0&amp;0}, \pmatrix{0&amp;1\\1&amp;0}$.</p> <p>In general, in characteristic $p$, we can use the $p\times p$ matrices $$\pmatrix{0&amp;1\\&amp;0&amp;2\\&amp;&amp;\ddots&amp;\ddots\\&amp;&amp;&amp;0&amp;p-1\\&amp;&amp;&amp;&amp;0}, \pmatrix{0\\1&amp;0\\&amp;\ddots&amp;\ddots\\&amp;&amp;1&amp;0\\&amp;&amp;&amp;1&amp;0}$$ which works even over general unital rings of finite characteristic.</p>
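<p>A quick mechanical check that the commutator of these two matrices really is the identity mod $p$ (a Python sketch; the helper names are mine):</p>

```python
def commutator_mod(p):
    """DS - SD reduced mod p, where D has 1, 2, ..., p-1 on the
    superdiagonal and S has ones on the subdiagonal."""
    D = [[(j if j == i + 1 else 0) for j in range(p)] for i in range(p)]
    S = [[int(i == j + 1) for j in range(p)] for i in range(p)]
    mul = lambda A, B: [[sum(A[i][k] * B[k][j] for k in range(p)) % p
                         for j in range(p)] for i in range(p)]
    DS, SD = mul(D, S), mul(S, D)
    return [[(DS[i][j] - SD[i][j]) % p for j in range(p)] for i in range(p)]

for p in (2, 3, 5, 7, 11):
    identity = [[int(i == j) for j in range(p)] for i in range(p)]
    assert commutator_mod(p) == identity
```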
linear-algebra
<p>What is the "standard basis" for fields of complex numbers?</p> <p>For example, what is the standard basis for $\Bbb C^2$ (two-tuples of the form: $(a + bi, c + di)$)? I know the standard for $\Bbb R^2$ is $((1, 0), (0, 1))$. Is the standard basis exactly the same for complex numbers?</p> <p><strong>P.S.</strong> - I realize this question is very simplistic, but I couldn't find an authoritative answer online.</p>
<p>Just to be clear, by definition, a vector space always comes along with a field of scalars $F$. It's common just to talk about a "vector space" and a "basis"; but if there is possible doubt about the field of scalars, it's better to talk about a "vector space over $F$" and a "basis over $F$" (or an "$F$-vector space" and an "$F$-basis").</p> <p>Your example, $\mathbb{C}^2$, is a 2-dimensional vector space over $\mathbb{C}$, and the simplest choice of a $\mathbb{C}$-basis is $\{ (1,0), (0,1) \}$.</p> <p>However, $\mathbb{C}^2$ is also a vector space over $\mathbb{R}$. When we view $\mathbb{C}^2$ as an $\mathbb{R}$-vector space, it has dimension 4, and the simplest choice of an $\mathbb{R}$-basis is $\{(1,0), (i,0), (0,1), (0,i)\}$.</p> <p>Here's another intersting example, though I'm pretty sure it's not what you were asking about:</p> <p>We can view $\mathbb{C}^2$ as a vector space over $\mathbb{Q}$. (You can work through the definition of a vector space to prove this is true.) As a $\mathbb{Q}$-vector space, $\mathbb{C}^2$ is infinite-dimensional, and you can't write down any nice basis. (The existence of the $\mathbb{Q}$-basis depends on the axiom of choice.)</p>
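<p>To make the $\mathbb{R}$-basis concrete: flatten each vector $(a+bi,c+di)\in\mathbb{C}^2$ into $(a,b,c,d)\in\mathbb{R}^4$ and check that the four vectors above are linearly independent over $\mathbb{R}$ (a NumPy sketch; the helper name is mine):</p>

```python
import numpy as np

def to_R4(v):
    """View (a+bi, c+di) in C^2 as (a, b, c, d) in R^4."""
    return np.array([v[0].real, v[0].imag, v[1].real, v[1].imag])

# The R-basis {(1,0), (i,0), (0,1), (0,i)} of C^2.
basis = [np.array([1, 0], complex), np.array([1j, 0], complex),
         np.array([0, 1], complex), np.array([0, 1j], complex)]
M = np.column_stack([to_R4(e) for e in basis])
assert np.linalg.matrix_rank(M) == 4        # it really is a basis over R

v = np.array([2 + 3j, -1 + 0.5j])
coords = np.linalg.solve(M, to_R4(v))       # real coordinates of v
assert np.allclose(coords, [2, 3, -1, 0.5])
```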
<p>The "most standard" basis is also $\left\lbrace(1,0),\, (0,1)\right\rbrace$. You just take complex combinations of these vectors. Simple :)</p>
probability
<p>You play a game flipping a fair coin. You may stop after any trial, at which point you are paid in dollars the percentage of heads flipped. So if on the first trial you flip a head, you should stop and earn \$100 because you have 100% heads. If you flip a tail then a head, you could either stop and earn \$50, or continue on, hoping the ratio will exceed 1/2. This second strategy is superior.</p> <p>A paper by Medina and Zeilberger (<a href="http://arxiv.org/abs/0907.0032" rel="noreferrer">arXiv:0907.0032v2 [math.PR]</a>) says that it is an unsolved problem to determine if it is better to continue or stop after you have flipped 5 heads in 8 trials: accept \$62.50 or hope for more. It is easy to simulate this problem and it is clear from even limited experimental data that it is better to continue (perhaps more than 70% chance you'll improve over \$62.50): <br /> <img src="https://i.sstatic.net/ZVHVw.jpg" alt="alt text"> <br /> My question is basically: Why is this difficult to prove? Presumably it is not that difficult to write out an expression for the expectation of exceeding 5/8 in terms of the cumulative binomial distribution. <hr /> (<em>5 Dec 2013</em>). A paper on this topic was just published:</p> <p>Olle Häggström, Johan Wästlund. "Rigorous computer analysis of the Chow-Robbins game." (pre-journal <a href="http://arxiv.org/abs/1201.0626" rel="noreferrer">arXiv link</a>). <em>The American Mathematical Monthly</em>, Vol. 120, No. 10, December 2013. (<a href="http://www.jstor.org/discover/10.4169/amer.math.monthly.120.10.893?uid=3739256&amp;uid=2&amp;uid=4&amp;sid=21103072731797" rel="noreferrer">Jstor link</a>). From the Abstract:</p> <blockquote> <p>"In particular, we confirm that with 5 heads and 3 tails, stopping is optimal." </p> </blockquote>
<p>I accept Qiaochu's answer "Have you tried actually doing that?" I did try now, and now I can appreciate the challenge. :-) The paper I cited refers to another by Chow and Robbins from 1965 that has a beautiful formulation, much clearer than the cummulative binomials with which I struggled. Let me explain it, because it is really cool.</p> <p>For the natural strategy I mentioned in the comments (and echoed by Raynos), let $f(n,h,t)$ be the expected payoff if you start with $h$ heads and $t$ tails, and let the game continue no more than $n$ further trials. Then there is an easy recursive formulation for $f$: $$f(n,h,t) = \max \left( \frac{1}{2} f(n,h+1,t) + \frac{1}{2} f(n,h,t+1) , h/(h+t) \right) $$ because you have an equal chance of increasing to $h+1$ heads or to $t+1$ tails on the next flip if you continue, and you get the current ratio if you stop. Now, when $h+t=n$, you need to make a "boundary" assumption. Assuming the law of large numbers for large $n$ leads to the reasonable value $\max ( 1/2, h/n )$ in this case. So now all you need to do is fill out the table for all $h$ and $t$ values up to $n=h+t$. The real answer is the limit when $n \rightarrow \infty$, but using large $n$ approximates this limit.</p> <p>After the Medina and Zeilberger paper was released, in fact just about three weeks ago, a very careful calculation using the above recursive formulation was made by Julian Wiseman and reported on <a href="http://www.jdawiseman.com/papers/easymath/chow_robbins.html">this web page</a>. The conclusion is to me remarkable: "Choosing to continue lowers the expected payoff [from 0.625] to 0.62361957757." This is still not a proof, but the "answer" is now known. So my "it is clear from even limited experimental data that" was completely wrong! I am delighted to learn from my mistake. </p>
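<p>For anyone who wants to reproduce the computation, the recursion above translates directly into a backward dynamic program (a Python sketch; the boundary rule $\max(1/2,h/n)$ at $h+t=n$ is the heuristic assumption discussed above, and the function name is mine):</p>

```python
def chow_robbins_value(h0, t0, n):
    """Finite-horizon approximation f(n, h0, t0): expected payoff
    starting from h0 heads and t0 tails, with at most n flips total
    and boundary value max(1/2, h/n) on the layer h + t = n."""
    layer = [max(0.5, h / n) for h in range(n + 1)]   # values at h + t = n
    for s in range(n - 1, h0 + t0 - 1, -1):           # work backwards
        layer = [max(0.5 * layer[h + 1] + 0.5 * layer[h],  # continue
                     h / s if s > 0 else 0.0)              # stop
                 for h in range(s + 1)]
    return layer[h0]
```

<p>With a horizon of a few hundred flips, the value at $5$ heads and $3$ tails comes out as $0.625$: the computed continuation value stays below $5/8$, consistent with Wiseman's figure of $0.62361957757$.</p>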
<p>This seems to be related to Gittins Indices. Gittins Indices are a way of solving these kind of optimal stopping problems for some classes of problems, and basically give you a way of balancing how much you are expected to gain given your current knowledge and how much more you <em>could</em> gain by risking obtaining more information about the process (or probability of flipping heads, etc).</p> <p>Bruno</p>
logic
<p>In Tao's book <em>Analysis 1</em>, he writes:</p> <blockquote> <p>Thus, from the point of view of logic, we can define equality on a [remark by myself: I think he forgot the word "type of object" here] however we please, so long as it obeys the reflexive, symmetry, and transitive axioms, and it is consistent with all other operations on the class of objects under discussion in the sense that the substitution axiom was true for all of those operations.</p> </blockquote> <p>Does he mean that, if one wants to define equality on a specific type of object (like functions or ordered pairs, for example), one has to check that these axioms of equality (he refers to these four axioms of equality as "symmetry", "reflexivity", "transitivity", and "substitution") hold in the sense that one has to prove them? It seems so, because of these two passages:</p> <blockquote> <p>[In section 3.3 Functions] We observe that functions obey the axiom of substitution: if $x=x'$, then $f(x) = f(x')$ (why?).</p> </blockquote> <p>(My answer would be "because that's an axiom", but Tao apparently wouldn't accept that.)</p> <p>And after defining equality of sets ($A=B:\iff \forall x(x\in A\iff x\in B)$), Tao writes (on page 39):</p> <blockquote> <p>One can easily verify that this notion of equality is reflexive, symmetric, and transitive (Exercise 3.1.1). Observe that if $x\in A$ and $A = B$, then $x\in B$, by Definition 3.1.4. Thus the "is an element of" relation $\in $ obeys the axiom of substitution</p> </blockquote> <p>So he gives the exercise to <em>prove</em> the axioms of equality for sets. Why does one have to prove axioms? Or, put differently: if one can prove these things, why does he state them as axioms?</p>
<p>I believe Tao means that the axioms of reflexivity, symmetry, and transitivity are adequate to capture our <em>pre-existing</em> (this is the key) intuition about what "equality" between two objects ought to be. Let me try two contrasting examples to help unpack what I mean.</p> <p><strong>Version 1</strong></p> <p>You: Sets $A$ and $B$ are <strong>shmequal</strong> provided $x \in A \Leftrightarrow x \in B$ for all $x$.</p> <p>Me: That sounds like a fine relation to investigate. Creative name, by the way.</p> <p><strong>Version 2</strong></p> <p>You: Sets $A$ and $B$ are <strong>equal</strong> provided $x \in A \Leftrightarrow x \in B$ for all $x$.</p> <p>Me: Now, hold on just a second. By "equal", you mean "identical" or "exactly the same"? I'm not sure I'm ready to accept that this abstract definition captures all that. You would need to show me that the relation $x \in A \Leftrightarrow x \in B$ for all $x$ is reflexive, symmetric, and transitive before I'm willing to concede that this deserves a name like "equal".</p> <hr> <p><strong>Comment from OP</strong></p> <p>An example came to mind: we define equality for ordered pairs: $(x,y)=(a,b)⟺x=a∧y=b$. To show that equality for ordered pairs is reflexive we need to show $(x,y)=(x,y)$, which by definition means $x=x∧y=y$. But $x$ and $y$ could itself be ordered pairs. So now we are in the situation where we have to prove that every ordered pair is equal to itself but where we also have to accept this as given.</p> <p><strong>A weak response from me</strong></p> <p>I struggle to find good words to address your question, but it might help to remember that we are not checking <em>whether</em> $(x,y) = (x,y)$. Rather this is one of the things we insist should be the case if "equality" is to mean anything; it must apply to identical objects. Instead, we are asking "For ordered pairs, does the $x = a \wedge y = b$ property capture this self-evident truth about equality?". 
We find that it does: $x = x$ and $y = y$ are both true statements because we are comparing two identical objects in each case.</p>
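<p>For a finite universe one can even check the content of Exercise 3.1.1 mechanically: take Tao's membership-based definition of set equality as primitive and verify reflexivity, symmetry, and transitivity by brute force (a small Python sketch over a toy universe):</p>

```python
from itertools import combinations

universe = range(5)
# all subsets of the universe
S = [frozenset(c) for r in range(6) for c in combinations(universe, r)]

def eq(A, B):
    """Tao's definition: A = B iff (x in A <=> x in B) for all x."""
    return all((x in A) == (x in B) for x in universe)

assert all(eq(A, A) for A in S)                          # reflexivity
assert all(eq(B, A) for A in S for B in S if eq(A, B))   # symmetry
assert all(eq(A, C) for A in S for B in S for C in S
           if eq(A, B) and eq(B, C))                     # transitivity
# ...and eq agrees with the built-in extensional equality of sets:
assert all(eq(A, B) == (A == B) for A in S for B in S)
```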
<p>You are probably confused because you think that axioms are (by definition) statements that we take as true without any proofs. However, this word has a slightly different meaning.</p> <p>Axioms are a <strong>starting point</strong> of a mathematical theory. When you build a theory, for example, Arithmetic, from scratch, you need some preliminary facts, otherwise you cannot prove anything. In Arithmetic and a bunch of other mathematical theories the described properties of equality are indeed axioms, that one does not prove. Equality is a <strong>primitive notion</strong>, and the only sensible way to actually <strong>define</strong> it is to postulate that these natural (as it seems to us humans) properties hold.</p> <p>However, in set theory, these "axioms" are not the definition of equality. Rather, equality is defined via the formula above: two sets are equal when they consist of identical elements. But when we define equality in this way, there is a natural question: why are we naming this as "equality" at all? This is why we prove "axioms of equality", which we are already used to, to show that the naming "equality" is adequate. And when we prove them, they become <strong>theorems</strong> of set theory and properties of equality rather than axioms. This is because set theory is more fundamental and more powerful than most mathematical theories in the sense that you can build (almost) all mathematics based on it.</p>
linear-algebra
<p>Prove <span class="math-container">$$\det \left( e^A \right) = e^{\operatorname{tr}(A)}$$</span> for all matrices <span class="math-container">$A \in \mathbb{C}^{n \times n}$</span>.</p>
<p>Both sides are continuous. A standard proof goes by showing this for diagonalizable matrices, and then using their density in <span class="math-container">$M_n(\mathbb{C})$</span>.</p> <p>But actually, it suffices to triangularize <span class="math-container">$$ A=P^{-1}TP $$</span> with <span class="math-container">$P$</span> invertible and <span class="math-container">$T$</span> upper-triangular. This is possible as soon as the characteristic polynomial splits, which is obviously the case in <span class="math-container">$\mathbb{C}$</span>.</p> <p>Let <span class="math-container">$\lambda_1,\ldots,\lambda_n$</span> be the eigenvalues of <span class="math-container">$A$</span>.</p> <p>Observe that each <span class="math-container">$T^k$</span> is upper-triangular with <span class="math-container">$\lambda_1^k,\ldots,\lambda_n^k$</span> on the diagonal. It follows that <span class="math-container">$e^T$</span> is upper triangular with <span class="math-container">$e^{\lambda_1},\ldots,e^{\lambda_n}$</span> on the diagonal. So <span class="math-container">$$ \det e^T=e^{\lambda_1}\cdots e^{\lambda_n}=e^{\lambda_1+\ldots+\lambda_n}=e^{\mbox{tr}\;T} $$</span></p> <p>Finally, observe that <span class="math-container">$\mbox{tr} \;A=\mbox{tr}\;T$</span>, and that <span class="math-container">$P^{-1}T^kP=A^k$</span> for all <span class="math-container">$k$</span>, so <span class="math-container">$$P^{-1}e^TP=e^A\qquad \Rightarrow\qquad \det (e^T)=\det (P^{-1}e^TP)=\det(e^A).$$</span></p>
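<p>A quick numerical sanity check of the identity (a NumPy sketch; the matrix exponential is computed by a truncated Taylor series, which is adequate for a matrix of small norm):</p>

```python
import numpy as np

def expm_taylor(A, terms=60):
    """Truncated Taylor series for e^A (fine for small-norm A)."""
    E = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    return E

rng = np.random.default_rng(0)
A = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / 4

lhs = np.linalg.det(expm_taylor(A))   # det(e^A)
rhs = np.exp(np.trace(A))             # e^{tr A}
assert abs(lhs - rhs) < 1e-8
```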
<p>Hint: Use that every complex matrix has a Jordan normal form and that the determinant of a triangular matrix is the product of the diagonal.</p> <p>Use that <span class="math-container">$\exp(A)=\exp(S J S^{-1}) = S \exp(J) S^{-1}$</span></p> <p>And that the trace doesn't change under similarity transformations.</p> <p><span class="math-container">\begin{align*} \det(\exp(A))&amp;=\det(\exp(S J S^{-1}))\\ &amp;=\det(S \exp(J) S^{-1})\\ &amp;=\det(S) \det(\exp(J)) \det (S^{-1})\\ &amp;=\det(\exp (J))\\ &amp;=\prod_{i=1}^n \exp(j_{ii})\\ &amp;=\exp(\sum_{i=1}^n{j_{ii}})\\ &amp;=\exp(\text{tr}J) \end{align*}</span></p>
matrices
<p>I am looking for an intuitive explanation as to why/how row rank of a matrix = column rank. I've read the <a href="https://en.wikipedia.org/wiki/Rank_(linear_algebra)#Proofs_that_column_rank_=_row_rank" rel="noreferrer">proof on Wikipedia</a> and I understand the proof, but I don't &quot;get it&quot;. Can someone help me out with this ?</p> <p>I find it hard to wrap my head around the idea of how the column space and the row space is related at a fundamental level.</p>
<p>You can apply elementary row operations and elementary column operations to bring a matrix <span class="math-container">$A$</span> to a matrix that is in <strong>both</strong> row reduced echelon form and column reduced echelon form. In other words, there exist invertible matrices <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> (which are products of elementary matrices) such that <span class="math-container">$$PAQ=E:=\begin{pmatrix}I_k\\&amp;0_{(n-k)\times(n-k)}\end{pmatrix}.$$</span> As <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> are invertible, the maximum number of linearly independent rows in <span class="math-container">$A$</span> is equal to the maximum number of linearly independent rows in <span class="math-container">$E$</span>. That is, the row rank of <span class="math-container">$A$</span> is equal to the row rank of <span class="math-container">$E$</span>. Similarly for the column ranks. Now it is evident that the row rank and column rank of <span class="math-container">$E$</span> are identical (to <span class="math-container">$k$</span>). Hence the same holds for <span class="math-container">$A$</span>.</p>
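<p>The equality can also be observed numerically: a product $A=BC$ with $B$ of shape $n\times k$ and $C$ of shape $k\times m$ generically has row rank and column rank both equal to $k$ (a NumPy sketch; <code>matrix_rank</code> computes the rank from the SVD):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
for n, m, k in [(6, 4, 2), (5, 7, 3), (8, 8, 5)]:
    # A = B @ C has rank k (generically), whatever the shape n x m.
    A = rng.standard_normal((n, k)) @ rng.standard_normal((k, m))
    col_rank = np.linalg.matrix_rank(A)    # max. independent columns
    row_rank = np.linalg.matrix_rank(A.T)  # max. independent rows
    assert col_rank == row_rank == k
```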
<p>This post is quite old, so my answer might come a bit late. If you are looking for an intuition (you want to "get it") rather than a demonstration (of which there are several), then here is my 5c.</p> <p>If you think of a matrix A in the context of solving a system of simultaneous equations, then the row-rank of the matrix is the number of independent equations, and the column-rank of the matrix is the number of independent parameters that you can estimate from the equation. That I think makes it a bit easier to see why they should be equal.</p> <p>Saad.</p>
geometry
<p>I have a normalized $3D$ vector giving a direction and an angle that forms a cone around it, something like this:</p> <p><img src="https://i.sstatic.net/wsIpF.png" alt="Direction cone"></p> <p>I'd like to generate a random, uniformly distributed normalized vector for a direction within that cone. I would also like to support angles greater than pi (but lower or equal to $2\pi$), at which point the shape becomes more like a sphere from which a cone was removed. How can I proceed?</p> <p>I thought about the following steps, but my implementation did not seem to work:</p> <ul> <li>Find a vector normal to the cone axis vector (by crossing the cone axis vector with the cardinal axis that corresponds with the cone axis vector component nearest to zero, ex: $[1 0 0]$ for $[-1 5 -10]$)</li> <li>Find a second normal vector using a cross product</li> <li>Generate a random angle between $[-\pi, \pi]$</li> <li>Rotate use the two normal vectors as a $2D$ coordinate system to create a new vector at the angle previously generated</li> <li>Generate a random displacement value between $[0, \tan(\theta)]$ and square root it (to normalize distribution like for points in a circle)</li> <li>Normalize the sum of the cone axis vector with the random normal vector times the displacement value to get the final direction vector</li> </ul> <p>[edit] After further thinking, I'm not sure that method would work with theta angles greater or equal to pi. Alternative methods are very much welcome.</p>
<p>I'm surprised how many bad, suboptimal and/or overcomplicated answers this question has inspired, when there's a fairly simple solution; and that the only answer that mentions and uses the most relevant fact, Christian Blatter's, didn't have a single upvote before I just upvoted it.</p> <p>The $2$-sphere is unique in that slices of equal height have equal surface area. That is, to sample points on the unit sphere uniformly, you can sample $z$ uniformly on $[-1,1]$ and $\phi$ uniformly on $[0,2\pi)$. If your cone were centred around the north pole, the angle $\theta$ would define a minimal $z$ coordinate $\cos\theta$, and you could sample $z$ uniformly on $[\cos\theta,1]$ and $\phi$ uniformly on $[0,2\pi)$ to obtain the vector $(\sqrt{1-z^2}\cos\phi,\sqrt{1-z^2}\sin\phi,z)$ uniformly distributed as required.</p> <p>So all you have to do is generate such a vector and then rotate the north pole to the centre of your cone. If the cone is already thus centred, you're done; if it's centred on the south pole, just invert the vector; otherwise, take the cross product of the cone's axis with $(0,0,1)$ to get the direction of the rotation axis, and the scalar product to get the cosine of the rotation angle. Or if you prefer you can apply your idea of generating two orthogonal vectors, in the manner Christian described.</p>
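<p>For concreteness, here is a short Python/NumPy sketch of exactly this recipe: sample $z$ uniformly on $[\cos\theta,1]$ and $\phi$ uniformly on $[0,2\pi)$, then rotate the north pole onto the cone axis (the function name and the use of Rodrigues' rotation formula are my own choices):</p>

```python
import numpy as np

def sample_in_cone(axis, theta_max, rng):
    """Uniformly distributed unit vector within the cone of half-angle
    theta_max (0 < theta_max <= pi) around `axis`, via the equal-area
    property of the sphere."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    z = rng.uniform(np.cos(theta_max), 1.0)        # uniform in height
    phi = rng.uniform(0.0, 2.0 * np.pi)
    r = np.sqrt(1.0 - z * z)
    v = np.array([r * np.cos(phi), r * np.sin(phi), z])  # cone about +z
    # Rotate the north pole onto `axis` (Rodrigues' rotation formula).
    k = np.cross([0.0, 0.0, 1.0], axis)
    s, c = np.linalg.norm(k), axis[2]              # sin and cos of the tilt
    if s < 1e-12:                                  # axis is already +/- z
        return v if c > 0 else -v
    k /= s
    return v * c + np.cross(k, v) * s + k * np.dot(k, v) * (1.0 - c)
```

<p>Every sample is a unit vector whose angle with the axis is at most $\theta$, and the construction works for half-angles larger than $\pi/2$ as well.</p>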
<p>Your first four bulleted points are absolutely correct as stated. So you now have two mutually orthogonal unit vectors ${\bf u}$, ${\bf v}$, both of them orthogonal to the given axis ${\bf a}$ of the cone, where $|{\bf a}|=1$. The random unit vector within the cone will be a vector ${\bf x}$ of the form $${\bf x}=\sin\theta\bigl(\cos\phi\,{\bf u}+\sin\phi\, {\bf v}\bigr)+\cos\theta\,{\bf a}\ .$$ Here $\phi$ has to be chosen uniformly in $[-\pi,\pi]$, and $\theta$ has to be chosen according to some distribution yet to be determined in the interval $[0,\theta_0]$, where $\theta_0$ is the angle denoted by $\theta$ in your figure. Note that $\theta_0$ is restricted to the interval $[0,\pi]$, not to $[0,2\pi]$ as indicated in your question.</p> <p>Now we want the vectors ${\bf x}$ to be equidistributed on the spherical cap given by $0\leq\theta\leq\theta_0$, where "equidistribution" refers to the area measure on the sphere $S^2$. </p> <p>Here the following elementary fact comes to our help: When a cylinder $C$ of height $2$ is wrapped around the equator of a unit sphere then any two planes orthogonal to the axis of the cylinder determine a "spherical annulus" and a cylindrical annulus, and <em>the areas of these two annuli coincide</em>. This implies that points uniformly distributed on the cylinder $C$ "project" to points uniformly distributed on the sphere $S^2$.</p> <p>This observation boils down to the following recipe: Choose $z$ uniformly distributed on the interval $[\cos\theta_0,1]$, and put $\theta:=\arccos(z)$. Furthermore let $\phi$ be uniformly distributed on $[0,2\pi]$. Then the point $${\bf x}=\sin\theta\bigl(\cos\phi\,{\bf u}+\sin\phi\, {\bf v}\bigr)+\cos\theta\,{\bf a}$$ will be uniformly distributed on the spherical cap in question.</p>
probability
<p>The question is quite straightforward... I'm not very good at this subject, but I need to understand it at a good level.</p>
<p>For probability theory as probability theory (rather than normed measure theory ala Kolmogorov) I'm quite partial to <a href="http://rads.stackoverflow.com/amzn/click/0521592712">Jaynes's Probability Theory: The Logic of Science</a>. It's fantastic at building intuition behind the rules and operations. That said, this has the downside of creating fanatics who think they know all there is to know about probability theory.</p>
<p><em><a href="http://rads.stackoverflow.com/amzn/click/0131856626">A First Course in Probability</a></em> by Sheldon Ross is good.</p>
linear-algebra
<p>Say we have an $n\times m$ matrix $X$. What are the specific properties that $X$ must have so that $A=X^TX$ invertible?</p> <p>I know that when the rows and columns are independent, then matrix $A$ (which is square) would be invertible and would have a non-zero determinant. However, what confuses me is, what sort of conditions must we have on each row of $X$ such that $A$ would be invertible. </p> <p>It would be very nice to have a solution of the form:</p> <ol> <li>when $n &gt; m$ then $X$ must have...</li> <li>when $n &lt; m$ then $X$ must have...</li> <li>when $n = m$ then $X$ must have...</li> </ol> <p>I think in the 3rd case we just need $X$ to be invertible but I was unsure of the other two cases.</p>
<p>Precisely when the rank of $X$ is $m$ (which forces $n\geq m$). </p> <p>The key observation is that for $v\in\mathbb R^m$, $Xv=0$ if and only if $X^TXv=0$. For the non-trivial implication, if $X^TXv=0$, then $v^TX^TXv=0$, that is $(Xv)^TXv=0$, which implies that $Xv=0$.</p> <p>If the rank of $X$ is $m$, this means that $X$ is one-to-one when acting on $\mathbb R^m$. So by the observation, $X^TX$ is one-to-one, which makes it invertible (as it is square). </p> <p>Conversely, if the rank of $X$ is less than $m$, there exists $v\in\mathbb R^m$ with $Xv=0$. Then $X^TXv=0$, and $X^TX$ cannot be invertible. </p>
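<p>A quick NumPy illustration of both directions (a sketch; the random matrices and tolerances are my own choices):</p>

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 7, 4

# Full column rank (rank m, with m <= n): X^T X is invertible.
X = rng.standard_normal((n, m))
G = X.T @ X
assert np.linalg.matrix_rank(X) == m
assert np.linalg.matrix_rank(G) == m          # invertible Gram matrix
assert np.linalg.det(G) > 1e-10

# Make the columns dependent: X^T X becomes singular.
Xd = X.copy()
Xd[:, 3] = 2.0 * Xd[:, 0]                     # column 3 = 2 * column 0
Gd = Xd.T @ Xd
assert np.linalg.matrix_rank(Xd) == m - 1
assert np.linalg.matrix_rank(Gd) == m - 1     # not invertible
```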
<p>It is true if and only if:</p> <p><span class="math-container">$m\le n$</span> and Rank<span class="math-container">$\,(X)=m$</span>.</p> <p>Assume that <span class="math-container">$m\le n$</span> and Rank<span class="math-container">$\,(X)=m$</span>, and let <span class="math-container">$X^TXu=0$</span>, for some <span class="math-container">$u\in\mathbb R^m$</span>. We need to show that <span class="math-container">$u=0$</span>. We have also that <span class="math-container">$$ 0=(X^TXu,u)=(Xu,Xu), $$</span> and thus <span class="math-container">$Xu=0$</span>. But as Rank<span class="math-container">$\,(X)=m$</span>, this implies that <span class="math-container">$u=0$</span>. (Otherwise, the columns of <span class="math-container">$X$</span> would be linearly dependent, and hence its rank less than <span class="math-container">$m$</span>.)</p> <p>Assume that <span class="math-container">$X^TX\in\mathbb R^{m\times m}$</span> is invertible. Then <span class="math-container">$m=$</span>Rank<span class="math-container">$\,(X^TX)\le$</span>Rank<span class="math-container">$\,(X)\le \min\{m,n\}$</span>. Thus <span class="math-container">$\min\{m,n\}=m$</span>, Rank<span class="math-container">$\,(X)=m$</span>, and <span class="math-container">$m\le n$</span>.</p>
logic
<p>I'm sure there are easy ways of proving things using, well... any other method besides this! But still, I'm curious to know whether it would be acceptable/if it has been done before?</p>
<p>There is a disappointing way of answering your question affirmatively: If $\phi$ is a statement such that First order Peano Arithmetic $\mathsf{PA}$ proves "$\phi$ is provable", then in fact $\mathsf{PA}$ also proves $\phi$. You can replace here $\mathsf{PA}$ with $\mathsf{ZF}$ (Zermelo Fraenkel set theory) or your usual or favorite first order formalization of mathematics. In a sense, this is exactly what you were asking: If we can prove that there is a proof, then there is a proof. On the other hand, this is actually unsatisfactory because there are no known natural examples of statements $\phi$ for which it is actually easier to prove that there is a proof rather than actually finding it. </p> <p>(The above has a neat formal counterpart, <a href="http://en.wikipedia.org/wiki/L%C3%B6b%27s_theorem" rel="nofollow noreferrer">Löb's theorem</a>, that states that if $\mathsf{PA}$ can prove "If $\phi$ is provable, then $\phi$", then in fact $\mathsf{PA}$ can prove $\phi$.)</p> <p>There are other ways of answering affirmatively your question. For example, it is a theorem of $\mathsf{ZF}$ that if $\phi$ is a $\Pi^0_1$ statement and $\mathsf{PA}$ does not prove its negation, then $\phi$ is true. To be $\Pi^0_1$ means that $\phi$ is of the form "For all natural numbers $n$, $R(n)$", where $R$ is a recursive statement (that is, there is an algorithm that, for each input $n$, returns in a finite amount of time whether $R(n)$ is true or false). Many natural and interesting statements are $\Pi^0_1$: The Riemann hypothesis, the Goldbach conjecture, etc. It would be fantastic to verify some such $\phi$ this way. On the other hand, there is no scenario for achieving anything like this.</p> <p>The key to the results above is that $\mathsf{PA}$, and $\mathsf{ZF}$, and any reasonable formalization of mathematics, are <em>arithmetically sound</em>, meaning that their theorems about natural numbers are actually true in the standard model of arithmetic. 
The first paragraph is a consequence of arithmetic soundness. The third paragraph is a consequence of the fact that $\mathsf{PA}$ proves all true $\Sigma^0_1$-statements. (Much less than $\mathsf{PA}$ suffices here, usually one refers to Robinson's arithmetic <a href="http://en.wikipedia.org/wiki/Robinson_arithmetic" rel="nofollow noreferrer">$Q$</a>.) I do not recall whether this property has a standard name. </p> <p>Here are two related posts on MO: </p> <ol> <li><a href="https://mathoverflow.net/q/49943/6085">$\mathrm{Provable}(P)\Rightarrow \mathrm{provable}(\mathrm{provable}(P))$?</a></li> <li><a href="https://mathoverflow.net/q/127322/6085">When does $ZFC \vdash\ ' ZFC \vdash \varphi\ '$ imply $ZFC \vdash \varphi$?</a></li> </ol>
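<p>To make the notion of a $\Pi^0_1$ statement concrete, here is a small Python sketch (my own illustration, not part of the answer above) of the recursive predicate $R(n)$ underlying the Goldbach conjecture: each individual instance $R(n)$ is decidable by an algorithm in finite time, while the conjecture itself is the infinite claim "for all $n$, $R(n)$", which no finite computation can settle.</p>

```python
def is_prime(m):
    """Trial-division primality test; an algorithm, so R below is recursive."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def R(n):
    """R(n): 'if n is an even number >= 4, then n is a sum of two primes'.
    Decidable for each fixed n; Goldbach's conjecture is 'for all n, R(n)'."""
    if n < 4 or n % 2 != 0:
        return True  # vacuously true for odd or small n
    return any(is_prime(q) and is_prime(n - q) for q in range(2, n // 2 + 1))

# A finite check only confirms finitely many instances of the Pi^0_1 statement:
checked = all(R(n) for n in range(1000))
```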
<p>I'd say the model-theoretic proof of the Ax-Grothendieck theorem falls into this category. There may be other ways of proving it, but this is the only proof I saw in grad school, and it's pretty natural if you know model theory.</p> <p>The theorem states that for any polynomial map $f:\mathbb{C}^n \to\mathbb{C}^n$, if $f$ is injective (one-to-one), then it is surjective (onto). The theorem uses several results in model theory, and the argument goes roughly as follows. </p> <p>Let $ACL_p$ denote the theory of algebraically closed fields of characteristic $p$. $ACL_0$ is axiomatized by the axioms of an algebraically closed field and the axiom scheme $\psi_2, \psi_3, \psi_4,\ldots$, where $\psi_k$ is the statement "for all $x \neq 0$, $k x \neq 0$". Note that all $\psi_k$ are also proved by $ACL_p$, if $p$ does not divide $k$.</p> <ol> <li>The theorem is true in $ACL_p$, $p&gt;0$. By completeness (point 2 below) it suffices to verify it in the single model $\overline{\mathbb{F}_p}$. Assume a counter-example there, and let $F_0$ be the subfield generated by the coefficients of the map together with the coordinates of a point missed by it; $F_0$ is finite, since every element of $\overline{\mathbb{F}_p}$ is algebraic over $\mathbb{F}_p$. The map restricts to an injective map $F_0^n \to F_0^n$ of a finite set, which must therefore be surjective as well, a contradiction.</li> <li>The theory of algebraically closed fields in characteristic $p$ is complete (i.e. the standard axioms prove or disprove all statements expressible in the first order language of rings).</li> <li>For each degree $d$ and dimension $n$, restrict Ax-Grothendieck to a statement $\phi_{d,n}$, which is expressible as a statement in the first order language of rings. Then $\phi_{d,n}$ is provable in $ACL_p$ for all characteristics $p &gt; 0$.</li> <li>Assume $\phi_{d,n}$ is false for $p=0$. Then by completeness, there is a proof $P$ of $\neg \phi_{d,n}$ in $ACL_0$. By the finiteness of proofs, there exists a finite subset of axioms for $ACL_0$ which are used in this proof. If none of the $\psi_k$ are used in $P$, then $\neg \phi_{d,n}$ is true of all algebraically closed fields, which cannot be the case by (3).
Let $k_0,\ldots, k_m$ be the collection of indices of $\psi_k$ used in $P$. Pick a prime $p_0$ which does not divide any of $k_0,\ldots,k_m$. Then all of the axioms used in $P$ are also true of $ACL_{p_0}$, so $ACL_{p_0}$ also proves $\neg \phi_{d,n}$, again contradicting (3). Therefore there is a proof of $\phi_{d,n}$ in $ACL_0$.</li> </ol> <p>So the proof is actually along the lines of "for each degree $d$ and dimension $n$ there is a proof of the Ax-Grothendieck theorem restricted to that degree and dimension." What any of those proofs are, I have no clue.</p>
matrices
<p>I want to understand the meaning behind the Jordan Normal form, as I think this is crucial for a mathematician.</p> <p>As far as I understand this, the idea is to get the closest representation of an arbitrary endomorphism towards the diagonal form. As diagonalization is only possible if there are sufficient eigenvectors, we try to get a representation of the endomorphism with respect to its generalized eigenspaces, as their sum always gives us the whole space. Therefore bringing an endomorphism to its Jordan normal form is always possible.</p> <p>How often an eigenvalue appears on the diagonal in the JNF is determined by its algebraic multiplicity. The number of blocks is determined by its geometric multiplicity. Here I am not sure whether I've got the idea right. I mean, I have trouble interpreting this statement.</p> <blockquote> <p>What is the meaning behind a Jordan normal block and why is the number of these blocks equal to the number of linearly independent eigenvectors?</p> </blockquote> <p>I do not want to see a rigorous proof, but maybe someone could answer for me the following sub-questions.</p> <blockquote> <p>(a) Why do we have to start a new block for each new linearly independent eigenvector that we can find?</p> <p>(b) Why do we not have one block for each generalized eigenspace?</p> <p>(c) What is the intuition behind the fact that the Jordan blocks that contain at least <span class="math-container">$k+1$</span> entries of the eigenvalue <span class="math-container">$\lambda$</span> are determined by the following? <span class="math-container">$$\dim(\ker(A-\lambda I)^{k+1}) - \dim(\ker(A-\lambda I)^k)$$</span></p> </blockquote>
<p>Let me sketch a proof of existence of the Jordan canonical form which, I believe, makes it somewhat natural.</p> <hr /> <p>Let us say that a linear endomorphism <span class="math-container">$f:V\to V$</span> of a nonzero finite dimensional vector space is <strong>decomposable</strong> if there exist <em>proper</em> subspaces <span class="math-container">$U_1$</span>, <span class="math-container">$U_2$</span> of <span class="math-container">$V$</span> such that <span class="math-container">$V=U_1\oplus U_2$</span>, <span class="math-container">$f(U_1)\subseteq U_1$</span> and <span class="math-container">$f(U_2)\subseteq U_2$</span>, and let us say that <span class="math-container">$f$</span> is <strong>indecomposable</strong> if it is not decomposable. In terms of bases and matrices, it is easy to see that the map <span class="math-container">$f$</span> is decomposable iff there exists a basis of <span class="math-container">$V$</span> with respect to which the matrix of <span class="math-container">$f$</span> has a non-trivial block diagonal decomposition (that is, it is block diagonal with two blocks).</p> <p>Now it is not hard to prove the following:</p> <blockquote> <p><strong>Lemma 1.</strong> <em>If <span class="math-container">$f:V\to V$</span> is an endomorphism of a nonzero finite dimensional vector space, then there exist <span class="math-container">$n\geq1$</span> and nonzero subspaces <span class="math-container">$U_1$</span>, <span class="math-container">$\dots$</span>, <span class="math-container">$U_n$</span> of <span class="math-container">$V$</span> such that <span class="math-container">$V=\bigoplus_{i=1}^nU_i$</span>, <span class="math-container">$f(U_i)\subseteq U_i$</span> for all <span class="math-container">$i\in\{1,\dots,n\}$</span> and for each such <span class="math-container">$i$</span> the restriction <span class="math-container">$f|_{U_i}:U_i\to U_i$</span> is indecomposable.</em></p> </blockquote> <p>Indeed, you can more or
less imitate the usual argument that shows that every natural number larger than one is a product of prime numbers.</p> <p>This lemma allows us to reduce the study of linear maps to the study of <em>indecomposable</em> linear maps. So we should start by trying to see what an indecomposable endomorphism looks like.</p> <p>There is a general fact that comes in useful at times:</p> <blockquote> <p><strong>Lemma.</strong> <em>If <span class="math-container">$h:V\to V$</span> is an endomorphism of a finite dimensional vector space, then there exists an <span class="math-container">$m\geq1$</span> such that <span class="math-container">$V=\ker h^m\oplus\def\im{\operatorname{im}}\im h^m$</span>.</em></p> </blockquote> <p>I'll leave its proof as a pleasant exercise.</p> <p>So let us fix an indecomposable endomorphism <span class="math-container">$f:V\to V$</span> of a nonzero finite dimensional vector space. As the ground field <span class="math-container">$k$</span> is algebraically closed, there is a nonzero <span class="math-container">$v\in V$</span> and a scalar <span class="math-container">$\lambda\in k$</span> such that <span class="math-container">$f(v)=\lambda v$</span>. Consider the map <span class="math-container">$h=f-\lambda\mathrm{Id}:V\to V$</span>: we can apply the lemma to <span class="math-container">$h$</span>, and we conclude that <span class="math-container">$V=\ker h^m\oplus\def\im{\operatorname{im}}\im h^m$</span> for some <span class="math-container">$m\geq1$</span>. Moreover, it is very easy to check that <span class="math-container">$f(\ker h^m)\subseteq\ker h^m$</span> and that <span class="math-container">$f(\im h^m)\subseteq\im h^m$</span>. Since we are supposing that <span class="math-container">$f$</span> is indecomposable, one of <span class="math-container">$\ker h^m$</span> or <span class="math-container">$\im h^m$</span> must be the whole of <span class="math-container">$V$</span>.
As <span class="math-container">$v$</span> is in the kernel of <span class="math-container">$h$</span>, it is also in the kernel of <span class="math-container">$h^m$</span>; since it is nonzero, it is not in <span class="math-container">$\im h^m$</span>, and we see that <span class="math-container">$\ker h^m=V$</span>.</p> <p>This means, precisely, that <span class="math-container">$h^m:V\to V$</span> is the zero map, and we see that <span class="math-container">$h$</span> is <em>nilpotent</em>. Suppose its nilpotency index is <span class="math-container">$k\geq1$</span>, and let <span class="math-container">$w\in V$</span> be a vector such that <span class="math-container">$h^{k-1}(w)\neq0=h^k(w)$</span>.</p> <blockquote> <p><strong>Lemma.</strong> The set <span class="math-container">$\mathcal B=\{w,h(w),h^2(w),\dots,h^{k-1}(w)\}$</span> is a basis of <span class="math-container">$V$</span>.</p> </blockquote> <p>This is again a nice exercise.</p> <p>Now you should be able to check easily that the matrix of <span class="math-container">$f$</span> with respect to the basis <span class="math-container">$\mathcal B$</span> of <span class="math-container">$V$</span> is a Jordan block.</p> <p>In this way we conclude that every indecomposable endomorphism of a nonzero finite dimensional vector space has, in an appropriate basis, a Jordan block as a matrix. According to Lemma 1, then, every endomorphism of a nonzero finite dimensional vector space has, in an appropriate basis, a block diagonal matrix with Jordan blocks.</p> <hr /> <p>Much later: <em>How to prove the lemma?</em> A nice way to do this which is just a computation is the following.</p> <ul> <li>Show first that the vectors <span class="math-container">$w,h(w),h^2(w),\dots,h^{k-1}(w)$</span> are linearly independent.
Let <span class="math-container">$W$</span> be the subspace they span.</li> <li>Find vectors <span class="math-container">$z_1,\dots,z_l$</span> so that together with the previous <span class="math-container">$k$</span> ones we have a basis for <span class="math-container">$V$</span>. There is then a unique map <span class="math-container">$\Phi:V\to F$</span> such that <span class="math-container">$$ \Phi(h^i(w)) = \begin{cases} 1 &amp; \text{if $i=k-1$;} \\ 0 &amp; \text{if $0\leq i&lt;k-1$;} \end{cases} $$</span> and <span class="math-container">$$\Phi(z_i)=0\quad\text{for all $i$.}$$</span> Using this, construct another map <span class="math-container">$\pi:V\to V$</span> such that <span class="math-container">$$\pi(v) = \sum_{i=0}^{k-1}\Phi(h^i(v))h^{k-1-i}(w)$$</span> for all <span class="math-container">$v\in V$</span>. Prove that <span class="math-container">$W=\operatorname{img}\pi$</span> and that <span class="math-container">$\pi^2=\pi$</span>, so that <span class="math-container">$V=W\oplus\ker\pi$</span>, and that <span class="math-container">$\pi h=h\pi$</span>, so that <span class="math-container">$\ker\pi$</span> is <span class="math-container">$h$</span>-invariant. Since we are supposing <span class="math-container">$V$</span> to be <span class="math-container">$h$</span>-indecomposable, we must have <span class="math-container">$\ker\pi=0$</span>.</li> </ul> <p>This is not the most obvious proof. It is what one gets if one notices that <span class="math-container">$V$</span> is a <span class="math-container">$k[X]/(X^k)$</span>-module with <span class="math-container">$X$</span> acting by the map <span class="math-container">$h$</span>, and that the ring is self-injective.
In fact, if you know what this means, there is really no need to even write down the maps <span class="math-container">$\Phi$</span> and <span class="math-container">$\pi$</span>, as the fact that <span class="math-container">$W$</span> is a direct summand of <span class="math-container">$V$</span> as a <span class="math-container">$k[X]/(X^k)$</span>-module is immediate, since it is obviously free of rank 1, and therefore injective.</p> <p>Fun fact: a little note with the details of this argument was rejected by the MAA Monthly because «this is a well-known argument».</p>
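<p>Question (c) of the original post can be checked numerically: for a matrix already in Jordan form, <span class="math-container">$\dim\ker(A-\lambda I)^{k+1}-\dim\ker(A-\lambda I)^k$</span> counts the Jordan blocks for <span class="math-container">$\lambda$</span> of size at least <span class="math-container">$k+1$</span>, and <span class="math-container">$\dim\ker(A-\lambda I)$</span> counts all of them. A small NumPy sketch (the example matrix is my own illustrative choice):</p>

```python
import numpy as np

def jordan_block(lam, size):
    """A single size x size Jordan block for eigenvalue lam."""
    return lam * np.eye(size) + np.diag(np.ones(size - 1), k=1)

def block_diag(*blocks):
    """Assemble square blocks along the diagonal."""
    n = sum(b.shape[0] for b in blocks)
    A = np.zeros((n, n))
    i = 0
    for b in blocks:
        s = b.shape[0]
        A[i:i + s, i:i + s] = b
        i += s
    return A

# Jordan blocks of sizes 3 and 2 for lambda = 2, plus one block for lambda = 5
A = block_diag(jordan_block(2, 3), jordan_block(2, 2), jordan_block(5, 1))
n, lam = A.shape[0], 2

def ker_dim(M):
    return M.shape[0] - np.linalg.matrix_rank(M)

dims = [ker_dim(np.linalg.matrix_power(A - lam * np.eye(n), k)) for k in range(5)]
# dims[1] is the geometric multiplicity: the number of blocks for lambda = 2,
# and dims[k + 1] - dims[k] is the number of those blocks of size >= k + 1.
blocks_at_least = [dims[k + 1] - dims[k] for k in range(4)]
```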
<p>The <strong>true meaning</strong> of the Jordan canonical form is explained in the context of representation theory, namely, of finite dimensional representations of the algebra <span class="math-container">$k[t]$</span> (where <span class="math-container">$k$</span> is your algebraically closed ground field):</p> <ul> <li>Uniqueness of the normal form is the Krull-Schmidt theorem, and </li> <li>existence is the description of the indecomposable modules of <span class="math-container">$k[t]$</span>. </li> </ul> <p>Moreover, the description of indecomposable modules follows more or less easily (in a strong sense: if you did not know about the Jordan canonical form, you could guess it by looking at the following:) the simple modules are very easy to describe (this is where algebraically closedness comes in) and the extensions between them (in the sense of homological algebra) are also easy to describe (because <span class="math-container">$k[t]$</span> is an hereditary ring) Putting these things together (plus the Jordan-Hölder theorem) one gets existence.</p>
linear-algebra
<p>I understand that a vector space is a collection of vectors that can be added and scalar multiplied and satisfies the 8 axioms, however, I do not know what a vector is. </p> <p>I know in physics a vector is a geometric object that has a magnitude and a direction and it computer science a vector is a container that holds elements, expand, or shrink, but in linear algebra the definition of a vector isn't too clear. </p> <p>As a result, what is a vector in Linear Algebra?</p>
<p>In modern mathematics, there's a tendency to define things in terms of <em>what they do</em> rather than in terms of <em>what they are</em>.</p> <p>As an example, suppose that I claim that there are objects called "pizkwats" that obey the following laws:</p> <ul> <li><span class="math-container">$\forall x. \forall y. \exists z. x + y = z$</span></li> <li><span class="math-container">$\exists x. x = 0$</span></li> <li><span class="math-container">$\forall x. x + 0 = 0 + x = x$</span></li> <li><span class="math-container">$\forall x. \forall y. \forall z. (x + y) + z = x + (y + z)$</span></li> <li><span class="math-container">$\forall x. x + x = 0$</span></li> </ul> <p>These rules specify what pizkwats <em>do</em> by saying what rules they obey, but they don't say anything about what pizkwats <em>are</em>. We can find all sorts of things that we could call pizkwats. For example, we could imagine that pizkwats are the numbers 0 and 1, with addition being done modulo 2. They could also be bitstrings of length 137, with "addition" meaning "bitwise XOR." Or they could be sets, with “addition” meaning “symmetric difference.” Each of these collections of objects obeys the rules for what pizkwats do, but none of them "are" pizkwats.</p> <p>The advantage of this approach is that we can prove results about pizkwats knowing purely how they behave rather than what they fundamentally are. For example, as a fun exercise, see if you can use the above rules to prove that</p> <blockquote> <p><span class="math-container">$\forall x. \forall y. x + y = y + x$</span>.</p> </blockquote> <p>This means that anything that "acts like a pizkwat" must support a commutative addition operator. Similarly, we could prove that</p> <blockquote> <p><span class="math-container">$\forall x. \forall y. 
(x + y = 0 \rightarrow x = y)$</span>.</p> </blockquote> <p>The advantage of setting things up this way is that any time we find something that "looks like a pizkwat" in the sense that it obeys the rules given above, we're guaranteed that it must have some other properties, namely, that its addition is commutative and that every element is its own unique inverse. We could develop a whole elaborate theory about how pizkwats behave and what pizkwats do purely based on the rules of how they work, and since we specifically <em>never actually said what a pizkwat is</em>, anything that we find that looks like a pizkwat instantly falls into our theory.</p> <p>In your case, you're asking about what a vector is. In a sense, there is no single thing called "a vector," because a vector is just something that obeys a bunch of rules. But any time you find something that looks like a vector, you immediately get a bunch of interesting facts about it - you can ask questions about spans, about changing basis, etc. - regardless of whether that thing you're looking at is a vector in the classical sense (a list of numbers, or an arrow pointing somewhere) or a vector in a more abstract sense (say, a function acting as a vector in a "vector space" made of functions.)</p> <p>As a concluding remark, Grant Sanderson of 3blue1brown has <a href="https://youtu.be/TgKwz5Ikpc8" rel="noreferrer">an excellent video talking about what vectors are</a> that explores this in more depth.</p>
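<p>One of the candidate models mentioned above, bitstrings under bitwise XOR, can be checked mechanically. A short Python sketch (my own illustration, using 8-bit strings rather than length 137) verifying the pizkwat axioms together with the derived commutativity law:</p>

```python
def padd(x, y):
    """'Pizkwat addition' on 8-bit strings, modeled as ints 0..255: bitwise XOR."""
    return x ^ y

zero = 0
xs = range(256)

assert all(padd(x, zero) == x == padd(zero, x) for x in xs)   # identity laws
assert all(padd(x, x) == zero for x in xs)                    # x + x = 0
assert all(padd(padd(x, y), z) == padd(x, padd(y, z))         # associativity (sampled)
           for x in range(16) for y in range(16) for z in range(16))
assert all(padd(x, y) == padd(y, x)                           # derived: commutativity
           for x in range(64) for y in range(64))
```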
<p>When I was 14, I was introduced to vectors in a freshman physics course (algebra based). We were told that it was a quantity with magnitude and direction. This is stuff like force, momentum, and electric field.</p> <p>Three years later in precalculus we thought of them as "points," but with arrows emanating from the origin to that point. Just another thing. This was the concept that stuck until I took linear algebra two more years later.</p> <p>But now in the abstract sense, vectors don't have to be these "arrows." They can be anything we want: functions, numbers, matrices, operators, whatever. When we build vector spaces (linear spaces in other texts), we just call the objects vectors - who cares what they look like? It's a name to an abstract object.</p> <p>For example, in $\mathbb{R}^n$ our vectors are ordered n-tuples. In $\mathcal{C}[a,b]$ our vectors are now functions - continuous functions on $[a, b]$ at that. In $L^2(\mathbb{R}$) our vectors are those functions for which</p> <p>$$ \int_{\mathbb{R}} | f |^2 &lt; \infty $$</p> <p>where the integral is taken in the Lebesgue sense.</p> <p>Vectors are whatever we take them to be in the appropriate context.</p>
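<p>The point that vectors "are whatever we take them to be" can be made concrete in code: functions form a vector space under pointwise operations. A minimal Python sketch (the function names are my own, purely for illustration):</p>

```python
def vadd(f, g):
    """Vector addition in a space of functions: pointwise sum."""
    return lambda x: f(x) + g(x)

def vscale(c, f):
    """Scalar multiplication: pointwise scaling."""
    return lambda x: c * f(x)

zero = lambda x: 0   # the zero vector of this space

# h "is a vector": h = 2*(x -> x) + (x -> x^2), i.e. h(x) = 2x + x^2
h = vadd(vscale(2, lambda x: x), lambda x: x ** 2)
```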
geometry
<p>I have an ellipse centered at $(h,k)$, with semi-major axis $r_x$, semi-minor axis $r_y$, both aligned with the Cartesian plane.</p> <p>How do I determine if a point $(x,y)$ is within the area bounded by the ellipse? </p>
<p>The region (disk) bounded by the ellipse is given by the equation: $$ \frac{(x-h)^2}{r_x^2} + \frac{(y-k)^2}{r_y^2} \leq 1. \tag{1} $$ So given a test point $(x,y)$, plug it in $(1)$. If the inequality is satisfied, then it is inside the ellipse; otherwise it is outside the ellipse. Moreover, the point is on the boundary of the region (i.e., on the ellipse) if and only if the inequality is satisfied tightly (i.e., the left hand side evaluates to $1$). </p>
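<p>In code the test is a one-liner. A Python sketch of inequality $(1)$ (the function name is my own choice):</p>

```python
def in_ellipse(x, y, h, k, rx, ry):
    """True iff (x, y) lies inside or on the axis-aligned ellipse
    centered at (h, k) with semi-axes rx, ry."""
    return ((x - h) / rx) ** 2 + ((y - k) / ry) ** 2 <= 1.0
```

Points on the boundary make the left-hand side exactly $1$; strict interior points give a value below $1$.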
<p>Another way uses the definition of the ellipse as the points whose sum of distances to the foci is constant.</p> <p>Get the foci at $(h+f, k)$ and $(h-f, k)$, where $f = \sqrt{r_x^2 - r_y^2}$.</p> <p>The sum of the distances (by looking at the lines from $(h, k+r_y)$ to the foci) is $2\sqrt{f^2 + r_y^2} = 2 r_x $.</p> <p>So, for any point $(x, y)$, compute $\sqrt{(x-(h+f))^2 + (y-k)^2} + \sqrt{(x-(h-f))^2 + (y-k)^2} $ and compare this with $2 r_x$.</p> <p>This takes more work, but I like using the geometric definition.</p> <p>Also, for both methods, if speed is important (i.e., you are doing this for many points), you can immediately reject any point $(x, y)$ for which $|x-h| &gt; r_x$ or $|y-k| &gt; r_y$.</p>
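<p>A sketch of this focal-distance test in Python, including the quick bounding-box rejection from the last paragraph (assumes $r_x \ge r_y$, so the foci lie on the horizontal axis; the function name is my own):</p>

```python
from math import hypot, sqrt

def in_ellipse_foci(x, y, h, k, rx, ry):
    """Point-in-ellipse test via the sum-of-focal-distances definition.
    Assumes rx >= ry, so the foci are at (h - f, k) and (h + f, k)."""
    if abs(x - h) > rx or abs(y - k) > ry:   # cheap bounding-box rejection
        return False
    f = sqrt(rx * rx - ry * ry)
    d = hypot(x - (h + f), y - k) + hypot(x - (h - f), y - k)
    return d <= 2 * rx
```

Note that the comparison with $2 r_x$ is done in floating point, so points exactly on the boundary may land on either side of the test.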
logic
<p>I am teaching a first-semester course in abstract algebra, and we are discussing group isomorphisms. In order to prove that two groups are not isomorphic, I encourage the students to look for a group-theoretic property satisfied by one group but not by the other. I did not give a precise meaning to the phrase &quot;group-theoretic property&quot;, but some examples of the sort of properties I have in mind are <span class="math-container">$$ \forall g,h\in G:\exists n,m\in\mathbb{Z}:(n,m)\neq (0,0)\wedge g^n=h^m,\\ \forall H\leq G:\exists g,h\in G:H=\langle g,h\rangle,\\ \forall g,h\in G:\exists i\in G: \langle g,h\rangle = \langle i\rangle $$</span> One of my students asked if, given two non-isomorphic groups, there is always a group-theoretic property satisfied by one group but not the other. In a sense, &quot;being isomorphic to that group over there&quot; is a group-theoretic property. But this is not really what I have in mind.</p> <p>To pin down the class of properties I have in mind, let's say we allow expressions involving</p> <ul> <li>quantification over <span class="math-container">$G$</span>, subgroups of <span class="math-container">$G$</span>, and <span class="math-container">$\mathbb{Z}$</span>,</li> <li>group multiplication, inversion, and subgroups generated by a finite list of elements,</li> <li>the symbol <span class="math-container">$1_G$</span> (the group identity element),</li> <li>addition, subtraction, multiplication, exponentiation (provided the exponent is non-negative), and inequalities of integers,</li> <li>the integer symbols <span class="math-container">$0$</span> and <span class="math-container">$1$</span>,</li> <li>raising a group element to an integer power, and</li> <li>equality, elementhood, and logical connectives.</li> </ul> <p>I do not know much about model theory or logic, but my understanding is that this is not the first-order theory of groups.
In particular, <a href="https://math.stackexchange.com/questions/2137508/can-a-torsion-group-and-a-nontorsion-group-be-elementarily-equivalent">this MSE question</a> indicates that there exist a torsion and a non-torsion group which are elementarily equivalent (meaning they cannot be distinguished by a first-order statement in the language of groups), but these groups can be distinguished by a property of the above form. I have also heard that free groups of different rank are elementarily equivalent, but these can also be distinguished by a property of the above form.</p> <p>My questions are:</p> <p>(1) Is there a name for the theory I am considering? Or something closely (or distantly) related?</p> <p>(2) Are there examples of non-isomorphic groups that cannot be distinguished by a property of the above form? Are there examples where the groups involved could be understood by an average first-semester algebra student?</p>
<p>First, let's start with the silly answer. Your language only has countably many different expressions, so can only divvy groups up into continuum-many classes - so there are definitely non-isomorphic groups it can't distinguish! In general this will happen as long as your language has <em>only set-many</em> expressions: you need a proper class sized logic like <span class="math-container">$\mathcal{L}_{\infty,\infty}$</span> to distinguish between all pairs of non-isomorphic structures.</p> <p>That said, you're right that you're looking at something much stronger than first-order logic. Specifically, you're describing a sublogic of <strong>second-order logic</strong>, the key difference being that second-order logic lets you quantify over arbitrary subsets of the domain, and indeed <em>functions and relations of arbitrary arity</em> over the domain, and not just subgroups. Second-order logic doesn't have an explicit ability to refer to (say) integers built in, but it can do so via tricks of quantifying over finite configurations.</p> <p>While the exact strength of the system you describe isn't clear to me, second-order logic is known to be extremely powerful. In particular, I believe there are no known natural examples of non-isomorphic second-order-elementarily-equivalent structures <strong>at all</strong>, although as per the first paragraph of this answer such structures certainly have to exist! So second-order-equivalence is a pretty strong equivalence relation, and in practice will suffice to distinguish all the groups your students run into.</p>
<p>Here are some simple examples where you at least need to make some decisions about what you believe about set theory to determine whether two groups are isomorphic. Assuming the axiom of choice every vector space has a basis, so <span class="math-container">$\mathbb{R}$</span> is isomorphic (as a group) to some direct sum of copies of <span class="math-container">$\mathbb{Q}$</span> (in fact necessarily to a direct sum of <span class="math-container">$|\mathbb{R}|$</span> copies of <span class="math-container">$\mathbb{Q}$</span>). The existence of such a basis for <span class="math-container">$\mathbb{R}$</span> over <span class="math-container">$\mathbb{Q}$</span> allows you to construct <a href="https://en.wikipedia.org/wiki/Vitali_set" rel="noreferrer">Vitali sets</a>, which are non-measurable, and there are models of <span class="math-container">$ZF \neg C$</span> in which every subset of <span class="math-container">$\mathbb{R}$</span> is measurable, so <span class="math-container">$\mathbb{R}$</span> fails to have a basis in such models.</p> <p>Another example along the same lines is <span class="math-container">$\left( \prod_{\mathbb{N}} \mathbb{Q} / \bigoplus_{\mathbb{N}} \mathbb{Q} \right)^{\ast}$</span>, taking the dual as a <span class="math-container">$\mathbb{Q}$</span>-vector space. Assuming the axiom of choice this is a direct sum of <span class="math-container">$|\mathbb{R}|$</span> copies of <span class="math-container">$\mathbb{Q}$</span> again, but without at least enough choice to construct something like non-principal ultrafilters on <span class="math-container">$\mathbb{N}$</span> it's not clear how to write down a single nonzero element of this group!</p>
logic
<p>What are some good online/free resources (tutorials, guides, exercises, and the like) for learning Lambda Calculus?</p> <p>Specifically, I am interested in the following areas:</p> <ul> <li>Untyped lambda calculus</li> <li>Simply-typed lambda calculus</li> <li>Other typed lambda calculi</li> <li>Church's Theory of Types (I'm not sure where this fits in).</li> </ul> <p>(As I understand, this should provide a solid basis for the understanding of type theory.)</p> <p>Any advice and suggestions would be appreciated.</p>
<p><img src="https://i.sstatic.net/8E8Sp.png" alt="alligators"></p> <p><a href="http://worrydream.com/AlligatorEggs/" rel="noreferrer"><strong>Alligator Eggs</strong></a> is a cool way to learn lambda calculus.</p> <p>Also learning functional programming languages like Scheme, Haskell etc. will be added fun.</p>
<p>Recommendations:</p> <ol> <li>Barendregt &amp; Barendsen, 1998, <a href="https://www.academia.edu/18746611/Introduction_to_lambda_calculus" rel="noreferrer">Introduction to lambda-calculus</a>;</li> <li>Girard, Lafont &amp; Taylor, 1987, <a href="http://www.paultaylor.eu/stable/Proofs+Types.html" rel="noreferrer">Proofs and Types</a>;</li> <li>Sørenson &amp; Urzyczyn, 1999, <a href="https://disi.unitn.it/%7Ebernardi/RSISE11/Papers/curry-howard.pdf" rel="noreferrer">Lectures on the Curry-Howard Isomorphism</a>.</li> </ol> <p>All of these are mentioned in <a href="http://lambda-the-ultimate.org/node/492" rel="noreferrer">the LtU Getting Started thread</a>.</p>
logic
<p>this might be a silly question, but I was wondering: PA cannot prove its consistency by the incompleteness theorems, but we can "step outside" and exhibit a model of it, namely <span class="math-container">$\mathbb{N}$</span>, so we know that PA is consistent.</p> <p>Why can't we do this with ZFC? I have seen things like "if <span class="math-container">$\kappa$</span> is [some large cardinal] then <span class="math-container">$V_{\kappa}$</span> models ZFC", but these stem from an "if". </p> <p>Is this a case of us not having been able to do this yet, or is there a good reason why it is simply not possible?</p>
<p>The problem is that, unlike the case for PA, essentially all accepted mathematical reasoning can be formalized in ZFC. Any proof of the consistency of ZFC must come from a system that is stronger (at least in some ways), so we must go outside ZFC-formalizable mathematics, which is most of mathematics. This is just like how we go outside of PA-formalizable mathematics to prove the consistency of PA (say, working in ZFC), except that PA-formalizable mathematics is a much smaller and relatively uncontroversial subset of mathematics. Thus it is a common view that the consistency of PA is a settled fact whereas the consistency of ZFC is conjectural. (As I mentioned in a comment, as a gross oversimplification, “settled mathematical truth (amongst mainstream classical mathematicians)” roughly corresponds to “provable in ZFC”... Con PA is whereas Con ZFC is not.)</p> <p>As for proving the consistency of ZFC, as you mention, we could go to a stronger system where there is an axiom giving the existence of inaccessible cardinals. Working in Morse-Kelley is another possibility. In any case, you are in the same position with the stronger system as you were with ZFC: you cannot use it to prove its own consistency, and since you’ve made stronger assumptions, you’re at a higher “risk” of inconsistency.</p>
<p>Using ZFC plus the axiom "Uncountable strong inaccessible cardinals exist" to give a model of ZFC is exactly the same kind of "stepping outside" as when you use ZF to make a model of PA.</p>
linear-algebra
<blockquote> <p>How can I prove <span class="math-container">$\operatorname{rank}A^TA=\operatorname{rank}A$</span> for any <span class="math-container">$A\in M_{m \times n}$</span>?</p> </blockquote> <p>This is an exercise in my textbook associated with orthogonal projections and Gram-Schmidt process, but I am unsure how they are relevant.</p>
<p>Let $\mathbf{x} \in N(A)$ where $N(A)$ is the null space of $A$. </p> <p>So, $$\begin{align} A\mathbf{x} &amp;=\mathbf{0} \\\implies A^TA\mathbf{x} &amp;=\mathbf{0} \\\implies \mathbf{x} &amp;\in N(A^TA) \end{align}$$ Hence $N(A) \subseteq N(A^TA)$.</p> <p>Again let $\mathbf{x} \in N(A^TA)$</p> <p>So, $$\begin{align} A^TA\mathbf{x} &amp;=\mathbf{0} \\\implies \mathbf{x}^TA^TA\mathbf{x} &amp;=\mathbf{0} \\\implies (A\mathbf{x})^T(A\mathbf{x})&amp;=\mathbf{0} \\\implies A\mathbf{x}&amp;=\mathbf{0}\\\implies \mathbf{x} &amp;\in N(A) \end{align}$$ Hence $N(A^TA) \subseteq N(A)$.</p> <p>Therefore $$\begin{align} N(A^TA) &amp;= N(A)\\ \implies \dim(N(A^TA)) &amp;= \dim(N(A))\\ \implies \text{rank}(A^TA) &amp;= \text{rank}(A)\end{align}$$</p>
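<p><em>An added numerical sketch (not part of the original answer, names are my own):</em> the equality of null spaces just proved forces <span class="math-container">$\text{rank}(A^TA)=\text{rank}(A)$</span>, which is easy to spot-check with NumPy on a matrix built to be rank-deficient:</p>

```python
import numpy as np

# Build a 6x4 matrix whose 4th column is the sum of the first two,
# so rank(A) = 3 by construction.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 3))
A = np.hstack([B, B[:, :1] + B[:, 1:2]])

rank_A = np.linalg.matrix_rank(A)
rank_AtA = np.linalg.matrix_rank(A.T @ A)
print(rank_A, rank_AtA)  # both 3, as N(A) = N(A^T A) predicts
```
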
<p>Let $r$ be the rank of $A \in \mathbb{R}^{m \times n}$. We then have the SVD of $A$ as $$A_{m \times n} = U_{m \times r} \Sigma_{r \times r} V^T_{r \times n}$$ This gives $A^TA$ as $$A^TA = V_{n \times r} \Sigma_{r \times r}^2 V^T_{r \times n}$$ which is nothing but the SVD of $A^TA$. From this it is clear that $A^TA$ also has rank $r$. In fact the singular values of $A^TA$ are nothing but the square of the singular values of $A$.</p>
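<p><em>An added illustration (not from the answer):</em> the final remark, that the singular values of <span class="math-container">$A^TA$</span> are the squares of those of <span class="math-container">$A$</span>, can be verified numerically:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))

s_A = np.linalg.svd(A, compute_uv=False)          # singular values of A
s_AtA = np.linalg.svd(A.T @ A, compute_uv=False)  # singular values of A^T A

# Same rank, and sigma_i(A^T A) = sigma_i(A)^2
assert np.allclose(s_AtA, s_A ** 2)
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T @ A) == 3
```
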
combinatorics
<p>How many ways can I write a positive integer $n$ as a sum of $k$ nonnegative integers up to commutativity?</p> <p>For example, I can write $4$ as $0+0+4$, $0+1+3$, $0+2+2$, and $1+1+2$.</p> <hr> <p>I know how to find the number of noncommutative ways to form the sum: Imagine a line of $n+k-1$ positions, where each position can contain either a cat or a divider. If you have $n$ (nameless) cats and $k-1$ dividers, you can split the cats in to $k$ groups by choosing positions for the dividers: $\binom{n+k-1}{k-1}$. The size of each group of cats corresponds to one of the nonnegative integers in the sum.</p>
<p>As Brian M. Scott mentions, these are <a href="https://en.wikipedia.org/wiki/Partition_%28number_theory%29" rel="nofollow noreferrer">partitions</a> of <span class="math-container">$n$</span>. However, allowing <span class="math-container">$0$</span> into the mix, makes them different to the usual definition of a partition (which assumes non-zero parts). However, this can be adjusted for by taking partitions of <span class="math-container">$n+k$</span> into <span class="math-container">$k$</span> non-zero parts (and subtracting <span class="math-container">$1$</span> from each part).</p> <p>If <span class="math-container">$p(n,k)$</span> is the number of partitions of <span class="math-container">$n$</span> into <span class="math-container">$k$</span> <em>non-zero</em> parts, then <span class="math-container">$p(n,k)$</span> satisfies the recurrence relation <span class="math-container">\begin{align} p(n,k) &amp;= 0 &amp; \text{if } k&gt;n \\ p(n,k) &amp;= 1 &amp; \text{if } k=n \\ p(n,k) &amp;= p(n-1,k-1)+p(n-k,k) &amp; \text{otherwise}. \\ \end{align}</span> (this recurrence is explained on <a href="https://en.wikipedia.org/wiki/Partition_%28number_theory%29#Intermediate_function" rel="nofollow noreferrer">Wikipedia</a>). <em>Note</em>: in the above case, remember to change <span class="math-container">$n$</span> to <span class="math-container">$n+k$</span>. 
This gives a (moderately efficient) method for computing <span class="math-container">$p(n,k)$</span>.</p> <p>The number of partitions of <span class="math-container">$n$</span> into <span class="math-container">$k$</span> parts in <span class="math-container">$\{0,1,\ldots,n\}$</span> can be computed in <a href="https://www.gap-system.org/" rel="nofollow noreferrer">GAP</a> using:</p> <pre><code>NrPartitions(n+k,k); </code></pre> <p>Some small values are listed below:</p> <p><span class="math-container">$$\begin{array}{c|ccccccccccccccc} &amp; k=1 &amp; 2 &amp; 3 &amp; 4 &amp; 5 &amp; 6 &amp; 7 &amp; 8 &amp; 9 &amp; 10 &amp; 11 &amp; 12 &amp; 13 &amp; 14 &amp; 15 \\ \hline 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 \\ 2 &amp; 1 &amp; 2 &amp; 2 &amp; 2 &amp; 2 &amp; 2 &amp; 2 &amp; 2 &amp; 2 &amp; 2 &amp; 2 &amp; 2 &amp; 2 &amp; 2 &amp; 2 \\ 3 &amp; 1 &amp; 2 &amp; 3 &amp; 3 &amp; 3 &amp; 3 &amp; 3 &amp; 3 &amp; 3 &amp; 3 &amp; 3 &amp; 3 &amp; 3 &amp; 3 &amp; 3 \\ 4 &amp; 1 &amp; 3 &amp; 4 &amp; 5 &amp; 5 &amp; 5 &amp; 5 &amp; 5 &amp; 5 &amp; 5 &amp; 5 &amp; 5 &amp; 5 &amp; 5 &amp; 5 \\ 5 &amp; 1 &amp; 3 &amp; 5 &amp; 6 &amp; 7 &amp; 7 &amp; 7 &amp; 7 &amp; 7 &amp; 7 &amp; 7 &amp; 7 &amp; 7 &amp; 7 &amp; 7 \\ 6 &amp; 1 &amp; 4 &amp; 7 &amp; 9 &amp; 10 &amp; 11 &amp; 11 &amp; 11 &amp; 11 &amp; 11 &amp; 11 &amp; 11 &amp; 11 &amp; 11 &amp; 11 \\ 7 &amp; 1 &amp; 4 &amp; 8 &amp; 11 &amp; 13 &amp; 14 &amp; 15 &amp; 15 &amp; 15 &amp; 15 &amp; 15 &amp; 15 &amp; 15 &amp; 15 &amp; 15 \\ 8 &amp; 1 &amp; 5 &amp; 10 &amp; 15 &amp; 18 &amp; 20 &amp; 21 &amp; 22 &amp; 22 &amp; 22 &amp; 22 &amp; 22 &amp; 22 &amp; 22 &amp; 22 \\ 9 &amp; 1 &amp; 5 &amp; 12 &amp; 18 &amp; 23 &amp; 26 &amp; 28 &amp; 29 &amp; 30 &amp; 30 &amp; 30 &amp; 30 &amp; 30 &amp; 30 &amp; 30 \\ 10 &amp; 1 &amp; 6 &amp; 14 &amp; 23 &amp; 30 &amp; 35 &amp; 38 &amp; 40 &amp; 41 &amp; 42 &amp; 42 &amp; 42 &amp; 42 &amp; 42 &amp; 42 \\ 11 
&amp; 1 &amp; 6 &amp; 16 &amp; 27 &amp; 37 &amp; 44 &amp; 49 &amp; 52 &amp; 54 &amp; 55 &amp; 56 &amp; 56 &amp; 56 &amp; 56 &amp; 56 \\ 12 &amp; 1 &amp; 7 &amp; 19 &amp; 34 &amp; 47 &amp; 58 &amp; 65 &amp; 70 &amp; 73 &amp; 75 &amp; 76 &amp; 77 &amp; 77 &amp; 77 &amp; 77 \\ 13 &amp; 1 &amp; 7 &amp; 21 &amp; 39 &amp; 57 &amp; 71 &amp; 82 &amp; 89 &amp; 94 &amp; 97 &amp; 99 &amp; 100 &amp; 101 &amp; 101 &amp; 101 \\ 14 &amp; 1 &amp; 8 &amp; 24 &amp; 47 &amp; 70 &amp; 90 &amp; 105 &amp; 116 &amp; 123 &amp; 128 &amp; 131 &amp; 133 &amp; 134 &amp; 135 &amp; 135 \\ 15 &amp; 1 &amp; 8 &amp; 27 &amp; 54 &amp; 84 &amp; 110 &amp; 131 &amp; 146 &amp; 157 &amp; 164 &amp; 169 &amp; 172 &amp; 174 &amp; 175 &amp; 176 \\ \hline \end{array}$$</span></p> <p>If you want a list of the possible partitions, then use:</p> <pre><code>RestrictedPartitions(n,[0..n],k); </code></pre> <hr /> <p><em>Comment</em>: In the latest version of GAP,</p> <pre><code>NrRestrictedPartitions(n,[0..n],k); </code></pre> <p>does not seem to work properly here, since it does not match</p> <pre><code>Size(RestrictedPartitions(n,[0..n],k)); </code></pre> <p>when <span class="math-container">$k&gt;n$</span>. I emailed the support team about this, and they said that <code>NrRestrictedPartitions</code> and <code>RestrictedPartitions</code> are only intended to be valid for sets of positive integers. (I still think the above is a bug, but let's let that slide.) This means that <code>NrPartitions(n+k,k);</code> is the technically correct choice, and, strictly speaking, we shouldn't use <code>RestrictedPartitions(n,[0..n],k);</code>, but judging from the source code, it will work as expected.</p>
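<p><em>An added sketch (mine, not part of the answer):</em> the recurrence above translates directly into a short memoized Python function, and the shift <span class="math-container">$n \mapsto n+k$</span> then reproduces the GAP table, e.g. row <span class="math-container">$n=10$</span>, column <span class="math-container">$k=5$</span> gives 30:</p>

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n, k):
    """Partitions of n into exactly k non-zero parts, via the recurrence."""
    if k > n:
        return 0
    if k == n:          # includes p(0, 0) = 1 by convention
        return 1
    if k == 0:          # n > 0 cannot be split into zero parts
        return 0
    return p(n - 1, k - 1) + p(n - k, k)

def parts_allowing_zeros(n, k):
    """Partitions of n into k parts from {0, ..., n}: equals p(n + k, k)."""
    return p(n + k, k)

print(parts_allowing_zeros(10, 5))  # 30, matching the table
```

The question's own example checks out as well: <code>parts_allowing_zeros(4, 3)</code> returns 4, matching the four sums listed for writing 4 with three nonnegative parts.
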
<p>If you are only interested in a smallish number of <span class="math-container">$k$</span> then this is most easily solved by thinking recursively and using induction. First some notation. Let <span class="math-container">$F_k(n)$</span> be the number of ways to sum <span class="math-container">$k$</span> natural numbers so the sum is <span class="math-container">$n$</span>.</p> <p>The generic reasoning can be inferred from a small example.</p> <p>Assume we have three numbers we want to sum to 4. The number of ways to do this is the same as setting the first digit to <span class="math-container">$k=4,3,2,1,0$</span> in turn and then using the remaining digits to sum up to <span class="math-container">$4-k$</span>. </p> <p>number of ways to write 4 with three digits =</p> <p>{4 + {number of ways to write 0 with two digits}} +</p> <p>{3 + {number of ways to write 1 with two digits}} +</p> <p>{2 + {number of ways to write 2 with two digits}} +</p> <p>{1 + {number of ways to write 3 with two digits}} +</p> <p>{0 + {number of ways to write 4 with two digits}}</p> <p>(You might want to convince yourself that this is the case and that there is no need to consider placing the fixed digit in different positions)</p> <p>Which is the same as writing (in our notation)</p> <p><span class="math-container">$F_3(4) = F_2(0) + F_2(1) + F_2(2) + F_2(3) + F_2(4)$</span></p> <p>For the general case we have</p> <p><span class="math-container">$F_k(n) = \sum_{l=0}^n F_{k-1}(l)$</span></p> <p>It is also easily seen that <span class="math-container">$F_1(n)=1$</span> and <span class="math-container">$F_k(0)=1$</span>. 
This now allows us to expand the first few relations as</p> <p><span class="math-container">$$F_1(n) = 1$$</span> <span class="math-container">$$F_2(n) = \sum_{l=0}^n F_1(l) = \frac{(n+1)}{1!}$$</span> <span class="math-container">$$F_3(n) = \sum_{l=0}^n F_2(l) = \sum_{l=0}^n n+1 = \frac{(n+1)^2+(n+1)}{2!}$$</span> <span class="math-container">$$F_4(n) = \sum_{l=0}^n F_3(l) = \frac{(n+1)^3+3(n+1)^2+2(n+1)}{3!}$$</span> <span class="math-container">$$F_5(n) = \sum_{l=0}^n F_4(l) = \frac{(n+1)^4 + 6(n+1)^3+11(n+1)^2+6(n+1)}{4!}$$</span></p> <p>... and so on</p> <p>Unfortunately there isn't any "nice" generic expression for the coefficients in the numerator. Its a good exercise to find it but be warned; it gets quite messy apart from the two highest and the lowest coefficients in the numerator polynomial which you probably can spot by inspection.</p> <p><strong>Addendum:</strong></p> <p>By factoring the numerator and recognizing the Gamma function the expression can be written in semi-closed form as:</p> <p><span class="math-container">$$F_k(n) = \frac {\Gamma \left( n+k \right) }{\Gamma \left( n+1 \right) \left( k-1 \right) !}$$</span></p>
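<p><em>An added check (not part of the answer):</em> the semi-closed form simplifies further to the binomial coefficient <span class="math-container">$\binom{n+k-1}{k-1}$</span>, since <span class="math-container">$\Gamma(n+k)/(\Gamma(n+1)\,(k-1)!) = (n+k-1)!/(n!\,(k-1)!)$</span>. A direct implementation of the recursion confirms it:</p>

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def F(k, n):
    """The answer's recursion: F_1(n) = 1, F_k(n) = sum of F_{k-1}(l) for l = 0..n."""
    if k == 1:
        return 1
    return sum(F(k - 1, l) for l in range(n + 1))

# Gamma(n+k) / (Gamma(n+1) * (k-1)!) = C(n+k-1, k-1)
for k in range(1, 7):
    for n in range(12):
        assert F(k, n) == comb(n + k - 1, k - 1)
print(F(3, 4))  # 15
```
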
linear-algebra
<p>Can someone point me to a paper, or show here, why symmetric matrices have orthogonal eigenvectors? In particular, I'd like to see proof that for a symmetric matrix $A$ there exists decomposition $A = Q\Lambda Q^{-1} = Q\Lambda Q^{T}$ where $\Lambda$ is diagonal.</p>
<p>For any real matrix $A$ and any vectors $\mathbf{x}$ and $\mathbf{y}$, we have $$\langle A\mathbf{x},\mathbf{y}\rangle = \langle\mathbf{x},A^T\mathbf{y}\rangle.$$ Now assume that $A$ is symmetric, and $\mathbf{x}$ and $\mathbf{y}$ are eigenvectors of $A$ corresponding to distinct eigenvalues $\lambda$ and $\mu$. Then $$\lambda\langle\mathbf{x},\mathbf{y}\rangle = \langle\lambda\mathbf{x},\mathbf{y}\rangle = \langle A\mathbf{x},\mathbf{y}\rangle = \langle\mathbf{x},A^T\mathbf{y}\rangle = \langle\mathbf{x},A\mathbf{y}\rangle = \langle\mathbf{x},\mu\mathbf{y}\rangle = \mu\langle\mathbf{x},\mathbf{y}\rangle.$$ Therefore, $(\lambda-\mu)\langle\mathbf{x},\mathbf{y}\rangle = 0$. Since $\lambda-\mu\neq 0$, then $\langle\mathbf{x},\mathbf{y}\rangle = 0$, i.e., $\mathbf{x}\perp\mathbf{y}$.</p> <p>Now find an orthonormal basis for each eigenspace; since the eigenspaces are mutually orthogonal, these vectors together give an orthonormal subset of $\mathbb{R}^n$. Finally, since symmetric matrices are diagonalizable, this set will be a basis (just count dimensions). The result you want now follows.</p>
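<p><em>A numerical sketch I've added (not from the answer):</em> for a symmetric matrix, <code>numpy.linalg.eigh</code> returns an orthonormal set of eigenvectors, so both <span class="math-container">$Q^TQ=I$</span> and <span class="math-container">$A=Q\Lambda Q^T$</span> can be checked directly:</p>

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
A = M + M.T                   # symmetrize a random matrix

w, Q = np.linalg.eigh(A)      # eigenvalues w, orthonormal eigenvectors as columns of Q
assert np.allclose(Q.T @ Q, np.eye(4))        # columns are pairwise orthonormal
assert np.allclose(Q @ np.diag(w) @ Q.T, A)   # A = Q Lambda Q^T
```
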
<p>Since being symmetric is the property of an operator, not just its associated matrix, let me use <span class="math-container">$\mathcal{A}$</span> for the linear operator whose associated matrix in the standard basis is <span class="math-container">$A$</span>. Arturo and Will proved that a real symmetric operator <span class="math-container">$\mathcal{A}$</span> has real eigenvalues (thus real eigenvectors) and that eigenvectors corresponding to different eigenvalues are orthogonal. <em>One question still stands: how do we know that there are no generalized eigenvectors of rank more than 1, that is, all Jordan blocks are one-dimensional?</em> Indeed, by referencing the theorem that any symmetric matrix is diagonalizable, Arturo effectively threw the baby out with the bathwater: showing that a matrix is diagonalizable is tautologically equivalent to showing that it has a full set of eigenvectors. Assuming this as a given dismisses half of the question: we were asked to show that <span class="math-container">$\Lambda$</span> is diagonal, and not just a generic Jordan form. Here I will untangle this bit of circular logic.</p> <p>We prove by induction in the number of eigenvectors, namely it turns out that finding an eigenvector (and at least one exists for any matrix) of a symmetric matrix always allows us to generate another eigenvector. So we will run out of dimensions before we run out of eigenvectors, making the matrix diagonalizable.</p> <p>Suppose <span class="math-container">$\lambda_1$</span> is an eigenvalue of <span class="math-container">$A$</span> and there exists at least one eigenvector <span class="math-container">$\boldsymbol{v}_1$</span> such that <span class="math-container">$A\boldsymbol{v}_1=\lambda_1 \boldsymbol{v}_1$</span>. Choose an orthonormal basis <span class="math-container">$\boldsymbol{e}_i$</span> so that <span class="math-container">$\boldsymbol{e}_1=\boldsymbol{v}_1$</span>. 
The change of basis is represented by an orthogonal matrix <span class="math-container">$V$</span>. In this new basis the matrix associated with <span class="math-container">$\mathcal{A}$</span> is <span class="math-container">$$A_1=V^TAV.$$</span> It is easy to check that <span class="math-container">$\left(A_1\right)_{11}=\lambda_1$</span> and all the rest of the numbers <span class="math-container">$\left(A_1\right)_{1i}$</span> and <span class="math-container">$\left(A_1\right)_{i1}$</span> are zero. In other words, <span class="math-container">$A_1$</span> looks like this: <span class="math-container">$$\left( \begin{array}{c|ccc} \lambda_1 &amp; \\ \hline &amp; &amp; \\ &amp; &amp; B_1 &amp; \\ &amp; &amp; \end{array} \right)$$</span> Thus the operator <span class="math-container">$\mathcal{A}$</span> breaks down into a direct sum of two operators: <span class="math-container">$\lambda_1$</span> in the subspace <span class="math-container">$\mathcal{L}\left(\boldsymbol{v}_1\right)$</span> (<span class="math-container">$\mathcal{L}$</span> stands for linear span) and a symmetric operator <span class="math-container">$\mathcal{A}_1=\mathcal{A}\mid_{\mathcal{L}\left(\boldsymbol{v}_1\right)^{\bot}}$</span> whose associated <span class="math-container">$(n-1)\times (n-1)$</span> matrix is <span class="math-container">$B_1=\left(A_1\right)_{i &gt; 1,j &gt; 1}$</span>. <span class="math-container">$B_1$</span> is symmetric thus it has an eigenvector <span class="math-container">$\boldsymbol{v}_2$</span> which has to be orthogonal to <span class="math-container">$\boldsymbol{v}_1$</span> and the same procedure applies: change the basis again so that <span class="math-container">$\boldsymbol{e}_1=\boldsymbol{v}_1$</span> and <span class="math-container">$\boldsymbol{e}_2=\boldsymbol{v}_2$</span> and consider <span class="math-container">$\mathcal{A}_2=\mathcal{A}\mid_{\mathcal{L}\left(\boldsymbol{v}_1,\boldsymbol{v}_2\right)^{\bot}}$</span>, etc. 
After <span class="math-container">$n$</span> steps we will get a diagonal matrix <span class="math-container">$A_n$</span>.</p> <p>There is a slightly more elegant proof that does not involve the associated matrices: let <span class="math-container">$\boldsymbol{v}_1$</span> be an eigenvector of <span class="math-container">$\mathcal{A}$</span> and <span class="math-container">$\boldsymbol{v}$</span> be any vector such that <span class="math-container">$\boldsymbol{v}_1\bot \boldsymbol{v}$</span>. Then <span class="math-container">$$\left(\mathcal{A}\boldsymbol{v},\boldsymbol{v}_1\right)=\left(\boldsymbol{v},\mathcal{A}\boldsymbol{v}_1\right)=\lambda_1\left(\boldsymbol{v},\boldsymbol{v}_1\right)=0.$$</span> This means that the restriction <span class="math-container">$\mathcal{A}_1=\mathcal{A}\mid_{\mathcal{L}\left(\boldsymbol{v}_1\right)^{\bot}}$</span> is an operator on the <span class="math-container">$(n-1)$</span>-dimensional subspace <span class="math-container">${\mathcal{L}\left(\boldsymbol{v}_1\right)^{\bot}}$</span>, mapping it into itself. <span class="math-container">$\mathcal{A}_1$</span> is symmetric for obvious reasons and thus has an eigenvector <span class="math-container">$\boldsymbol{v}_2$</span> which will be orthogonal to <span class="math-container">$\boldsymbol{v}_1$</span>.</p>
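<p><em>An added numerical illustration (mine, not the answer's):</em> one step of the deflation argument — extend an eigenvector <span class="math-container">$\boldsymbol{v}_1$</span> to an orthonormal basis <span class="math-container">$V$</span> (QR on a generically independent completion does this) and observe the claimed block structure of <span class="math-container">$V^TAV$</span>:</p>

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
A = M + M.T                                  # symmetric 4x4

lam, vecs = np.linalg.eigh(A)
v1 = vecs[:, 0]                              # eigenvector for lambda_1 = lam[0]

# Extend v1 to an orthonormal basis via QR on [v1 | random columns];
# the first column of V is +/- v1, which does not affect the checks below.
V, _ = np.linalg.qr(np.column_stack([v1, rng.standard_normal((4, 3))]))
A1 = V.T @ A @ V

assert np.allclose(A1[0, 0], lam[0])          # top-left entry is lambda_1
assert np.allclose(A1[0, 1:], 0)              # rest of first row vanishes
assert np.allclose(A1[1:, 0], 0)              # rest of first column vanishes
assert np.allclose(A1[1:, 1:], A1[1:, 1:].T)  # B_1 is symmetric: induction continues
```
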
logic
<p>I enjoy reading about formal logic as an occasional hobby. However, one thing keeps tripping me up: I seem unable to understand what's being referred to when the word "type" (as in type theory) is mentioned.</p> <p>Now, I understand what types are in programming, and sometimes I get the impression that types in logic are just the same thing as that: we want to set our formal systems up so that you can't add an integer to a proposition (for example), and types are the formal mechanism to specify this. Indeed, <a href="https://en.wikipedia.org/wiki/Type_theory" rel="noreferrer">the Wikipedia page for type theory</a> pretty much says this explicitly in the lead section.</p> <p>However, it also goes on to imply that types are much more powerful than that. Overall, from everything I've read, I get the idea that types are:</p> <ul> <li>like types in programming</li> <li>something that is like a set but different in certain ways</li> <li>something that can prevent paradoxes</li> <li>the sort of thing that could replace set theory in the foundations of mathematics</li> <li>something that is not just analogous to the notion of a proposition but can be thought of as the same thing ("propositions as types")</li> <li>a concept that is <em>really, really deep</em>, and closely related to higher category theory</li> </ul> <p>The problem is that I have trouble reconciling these things. Types in programming seem to me quite simple, practical things (although the type system for any given programming language can be quite complicated and interesting). 
But in type theory it seems that somehow the types <em>are</em> the language, or that they are responsible for its expressive power in a much deeper way than is the case in programming.</p> <p>So I suppose my question is, for someone who understands types in (say) Haskell or C++, and who also understands first-order logic and axiomatic set theory and so on, how can I get from these concepts to the concept of <em>type theory</em> in formal logic? What <em>precisely</em> is a type in type theory, and what is the relationship between types in formal mathematics and types in computer science?</p> <p>(I am not looking for a formal definition of a type so much as the core idea behind it. I can find several formal definitions, but I've found they don't really help me to understand the underlying concept, partly because they are necessarily tied to the specifics of a <em>particular</em> type theory. If I can understand the motivation better it should make it easier to follow the definitions.)</p>
<p><strong>tl;dr</strong> Types only have meaning within type systems. There is no stand-alone definition of "type" except vague statements like "types classify terms". The notion of type in programming languages and type theory are basically the same, but different type systems correspond to different type theories. Often the term "type theory" is used specifically for a particular family of powerful type theories descended from Martin-Löf Type Theory. Agda and Idris are simultaneously proof assistants for such type theories and programming languages, so in this case there is no distinction whatsoever between the programming language and type theoretic notions of type.</p> <p>It's not the "types" themselves that are "powerful". First, you could recast first-order logic using types. Indeed, the sorts in <a href="https://en.wikipedia.org/wiki/Many-sorted_logic" rel="noreferrer">multi-sorted first-order logic</a>, are basically the same thing as types.</p> <p>When people talk about type theory, they often mean specifically Martin-Löf Type Theory (MLTT) or some descendant of it like the Calculus of (Inductive) Constructions. These are powerful higher-order logics that can be viewed as constructive set theories. But it is the specific system(s) that are powerful. The simply typed lambda calculus viewed from a propositions-as-types perspective is basically the proof theory of intuitionistic propositional logic which is a rather weak logical system. On the other hand, considering the equational theory of the simply typed lambda calculus (with some additional axioms) gives you something that is very close to the most direct understanding of higher-order logic as an extension of first-order logic. This view is the basis of the <a href="https://hol-theorem-prover.org/" rel="noreferrer">HOL</a> family of theorem provers.</p> <p>Set theory is an extremely powerful logical system. ZFC set theory is a first-order theory, i.e. a theory axiomatized in first-order logic. 
And what does set theory accomplish? Why, it's essentially an embedding of higher-order logic into first-order logic. In first-order logic, we can't say something like $$\forall P.P(0)\land(\forall n.P(n)\Rightarrow P(n+1))\Rightarrow\forall n.P(n)$$ but, in the first-order theory of set theory, we <em>can</em> say $$\forall P.0\in P\land (\forall n.n\in P\Rightarrow n+1\in P)\Rightarrow\forall n.n\in P$$ Sets behave like "first-class" predicates.</p> <p>While ZFC set theory and MLTT go beyond just being higher-order logic, higher-order logic on its own is already a powerful and ergonomic system as demonstrated by the HOL theorem provers for example. At any rate, as far as I can tell, having some story for doing higher-order-logic-like things is necessary to provoke any interest in something as a framework for mathematics from mathematicians. Or you can turn it around a bit and say you need some story for set-like things and "first-class" predicates do a passable job. This latter perspective is more likely to appeal to mathematicians, but to me the higher-order logic perspective better captures the common denominator.</p> <p>At this point it should be clear there is no magical essence in "types" themselves, but instead some families of type theories (i.e. type systems from a programming perspective) are very powerful. Most "powerful" type systems for programming languages are closely related to the polymorphic lambda calculus aka System F. From the proposition-as-types perspective, these correspond to intuitionistic second-order <em>propositional</em> logics, not to be confused with second-order (predicate) logics. It allows quantification over propositions (i.e. nullary predicates) but not over terms which don't even exist in this logic. <em>Classical</em> second-order propositional logic is easily reduced to classical propositional logic (sometimes called zero-order logic). 
This is because $\forall P.\varphi$ is reducible to $\varphi[\top/P]\land\varphi[\bot/P]$ classically. System F is surprisingly expressive, but viewed as a logic it is quite limited and far weaker than MLTT. The type systems of Agda, Idris, and Coq are descendants of MLTT. Idris in particular and Agda to a lesser extent are dependently typed programming languages.<sup>1</sup> Generally, the notion of type in a (static) type system and in type theory are essentially the same, but the significance of a type depends on the type system/type theory it is defined within. There is no real definition of "type" on its own. If you decide to look at e.g. Agda, you should be quickly disabused of the idea that "types are the language". All of these type theories have terms and the terms are not "made out of types". They typically look just like functional programs.</p> <p><sup>1</sup> I don't want to give the impression that "dependently typed" = "super powerful" or "MLTT derived". The LF family of languages e.g. Elf and Twelf are intentionally weak dependently typed specification languages that are far weaker than MLTT. From a propositions-as-types perspective, they correspond more to first-order logic.</p>
<blockquote> <p>I've found they don't really help me to understand the underlying concept, partly because they are necessarily tied to the specifics of a particular type theory. If I can understand the motivation better it should make it easier to follow the definitions.</p> </blockquote> <p>The basic idea: In ZFC set theory, there is just one kind of object - sets. In type theories, there are multiple kinds of objects. Each object has a particular kind, known as its "type". </p> <p>Type theories typically include ways to form new types from old types. For example, if we have types $A$ and $B$ we also have a new type $A \times B$ whose members are pairs $(a,b)$ where $a$ is of type $A$ and $b$ is of type $B$. </p> <p>For the simplest type theories, such as higher-order logic, that is essentially the only change from ordinary first-order logic. In this setting, all of the information about "types" is handled in the metatheory. But these systems are barely "type theories", because the theory itself doesn't really know anything about the types. We are really just looking at first-order logic with multiple sorts. By analogy to computer science, these systems are vaguely like statically typed languages - it is not possible to even write a well-formed formula/program which is not type safe, but the program itself has no knowledge about types while it is running. </p> <p>More typical type theories include ways to reason <em>about</em> types <em>within</em> the theory. In many type theories, such as ML type theory, the types themselves are objects of the theory. So it is possible to prove "$A$ is a type" as a <em>sentence</em> of the theory. It is also possible to prove sentences such as "$t$ has type $A$". These cannot even be expressed in systems like higher-order logic. </p> <p>In this way, these theories are not just "first order logic with multiple sorts", they are genuinely "a theory about types". 
Again by analogy, these systems are vaguely analogous to programming languages in which a program can make inferences about types of objects <em>during runtime</em> (analogy: we can reason about types within the theory). </p> <p>Another key aspect of type theories is that they often have their own <em>internal</em> logic. The <a href="https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence" rel="noreferrer">Curry-Howard correspondence</a> shows that, in particular settings, <em>formulas</em> of propositional or first-order logic correspond to <em>types</em> in particular type theories. Manipulating the types in a model of type theory corresponds, via the isomorphism, to manipulating formulas of first order logic. The isomorphism holds for many logic/type theory pairs, but it is strongest when the logic is intuitionistic, which is one reason intuitionistic/constructive logic comes up in the context of type theory. </p> <p>In particular, logical operations on formulas become type-forming operations. The "and" operator of logic becomes the product type operation, for example, while the "or" operator becomes a kind of "union" type. In this way, each model of type theory has its own "internal logic" - which is often a model of intuitionistic logic. </p> <p>The existence of this internal logic is one of the motivations for type theory. When we look at first-order logic, we treat the "logic" as sitting in the metatheory, and we look at the truth value of each formula within a model using classical connectives. In type theory, that is often a much less important goal. Instead we look at the collection of types in a model, and we are more interested in the way that the type forming operations work in the model than in the way the classical connectives work. </p>
logic
<p>I am learning about the difference between booleans and classical logics in Coq, and why logical propositions are sort of a superset of booleans:</p> <p><a href="https://stackoverflow.com/questions/31554453/why-are-logical-connectives-and-booleans-separate-in-coq/31568076#31568076">Why are logical connectives and booleans separate in Coq?</a></p> <p>The answer there explains the main reason for this distinction is because of the law of excluded middle, and how in intuitionistic logic, you can't prove this:</p> <pre><code>Definition excluded_middle := forall P : Prop, P \/ ~ P. </code></pre> <p>I have seen this stated in many different books and articles, but seem to have missed an explanation on why you can't prove it.</p> <p>In a way that a person with some programming/math experience can understand (not an expert in math), what is the reason why you can't prove this in intuitionistic logic?</p>
<p>If you could prove the law of the excluded middle, then it would be true in all systems satisfying intuitionistic axioms. So we just need to find some model of intuitionistic logic for which the law of the excluded middle fails.</p> <p>Let $X$ be a topological space. A proposition will consist of an open set, where $X$ is "true", and $\emptyset$ is "false". Union will be "or", intersection will be "and".</p> <p>We define $U\implies V$ to be the interior of $U^c \cup V$, and $\neg U = (U\implies \emptyset)$ is just the interior of $U^c$.</p> <p>The system I just described obeys the axioms of intuitionistic logic. But the law of the excluded middle says that $U\cup \neg U = X$, which fails for any open set whose complement is not open. So this is false for $X=\mathbb{R}$, for example, or even for the two-point <a href="https://en.wikipedia.org/wiki/Sierpi%C5%84ski_space">Sierpinski space</a>.</p> <p>In fact, intuitionistic logic can generally be interpreted in this geometric kind of way, though topological spaces are not general enough and we need things called elementary topoi to model intuitionistic logic.</p>
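<p>Since the Sierpinski example is finite, it can be checked mechanically. The following Python sketch (an addition of mine, not from the answer) implements the Heyting negation on the three open sets and confirms that <span class="math-container">$U \cup \neg U \neq X$</span> for the open point:</p>

```python
# Opens of the Sierpinski space X = {0, 1}: only {}, {1}, and X itself.
X = frozenset({0, 1})
opens = [frozenset(), frozenset({1}), X]

def interior(s):
    """Largest open subset of s (well defined: opens are closed under union)."""
    return max((u for u in opens if u <= s), key=len)

def neg(u):
    """Heyting negation: the interior of the complement."""
    return interior(X - u)

u = frozenset({1})
print((u | neg(u)) == X)  # False: the law of the excluded middle fails
```

<p>Here <code>neg({1})</code> is the interior of <code>{0}</code>, which is empty, so <code>{1} | neg({1})</code> is just <code>{1}</code>, a proper subset of <code>X</code>.</p>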
<p>Not knowing how much mathematics experience you've had, it can be difficult to explain the difference in viewpoint. </p> <p>Intuitionism, and constructive logic in general, are very much inspired by the fact that we as mathematicians do not know the answers to all mathematical questions. Classical logic is based on the idea that all mathematical statements are either true or false, even if we do not know the truth value. Constructive logic does not focus so much on whether statements are "really" true or false, in that sense - it focuses on whether we <em>know</em> a statement is true, or <em>know</em> it is false. Other answers have described this as a focus on proof rather than truth. </p> <p>So, before someone working in constructive mathematics can assert a statement, she needs to have a concrete justification for the truth of the statement. Arguments such as proof by contradiction are not permitted. The connective "or", in particular, has a different meaning. To assert a statement of the form "<span class="math-container">$A$</span> or <span class="math-container">$B$</span>", a constructive mathematician must know <em>which one</em> of the two options holds.</p> <p>One kind of example of how excluded middle fails in this framework was called a "weak counterexample" by Brouwer. Here is one (I don't know if it is due to Brouwer). Consider the number <span class="math-container">$\pi$</span>. As a fact, even in classical mathematics we do not currently know whether there are infinitely many occurrences of the digit <span class="math-container">$5$</span> in the decimal expansion of <span class="math-container">$\pi$</span>. Let <span class="math-container">$A$</span> be the statement "there are infinitely many occurrences of <span class="math-container">$5$</span> in the decimal expansion of <span class="math-container">$\pi$</span>". 
Then someone working in constructive mathematics cannot assert "<span class="math-container">$A$</span> or not <span class="math-container">$A$</span>", because she does not know <em>which</em> of the two options holds. Of course "<span class="math-container">$A$</span> or not <span class="math-container">$A$</span>" is an instance of the law of the excluded middle, and is valid in classical mathematics. </p> <p>The key point is that the first difference between constructive and classical logic deals with what it takes to assert a particular sentence. Mathematical terms such as "or", "there exists", and "not" take on a different meaning under the higher burden of evidence that is required to assert statements in constructive mathematics. </p>
number-theory
<p>The polynomial $n^2+n+41$ famously takes prime values for all $0\le n\lt 40$. I have read that this is closely related to the fact that 163 is a Heegner number, although I don't understand the argument, except that the discriminant of $n^2+n+41$ is $-163$. The next smaller Heegner number is 67, and indeed $n^2+n+17$ shows the same behavior, taking prime values for $0\le n\lt 16$. </p> <p>But 163 is the largest Heegner number, which suggests that this is the last such coincidence, and that there might not be any $k&gt;41$ such that $n^2+n+k$ takes on an unbroken sequence of prime values.</p> <p>Is this indeed the case? If not, why not?</p> <p>Secondarily, is there a $k&gt;41$, perhaps extremely large, such that $n^2+n+k$ takes on prime values for $0&lt;n&lt;j$ for some $j&gt;39$?</p>
<p>See <a href="http://en.wikipedia.org/wiki/Heegner_number#Consecutive_primes" rel="noreferrer">http://en.wikipedia.org/wiki/Heegner_number#Consecutive_primes</a> </p> <p>Here is <a href="http://gdz.sub.uni-goettingen.de/dms/load/img/?PID=GDZPPN002167719&amp;physid=PHYS_0159" rel="noreferrer">a longer piece Rabinowitz published on the topic below</a></p> <p>Rabinowitz showed, in 1913, that $x^2 + x + p$ represents the maximal number of consecutive primes if and only if $x^2 + x y + p y^2$ is the only (equivalence class of) positive binary quadratic form of its discriminant. This condition is called "class number one." For negative discriminants the set of such discriminants is finite, called Heegner numbers. </p> <p>Note that if we take $x^2 + x + ab$ so that the constant term is composite, we get a composite outcome both for $x=a$ and $x=b,$ so the thing quits early. In the terms of Rabinowitz, we would also have the form $a x^2 + x y + b y^2,$ which would be a distinct "class" in the same discriminant, thus violating class number one. So it all fits together. That is, it is "reduced" if $0 &lt; a \leq b,$ and distinct from the principal form if also $a &gt; 1.$</p> <p>For binary forms, I particularly like Buell, here is his page about class number one: <a href="http://books.google.com/books?id=qMa8-FDOR8YC&amp;pg=PA81&amp;lpg=PA81&amp;dq=class%20number%20one%20binary%20quadratic%20forms&amp;source=bl&amp;ots=0I1wN7aIMg&amp;sig=J-VYw4fNijIKFFF02oHkmaBSsb4&amp;hl=en&amp;sa=X&amp;ei=LQsHUdWMCJDSigKPiYGwBA&amp;ved=0CEsQ6AEwBA#v=onepage&amp;q=class%20number%20one%20binary%20quadratic%20forms&amp;f=false" rel="noreferrer">BUELL</a>. He does everything in terms of binary forms, but he also gives the relationship with quadratic fields. 
Furthermore, he allows both positive and indefinite forms, finally he allows odd $b$ in $a x^2 + b x y + c y^2.$ Note that I have often answered MSE questions about ideals in quadratic fields with these techniques, which are, quite simply, easy. Plus he shows the right way to do Pell's equation and variants as algorithms, which people often seem to misunderstand. Here I mean Lagrange's method mostly.</p> <p>EEDDIITT: I decided to figure out the easier direction of the Rabinowitz result in my own language. So, we begin with the principal form, $\langle 1,1,p \rangle$ with some prime $p \geq 3. $ It follows that $2p-4 \geq p-1$ and $$ p-2 \geq \frac{p-1}{2}. $$ Now, suppose we have another, distinct, reduced form of the same discriminant, $$ \langle a,b,c \rangle. $$ There is no loss of generality in demanding $b &gt; 0.$ So reduced means $$ 1 \leq b \leq a \leq c $$ with $b$ odd, both $a,c \geq 2,$ and $$ 4ac - b^2 = 4 p - 1. $$ As $b^2 \leq ac,$ we find $3 b^2 \leq 4p-1 $ and, as $p$ is positive, $ b \leq p.$ </p> <p>Now, define $$ b = 2 \beta + 1 $$ or $ \beta = \frac{b-1}{2}. $ From earlier we have $$ \beta \leq p-2. $$ However, from $4ac - b^2 = 4p-1$ we get $4ac+1 = b^2 + 4 p,$ then $ 4ac + 1 = 4 \beta^2 + 4 \beta + 1 + 4 p, $ then $4ac = 4 \beta^2 + 4 \beta + 4 p, $ then $$ ac = \beta^2 + \beta + p, $$ with $0 \leq \beta \leq p-2.$ That is, the presence of a second equivalence class of forms has forced $x^2 + x + p$ to represent a composite number ($ac$) with a small value $ x = \beta \leq p-2,$ thereby interrupting the supposed sequence of primes. $\bigcirc \bigcirc \bigcirc \bigcirc \bigcirc$ </p> <p>EEDDIITTEEDDIITT: I should point out that the discriminants in question must actually be field discriminants, certain things must be squarefree. If I start with $x^2 + x + 7,$ the related $x^2 + x y + 7 y^2$ of discriminant $-27$ is of class number one, but there is the imprimitive form $ \langle 3,3,3 \rangle $ with the same discriminant. 
Then, as above, we see that $x^2 + x + 7 = 9$ with $x=1 =(3-1)/2.$</p> <p>[ Rabinowitz, G. “Eindeutigkeit der Zerlegung in Primzahlfaktoren in quadratischen Zahlkörpern.” <em>Proc. Fifth Internat. Congress Math.</em> (Cambridge) <strong>1</strong>, 418-421, 1913. ]</p> <p>Edit October 30, 2013. Somebody asked at deleted question <a href="https://math.stackexchange.com/questions/546368/smallest-integer-n-s-t-fn-n2-n-41-is-composite">Smallest positive integer $n$ s.t. f(n) = $n^2 + n + 41$ is composite?</a> about the other direction in Rabinowitz (1913), and a little fiddling revealed that I also know how to do that. We have a positive principal form $$ \langle 1,1,p \rangle $$ where $$ - \Delta = 4 p - 1 $$ is also prime. Otherwise there is a second genus unless $ - \Delta = 27 $ or $ - \Delta = 343 $ or similar prime power, which is a problem I am going to ignore; our discriminant is minus a prime, $ \Delta = 1- 4 p . $ </p> <p>We are interested in the possibility that $n^2 + n + p$ is composite for $0 \leq n \leq p-2.$ If so, let $q$ be the smallest prime that divides such a composite number. We have $n^2 + n + p \equiv 0 \pmod q.$ This means $(2n+1)^2 \equiv \Delta \pmod q,$ so we know $\Delta$ is a quadratic residue. 
Next $n^2 + n + p \leq (p-1)^2 + 1.$ So, the smallest prime factor is smaller than $p-1,$ and $q &lt; p,$ so $q &lt; - \Delta.$</p> <p>Let's see, the two roots of $n^2 + n + p \pmod q$ add up to $q-1.$ One may confirm that if $m = q-1-n,$ then $m^2 + m + p \equiv 0 \pmod q.$ However, we cannot have $n = (q-1)/2$ with $n^2 + n + p \pmod q,$ because that implies $\Delta \equiv 0 \pmod q,$ and we were careful to make $- \Delta$ prime, with $q &lt; - \Delta.$ Therefore there are two distinct values of $n \pmod q,$ the two values add to $q-1.$ As a result, there is a value of $n$ with $n^2 + n + p \equiv 0 \pmod q$ and $n &lt; \frac{q-1}{2}.$</p> <p>Using the change of variable matrix $$ \left( \begin{array}{cc} 1 &amp; n \\ 0 &amp; 1 \end{array}\right) $$ written on the right, we find that $$ \langle 1,1,p \rangle \sim \langle 1,2n+1,qs \rangle $$ where $n^2 + n + p = q s,$ with $2n+1 &lt; q$ and $q \leq s.$ As a result, the new form $$ \langle q,2n+1,s \rangle $$ is of the same discriminant but is already reduced, and is therefore not equivalent to the principal form. Thus, the class number is larger than one. </p>
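<p>The arithmetic facts used above are easy to check by machine. Here is a small Python sketch (mine, not part of the answer): the prime streak of <span class="math-container">$x^2+x+41$</span>, its failure at <span class="math-container">$x=40$</span>, and the early composite value forced by a composite constant term <span class="math-container">$ab$</span> (the choice <span class="math-container">$a=5$</span>, <span class="math-container">$b=7$</span> is an arbitrary illustration):</p>

```python
def is_prime(n):
    """Trial-division primality test, adequate for these small values."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# x^2 + x + 41 is prime for all 0 <= x < 40 ...
print(all(is_prime(x * x + x + 41) for x in range(40)))  # True

# ... but x = 40 gives 41^2, ending the streak.
print(40 * 40 + 40 + 41 == 41 * 41)  # True

# With a composite constant term ab, the streak "quits early":
# x = a already yields a value divisible by a.
a, b = 5, 7
print((a * a + a + a * b) % a == 0)  # True: 65 = 5 * 13
```
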
<p>To answer the second part of your question: The requirement that $n^2+n+k$ be prime for $0\lt n\lt40$ defines a <a href="http://en.wikipedia.org/wiki/Prime_k-tuple#Prime_constellations">prime constellation</a>. The <a href="http://en.wikipedia.org/wiki/First_Hardy%E2%80%93Littlewood_conjecture#First_Hardy.E2.80.93Littlewood_conjecture">first Hardy&ndash;Littlewood conjecture</a>, which is widely believed to be likely to hold, states that the asymptotic density of prime constellations is correctly predicted by a probabilistic model based on independently uniformly distributed residues with respect to all primes. Here's a Sage session that calculates the coefficient for your prime constellation:</p> <pre><code>sage: P = Primes (); sage: coefficient = 1; sage: for i in range (0,10000) : ... p = P.unrank (i); ... if (p &lt; 39^2 + 39) : ... admissible = list (true for j in range (0,p)); ... for n in range (1,40) : ... admissible [(n^2+n) % p] = false; ... count = 0; ... for j in range (0,p) : ... if (admissible [j]) : ... count = count + 1; ... else : ... count = p - 39; ... coefficient = coefficient * (count / p) / (1 - 1 / p)^39; ... sage: coefficient.n () 2.22848364649549e27 </code></pre> <p>The change upon doubling the cutoff $10000$ is in the third digit, so the number of such prime constellations up to $N$ is expected to be asymptotically given approximately by $2\cdot10^{27}N/\log^{39}N$. This is $1$ for $N\approx4\cdot10^{54}$, so although there are expected to be infinitely many, you'd probably have quite a bit of searching to do to find one. There are none except the one with $k=41$ up to $k=1000000$:</p> <pre><code>sage: P = Primes (); sage: for k in range (0,1000000) : ... success = true; ... for n in range (1,40) : ... if not (n^2 + n + k) in P : ... success = false; ... break; ... if success : ... print k; 41 </code></pre>
matrices
<blockquote> <p>How can I prove <span class="math-container">$\operatorname{rank}A^TA=\operatorname{rank}A$</span> for any <span class="math-container">$A\in M_{m \times n}$</span>?</p> </blockquote> <p>This is an exercise in my textbook associated with orthogonal projections and Gram-Schmidt process, but I am unsure how they are relevant.</p>
<p>Let $\mathbf{x} \in N(A)$ where $N(A)$ is the null space of $A$. </p> <p>So, $$\begin{align} A\mathbf{x} &amp;=\mathbf{0} \\\implies A^TA\mathbf{x} &amp;=\mathbf{0} \\\implies \mathbf{x} &amp;\in N(A^TA) \end{align}$$ Hence $N(A) \subseteq N(A^TA)$.</p> <p>Again let $\mathbf{x} \in N(A^TA)$</p> <p>So, $$\begin{align} A^TA\mathbf{x} &amp;=\mathbf{0} \\\implies \mathbf{x}^TA^TA\mathbf{x} &amp;=\mathbf{0} \\\implies (A\mathbf{x})^T(A\mathbf{x})&amp;=\mathbf{0} \\\implies A\mathbf{x}&amp;=\mathbf{0}\\\implies \mathbf{x} &amp;\in N(A) \end{align}$$ Hence $N(A^TA) \subseteq N(A)$.</p> <p>Therefore $$\begin{align} N(A^TA) &amp;= N(A)\\ \implies \dim(N(A^TA)) &amp;= \dim(N(A))\\ \implies \text{rank}(A^TA) &amp;= \text{rank}(A)\end{align}$$</p>
<p>Let $r$ be the rank of $A \in \mathbb{R}^{m \times n}$. We then have the SVD of $A$ as $$A_{m \times n} = U_{m \times r} \Sigma_{r \times r} V^T_{r \times n}$$ This gives $A^TA$ as $$A^TA = V_{n \times r} \Sigma_{r \times r}^2 V^T_{r \times n}$$ which is nothing but the SVD of $A^TA$. From this it is clear that $A^TA$ also has rank $r$. In fact the singular values of $A^TA$ are nothing but the square of the singular values of $A$.</p>
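<p>Both proofs can be spot-checked numerically. Here is a pure-Python sketch (not from either answer) that computes ranks by exact Gaussian elimination over the rationals and verifies <span class="math-container">$\operatorname{rank}(A^TA)=\operatorname{rank}(A)$</span> on a sample matrix; recall the identity is for <em>real</em> matrices, as both answers assume:</p>

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination with exact rational arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def mat_mul(A, B):
    """Naive matrix product; zip(*B) iterates over the columns of B."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [2, 4], [0, 1]]         # 3x2 real matrix of rank 2
At = [list(row) for row in zip(*A)]  # transpose
print(rank(A), rank(mat_mul(At, A)))  # 2 2
```
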
game-theory
<p>A mathematician and a computer are playing a game: First, the mathematician chooses an integer from the range $2,...,1000$. Then, the computer chooses an integer <em>uniformly at random</em> from the same range. If the numbers chosen share a prime factor, the larger number wins. If they do not, the smaller number wins. (If the two numbers are the same, the game is a draw.)</p> <p>Which number should the mathematician choose in order to maximize his chances of winning?</p>
<p>For fixed range:</p> <pre><code>range = 16; a = Table[Table[FactorInteger[y][[n, 1]], {n, 1, PrimeNu[y]}], {y, 1, range}]; b = Table[Sort@DeleteDuplicates@ Flatten@Table[ Table[Position[a, a[[y, m]]][[n, 1]], {n, 1, Length@Position[a, a[[y, m]]]}], {m, 1, PrimeNu[y]}], {y, 1, range}]; c = Table[Complement[Range[range], b[[n]]], {n, 1, range}]; d = Table[Range[n, range], {n, 1, range}]; e = Table[Range[1, n], {n, 1, range}]; w = Table[DeleteCases[DeleteCases[Join[Intersection[c[[n]], e[[n]]], Intersection[b[[n]], d[[n]]]], 1], n], {n, 1, range}]; l = Table[DeleteCases[DeleteCases[Complement[Range[range], w[[n]]], 1], n], {n, 1, range}]; results = Table[Length@l[[n]], {n, 1, range}]; cf = Grid[{{Join[{"n"}, Rest@(r = Range[range])] // ColumnForm, Join[{"win against n"}, Rest@w] // ColumnForm, Join[{"lose against n"}, Rest@l] // ColumnForm, Join[{"probability win for n"}, (p = Drop[Table[ results[[n]]/Total@Drop[results, 1] // N,{n, 1, range}], 1])] // ColumnForm}}] Flatten[Position[p, Max@p] + 1] </code></pre> <p>isn't great code, but fun to play with for small ranges, gives</p> <p><img src="https://i.sstatic.net/6oqyM.png" alt="enter image description here"> <img src="https://i.sstatic.net/hxDGU.png" alt="enter image description here"></p> <p>and perhaps more illuminating</p> <pre><code>rr = 20; Grid[{{Join[{"range"}, Rest@(r = Range[rr])] // ColumnForm, Join[{"best n"}, (t = Rest@Table[ a = Table[Table[FactorInteger[y][[n, 1]], {n, 1, PrimeNu[y]}], {y, 1, range}]; b = Table[Sort@DeleteDuplicates@Flatten@Table[Table[ Position[a, a[[y, m]]][[n, 1]], {n, 1,Length@Position[a, a[[y, m]]]}], {m, 1,PrimeNu[y]}], {y, 1, range}]; c = Table[Complement[Range[range], b[[n]]], {n, 1, range}]; d = Table[Range[n, range], {n, 1, range}]; e = Table[Range[1, n], {n, 1, range}]; w = Table[DeleteCases[DeleteCases[Join[Intersection[c[[n]], e[[n]]], Intersection[b[[n]], d[[n]]]], 1], n], {n, 1, range}]; l = Table[DeleteCases[DeleteCases[Complement[Range[range], w[[n]]], 1], n], {n,1, 
range}]; results = Table[Length@l[[n]], {n, 1, range}]; p = Drop[Table[results[[n]]/Total@Drop[results, 1] // N, {n, 1, range}], 1]; {Flatten[Position[p, Max@p] + 1], Max@p}, {range, 1, rr}]/.Indeterminate-&gt; draw); Table[t[[n, 1]], {n, 1, rr - 1}]] // ColumnForm, Join[{"probability for win"}, Table[t[[n, 2]], {n, 1, rr - 1}]] // ColumnForm}}] </code></pre> <p>compares ranges:</p> <p><img src="https://i.sstatic.net/P2aA8.png" alt="enter image description here"></p> <p>Plotting mean "best $n$" against $\sqrt{\text{range}}$ gives</p> <p><img src="https://i.sstatic.net/LMkAz.png" alt="enter image description here"></p> <p>For range=$1000,$ "best $n$" are $29$ and $31$, which can be seen as maxima in this plot:</p> <p><img src="https://i.sstatic.net/06gXJ.png" alt="enter image description here"></p> <h1>Update</h1> <p>In light of DanielV's comment that a "primes vs winchance" graph would probably be enlightening, I did a little bit of digging, and it turns out that it is. Looking at the "winchance" (just a weighting for $n$) of the primes in the range only, it is possible to give a fairly accurate prediction using</p> <pre><code>range = 1000; a = Table[Table[FactorInteger[y][[n, 1]], {n, 1, PrimeNu[y]}], {y, 1, range}]; b = Table[Sort@DeleteDuplicates@Flatten@Table[ Table[Position[a, a[[y, m]]][[n, 1]], {n, 1, Length@Position[a, a[[y, m]]]}], {m, 1, PrimeNu[y]}], {y, 1, range}]; c = Table[Complement[Range[range], b[[n]]], {n, 1, range}]; d = Table[Range[n, range], {n, 1, range}]; e = Table[Range[1, n], {n, 1, range}]; w = Table[ DeleteCases[ DeleteCases[ Join[Intersection[c[[n]], e[[n]]], Intersection[b[[n]], d[[n]]]], 1], n], {n, 1, range}]; l = Table[ DeleteCases[DeleteCases[Complement[Range[range], w[[n]]], 1], n], {n, 1, range}]; results = Table[Length@l[[n]], {n, 1, range}]; p = Drop[Table[ results[[n]]/Total@Drop[results, 1] // N, {n, 1, range}], 1]; {Flatten[Position[p, Max@p] + 1], Max@p}; qq = Prime[Range[PrimePi[2], PrimePi[range]]] - 1; 
Show[ListLinePlot[Table[p[[t]] range, {t, qq}], DataRange -&gt; {1, Length@qq}], ListLinePlot[ Table[2 - 2/Prime[x] - 2/range (-E + Prime[x]), {x, 1, Length@qq + 0}], PlotStyle -&gt; Red], PlotRange -&gt; All] </code></pre> <p><img src="https://i.sstatic.net/BwAmp.png" alt="enter image description here"></p> <p>The plot above (there are $2$ plots here) show the values of "winchance" for primes against a plot of $$2+\frac{2 (e-p_n)}{\text{range}}-\frac{2}{p_n}$$</p> <p>where $p_n$ is the $n$th prime, and "winchance" is the number of possible wins for $n$ divided by total number of possible wins in range ie $$\dfrac{\text{range}}{2}\left(\text{range}-1\right)$$ eg $499500$ for range $1000$.</p> <p><img src="https://i.sstatic.net/zFegO.png" alt="enter image description here"></p> <pre><code>Show[p // ListLinePlot, ListPlot[N[ Transpose@{Prime[Range[PrimePi[2] PrimePi[range]]], Table[(2 + (2*(E - Prime[x]))/range - 2/Prime[x])/range, {x, 1, Length@qq}]}], PlotStyle -&gt; {Thick, Red, PointSize[Medium]}, DataRange -&gt; {1, range}]] </code></pre> <h1>Added</h1> <p>Bit of fun with game simulation:</p> <pre><code>games = 100; range = 30; table = Prime[Range[PrimePi[range]]]; choice = Nearest[table, Round[Sqrt[range]]][[1]]; y = RandomChoice[Range[2, range], games]; z = Table[ Table[FactorInteger[y[[m]]][[n, 1]], {n, 1, PrimeNu[y[[m]]]}], {m, 1, games}]; Count[Table[If[Count[z, choice] == 0 &amp;&amp; y[[m]] &lt; choice \[Or] Count[z, choice] &gt; 0 &amp;&amp; y[[m]] &lt; choice, "lose", "win"], {m, 1, games}], "win"] </code></pre> <p>&amp; simulated wins against computer over variety of ranges </p> <p><img src="https://i.sstatic.net/82BQG.png" alt="enter image description here"></p> <p>with</p> <pre><code>Clear[range] highestRange = 1000; ListLinePlot[Table[games = 100; table = Prime[Range[PrimePi[range]]]; choice = Nearest[table, Round[Sqrt[range]]][[1]]; y = RandomChoice[Range[2, range], games]; z = Table[Table[FactorInteger[y[[m]]][[n, 1]], {n, 1, PrimeNu[y[[m]]]}], 
{m, 1, games}]; Count[Table[ If[Count[z, choice] == 0 &amp;&amp; y[[m]] &lt; choice \[Or] Count[z, choice] &gt; 0 &amp;&amp; y[[m]] &lt; choice, "lose", "win"], {m, 1, games}], "win"], {range,2, highestRange}], Filling -&gt; Axis, PlotRange-&gt; All] </code></pre> <h1>Added 2</h1> <p>Plot of mean "best $n$" up to range$=1000$ with tentative conjectured error bound of $\pm\dfrac{\sqrt{\text{range}}}{\log(\text{range})}$ for range$&gt;30$.</p> <p><img src="https://i.sstatic.net/gjQ7E.png" alt="enter image description here"></p> <p>I could well be wrong here though. - In fact, on reflection, I think I am (<a href="https://math.stackexchange.com/questions/865820/very-tight-prime-bounds">related</a>).</p>
<p>First consider choosing a prime $p$ in the range $[2,N]$. You lose only if the computer chooses a multiple of $p$ or a number smaller than $p$, which occurs with probability $$ \frac{(\lfloor{N/p}\rfloor-1)+(p-2)}{N-1}=\frac{\lfloor{p+N/p}\rfloor-3}{N-1}. $$ The term inside the floor function has derivative $$ 1-\frac{N}{p^2}, $$ so it increases for $p\le \sqrt{N}$ and decreases thereafter. The floor function does not change this behavior. So the best prime to choose is always one of the two closest primes to $\sqrt{N}$ (the one on its left and one its right, unless $N$ is the square of a prime). Your chance of losing with this strategy will be $\sim 2/\sqrt{N}$.</p> <p>On the other hand, consider choosing a composite $q$ whose prime factors are $$p_1 \le p_2 \le \ldots \le p_k.$$ Then the computer certainly wins if it chooses a prime number less than $q$ (other than any of the $p$'s); there are about $q / \log q$ of these by the prime number theorem. It also wins if it chooses a multiple of $p_1$ larger than $q$; there are about $(N-q)/p_1$ of these. Since $p_1 \le \sqrt{q}$ (because $q$ is composite), the computer's chance of winning here is at least about $$ \frac{q}{N\log q}+\frac{N-q}{N\sqrt{q}}. $$ The first term increases with $q$, and the second term decreases. The second term is larger than $(1/3)/\sqrt{N}$ until $q \ge (19-\sqrt{37})N/18 \approx 0.72 N$, at which point the first is already $0.72 / \log{N}$, which is itself larger than $(5/3)/\sqrt{N}$ as long as $N &gt; 124$. So the sum of these terms will always be larger than $2/\sqrt{N}$ for $N &gt; 124$ or so, meaning that the computer has a better chance of winning than if you'd chosen the best prime.</p> <p>This rough calculation shows that choosing the prime closest to $\sqrt{N}$ is the best strategy for sufficiently large $N$, where "sufficiently large" means larger than about $100$. 
(Other answers have listed the exceptions, the largest of which appears to be $N=30$, consistent with this calculation.)</p>
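<p>The analysis agrees with a direct brute-force count of the game (a Python sketch of my own, encoding the rules from the question): for <span class="math-container">$N=1000$</span> the maximizing choices come out to the primes <span class="math-container">$29$</span> and <span class="math-container">$31$</span> flanking <span class="math-container">$\sqrt{1000}\approx 31.6$</span>, matching the plot in the other answer:</p>

```python
from math import gcd

N = 1000

def wins(m):
    """Number of computer picks in 2..N that the choice m beats."""
    total = 0
    for c in range(2, N + 1):
        if c == m:
            continue  # identical picks draw
        share = gcd(m, c) > 1
        # shared prime factor: larger wins; coprime: smaller wins
        if (share and m > c) or (not share and m < c):
            total += 1
    return total

scores = {m: wins(m) for m in range(2, N + 1)}
best = max(scores.values())
print(sorted(m for m, s in scores.items() if s == best))  # [29, 31]
```

<p>Both optima lose to exactly <span class="math-container">$60$</span> of the computer's picks, in line with the formula <span class="math-container">$\lfloor N/p\rfloor-1+(p-2)$</span> above.</p>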
linear-algebra
<p>In <a href="https://math.stackexchange.com/questions/1696686/is-linear-algebra-laying-the-foundation-for-something-important">a recent question</a>, it was discussed how LA is a foundation to other branches of mathematics, be they pure or applied. One answer argued that linear problems are <em>fully understood</em>, and hence a natural target to reduce pretty much anything to. </p> <p>Now, it's evident enough that such a linearisation, if possible, tends to make hard things easier. Find a Hilbert space hidden in your domain, obtain an orthonormal basis, and <em>bam</em>, any point/state can be described as a mere sequence of numbers, any mapping boils down to a matrix; we have some good theorems for existence of inverses / eigenvectors/exponential objects / etc..</p> <p>So, LA sure is <em>convenient</em>.</p> <p>OTOH, it seems unlikely that any nontrivial mathematical system could ever be said to be <em>thoroughly understood</em>. Can't we always find new questions within any such framework that haven't been answered yet? I'm not firm enough with G&ouml;del's incompleteness theorems to judge whether they are relevant here. The first incompleteness theorem says that discrete disciplines like number theory can't be both complete and consistent. Surely this is all the more true for e.g. topology.</p> <p>Is LA for some reason exempt from such arguments, or does it for some other reason deserve to be called <em>the</em> best understood branch of mathematics?</p>
<p>It's closer to true that all the questions in finite-dimensional linear algebra that can be asked in an introductory course can be answered in an introductory course. This is wildly far from true in most other areas. In number theory, algebraic topology, geometric topology, set theory, and theoretical computer science, for instance, here are some questions you could ask within a week of beginning the subject: how many pairs of primes are separated by two? How many homotopically distinct maps are there between two given spaces? How can we tell apart two four-dimensional manifolds? Are there sets in between a given set and its powerset in cardinality? Are various natural complexity classes actually distinct?</p> <p>None of these questions is very close to completely answered; only one is really anywhere close; in fact one is known to be unanswerable in general; and partial answers to each of these have earned massive accolades from the mathematical community. No such phenomena can be observed in finite-dimensional linear algebra, where we can classify in a matter of a few lectures all possible objects, give explicit algorithms to determine when two examples are isomorphic, determine precisely the structure of the spaces of maps between two vector spaces, and understand in great detail how to represent morphisms in various ways amenable to computation. Thus linear algebra becomes both the archetypal example of a completely successful mathematical field, and a powerful tool when other mathematical fields can be reduced to it. </p> <p>This is an extended explanation of the claim that linear algebra is "thoroughly understood." That doesn't mean "totally understood," as you claim!</p>
<p>The hard parts of linear algebra have been given new and different names, such as representation theory, invariant theory, quantum mechanics, functional analysis, Markov chains, C*-algebras, numerical methods, commutative algebra, and K-theory. Those are full of mysteries and open problems.</p> <p>What is left over as the "linear algebra" taught to students is a much smaller subject that was mostly finished a hundred years ago. </p>
linear-algebra
<p>Let $\mathbb{F}_3$ be the field with three elements. Let $n\geq 1$. How many elements do the following groups have?</p> <ol> <li>$\text{GL}_n(\mathbb{F}_3)$</li> <li>$\text{SL}_n(\mathbb{F}_3)$</li> </ol> <p>Here GL is the <a href="http://en.wikipedia.org/wiki/general_linear_group">general linear group</a>, the group of invertible <em>n</em>×<i>n</i> matrices, and SL is the <a href="http://en.wikipedia.org/wiki/Special_linear_group">special linear group</a>, the group of <em>n</em>×<i>n</i> matrices with determinant 1. </p>
<p><strong>First question:</strong> We solve the problem for &quot;the&quot; finite field <span class="math-container">$F_q$</span> with <span class="math-container">$q$</span> elements. The first row <span class="math-container">$u_1$</span> of the matrix can be anything but the <span class="math-container">$0$</span>-vector, so there are <span class="math-container">$q^n-1$</span> possibilities for the first row. For <em>any</em> one of these possibilities, the second row <span class="math-container">$u_2$</span> can be anything but a multiple of the first row, giving <span class="math-container">$q^n-q$</span> possibilities.</p> <p>For any choice <span class="math-container">$u_1, u_2$</span> of the first two rows, the third row can be anything but a linear combination of <span class="math-container">$u_1$</span> and <span class="math-container">$u_2$</span>. The number of linear combinations <span class="math-container">$a_1u_1+a_2u_2$</span> is just the number of choices for the pair <span class="math-container">$(a_1,a_2)$</span>, and there are <span class="math-container">$q^2$</span> of these. It follows that for every <span class="math-container">$u_1$</span> and <span class="math-container">$u_2$</span>, there are <span class="math-container">$q^n-q^2$</span> possibilities for the third row.</p> <p>For any allowed choice <span class="math-container">$u_1$</span>, <span class="math-container">$u_2$</span>, <span class="math-container">$u_3$</span>, the fourth row can be anything except a linear combination <span class="math-container">$a_1u_1+a_2u_2+a_3u_3$</span> of the first three rows. Thus for every allowed <span class="math-container">$u_1, u_2, u_3$</span> there are <span class="math-container">$q^3$</span> forbidden fourth rows, and therefore <span class="math-container">$q^n-q^3$</span> allowed fourth rows.</p> <p>Continue. 
The number of non-singular matrices is <span class="math-container">$$(q^n-1)(q^n-q)(q^n-q^2)\cdots (q^n-q^{n-1}).$$</span></p> <p><strong>Second question:</strong> We first deal with the case <span class="math-container">$q=3$</span> of the question. If we multiply the first row by <span class="math-container">$2$</span>, any matrix with determinant <span class="math-container">$1$</span> is mapped to a matrix with determinant <span class="math-container">$2$</span>, and any matrix with determinant <span class="math-container">$2$</span> is mapped to a matrix with determinant <span class="math-container">$1$</span>.</p> <p>Thus we have produced a <em>bijection</em> between matrices with determinant <span class="math-container">$1$</span> and matrices with determinant <span class="math-container">$2$</span>. It follows that <span class="math-container">$SL_n(F_3)$</span> has half as many elements as <span class="math-container">$GL_n(F_3)$</span>.</p> <p>The same idea works for any finite field <span class="math-container">$F_q$</span> with <span class="math-container">$q$</span> elements. Multiplying the first row of a matrix with determinant <span class="math-container">$1$</span> by the non-zero field element <span class="math-container">$a$</span> produces a matrix with determinant <span class="math-container">$a$</span>, and all matrices with determinant <span class="math-container">$a$</span> can be produced in this way. It follows that <span class="math-container">$$|SL_n(F_q)|=\frac{1}{q-1}|GL_n(F_q)|.$$</span></p>
<p>Determinant function is a surjective homomorphism from <span class="math-container">$GL(n, F)$</span> to <span class="math-container">$F^*$</span> with kernel <span class="math-container">$SL(n, F)$</span>. Hence by the fundamental isomorphism theorem <span class="math-container">$\frac{GL(n,F)}{SL(n,F)}$</span> is isomorphic to <span class="math-container">$F^*$</span>, the multiplicative group of nonzero elements of <span class="math-container">$F$</span>. </p> <p>Thus if <span class="math-container">$F$</span> is finite with <span class="math-container">$p$</span> elements then <span class="math-container">$|GL(n,F)|=(p-1)|SL(n, F)|$</span>.</p>
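<p>For small <span class="math-container">$n$</span> both results can be confirmed by brute force. This Python sketch (my addition, not from the answers) enumerates all <span class="math-container">$2\times 2$</span> matrices over <span class="math-container">$\mathbb{F}_3$</span> and checks the product formula and the index relation <span class="math-container">$|SL_n(F_q)|=|GL_n(F_q)|/(q-1)$</span>:</p>

```python
from itertools import product

q, n = 3, 2  # field size and matrix dimension

gl = sl = 0
for a, b, c, d in product(range(q), repeat=n * n):
    det = (a * d - b * c) % q
    if det != 0:
        gl += 1  # invertible: element of GL_2(F_3)
    if det == 1:
        sl += 1  # determinant 1: element of SL_2(F_3)

formula = (q**n - 1) * (q**n - q)
print(gl, sl, formula)  # 48 24 48
```

<p>So <span class="math-container">$|GL_2(F_3)|=48$</span> and <span class="math-container">$|SL_2(F_3)|=24$</span>, exactly half, as the bijection and quotient arguments predict.</p>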
logic
<p>Why is establishing the absolute consistency of ZFC impossible? What are the fundamental limitations that prohibit us with coming up with a proof?</p> <p><strong>EDIT:</strong> <a href="https://mathoverflow.net/q/24919">This</a> post seems to make the most sense. In short: if we were to come up with a mathematical proof of the consistency of ZFC, we would be able to mimic that proof inside ZFC. Ergo, if ZFC is consistent, there can be no proof that it is.</p>
<p>This kind of question often includes several common points of confusion.</p> <h3>Confusing point 1: we cannot talk rigorously about a statement being &quot;unprovable&quot; without reference to the formal system for doing the proof.</h3> <p>There is a mathematical, formally defined relation of provability between a formal system and a sentence, which defines what it means for the sentence to be &quot;provable&quot; from the formal system. This relation depends on both the statement and the formal system. On the other hand, there is no rigorous notion of &quot;provable&quot; without reference to a formal system.</p> <p>So it does not really make any sense to talk about a statement being &quot;unprovable&quot; without reference to what system is doing the proving.</p> <p>In particular, every statement is provable from a formal system that already includes that statement as an axiom. That is somewhat trivial - but even if the statement is not an axiom, it is still the case that the only way for a statement to be provable in a particular system is for the statement to already be a consequence of the axioms of the system. This is true for Con(ZFC) and for every other mathematical statement. Some statements are provable from no axioms at all - these are called logically valid. The incompleteness theorems show in a very strong way that Con(ZFC) is not logically valid.</p> <h3>Confusing point 2: Con(ZFC) is not different, in the end, than many other statements that we accept as &quot;provable&quot; without reference to consistency.</h3> <p>There is a particular polynomial <span class="math-container">$P$</span>, with integer coefficients, integer exponents, and many variables, so that the statement Con(ZFC) can be expressed as &quot;there are no positive integer inputs which cause the value of <span class="math-container">$P$</span> to equal 0&quot;. 
This statement is not particularly different in form, for example, than the special case of Fermat's last theorem: &quot;there are not any positive integer inputs <span class="math-container">$x,y,z$</span> such that <span class="math-container">$x^{7}+y^{7}-z^{7} = 0$</span>.&quot;</p> <p>Nobody seriously claims that the proof of that special case of Fermat's last theorem is merely a conditional claim that can never be &quot;absolutely proved&quot; without assuming the consistency of some theory. But Con(ZFC) is not significantly different in form - just longer - than that special case of Fermat's last theorem. Both of these are just statements that some multivariable integer polynomial is never equal to zero on positive integer inputs.</p> <p>The real situation is that <em>almost nothing</em> can be proved &quot;absolutely&quot;. Unless a statement is logically valid, additional axioms will be required to prove it. There is nothing special about Con(ZFC) in that respect. Any proof of a non-trivial theorem will always rely on extra &quot;assumptions&quot;, and in a trivial sense those assumptions always have to be at least as strong as the theorem we are trying to prove.</p> <h3>Confusing point 3: ZFC is not the strongest natural system for set theory</h3> <p>There are many, many axiom systems for set theory. ZFC is of interest because its axioms are very natural to motivate and because it is strong enough to formalize the vast majority of mathematical theorems. But that does not mean that ZFC is somehow a stopping point. There are natural systems of set theory stronger than ZFC.</p> <p>One particular example is <a href="https://en.wikipedia.org/wiki/Morse%E2%80%93Kelley_set_theory" rel="noreferrer">Morse--Kelley set theory</a>, MK, which was exposited in the appendix of Kelley's book <em>General Topology</em>. This is a perfectly reasonable system for set theory, which happens to prove Con(ZFC).
We should pay attention to the formal meaning of this: there is a finite derivation that has no assumptions besides the axioms of MK, and which ends with Con(ZFC). And note that MK was designed by a mathematician and published in a mathematics textbook for mathematical purposes.</p> <p>The argument given in <a href="https://mathoverflow.net/a/24919/5442">this MathOverflow answer</a> includes a particular premise: that all &quot;mathematical&quot; techniques are already included in ZFC. That is the &quot;Key Assumption&quot; in that argument, which was alluded to in the question above.</p> <p>There <em>is</em> some merit in that heuristic argument: many of the standard techniques of mathematics can be formalized in ZFC. The situation is not as clear as we might suppose, though, because of examples like MK set theory, which must be defined as &quot;nonmathematical&quot; in order for the Key Assumption to hold. At the worst, we could find ourselves making a circular argument, where we <em>define</em> &quot;mathematical&quot; to be &quot;formalizable in ZFC&quot; and then argue that ZFC is able to formalize all mathematical arguments.</p> <p>A deeper question, which cannot be answered because it is ahistorical, is whether MK would be considered a more &quot;natural&quot; system if it had been exposited before ZFC. Just like ZFC, MK is based on a natural intuition about the nature of sets and classes, which leads to the next point.</p> <h3>An informal, &quot;mathematical&quot; proof of the consistency of ZFC</h3> <p>What if we don't look at formal theories, and we just look for a &quot;mathematical&quot; but informal proof of the consistency of ZFC? Actually, we have one: there is a well known argument that our intuitive picture of the cumulative hierarchy shows that ZFC is satisfied by the cumulative hierarchy, and thus ZFC is unable to prove a contradiction. Of course, this argument relies on a pre-existing, informal understanding of the cumulative hierarchy. 
So not all mathematicians will accept it - but it is of interest exactly because it is very compelling as an informal argument.</p> <p>If we want to separate &quot;mathematical proof&quot; from &quot;formal proof&quot;, then it is not at all clear why this kind of proof should be out of the question. Unlike formal proofs, mathematicians may differ on whether they accept this informal proof of Con(ZFC). But it is certainly some kind of informal &quot;mathematical proof&quot; of Con(ZFC), if we are willing to consider informal proofs as mathematical.</p> <h3>So what is the deal with Con(ZFC)?</h3> <p>We should really ask <em>why</em> someone would be interested in an &quot;absolute&quot; consistency proof of ZFC. Presumably, it is because they doubt that ZFC is consistent, and they want to shore up their belief (Hilbert's program can be caricatured in this way).</p> <p>In that case, as soon as we see from the incompleteness theorems that the consistency of ZFC is not a logical validity, we are naturally led to an alternate question: &quot;Which theories are strong enough to prove the consistency of ZFC?&quot; There has been a lot of work on that question, in mathematical logic and foundations.</p> <p>The incompleteness theorems give part of the answer: no theory that can be interpreted in ZFC can prove Con(ZFC). Examples like MK give another part: there are natural theories strong enough to prove Con(ZFC). In the end, we can choose any formal system we like for each mathematical theorem we want to prove. Some of those theorems are provable in systems that are unable prove Con(ZFC), and some of those theorems require systems that do prove Con(ZFC). Separating those two groups of theorems leads to interesting research in set theory and logic.</p>
<p>To add to Carl Mummert's answer...</p> <h2>Confusing point 0: No formal system can be shown to be absolutely consistent.</h2> <p>Yes you read that right. Perhaps you might say, how about the formal system consisting of just first-order logic and not a single axiom, namely the pure identity theory, or maybe even without equality? Sorry, but that doesn't work. How do you state the claim that this system is absolutely consistent? You would need to work in a meta-system that is powerful enough to reason about sentences that are provable over the formal system. To do so, your meta-system already needs string manipulation capability, which turns out to be more or less equivalent to first-order PA. Hence before you can even <strong>assert</strong> that any formal system is consistent, you need to believe the consistency of the meta-system, which is likely going to be essentially PA, which then means that you <strong>do not ever have</strong> absolute consistency.</p> <p>Of course, if you <strong>assume</strong> that PA is consistent, then you have some other options, but even then it's not so simple. For example you may try defining &quot;absolutely consistent&quot; as &quot;provably consistent within PA as the meta-system&quot;. This flops on its face because then you cannot even claim that PA itself is absolutely consistent, otherwise it contradicts Gödel's incompleteness theorem. So ok you try again, this time defining &quot;T is absolutely consistent&quot; to mean &quot;PA proves that Con(PA) implies Con(T)&quot;. Alright this is better; at least we can now show that PA is absolutely consistent, though it is trivial! </p> <p>However, note that even with the assumption of consistency of some formal system such as PA, as above, you are still working in some meta-system, and by doing so you are already accepting the consistency of that meta-system without being able to affirm it <a href="https://math.stackexchange.com/a/1334753/21820">non-circularly</a>.
Therefore any attempt to define any truly absolute notion of consistency is doomed from the start.</p>
probability
<p>This morning, I wanted to flip a coin to make a decision but only had an SD card:</p> <p><img src="https://i.sstatic.net/T7fUT.png" alt="enter image description here"></p> <p>Given that <em>I don't know</em> the bias of this SD card, would flipping it be considered a "fair toss"?</p> <p>I thought if I'm just as likely to assign an outcome to one side as to the other, then it must be a fair. But this also seems like a recasting of the original question; instead of asking whether the <em>unknowing of the SD card's construction</em> defines fairness, I'm asking if the <em>unknowing of my own psychology</em> (<em>e.g.</em> which side I'd choose for which outcome) defines fairness. Either way, I think I'm asking: What's the exact relationship between <em>not knowing</em> and "fairness"?</p> <p>Additional thought: An SD card might be "fair" <em>to me</em>, but not at all fair to, say, a design engineer looking at the SD card's blueprint, who immediately sees that the chip is off-center from the flat plane. So it seems <em>fairness</em> even depends on the subjects to whom fairness <em>matters</em>. In a football game then, does an SD card remain "fair" as long as no design engineer is there to discern the object being tossed?</p>
<p>Here's a pragmatic answer from an engineer. You can always get a fair 50/50 outcome with <strong>any</strong> "coin" (or SD card, or what have you), <em>without having to know whether it is biased, or how biased it is</em>:</p> <ul> <li>Flip the coin twice. </li> <li>If you get $HH$ or $TT$, discard the trial and repeat. </li> <li>If you get $HT$, decide $H$. </li> <li>If you get $TH$, decide $T$.</li> </ul> <p>The only conditions are that (i) the coin is not completely biased (i.e., $\Pr(H)\neq 0, \Pr(T)\neq 0$), and (ii) the bias does not change from trial to trial.</p> <p>The procedure works because whatever the bias is (say $\Pr(H)=p$, $\Pr(T)=1-p$), the probabilties of getting $HT$ and $TH$ are the same: $p(1-p)$. Since the other outcomes are discarded, $HT$ and $TH$ each occur with probability $\frac{1}{2}$.</p>
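A quick simulation makes the claim concrete. Here is a minimal Python sketch of the flip-twice-discard-ties procedure, assuming (purely for illustration) a bias of p = 0.8:

```python
import random

def biased_flip(p):
    """One flip of a 'coin' that lands 'H' with (possibly unknown) bias p."""
    return 'H' if random.random() < p else 'T'

def fair_flip(p):
    """von Neumann's trick: flip twice, discard HH/TT, decide on HT/TH."""
    while True:
        a, b = biased_flip(p), biased_flip(p)
        if a != b:
            return a   # HT -> 'H', TH -> 'T', each with probability p(1-p)

random.seed(1)
n = 20_000
ratio = sum(fair_flip(0.8) == 'H' for _ in range(n)) / n
print(ratio)   # close to 0.5 despite the 80/20 bias
```

Note that the more biased the coin, the more pairs get discarded: each fair flip costs on average 1/(2p(1-p)) biased pairs.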
<p>That is a very good question!</p> <p>There are (at least) two different ways to define probability: as a measure of frequencies, and as a measure of (subjective) knowledge of the result.</p> <p>The frequentist definition would be: the probability of the SD card landing "heads" is the proportion of times it lands "heads", if you toss it many times (details omitted partially because of ignorance: what do we mean by 'many'?)</p> <p>The "knowledge" approach (usually called Bayesian) is harder to define. It asks how likely you (given your information) think an outcome is. As you have no information about the construction of the SD card, you might think both sides are equally likely to appear. </p> <p>In more concrete terms, say I offer you a bet: I give you one dollar if 'heads', and you give me one if 'tails'. If we both are ignorant about the SD card, then, for us both, the bet sounds neither good nor bad. In a sense, it is a fair bet.</p> <p>Notice that the Bayesian approach defines more probabilities than the frequentist. I can, say, talk about the probability that black holes exist. Well, either they do, or they don't, but that does not mean there are no bets I would consider advantageous on the matter: If you offer me a million dollars versus one dollar, saying that they exist, I might take that bet (and that would 'imply' that I consider the probability that they don't exist to be bigger than 1 millionth).</p> <p>Now, the question of fairness: if no one knows anything about the SD card, I would call your SD card toss fair. In a very meaningful way: neither of the teams, given a side, would have reason to prefer the other side. However, obviously, it has practical drawbacks: a team might figure something out later on, and come to complain about it. (that is: back when they chose a side, their knowledge did not allow them to distinguish the sides. Now, it does)</p> <p>In the end: there is not one definition of probability that is 100% accepted.
Hence, there is no definition of fair that is 100% accepted.</p> <p><a href="http://en.wikipedia.org/wiki/Probability_interpretations">http://en.wikipedia.org/wiki/Probability_interpretations</a></p>
game-theory
<p>Have any studies been done that demonstrate people (not game theorists) actually using mixed Nash equilibrium as their strategy in a game? </p>
<p>According to <a href="http://pricetheory.uchicago.edu/levitt/Papers/ChiapporiGrosecloseLevitt2002.pdf">this article about mixed equilibrium strategies</a>, I think penalty kicks between two soccer teams use mixed Nash equilibrium strategies.</p>
<p>There have been lots of studies on this sort of thing, with different results. It depends a lot on cultural context. You might look at <a href="http://books.google.ca/books?id=0NKfbdyzvJAC&amp;printsec=frontcover&amp;dq=%22A%20beautiful%20math%22%20siegfried&amp;hl=en&amp;ei=AOylTcDFPJLmsQOF9bn6DA&amp;sa=X&amp;oi=book_result&amp;ct=result&amp;resnum=1&amp;ved=0CC8Q6AEwAA#v=onepage&amp;q&amp;f=false" rel="nofollow">"A Beautiful Math"</a> by Tom Siegfried</p>
logic
<p>Gödel states and proves his celebrated Incompleteness Theorem (which is a statement about all axiom systems). What is his own axiom system of choice? ZF, ZFC, Peano or what? He surely needs one, doesn't he?</p>
<p>Gödel's paper was written in the same way as essentially every other mathematical paper. To prove a theorem <em>about</em> a formal system does not require one to prove that theorem <em>within</em> a formal system. Gödel argued in his paper that the incompleteness theorem should be viewed as a result in elementary number theory, and he certainly proved the incompleteness theorem to the same standard of rigor as other results in that area. </p> <p>Of course, we now know that the incompleteness theorem can be proved in <em>extremely</em> weak systems, such as PRA (primitive recursive arithmetic), a theory much weaker than Peano arithmetic. But, at the time that Gödel wrote his paper, the definition of PRA had never been formulated. </p> <p>Even Gödel's theorem, as he stated it at the time, was for a particular formal system "$P$" rather than for effective formal systems in general, because the definition of an "effective formal system" had not yet been compellingly formulated. Remarkably, it took well into the 20th century before many of the the now-standard concepts of logic were clearly understood. </p>
<p>As a footnote to Carl Mummert's terrific answer, it is worth adding the following remark.</p> <p>Yes, Gödel was giving an informal mathematical proof "from outside" (as it were) about certain formal systems. And yes, he is not explicit about what exactly his proof requires to go through. </p> <p>However, it <em>was</em> very important to him at the time that his proof used only very elementary constructive reasoning that would be acceptable even to e.g. intuitionists who did not accept full classical mathematics, and equally to Hilbertian finitists who put even more stringent limits on what counted as quite indisputable mathematical reasoning. (After all, he couldn't effectively challenge Hilbert's program by using reasoning that wouldn't be accepted as unproblematic by a Hilbertian!) </p> <p>So although Gödel doesn't explicitly set out what exactly he is assuming, we are supposed to be able to see by inspection that nothing worryingly infinitary or otherwise suspect is going on, and that -- although his construction of the Gödel sentence for his system $P$ is beautifully ingenious -- once we spot the trick, the reasoning that shows that the sentence is formally undecidable in $P$ is as uncontentious as can be (even by the lights of very contentious intuitionists or finitists!), so falls <em>way</em> short of what it would require the oomph of classical ZF (or even full classical PA) to formalize. </p>
linear-algebra
<p>Determine if the following set of vectors is linearly independent:</p> <p>$$\left[\begin{array}{r}2\\2\\0\end{array}\right],\left[\begin{array}{r}1\\-1\\1\end{array}\right],\left[\begin{array}{r}4\\2\\-2\end{array}\right]$$</p> <p>I've done the following system of equations, and I think I did it right... It's been such a long time since I did this sort of thing...</p> <p>Assume the following: \begin{equation*} a\left[\begin{array}{r}2\\2\\0\end{array}\right]+b\left[\begin{array}{r}1\\-1\\1\end{array}\right]+c\left[\begin{array}{r}4\\2\\-2\end{array}\right]=\left[\begin{array}{r}0\\0\\0\end{array}\right] \end{equation*} Determine if $a=b=c=0$: \begin{align} 2a+b+4c&amp;=0&amp;&amp;(1)\\ 2a-b+2c&amp;=0&amp;&amp;(2)\\ b-2c&amp;=0&amp;&amp;(3) \end{align} Subtract $(2)$ from $(1)$: \begin{align} b+c&amp;=0&amp;&amp;(4)\\ b-2c&amp;=0&amp;&amp;(5) \end{align} Substitute $(5)$ into $(4)$, we get $c=0$.</p> <p>So now what do I do with this fact? I'm tempted to say that only $c=0$, and $a$ and $b$ can be something else, but I don't trust that my intuition is right.</p>
<p>You just stopped too early:</p> <p>Since you have 3 variables with 3 equations, you can simply obtain <span class="math-container">$a,b,c$</span> by substituting <span class="math-container">$c = 0$</span> back into the two equations: </p> <ul> <li><p>From equation <span class="math-container">$(3)$</span>, <span class="math-container">$c = 0 \implies b = 0$</span>. </p></li> <li><p>With <span class="math-container">$b = 0, c = 0$</span> substituted into equation <span class="math-container">$(1)$</span> or <span class="math-container">$(2)$</span>, <span class="math-container">$b = c = 0 \implies a = 0$</span>. </p></li> </ul> <p>So in the end, since </p> <p><span class="math-container">$$\begin{equation*} a\left[\begin{array}{r}2\\2\\0\end{array}\right]+b\left[\begin{array}{r}1\\-1\\1\end{array}\right]+c\left[\begin{array}{r}4\\2\\-2\end{array}\right]=\left[\begin{array}{r}0\\0\\0\end{array}\right] \end{equation*}\implies a = b = c = 0, $$</span> the vectors <strong>are</strong> linearly independent, based on the definition (shown below).</p> <blockquote> <p>The list of vectors is said to be linearly independent if the only <span class="math-container">$c_1,...,c_n$</span> solving the equation <span class="math-container">$0=c_1v_1+...+c_nv_n$</span> are <span class="math-container">$c_1=c_2=...=c_n=0.$</span></p> </blockquote> <p>You could have, similarly, constructed a <span class="math-container">$3\times 3$</span> matrix <span class="math-container">$M$</span> with the three given vectors as its columns, and computed the determinant of <span class="math-container">$M$</span>. Why would this help? Because we know that if <span class="math-container">$\det M \neq 0$</span>, the given vectors are linearly independent.
(However, this method applies only when the number of vectors is equal to the dimension of the Euclidean space.)</p> <p><span class="math-container">$$M = \begin{bmatrix} 2 &amp; 1 &amp; 4 \\ 2 &amp; -1 &amp; 2 \\ 0 &amp; 1 &amp; -2 \end{bmatrix}$$</span></p> <p><span class="math-container">$$\det M = 12 \neq 0 \implies\;\text{linear independence of the columns}.$$</span></p>
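For larger systems the same check can be automated. A minimal Python sketch, using exact rational arithmetic to row-reduce the given vectors (full rank means linear independence):

```python
from fractions import Fraction

def rank(rows):
    """Exact rank of a list of vectors via Gaussian elimination over Q."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                factor = m[i][col] / m[r][col]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

vectors = [[2, 2, 0], [1, -1, 1], [4, 2, -2]]
print(rank(vectors))   # 3, i.e. full rank: the vectors are independent
```

Unlike the determinant test, the rank test also works when the number of vectors differs from the dimension of the space.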
<p>You can take the vectors to form a square matrix and check its determinant. If the determinant is nonzero, then the vectors are linearly independent. Otherwise, they are linearly dependent. </p>
linear-algebra
<p>I have two $2\times 2$ matrices, $A$ and $B$, with the same determinant. I want to know if they are similar or not.</p> <p>I solved this by using a matrix called $S$: $$\left(\begin{array}{cc} a&amp; b\\ c&amp; d \end{array}\right)$$ and its inverse in terms of $a$, $b$, $c$, and $d$, then showing that there was no solution to $A = SBS^{-1}$. That worked fine, but what will I do if I have $3\times 3$ or $9\times 9$ matrices? I can't possibly make system that complex and solve it. How can I know if any two matrices represent the "same" linear transformation with different bases?</p> <p>That is, how can I find $S$ that change of basis matrix?</p> <p>I tried making $A$ and $B$ into linear transformations... but without the bases for the linear transformations I had no way of comparing them.</p> <p>(I have read that similar matrices will have the same eigenvalues... and the same "trace" --but my class has not studied these yet. Also, it may be the case that some matrices with the same trace and eigenvalues are not similar so this will not solve my problem.)</p> <p>I have one idea. Maybe if I look at the reduced col. and row echelon forms that will tell me something about the basis for the linear transformation? I'm not really certain how this would work though? Please help.</p>
<p>There is something called "canonical forms" for a matrix; they are special forms for a matrix that can be obtained intrinsically from the matrix, and which will allow you to easily compare two matrices of the same size to see if they are similar or not. They are indeed based on eigenvalues and eigenvectors.</p> <p>At this point, without the necessary machinery having been covered, the answer is that it is difficult to know if the two matrices are the same or not. The simplest test you can make is to see whether their <a href="http://en.wikipedia.org/wiki/Characteristic_polynomial">characteristic polynomials</a> are the same. This is <em>necessary</em>, but <strong>not sufficient</strong> for similarity (it is related to having the same eigenvalues).</p> <p>Once you have learned about canonical forms, one can use either the <a href="http://en.wikipedia.org/wiki/Jordan_canonical_form">Jordan canonical form</a> (if the characteristic polynomial splits) or the <a href="http://en.wikipedia.org/wiki/Frobenius_normal_form">rational canonical form</a> (if the characteristic polynomial does not split) to compare the two matrices. They will be similar if and only if their rational forms are equal (up to some easily spotted differences; exactly analogous to the fact that two diagonal matrices are the same if they have the same diagonal entries, though the entries don't have to appear in the same order in both matrices).</p> <p>The reduced row echelon form and the reduced column echelon form will not help you, because any two invertible matrices have the same forms (the identity), but need not have the same determinant (so they will not be similar). </p>
<p>My lecturer, Dr. Miryam Rossett, provided the following in her supplementary notes to her linear 1 course (with a few small additions of my own):</p> <ol> <li>Show that the matrices represent the same linear transformation according to different bases. This is generally hard to do.</li> <li>If one is diagonalizable and the other not, then they are not similar.</li> <li>Examine the properties of similar matrices. Do they have the same rank, the same trace, the same determinant, the same eigenvalues, the same characteristic polynomial? If any of these are different then the matrices are not similar.</li> <li>Check the geometric multiplicity of each eigenvalue. If the matrices are similar they must match. Another way of looking at this is that for each <span class="math-container">$\lambda_i$</span>, <span class="math-container">$\dim \ker(\lambda_i I-A_k)$</span> must be equal for each matrix. This also implies that for each <span class="math-container">$\lambda_i$</span>, <span class="math-container">$\dim \text{im}(\lambda_i I-A_k)$</span> must be equal since <span class="math-container">$\dim \text{im}+\dim \ker=\dim V$</span>.</li> <li>Assuming they're both diagonalizable, if they both have the same eigenvalues then they're similar because similarity is transitive. They're diagonalizable if the geometric multiplicities of the eigenvalues add up to <span class="math-container">$\dim V$</span>, or if all the eigenvalues are distinct, or if they have <span class="math-container">$\dim V$</span> linearly independent eigenvectors.</li> </ol> <p>Numbers 3 and 4 are necessary but not sufficient for similarity.</p>
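Point 3 in the list above is necessary but not sufficient, and a tiny example shows why. A minimal Python sketch for the 2×2 case, where the trace and determinant already determine the characteristic polynomial x^2 - tr(A)x + det(A):

```python
def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def same_char_poly_2x2(A, B):
    """Necessary test: similar matrices share trace and determinant,
    i.e. the same characteristic polynomial x^2 - tr*x + det."""
    return trace(A) == trace(B) and det2(A) == det2(B)

A = [[1, 1], [0, 1]]          # a shear
I = [[1, 0], [0, 1]]          # the identity
# Same characteristic polynomial (x - 1)^2 ...
print(same_char_poly_2x2(A, I))   # True
# ... yet A and I are not similar: S I S^{-1} = I for every invertible S,
# so the only matrix similar to I is I itself.
```

The shear/identity pair is the standard counterexample showing that passing the trace-and-determinant test does not prove similarity.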
number-theory
<p>The question is written like this:</p> <blockquote> <p>Is it possible to find an infinite set of points in the plane, not all on the same straight line, such that the distance between <strong>EVERY</strong> pair of points is rational?</p> </blockquote> <p>This would be so easy if these points could be on the same straight line, but I couldn't get any idea to solve the question above (not all points on the same straight line). I believe there must be a kind of concatenation between the points but I couldn't figure it out.</p> <p>What I tried is a total mess. I tried to draw some triangles and to connect some points from one triangle to another, but in vain.</p> <p><strong>Note:</strong> I want to see a real example of such an infinite set of points in the plane that can be an answer for the question. A graph for these points would be helpful.</p>
<p>You can even find infinitely many such points on the unit circle: Let $\mathscr S$ be the set of all points on the unit circle such that $\tan \left(\frac {\theta}4\right)\in \mathbb Q$. If $(\cos(\alpha),\sin(\alpha))$ and $(\cos(\beta),\sin(\beta))$ are two points on the circle then a little geometry tells us that the distance between them is (the absolute value of) $$2 \sin \left(\frac {\alpha}2\right)\cos \left(\frac {\beta}2\right)-2 \sin \left(\frac {\beta}2\right)\cos \left(\frac {\alpha}2\right)$$ and, if the points are both in $\mathscr S$ then this is rational.</p> <p>Details: The distance formula is an immediate consequence of the fact that, if two points on the circle have an angle $\phi$ between them, then the distance between them is (the absolute value of) $2\sin \frac {\phi}2$. For the rationality note that $$z=\tan \frac {\phi}2 \implies \cos \phi= \frac {1-z^2}{1+z^2} \quad \&amp; \quad \sin \phi= \frac {2z}{1+z^2}$$</p> <p>Note: Of course $\mathscr S$ is dense on the circle. So far as I am aware, it is unknown whether you can find such a set which is dense on the entire plane.</p>
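The construction can be checked mechanically. A minimal Python sketch in exact rational arithmetic: it builds points with tan(theta/4) = 1/k via the half-angle formulas above and verifies that every pairwise distance is rational:

```python
from fractions import Fraction
from math import isqrt

def circle_point(t):
    """Point on the unit circle whose angle theta has tan(theta/4) = t."""
    u = 2 * t / (1 - t * t)                      # tan(theta/2), still rational
    return ((1 - u * u) / (1 + u * u),           # cos(theta)
            2 * u / (1 + u * u))                 # sin(theta)

def rational_sqrt(q):
    """Return sqrt(q) as a Fraction if it is rational, else None."""
    n, d = q.numerator, q.denominator
    rn, rd = isqrt(n), isqrt(d)
    return Fraction(rn, rd) if (rn * rn == n and rd * rd == d) else None

pts = [circle_point(Fraction(1, k)) for k in range(2, 8)]
for i in range(len(pts)):
    for j in range(i + 1, len(pts)):
        dx = pts[i][0] - pts[j][0]
        dy = pts[i][1] - pts[j][1]
        assert rational_sqrt(dx * dx + dy * dy) is not None
print("all pairwise distances are rational")
```

For example, t = 1/2 gives the point (-7/25, 24/25), and t = 1/3 gives (7/25, 24/25); the distance between them is 14/25.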
<p>Yes, it's possible. For instance, you could start with $(0,1)$ and $(0,0)$, and then put points along the $x$-axis, noting that there are infinitely many different right triangles with rational sides and one leg equal to $1$. For instance, $(3/4,0)$ will have distance $5/4$ to $(0,1)$.</p> <p>This means that <em>most</em> of the points are on a single line (the $x$-axis), but one point, $(0,1)$, is not on that line.</p>
matrices
<p>We say $A$ is a positive definite matrix if and only if $x^T A x &gt; 0$ for all nonzero vectors $x$. Then why does every positive definite matrix have strictly positive eigenvalues?</p>
<p>Suppose our matrix $A$ has eigenvalue $\lambda$. </p> <p>If $\lambda = 0$, then there is some eigenvector $x$ so that $Ax = 0$. But then $x^T A x = 0$, and so $A$ is not positive definite.</p> <p>If $\lambda &lt; 0$, then there is some eigenvector $x$ so that $Ax = \lambda x$. But then $x^T A x = \lambda \lvert x \rvert^2$, which is negative since $\lvert x \rvert^2 &gt; 0$ and $\lambda &lt; 0$. Thus $A$ is not positive definite.</p> <p>And so if $A$ is positive definite, it only has positive eigenvalues.</p>
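A minimal numerical sketch of both facts, using a hand-picked symmetric 2×2 matrix (the eigenvalues are read off the characteristic polynomial, and the defining inequality is spot-checked on random vectors):

```python
import math, random

A = [[2.0, -1.0], [-1.0, 2.0]]   # a symmetric positive definite matrix

# Eigenvalues of a symmetric 2x2 matrix from its characteristic polynomial
# lambda^2 - tr(A)*lambda + det(A) = 0.
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4 * det)
eigs = ((tr - disc) / 2, (tr + disc) / 2)
print(eigs)   # (1.0, 3.0): both strictly positive

# Spot-check the defining property x^T A x > 0 on random nonzero vectors x.
random.seed(0)
for _ in range(1000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    q = (A[0][0] * x[0] * x[0]
         + (A[0][1] + A[1][0]) * x[0] * x[1]
         + A[1][1] * x[1] * x[1])
    assert q > 0
```

Here the quadratic form is 2x_0^2 - 2x_0x_1 + 2x_1^2 = x_0^2 + x_1^2 + (x_0 - x_1)^2, which makes the positivity visible directly.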
<p><strong>Hint:</strong> If $\lambda$ is an eigenvalue of $A$, let $x$ be the associated eigenvector, and consider $x'Ax$.</p>
game-theory
<p>A famous problem:</p> <blockquote> <p>A lady is in the center of a circular lake and a monster is on the boundary of the lake. The speed of the monster is <span class="math-container">$v_m$</span>, and the speed of the swimming lady is <span class="math-container">$v_l$</span>. The goal of the lady is to reach the shore without meeting the monster, and the goal of the monster is to meet the lady. <br><br> Under some conditions on <span class="math-container">$v_m,v_l$</span> the lady can always win. What if these conditions are not satisfied?</p> </blockquote> <p><strong>Edited</strong>: the monster cannot swim.</p> <p>If the conditions are not satisfied, then the monster can always follow a strategy such that the lady will not escape the lake. On the other hand this strategy is not desirable for either of them, because neither reaches their goal.</p> <p>As was mentioned, this deals with the undecidability of the problem. On the other hand, if you imagine yourself to be this lady/monster, you may be interested in a strategy which is not optimal. What is it? Are there such strategies in game theory?</p> <hr /> <p><strong>Edited2:</strong> My question is more general in fact. Suppose we have a game with one parameter <span class="math-container">$v$</span> in which two players <span class="math-container">$P_1, P_2$</span> are adversaries, and if <span class="math-container">$v&gt;0$</span> then for any strategy of <span class="math-container">$P_2$</span> the player <span class="math-container">$P_1$</span> wins.</p> <p>If <span class="math-container">$v\leq 0$</span> then for any strategy of <span class="math-container">$P_2$</span> there is a strategy of <span class="math-container">$P_1$</span> such that <span class="math-container">$P_2$</span> does not win and vice versa. I am interested in this case.
From the mathematical point of view, as I understand it, the problem is undecidable, since there is no ultimate strategy for either <span class="math-container">$P_1$</span> or <span class="math-container">$P_2$</span>. But we somehow solve this problem IRL.</p> <p>Imagine that you are the lady in this game - then you would like to win anyway, even while knowing that your strategy can be countered by a strategy of the monster. On the other hand, the monster knows that if he counters all strategies of the lady, she will never reach the shore and he will never catch her. I mean they have to develop some non-optimal strategies. I hope now it's clearer.</p>
<p>Since you seem to know the answer, I will give it here.</p> <p>Suppose that $v_l = v_m / k $ and the radius of the lake is $r$. Then the lady can reach a distance $\frac{r}{k}$ from the centre and keep the monster directly behind her, a distance $r\left(1 + \frac{1}{k}\right)$ away. One way would be to swim in a spiral gradually edging outwards as the monster runs trying to close the distance; another would be to swim in a semi-circle of radius $\frac{r}{2k}$ away from the monster once it starts to run. And the lady can sustain this distance by going round in a circle as the monster tries in vain to close the distance.</p> <p>The next stage is for the lady to try to swim direct to shore at some point away from the direction the monster is running. If the monster starts at the point $(-r,0)$ running anti-clockwise and the lady starts at the point $\left(\frac{r}{k},0\right)$ her best strategy is to head off in a straight line initially at right angles to the line between her and the monster: a less steep angle and the monster has proportionately less far to run than the lady has to swim, but a steeper angle and it is worth the monster changing direction. (If the monster changes direction in this right-angle case, the lady changes too but now starts closer to shore.) As they are both trying to get to the point $\left(\frac{r}{k},r \sqrt{1-\frac{1}{k^2}}\right)$ then they will arrive at the same time if $ \pi + \cos^{-1}(1/k) = k \sqrt{1 -1/k^2}$ which by numerical methods gives $k \approx 4.6033$. </p> <p>So if the monster is less than 4.6033 times as fast as the lady, the lady can escape; if not then she stays in the lake and the monster stays on the edge and they live unhappily ever after. </p>
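The numerical step at the end is a single root-finding problem. A minimal Python sketch solving pi + arccos(1/k) = k*sqrt(1 - 1/k^2) by bisection:

```python
import math

def f(k):
    # Escape is borderline when swim time equals run time:
    #   pi + arccos(1/k) = k * sqrt(1 - 1/k^2)
    return math.pi + math.acos(1 / k) - k * math.sqrt(1 - 1 / k**2)

lo, hi = 4.0, 5.0            # f(4) > 0 > f(5), so the root is bracketed
for _ in range(60):          # bisect down to machine precision
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid

print(round((lo + hi) / 2, 4))   # 4.6033
```

Any other bracketing root finder would do equally well; bisection is just the shortest to write down.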
<p><strong>edit:</strong> I corrected a silly mistake, now I get the same answer as Henry.</p> <p>Let $k=v_m/v_l$. We can suppose $v_l=1$, hence $v_m=k$, and that the radius of the lake is 1. Let's reformulate the problem in this way: lady swims as before, but monster stands still and turns the lake with speed $\leq k$. The (vector) speed of the lady is a point in a disc of radius $1$, and the monster can control the center of the disc - he can move it from $0$ in the tangent direction by at most $kr$, where $r$ is the distance of the lady from the center.</p> <p>If $r&lt;1/k$ then $0$ is inside the disc, so the monster has no control over the direction of the lady's movement. She can therefore get to $r=1/k$, at the point diametrically opposite the monster.</p> <p>When $r&gt;1/k$ then $0$ is no longer in the disc and the monster can force (by turning at full speed) the constraint $|dr/d\phi|\leq r/\sqrt{k^2r^2-1}$ (where $\phi$ is the angle of the position of the lady). The question is whether she can get from $r=1/k$, $\phi=0$, to $r=1$, $\phi&lt;\pi$ ($r=1$, $\phi=\pi$ is the position of the monster). This is possible iff $\int_{1/k}^1 \sqrt{k^2-r^{-2}}\, dr &lt;\pi$.</p> <p><strong>edit:</strong> Here is why $|dr/d\phi|\leq r/\sqrt{k^2r^2-1}$ (it's a bit difficult to explain without a picture, but I'll try). The possible speeds of the lady form a disc with the center at $(0,kr)$ and with the radius $1$. The speed with the largest slope is the point of tangency from $0$ to the circle. Its slope can be seen from the right-angled triangle, with hypotenuse $kr$ and two other sides $1$ and $\sqrt{(kr)^2-1}$ - so the slope is $1/\sqrt{(kr)^2-1}$.</p>
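<p>As a sanity check (my addition, not part of the original answer), the integral criterion can be evaluated numerically with a simple midpoint rule; it flips from escape to capture at the same $k \approx 4.6033$ found in Henry's answer:</p>

```python
import math

# Midpoint-rule evaluation of the escape integral from the answer above.
def escape_integral(k, n=200_000):
    a, b = 1 / k, 1.0
    h = (b - a) / n
    return h * sum(math.sqrt(k * k - 1 / (a + (i + 0.5) * h) ** 2)
                   for i in range(n))

# Escape is possible iff the integral is < pi.
print(escape_integral(4.60) < math.pi)   # True  (lady escapes)
print(escape_integral(4.61) < math.pi)   # False (monster wins)
```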
logic
<p>This is not a complaint about my proofs course being difficult, or how I can learn to prove things better, as all other questions of this flavour on Google seem to be. I am asking in a purely technical sense (but still only with regards to mathematics; that's why I deemed this question most appropriate to this Stack Exchange).</p> <p>To elaborate: it seems to me that if you have a few (mathematical) assumptions and there is a logical conclusion which can be made from those assumptions, that conclusion shouldn't be too hard to draw. It literally <strong>follows</strong> from the assumptions! However, this clearly isn't the case (for a lot of proofs, at least). The Poincaré Conjecture took just short of a century to prove. I haven't read <a href="http://www.ims.cuhk.edu.hk/%7Eajm/vol10/10_2.pdf" rel="nofollow noreferrer">the proof itself</a>, but it being ~320 pages long doesn't really suggest easiness. And there are countless better examples of difficulty. In 1993, Wiles announced the final proof of Fermat's Last Theorem, which was originally stated by Fermat in 1637 and was &quot;considered inaccessible to prove by contemporaneous mathematicians&quot; (<a href="https://en.wikipedia.org/wiki/Wiles%27s_proof_of_Fermat%27s_Last_Theorem" rel="nofollow noreferrer">Wikipedia article on the proof</a>).</p> <p>So clearly, in many cases, mathematicians have to bend over backwards to prove certain logical conclusions. What is the reason for this? Is it humans' lack of intelligence? Lack of creativity? There is the field of <a href="https://en.wikipedia.org/wiki/Automated_theorem_proving" rel="nofollow noreferrer">automated theorem proving</a> which I tried to seek some insight from, but it looks like the algorithms produced from this field are subpar when compared to humans, and even these algorithms are obscenely difficult to implement. So seemingly certain proofs are actually inherently difficult. 
So I plead again - why is this?</p> <p>(EDIT) To rephrase my question: are there any <strong>inherent mathematical reasons</strong> for why difficult proofs exist?</p>
<p>Although this question may superficially look opinion-based, in actual fact there is an objective answer. The core reason is that the halting problem cannot be solved computably, and statements about halting behaviour get arbitrarily difficult to prove, and elementary arithmetic is sufficient to express notions that are at least as general as statements about halting behaviour.</p> <p>Now the details. First understand <a href="https://math.stackexchange.com/q/2486348/21820">the incompleteness theorem</a>. Next, observe that any reasonable foundational system can reason about programs, via the use of Godel coding to express and reason about finite program execution. Notice that all this reasoning about programs can occur within a tiny fragment of PA (1st-order Peano Arithmetic) called <a href="https://en.wikipedia.org/wiki/Peano_arithmetic#Equivalent_axiomatizations" rel="noreferrer">PA<sup>−</sup></a>. Thus the incompleteness theorem imply that, no matter what your foundational system is (as long as it is reasonable), there would always be true arithmetical sentences that it cannot prove, and these sentences can be explicitly written down and are not that long.</p> <p>Furthermore, the same reduction to the halting problem implies that you cannot even computably determine whether some arithmetical sentence is a theorem of your favourite foundational system S or not. This actually implies that there is <strong>no computable bound</strong> on the length of a shortest proof of a given theorem! To be precise, there is no computable function <span class="math-container">$f$</span> such that for every string <span class="math-container">$X$</span> we have that either <span class="math-container">$X$</span> is not a theorem of S or there is a proof of <span class="math-container">$X$</span> over S of length at most <span class="math-container">$f(X)$</span>. 
This provides the (at first acquaintance surprising) answer to your question:</p> <blockquote> <p>Logically forced conclusions from an explicitly described set of assumptions may take a big number of steps to deduce, so big that there is no computable bound on that number of steps! So, yes, proofs are actually inherently hard!</p> </blockquote> <p>Things are even worse; if you believe that S does not prove any false arithmetic sentence (which you should otherwise why are you using S?), then we can explicitly construct an arithmetical sentence Q such that S proves Q but you must believe that no proof of Q over S has less than <span class="math-container">$2^{10000}$</span> symbols!</p> <p>And in case you think that such phenomena may not occur in the mathematics that non-logicians come up with, consider the fact that <a href="http://raganwald.com/assets/fractran/Conway-On-Unsettleable-Arithmetical-Problems.pdf" rel="noreferrer">the generalized Collatz problem is undecidable</a>, and the fact that <a href="https://en.wikipedia.org/wiki/Hilbert%27s_tenth_problem" rel="noreferrer">Hilbert's tenth problem</a> was proposed with no idea that it would be computably unsolvable. Similarly, many other discrete combinatorial problems such as <a href="https://en.wikipedia.org/wiki/Wang_tile#Domino_problem" rel="noreferrer">Wang tiling</a> were eventually found to be computably unsolvable.</p>
<p>You received already good answers, but I want to add a point that has not been covered: combinatorics.</p> <p>True, you may start with few simple assumptions and operations, but how many ordered ways are there to combine those assumptions? In a naive brute force approach, assuming that the shortest solution (ignore the fact that this is uncomputable, we just need a lower bound) takes n steps, including using assumptions or using operators; then there are <span class="math-container">$ n! $</span> ways to take all the n steps, but only 1 correct solution. And we do not even know n in advance!</p> <p>So proofs are hard because naive brute force is not a working option.</p>
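<p>To get a feel for this combinatorial explosion, here is a tiny sketch (my addition) of how fast $n!$ outruns any brute-force search:</p>

```python
import math

# n! grows explosively: even modest proof lengths are hopeless to brute-force.
for n in (5, 10, 20, 30):
    print(n, math.factorial(n))
```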
differentiation
<p>I am not too grounded in differentiation but today, I was posed with a supposedly easy question $w = f(x,y) = x^2 + y^2$ where $x = r\sin\theta $ and $y = r\cos\theta$ requiring the solution to $\partial w / \partial r$ and $\partial w / \partial \theta $. I simply solved the former using the trig identity $\sin^2 \theta + \cos^2 \theta = 1$, resulting to $\partial w / \partial r = 2r$.</p> <p>However I was told that this solution could not be applied to this question because I should be solving for the <strong><em>total derivative</em></strong>. I could not find any good resource online to explain clearly to me the difference between a <em>normal</em> derivative and a <em>total</em> derivative and why my solution here was <em>wrong</em>. Is there anyone who could explain the difference to me using a practical example? Thanks!</p>
<p>The key difference is that when you take a <em>partial derivative</em>, you operate under a sort of assumption that you hold one variable fixed while the other changes. When computing a <em>total derivative</em>, you allow changes in one variable to affect the other.</p> <p>So, for instance, if you have $f(x,y) = 2x+3y$, then when you compute the partial derivative $\frac{\partial f}{\partial x}$, you temporarily assume $y$ constant and treat it as such, yielding $\frac{\partial f}{\partial x} = 2 + \frac{\partial (3y)}{\partial x} = 2 + 0 = 2$.</p> <p>However, if $x=x(r,\theta)$ and $y=y(r,\theta)$, then the assumption that $y$ stays constant when $x$ changes is no longer valid. Since $x = x(r,\theta)$, then if $x$ changes, this implies that at least one of $r$ or $\theta$ change. And if $r$ or $\theta$ change, then $y$ changes. And if $y$ changes, then obviously it has some sort of effect on the derivative and we can no longer assume it to be equal to zero.</p> <p>In your example, you are given $f(x,y) = x^2+y^2$, but what you really have is the following:</p> <p>$f(x,y) = f(x(r,\theta),y(r,\theta))$.</p> <p>So if you compute $\frac{\partial f}{\partial x}$, you cannot assume that the change in $x$ computed in this derivative has no effect on a change in $y$.</p> <p>What you need to compute instead is $\frac{\rm{d} f}{\rm{d}\theta}$ and $\frac{\rm{d} f}{\rm{d} r}$, the first of which can be computed as:</p> <p>$\frac{\rm{d} f}{\rm{d}\theta} = \frac{\partial f}{\partial \theta} + \frac{\partial f}{\partial x}\frac{\rm{d} x}{\rm{d} \theta} + \frac{\partial f}{\partial y}\frac{\rm{d} y}{\rm{d} \theta}$</p>
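<p>A numeric sanity check of the totals (my addition, using the question's $w = x^2+y^2$ with $x = r\sin\theta$, $y = r\cos\theta$, and central finite differences):</p>

```python
import math

def w(r, theta):
    x, y = r * math.sin(theta), r * math.cos(theta)
    return x**2 + y**2

r0, t0, h = 1.7, 0.6, 1e-6
dw_dr = (w(r0 + h, t0) - w(r0 - h, t0)) / (2 * h)      # total d/dr
dw_dtheta = (w(r0, t0 + h) - w(r0, t0 - h)) / (2 * h)  # total d/dtheta

print(dw_dr)      # ~3.4 = 2*r0, matching dw/dr = 2r
print(dw_dtheta)  # ~0: w does not actually depend on theta
```

Here the chain-rule terms $\frac{\partial w}{\partial x}\frac{dx}{dr} + \frac{\partial w}{\partial y}\frac{dy}{dr} = 2x\sin\theta + 2y\cos\theta = 2r$ agree with the finite-difference value.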
<p>I know this answer is incredibly delayed; but just to summarise the last post:</p> <p>If I gave you the function </p> <p><span class="math-container">$$ f(x,y) = \sin(x)+3y^2$$</span></p> <p>and asked you for the partial derivative with respect to <span class="math-container">$x$</span>, you should write:</p> <p><span class="math-container">$$ \frac{\partial f(x,y)}{\partial x} = \cos(x)+0$$</span></p> <p>since <span class="math-container">$y$</span> is effectively a <strong>constant with respect to <span class="math-container">$x$</span></strong>. In other words, substituting a value for <span class="math-container">$y$</span> has no effect on <span class="math-container">$x$</span>. However, if I asked you for the total derivative with respect to <span class="math-container">$x$</span>, you should write:</p> <p><span class="math-container">$$\frac{df(x,y)}{dx}=\cos(x)\cdot {dx\over dx} + 6y\cdot {dy\over dx}$$</span></p> <p>Of course I've utilized the chain rule in the bottom case. You wouldn't write <span class="math-container">$dx\over dx$</span> in practice since it's just <span class="math-container">$1$</span>, but you need to realise that it is there :)</p>
combinatorics
<p>What is your recommendation for an in-depth introductory combinatoric book? A book that doesn't just tell you about the multiplication principle, but rather shows the whole logic behind the questions with full proofs. The book should be for a first-year-student in college. Do you know a good book on the subject?</p> <p>Thanks.</p>
<p>My personal favorites are the following:</p> <ul> <li>Introduction to Combinatorial analysis [Riordan]</li> <li>Concrete Mathematics [Graham, Knuth, Patashnik]</li> <li>Enumerative Combinatorics vol. $1$ [Richard Stanley] </li> </ul> <p>(is not always that introductory, but for those who like counting, it is a must have)</p> <p>If you want really easy, but still interesting books, you might like Brualdi's book (though apparently, that book has many mistakes). Also interesting might be some chapters from Feller's book on Probability (volume $1$).</p>
<p>Try <em><a href="http://rads.stackoverflow.com/amzn/click/9810211392">Principles and Techniques in Combinatorics</a></em> by Chen Chuan Chong and Koh Khee Meng or <em><a href="http://rads.stackoverflow.com/amzn/click/0521457610">Combinatorics</a></em> by Peter Cameron. The latter is more advanced and has more topics.</p>
linear-algebra
<p>I'm starting a very long quest to learn about math, so that I can program games. I'm mostly a corporate developer, and it's somewhat boring and non exciting. When I began my career, I chose it because I wanted to create games.</p> <p>I'm told that Linear Algebra is the best place to start. Where should I go?</p>
<p>You are right: Linear Algebra is not just the &quot;best&quot; place to start. It's THE place to start.</p> <p>Among all the books cited in <a href="http://en.wikipedia.org/wiki/Linear_algebra" rel="nofollow noreferrer">Wikipedia - Linear Algebra</a>, I would recommend:</p> <ul> <li>Strang, Gilbert, Linear Algebra and Its Applications (4th ed.)</li> </ul> <p>Strang's book has at least two reasons for being recommended. First, it's extremely easy and short. Second, it's the book they use at MIT for the extremely good video Linear Algebra course you'll find in the <a href="https://math.stackexchange.com/a/4338/391081">link of Unreasonable Sin</a>.</p> <p>For a view towards applications (though maybe not necessarily your applications) and still elementary:</p> <ul> <li>B. Noble &amp; J.W. Daniel: Applied Linear Algebra, Prentice-Hall, 1977</li> </ul> <p>Linear algebra has two sides: one more &quot;theoretical&quot;, the other one more &quot;applied&quot;. Strang's book is just elementary, but perhaps &quot;theoretical&quot;. Noble-Daniel is definitely &quot;applied&quot;. The distinction between the two points of view lies in the emphasis they put on &quot;abstract&quot; vector spaces vs specific ones such as <span class="math-container">$\mathbb{R}^n$</span> or <span class="math-container">$\mathbb{C}^n$</span>, or on matrices vs linear maps.</p> <p>Maybe because of my penchant towards &quot;pure&quot; maths, I must admit that sometimes I find matrices somewhat annoying. They are funny, specific, whereas linear maps can look more &quot;abstract&quot; and &quot;ethereal&quot;. 
But, for instance: I can't stand the proof that the matrix product is associative, whereas the corresponding associativity for the composition of (linear or non linear) maps is true..., well, just because it can't help to be true the first moment you write it down.</p> <p>Anyway, at a more advanced level in the &quot;theoretical&quot; side you can use:</p> <ul> <li><p>Greub, Werner H., Linear Algebra, Graduate Texts in Mathematics (4th ed.), Springer</p> </li> <li><p>Halmos, Paul R., Finite-Dimensional Vector Spaces, Undergraduate Texts in Mathematics, Springer</p> </li> <li><p>Shilov, Georgi E., Linear algebra, Dover Publications</p> </li> </ul> <p>In the &quot;applied&quot; (?) side, a book that I love and you'll appreciate if you want to study, for instance, the exponential of a matrix is <a href="https://rads.stackoverflow.com/amzn/click/com/0486445542" rel="nofollow noreferrer" rel="nofollow noreferrer">Gantmacher</a>.</p> <p>And, at any time, you'll need to do a lot of exercises. Lipschutz's is second to none in this:</p> <ul> <li>Lipschutz, Seymour, 3,000 Solved Problems in Linear Algebra, McGraw-Hill</li> </ul> <p>Enjoy! :-)</p>
<p>I'm very surprised no one's yet listed Sheldon Axler's <a href="https://books.google.com/books?id=5qYxBQAAQBAJ&source=gbs_similarbooks" rel="nofollow noreferrer">Linear Algebra Done Right</a> - unlike Strang and Lang, which are really great books, Linear Algebra Done Right has a lot of &quot;common sense&quot;, and is great for someone who wants to understand what the point of it all is, as it carefully reorders the standard curriculum a bit to help someone understand what it's all about.</p> <p>With a lot of the standard curriculum, you can get stuck in proofs and eigenvalues and kernels, before you ever appreciate the intuition and applications of what it's all about. This is great if you're a typical pure math type who deals with abstraction easily, but given the asker's description, I don't think that a rigorous pure math course is what he/she's asking for.</p> <p>For the very practical view, yet also not at all sacrificing depth, I don't think you can do better than Linear Algebra Done Right - and if you are thirsty for more, after you've tried it, Lang and Strang are both great texts.</p>
probability
<p>I want to teach a short course in probability and I am looking for some counter-intuitive examples in probability. I am mainly interested in the problems whose results seem to be obviously false while they are not.</p> <p>I already found some things. For example these two videos:</p> <ul> <li><a href="http://www.youtube.com/watch?v=Sa9jLWKrX0c" rel="noreferrer">Penney's game</a></li> <li><a href="https://www.youtube.com/watch?v=ud_frfkt1t0" rel="noreferrer">How to win a guessing game</a></li> </ul> <p>In addition, I have found some weird examples of random walks. For example this amazing theorem:</p> <blockquote> <p>For a simple random walk, the mean number of visits to point <span class="math-container">$b$</span> before returning to the origin is equal to <span class="math-container">$1$</span> for every <span class="math-container">$b \neq 0$</span>.</p> </blockquote> <p>I have also found some advanced examples such as <a href="http://amstat.tandfonline.com/doi/abs/10.1080/00031305.1989.10475674" rel="noreferrer">Do longer games favor the stronger player</a>?</p> <p>Could you please do me a favor and share some other examples of such problems? It's very exciting to read yours...</p>
<p>The most famous counter-intuitive probability theory example is the <a href="https://brilliant.org/wiki/monty-hall-problem/">Monty Hall Problem</a></p> <ul> <li>In a game show, there are three doors behind which there are a car and two goats. However, which door conceals which is unknown to you, the player.</li> <li>Your aim is to select the door behind which the car is. So, you go and stand in front of a door of your choice.</li> <li>At this point, regardless of which door you selected, the game show host chooses and opens one of the remaining two doors. If you chose the door with the car, the host selects one of the two remaining doors at random (with equal probability) and opens that door. If you chose a door with a goat, the host selects and opens the other door with a goat.</li> <li>You are given the option of standing where you are and switching to the other closed door.</li> </ul> <p>Does switching to the other door increase your chances of winning? Or does it not matter?</p> <p>The answer is that it <em>does</em> matter whether or not you switch. 
This is initially counter-intuitive for someone seeing this problem for the first time.</p> <hr> <ul> <li>If a family has two children, at least one of which is a daughter, what is the probability that both of them are daughters?</li> <li>If a family has two children, the elder of which is a daughter, what is the probability that both of them are the daughters?</li> </ul> <p>A beginner in probability would expect the answers to both these questions to be the same, which they are not.</p> <p><a href="https://mathwithbaddrawings.com/2013/10/14/the-riddle-of-the-odorless-incense/">Math with Bad Drawings</a> explains this paradox with a great story as a part of a seven-post series in Probability Theory</p> <hr> <p><a href="https://en.wikipedia.org/wiki/Nontransitive_dice">Nontransitive Dice</a></p> <p>Let persons P, Q, R have three distinct dice.</p> <p>If it is the case that P is more likely to win over Q, and Q is more likely to win over R, is it the case that P is likely to win over R?</p> <p>The answer, strangely, is no. One such dice configuration is $(\left \{2,2,4,4,9,9 \right\},\left \{ 1,1,6,6,8,8\right \},\left \{ 3,3,5,5,7,7 \right \})$</p> <hr> <p><a href="https://brilliant.org/discussions/thread/rationality-revisited-the-sleeping-beauty-paradox/">Sleeping Beauty Paradox</a></p> <p>(This is related to philosophy/epistemology and is more related to subjective probability/beliefs than objective interpretations of it.)</p> <p>Today is Sunday. Sleeping Beauty drinks a powerful sleeping potion and falls asleep.</p> <p>Her attendant tosses a <a href="https://en.wikipedia.org/wiki/Fair_coin">fair coin</a> and records the result.</p> <ul> <li>The coin lands in <strong>Heads</strong>. Beauty is awakened only on <strong>Monday</strong> and interviewed. Her memory is erased and she is again put back to sleep.</li> <li>The coin lands in <strong>Tails</strong>. Beauty is awakened and interviewed on <strong>Monday</strong>. 
Her memory is erased and she's put back to sleep again. On <strong>Tuesday</strong>, she is once again awakened, interviewed and finally put back to sleep. </li> </ul> <p>In essence, the awakenings on Mondays and Tuesdays are indistinguishable to her.</p> <p>The most important question she's asked in the interviews is</p> <blockquote> <p>What is your credence (degree of belief) that the coin landed in heads?</p> </blockquote> <p>Given that Sleeping Beauty is epistemologically rational and is aware of all the rules of the experiment on Sunday, what should be her answer?</p> <p>This problem seems simple on the surface but there are both arguments for the answer $\frac{1}{2}$ and $\frac{1}{3}$ and there is no common consensus among modern epistemologists on this one.</p> <hr> <p><a href="https://brilliant.org/discussions/thread/rationality-revisited-the-ellsberg-paradox/">Ellsberg Paradox</a></p> <p>Consider the following situation:</p> <blockquote> <p>In an urn, you have 90 balls of 3 colors: red, blue and yellow. 30 balls are known to be red. All the other balls are either blue or yellow.</p> <p>There are two lotteries:</p> <ul> <li><strong>Lottery A:</strong> A random ball is chosen. You win a prize if the ball is red.</li> <li><strong>Lottery B:</strong> A random ball is chosen. You win a prize if the ball is blue.</li> </ul> </blockquote> <p>Question: In which lottery would you want to participate?</p>
One way to deal with this is to extend the concept of probability to that of imprecise probabilities.</p>
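<p>The Monty Hall advantage is easy to confirm by simulation; the sketch below (my addition) plays the game described above many times, with and without switching:</p>

```python
import random

# Simulation sketch of the Monty Hall game described above.
def trial(switch):
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a goat door that the player didn't pick.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

random.seed(1)
n = 20_000
stay = sum(trial(False) for _ in range(n)) / n
swap = sum(trial(True) for _ in range(n)) / n
print(stay, swap)  # ~1/3 when staying, ~2/3 when switching
```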
<h1>Birthday Problem</h1> <p>For me this was the first example of how counter intuitive the real world probability problems are due to the inherent underestimation/overestimation involved in mental maps for permutation and combination (which is an inverse multiplication problem in general), which form the basis for probability calculation. The question is:</p> <blockquote> <p>How many people should be in a room so that the probability of at least two people sharing the same birthday, is at least as high as the probability of getting heads in a toss of an unbiased coin (i.e., $0.5$).</p> </blockquote> <p>This is a good problem for students to hone their skills in estimating the permutations and combinations, the base for computation of <em>a priori probability</em>.</p> <p>I still feel the number of persons for the answer to be surreal and hard to believe! (The real answer is $23$).</p> <p>Pupils should at this juncture be told about quick and dirty mental maps for permutations and combinations calculations and should be encouraged to inculcate a habit of mental computations, which will help them in forming intuition about probability. It will also serve them well in taking to the other higher level problems such as the Monty Hall problem or conditional probability problems mentioned above, such as:</p> <blockquote> <p>$0.5\%$ of the total population out of a population of $10$ million is supposed to be affected by a strange disease. A test has been developed for that disease and it has a truth ratio of $99\%$ (i.e., its true $99\%$ of the times). A random person from the population is selected and is found to be tested positive for that disease. What is the real probability of that person suffering from the strange disease. 
The real answer here is approximately $33\%$.</p> </blockquote> <p>Here the strange disease can be replaced by any real-world problem (such as HIV testing, a successful trading / betting strategy, or the number of terrorists in a country), and this example can be used to give students a feel for why in such cases there are bound to be many false positives (as no real-world test for such cases is, I believe, $99\%$ accurate) and how popular opinion is wrong in such cases most of the time.</p> <p>This should be the starting point for introducing some of the work of <strong>Daniel Kahneman</strong> and <strong>Amos Tversky</strong> as no probability course in modern times can be complete without giving pupils a sense of how fragile one's intuitions and estimates are in estimating probabilities and uncertainties and how to deal with them. $20\%$ of the course should be devoted to this aspect and it can be one of the final real world projects of students.</p>
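<p>Both computations are short enough to check directly; the sketch below (my addition) reproduces the $23$-person threshold and the $\approx 33\%$ posterior:</p>

```python
# Birthday problem: chance of a shared birthday among n people.
def shared_birthday(n):
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365
    return 1 - p_all_distinct

print(shared_birthday(22), shared_birthday(23))  # crosses 1/2 at n = 23

# The disease test: 0.5% prevalence, 99% accurate test (Bayes' rule).
prior, accuracy = 0.005, 0.99
posterior = (prior * accuracy) / (prior * accuracy + (1 - prior) * (1 - accuracy))
print(posterior)  # ~0.33: most positives are false positives
```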
number-theory
<p>When I talk about my research with non-mathematicians who are, however, interested in what I do, I always start by asking them basic questions about the primes. Usually, they start getting reeled in if I ask them whether there are infinitely many or not, and often the conversation lingers on this question. Almost everyone guesses there are infinitely many, although they "thin out", and seem to say it's "obvious": "you keep finding them never mind how far along you go" or "there are infinitely many numbers so there must also always be primes". </p> <p>When I say that's not really an argument, they may concede the point, but I can see they're not super convinced either. What I would like is to present them with another sequence which also "thins out" but which is <em>finite</em>. Crucially, this sequence must also be intuitive enough that non-mathematicians (as in, people not familiar with our terminology) can grasp the concept in a casual conversation.</p>
<p>An example would be the <a href="https://oeis.org/A005188" rel="noreferrer">narcissistic numbers</a>, which are the natural numbers whose decimal expansion can be written with <span class="math-container">$n$</span> digits and which are equal to sum of the <span class="math-container">$n$</span><sup>th</sup> powers of their digits. For instance, <span class="math-container">$153$</span> is a narcissistic number, since it has <span class="math-container">$3$</span> digits and<span class="math-container">$$153=1^3+5^3+3^3.$$</span>Of course, any natural number smaller than <span class="math-container">$10$</span> is a narcissistic number, but there are only <span class="math-container">$79$</span> more of them, the largest of which is<span class="math-container">$$115\,132\,219\,018\,763\,992\,565\,095\,597\,973\,971\,522\,401.$$</span></p>
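<p>The claim is easy to verify by brute force for small bounds; the sketch below (my addition) recovers exactly the narcissistic numbers below $10000$:</p>

```python
# Enumerate narcissistic numbers below 10000.
def is_narcissistic(m):
    digits = [int(c) for c in str(m)]
    return m == sum(d ** len(digits) for d in digits)

found = [m for m in range(1, 10000) if is_narcissistic(m)]
print(found)
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 153, 370, 371, 407, 1634, 8208, 9474]
```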
<p>You ask for a sequence that thins out, seems infinite but is finite. I suggest you follow up instead with an <em>open question</em>.</p> <p>For people who know (or think they know, or think it's obvious) that the primes go on forever you could talk about twin primes. They begin <span class="math-container">$$ (3,5), (5,7), (11,13), (17, 19), (29,31), \ldots, (101, 103), \ldots $$</span> Clearly they thin out faster than the primes, but no one knows whether they stop entirely at some point.</p> <p>If your audience still has your attention you can say that professionals who know enough to have an opinion on the matter think they go on forever. Then say how famous you would be (in mathematical circles) if you could answer the question one way or the other. </p> <p>If they are still intrigued tell the story of <a href="https://en.wikipedia.org/wiki/Yitang_Zhang" rel="noreferrer">Yitang Zhang</a>'s 2013 breakthrough showing that a prime gap less than <span class="math-container">$70$</span> million occurs infinitely often. That was the first proof that any gap had that property. The number has since been <a href="https://en.wikipedia.org/wiki/Polymath_Project#Polymath8" rel="noreferrer">reduced to 246</a>. If you could get it down to 2 you'd have the twin prime conjecture. </p>
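<p>The thinning is easy to see numerically; the sketch below (my addition) counts primes and twin-prime pairs below $100000$ with a simple sieve:</p>

```python
# Sieve of Eratosthenes; twin primes visibly thin out faster than primes.
def primes_below(n):
    sieve = [True] * n
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

ps = primes_below(100_000)
pset = set(ps)
twin_pairs = [(p, p + 2) for p in ps if p + 2 in pset]
print(len(ps), len(twin_pairs))  # 9592 primes, 1224 twin pairs
```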
differentiation
<p>I'm starting to learn how to intuitively interpret the directional derivative, and I can't understand why you <em>wouldn't</em> scale down your direction vector $\vec{v}$ to be a unit vector.</p> <p>Currently, my intuition is the idea of slicing the 3D graph of the function along its direction vector and then computing the slope of the curve created by the intersection of the plane. </p> <p>But I can't really understand how the directional derivative would be a directional <em>derivative</em> if it were not scaled down to be a change in unit length in the direction of $\vec{v}$. Is there an intuitive understanding I can grasp onto? I'm just starting out so maybe I haven't gotten there yet.</p> <p>Note, I think there may be a nice analogy to linearization, like if you take "twice as big of a step" in the direction of $\vec{v}$ , then the change to the function <em>due</em> to the change in this step is twice as big. Is this an okay way to think about it?</p>
<p>The intuition I think of for a directional derivative in the direction on $\overrightarrow{v}$ is that it is how fast the function changes if the input changes with a velocity of $\overrightarrow{v}$. So if you move the input across the domain twice as fast, the function changes twice as fast.</p> <p>More precisely, this corresponds to the following process that relates calculus in multiple variables to calculus in a single variable. In particular, we can define a line based at a point $\overrightarrow{p}$ with velocity $\overrightarrow{v}$ parametrically as a curve: $$\gamma(t)=\overrightarrow{p}+t\overrightarrow{v}.$$ This is a map from $\mathbb R$ to $\mathbb R^n$. However, if $f:\mathbb R^n\rightarrow \mathbb R$ is another map, we can define the composite $$(f\circ \gamma)(t)=f(\gamma(t))$$ and observe that this is a map $\mathbb R\rightarrow\mathbb R$ so we can study its derivative! In particular, we define the directional derivative of $f$ at $\overrightarrow{p}$ in the direction of $\overrightarrow{v}$ to be the derivative of $f\circ\gamma$ at $0$.</p> <p>However, when we do this, we only see a "slice" of the domain of $f$ - in particular, we only see the line passing through $\overrightarrow{p}$ in the direction of $\overrightarrow{v}$. This corresponds to the notion of slicing you bring up in your question. In particular, we do not see any values of $f$ outside of the image of $\gamma$, so we are only studying $f$ on some restricted set.</p>
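<p>This scaling is easy to see numerically: a sketch (my addition, with an arbitrary sample $f$) approximating the derivative of $f\circ\gamma$ at $0$ by a central difference shows that doubling the velocity vector doubles the directional derivative:</p>

```python
import math

# Directional derivative as d/dt f(p + t v) at t = 0 (central difference).
def f(x, y):
    return x**2 * y + math.sin(y)

def directional(f, p, v, h=1e-6):
    (px, py), (vx, vy) = p, v
    return (f(px + h * vx, py + h * vy) - f(px - h * vx, py - h * vy)) / (2 * h)

p, v = (1.0, 2.0), (0.6, 0.8)
d1 = directional(f, p, v)
d2 = directional(f, p, (2 * v[0], 2 * v[1]))
print(d1, d2)  # d2 is twice d1
```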
<p>Let $f : \mathbb{R}^n \to \mathbb{R}^m$ and (if the limit exists) $$D_v f(x) = \lim_{h \to 0} \frac{f(x+hv)-f(x)}{h}$$ be the directional derivative in the direction $v$. This way, if the function is differentiable, $$ D_{au+bv} f(x) = a\, D_{u} f(x)+b\, D_{v} f(x) \qquad (a,b) \in \mathbb{R}^2$$ i.e. the directional derivative is linear in the direction. Indeed $$D_{v} f(x) = J_x v$$ where $J_x$ is the Jacobian matrix. </p> <p>You'll have trouble stating and understanding this if you restrict to $\|v\|=1$, or worse if you normalize $D_vf(x)$.</p>
combinatorics
<blockquote> <p><span class="math-container">$7$</span> fishermen caught exactly <span class="math-container">$100$</span> fish and no two had caught the same number of fish. Prove that there are three fishermen who have captured together at least <span class="math-container">$50$</span> fish.</p> </blockquote> <hr /> <p><strong>Try:</strong> Suppose <span class="math-container">$k$</span>th fisher caught <span class="math-container">$r_k$</span> fishes and that we have <span class="math-container">$$r_1&lt;r_2&lt;r_3&lt;r_4&lt;r_5&lt;r_6&lt;r_7$$</span> and let <span class="math-container">$r(ijk) := r_i+r_j+r_k$</span>. Now suppose <span class="math-container">$r(ijk)&lt;49$</span> for all triples <span class="math-container">$\{i,j,k\}$</span>. Then we have <span class="math-container">$$r(123)&lt;r(124)&lt;r(125)&lt;r(345)&lt;r(367)&lt;r(467)&lt;r(567)\leq 49$$</span> so <span class="math-container">$$300\leq 3(r_1+\cdots+r_7)\leq 49+48+47+46+45+44+43= 322$$</span></p> <p>and no contradiction. Any idea how to resolve this?</p> <p><strong>Edit:</strong> Actually we have from <span class="math-container">$r(5,6,7)\leq 49$</span> that <span class="math-container">$r(4,6,7)\leq 48$</span> and <span class="math-container">$r(3,6,7)\leq 47$</span> and then <span class="math-container">$r(3,4,5)\leq r(3,6,7) - 4 \leq 43$</span> and <span class="math-container">$r(1,2,5)\leq r(3,4,5)-4\leq 39$</span> and <span class="math-container">$r(1,2,4)\leq 38$</span> and <span class="math-container">$r(1,2,3)\leq 37$</span> so we have:</p> <p><span class="math-container">$$300\leq 49+48+47+43+39+38+37= 301$$</span> but again no contradiction.</p>
<p>Let's work with the lowest four numbers instead of the other suggestions.</p> <p>Supposing there is a counterexample, then the lowest four must add to at least <span class="math-container">$51$</span> (else the highest three add to <span class="math-container">$50$</span> or more).</p> <p>Since <span class="math-container">$14+13+12+11=50$</span> the lowest four numbers would have to include one number at least equal to <span class="math-container">$15$</span> to get a total as big as <span class="math-container">$51$</span>.</p> <p>Then the greatest three numbers must be at least <span class="math-container">$16+17+18=51$</span>, which is a contradiction to the assumption that there exists a counterexample.</p> <p>The examples <span class="math-container">$18+17+15+14+13+12+11=100$</span> and <span class="math-container">$19+16+15+14+13+12+11=100$</span> show that the bound is tight.</p>
<p>If the maximum number of fish caught is <span class="math-container">$m$</span>, then the total number of fish caught is no more than <span class="math-container">$m+(m-1)+\cdots+(m-6)=7m-21$</span>. Since the total is <span class="math-container">$100$</span>, we get <span class="math-container">$7m-21\ge 100$</span>, so there is one fisherman who caught at least 18 fish. Repeat this process for the second and third highest number of fish caught and you should be good.</p> <p>I should add that this is a common proof technique in combinatorics and graph theory. To show that something with a certain property exists, choose the "extremal" such something, and prove that the property holds for the extremal object. For instance, to show that in a graph where each vertex has degree at least <span class="math-container">$d$</span> there is a path of length at least <span class="math-container">$d$</span>, one proof starts by simply showing that a maximal path has length at least <span class="math-container">$d$</span>.</p>
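As a sanity check on the bound and its tightness, here is a short brute-force search (my own sketch; it assumes a fisherman may have caught $0$ fish, which only makes the claim harder):

```python
def distinct_partitions(total, k, low=0):
    """Yield strictly increasing k-tuples of integers >= low summing to total."""
    if k == 1:
        if total >= low:
            yield (total,)
        return
    # the smallest part a must leave room for k-1 strictly larger parts
    for a in range(low, (total - k * (k - 1) // 2) // k + 1):
        for rest in distinct_partitions(total - a, k - 1, a + 1):
            yield (a,) + rest

# Minimum, over all valid catches, of the three largest counts combined.
min_top3 = min(sum(p[-3:]) for p in distinct_partitions(100, 7))
assert min_top3 == 50  # the top three always total at least 50, and 50 is achieved
```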
geometry
<p>Is there a name for a polygon in which you could place a light bulb that would light up all of its area? (for which there exists a point so that for all points inside it the line connecting those two points does not cross one of its edges)</p> <p>Examples of "lightable" polygons:</p> <p><a href="https://i.sstatic.net/iSJYOm.png" rel="noreferrer"><img src="https://i.sstatic.net/iSJYOm.png" alt="enter image description here"></a></p> <p>Examples of "unlightable" polygons:</p> <p><a href="https://i.sstatic.net/Hn5tgm.png" rel="noreferrer"><img src="https://i.sstatic.net/Hn5tgm.png" alt="enter image description here"></a></p>
<p>Yes, those are called <a href="https://en.wikipedia.org/wiki/Star-shaped_polygon">star-shaped polygons</a>. They have numerous applications in mathematics, for example in complex analysis.</p>
<p>More generally, such a set is a <a href="https://en.wikipedia.org/wiki/Star_domain">star domain</a>, and is a trivial example of a <a href="https://en.wikipedia.org/wiki/Contractible_space">contractible space</a>.</p> <p>You may see this as a generalization of a <a href="https://en.wikipedia.org/wiki/Convex_set">convex set</a>: indeed,</p> <ul> <li>$C\neq\emptyset$ is a <strong>convex domain</strong> if for every $x,y\in C$ the line segment $\overline{xy}$ is contained in $C$; while</li> <li>$S\neq\emptyset$ is a <strong>star domain</strong> if there exists a point $y$ such that for every $x\in S$ it holds that $\overline{xy}\subseteq S$.</li> </ul> <p>That is, in a star domain the point $y$ (there might be many such) is fixed. You can easily prove that a set $E\neq\emptyset$ is convex if and only if it is a star domain with respect to each <em>center</em> $y\in E$. (Every star domain is in particular <a href="https://en.wikipedia.org/wiki/Simply_connected_space">simply connected</a>, but not conversely.)</p>
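As an illustrative sketch (the polygons and function names are my own): for a simple polygon listed counterclockwise, the kernel — the set of valid "light bulb" positions — is the intersection of the inner half-planes of the edges, which gives a short membership test:

```python
def in_kernel(poly, p):
    """True if p lies in the kernel of the simple CCW polygon `poly`,
    i.e. p is on the inner side of every edge's supporting line."""
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        # cross product of the edge direction with (p - edge start)
        if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) < 0:
            return False
    return True

# An L-shaped polygon is star-shaped; its kernel is the unit square [0,1]^2.
L = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
assert in_kernel(L, (0.5, 0.5))
assert not in_kernel(L, (1.5, 0.5))   # cannot see all of the vertical arm

# A "crown" with two notches is not star-shaped: a grid sample of its
# bounding box finds no kernel point (a demonstration, not a proof).
crown = [(0, 0), (4, 0), (4, 3), (3, 1), (2, 3), (1, 1), (0, 3)]
grid = [(x / 10, y / 10) for x in range(0, 41) for y in range(0, 31)]
assert not any(in_kernel(crown, p) for p in grid)
```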
game-theory
<p>You and n other players (<span class="math-container">$n \ge 1$</span>) play a game. Each player chooses a real number between 0 and 1. A referee also chooses a number between 0 and 1. The player who chooses the number closest to the referee's number wins. What should your choice be? Will it depend on <span class="math-container">$n$</span>?</p> <p>Assume that the n players (other than yourself) and the referee choose their numbers uniformly between 0 and 1.</p> <p>Can someone give a sketch of the solution? I've been trying this but I'm unable to produce an answer.</p> <p><strong>Edit:</strong> You may ignore two or more players choosing the same number.</p>
<p>Let the referee's number be <span class="math-container">$Y$</span>, and your choice <span class="math-container">$X$</span>. Then you win if nobody's choice is in the interval from <span class="math-container">$X$</span> to <span class="math-container">$2Y-X$</span>. If <span class="math-container">$0 \le 2Y-X \le 1$</span>, that interval has length <span class="math-container">$2|Y-X|$</span>. But if <span class="math-container">$2Y-X &lt; 0$</span>, its intersection with <span class="math-container">$[0,1]$</span> has length <span class="math-container">$X$</span>, and if <span class="math-container">$2Y-X &gt; 1$</span> it has length <span class="math-container">$1-X$</span>. The probability that none of the other players make a choice in an interval of length <span class="math-container">$L$</span> contained in <span class="math-container">$[0,1]$</span> is <span class="math-container">$(1-L)^n$</span>. Thus the conditional probability that you win, given <span class="math-container">$X=x$</span> and <span class="math-container">$Y=y$</span>, is <span class="math-container">$$ \cases{(1-2|y-x|)^n &amp; if $x/2 \le y \le (1+x)/2$\cr (1-x)^n &amp; if $y \le x/2$\cr x^n &amp; if $y \ge (1+x)/2$\cr}$$</span> Integrating over <span class="math-container">$y$</span>, I find that the probability that you win if you choose <span class="math-container">$x$</span> is <span class="math-container">$$ f(x) = {\frac { \left( x\,n+2\,x-1 \right) \left( 1-x \right) ^{n}+ \left( - n-2 \right) {x}^{n+1}+2+ \left( n+1 \right) {x}^{n}}{2\,n+2}} $$</span> An optimal strategy would be to choose <span class="math-container">$x$</span> that maximizes this.</p> <p>One critical point is <span class="math-container">$x=1/2$</span>, but <span class="math-container">$f''(1/2) = (n-4) n 2^{1-n}$</span>, so if <span class="math-container">$n &gt; 4$</span> this is a local minimum. 
For example, if <span class="math-container">$n=5$</span> we have <span class="math-container">$$f(x) = {\frac { \left( 7\,x-1 \right) \left( 1-x \right) ^{5}}{12}}-{\frac {7\,{x}^{6}}{12}}+{\frac{1}{6}}+{\frac {{x}^{5}}{2}}$$</span> whose maximum on <span class="math-container">$[0,1]$</span> is at the real roots of <span class="math-container">$7\,{x}^{4}-14\,{x}^{3}+18\,{x}^{2}-11\,x+2$</span>, approximately <span class="math-container">$0.2995972362$</span> and <span class="math-container">$0.7004027638$</span>.</p> <p>On the other hand, for <span class="math-container">$n \le 4$</span> the optimal solution is <span class="math-container">$x = 1/2$</span>.</p>
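A quick numerical check of the closed form above (the grid search is my own sketch):

```python
def win_prob(x, n):
    """The closed form derived above for the probability of winning with
    pick x against n uniform opponents and a uniform referee."""
    return ((x * n + 2 * x - 1) * (1 - x) ** n
            - (n + 2) * x ** (n + 1) + 2 + (n + 1) * x ** n) / (2 * n + 2)

# Sanity check: picking x = 0 wins only when y is small enough, with
# probability 1/(2n+2).
assert abs(win_prob(0.0, 5) - 1 / 12) < 1e-12

xs = [i / 10000 for i in range(10001)]

# n = 3: the best pick is the middle, x = 1/2.
best3 = max(xs, key=lambda x: win_prob(x, 3))
assert abs(best3 - 0.5) < 1e-3

# n = 5: x = 1/2 is a local minimum; the optima sit near 0.2996 and 0.7004.
assert win_prob(0.2995972362, 5) > win_prob(0.5, 5)
best5 = max(xs, key=lambda x: win_prob(x, 5))
assert abs(min(best5, 1 - best5) - 0.2995972362) < 1e-3
```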
<p>Here is an intuitive, non-rigorous, <em>calculus-free</em> :) explanation for what is going on. <strong>Much credit goes to Robert Israel</strong> whose excellent answer provided formulas for me to simulate, which inspired the claims &amp; subsequent arguments below.</p> <p>Following his notation, let <span class="math-container">$x$</span> be your pick of location, <span class="math-container">$y$</span> be the referee's location, and <span class="math-container">$f(x) $</span> be your win prob.</p> <p>This answer attempts to intuitively argue for the following:</p> <blockquote> <p>Claim 1: In the large <span class="math-container">$n$</span> limit, <span class="math-container">$f(\frac12) = {1\over n+1}$</span>.</p> <p>Claim 2: In the large <span class="math-container">$n$</span> limit, the optimal occurs at <span class="math-container">$x = {c\over n}$</span> (and by symmetry <span class="math-container">$x= 1 - {c \over n}$</span>) for some small constant <span class="math-container">$c \approx \frac32$</span>. (In fact, my simulations show the max occurring at exactly <span class="math-container">$c=2$</span>.)</p> </blockquote> <p>Imagine this game is not played on the <span class="math-container">$[0,1]$</span> interval but rather on a circle. 
Then clearly, your choice doesn't matter - you might as well pick <span class="math-container">$0$</span> - and since there are <span class="math-container">$n$</span> other players, your win prob is <span class="math-container">${1 \over n+1}$</span> by symmetry (among players).</p> <p>The key idea is this: When the game is played on the interval, and you get to pick <span class="math-container">$x$</span>, this is equivalent to playing on the circle (and you picking location <span class="math-container">$0$</span>) and then you get to <em>cut open the circle / draw a divider</em> at a distance <span class="math-container">$x$</span> from your location.</p> <p>So <span class="math-container">$x$</span> affects your win prob to exactly the extent that the cut (divider) affects your win prob. Well, you start with a win prob of <span class="math-container">${1 \over n+1}$</span> and the cut only affects your win prob in precisely these two scenarios:</p> <ul> <li><p><strong>Scenario A: Turning a loss (in the circle game) into a win (in the interval game):</strong> In the circle game, starting from your position <span class="math-container">$0$</span> and looking in one direction, the immediately next few points are: <span class="math-container">$0, y, x, z$</span> where <span class="math-container">$z$</span> is some other player and <span class="math-container">$|0-y| &gt; |y -z|$</span>. I.e. 
without the cut <span class="math-container">$x$</span>, you would have lost the circle game, but with the cut <span class="math-container">$x$</span>, that cuts off the original winner <span class="math-container">$z$</span> and you are the new winner in the interval game.</p> </li> <li><p><strong>Scenario B: Turning a win (in the circle game) into a loss (in the interval game):</strong> In the circle game, starting from your position <span class="math-container">$0$</span> and looking in one direction, the immediately next few points are: <span class="math-container">$0, x, y, z$</span> where <span class="math-container">$|0-y| &lt; |y-z|$</span>. I.e. without the cut <span class="math-container">$x$</span>, you would have won the circle game, but with the cut <span class="math-container">$x$</span>, that cuts you off and <span class="math-container">$z$</span> becomes the new winner.</p> </li> </ul> <p>In other words, we have:</p> <p><span class="math-container">$$f(x) = {1 \over n+1} + Prob(A) - Prob(B)$$</span></p> <p><strong>(Rigorous?) Proof of Claim 1:</strong> Scenario A requires that only the referee was between your <span class="math-container">$0$</span> and <span class="math-container">$x$</span>, and scenario B requires that <em>nobody</em> was between your <span class="math-container">$0$</span> and <span class="math-container">$x$</span>. In the large <span class="math-container">$n$</span> limit, for <span class="math-container">$x=\frac12$</span>, each event is exponentially unlikely. Therefore <span class="math-container">$f(\frac12) = {1 \over n+1}$</span>, same value as in the circle game. In short, cut <span class="math-container">$x=\frac12$</span> is so far away in the large <span class="math-container">$n$</span> limit that it doesn't matter (so the two games are equivalent). 
<span class="math-container">$\square$</span></p> <p><strong>Non-rigorous argument for Claim 2:</strong></p> <p>In both scenarios A and B, before the cut the ordering was <span class="math-container">$0, y, z$</span>. (If <span class="math-container">$y$</span> was not next to you in the circle game, you will lose both the circle game and the interval game no matter where you cut.) Obviously, you want to place the cut between <span class="math-container">$y$</span> and <span class="math-container">$z$</span>, i.e. on the other side of <span class="math-container">$y$</span> away from you. Cutting too close and the cut might be on your side of <span class="math-container">$y$</span> (cutting you off), while cutting too far and it might not cut off <span class="math-container">$z$</span> at all.</p> <p>The hand-waving comes from an intuitive guess at &quot;aiming&quot; the cut between <span class="math-container">$y$</span> and <span class="math-container">$z$</span>.</p> <p>In the circle, there will be <span class="math-container">$n+2$</span> points - one from the referee, one from you, and <span class="math-container">$n$</span> from the other players. So the average length of any interval is <span class="math-container">${1 \over n+2}$</span>. In particular, even when conditioned on <span class="math-container">$y$</span> being next to you, and <span class="math-container">$z$</span> being the other neighbor of <span class="math-container">$y$</span>, we have <span class="math-container">$E[|0-y|] = E[|y-z|] = {1 \over n+2}$</span>, because the conditioning only affects relative ordering / who's neighbor to whom. From this alone, I would have guessed that a good strategy is to cut at <span class="math-container">$x = \frac32 {1 \over n+2}\approx {c \over n}$</span> where <span class="math-container">$c = \frac32$</span>, i.e. one-and-a-half expected interval lengths away. 
<span class="math-container">$\square$</span></p> <p><strong>Further thoughts:</strong> Based on my simulations, for large <span class="math-container">$n$</span> the optimal happens at <span class="math-container">$x = {c \over n}$</span> where <span class="math-container">$c=2$</span>. In retrospect, this bias (over <span class="math-container">$c = \frac32$</span>) can probably be explained thus:</p> <ul> <li><p>In scenario A, you were on the longer / losing side of <span class="math-container">$y$</span>, i.e. <span class="math-container">$E[|0-y|] \gtrapprox {1 \over n+2}$</span>, so it makes sense to cut slightly further away than <span class="math-container">$c=\frac32$</span>, just to overcome your slightly longer distance to <span class="math-container">$y$</span>.</p> </li> <li><p>In scenario B, you were on the shorter / winning side of <span class="math-container">$y$</span>, i.e. <span class="math-container">$E[|0-y|] \lessapprox {1 \over n+2}$</span>, so it makes sense to cut further away than <span class="math-container">$c=\frac32$</span>, just to be extra safe that you don't accidentally cut yourself off from an already-winning position.</p> </li> </ul>
probability
<ol> <li>The probability of raining today is 60%.</li> <li>If it rains today, the probability of raining tomorrow will increase by 10%.</li> <li>If it doesn't rain today, the probability of raining tomorrow will decrease by 10%.</li> </ol> <p>Rules 2 and 3 can be applied to future days.</p> <p>What is the probability that eventually, it will rain forever?</p> <p>(I tried using the binomial option pricing model but couldn't solve it because the probability isn't fixed. I saw this question in Chinese as an interview question for a quant position. I couldn't find an answer online. So please help me, smart people!)</p>
<p>Interesting problem! These are simply the partial sums of rows of Pascal's triangle. So in this case, the answer is just the sum of the first six elements of the $n = 9$ row (divided by $2^9 = 512$):</p> <p>$$ p_6 = \frac{1+9+36+84+126+126}{2^9} = \frac{382}{512} = \frac{191}{256} $$</p> <p>Here's how this comes about: There are only two possible final states: certain drought, and certain rain. For any $k, 0 \leq k \leq 10$, let $p_k$ be the probability that the final state will be certain rain, given that the initial probability of rain is $\frac{k}{10}$. (Here, initial only means "current" since the process is homogeneous in time.)</p> <p>Then, there is a simple set of linear equations relating the $p_k$. Suppose $k = 1$ initially. That is, the current rain probability is $\frac{1}{10}$. Then with probability $\frac{1}{10}$, the next rain probability will be $\frac{2}{10}$, and with probability $\frac{9}{10}$, the next rain probability will be $0$ (and the final state is certain drought). We can represent this as follows:</p> <p>$$ p_1 = \frac{1}{10} p_2 + \frac{9}{10} p_0 $$</p> <p>where $p_0 = 0$, naturally.</p> <p>Now, let us suppose that $k = 2$ initially. Then with probability $\frac{2}{10}$, the next rain probability will be $\frac{3}{10}$, and with probability $\frac{8}{10}$, the next rain probability will be $\frac{1}{10}$. We can represent <em>this</em> as follows:</p> <p>$$ p_2 = \frac{2}{10} p_3 + \frac{8}{10} p_1 $$</p> <p>Proceeding along these lines, we can write equations of the form</p> <p>$$ p_k = \frac{k}{10} p_{k+1} + \frac{10-k}{10} p_{k-1} \qquad 1 \leq k \leq 9 $$</p> <p>with boundary conditions $p_0 = 0, p_{10} = 1$.</p> <hr> <p>Now, interestingly, because for any $k$ the coefficients of $p_{k-1}$ and $p_{k+1}$ sum to $1$, we can view each of the $p_k$ as a weighted mean of $p_{k-1}$ and $p_{k+1}$. 
That is to say,</p> <ul> <li>$p_0 = 0$</li> <li>$p_1$ is $\frac{1}{10}$ of the way from $p_0$ to $p_2$</li> <li>$p_2$ is $\frac{2}{10}$ of the way from $p_1$ to $p_3$</li> <li>$p_3$ is $\frac{3}{10}$ of the way from $p_2$ to $p_4$</li> </ul> <p>and so on. This permits us to find $p_2$ in terms of $p_1$, $p_3$ in terms of $p_2$, etc. In particular, if we let $p_1 = q$, where $q$ is some quantity currently unknown, then</p> <p>$$ p_2-p_1 = \frac{9}{1} (p_1-p_0) = \frac{9}{1}q $$ $$ p_3-p_2 = \frac{8}{2} (p_2-p_1) = \frac{9 \times 8}{2 \times 1}q $$ $$ p_4-p_3 = \frac{7}{3} (p_3-p_2) = \frac{9 \times 8 \times 7}{3 \times 2 \times 1} q $$</p> <p>and for any $k$,</p> <p>$$ p_{k+1}-p_k = \binom{9}{k} q $$</p> <p>And since</p> <p>$$ \sum_{k=0}^9 p_{k+1}-p_k = p_{10}-p_0 = 1 $$</p> <p>it must therefore be the case that $q = \frac{1}{2^9}$, and</p> <p>$$ p_k = \frac{1}{2^9} \sum_{i=0}^{k-1} \binom{9}{i} $$</p> <p>So your intuition that the binomial coefficients were involved was not far off; it's just that they represent not the actual probabilities themselves, but their first differences. An obvious generalization of this yields a similar expression when the probability of rain is $\frac{k}{n}$, with increments of size $\frac{1}{n}$:</p> <p>$$ p_k = \frac{1}{2^{n-1}} \sum_{i=0}^{k-1} \binom{n-1}{i} $$</p> <p>Alas, <a href="https://mathoverflow.net/questions/17202/sum-of-the-first-k-binomial-coefficients-for-fixed-n">this question</a> suggests that no further simplification is likely for general $k$ and $n$. (Obviously, closed forms for some special cases may be obtained.)</p> <hr> <p>ETA: The relatively simple form of the answer makes me wonder if there is a cleverer answer that relies on some analogy between selecting no more than $k$ of $n$ objects and the probability of certain rain starting with a rain probability of $\frac{k}{n}$. But I confess nothing quickly comes to mind.</p>
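The closed form and the recurrence can be checked exactly with rational arithmetic (a small sketch in Python):

```python
from fractions import Fraction
from math import comb

n = 10  # the rain probability moves in steps of 1/n
# Build p_k from the derivation above: the increments p_{k+1} - p_k are
# proportional to binomial(n-1, k), with total increment 1.
q = Fraction(1, 2 ** (n - 1))
p = [q * sum(comb(n - 1, i) for i in range(k)) for k in range(n + 1)]

assert p[0] == 0 and p[n] == 1
assert p[6] == Fraction(191, 256)   # starting at 60%: eternal rain with prob 191/256

# Cross-check against the original linear recurrence
# p_k = (k/n) p_{k+1} + ((n-k)/n) p_{k-1}.
for k in range(1, n):
    assert p[k] == Fraction(k, n) * p[k + 1] + Fraction(n - k, n) * p[k - 1]
```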
<p>[nvm, wrong answer. There should be only 2 states.]</p> <hr> <p>I just came up with a way to solve it. Correct me if I'm wrong.</p> <p>I'll denote rain days as R, and no rain days as N, </p> <p>There are three possible results: eternal R, eternal N, alternating between R&amp;N.</p> <ol> <li><p>By drawing a binary tree, I observed that to reach eternal R, we need 4 days of R + n days of R + n days of N, meaning <strong>4R+n(N+R)</strong>. (n can be any non-negative integer and can be infinitely large.)</p> <p>The change in probability caused by 1N and 1R cancel each other (+10%-10% = 0).</p> <p>For that 4 days of R, regardless of their order or whether they are consecutive or not, the probabilities of the 4R are 0.6, 0.7, 0.8, and 0.9. </p> <p>Thus the probability of eternal R is <strong>0.6*0.7*0.8*0.9*P(n(N+R) with all possible n's)</strong></p></li> <li><p>Same rule applies to N days. To reach eternal N, we need 6 days of N + n days of R + n days of N, meaning <strong>6N+n(N+R)</strong>.</p> <p>Same here, for that 6 days of N, the probabilities of the 6N are 0.4, 0.5, 0.6, 0.7, 0.8, and 0.9. </p> <p>Thus the probability of eternal N is <strong>0.4*0.5*0.6*0.7*0.8*0.9*P(n(N+R) with all possible n's)</strong></p></li> </ol> <p>(From this point, I will denote P(n(N+R) with all possible n's) as P.</p> <ol start="3"> <li><p>For the alternating state, we have 9 sub-states: <strong>nNR, R+nNR, 2R+nNR, 3R+nNR, N+nNR, 2N+nNR, 3N+nNR, 4N+nNR, 5N+nNR</strong></p> <p>Respectively, the probabilities are <strong>P, 0.6*P, 0.6*0.7*P, 0.6*0.7*0.8*P, 0.4*P, 0.4*0.5*P, 0.4*0.5*0.6*P, 0.4*0.5*0.6*0.7*P, 0.4*0.5*0.6*0.7*0.8*P</strong></p></li> </ol> <p>Adding the probability of all three states, we should get 1. 
Thus we can solve for <strong>P = 625/2017</strong></p> <p>Thus probability of eternal R = 0.6*0.7*0.8*0.9*P = <strong>189/2017</strong></p> <p>(The assumption here is that because we are summing the probability for all possible n, from 0 to infinity, I assumed that P stays the same for all three states.) </p>
logic
<p><a href="http://en.wikipedia.org/wiki/Second-order_logic" rel="noreferrer">Wikipedia</a> describes the first-order vs. second-order logic as follows:</p> <blockquote> <p>First-order logic uses only variables that range over <strong>individuals</strong> (elements of the domain of discourse); second-order logic has these variables as well as additional variables that range over <strong>sets of individuals</strong>.</p> </blockquote> <p>It gives $\forall P\,\forall x (x \in P \lor x \notin P)$ as an SO-logic formula, which makes perfect sense to me.</p> <p>However, in a <a href="https://cstheory.stackexchange.com/q/5117/873">post at CSTheory</a>, the poster claimed that $\forall x\forall y(x=y\leftrightarrow\forall z(z\in x\leftrightarrow z\in y))$ is an FO formula. I think this must not be the case, since in the above formula, $x$ and $y$ are <strong>sets of individuals</strong>, while $z$ is an <strong>individual</strong> (and therefore this must be an SO formula).</p> <p>I mentioned this as a comment, but two users commented that ZF can be entirely described by FO logic, and therefore $\forall x\forall y(x=y\leftrightarrow\forall z(z\in x\leftrightarrow z\in y))$ is an FO formula.</p> <p>I'm confused. Could someone explain this please?</p>
<p>It seems to me that you're confusing the first order formulas with their intended interpretations.</p> <p>The language of set theory consists of just a single 2-place predicate symbol, usually denoted $\in$. The statement you quote is a first order statement - it means just what it says: "for all $x$ and for all $y$, ($x = y$ iff for all $z$, ($z \in x$ iff $z \in y$))", but it does <em>not</em> tell you what $x$ is. When you say "but $x$ is a set and $z$ is an individual, so this statement looks second order!", you're adding an interpretation to the picture which is not specified by the first order formula alone - namely that "for all x" means "for all sets x" and "$\in$" means "the usual $\in$ in set theory".</p> <p>In first order logic, this "adding of interpretation" is usually called "exhibiting a model".</p> <p>Here's another way of looking at this same first order statement. Suppose I reinterpret things - I say "for all $x$" means "for all real numbers $x$" and $\in$ means $&lt;$. Then, $\forall x \forall y$ ($x=y$ iff $\forall z$, ($z \in x$ iff $z \in y$)) is a true statement: it says two real numbers are equal iff they have the same collection of smaller things. Notice that in this model, nothing looks second order.</p> <p>By contrast, in second order logic, you are directly referring to subsets, so that in any model (that is, interpretation), $\forall S$ means "for all subsets of whatever set the variable $x$ ranges over".</p>
<p>The statement is indeed a first order statement in standard Set Theory. But it is no wonder you are a bit confused.</p> <p>Most people think of sets and elements as different things. However, in standard set theory this is not the case. </p> <p>In standard set theory, <em>everything</em> is a set (there are no "ur-elements", elements that are not sets). The <em>objects</em> of set theory are sets themselves. The primitive relation $\in$ is a relation between <em>sets</em>, not between ur-elements and sets. So, for example, the Axiom of the Power Set in ZFC states that $$\forall x\exists y(\forall z(z\in y \leftrightarrow z\subseteq x))$$ which is a first order statement, because all the things being quantified over are objects in the theory (namely, "sets").</p> <p>So you need to forget the notion that "elements" are things <em>in</em> sets and sets are things that contain elements. In ZF, <em>everything</em> is a set.</p> <p>It's a little hard to see the distinction between first and second order statements in ZF precisely because "sets" are the objects, and moreover, given any set, there is a <em>set</em> that contains all the subsets (the power set). In a way, the Axioms are set up precisely to allow you to talk about collections of sets without having to go to second order logic. </p> <p>In ZF, to get to second order you need to start talking about "proper classes" or "properties". For instance, that's why Comprehension is not a single axiom, but an entire infinite family of axioms. Comprehension essentially says that for <em>every</em> property $P$ and every object $x$ of the theory (i.e., every set $x$), $\{y\mid y\in x\wedge P(y)\}$ is a set. But trying to quantify over <em>all propositions</em> would be a second order statement. 
Instead, you have an "Axiom Schema" which says that for each property $P$, you have an axiom that says $$\forall x\exists y\forall z\Bigl(z\in y\leftrightarrow\bigl(z\in x\wedge P(z)\bigr)\Bigr).$$ If you try quantifying over "all $P$", <strong>then</strong> you get a second order statement in ZF. </p>
combinatorics
<p>This is a neat little problem that I was discussing today with my lab group out at lunch. Not particularly difficult, but with interesting implications nonetheless.</p> <p>Imagine there are 100 people in line to board a plane that seats 100. The first person in line, Alice, realizes she lost her boarding pass, so when she boards she decides to take a random seat instead. Every person that boards the plane after her will either take their &quot;proper&quot; seat, or if that seat is taken, a random seat instead.</p> <p>Question: What is the probability that the last person that boards will end up in their proper seat?</p> <p>Moreover, and this is the part I'm still pondering about: can you think of a physical system that would follow this combinatorial statistics? Maybe a spin wave function in a crystal, etc.</p>
<p>Here is a rephrasing which simplifies the intuition of this nice puzzle.</p> <p>Suppose whenever someone finds their seat taken, they politely evict the squatter and take their seat. In this case, the first passenger (Alice, who lost her boarding pass) keeps getting evicted (and choosing a new random seat) until, by the time everyone else has boarded, she has been forced by a process of elimination into her correct seat.</p> <p>This process is the same as the original process except for the identities of the people in the seats, so the probability of the last boarder finding their seat occupied is the same.</p> <p>When the last boarder boards, Alice is either in her own seat or in the last boarder's seat, which have both looked exactly the same (i.e. empty) to her up to now, so there is no way poor Alice could be more likely to choose one than the other.</p>
<p>This is a classic puzzle!</p> <p>The answer is that the probability that the last person ends in up in their proper seat is exactly <span class="math-container">$\frac{1}{2}$</span>.</p> <p>The reasoning goes as follows:</p> <p>First observe that the fate of the last person is determined the moment either the first or the last seat is selected! This is because the last person will either get the first seat or the last seat. Any other seat will necessarily be taken by the time the last person gets to 'choose'.</p> <p>Since at each choice step, the first or last is equally probable to be taken, the last person will get either the first or last with equal probability: <span class="math-container">$\frac{1}{2}$</span>.</p> <p>Sorry, no clue about a physical system.</p>
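A seeded Monte Carlo check of the $\frac{1}{2}$ answer, simulating only the chain of displaced passengers (everyone in between simply takes their own seat); the code is my own sketch:

```python
import random

def last_gets_own_seat(n, rng):
    """Simulate one boarding via the displacement chain: if Alice (seat 0)
    sits in seat s, passengers 1..s-1 take their own seats, and passenger s
    chooses uniformly among the free seats {0} union {s+1, ..., n-1}."""
    s = rng.randrange(n)            # Alice's random pick
    while s not in (0, n - 1):
        s = rng.choice([0] + list(range(s + 1, n)))
    return s == 0                   # seat n-1 stayed free for the last boarder

rng = random.Random(0)
trials = 20000
hits = sum(last_gets_own_seat(100, rng) for _ in range(trials))
assert abs(hits / trials - 0.5) < 0.02   # empirical probability is about 1/2
```

Note how the simulation mirrors the accepted answers: the chain ends the moment either Alice's seat or the last seat is chosen, and the two are symmetric.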
matrices
<p>I have a theoretical question. Why are non-square matrices not invertible? </p> <p>I am running into a lot of doubts like this in my introductory study of linear algebra.</p>
<p>I think the simplest way to look at it is by considering the dimensions of the matrices $A$ and $A^{-1}$ and applying simple multiplication.</p> <p>So assume, wlog, that $A$ is $m \times n$ with $n\neq m$. Then $A^{-1}$ has to be $n\times m$, because that's the only way $AA^{-1}=I_m$ can hold.</p> <p>But it must also be true that $A^{-1}A=I$, and since $A^{-1}A$ is an $n\times n$ matrix, the best you can get is $I_n$ rather than $I_m$, which is not in accordance with the definition of an inverse (see ZettaSuro's answer).</p> <p>Hence $m$ must be equal to $n$.</p>
<p><strong>Simple answer</strong>: because by definition a matrix is commutative with its inverse on multiplication. That is: $A^{-1}$ is a matrix such that $AA^{-1}=I_n$ <strong>and</strong> $A^{-1}A=I_n$.</p> <p>For two matrices to commute on multiplication, both must be square.</p> <p><strong>More complicated answer</strong>: There exists a left inverse and a right inverse that is defined for <strong>all</strong> matrices including non-square matrices. For a matrix of dimension $m\times{n}$, the left and right inverse are defined as follows:</p> <p>$$A^L:=\{B|BA=I_n\}$$ $$A^R:=\{B|AB=I_m\}$$</p> <p>If $A^L=A^R$ , by definition $A^L=A^R=A^{-1}$.</p>
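To make the dimension argument concrete, here is a tiny pure-Python sketch (the matrices are my own, chosen for illustration): a non-square matrix can have a one-sided inverse, but the two products have different sizes and cannot both be the identity:

```python
def matmul(A, B):
    """Multiply an m x n matrix by an n x p matrix (lists of rows)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 0, 0],
     [0, 1, 0]]          # 2 x 3
B = [[1, 0],
     [0, 1],
     [0, 0]]             # 3 x 2: a right inverse of A

I2 = [[1, 0], [0, 1]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

assert matmul(A, B) == I2     # A B = I_2 ...
assert matmul(B, A) != I3     # ... but B A is 3 x 3 and is not I_3
```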
linear-algebra
<p>Given two square matrices <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, how do you show that <span class="math-container">$$\det(AB) = \det(A) \det(B)$$</span> where <span class="math-container">$\det(\cdot)$</span> is the determinant of the matrix?</p>
<p>Let's consider the function <span class="math-container">$B\mapsto \det(AB)$</span> as a function of the columns of <span class="math-container">$B=\left(v_1|\cdots |v_i| \cdots | v_n\right)$</span>. It is straightforward to verify that this map is multilinear, in the sense that <span class="math-container">$$\det\left(A\left(v_1|\cdots |v_i+av_i'| \cdots | v_n\right)\right)=\det\left(A\left(v_1|\cdots |v_i| \cdots | v_n\right)\right)+a\det\left(A\left(v_1|\cdots |v_i'| \cdots | v_n\right)\right).$$</span> It is also alternating, in the sense that if you swap two columns of <span class="math-container">$B$</span>, you multiply your overall result by <span class="math-container">$-1$</span>. These properties both follow directly from the corresponding properties for the function <span class="math-container">$A\mapsto \det(A)$</span>.</p> <p>The determinant is completely characterized by these two properties, and the fact that <span class="math-container">$\det(I)=1$</span>. Moreover, any function that satisfies these two properties must be a multiple of the determinant. If you have not seen this fact, you should try to prove it. 
I don't know of a reference online, but I know it is contained in Bretscher's linear algebra book.</p> <p>In any case, because of this fact, we must have that <span class="math-container">$\det(AB)=c\det(B)$</span> for some constant <span class="math-container">$c$</span>, and setting <span class="math-container">$B=I$</span>, we see that <span class="math-container">$c=\det(A)$</span>.</p> <hr /> <p>For completeness, here is a proof of the necessary lemma that any multilinear, alternating function is a multiple of the determinant.</p> <p>We will let <span class="math-container">$f:(\mathbb F^n)^n\to \mathbb F$</span> be a multilinear, alternating function, where, to allow for this proof to work in characteristic 2, we will say that a multilinear function is alternating if it is zero when two of its inputs are equal (this is equivalent to getting a sign when you swap two inputs everywhere except characteristic 2). Let <span class="math-container">$e_1, \ldots, e_n$</span> be the standard basis vectors. 
Then <span class="math-container">$f(e_{i_1},e_{i_2}, \ldots, e_{i_n})=0$</span> if any index occurs twice, and otherwise, if <span class="math-container">$\sigma\in S_n$</span> is a permutation, then <span class="math-container">$f(e_{\sigma(1)}, e_{\sigma(2)},\ldots, e_{\sigma(n)})=(-1)^\sigma$</span>, the sign of the permutation <span class="math-container">$\sigma$</span>.</p> <p>Using multilinearity, one can expand out evaluating <span class="math-container">$f$</span> on a collection of vectors written in terms of the basis:</p> <p><span class="math-container">$$f\left(\sum_{j_1=1}^n a_{1j_1}e_{j_1}, \sum_{j_2=1}^n a_{2j_2}e_{j_2},\ldots, \sum_{j_n=1}^n a_{nj_n}e_{j_n}\right) = \sum_{j_1=1}^n\sum_{j_2=1}^n\cdots \sum_{j_n=1}^n \left(\prod_{k=1}^n a_{kj_k}\right)f(e_{j_1},e_{j_2},\ldots, e_{j_n}).$$</span></p> <p>All the terms with <span class="math-container">$j_{\ell}=j_{\ell'}$</span> for some <span class="math-container">$\ell\neq \ell'$</span> will vanish because the <span class="math-container">$f$</span> term is zero, and the other terms can be written in terms of permutations. If <span class="math-container">$j_{\ell}\neq j_{\ell'}$</span> for any <span class="math-container">$\ell\neq \ell'$</span>, then there is a unique permutation <span class="math-container">$\sigma$</span> with <span class="math-container">$j_k=\sigma(k)$</span> for every <span class="math-container">$k$</span>. This yields:</p> <p><span class="math-container">$$\begin{align}\sum_{j_1=1}^n\sum_{j_2=1}^n\cdots \sum_{j_n=1}^n \left(\prod_{k=1}^n a_{kj_k}\right)f(e_{j_1},e_{j_2},\ldots, e_{j_n}) &amp;= \sum_{\sigma\in S_n} \left(\prod_{k=1}^n a_{k\sigma(k)}\right)f(e_{\sigma(1)},e_{\sigma(2)},\ldots, e_{\sigma(n)}) \\ &amp;= \sum_{\sigma\in S_n} (-1)^{\sigma}\left(\prod_{k=1}^n a_{k\sigma(k)}\right)f(e_{1},e_{2},\ldots, e_{n}) \\ &amp;= f(e_{1},e_{2},\ldots, e_{n}) \sum_{\sigma\in S_n} (-1)^{\sigma}\left(\prod_{k=1}^n a_{k\sigma(k)}\right). 
\end{align} $$</span></p> <p>In the last line, the thing still in the sum is the determinant, although one does not need to realize this fact, as we have shown that <span class="math-container">$f$</span> is completely determined by <span class="math-container">$f(e_1,\ldots, e_n)$</span>, and we simply define <span class="math-container">$\det$</span> to be such a function with <span class="math-container">$\det(e_1,\ldots, e_n)=1$</span>.</p>
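<p>The permutation sum in the last line can be sanity-checked numerically; here is a small sketch (Python with numpy, my own addition) implementing that sum directly and confirming both agreement with the built-in determinant and the multiplicativity <span class="math-container">$\det(AB)=\det(A)\det(B)$</span>:</p>

```python
import itertools
import numpy as np

def sign(perm):
    """Sign of a permutation given as a tuple, via its inversion count."""
    n = len(perm)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def leibniz_det(M):
    """Determinant via the permutation sum derived above."""
    n = len(M)
    return sum(sign(p) * np.prod([M[k, p[k]] for k in range(n)])
               for p in itertools.permutations(range(n)))

rng = np.random.default_rng(0)
A, B = rng.random((3, 3)), rng.random((3, 3))
print(np.isclose(leibniz_det(A), np.linalg.det(A)))               # True
print(np.isclose(leibniz_det(A @ B),
                 leibniz_det(A) * leibniz_det(B)))                # True
```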
<p>The proof using elementary matrices can be found e.g. on <a href="http://www.proofwiki.org/wiki/Determinant_of_Matrix_Product" rel="noreferrer">proofwiki</a>. It's basically the same proof as given in Jyrki Lahtonen 's comment and Chandrasekhar's link.</p> <p>There is also a proof using block matrices, I googled a bit and I was only able to find it in <a href="http://books.google.com/books?id=N871f_bp810C&amp;pg=PA112&amp;dq=determinant+product+%22alternative+proof%22&amp;hl=en&amp;ei=EPlZToKHOo6aOuyapJIM&amp;sa=X&amp;oi=book_result&amp;ct=result&amp;resnum=2&amp;ved=0CC8Q6AEwAQ#v=onepage&amp;q=determinant%20product%20%22alternative%20proof%22&amp;f=false" rel="noreferrer">this book</a> and <a href="http://www.mth.kcl.ac.uk/%7Ejrs/gazette/blocks.pdf" rel="noreferrer">this paper</a>.</p> <hr /> <p>I like the approach which I learned from Sheldon Axler's Linear Algebra Done Right, <a href="http://books.google.com/books?id=ovIYVIlithQC&amp;printsec=frontcover&amp;dq=linear+algebra+done+right&amp;hl=en&amp;ei=H-xZTuutJoLrOdvrwaMM&amp;sa=X&amp;oi=book_result&amp;ct=result&amp;resnum=1&amp;ved=0CCkQ6AEwAA#v=onepage&amp;q=%2210.31%22&amp;f=false" rel="noreferrer">Theorem 10.31</a>. Let me try to reproduce the proof here.</p> <p>We will use several results in the proof, one of them is - as far as I can say - a little less known. 
It is the <a href="http://www.proofwiki.org/wiki/Determinant_as_Sum_of_Determinants" rel="noreferrer">theorem</a> which says that if two matrices <span class="math-container">$A$</span> and <span class="math-container">$B$</span> differ only in the <span class="math-container">$k$</span>-th row, and the matrix <span class="math-container">$C$</span> has as its <span class="math-container">$k$</span>-th row the sum of the <span class="math-container">$k$</span>-th rows of <span class="math-container">$A$</span> and <span class="math-container">$B$</span> while its other rows are the same as in <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, then <span class="math-container">$|C|=|A|+|B|$</span>.</p> <p><a href="https://math.stackexchange.com/questions/668/whats-an-intuitive-way-to-think-about-the-determinant">Geometrically</a>, this corresponds to adding two parallelepipeds with the same base.</p> <hr /> <p><strong>Proof.</strong> Let us denote the rows of <span class="math-container">$A$</span> by <span class="math-container">$\vec\alpha_1,\ldots,\vec\alpha_n$</span>. 
Thus <span class="math-container">$$A= \begin{pmatrix} a_{11} &amp; a_{12}&amp; \ldots &amp; a_{1n}\\ a_{21} &amp; a_{22}&amp; \ldots &amp; a_{2n}\\ \vdots &amp; \vdots&amp; \ddots &amp; \vdots \\ a_{n1} &amp; a_{n2}&amp; \ldots &amp; a_{nn} \end{pmatrix}= \begin{pmatrix} \vec\alpha_1 \\ \vec\alpha_2 \\ \vdots \\ \vec\alpha_n \end{pmatrix}$$</span></p> <p>Directly from the definition of matrix product we can see that the rows of <span class="math-container">$A\cdot B$</span> are of the form <span class="math-container">$\vec\alpha_kB$</span>, i.e., <span class="math-container">$$A\cdot B=\begin{pmatrix} \vec\alpha_1B \\ \vec\alpha_2B \\ \vdots \\ \vec\alpha_nB \end{pmatrix}$$</span> Since <span class="math-container">$\vec\alpha_k=\sum_{i=1}^n a_{ki}\vec e_i$</span>, we can rewrite this equality as <span class="math-container">$$A\cdot B=\begin{pmatrix} \sum_{i_1=1}^n a_{1i_1}\vec e_{i_1} B\\ \vdots\\ \sum_{i_n=1}^n a_{ni_n}\vec e_{i_n} B \end{pmatrix}$$</span> Using the theorem on the sum of determinants multiple times we get <span class="math-container">$$ |{A\cdot B}|= \sum_{i_1=1}^n a_{1i_1} \begin{vmatrix} \vec e_{i_1}B\\ \sum_{i_2=1}^n a_{2i_2}\vec e_{i_2} B\\ \vdots\\ \sum_{i_n=1}^n a_{ni_n}\vec e_{i_n} B \end{vmatrix}= \ldots = \sum_{i_1=1}^n \ldots \sum_{i_n=1}^n a_{1i_1} a_{2i_2} \dots a_{ni_n} \begin{vmatrix} \vec e_{i_1} B \\ \vec e_{i_2} B \\ \vdots \\ \vec e_{i_n} B \end{vmatrix} $$</span></p> <p>Now notice that if <span class="math-container">$i_j=i_k$</span> for some <span class="math-container">$j\ne k$</span>, then the corresponding determinant in the above sum is zero (it has two <a href="http://www.proofwiki.org/wiki/Square_Matrix_with_Duplicate_Rows_has_Zero_Determinant" rel="noreferrer">identical rows</a>). 
Thus the only nonzero summands are those for which the <span class="math-container">$n$</span>-tuple <span class="math-container">$(i_1,i_2,\dots,i_n)$</span> represents a permutation of the numbers <span class="math-container">$1,\ldots,n$</span>. Thus we get <span class="math-container">$$|{A\cdot B}|=\sum_{\varphi\in S_n} a_{1\varphi(1)} a_{2\varphi(2)} \dots a_{n\varphi(n)} \begin{vmatrix} \vec e_{\varphi(1)} B \\ \vec e_{\varphi(2)} B \\ \vdots \\ \vec e_{\varphi(n)} B \end{vmatrix}$$</span> (Here <span class="math-container">$S_n$</span> denotes the set of all permutations of <span class="math-container">$\{1,2,\dots,n\}$</span>.) The matrix on the RHS of the above equality is the matrix <span class="math-container">$B$</span> with permuted rows. Using several transpositions of rows we can get the matrix <span class="math-container">$B$</span>. We will show that this can be done using <span class="math-container">$i(\varphi)$</span> transpositions, where <span class="math-container">$i(\varphi)$</span> denotes the number of <a href="http://en.wikipedia.org/wiki/Inversion_%28discrete_mathematics%29" rel="noreferrer">inversions</a> of <span class="math-container">$\varphi$</span>. Using this fact we get <span class="math-container">$$|{A\cdot B}|=\sum_{\varphi\in S_n} a_{1\varphi(1)} a_{2\varphi(2)} \dots a_{n\varphi(n)} (-1)^{i(\varphi)} |{B}| =|A|\cdot |B|.$$</span></p> <p>It remains to show that we need <span class="math-container">$i(\varphi)$</span> transpositions. We can transform the &quot;permuted matrix&quot; to the matrix <span class="math-container">$B$</span> as follows: we first move the first row of <span class="math-container">$B$</span> to the first place by exchanging it with the preceding row until it is in the correct position. (If it already is in the first position, we make no exchanges at all.) 
The number of transpositions we have used is exactly the number of inversions of <span class="math-container">$\varphi$</span> that contain the number 1. Now we can move the second row to the second place in the same way. We will use the same number of transpositions as the number of inversions of <span class="math-container">$\varphi$</span> containing 2 but not containing 1. (Since the first row is already in place.) We continue in the same way. We see that by using this procedure we obtain the matrix <span class="math-container">$B$</span> after <span class="math-container">$i(\varphi)$</span> row transpositions.</p>
number-theory
<p>There is an interesting recent article "<a href="https://www.newscientist.com/article/2080613-mathematicians-shocked-to-find-pattern-in-random-prime-numbers/" rel="noreferrer"><em>Mathematicians shocked to find pattern in "random" prime numbers</em></a>" in <em>New Scientist</em>. (Don't you love math titles in the popular press? Compare to the source paper's <em>Unexpected Biases in the Distribution of Consecutive Primes</em>.)</p> <p>To summarize, let $p,q$ be <strong><em>consecutive primes</em></strong> of form $a\pmod {10}$ and $b\pmod {10}$, respectively. In the <a href="http://arxiv.org/pdf/1603.03720v1.pdf" rel="noreferrer">paper</a> by K. Soundararajan and R. Lemke Oliver, here is the number $N$ (in million units) of such pairs for the first hundred million primes modulo $10$,</p> <p>$$\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline &amp;a&amp;b&amp;\color{blue}N&amp;&amp;a&amp;b&amp;\color{blue}N&amp;&amp;a&amp;b&amp;\color{blue}N&amp;&amp;a&amp;b&amp;\color{blue}N\\ \hline &amp;1&amp;3&amp;7.43&amp;&amp;3&amp;7&amp;7.04&amp;&amp;7&amp;9&amp;7.43&amp;&amp;9&amp;1&amp;7.99\\ &amp;1&amp;7&amp;7.50&amp;&amp;3&amp;9&amp;7.50&amp;&amp;7&amp;1&amp;6.37&amp;&amp;9&amp;3&amp;6.37\\ &amp;1&amp;9&amp;5.44&amp;&amp;3&amp;1&amp;6.01&amp;&amp;7&amp;3&amp;6.76&amp;&amp;9&amp;7&amp;6.01\\ &amp;1&amp;1&amp;\color{brown}{4.62}&amp;&amp;3&amp;3&amp;\color{brown}{4.44}&amp;&amp;7&amp;7&amp;\color{brown}{4.44}&amp;&amp;9&amp;9&amp;\color{brown}{4.62}\\ \hline \text{Total}&amp; &amp; &amp;24.99&amp;&amp; &amp; &amp;24.99&amp;&amp; &amp; &amp;25.00&amp;&amp; &amp; &amp;24.99\\ \hline \end{array}$$</p> <p>As expected, each class $a$ has a total of $25$ million primes (after rounding). The "shocking" thing, according to the article, is that if the primes were truly <strong><em>random</em></strong>, then it is reasonable to expect that each subclass will have $\color{blue}{N=25/4 = 6.25}$. 
As the present data shows, this is apparently not the case.</p> <p><strong>Argument</strong>: The disparity seems to make sense. For example, let $p=11$, so $a=1$ . Since $p,q$ are <em>consecutive primes</em>, then, of course, subsequent numbers are not chosen at random. Wouldn't it be more likely the next prime will end in the "closer" $3$ or $7$ such as $q=13$ or $q=17$, rather than looping back to the same end digit, like $q=31$? (I've taken the liberty of re-arranging the table to reflect this.) </p> <p>However, what is surprising is the article concludes, and I quote, <em>"...as the primes stretch to infinity, they do eventually shake off the pattern and give the random distribution mathematicians are used to expecting."</em></p> <p><strong>Question:</strong> What is an effective way to counter the argument given above and come up with the same conclusion as in the article? (Will <strong><em>all</em></strong> the $N$ eventually approach $N\to 6.25$, with the unit suitably adjusted?) Or is the conclusion based on a conjecture and may not be true?</p> <p><strong>P.S:</strong> A more enlightening popular article "<a href="https://www.quantamagazine.org/20160313-mathematicians-discover-prime-conspiracy/" rel="noreferrer"><em>Mathematicians Discover Prime Conspiracy</em></a>". (It turns out the same argument is mentioned there, but with a subtle way to address it.)</p>
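<p>For a quick empirical look on a much smaller scale, the bias is already visible among the first few thousand primes; here is a rough sketch (Python, assuming <code>sympy</code> is installed; not from the paper) tallying last-digit pairs of consecutive primes:</p>

```python
from collections import Counter
from sympy import nextprime

N = 10_000                 # number of consecutive-prime pairs to tally
counts = Counter()
p = 7                      # start past 2, 3, 5 so last digits are 1, 3, 7, 9
for _ in range(N):
    q = nextprime(p)
    counts[(p % 10, q % 10)] += 1
    p = q

# Repeated last digits are noticeably under-represented even in this range:
print(counts[(1, 1)], counts[(1, 3)])   # (1,1) much rarer than (1,3)
print(counts[(9, 9)], counts[(9, 1)])   # (9,9) much rarer than (9,1)
```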
<p>$ \qquad \qquad $ <em>Remark: see also [update 3] at end</em> </p> <h2>1. First observations</h2> <p>I think there is at least one artifact (=non-random) in that list of frequencies.</p> <p>If we rewrite this as a "correlation"-table, <em>(the row-header indicates the residue classes of the smaller prime p and the column-header that of the larger prime q)</em>:<br> $$ \small \begin{array} {r|rrrr} &amp; 1&amp;3&amp;7&amp;9 \\ \hline 1&amp; 4.62&amp; 7.43&amp; 7.50&amp; 5.44\\ 3&amp; 6.01&amp; 4.44&amp; 7.04&amp; 7.50\\ 7&amp; 6.37&amp; 6.76&amp; 4.44&amp; 7.43\\ 9&amp; 7.99&amp; 6.37&amp; 6.01&amp; 4.62 \end{array}$$ then a surprising observation is surely the striking symmetry around the antidiagonal. But also the asymmetric increase of frequencies from top-right to bottom-left on the antidiagonal is somehow surprising.</p> <p>However, if we look at this table in terms of <strong><em>primegaps</em></strong>, then </p> <ul> <li>residue-pairs $(1,1)$, $(3,3)$, $(7,7)$, $(9,9)$ (the diagonal) refer to primegaps of the lengths $(10,20,30,...,10k,...)$ and those are the entries in the table with lowest frequencies, </li> <li>residue-pairs $(1,3)$, $(7,9)$ and $(9,1)$ refer to primegaps of the lengths $(2,12,22,32,...,10k+2,...)$ and those contain the entry with the highest frequencies </li> <li>residue-pairs $(3,7)$, $(7,1)$, $(9,3)$ refer to primegaps of the lengths $(4,14,24,34,...,10k+4,...)$ </li> <li>residue-pairs $(1,7)$, $(3,9)$ and $(7,3)$ refer to primegaps of the lengths $(6,16,26,36,...,10k+6,...)$ and have the two next-largest frequencies </li> <li>residue-pairs $(1,9)$, $(3,1)$ and $(9,7)$ refer to primegaps of the lengths $(8,18,28,38,...,10k+8,...)$</li> </ul> <p>so the (at first sight surprising) difference in frequencies of the pairs $(1,9)$ and $(9,1)$ occurs because one collects the gaps of (minimal) length 8 and the other those of (minimal) length 2 - and the latter are much more frequent, which is completely compatible with the general distribution of
primegaps. The following images show the distribution of the primegaps modulo 100 (whose greater number of residue classes should make the problem more transparent).</p> <p>(I've left the primes smaller than 10 out of the computation): </p> <p><a href="https://i.sstatic.net/mkePl.png" rel="noreferrer"><img src="https://i.sstatic.net/mkePl.png" alt="image"></a> </p> <p>in logarithmic scale <a href="https://i.sstatic.net/6eJ9v.png" rel="noreferrer"><img src="https://i.sstatic.net/6eJ9v.png" alt="imglog"></a></p> <p>We see the clear logarithmic decrease of frequencies with a small jittering disturbance over the residue classes. It is also obvious that the smaller primegaps dominate the larger ones, so that a "slot" which catches the primegaps of lengths $2,12,22,...$ has more occurrences than the "slot" which catches $8,18,28,...$ - just by the frequencies in the very first residue class. The original table of frequencies in the residue classes modulo 10 splits this into 16 combinations of pairs of 4 residue classes, and the observed non-smoothness is due to that general jitter in the residue classes of the primegaps.</p> <p>It might also be interesting to see the primegap frequencies separated into three subclasses: </p> <p><a href="https://i.sstatic.net/SYD2s.png" rel="noreferrer"><img src="https://i.sstatic.net/SYD2s.png" alt="image"></a> </p> <p>That trisection shows the collected residue classes $6,12,18,...$ (the green line) as dominant over the two other collections, and the two other collections change "priority" over the single residue classes.<br> The modulo-10 problem overlays those curves a bit and irons the variation out a bit, even making it a bit less visible - but not completely: because the general distribution of residue classes in the primegaps has such a strong dominance in the small residue classes. So I think that general distribution characteristic explains the modulo-10 problem, however a bit less obviously... </p> <h2>2. 
Further observations (update 2)</h2> <p>For further analysis of the remaining jitter in the previous image I've tried to de-trend the frequency distribution of the primegaps (however, now without modulo considerations!).<br> Here is what I got on the basis of <em>5 700 000</em> primes and the first <em>75</em> nonzero gap lengths <em>g</em>. The regression formula was simply created by the Excel spreadsheet: <a href="https://i.sstatic.net/XAvdu.png" rel="noreferrer"><img src="https://i.sstatic.net/XAvdu.png" alt="image_trend"></a> </p> <p>De-trending means computing the difference between the true frequencies $\small f(g)$ and the estimated ones; however, the frequency residuals $\small r_0(g)=f(g) - 16.015 e^{-0.068 g }$ decrease in absolute value with the value of <em>g</em>. Heuristically I applied a further detrending function to the residuals $\small r_0(g)$, so that I got $\small r_1(g) = r_0(g) \cdot 1.07^g $, which now look much better de-trended. </p> <p>This is the plot of the residuals $\small r_1(g)$:<br> <a href="https://i.sstatic.net/yII9Y.png" rel="noreferrer"><img src="https://i.sstatic.net/yII9Y.png" alt="image_resid"></a> </p> <p>Now we see periodic occurrences of peaks in steps of <em>6</em> and even some apparent overlay. Thus I marked the small primefactors $\small (3,5,7,11)$ in <em>g</em>, and we see a strong hint of an additive composition due to those primefactors in $g$:<br> <a href="https://i.sstatic.net/v1DGf.png" rel="noreferrer"><img src="https://i.sstatic.net/v1DGf.png" alt="image_resid_primefactors"></a><br> The red dots mark those <em>g</em> divisible by <em>3</em>, green dots those divisible by <em>5</em>, and we see that at <em>g</em> divisible by both, the frequency is increased even further. </p> <p>I've also tried a multiple regression using those small primefactors on those residuals, but this is still in progress.... </p> <h2>3. 
Observations after Regression/Detrending (update 3)</h2> <p>Using multiple regression to detrend the frequencies of primegaps by their length <em>g</em> and additionally by the primefactors of <em>g</em>, I initially got a strong surviving pattern with peaks for the primefactor <em>5</em>. But those peaks could be explained by the observation that <em>(mod 100)</em> there are <em>40</em> residues of the prime <em>p</em> at which the gap length <em>g=0 (mod 10)</em> can occur, but only <em>30</em> residues at which the other gap lengths can occur.<br> Thus I computed the <em>relative (logarithmized) frequencies</em> as $\text{fl}(g)=\ln(f(g)/m_p(g))$, where $f(g)$ is the frequency of that gap length, and $m_p(g)$ the number of possible residue classes of the (first) prime <em>p</em> (in the pair <em>(p,q)</em>) at which the gap length <em>g</em> can occur. </p> <p>The first result is the following picture, where only the general decreasing trend of the frequencies of larger gaps is detrended: </p> <p><a href="https://i.sstatic.net/krMF2.png" rel="noreferrer"><img src="https://i.sstatic.net/krMF2.png" alt="picture_regr_1"></a> </p> <p>This computation gives a residual $\text{res}_0$, which is the relative (logarithmized) frequency after the length of the primegap is held constant (see the equation in the picture). The regular pattern of peaks at <em>5</em>-steps in the earlier pictures is now practically removed. </p> <p>However, there is still the <em>3</em>-step pattern, which indicates the dominance of gap length <em>6</em>. I then tried to remove the primefactorization of <em>g</em> via additional predictors. I included marker variables for primefactors <em>q</em> from <em>3</em> to <em>29</em> in the multiple regression equation, and the following picture shows the residuals $\text{res}_1(g)$ after the systematic influence of the primefactorization of <em>g</em> is removed. 
</p> <p><a href="https://i.sstatic.net/AXMad.png" rel="noreferrer"><img src="https://i.sstatic.net/AXMad.png" alt="picture_regr_2"></a> </p> <p>Besides a soft, long hill-like trend, this picture has no further systematic pattern visible (to me) that would indicate non-random influences. </p> <p><em>(For me this is now enough, and I'll step out - but I'm still curious whether someone else will find more)</em></p>
<p>It seems unreasonable to expect the prime numbers to "know" which primes are adjacent. The consecutive prime bias must be a symptom of a more general phenomenon. Some experimentation shows that each prime seems to repel others in its residue class, over considerable distance.</p> <p>Fix a prime $p$ and select primes $q \gg p$. Let $r$ be a reasonably large radius, such as $p \log q$, and let $n$ range over the interval $(q - r, q + r)$. Ignoring $q$, $n$ seems to be prime less often for $n \equiv q \pmod p$ than for $n \not\equiv q \pmod p$.</p> <p>For example, with $p = 7$ and $q$ ranging from $7^7$ to $7^8$, these are the primes counted in each residue class (with many overlaps):</p> <p>$$ \small \begin{array} {r|rrrrrr} [q] &amp; n \equiv +1 &amp; +2 &amp; \mathit{+3} &amp; -3 &amp; \mathit{-2} &amp; \mathit{-1}\\ \hline +1 &amp; 108980 &amp; 128952 &amp; 126384 &amp; 127903 &amp; 128088 &amp; 126665\\ +2 &amp; 128952 &amp; 108641 &amp; 128836 &amp; 126463 &amp; 127911 &amp; 127999\\ \mathit{+3} &amp; 126386 &amp; 128838 &amp; 108915 &amp; 128655 &amp; 126043 &amp; 128555\\ -3 &amp; 127904 &amp; 126464 &amp; 128655 &amp; 108843 &amp; 129049 &amp; 126684\\ \mathit{-2} &amp; 128087 &amp; 127910 &amp; 126040 &amp; 129046 &amp; 109062 &amp; 129065\\ \mathit{-1} &amp; 126665 &amp; 128001 &amp; 128553 &amp; 126686 &amp; 129068 &amp; 109293\\ \end{array}$$</p> <p>Italics indicate the quadratic nonresidues, which do not account for the smaller biases.</p> <p>The repulsion persists even for intervals $(q + p \log q, q + \sqrt{q})$, which gives 10-30 primes per residue class around each $q$:</p> <p>$$ \small \begin{array} {r|rrrrrr} [q] &amp; n \equiv +1 &amp; +2 &amp; \mathit{+3} &amp; -3 &amp; \mathit{-2} &amp; \mathit{-1}\\ \hline +1 &amp; 1009455 &amp; 1015043 &amp; 1015079 &amp; 1014692 &amp; 1012735 &amp; 1014648\\ +2 &amp; 1010366 &amp; 1006394 &amp; 1015175 &amp; 1012825 &amp; 1014562 &amp; 1011749\\ \mathit{+3} &amp; 1014932 &amp; 1010510 &amp; 1008805 &amp; 
1014377 &amp; 1015580 &amp; 1017266\\ -3 &amp; 1012473 &amp; 1013167 &amp; 1011447 &amp; 1007058 &amp; 1015711 &amp; 1014626\\ \mathit{-2} &amp; 1017126 &amp; 1011133 &amp; 1014870 &amp; 1010336 &amp; 1008950 &amp; 1016188\\ \mathit{-1} &amp; 1015821 &amp; 1014746 &amp; 1012491 &amp; 1014960 &amp; 1012051 &amp; 1010063\\ \end{array}$$</p> <p>Since there's nothing special about $p = 7$, the repulsion likely occurs for all $p$. This means that simply by determining $q$ to be prime, we learn something about many composite numbers in arbitrary residue classes, without locating any of them precisely.</p> <hr> <p>The following is a plot of $\frac{\phi(n)}{n}$ for odd $n$ with $p = 11, r = 2 \cdot 11^2$ (horizontal), in residue classes modulo $11^2$ (vertical), averaged over all intervals about $q \in (11^5, 11^6)$ and normalized, with $n \equiv q \pmod{11}$ in green, scaled to 2x2 tiles. Dark tiles rarely correspond to primes.</p> <p><a href="https://i.sstatic.net/wBR7I.png" rel="noreferrer"><img src="https://i.sstatic.net/wBR7I.png" alt="$\frac{\phi(n)}{n}$"></a></p> <p>First differences:</p> <p><a href="https://i.sstatic.net/3PNJw.png" rel="noreferrer"><img src="https://i.sstatic.net/3PNJw.png" alt="first differences of $\frac{\phi(n)}{n}$"></a></p>
probability
<p>100 prisoners are imprisoned in solitary cells. Each cell is windowless and soundproof. There's a central living room with one light bulb; the bulb is initially off. No prisoner can see the light bulb from his or her own cell. Each day, the warden picks a prisoner equally at random, and that prisoner visits the central living room; at the end of the day the prisoner is returned to his cell. While in the living room, the prisoner can toggle the bulb if he or she wishes. Also, the prisoner has the option of asserting the claim that all 100 prisoners have been to the living room. If this assertion is false (that is, some prisoners still haven't been to the living room), all 100 prisoners will be shot for their stupidity. However, if it is indeed true, all prisoners are set free and inducted into MENSA, since the world can always use more smart people. Thus, the assertion should only be made if the prisoner is 100% certain of its validity.</p> <p>Before this whole procedure begins, the prisoners are allowed to get together in the courtyard to discuss a plan. What is the optimal plan they can agree on, so that eventually, someone will make a correct assertion?</p> <p><a href="http://www.ocf.berkeley.edu/~wwu/riddles/hard.shtml">http://www.ocf.berkeley.edu/~wwu/riddles/hard.shtml</a></p> <p>I was wondering: does anyone know the 4,000-day solution? Also, does anyone have ideas on how to solve this?</p> <p><a href="http://www.siam.org/">http://www.siam.org/</a> is offering prize money for a proof of an optimal solution. To make it clear: the prisoners want the plan that minimizes the average time until someone can correctly make the assertion, i.e. the average run time should be as low as possible.</p>
<p>The "standard" solution for the puzzle is the following. Prisoners choose one of them, let's call him "The Counter". The job of The Counter is to count the prisoners who have been in the living room. The other prisoners' job is to give a signal to The Counter that they have been in the living room. This is done as follows. The Counter is the only person who can turn the light off. Others can only turn the light on, and do it only once. That is, if a prisoner walks into the living room, the light bulb is off, and he has never turned the bulb on, then he turns it on. The bulb will stay on until The Counter comes and turns it off. When The Counter has turned the bulb off 99 times, he knows all prisoners have been in the living room.</p> <p>I don't know the exact efficiency of this solution, but your link states that the standard solution takes 27-28 years on average, so I suspect that is its efficiency.</p>
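<p>That figure can be checked by simulation; here is a rough sketch (plain Python, my own addition, with prisoner 0 acting as The Counter):</p>

```python
import random

def single_counter_days(n=100, rng=random.Random(1)):
    """Days until the Counter has turned the light off n-1 times."""
    light = False
    signalled = [False] * n      # has prisoner i ever turned the light on?
    count, day = 0, 0
    while count < n - 1:
        day += 1
        p = rng.randrange(n)
        if p == 0:               # The Counter
            if light:
                light = False
                count += 1
        elif not light and not signalled[p]:
            light = True
            signalled[p] = True
    return day

avg = sum(single_counter_days() for _ in range(200)) / 200
print(avg / 365)   # on the order of 28 years, consistent with the estimate
```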
<p>Here's one way to improve on Levon's solution. I don't have time to do a full probability analysis just now, but my very rough estimates suggest that with careful tuning, techniques like this should be able to get the average running time down to around 4000 days:</p> <p>It's not certain but "pretty likely" that every prisoner gets to visit the living room at least once during the first, say, 800 days. So divide the strategy into several phases:</p> <ul> <li><p><strong>Day 1 to 800:</strong> Everyone toggles the lamp <em>the first time</em> they see the living room. If we're lucky, after day 800 there will be 50 prisoners who have ever turned <em>off</em> the lamp. They are the "active" prisoners. (If we're unlucky and there's someone who hasn't been in the living room yet, there will be fewer than 50 active prisoners.)</p></li> <li><p><strong>Day 801 to 1600:</strong> Every prisoner who is <em>active</em> after the first phase toggles the lamp the first time they see the living room during this period. (The prisoner who enters on day 801 checks that the lamp is already off before he does his part -- if it is not, something will have gone wrong, but he proceeds according to plan anyway.) The prisoners who turned the lamp <em>off</em> during this phase are still active. If we're lucky there are 25 of them.</p></li> <li><p><strong>Day 1601 to 2400:</strong> Halve the number of active prisoners once again. This time those who turned the lamp <em>on</em> stay active; there are now 13 active prisoners unless something went wrong.</p></li> <li><p><strong>Day 2401 to 6000:</strong> Count the 13 still active prisoners using the standard algorithm, with a Counter selected in advance.</p></li> </ul> <p>If anybody finds themselves still in prison on day 6001, everyone shrugs and starts the entire algorithm over from day 1. 
Since 3600 days should be plenty of time to count 13 active prisoners, the main risk is that someone who needed to didn't get to participate in one of the first three phases, and since the risk of that is small, it won't affect the <em>expected</em> running time of the algorithm much.</p> <p>With the parameters given here (and several approximation shortcuts in the calculation) this strategy has an average running time of 4353 days. But there's room for some optimizations by tweaking the numbers. It appears that 800 days is close to optimal for the first halving, but the later halvings can be shortened a bit without incurring quite as large a total risk of missing an active prisoner and having to restart.</p> <p><strong>EDIT:</strong> After a semi-systematic search for better parameters it seems that the best this method can get is an average running time of 4227 days (about 11½ years), which is achieved by three halving phases of 840, 770 and 700 days, followed by a counting phase of 2200 days. Neither more halving phases nor fewer ones improved the overall efficiency. Back to the drawing board ...</p>
matrices
<p>In a scientific paper, I've seen the following</p> <p><span class="math-container">$$\frac{\delta K^{-1}}{\delta p} = -K^{-1}\frac{\delta K}{\delta p}K^{-1}$$</span></p> <p>where <span class="math-container">$K$</span> is a <span class="math-container">$n \times n$</span> matrix that depends on <span class="math-container">$p$</span>. In my calculations I would have done the following</p> <p><span class="math-container">$$\frac{\delta K^{-1}}{\delta p} = -K^{-2}\frac{\delta K}{\delta p}=-K^{-T}K^{-1}\frac{\delta K}{\delta p}$$</span></p> <p>Is my calculation wrong?</p> <p>Note: I think <span class="math-container">$K$</span> is symmetric.</p>
<p>The major trouble in matrix calculus is that the things are no longer commuting, but one tends to use formulae from the scalar function calculus like $(x(t)^{-1})'=-x(t)^{-2}x'(t)$ replacing $x$ with the matrix $K$. <strong>One has to be more careful here and pay attention to the order</strong>. The easiest way to get the derivative of the inverse is to derivate the identity $I=KK^{-1}$ <em>respecting the order</em> $$ \underbrace{(I)'}_{=0}=(KK^{-1})'=K'K^{-1}+K(K^{-1})'. $$ Solving this equation with respect to $(K^{-1})'$ (again paying attention to the order (!)) will give $$ K(K^{-1})'=-K'K^{-1}\qquad\Rightarrow\qquad (K^{-1})'=-K^{-1}K'K^{-1}. $$</p>
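<p>The identity is easy to sanity-check numerically; here is a small sketch (Python with numpy; the example matrix is my own) comparing a finite-difference derivative of $K(p)^{-1}$ with $-K^{-1}K'K^{-1}$, and showing that the naive scalar-calculus guess $-K^{-2}K'$ does not match:</p>

```python
import numpy as np

# A symmetric matrix-valued function K(p) and its exact derivative K'(p).
def K(p):
    return np.array([[2.0 + p, 1.0],
                     [1.0, 3.0 + p**2]])

def dK(p):
    return np.array([[1.0, 0.0],
                     [0.0, 2.0 * p]])

p, h = 0.7, 1e-6
Kinv = np.linalg.inv(K(p))

# Finite-difference derivative of the inverse ...
fd = (np.linalg.inv(K(p + h)) - np.linalg.inv(K(p - h))) / (2 * h)
# ... versus the closed form -K^{-1} K' K^{-1}.
exact = -Kinv @ dK(p) @ Kinv

print(np.allclose(fd, exact, atol=1e-6))                  # True
# The naive guess -K^{-2} K' fails because Kinv and K' don't commute:
print(np.allclose(fd, -Kinv @ Kinv @ dK(p), atol=1e-6))   # False
```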
<p>Yes, your calculation is wrong: note that $K$ may not commute with $\frac{\partial K}{\partial p}$, hence you must apply the chain rule correctly. The derivative of $\def\inv{\mathrm{inv}}\inv \colon \def\G{\mathord{\rm GL}}\G_n \to \G_n$ is <strong>not</strong> given by $\inv'(A)B = -A^{-2}B$, but by $\inv'(A)B = -A^{-1}BA^{-1}$. To see that, note that for small enough $B$ we have \begin{align*} \inv(A + B) &amp;= (A + B)^{-1}\\ &amp;= (\def\I{\mathord{\rm Id}}\I + A^{-1}B)^{-1}A^{-1}\\ &amp;= \sum_k (-1)^k (A^{-1}B)^kA^{-1}\\ &amp;= A^{-1} - A^{-1}BA^{-1} + o(\|B\|) \end{align*} Hence, $\inv'(A)B= -A^{-1}BA^{-1}$, and therefore, by the chain rule $$ \partial_p (\inv \circ K) = (\inv'\circ K)\bigl(\partial_p K\bigr) = -K^{-1}(\partial_p K) K^{-1} $$ </p>
probability
<p>A fair coin is tossed repeatedly until 5 consecutive heads occurs. </p> <p>What is the expected number of coin tosses?</p>
<p>Let $e$ be the expected number of tosses. It is clear that $e$ is finite.</p> <p>Start tossing. If we get a tail immediately (probability $\frac{1}{2}$) then the expected number is $e+1$. If we get a head then a tail (probability $\frac{1}{4}$), then the expected number is $e+2$. Continue $\dots$. If we get $4$ heads then a tail, the expected number is $e+5$. Finally, if our first $5$ tosses are heads, then the expected number is $5$. Thus $$e=\frac{1}{2}(e+1)+\frac{1}{4}(e+2)+\frac{1}{8}(e+3)+\frac{1}{16}(e+4)+\frac{1}{32}(e+5)+\frac{1}{32}(5).$$ Solve this linear equation for $e$. We get $e=62$. </p>
<p>Let's calculate, for $n$ consecutive heads, the expected number of tosses needed.</p> <p>Denote by $E_n$ the expected number of tosses needed for $n$ consecutive heads. After the $E_{n-1}$ tosses that produce $n-1$ consecutive heads, we toss once more. If it is a head, we have $n$ consecutive heads; if it is a tail, we must start over.</p> <p>So the two scenarios cost: </p> <ol> <li>$E_{n-1}+1$ (the extra toss is a head)</li> <li>$E_{n-1}+1+E_{n}$ (the extra toss is a tail, and we start over)</li> </ol> <p>Each occurs with probability $\frac12$, so $E_n=\frac12(E_{n-1} +1)+\frac12(E_{n-1}+ E_n+ 1)$, which gives $E_n= 2E_{n-1}+2$.</p> <p>We now have a recurrence relation. Define $f(n)=E_n+2$, with $f(0)=2$. Then</p> <p>\begin{align} f(n)&amp;=2f(n-1) \\ \implies f(n)&amp;=2^{n+1} \end{align}</p> <p>Therefore, $E_n = 2^{n+1}-2 = 2(2^n-1)$.</p> <p>For $n=5$, this gives $2(2^5-1)=62$.</p>
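<p>The closed form $E_n = 2^{n+1}-2$ can also be checked by solving the first-step equations numerically (a Python sketch; the state is the current streak of heads, and the function name is mine):</p>

```python
import numpy as np

def expected_tosses(n):
    # State i = current streak of heads (0 <= i < n); reaching n ends the game.
    # The expected times satisfy t = 1 + Q t, i.e. (I - Q) t = 1.
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, 0] = 0.5          # tail: streak resets to 0
        if i + 1 < n:
            Q[i, i + 1] = 0.5  # head: streak grows (a head from n-1 finishes)
    t = np.linalg.solve(np.eye(n) - Q, np.ones(n))
    return t[0]

for n in range(1, 10):
    assert abs(expected_tosses(n) - (2 ** (n + 1) - 2)) < 1e-8

print(expected_tosses(5))  # ≈ 62
```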
combinatorics
<p>A huge group of people live a bizarre box based existence. Every day, everyone changes the box that they're in, and every day they share their box with exactly one person, and never share a box with the same person twice.</p> <p>One of the people of the boxes gets sick. The illness is spread by co-box-itation. What is the minimum number of people who are ill on day n?</p> <hr> <p><strong>Additional information (not originally included in problem):</strong></p> <p>Potentially relevant OEIS sequence: <a href="http://oeis.org/A007843">http://oeis.org/A007843</a></p>
<p>I used brute force, minimizing as I went, up to day 20; then I was comfortable enough to make an educated guess about the minimum number of patients on successive days. A pattern emerged quite clearly, and the numbers were exactly the sequence <a href="http://oeis.org/A007843/list">oeis.org/A007843</a>. The first thing I noticed, after writing down the results of the first 20 days, is that whenever the minimum number of patients is a power of 2, it is obvious how long the illness stays contained at that level in the minimal case: $\log_2 x$ days, where $x$ is the minimum number of patients. So I rewrote the minimum number of patients in terms of powers of 2: $ (2^0, 2^1, 2^2, 2^2+2^1, 2^3, 2^3+2^1, 2^3+2^2, 2^3+2^2+2^1, 2^4, 2^4+2^1, 2^4+2^2, 2^4+2^2+2^1, 2^4+2^3, 2^4+2^3+2^1, 2^4+2^3+2^2, 2^4+2^3+2^2+2^1, ...).$ Each item in this sequence represents the minimum number of patients at some point in time, in consecutive order. Note that the exponent of the last term in each item gives the number of days during which the illness stays contained at that item's value, that is: 1 patient at Day 0, 2 patients for 1 day, 4 patients for 2 days, 6 patients for 1 day, 8 patients for 3 days, 10 patients for 1 day, 12 patients for 2 days, ... So the sequence over successive days, starting at Day 0, is (1, 2, 4, 4, 6, 8, 8, 8, 10, 12, 12, 14, 16, 16, 16, 16, 18, 20, 20, 22, 24, 24, 24, 26, 28, 28, 30, ...), as listed at the <a href="http://oeis.org/A007843">OEIS entry</a>. (I wanted to post this as a comment, but it seems I am not allowed to.)</p>
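<p>The pattern described above can be encoded directly: the minimum stays at each even level $x$ for as many days as the exponent of $2$ dividing $x$. A short Python sketch (this merely reproduces the observed/OEIS values; it is not a proof):</p>

```python
def min_sick(days):
    """First `days` terms of the conjectured minimum-patient sequence."""
    seq = [1]                             # day 0: one sick person
    x = 2
    while len(seq) < days:
        reps = (x & -x).bit_length() - 1  # exponent of 2 dividing x
        seq.extend([x] * reps)            # the value x persists that many days
        x += 2                            # next even level
    return seq[:days]

print(min_sick(27))
# [1, 2, 4, 4, 6, 8, 8, 8, 10, 12, 12, 14, 16, 16, 16, 16,
#  18, 20, 20, 22, 24, 24, 24, 26, 28, 28, 30]
```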
<p>Just in case this helps someone:</p> <p><img src="https://i.sstatic.net/DKsDj.png" alt="enter image description here"></p> <p>(In each step we must cover an $N\times N$ board with $N$ mutually non-attacking rooks, diagonal forbidden.) This gives the sequence (numbering starts at day 1 for $N=2$): (2,4,4,6,8,8,8,10,12,12,14,16,16,16)</p> <p>Updated: a. Brief explanation: each column-row corresponds to a person; the cells numbered $n$ show the pairings of sick people corresponding to day $n$ (day 1: pair {1,2}; day 2: pairs {1,4}, {2,3}).</p> <p>b. This, like most answers here, assumes that we want a sequence of pairings that minimizes the number of sick people <strong>for all $n$</strong>. But it can be argued that the question is not clear on this, and that one might instead want to minimize the number of sick people for one fixed $n$. In that case, the problem is simpler; see Daniel Martin's answer.</p>
probability
<p>Suppose a biased coin (probability of a head being $p$) is flipped $n$ times. I would like to find the probability that the length of the longest run of heads, say $\ell_n$, exceeds a given number $m$, i.e. $\mathbb{P}(\ell_n &gt; m)$. </p> <p>It suffices to find the probability that the length of some run of heads exceeds $m$. I was trying to approach the problem by fixing a run of $m+1$ heads and counting the number of such configurations, but did not get anywhere.</p> <p>It is easy to simulate it:</p> <p><img src="https://i.sstatic.net/jPJxb.png" alt="Distribution of the length of the longest run of heads in a sequence of 1000 Bernoulli trials with 60% chance of getting a head"></p> <p>I would appreciate any advice on how to solve this problem analytically, i.e. express an answer in terms of a sum or an integral.</p> <p>Thank you.</p>
<p>This problem was solved using generating functions by de Moivre in 1738. The formula you want is $$\mathbb{P}(\ell_n \geq m)=\sum_{j=1}^{\lfloor n/m\rfloor} (-1)^{j+1}\left(p+\left({n-jm+1\over j}\right)(1-p)\right){n-jm\choose j-1}p^{jm}(1-p)^{j-1}.$$</p> <p><strong>References</strong></p> <ol> <li><p>Section 14.1 <em>Problems and Snapshots from the World of Probability</em> by Blom, Holst, and Sandell</p></li> <li><p>Chapter V, Section 3 <em>Introduction to Mathematical Probability</em> by Uspensky</p></li> <li><p>Section 22.6 <em>A History of Probability and Statistics and Their Applications before 1750</em> by Hald gives solutions by de Moivre (1738), Simpson (1740), Laplace (1812), and Todhunter (1865) </p></li> </ol> <hr> <p><strong>Added:</strong> The combinatorial class of all coin toss sequences without a run of $ m $ heads in a row is $$\sum_{k\geq 0}(\mbox{seq}_{&lt; m }(H)\,T)^k \,\mbox{seq}_{&lt; m }(H), $$ with corresponding counting generating function $$H(h,t)={\sum_{0\leq j&lt; m }h^j\over 1-(\sum_{0\leq j&lt; m }h^j)t}={1-h^ m \over 1-h-(1-h^ m )t}.$$ We introduce probability by replacing $h$ with $ps$ and $t$ by $qs$, where $q=1-p$: $$G(s)={1-p^ m s^ m \over1-s+p^ m s^{ m +1}q}.$$ The coefficient of $s^n$ in $G(s)$ is $\mathbb{P}(\ell_n&lt;m).$</p> <p>The function $1/(1-s(1-p^ m s^ m q ))$ can be rewritten as \begin{eqnarray*} \sum_{k\geq 0}s^k(1-p^ m s^ m q )^k &amp;=&amp;\sum_{k\geq 0}\sum_{j\geq 0} {k\choose j} (-p^ m q)^js^{k+j m }\\ %&amp;=&amp;\sum_{j\geq 0}\sum_{k\geq 0} {k\choose j} (-p^ m q )^js^{k+j m }. \end{eqnarray*} The coefficient of $s^n$ in this function is $c(n)=\sum_{j\geq 0}{n-j m \choose j}(-p^ m q)^j$. 
Therefore the coefficient of $s^n$ in $G(s)$ is $c(n)-p^ m c(n- m ).$ Finally, \begin{eqnarray*} \mathbb{P}(\ell_n\geq m)&amp;=&amp;1-\mathbb{P}(\ell_n&lt;m)\\[8pt] &amp;=&amp;p^ m c(n- m )+1-c(n)\\[8pt] &amp;=&amp;p^ m \sum_{j\geq 0}(-1)^j{n-(j+1) m \choose j}(p^ m q)^j+\sum_{j\geq 1}(-1)^{j+1}{n-j m \choose j}(p^ m q)^j\\[8pt] &amp;=&amp;p^ m \sum_{j\geq 1}(-1)^{j-1}{n-j m \choose j-1}(p^m q)^{j-1}+\sum_{j\geq 1}(-1)^{j+1}{n-j m \choose j}(p^mq )^j\\[8pt] &amp;=&amp;\sum_{j\geq 1}(-1)^{j+1} \left[{n-j m \choose j-1}+{n-j m \choose j}q\right]p^{ jm } q^{j-1}\\[8pt] &amp;=&amp;\sum_{j\geq 1}(-1)^{j+1} \left[{n-j m \choose j-1}p+{n-j m \choose j-1}q+{n-j m \choose j}q\right]p^{ jm } q^{j-1}\\[8pt] &amp;=&amp;\sum_{j\geq 1}(-1)^{j+1} \left[{n-j m \choose j-1}p+{n-j m +1\choose j}q \right]p^{ jm} q^{j-1}\\[8pt] &amp;=&amp;\sum_{j\geq 1}(-1)^{j+1} \left[p+{n-j m +1\over j}\, q\right] {n-j m \choose j-1}\,p^{ jm} q^{j-1}. \end{eqnarray*}</p>
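<p>A quick numerical cross-check of the closed form against a direct dynamic program over the current run length (a Python sketch; the function names are mine):</p>

```python
from math import comb

def p_run_at_least(n, m, p):
    """de Moivre's closed form for P(longest head run >= m)."""
    q = 1 - p
    return sum((-1) ** (j + 1) * (p + (n - j * m + 1) / j * q)
               * comb(n - j * m, j - 1) * p ** (j * m) * q ** (j - 1)
               for j in range(1, n // m + 1))

def p_run_at_least_dp(n, m, p):
    """Direct DP: dp[j] = P(current run = j and no run of m has occurred)."""
    dp = [0.0] * m
    dp[0] = 1.0
    for _ in range(n):
        new = [0.0] * m
        new[0] = sum(dp) * (1 - p)   # a tail resets the run
        for j in range(m - 1):
            new[j + 1] = dp[j] * p   # a head extends the run
        dp = new                     # mass reaching run length m just leaves
    return 1 - sum(dp)

for n, m, p in [(10, 3, 0.5), (50, 4, 0.6), (200, 7, 0.6)]:
    assert abs(p_run_at_least(n, m, p) - p_run_at_least_dp(n, m, p)) < 1e-9
```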
<p>Define a Markov chain with states $0, 1, \ldots m$ so that with probability $1$ the chain moves from $m$ to $m$ and for $i&lt;m$ with probability $p$ the chain moves from $i$ to $i+1$ and with probability $1-p$ the chain moves from $i$ to $0$. If you look at the $n$th power of the transition matrix for this chain you can read off the probability that in $n$ flips you have a sequence of at least $m$ consecutive heads.</p>
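<p>A sketch of this chain (Python/NumPy; the brute-force check over all $2^n$ sequences is mine, for validation on a small case):</p>

```python
import numpy as np
from itertools import product

def p_longest_run_ge(n, m, p):
    # States 0..m-1: current head run; state m: absorbing ("a run of m occurred").
    P = np.zeros((m + 1, m + 1))
    for i in range(m):
        P[i, 0] = 1 - p       # tail resets the run
        P[i, i + 1] = p       # head extends it (a head from m-1 absorbs)
    P[m, m] = 1.0
    start = np.zeros(m + 1)
    start[0] = 1.0
    return (start @ np.linalg.matrix_power(P, n))[m]   # P(l_n >= m)

def brute(n, m, p):
    """Enumerate all 2^n sequences; only feasible for small n."""
    total = 0.0
    for seq in product([0, 1], repeat=n):              # 1 = head
        longest = max(len(r) for r in ''.join(map(str, seq)).split('0'))
        if longest >= m:
            heads = sum(seq)
            total += p ** heads * (1 - p) ** (n - heads)
    return total

assert abs(p_longest_run_ge(12, 3, 0.6) - brute(12, 3, 0.6)) < 1e-9
```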
probability
<p>Given $n$ independent geometric random variables $X_n$, each with probability parameter $p$ (and thus expectation $E\left(X_n\right) = \frac{1}{p}$), what is $$E_n = E\left(\max_{i \in 1 .. n}X_n\right)$$</p> <hr> <p>If we instead look at a continuous-time analogue, e.g. exponential random variables $Y_n$ with rate parameter $\lambda$, this is simple: $$E\left(\max_{i \in 1 .. n}Y_n\right) = \sum_{i=1}^n\frac{1}{i\lambda}$$</p> <p>(I think this is right... that's the time for the first plus the time for the second plus ... plus the time for the last.)</p> <p>However, I can't find something similarly nice for the discrete-time case.</p> <hr> <p>What I <em>have</em> done is to construct a Markov chain modelling the number of the $X_n$ that haven't yet "hit". (i.e. at each time interval, perform a binomial trial on the number of $X_n$ remaining to see which "hit", and then move to the number that didn't "hit".) This gives $$E_n = 1 + \sum_{i=0}^n \left(\begin{matrix}n\\i\end{matrix}\right)p^{n-i}(1-p)^iE_i$$ which gives the correct answer, but is a nightmare of recursion to calculate. I'm hoping for something in a shorter form.</p>
<p>There is no nice, closed-form expression for the expected maximum of IID geometric random variables. However, the expected maximum of the corresponding IID exponential random variables turns out to be a very good approximation. More specifically, we have the hard bounds</p> <p>$$\frac{1}{\lambda} H_n \leq E_n \leq 1 + \frac{1}{\lambda} H_n,$$ and the close approximation $$E_n \approx \frac{1}{2} + \frac{1}{\lambda} H_n,$$ where $H_n$ is the $n$th harmonic number $H_n = \sum_{k=1}^n \frac{1}{k}$, and $\lambda = -\log (1-p)$, the parameter for the corresponding exponential distribution.</p> <p>Here's the derivation. Let $q = 1-p$. Use Did's expression with the fact that if $X$ is geometric with parameter $p$ then $P(X \leq k) = 1-q^k$ to get </p> <p>$$E_n = \sum_{k=0}^{\infty} (1 - (1-q^k)^n).$$</p> <p>By viewing this infinite sum as right- and left-hand Riemann sum approximations of the corresponding integral we obtain </p> <p>$$\int_0^{\infty} (1 - (1 - q^x)^n) dx \leq E_n \leq 1 + \int_0^{\infty} (1 - (1 - q^x)^n) dx.$$</p> <p>The analysis now comes down to understanding the behavior of the integral. With the variable switch $u = 1 - q^x$ we have</p> <p>$$\int_0^{\infty} (1 - (1 - q^x)^n) dx = -\frac{1}{\log q} \int_0^1 \frac{1 - u^n}{1-u} du = -\frac{1}{\log q} \int_0^1 \left(1 + u + \cdots + u^{n-1}\right) du $$ $$= -\frac{1}{\log q} \left(1 + \frac{1}{2} + \cdots + \frac{1}{n}\right) = -\frac{1}{\log q} H_n,$$ which is exactly the expression the OP has above for the expected maximum of $n$ corresponding IID exponential random variables, with $\lambda = - \log q$.</p> <p>This proves the hard bounds, but what about the more precise approximation? The easiest way to see that is probably to use the <a href="http://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="noreferrer">Euler-Maclaurin summation formula</a> for approximating a sum by an integral. 
Up to a first-order error term, it says exactly that</p> <p>$$E_n = \sum_{k=0}^{\infty} (1 - (1-q^k)^n) \approx \int_0^{\infty} (1 - (1 - q^x)^n) dx + \frac{1}{2},$$ yielding the approximation $$E_n \approx -\frac{1}{\log q} H_n + \frac{1}{2},$$ with error term given by $$\int_0^{\infty} n (\log q) q^x (1 - q^x)^{n-1} \left(x - \lfloor x \rfloor - \frac{1}{2}\right) dx.$$ One can verify that this is quite small unless $n$ is also small or $q$ is extreme.</p> <p>All of these results, including a more rigorous justification of the approximation, the OP's recursive formula, and the additional expression $$E_n = \sum_{i=1}^n \binom{n}{i} (-1)^{i+1} \frac{1}{1-q^i},$$ are in Bennett Eisenberg's paper "On the expectation of the maximum of IID geometric random variables" (<em>Statistics and Probability Letters</em> 78 (2008) 135-143). </p>
<p>First principle:</p> <blockquote> <p>To deal with maxima $M$ of independent random variables, use as much as possible events of the form $[M\leqslant x]$.</p> </blockquote> <p>Second principle:</p> <blockquote> <p>To compute the expectation of a nonnegative random variable $Z$, use as much as possible the complementary cumulative distribution function $\mathrm P(Z\geqslant z)$.</p> </blockquote> <p>In the discrete case, $\mathrm E(M)=\displaystyle\sum_{k\ge0}\mathrm P(M&gt;k)$, the event $[M&gt;k]$ is the complement of $[M\leqslant k]$, and the event $[M\leqslant k]$ is the intersection of the independent events $[X_i\leqslant k]$, each of probability $F_X(k)$. Hence, $$ \mathrm E(M)=\sum_{k\geqslant0}(1-\mathrm P(M\leqslant k))=\sum_{k\geqslant0}(1-\mathrm P(X\leqslant k)^n)=\sum_{k\geqslant0}(1-F_X(k)^n). $$ The continuous case is even simpler. For i.i.d. nonnegative $X_1, X_2, \ldots, X_n$, $$ \mathrm E(M)=\int_0^{+\infty}(1-F_X(t)^n) \, \mathrm{d}t. $$</p>
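<p>For a concrete check, the infinite series $\sum_{k\geq0}(1-(1-q^k)^n)$, the inclusion-exclusion sum, and the $\frac12 + H_n/\lambda$ approximation can be compared directly (a Python sketch; function names are mine):</p>

```python
from math import comb, log

def e_max_series(n, p, terms=2000):
    # E max = sum_{k>=0} P(M > k); the tail vanishes geometrically fast
    q = 1 - p
    return sum(1 - (1 - q ** k) ** n for k in range(terms))

def e_max_exact(n, p):
    # inclusion-exclusion over minima: E max = sum_i C(n,i)(-1)^{i+1}/(1-q^i)
    q = 1 - p
    return sum(comb(n, i) * (-1) ** (i + 1) / (1 - q ** i) for i in range(1, n + 1))

n, p = 10, 0.3
lam = -log(1 - p)
H = sum(1 / i for i in range(1, n + 1))

exact = e_max_exact(n, p)
assert abs(e_max_series(n, p) - exact) < 1e-9   # the two exact forms agree
assert H / lam <= exact <= 1 + H / lam          # hard bounds from the answer
print(exact, 0.5 + H / lam)                     # the approximation is close
```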
logic
<p>I am having trouble seeing the difference between weak and strong induction.</p> <hr /> <p>There are a few examples in which we can see the difference, such as reaching the <span class="math-container">$k^{th}$</span> rung of a ladder and proving every integer <span class="math-container">$&gt;1$</span> can be written as a product of primes:</p> <blockquote> <p>To show every <span class="math-container">$n\ge2$</span> can be written as a product of primes, first we note that <span class="math-container">$2$</span> is prime. Now we assume true for all integers <span class="math-container">$2 \le m&lt;n$</span>. If <span class="math-container">$n$</span> is prime, we're done. If <span class="math-container">$n$</span> is not prime, then it is composite and so <span class="math-container">$n=ab$</span>, where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are less than <span class="math-container">$n$</span>. Since <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are less than <span class="math-container">$n$</span>, <span class="math-container">$ab$</span> can be written as a product of primes and hence <span class="math-container">$n$</span> can be written as a product of primes. QED</p> </blockquote> <hr /> <p>However, it seems sort of like weak induction, only a bit dubious. In weak induction, we show a base case is true, then we assume true for all integers <span class="math-container">$k-1$</span>, (or <span class="math-container">$k$</span>), then we attempt to show it is true for <span class="math-container">$k$</span>, (or <span class="math-container">$k+1$</span>), which implies true <span class="math-container">$\forall n \in \mathbb N$</span>.</p> <p><strong>When we assume true for all integers <span class="math-container">$k$</span>, isn't that the same as a strong induction hypothesis? 
That is, we're assuming true for all integers up to some specific one.</strong></p> <hr /> <p>As a simple demonstrative example, how would we show <span class="math-container">$1+2+\cdots+n= {n(n+1) \over 2}$</span> using strong induction?</p> <p>(Learned from Discrete Mathematics by Kenneth Rosen)</p>
<p><strong>Initial remarks:</strong> Good question. I think it deserves a full response (warning: this is going to be a long, but hopefully very clear, answer). First, most students do not really understand <a href="https://math.stackexchange.com/questions/1139579/why-is-mathematical-induction-a-valid-proof-technique/1139606#1139606">why mathematical induction is a valid proof technique</a>. That's part of the problem. Second, weak induction and strong induction are actually logically equivalent; thus, differentiating between these forms of induction may seem a little bit difficult at first. The important thing to do is to understand how weak and strong induction are stated and to clearly understand the differences therein (I disagree with the previous answer that it is "just a matter of semantics"; it's not, and I will explain why). Much of what I will have to say is adapted from David Gunderson's wonderful book <em>Handbook of Mathematical Induction</em>, but I have expanded and tweaked a few things where I saw fit. That being said, hopefully you will find the rest of this answer to be informative.</p> <hr> <p><strong>Gunderson remark about strong induction:</strong> While attempting an inductive proof, in the inductive step one often needs only the truth of $S(n)$ to prove $S(n+1)$; sometimes a little more "power" is needed (such as in the proof that any positive integer $n\geq 2$ is a product of primes--we'll explore why more power is needed in a moment), and often this is made possible by strengthening the inductive hypothesis. </p> <hr> <p><strong>Kenneth Rosen remark in <em>Discrete Mathematics and Its Applications Study Guide</em>:</strong> Understanding and constructing proofs by mathematical induction are extremely difficult tasks for most students. Do not be discouraged, and do not give up, because, without doubt, this proof technique is the most important one there is in mathematics and computer science. 
Pay careful attention to the conventions to be observed in writing down a proof by induction. As with all proofs, remember that a proof by mathematical induction is like an essay--it must have a beginning, a middle, and an end; it must consist of complete sentences, logically and aesthetically arranged; and it must convince the reader. Be sure that your basis step (also called the "base case") is correct (that you have verified the proposition in question for the smallest value or values of $n$), and be sure that your inductive step is correct and complete (that you have derived the proposition for $k+1$, assuming the inductive hypothesis that the proposition is true for $k$--or the slightly stronger hypothesis that it is true for all values less than or equal to $k$, when using strong induction). </p> <hr> <p><strong>Statement of weak induction:</strong> Let $S(n)$ denote a statement regarding an integer $n$, and let $k\in\mathbb{Z}$ be fixed. If</p> <ul> <li>(i) $S(k)$ holds, and</li> <li>(ii) for every $m\geq k, S(m)\to S(m+1)$,</li> </ul> <p>then for every $n\geq k$, the statement $S(n)$ holds.</p> <hr> <p><strong>Statement of strong induction:</strong> Let $S(n)$ denote a statement regarding an integer $n$. If </p> <ul> <li>(i) $S(k)$ is true and</li> <li>(ii) for every $m\geq k, [S(k)\land S(k+1)\land\cdots\land S(m)]\to S(m+1)$,</li> </ul> <p>then for every $n\geq k$, the statement $S(n)$ is true. </p> <hr> <p><strong>Proof of strong induction from weak:</strong> Assume that for some $k$, the statement $S(k)$ is true and for every $m\geq k, [S(k)\land S(k+1)\land\cdots\land S(m)]\to S(m+1)$. Let $B$ be the set of all $n\geq k$ for which $S(n)$ is false. If $B\neq\varnothing, B\subset\mathbb{N}$ and so by well-ordering, $B$ has a least element, say $\ell$. By the definition of $B$, for every $k\leq t&lt;\ell, S(t)$ is true. The premise of the inductive hypothesis is true, and so $S(\ell)$ is true, contradicting that $\ell\in B$. Hence $B=\varnothing$.
$\blacksquare$</p> <hr> <p><strong>Proof of weak induction from strong:</strong> Assume that strong induction holds (in particular, for $k=1$). That is, assume that if $S(1)$ is true and for every $m\geq 1, [S(1)\land S(2)\land\cdots\land S(m)]\to S(m+1)$, then for every $n\geq 1, S(n)$ is true. </p> <p>Observe (by truth tables, if desired), that for $m+1$ statements $p_i$, $$ [p_1\to p_2]\land[p_2\to p_3]\land\cdots\land[p_m\to p_{m+1}]\Rightarrow[(p_1\land p_2\land\cdots\land p_m)\to p_{m+1}],\tag{$\dagger$} $$ itself a result provable by induction (see end of answer for such a proof). </p> <p>Assume that the hypotheses of weak induction are true, that is, that $S(1)$ is true, and that for arbitrary $t, S(t)\to S(t+1)$. By repeated application of these recent assumptions, $S(1)\to S(2), S(2)\to S(3),\ldots, S(m)\to S(m+1)$ each hold. By the above observation, then $$ [S(1)\land S(2)\land\cdots\land S(m)]\to S(m+1). $$ Thus the hypotheses of strong induction are complete, and so one concludes that for every $n\geq 1$, the statement $S(n)$ is true, the consequence desired to complete the proof of weak induction. $\blacksquare$</p> <hr> <p><strong>Proving any positive integer $n\geq 2$ is a product of primes using strong induction:</strong> Let $S(n)$ be the statement "$n$ is a product of primes."</p> <p><strong>Base step ($n=2$):</strong> Since $n=2$ is trivially a product of primes (actually one prime, really), $S(2)$ is true. </p> <p><strong>Inductive step:</strong> Fix some $m\geq 2$, and assume that for every $t$ satisfying $2\leq t\leq m$, the statement $S(t)$ is true. To be shown is that $$ S(m+1) : m+1 \text{ is a product of primes}, $$ is true. If $m+1$ is a prime, then $S(m+1)$ is true. If $m+1$ is not prime, then there exist $r$ and $s$ with $2\leq r\leq m$ and $2\leq s\leq m$ so that $m+1=rs$. 
Since $S(r)$ is assumed to be true, $r$ is a product of primes [<strong>note:</strong> <em>This</em> is where it is imperative that we use strong induction; using weak induction, we cannot assume $S(r)$ is true]; similarly, by $S(s), s$ is a product of primes. Hence $m+1=rs$ is a product of primes, and so $S(m+1)$ holds. Thus, in either case, $S(m+1)$ holds, completing the inductive step.</p> <p>By mathematical induction, for all $n\geq 2$, the statement $S(n)$ is true. $\blacksquare$</p> <hr> <p><strong>Proof of $1+2+3+\cdots+n = \frac{n(n+1)}{2}$ by strong induction:</strong> Using strong induction here is completely unnecessary, for you do not need it at all, and it is only likely to confuse people as to why you are using it. It will proceed just like a proof by weak induction, but the assumption at the outset will look different; nonetheless, just to show what I am talking about, I will prove it using strong induction.</p> <p>Let $S(n)$ denote the proposition $$ S(n) : 1+2+3+\cdots+n = \frac{n(n+1)}{2}. $$</p> <p><strong>Base step ($n=1$):</strong> $S(1)$ is true because $1=\frac{1(1+1)}{2}$. </p> <p><strong>Inductive step:</strong> Fix some $k\geq 1$, and assume that for every $t$ satisfying $1\leq t\leq k$, the statement $S(t)$ is true. To be shown is that $$ S(k+1) : 1+2+3+\cdots+k+(k+1)=\frac{(k+1)(k+2)}{2} $$ follows. Beginning with the left-hand side of $S(k+1)$, \begin{align} \text{LHS} &amp;= 1+2+3+\cdots+k+(k+1)\tag{by definition}\\[1em] &amp;= (1+2+3+\cdots+k)+(k+1)\tag{group terms}\\[1em] &amp;= \frac{k(k+1)}{2}+(k+1)\tag{by $S(k)$}\\[1em] &amp;= (k+1)\left(\frac{k}{2}+1\right)\tag{factor out $k+1$}\\[1em] &amp;= (k+1)\left(\frac{k+2}{2}\right)\tag{common denominator}\\[1em] &amp;= \frac{(k+1)(k+2)}{2}\tag{desired expression}\\[1em] &amp;= \text{RHS}, \end{align} we obtain the right-hand side of $S(k+1)$. </p> <p>By mathematical induction, for all $n\geq 1$, the statement $S(n)$ is true. 
$\blacksquare$</p> <p>$\color{red}{\text{Comment:}}$ See how this was really no different than how a proof by weak induction would work? The only thing different is really an unnecessary assumption made at the beginning of the proof. However, in your prime number proof, strong induction is essential; otherwise, we cannot assume $S(r)$ or $S(s)$ to be true. Here, any assumption regarding $t$ where $1\leq t\leq k$ is really useless because we don't actually use it anywhere in the proof, whereas we did use the assumptions $S(r)$ and $S(s)$ in the prime number proof, where $1\leq t\leq m$, because $r,s &lt; m$. Does it now make sense why it was necessary to use strong induction in the prime number proof? </p> <hr> <p><strong>Proof of $(\dagger)$ by induction:</strong> For statements $p_1,\ldots,p_{m+1}$, we have that $$ [p_1\to p_2]\land[p_2\to p_3]\land\cdots\land[p_m\to p_{m+1}]\Rightarrow[(p_1\land p_2\land\cdots\land p_m)\to p_{m+1}]. $$ </p> <p><em>Proof.</em> For each $m\in\mathbb{Z^+}$, let $S(m)$ be the statement that for $m+1$ statements $p_i$, $$ S(m) : [p_1\to p_2]\land[p_2\to p_3]\land\cdots\land[p_m\to p_{m+1}]\Rightarrow[(p_1\land p_2\land\cdots\land p_m)\to p_{m+1}]. $$ <strong>Base step:</strong> The statement $S(1)$ says $$ [p_1\to p_2]\Rightarrow [(p_1\land p_2)\to p_2], $$ which is true (since the right side is a tautology). </p> <p><strong>Inductive step:</strong> Fix $k\geq 1$, and assume that for any statements $q_1,\ldots,q_{k+1}$, both $$ S(1) : [q_1\to q_2]\Rightarrow [(q_1\land q_2)\to q_2] $$ and $$ S(k) : [q_1\to q_2]\land[q_2\to q_3]\land\cdots\land[q_k\to q_{k+1}]\Rightarrow[(q_1\land q_2\land\cdots\land q_k)\to q_{k+1}]. $$ hold. It remains to show that for any statements $p_1,p_2,\ldots,p_k,p_{k+1},p_{k+2}$ that $$ S(k+1) : [p_1\to p_2]\land[p_2\to p_3]\land\cdots\land[p_{k+1}\to p_{k+2}]\Rightarrow[(p_1\land p_2\land\cdots\land p_{k+1})\to p_{k+2}] $$ follows. 
Beginning with the left-hand side of $S(k+1)$, \begin{align} \text{LHS} &amp;\equiv [p_1\to p_2]\land\cdots\land[p_{k+1}\to p_{k+2}]\land[p_{k+1}\to p_{k+2}]\\[0.5em] &amp;\Downarrow\qquad \text{(definition of conjunction)}\\[0.5em] &amp;[[p_1\to p_2]\land[p_2\to p_3]\land\cdots\land[p_{k+1}\to p_{k+2}]]\land[p_{k+1}\to p_{k+2}]\\[0.5em] &amp;\Downarrow\qquad \text{(by $S(k)$ with each $q_i = p_i$)}\\[0.5em] &amp;[(p_1\land p_2\land\cdots\land p_k)\to p_{k+1}]\land[p_{k+1}\to p_{k+2}]\\[0.5em] &amp;\Downarrow\qquad \text{(by $S(1)$ with $q_1=p_1\land\cdots\land p_k)$ and $q_2=p_{k+1}$)}\\[0.5em] &amp;[[(p_1\land p_2\land\cdots\land p_k)\land p_{k+1}]\to p_{k+1}]\land [p_{k+1}\to p_{k+2}]\\[0.5em] &amp;\Downarrow\qquad \text{(by definition of conjunction)}\\[0.5em] &amp;[(p_1\land p_2\land\cdots\land p_k\land p_{k+1}]\to p_{k+1}]\land [p_{k+1}\to p_{k+2}]\\[0.5em] &amp;\Downarrow\qquad \text{(since $a\land b\to b$ with $b=[p_{k+1}\to p_{k+2}]$)}\\[0.5em] &amp;[(p_1\land p_2\land\cdots\land p_k\land p_{k+1})\to p_{k+2}]\land[p_{k+1}\to p_{k+2}]\\[0.5em] &amp;\Downarrow\qquad \text{(since $a\land b\to a$)}\\[0.5em] &amp;(p_1\land p_2\land\cdots\land p_k\land p_{k+1})\to p_{k+2}\\[0.5em] &amp;\equiv \text{RHS}, \end{align} we obtain the right-hand side of $S(k+1)$, which completes the inductive step.</p> <p>By mathematical induction, for each $n\geq 1, S(n)$ holds. $\blacksquare$</p>
<p>Usually, there is no need to distinguish between weak and strong induction. As you point out, the difference is minor. In both weak and strong induction, you must prove the base case (usually very easy, if not trivial). Then, weak induction assumes that the statement is true for size $n-1$ and you must prove that the statement is true for $n$. Using strong induction, you assume that the statement is true for all $m&lt;n$ (at least your base case) and prove the statement for $n$.</p> <p>In practice, one may just always use strong induction (even if you only need to know that the statement is true for $n-1$). In the example that you give, you only need to assume that the formula holds for the previous case (weak induction). You could assume it holds for every case, but only use the previous case. As far as I can tell, it is really just a matter of semantics. There are times when strong induction really is more useful, for example when you break up the problem into two problems of size $n/2$. This happens frequently in proofs about graphs, where you decompose the graph on $n$ vertices into two subgraphs (smaller, but you have little or no control over the exact size).</p>
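<p>One more way to internalize the difference: strong induction is exactly the principle that justifies recursion on <em>arbitrary smaller</em> inputs, not just $n-1$. The prime-factorization proof above translates almost line for line into a recursive sketch (Python, for illustration):</p>

```python
def prime_factors(n):
    """Mirrors the strong-induction proof: if n >= 2 is prime, we are done;
    otherwise n = r * s with 2 <= r, s < n, and we recurse on BOTH r and s,
    which is why the weak hypothesis "true for n-1" would not suffice."""
    for r in range(2, int(n ** 0.5) + 1):
        if n % r == 0:
            return prime_factors(r) + prime_factors(n // r)
    return [n]          # no divisor found: n itself is prime

assert prime_factors(360) == [2, 2, 2, 3, 3, 5]
```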
game-theory
<p>Players A and B alternate writing one digit to make a six-figure number. That means A writes digit $a$, B writes digit $b$, ... to make a number $\overline{abcdef}$. </p> <p>$a,b,c,d,e,f$ are distinct, $a\neq 0$.</p> <p>A is the winner if this number is composite, otherwise B is. Is there any way to help A or B always win?</p>
<p>Player A has a winning strategy. First, Player A picks $a = 3$. </p> <p>Case I: Player B picks $b = 9$. </p> <p>Then, Player A picks $c = 1$. If Player B picks $d = 7$, Player A can pick $e$ arbitrarily. If Player B picks $d \neq 7$, Player A picks $e = 7$. Following this strategy, $\{1,3,7,9\} \subseteq \{a,b,c,d,e\}$. So Player B is forced to pick $f \in \{0,2,4,5,6,8\}$, which makes $\overline{abcdef}$ composite. </p> <p>Case II: Player B picks $b \neq 9$.</p> <p>Then, Player A picks $c = 9$ and waits for Player B to pick $d$.</p> <p>If $\{b,d\} = \{0,6\}, \{1,5\}, \{1,8\}, \{4,5\}, \{4,8\}, \{7,5\}, \{7,8\}$, then Player A picks $e = 2$. </p> <p>If $\{b,d\} = \{1,2\}, \{4,2\}, \{7,2\}$, then Player A picks $e = 5$. </p> <p>If $\{b,d\} = \{0,4\}, \{0,7\}, \{6,4\}, \{6,7\}, \{2,5\}, \{2,8\}, \{5,8\}$, then Player A picks $e = 1$.</p> <p>If $\{b,d\} = \{0,1\}, \{6,1\}$, then Player A picks $e = 4$. </p> <p>If $\{b,d\} = \{6,2\}, \{6,5\}, \{6,8\}, \{1,4\}, \{1,7\}, \{4,7\}$, then Player A picks $e = 0$.</p> <p>If $\{b,d\} = \{0,2\}, \{0,5\}, \{0,8\}$, then Player A picks $e = 6$. </p> <p>In all of these cases, $e$ was chosen such that $b+d+e \equiv 2\pmod{3}$. Hence, $a+b+c+d+e \equiv 2 \pmod{3}$. If Player B picks $f \in \{1,7\}$, then $a+b+c+d+e+f$ will be a multiple of $3$, and thus, $\overline{abcdef}$ is a multiple of $3$, and hence, composite. Otherwise, $f \in \{0,2,4,5,6,8\}$, and $\overline{abcdef}$ is composite. </p> <p>Therefore, Player A can follow this strategy to guarantee that $\overline{abcdef}$ is composite, and thus, ensure that Player A wins the game.</p>
<p>This is community wiki because it is JimmyK4542's answer with different reasoning.</p> <p>First note that if the final digit is 0, 2, 4, 5, 6, or 8 then the result is composite and A wins. In other words, B can only win if the final digit is 1, 3, 7, or 9, and even then it is not guaranteed.</p> <p>A gets to choose three digits. As in JimmyK4542's answer, A chooses 3 first (9 works as well). If B chooses any of 1, 7, or 9, then A can choose the other two and guarantee a win. Thus, we only need to consider the cases where B's choice lies outside this set.</p> <p>A chooses 9 as his second digit (or 3 if he chose 9 first). Again, if B selects either 1 or 7 then A selects the other and wins. Thus when A is going to select his third digit we need only consider the cases where B has selected two digits from the set $\{0, 2, 4, 5, 6, 8\}$.</p> <p>A's strategy now is to select a digit such that even if B chooses either 1 or 7, the number will be composite. Note that 1 and 7 are both equal to 1 mod 3, and 3 and 9 are both 0 mod 3. Therefore, A must ensure that his choice added to B's two previous choices equals 2 mod 3. Let us examine the three cases:</p> <p>1) The sum of B's two numbers is 0 mod 3. We know that he cannot have chosen both 2 and 8, so choose one of them.</p> <p>2) The sum of B's two numbers is 1 mod 3. Choose 1 or 7.</p> <p>3) The sum of B's two numbers is 2 mod 3. We know that he cannot have chosen both 0 and 6, so choose one of them.</p> <p>And we are done.</p> <p>I have added this answer because the reasoning may appear more intuitive to some and the strategy perhaps more direct.</p>
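<p>Since there are at most $9\cdot9\cdot8\cdot7\cdot6\cdot5 = 136080$ completed games, the claim can also be verified exhaustively by minimax (a Python sketch; plain recursion with short-circuiting is fast enough):</p>

```python
def is_prime(n):
    if n < 4:
        return n > 1
    if n % 2 == 0 or n % 3 == 0:
        return False
    d = 5
    while d * d <= n:
        if n % d == 0 or n % (d + 2) == 0:
            return False
        d += 6
    return True

def a_wins(digits=()):
    """True iff Player A can force a composite number from this position.
    A moves on turns 0, 2, 4; digits must be distinct and the first nonzero."""
    turn = len(digits)
    if turn == 6:
        return not is_prime(int(''.join(map(str, digits))))
    pool = [d for d in range(10)
            if d not in digits and not (turn == 0 and d == 0)]
    if turn % 2 == 0:                                 # A to move: one good choice suffices
        return any(a_wins(digits + (d,)) for d in pool)
    return all(a_wins(digits + (d,)) for d in pool)   # B to move: A must beat every reply

# Opening with 3 wins for A (hence A wins the game), as claimed above.
assert a_wins((3,))
```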
probability
<p>If you are given a die and asked to roll it twice. What is the probability that the value of the second roll will be less than the value of the first roll?</p>
<p>There are various ways to answer this. Here is one:</p> <p>There is clearly a $1$ out of $6$ chance that the two rolls will be the same, hence a $5$ out of $6$ chance that they will be different. Further, the chance that the first roll is greater than the second must be equal to the chance that the second roll is greater than the first (e.g. switch the two dice!), so both chances must be $2.5$ out of $6$ or $5$ out of $12$.</p>
<p>Here is another way to solve the problem: $$ \text{Pr }[\textrm{second} &gt; \textrm{first}] + \text{Pr }[\textrm{second} &lt; \textrm{first}] + \text{Pr }[\textrm{second} = \textrm{first}] = 1 $$ Because of symmetry $\text{Pr }[\text{second} &gt; \text{first}] = \text{Pr }[\text{second} &lt; \text{first}]$, so $$ \text{Pr }[\text{second} &gt; \text{first}] = \frac{1 - \text{Pr }[\text{second} = \text{first}]}{2} = \frac{1 - \frac{1}{6}}{2} = \frac{5}{12} $$</p>
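<p>For readers who prefer enumeration to symmetry, a short sketch that simply counts all $36$ equally likely outcomes:</p>

```python
from fractions import Fraction
from itertools import product

# count ordered pairs (first, second) with second < first
wins = sum(second < first for first, second in product(range(1, 7), repeat=2))
prob = Fraction(wins, 6 * 6)
print(wins, prob)    # 15 5/12
```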
number-theory
<p>About a month ago, I got the following : </p> <blockquote> <p>For <strong>every positive rational number</strong> $r$, there exists a set of four <strong>positive integers</strong> $(a,b,c,d)$ such that $$r=\frac{a^\color{red}{3}+b^\color{red}{3}}{c^\color{red}{3}+d^\color{red}{3}}.$$</p> <p>For $r=p/q$ where $p,q$ are positive integers, we can take $$(a,b,c,d)=(3ps^3t+9qt^4,\ 3ps^3t-9qt^4,\ 9qst^3+ps^4,\ 9qst^3-ps^4)$$ where $s,t$ are positive integers such that $3\lt r\cdot(s/t)^3\lt 9$.</p> <p>For $r=2014/89$, for example, since we have $(2014/89)\cdot(2/3)^3\approx 6.7$, taking $(p,q,s,t)=(2014,89,2,3)$ gives us $$\frac{2014}{89}=\frac{209889^3+80127^3}{75478^3+11030^3}.$$</p> </blockquote> <p>Then, I began to try to find <strong>every positive integer</strong> $n$ such that the following proposition is true : </p> <p><strong>Proposition</strong> : For every positive rational number $r$, there exists a set of four positive integers $(a,b,c,d)$ such that $$r=\frac{a^\color{red}{n}+b^\color{red}{n}}{c^\color{red}{n}+d^\color{red}{n}}.$$</p> <p>The followings are what I've got. Let $r=p/q$ where $p,q$ are positive integers.</p> <ul> <li><p>For $n=1$, the proposition is true. We can take $(a,b,c,d)=(p,p,q,q)$.</p></li> <li><p>For $n=2$, the proposition is <strong>false</strong>. For example, no such sets exist for $r=7/3$.</p></li> <li><p>For even $n$, the proposition is <strong>false</strong> because the proposition is false for $n=2$.</p></li> </ul> <p>However, I've been facing difficulty in the case of odd $n\ge 5$. I've tried to get a similar set of four positive integers $(a,b,c,d)$ as the set for $n=3$, but I have not been able to get any such set. 
So, here is my question.</p> <blockquote> <p><strong>Question</strong> : How can we find <strong>every odd number</strong> $n\color{red}{\ge 5}$ such that the following proposition is true?</p> <p><strong>Proposition</strong> : For every positive rational number $r$, there exists a set of four positive integers $(a,b,c,d)$ such that $$r=\frac{a^n+b^n}{c^n+d^n}.$$</p> </blockquote> <p><em>Update</em> : I posted this question on <a href="https://mathoverflow.net/questions/200605/representing-every-positive-rational-number-in-the-form-of-anbn-cndn">MO</a>.</p> <p><strong>Added</strong> : <a href="https://mks.mff.cuni.cz/kalva/short/soln/sh99n2.html" rel="noreferrer">Problem N2 of IMO 1999 Shortlist</a> asks the case $n=3$.</p>
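<p>The $n=3$ construction quoted above is easy to check mechanically. Here is a small sketch (the function name is mine) confirming the $r=2014/89$ example with exact rational arithmetic:</p>

```python
from fractions import Fraction

def quad(p, q, s, t):
    """The (a, b, c, d) from the question, for r = p/q with 3 < r*(s/t)**3 < 9."""
    a = 3*p*s**3*t + 9*q*t**4
    b = 3*p*s**3*t - 9*q*t**4
    c = 9*q*s*t**3 + p*s**4
    d = 9*q*s*t**3 - p*s**4
    return a, b, c, d

a, b, c, d = quad(2014, 89, 2, 3)
assert (a, b, c, d) == (209889, 80127, 75478, 11030)
assert Fraction(a**3 + b**3, c**3 + d**3) == Fraction(2014, 89)
print("identity confirmed for r = 2014/89")
```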
<p>For the above problem there are four sets of solutions (this is intuitive: one for each of a, b, c, and d). In the case of positive rational r and any odd number n we can eliminate all but one of the solutions:</p> <p>$d^n = 5 \wedge c^n = 1 \wedge a^n + b^n = 30 \wedge r = 5 \wedge a^n \in Z$</p> <p>In the case of any odd number n≥3 we refer to the generating function:</p> <p>$a^{2 n + 1} + b^{2 n + 1} = 30 \wedge c^{2 n + 1} = 1 \wedge d^{2 n + 1} = 5 \wedge r = 5 \wedge a^{2 n + 1} \in Z$</p> <p>As well as the case of every odd number n≥5 (and so on):</p> <p>$a^{2 n + 3} + b^{2 n + 3} = 30 \wedge c^{2 n + 3} = 1 \wedge d^{2 n + 3} = 5 \wedge r = 5 \wedge a^{2 n + 3} \in Z$</p> <p>We quickly discover that the value of n does not matter, as long as it is odd and positive, leading to the generalization:</p> <p>$r = -c_5-1 \wedge a^{2n+1} + b^{2n+1} = (c_1+c_4+1)(c_5+1) \wedge c^{2n+1}+c_3 = c_1+c_2+1 \wedge c_2+d^{2n+1} = c_3+c_4 \wedge (c_5 | c_4 | c_3 | c_2 | c_1 | a^{2n+1}) \in Z$</p> <p>For all n:</p> <p>$r = -c_5-1 \wedge$</p> <p>$a^n + b^n = (c_1+c_4+1)(c_5+1) \wedge$</p> <p>$c^n+c_3 = c_1+c_2+1 \wedge$</p> <p>$c_2+d^n = c_3+c_4 \wedge$</p> <p>$(c_5 | c_4 | c_3 | c_2 | c_1 | a^n) \in Z$</p> <p><em>Note: this isn't a complete answer, so it might be more appropriate as a comment, but pending reputation I may as well take a naive crack at it. Excuse any abuse of notation or lack of comprehension; it's been over a decade since I've had any formal mathematics. Lastly, I welcome criticism, especially if it's informative and friendly!</em></p>
<p>Your solution for n=3 includes an implicit change of variables: $$ \left(a,b,c,d\right)=\left(x+y,x-y,u+v,u-v\right) $$ $$ r = \left(2x/2u\right)\left(x^2+3y^2\right)/\left(u^2+3v^2\right)$$ at which point the substitution $$ \left(x,y,u,v\right)=\left(3ps^3t,9qt^4,9qst^3,ps^4\right)$$ yields the desired result of $$r=p/q$$</p> <p>A similar two-step substitution for $$n\ge 5$$ may simplify the search</p> <p>for n=5, the substitution </p> <p>$$ \left(a,b,c,d\right)=\left(x+y,x-y,u+v,u-v\right) $$</p> <p>yields</p> <p>$$ r = \left(2x/2u\right)\left(x^4+10x^2y^2+5y^4\right)/\left(u^4+10u^2v^2+5v^4\right)$$</p>
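<p>The $n=5$ expansion above can be sanity-checked numerically. Since both sides are polynomials of degree $5$, agreement on a large set of integer points is convincing evidence of the identity; a quick sketch (names mine):</p>

```python
from random import randint

# Polynomial identity behind the n = 5 substitution: with a = x + y, b = x - y,
#   a^5 + b^5 = 2x (x^4 + 10 x^2 y^2 + 5 y^4)
# Spot-check exact integer arithmetic at many random points.
for _ in range(500):
    x, y = randint(-10**6, 10**6), randint(-10**6, 10**6)
    assert (x + y)**5 + (x - y)**5 == 2*x*(x**4 + 10*x**2*y**2 + 5*y**4)
print("n = 5 expansion verified")
```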
combinatorics
<p>I'd like to know if it's possible to calculate the odds of winning a game of Minesweeper (on easy difficulty) in a single click. <a href="http://www.minesweeper.info/wiki/One_Click_Bug">This page</a> documents a bug that occurs if you do so, and they calculate the odds to around 1 in 800,000. However, this is based on the older version of Minesweeper, which had a fixed number of preset boards, so not every arrangement of mines was possible. (Also the board size in the current version is 9x9, while the old one was 8x8. Let's ignore the intermediate and expert levels for now - I assume those odds are nearly impossible, though a generalized solution that could solve for any W×H and mine-count would be cool too, but a lot more work I'd think.) In general, the increased board size (with the same number of mines), as well as the removal of the preset boards would both probably make such an event far more common.</p> <p>So, assuming a 9x9 board with 10 mines, and assuming every possible arrangement of mines is equally likely (not true given the pseudo-random nature of computer random number generators, but let's pretend), and knowing that the first click is always safe (assume the described behavior on that site still holds - if you click on a mine in the first click, it's moved to the first available square in the upper-left corner), we'd need to first calculate the number of boards that are 1-click solvable. That is, boards with only one opening, and no numbered squares that are not adjacent to that opening. The total number of boards is easy enough: $\frac{(W×H)!}{((W×H)-M)! ×M!}$ or $\frac{81!}{71!×10!} \approx 1.878×10^{12}$. (Trickier is figuring out which boards are not one-click solvable unless you click on a mine and move it. We can maybe ignore the first-click-safe rule if it over-complicates things.) Valid arrangements would have all 10 mines either on the edges or far enough away from each other to avoid creating numbers which don't touch the opening. 
Then it's a simple matter of counting how many un-numbered spaces exist on each board and dividing by 81.</p> <p>Is this a calculation that can reasonably be represented in a mathematical formula? Or would it make more sense to write a program to test every possible board configuration? (Unfortunately, the numbers we're dealing with get pretty close to the maximum value storable in a 64-bit integer, so overflow is very likely here. For example, the default Windows calculator completely borks the number unless you multiply by hand from 81 down to 72.)</p>
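<p>As a side note on the overflow worry: Python integers are arbitrary precision, so the total count of boards can be computed exactly in one line (a quick sketch):</p>

```python
from math import comb

# number of ways to place 10 mines on the 81 squares of a 9x9 board;
# no overflow is possible since Python ints are arbitrary precision
total = comb(81, 10)
print(total)            # 1878392407320, i.e. about 1.878e12
```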
<p>We must ignore the "cannot lose on first click" rule as it severely complicates things.</p> <p>In this answer, I will be using a notation similar to chess's FEN (<a href="https://en.wikipedia.org/wiki/Forsyth-Edwards_Notation" rel="nofollow">Forsyth-Edwards Notation</a>) to describe minesweeper boards. <em>m</em> is a mine and empty spaces are denoted by numbers. We start at the top of the board and move from left to right, returning to the left at the end of each row. To describe a specific square, the columns are numbered from <em>a</em> to <em>h</em>, left to right, and the rows are numbered from 8 to 1, top to bottom.</p> <p>On a minesweeper board, all mines are adjacent to numbered squares that say how many mines are next to them (including diagonally). If there is ever a numbered square surrounded only by mines and other numbered squares, new squares will stop being revealed at that square. Therefore, the question is actually:</p> <blockquote> <p>How many 9 × 9 minesweeper boards with 10 mines exist such that every blank square adjacent to a mine touches a square that is neither a mine nor adjacent to one?</p> </blockquote> <p>I like to approach problems like these by placing mines down one by one. There are 81 squares to place the first mine. If we place it in a corner, say a1, then the three diagonal squares adjacent to the corner (in this case a3, b2, and c1) are no longer valid (either a2 or b1 is now "trapped"). If we place it on any edge square except the eight squares adjacent to the corners, the squares two horizontal or vertical spaces away become invalid. On edge squares adjacent to the corners (say b1) three squares also become unavailable. On centre squares, either 4 or 3 squares become unavailable.</p> <p>The problem is that invalid squares can be fixed at any time. For example, placing mines first on a1 and then c1 may be initially invalid, but a mine on b1 solves that.</p> <p>This is my preliminary analysis. 
I conclude that there is no way to calculate this number of boards without brute force. However, anyone with sufficient karma is welcome to improve this answer.</p>
<p>First, I apologise for my bad English.</p> <p>A simple rule for detecting a one-click board is: "if every numbered cell has a 0 (empty) cell adjacent to it, then the board is one-click winnable." This rule is easy to see from how the automatic opening of cells works: if the opened cell is a 0, all adjacent cells are opened as well.</p> <p>This rule works well in a brute-force algorithm for counting the favorable cases.</p> <p>Besides that, I tried to find the patterns that prevent a one-click win, in an attempt to count the boards that cannot be won in one click. Ignoring the walls it is simple: there are just two patterns that subsume all the others, B N B and B N N B (B for bomb, N for not bomb). The N cells here are trapped, because every cell adjacent to them is a bomb or a number, so boards containing these patterns cannot be one-click winnable, by the rule above.</p> <p>There are also cases where clusters of bombs enclose non-openable cells without necessarily forming these labelled patterns.</p> <p>With walls, however, things like non-bomb cells trapped in corners and lines of bombs crossing the board make everything much more difficult. These cases do not necessarily need the BNB or BNNB patterns, because a wall also acts as a block to the domino-like chain of openings started by an empty cell. So I stopped there.</p> <p>Even if we could figure out all the patterns, including the wall factor, we would still have the problem of counting the possible combinations of patterns, so I think it is very hard, virtually impossible without a computer, to count these boards.</p> <p>That is my contribution; I hope it can be useful.</p>
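<p>A direct way to test a given board, without relying on any pattern rule, is a flood fill: ignoring the degenerate case of a single safe cell, a board is one-click winnable exactly when clicking some zero cell reveals every safe cell. A rough Python sketch (coordinates and names are mine):</p>

```python
def one_click_solvable(mines, w=9, h=9):
    """True iff one click reveals every safe cell (standard flood-fill rules)."""
    mines = set(mines)
    def neighbors(x, y):
        return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx or dy) and 0 <= x + dx < w and 0 <= y + dy < h]
    def count(x, y):
        return sum(n in mines for n in neighbors(x, y))
    safe = {(x, y) for x in range(w) for y in range(h)} - mines
    zeros = [c for c in safe if count(*c) == 0]
    if not zeros:
        return False
    # flood fill from one zero cell: zero cells keep expanding, numbered
    # cells are revealed but stop the expansion
    seen, stack = set(), [zeros[0]]
    while stack:
        c = stack.pop()
        if c in seen:
            continue
        seen.add(c)
        if count(*c) == 0:
            stack.extend(n for n in neighbors(*c) if n not in mines)
    return seen == safe

# all ten mines packed into the top edge: one click wins
assert one_click_solvable([(x, 0) for x in range(9)] + [(0, 1)])
# move the (8, 0) mine to the far corner: the safe cell (8, 0) is now
# walled in by numbered cells and can never be opened automatically
assert not one_click_solvable([(x, 0) for x in range(8)] + [(0, 1), (8, 8)])
```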
linear-algebra
<p>Can <span class="math-container">$\det(A + B)$</span> be expressed in terms of <span class="math-container">$\det(A), \det(B), n$</span>, where <span class="math-container">$A,B$</span> are <span class="math-container">$n\times n$</span> matrices?</p> <p>I made the edit to allow <span class="math-container">$n$</span> to be factored in.</p>
<p>When <span class="math-container">$n=2$</span>, if <span class="math-container">$A$</span> is invertible, you can easily show that</p> <p><span class="math-container">$\det(A+B)=\det A+\det B+\det A\,\cdot \mathrm{Tr}(A^{-1}B)$</span>.</p> <hr /> <p>Let me give a general method to find the determinant of the sum of two matrices <span class="math-container">$A,B$</span> with <span class="math-container">$A$</span> invertible and symmetric (The following result might also apply to the non-symmetric case. I might verify that later...). I am a physicist, so I will use the index notation, <span class="math-container">$A_{ij}$</span> and <span class="math-container">$B_{ij}$</span>, with <span class="math-container">$i,j=1,2,\cdots,n$</span>. Let <span class="math-container">$A^{ij}$</span> denote the inverse of <span class="math-container">$A_{ij}$</span> such that <span class="math-container">$A^{il}A_{lj}=\delta^i_j=A_{jl}A^{li}$</span>. We can use <span class="math-container">$A_{ij}$</span> to lower the indices, and its inverse to raise. For example <span class="math-container">$A^{il}B_{lj}=B^i{}_j$</span>. Here and in the following, the Einstein summation rule is assumed.</p> <p>Let <span class="math-container">$\epsilon^{i_1\cdots i_n}$</span> be the totally antisymmetric tensor, with <span class="math-container">$\epsilon^{1\cdots n}=1$</span>. Define a new tensor <span class="math-container">$\tilde\epsilon^{i_1\cdots i_n}=\epsilon^{i_1\cdots i_n}/\sqrt{|\det A|}$</span>. We can use <span class="math-container">$A_{ij}$</span> to lower the indices of <span class="math-container">$\tilde\epsilon^{i_1\cdots i_n}$</span>, and define <span class="math-container">$\tilde\epsilon_{i_1\cdots i_n}=A_{i_1j_1}\cdots A_{i_nj_n}\tilde\epsilon^{j_1\cdots j_n}$</span>.
Then there is a useful property: <span class="math-container">$$ \tilde\epsilon_{i_1\cdots i_kl_{k+1}\cdots l_n}\tilde\epsilon^{j_1\cdots j_kl_{k+1}\cdots l_n}=(-1)^sk!(n-k)!\delta^{[j_1}_{i_1}\cdots\delta^{j_k]}_{i_k}, $$</span> where the square brackets <span class="math-container">$[]$</span> imply the antisymmetrization of the indices enclosed by them. <span class="math-container">$s$</span> is the number of negative elements of <span class="math-container">$A_{ij}$</span> after it has been diagonalized.</p> <p>So now the determinant of <span class="math-container">$A+B$</span> can be obtained in the following way <span class="math-container">$$ \det(A+B)=\frac{1}{n!}\epsilon^{i_1\cdots i_n}\epsilon^{j_1\cdots j_n}(A+B)_{i_1j_1}\cdots(A+B)_{i_nj_n} $$</span> <span class="math-container">$$ =\frac{(-1)^s\det A}{n!}\tilde\epsilon^{i_1\cdots i_n}\tilde\epsilon^{j_1\cdots j_n}\sum_{k=0}^n C_n^kA_{i_1j_1}\cdots A_{i_kj_k}B_{i_{k+1}j_{k+1}}\cdots B_{i_nj_n} $$</span> <span class="math-container">$$ =\frac{(-1)^s\det A}{n!}\sum_{k=0}^nC_n^k\tilde\epsilon^{i_1\cdots i_ki_{k+1}\cdots i_n}\tilde\epsilon^{j_1\cdots j_k}{}_{i_{k+1}\cdots i_n}B_{i_{k+1}j_{k+1}}\cdots B_{i_nj_n} $$</span> <span class="math-container">$$ =\frac{(-1)^s\det A}{n!}\sum_{k=0}^nC_n^k\tilde\epsilon^{i_1\cdots i_ki_{k+1}\cdots i_n}\tilde\epsilon_{j_1\cdots j_ki_{k+1}\cdots i_n}B_{i_{k+1}}{}^{j_{k+1}}\cdots B_{i_n}{}^{j_n} $$</span> <span class="math-container">$$ =\frac{\det A}{n!}\sum_{k=0}^nC_n^kk!(n-k)!B_{i_{k+1}}{}^{[i_{k+1}}\cdots B_{i_n}{}^{i_n]} $$</span> <span class="math-container">$$ =\det A\sum_{k=0}^nB_{i_{k+1}}{}^{[i_{k+1}}\cdots B_{i_n}{}^{i_n]} $$</span> <span class="math-container">$$ =\det A+\det A\sum_{k=1}^{n-1}B_{i_{k+1}}{}^{[i_{k+1}}\cdots B_{i_n}{}^{i_n]}+\det B. $$</span></p> <p>This reproduces the result for <span class="math-container">$n=2$</span>.
An interesting result for physicists is when <span class="math-container">$n=4$</span>,</p> <p><span class="math-container">\begin{split} \det(A+B)=&amp;\det A+\det A\cdot\text{Tr}(A^{-1}B)+\frac{\det A}{2}\{[\text{Tr}(A^{-1}B)]^2-\text{Tr}(BA^{-1}BA^{-1})\}\\ &amp;+\frac{\det A}{6}\{[\text{Tr}(BA^{-1})]^3-3\text{Tr}(BA^{-1})\text{Tr}(BA^{-1}BA^{-1})+2\text{Tr}(BA^{-1}BA^{-1}BA^{-1})\}\\ &amp;+\det B. \end{split}</span></p>
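<p>The $n=2$ formula at the top is easy to test with exact rational arithmetic; here is a quick randomized check (a sketch of mine, not part of the original answer), using $A^{-1}=\operatorname{adj}(A)/\det A$ to avoid floating point:</p>

```python
from random import randint
from fractions import Fraction

def det2(m):
    (a, b), (c, d) = m
    return a*d - b*c

def check(A, B):
    """Verify det(A+B) == det A + det B + det A * Tr(A^{-1} B) for 2x2 A, B."""
    dA, dB = det2(A), det2(B)
    if dA == 0:
        return True                      # the formula needs A invertible
    (a, b), (c, d) = A
    adj = [[d, -b], [-c, a]]             # adjugate of A
    trace = Fraction(sum(adj[i][k] * B[k][i] for i in range(2) for k in range(2)), dA)
    lhs = det2([[A[i][j] + B[i][j] for j in range(2)] for i in range(2)])
    return lhs == dA + dB + dA * trace

for _ in range(500):
    A = [[randint(-9, 9) for _ in range(2)] for _ in range(2)]
    B = [[randint(-9, 9) for _ in range(2)] for _ in range(2)]
    assert check(A, B)
print("n = 2 formula verified")
```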
<p>When $n\ge2$, the answer is no. To illustrate, consider $$ A=I_n,\quad B_1=\pmatrix{1&amp;1\\ 0&amp;0}\oplus0,\quad B_2=\pmatrix{1&amp;1\\ 1&amp;1}\oplus0. $$ If $\det(A+B)=f\left(\det(A),\det(B),n\right)$ for some function $f$, you should get $\det(A+B_1)=f(1,0,n)=\det(A+B_2)$. But in fact, $\det(A+B_1)=2\ne3=\det(A+B_2)$ over any field.</p>
probability
<p>If $\mathrm P(X=k)=\binom nkp^k(1-p)^{n-k}$ for a binomial distribution, then from the definition of the expected value $$\mathrm E(X) = \sum^n_{k=0}k\mathrm P(X=k)=\sum^n_{k=0}k\binom nkp^k(1-p)^{n-k}$$ but the expected value of a Binomal distribution is $np$, so how is </p> <blockquote> <p>$$\sum^n_{k=0}k\binom nkp^k(1-p)^{n-k}=np$$</p> </blockquote>
<p>The main idea is to factor out $np$. I believe we can rewrite:</p> <p>$$\sum^n_{k=0}k\binom nkp^k(1-p)^{n-k}= \sum^n_{k=1} k\binom nkp^k(1-p)^{n-k}$$</p> <p>Factoring out an $np$, this gives (and cancelling the $k$'s):</p> <p>$$\sum^n_{k=1} k\binom nkp^k(1-p)^{n-k} = np \sum^n_{k=1} \dfrac{(n-1)!}{(n-k)!(k-1)!}p^{k-1}(1-p)^{n-k}$$</p> <p>Notice that the RHS is:</p> <p>$$np \sum^n_{k=1} \dfrac{(n-1)!}{(n-k)!(k-1)!}p^{k-1}(1-p)^{n-k} = np \sum^n_{k=1} \binom {n-1}{k-1}p^{k-1}(1-p)^{n-k},$$</p> <p>and since $\displaystyle \sum^n_{k=1} \binom {n-1}{k-1}p^{k-1}(1-p)^{n-k} = (p + (1-p))^{n-1} = 1$, we therefore indeed have </p> <p>$$\sum^n_{k=0}k\binom nkp^k(1-p)^{n-k} = np$$.</p>
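<p>Identities like this are pleasant to confirm with exact arithmetic; a short sketch using a rational $p$ so there is no floating-point noise:</p>

```python
from fractions import Fraction
from math import comb

def binomial_mean(n, p):
    # sum_{k=0}^{n} k * C(n,k) * p^k * (1-p)^(n-k)
    return sum(k * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))

p = Fraction(3, 7)                 # exact rational p, no rounding error
for n in range(1, 13):
    assert binomial_mean(n, p) == n * p
print("sum equals np for all tested n")
```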
<p>Let $B_i=1$ if we have a success on the $i$-th trial, and $0$ otherwise. Then the number $X$ of successes is $B_1+B_2+\cdots +B_n$. But then by the linearity of expectation, we have $$E(X)=E(B_1+B_2+\cdots+B_n)=E(B_1)+E(B_2)+\cdots +E(B_n).$$ It is easy to verify that $E(B_i)=p$, so $E(X)=np$.</p> <p>You wrote down <em>another</em> expression for the mean. So the above argument shows that the combinatorial identity of your problem is correct. You can think of it as a mean proof of a combinatorial identity.</p> <p><strong>Remark:</strong> A very similar argument to the one above can be used to compute the variance of the binomial.</p> <p>The linearity of expectation holds even when the random variables are not independent. Suppose we take a sample of size $n$, <strong>without replacement,</strong> from a box that has $N$ objects, of which $G$ are good. The <em>same</em> argument shows that the expected number of good objects in the sample is $n\dfrac{G}{N}$. This is somewhat unpleasant to prove using combinatorial manipulation.</p>
number-theory
<p>One observes that \begin{equation*} 4!+1 =25=5^{2},~5!+1=121=11^{2} \end{equation*} are perfect squares. Similarly, for $n=7$ we see that $n!+1$ is a perfect square. So one can ask whether the following is true:</p> <ul> <li>Is $n!+1$ a perfect square for infinitely many $n$? If yes, how can this be proved?</li> </ul>
<p>This is Brocard's problem, and it is still open.</p> <p><a href="http://en.wikipedia.org/wiki/Brocard%27s_problem">http://en.wikipedia.org/wiki/Brocard%27s_problem</a></p>
<p>The sequence of factorials $n!+1$ which are also perfect squares is <a href="https://oeis.org/A085692" rel="nofollow">here in Sloane</a>. It contains three terms, and notes that there are no more terms below $(10^9)!+1$, but as far as I know there's no proof.</p>
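<p>A quick brute-force pass reproduces the three known terms and finds nothing else for $n\le 100$ (a sketch):</p>

```python
from math import factorial, isqrt

# Brocard's problem: for which n is n! + 1 a perfect square?
hits = [n for n in range(1, 101)
        if isqrt(factorial(n) + 1)**2 == factorial(n) + 1]
print(hits)   # [4, 5, 7]
```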
combinatorics
<p>I need help to answer the following question:</p> <blockquote> <p>Is it possible to place 26 points inside a rectangle that is $20\, cm$ by $15\,cm$ so that the distance between every pair of points is greater than $5\, cm$?</p> </blockquote> <p>I haven't learned any mathematical ways to find a solution; whether it maybe yes or no, to a problem like this so it would be very helpful if you could help me with this question.</p>
<p>No, it is not. If we assume that $P_1,P_2,\ldots,P_{26}$ are $26$ distinct points inside the given rectangle, such that $d(P_i,P_j)\geq 5\,cm$ for any $i\neq j$, we may consider $\Gamma_1,\Gamma_2,\ldots,\Gamma_{26}$ as the circles centered at $P_1,P_2,\ldots,P_{26}$ with radius $2.5\,cm$. We have that such circles are disjoint and fit inside a $25\,cm \times 20\,cm$ rectangle. That is impossible, since the total area of $\Gamma_1,\Gamma_2,\ldots,\Gamma_{26}$ exceeds $500\,cm^2$.</p> <p><a href="https://i.sstatic.net/waP4m.png" rel="noreferrer"><img src="https://i.sstatic.net/waP4m.png" alt="enter image description here"></a></p> <p>Highly non-trivial improvement: it is impossible to fit $25$ points inside a $20\,cm\times 15\,cm$ in such a way that distinct points are separated by a distance $\geq 5\,cm$.</p> <p><a href="https://i.sstatic.net/JVXsh.png" rel="noreferrer"><img src="https://i.sstatic.net/JVXsh.png" alt="enter image description here"></a></p> <p>Proof: the original rectangle can be covered by $24$ hexagons with diameter $(5-\varepsilon)\,cm$. Assuming is it possible to place $25$ points according to the given constraints, by the pigeonhole principle / Dirichlet's box principle at least two distinct points inside the rectangle lie in the same hexagon, so they have a distance $\leq (5-\varepsilon)\,cm$, contradiction.</p> <p>Further improvement: <a href="https://i.sstatic.net/Cfuil.png" rel="noreferrer"><img src="https://i.sstatic.net/Cfuil.png" alt="enter image description here"></a></p> <p>the depicted partitioning of a $15\,cm\times 20\,cm$ rectangle $R$ in $22$ parts with diameter $(5-\varepsilon)\,cm$ proves that we may fit at most $\color{red}{22}$ points in $R$ in such a way that they are $\geq 5\,cm$ from each other.</p>
<p>Jack D'Aurizio's answer is nice, but I think the following is probably the solution intended by whoever posed the puzzle:</p> <p>Note that $26=5^2+1$. So perhaps we can divide our $20\times15$ rectangle into $5^2$ pieces, apply the pigeonhole principle, and be done. That would require that our pieces each have diameter at most 5. Well, what's the most obvious way to divide a $20\times15$ rectangle into $5^2$ pieces? Answer: chop it into $5\times5$ rectangles, each of size $4\times3$. And lo, the diagonal of each of those rectangles has length exactly 5 and we're done.</p>
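<p>Both arguments above come down to one-line arithmetic facts, which can be sanity-checked directly (a trivial sketch):</p>

```python
from math import pi, hypot

# area bound: 26 disjoint disks of radius 2.5 have total area
# 26 * pi * 2.5^2 (about 510.5 cm^2), exceeding the 25 x 20 = 500 cm^2
# enlarged rectangle they must fit in
assert 26 * pi * 2.5**2 > 25 * 20
# pigeonhole bound: each 4 x 3 cell of the 5 x 5 grid has diagonal exactly 5
assert hypot(4, 3) == 5.0
print("both bounds check out")
```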
number-theory
<p>I am searching for the least $n$ such that </p> <p>$$38^n+31$$ </p> <p>is prime. </p> <p>I checked all $n$ up to $3000$ and found none, so the least prime of that form must have more than $4000$ digits. I am content with a probable prime; it need not be a proven prime.</p>
<p>This is not a proof, but does not conveniently fit into a comment.</p> <p>I'll take into account that $n=4k$ is required, otherwise $38^n+31$ will be divisible by $3$ or $5$ as pointed out in the comments.</p> <p>Now, if we treat the primes as "pseudorandom" in the sense that any large number $n$ has a likelihood $1/\ln(n)$ of being prime (which is the prime number density for large $n$), the expected number of primes for $n=4,8,\ldots,4N$ will increase with $N$ as $$ \sum_{k=1}^N\frac{1}{\ln(38^{4k}+31)} \approx\frac{\ln N+\gamma}{4\ln 38} \text{ where }\gamma=0.57721566\ldots $$ and for the expected number of primes to exceed 1, you'll need $N$ on the order of 1,200,000.</p> <p>Of course, you could get lucky and find it at much lower $n$, but a priori I don't see any particular reason why it should be...or shouldn't.</p> <p>Basically, in general for numbers $a^n+b$, the first prime will usually come fairly early, otherwise often very late (or not at all if $a$ and $b$ have a common factor).</p> <p>Of course, this argument depends on assuming "pseudorandom" behaviour of the primes, and so cannot be turned into a formal proof. However, it might perhaps be possible to say something about the distribution of $n$ values giving the first prime over different pairs $(a,b)$.</p>
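<p>The opening observation that only $n=4k$ survives can be checked mechanically: $38\equiv 2\pmod 3$ and $38\equiv 3\pmod 5$, so odd $n$ forces a factor of $3$ and $n\equiv 2\pmod 4$ forces a factor of $5$. A quick sketch:</p>

```python
# 38^n + 31 is divisible by 3 for odd n, and by 5 for n = 2 (mod 4);
# only n = 0 (mod 4) escapes both forced factors
for n in range(1, 201):
    v = pow(38, n) + 31
    if n % 2 == 1:
        assert v % 3 == 0
    elif n % 4 == 2:
        assert v % 5 == 0
    else:
        assert v % 3 and v % 5       # n = 0 (mod 4): neither factor forced
print("only n = 0 (mod 4) survives")
```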
<p>Primality of numbers of the form $a^n+b$ is a very hard problem in general. For instance, existence of primes of the form $4294967296^n+1=(2^{32})^n+1$ is an old open problem in number theory (<a href="https://en.wikipedia.org/wiki/Fermat_number#Primality_of_Fermat_numbers">wiki</a>), although it is also easy to show that this can be a prime only for $n$ of a special form (powers of $2$). Your problem $2085136^n+31=(38^4)^n+31$ does not seem much easier.</p> <p>In other words, a theory-based answer to your problem is very unlikely in the near future. For a practice-based answer you will probably need to use some distributed computing project for searching for prime numbers like PrimeGrid, which has found most of the known large primes of the form $ab^n+c$.</p>
linear-algebra
<p>I am quite confused about this. I know that a zero eigenvalue means that the null space has nonzero dimension, and that the rank of the matrix is then less than the dimension of the whole space. But is the number of distinct eigenvalues (and thus of linearly independent eigenvectors) equal to the rank of the matrix?</p>
<p>Well, if $A$ is an $n \times n$ matrix, the rank of $A$ plus the nullity of $A$ is equal to $n$; that's the rank-nullity theorem. The nullity is the dimension of the kernel of the matrix, which is all vectors $v$ of the form: $$Av = 0 = 0v.$$ The kernel of $A$ is precisely the eigenspace corresponding to eigenvalue $0$. So, to sum up, the rank is $n$ minus the dimension of the eigenspace corresponding to $0$. If $0$ is not an eigenvalue, then the kernel is trivial, and so the matrix has full rank $n$. The rank depends on no other eigenvalues.</p>
<p>My comment is 7 years late but I hope someone might find some useful information.</p> <p>First, the number of linearly independent eigenvectors of a rank <span class="math-container">$k$</span> matrix can be greater than <span class="math-container">$k$</span>. For example <span class="math-container">\begin{align} A &amp;= \left[ \begin{matrix} 1 &amp; 2 \\ 2 &amp; 4 \end{matrix} \right] \\ rk(A) &amp;= 1 \\ \end{align}</span> <span class="math-container">$A$</span> has the following eigenvalues and eigenvectors <span class="math-container">$\lambda_1 = 5, \mathbf{v}_1 = [1 \ \ 2]^\top$</span>, <span class="math-container">$\lambda_2 = 0, \mathbf{v}_2 = [-2 \ \ 1]^\top$</span>. So <span class="math-container">$A$</span> has 1 linearly independent column but 2 linearly independent eigenvectors. The column space of <span class="math-container">$A$</span> has 1 dimension. The eigenspace of <span class="math-container">$A$</span> has 2 dimensions.</p> <p>There are also cases where the number of linearly independent eigenvectors is smaller than the rank of <span class="math-container">$A$</span>. For example <span class="math-container">\begin{align} A &amp;= \left[ \begin{matrix} 0 &amp; 1 \\ -1 &amp; 0 \end{matrix} \right] \\ rk(A) &amp;= 2 \end{align}</span> <span class="math-container">$A$</span> has no real valued eigenvalues and no real valued eigenvectors. But <span class="math-container">$A$</span> has two <em>complex valued eigenvalues</em> <span class="math-container">$\lambda_1 = i,\ \lambda_2 = -i$</span> and two complex valued eigenvectors.</p> <p>Another remark is that a eigenvalue can correspond to multiple linearly independent eigenvectors. An example is the Identity matrix. 
<span class="math-container">$I_n$</span> has only 1 eigenvalue <span class="math-container">$\lambda = 1$</span> but <span class="math-container">$n$</span> linearly independent eigenvectors.</p> <p>So to answer your question, I think there is no trivial relationship between the rank and the dimension of the eigenspace.</p>
logic
<p>You are a student, assigned to work in the cafeteria today, and it is your duty to divide the available food between all students. The food today is a sausage of 1m length, and you need to cut it into as many pieces as students come for lunch, including yourself.</p> <p>The problem is, the knife is operated by the rotating door through which the students enter, so every time a student comes in, the knife comes down and you place the cut. There is no way for you to know if more students will come or not, so after each cut, the sausage should be cut into pieces of approximately equal length. </p> <p>So here the question - is it possible to place the cuts in a manner to ensure the ratio of the largest and the smallest piece is always below 2?</p> <p>And if so, what is the smallest possible ratio?</p> <p>Example 1 (unit is cm):</p> <ul> <li>1st cut: 50 : 50 ratio: 1 </li> <li>2nd cut: 50 : 25 : 25 ratio: 2 - bad</li> </ul> <p>Example 2</p> <ul> <li>1st cut: 40 : 60 ratio: 1.5</li> <li>2nd cut: 40 : 30 : 30 ratio: 1.33</li> <li>3rd cut: 20 : 20 : 30 : 30 ratio: 1.5</li> <li>4th cut: 20 : 20 : 30 : 15 : 15 ratio: 2 - bad</li> </ul> <p>Sorry for the awful analogy, I think this is a math problem but I have no real idea how to formulate this in a proper mathematical way.</p>
<p>TLDR: $a_n=\log_2(1+1/n)$ works, and is the only smooth solution.</p> <p>This problem hints at a deeper mathematical question, as follows. As has been observed by Pongrácz, there is a great deal of possible variation in solutions to this problem. I would like to find a "best" solution, where the sequence of pieces is somehow as evenly distributed as possible, given the constraints.</p> <p>Let us fix the following strategy: at stage $n$ there are $n$ pieces, of lengths $a_n,\dots,a_{2n-1}$, ordered in decreasing length. You cut $a_n$ into two pieces, forming $a_{2n}$ and $a_{2n+1}$. We have the following constraints:</p> <p>$$a_1=1\qquad a_n=a_{2n}+a_{2n+1}\qquad a_n\ge a_{n+1}\qquad a_n&lt;2a_{2n-1}$$</p> <p>I would like to find a nice function $f(x)$ that interpolates all these $a_n$s (and possibly generalizes the relation $a_n=a_{2n}+a_{2n+1}$ as well).</p> <p>First, it is clear that the only degree of freedom is in the choice of cut, which is to say if we take any sequence $b_n\in (1/2,1)$ then we can define $a_{2n}=a_nb_n$ and $a_{2n+1}=a_n(1-b_n)$, and this will completely define the sequence $a_n$.</p> <p>Now we should expect that $a_n$ is asymptotic to $1/n$, since it drops by a factor of $2$ every time $n$ doubles. Thus one regularity condition we can impose is that $na_n$ converges. If we consider the "baseline solution" where every cut is at $1/2$, producing the sequence</p> <p>$$1,\frac12,\frac12,\frac14,\frac14,\frac14,\frac14,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\dots$$ (which is not technically a solution because of the strict inequality, but is on the boundary of solutions), then we see that $na_n$ in fact does <em>not</em> tend to a limit - it varies between $1$ and $2$.</p> <p>If we average this exponentially, by considering the function $g(x)=2^xa_{\lfloor 2^x\rfloor}$, then we get a function which gets closer and closer to being periodic with period $1$. 
That is, there is a function $h(x):[0,1]\to\Bbb R$ such that $g(x+n)\to h(x)$, and we need this function to be constant if we want $g(x)$ itself to have a limit.</p> <p>There is a very direct relation between $h(x)$ and the $b_n$s. If we increase $b_1$ while leaving everything else the same, then $h(x)$ will be scaled up on $[0,\log_2 (3/2)]$ and scaled down on $[\log_2 (3/2),1]$. None of the other $b_i$'s control this left-right balance - they make $h(x)$ larger in some subregion of one or the other of these intervals only, but preserving $\int_0^{\log_2(3/2)}h(x)\,dx$ and $\int_{\log_2(3/2)}^1h(x)\,dx$.</p> <p>Thus, to keep these balanced we should let $b_1=\log_2(3/2)$. More generally, each $b_n$ controls the balance of $h$ on the intervals $[\log_2(2n),\log_2(2n+1)]$ and $[\log_2(2n+1),\log_2(2n+2)]$ (reduced$\bmod 1$), so we must set them to $$b_n=\frac{\log_2(2n+1)-\log_2(2n)}{\log_2(2n+2)-\log_2(2n)}=\frac{\log(1+1/2n)}{\log(1+1/n)}.$$</p> <p>When we do this, a miracle occurs, and $a_n=\log_2(1+1/n)$ becomes analytically solvable: \begin{align} a_1&amp;=\log_2(1+1/1)=1\\ a_{2n}+a_{2n+1}&amp;=\log_2\Big(1+\frac1{2n}\Big)+\log_2\Big(1+\frac1{2n+1}\Big)\\ &amp;=\log_2\left[\Big(1+\frac1{2n}\Big)\Big(1+\frac1{2n+1}\Big)\right]\\ &amp;=\log_2\left[1+\frac{2n+(2n+1)+1}{2n(2n+1)}\right]\\ &amp;=\log_2\left[1+\frac1n\right]=a_n. \end{align}</p> <p>As a bonus, we obviously have that the $a_n$ sequence is decreasing, and if $m&lt;2n$, then \begin{align} 2a_m&amp;=2\log_2\Big(1+\frac1m\Big)=\log_2\Big(1+\frac1m\Big)^2=\log_2\Big(1+\frac2m+\frac1{m^2}\Big)\\ &amp;\ge\log_2\Big(1+\frac2m\Big)&gt;\log_2\Big(1+\frac2{2n}\Big)=a_n, \end{align}</p> <p>so this is indeed a proper solution, and we have also attained our smoothness goal &mdash; $na_n$ converges, to $\frac 1{\log 2}=\log_2e$. 
It is also worth noting that the ratio between the largest and smallest piece has limit exactly $2$, which validates Henning Makholm's observation that you can't do better than $2$ in the limit.</p> <p>It looks like this (rounded to the nearest hundredth, so the numbers may not add to 100 exactly):</p> <ul> <li>$58:42$, ratio = $1.41$</li> <li>$42:32:26$, ratio = $1.58$</li> <li>$32:26:22:19$, ratio = $1.67$</li> <li>$26:22:19:17:15$, ratio = $1.73$</li> <li>$22:19:17:15:14:13$, ratio = $1.77$</li> </ul> <p>If you are working with a sequence of points treated $\bmod 1$, where the intervals between the points are the "sausages", then this sequence of segments is generated by $p_n=\log_2(2n+1)\bmod 1$. The result is beautifully uniform but with a noticeable sweep edge:</p> <p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href="https://i.sstatic.net/SCaaE.gif" rel="noreferrer"><img src="https://i.sstatic.net/SCaaE.gif" alt="sausages"></a></p> <p>A more concrete optimality condition that picks this solution uniquely is the following: we require that for any fraction $0\le x\le 1$, the sausage at the $x$ position (give or take a sausage) in the list, sorted in decreasing order, should be at most $c(x)$ times smaller than the largest at all times. This solution achieves $c(x)=x+1$ for all $0\le x\le 1$, and no solution can do better than that (in the limit) for any $x$.</p>
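<p>The claimed properties of $a_n=\log_2(1+1/n)$ are easy to sanity-check numerically (a Python sketch; the helper name <code>a</code> is just for illustration):</p>

```python
import math

def a(n):
    # the candidate sequence from the answer: a_n = log2(1 + 1/n)
    return math.log2(1 + 1 / n)

for n in range(1, 2000):
    # cutting rule: a_n = a_{2n} + a_{2n+1}
    assert abs(a(n) - (a(2 * n) + a(2 * n + 1))) < 1e-12
    if n > 1:
        # decreasing, and largest piece a_n < 2 * smallest piece a_{2n-1}
        assert a(n) < a(n - 1)
        assert a(n) < 2 * a(2 * n - 1)

# n * a_n converges to log2(e)
assert abs(10**6 * a(10**6) - math.log2(math.e)) < 1e-6
```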
<p>YES, it is possible!</p> <p>You mustn't cut a piece in half, because eventually you have to cut one of them, and then you violate the requirement. So in fact, you must never have two equal parts. Make the first cut so that the condition is not violated, say $60:40$. </p> <p>From now on, assume that the ratio of biggest over smallest is strictly less than $2$ in a given round, and no two pieces are equal. (This holds for the $60:40$ cut.) We construct a good cut that maintains this property.</p> <p>So at the next turn, pick the biggest piece, and cut it into two non-equal pieces in an $a:b$ ratio, but very close to equal (so $a/b\approx 1$). All you have to do is make sure that </p> <ul> <li>$a/b$ is so close to $1$ that the two new pieces are both smaller than the smallest piece in the last round. </li> <li>$a/b$ is so close to $1$ that the smaller piece is bigger than half of the second biggest in the last round (which is going to become the biggest piece in this round). </li> </ul> <p>Then the condition is preserved. For example, from $60:40$ you can move to $25:35:40$, then cut the forty to obtain $19:21:25:35$, etc.</p>
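<p>The induction above can be simulated directly; the particular cut point below is one arbitrary choice inside the allowed interval, not something the answer prescribes. The max/min ratio then stays below $2$ indefinitely:</p>

```python
def simulate(rounds=50):
    pieces = [0.6, 0.4]                      # first cut: 60:40
    for _ in range(rounds):
        pieces.sort(reverse=True)
        big, second, small = pieces[0], pieces[1], pieces[-1]
        # choose the larger half x of the cut: just above big/2, but below
        # both the current smallest piece and big - second/2, so that both
        # bullet-point conditions from the answer hold
        upper = min(small, big - second / 2)
        x = big / 2 + 0.1 * (upper - big / 2)
        pieces = pieces[1:] + [x, big - x]
        assert max(pieces) / min(pieces) < 2  # the invariant survives
    return pieces

simulate(500)
```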
linear-algebra
<p>Is there any intuition why rotational matrices are not commutative? I assume the final rotation is the combination of all rotations. Then how does it matter in which order the rotations are applied?</p>
<p>Here is a picture of a die:</p> <p><a href="https://i.sstatic.net/Ij8xC.jpg"><img src="https://i.sstatic.net/Ij8xC.jpg" alt="enter image description here"></a></p> <p>Now let's spin it $90^\circ$ clockwise. The die now shows</p> <p><a href="https://i.sstatic.net/YNRK3.jpg"><img src="https://i.sstatic.net/YNRK3.jpg" alt="enter image description here"></a></p> <p>After that, if we flip the left face up, the die lands at</p> <p><a href="https://i.sstatic.net/JJRKw.jpg"><img src="https://i.sstatic.net/JJRKw.jpg" alt="enter image description here"></a></p> <hr> <p>Now, let's do it the other way around: We start with the die in the same position:</p> <p><a href="https://i.sstatic.net/Ij8xC.jpg"><img src="https://i.sstatic.net/Ij8xC.jpg" alt="enter image description here"></a></p> <p>Flip the left face up:</p> <p><a href="https://i.sstatic.net/HWIwe.jpg"><img src="https://i.sstatic.net/HWIwe.jpg" alt="enter image description here"></a></p> <p>and then $90^\circ$ clockwise</p> <p><a href="https://i.sstatic.net/pnofv.jpg"><img src="https://i.sstatic.net/pnofv.jpg" alt="enter image description here"></a></p> <p>If we do it one way, we end up with $3$ on the top and $5, 6$ facing us, while if we do it the other way we end up with $2$ on the top and $1, 3$ facing us. This demonstrates that the two rotations do not commute.</p> <hr> <p>Since so many in the comments have come to the conclusion that this is not a complete answer, here are a few more thoughts:</p> <ul> <li>Note what happens to the top number of the die: In the first case we change what number is on the left face, then flip the new left face to the top. In the second case we first flip the old left face to the top, and <em>then</em> change what is on the left face. This makes two different numbers face up.</li> <li>As leftaroundabout said in a comment to the question itself, rotations not commuting is not really anything noteworthy. 
The fact that they <em>do</em> commute in two dimensions <em>is</em> notable, but asking why they do not commute in general is not very fruitful apart from a concrete demonstration.</li> </ul>
<p>Matrices commute if they <em>preserve each other's eigenspaces</em>: there is a set of eigenvectors that, taken together, describe all the eigenspaces of both matrices, in possibly varying partitions.</p> <p>This makes intuitive sense: this constraint means that a vector in one matrix's eigenspace won't leave that eigenspace when the other is applied, and so the original matrix's transformation still works fine on it. </p> <p>In two dimensions, no matter what, the eigenvectors of a rotation matrix are $[i,1]$ and $[-i,1]$. So since all such matrices have the same eigenvectors, they will commute.</p> <p>But in <em>three</em> dimensions, there's always one real eigenvalue for a real matrix such as a rotation matrix, so that eigenvalue has a real eigenvector associated with it: the axis of rotation. But this eigenvector doesn't share values with the rest of the eigenvectors for the rotation matrix (because the other two are necessarily complex)! So the axis is an eigenspace of dimension 1, so <strong>rotations with different axes can't possibly share eigenvectors</strong>, so they cannot commute.</p>
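<p>Both points, that rotations about different axes in three dimensions fail to commute while plane rotations always commute, can be confirmed numerically (a NumPy sketch with arbitrarily chosen angles):</p>

```python
import numpy as np

def rot_x(t):
    # rotation about the x-axis by angle t
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(t):
    # rotation about the z-axis by angle t
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

q = np.pi / 2
# 3D rotations about different axes do not commute
assert not np.allclose(rot_x(q) @ rot_z(q), rot_z(q) @ rot_x(q))

def rot2(t):
    # plane rotation by angle t
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

# 2D rotations always commute (they just add angles)
assert np.allclose(rot2(0.3) @ rot2(1.1), rot2(1.1) @ rot2(0.3))
```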
logic
<p>I'd heard of propositional logic for years, but until I came across <a href="https://math.stackexchange.com/questions/4043/what-are-good-resources-for-learning-predicate-logic-predicate-calculus">this question</a>, I'd never heard of predicate logic. Moreover, the fact that <em>Introduction to Logic: Predicate Logic</em> and <em>Introduction to Logic: Propositional Logic</em> (both by Howard Pospesel) are distinct books leads me to believe there are significant differences between the two fields. What distinguishes predicate logic from propositional logic?</p>
<p>Propositional logic (also called sentential logic) is logic that includes sentence letters (A,B,C) and logical connectives, but not quantifiers. The semantics of propositional logic uses truth assignments to the letters to determine whether a compound propositional sentence is true.</p> <p>Predicate logic is usually used as a synonym for first-order logic, but sometimes it is used to refer to other logics that have similar syntax. Syntactically, first-order logic has the same connectives as propositional logic, but it also has variables for individual objects, quantifiers, symbols for functions, and symbols for relations. The semantics include a domain of discourse for the variables and quantifiers to range over, along with interpretations of the relation and function symbols.</p> <p>Many undergrad logic books will present both propositional and predicate logic, so if you find one it will have much more info. A couple of well-regarded options that focus directly on this sort of thing are Mendelson's book or Enderton's book.</p> <p>This set of <a href="https://sgslogic.net/t20/notes/logic.pdf" rel="nofollow noreferrer">lecture notes</a> by Stephen Simpson is free online and has a nice introduction to the area.</p>
<p>Propositional logic is an axiomatization of Boolean logic. As such predicate logic includes propositional logic. Both systems are known to be consistent, e.g. by exhibiting models in which the axioms are satisfied.</p> <p>Propositional logic is decidable, for example by the method of truth tables:</p> <p><a href="http://en.wikipedia.org/wiki/Truth_table" rel="noreferrer"> [Truth table -- Wikipedia]</a></p> <p>and "complete" in that every tautology in the sentential calculus (basically a Boolean expression on variables that represent "sentences", i.e. that are either True or False) can be proven in propositional logic (and conversely).</p> <p>Predicate logic (also called predicate calculus and first-order logic) is an extension of propositional logic to formulas involving terms and predicates. The full predicate logic is undecidable:</p> <p><a href="http://en.wikipedia.org/wiki/First-order_logic" rel="noreferrer"> [First-order logic -- Wikipedia]</a></p> <p>It is "complete" in the sense that all statements of the predicate calculus which are satisfied in every model can be proven in the "predicate logic" and conversely. This is a famous theorem by Gödel (dissertation,1929):</p> <p><a href="http://en.wikipedia.org/wiki/G%C3%B6del_completeness_theorem" rel="noreferrer"> [Gödel's completeness theorem -- Wikipedia]</a></p> <p>Note: As Doug Spoonwood commented, there are formalizations of both propositional logic and predicate logic that dispense with <em>axioms</em> per se and rely entirely on <em>rules of inference</em>. A common presentation would invoke only <em>modus ponens</em> as the single rule of inference and multiple <em>axiom schemas</em>. The important point for a formal logic is that it should be possible to recognize (with finite steps) whether a claim in a proof is logically justified, either as an instance of axiom schemas or by a rule of inference from previously established claims.</p>
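<p>The truth-table method mentioned above amounts to a brute-force decision procedure; a Python sketch (the helper name <code>is_tautology</code> is illustrative, not a standard library function):</p>

```python
from itertools import product

def is_tautology(formula, variables):
    """Decide a propositional formula by checking every truth assignment."""
    return all(formula(*values)
               for values in product([False, True], repeat=len(variables)))

implies = lambda p, q: (not p) or q

# Peirce's law ((A -> B) -> A) -> A is a classical tautology
peirce = lambda a, b: implies(implies(implies(a, b), a), a)
assert is_tautology(peirce, "AB")

# A -> B on its own is not a tautology
assert not is_tautology(implies, "AB")
```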
logic
<p>Are there some proofs that can only be shown by contradiction or can everything that can be shown by contradiction also be shown without contradiction? What are the advantages/disadvantages of proving by contradiction?</p> <p>As an aside, how is proving by contradiction viewed in general by 'advanced' mathematicians. Is it a bit of an 'easy way out' when it comes to trying to show something or is it perfectly fine? I ask because one of our tutors said something to that effect and said that he isn't fond of proof by contradiction.</p>
<p>To determine what <em>can</em> and <em>cannot</em> be proved by contradiction, we have to formalize a notion of proof. As a piece of notation, we let $\bot$ represent an identically false proposition. Then $\lnot A$, the negation of $A$, is equivalent to $A \to \bot$, and we take the latter to be the definition of the former in terms of $\bot$. </p> <p>There are two key logical principles that express different parts of what we call "proof by contradiction":</p> <ol> <li><p>The <em>principle of explosion</em>: for any statement $A$, we can take "$\bot$ implies $A$" as an axiom. This is also called <em>ex falso quodlibet</em>. </p></li> <li><p>The <em>law of the excluded middle</em>: for any statement $A$, we can take "$A$ or $\lnot A$" as an axiom. </p></li> </ol> <p>In proof theory, there are three well known systems:</p> <ul> <li><p><a href="http://en.wikipedia.org/wiki/Minimal_logic">Minimal logic</a> has neither of the two principles above, but it has basic proof rules for manipulating logical connectives (other than negation) and quantifiers. This system corresponds most closely to "direct proof", because it does not let us leverage a negation for any purpose. </p></li> <li><p><a href="http://en.wikipedia.org/wiki/Intuitionistic_logic">Intuitionistic logic</a> includes minimal logic and the principle of explosion</p></li> <li><p>Classical logic includes intuitionistic logic and the law of the excluded middle</p></li> </ul> <p>It is known that there are statements that are provable in intuitionistic logic but not in minimal logic, and there are statements that are provable in classical logic that are not provable in intuitionistic logic. In this sense, the principle of explosion allows us to prove things that would not be provable without it, and the law of the excluded middle allows us to prove things we could not prove even with the principle of explosion. So there are statements that are provable by contradiction that are not provable directly. 
</p> <p>The scheme "If $A$ implies a contradiction, then $\lnot A$ must hold" is true even in intuitionistic logic, because $\lnot A$ is just an abbreviation for $A \to \bot$, and so that scheme just says "if $A \to \bot$ then $A \to \bot$". But in intuitionistic logic, if we prove $\lnot A \to \bot$, this only shows that $\lnot \lnot A$ holds. The extra strength in classical logic is that the law of the excluded middle shows that $\lnot \lnot A$ implies $A$, which means that in classical logic if we can prove $\lnot A$ implies a contradiction then we know that $A$ holds. In other words: even in intuitionistic logic, if a statement implies a contradiction then the negation of the statement is true, but in classical logic we also have that if the negation of a statement implies a contradiction then the original statement is true, and the latter is not provable in intuitionistic logic, and in particular is not provable directly. </p>
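<p>This asymmetry is visible in any proof assistant that tracks the use of classical axioms; for instance, in Lean 4 syntax (an illustrative sketch), $A \to \lnot\lnot A$ goes through constructively while $\lnot\lnot A \to A$ needs the classical axioms:</p>

```lean
-- A → ¬¬A is constructively provable: no classical axioms needed,
-- since ¬¬A unfolds to (A → False) → False
example (A : Prop) (h : A) : ¬¬A := fun hn => hn h

-- ¬¬A → A requires classical reasoning: Classical.byContradiction
-- is derived from the law of the excluded middle
example (A : Prop) (h : ¬¬A) : A := Classical.byContradiction h
```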
<p>If a statement says "not $X$" then it is perfectly fine to assume $X$, arrive at a contradiction and conclude "not $X$". However, on <em>many</em> occasions a proof by contradiction is presented while it is really not used (let alone necessary). The reasoning then goes as follows:</p> <blockquote> <p><strong>Proof of $X$:</strong> Suppose not $X$. Then ... <em>complete proof of $X$ follows here</em>... This is a contradiction and therefore $X$.</p> </blockquote> <p>A famous example is Euclid's proof of the infinitude of primes. It is often stated as follows (not by Euclid by the way):</p> <blockquote> <p>Suppose there is only a finite number of primes. Then ... <em>construction of new prime follows</em> ... This is a contradiction so there are infinitely many primes.</p> </blockquote> <p>Without the contradiction part, you'd be left with a perfectly fine argument. Namely, given a finite set of primes, a new prime can be constructed.</p> <p>This kind of presentation is really something that you should learn to avoid. Once you're aware of this pattern it's amazing how often you'll encounter it, including here on math.se.</p>
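<p>Read this way, Euclid's argument is an explicit construction, and it can be run as one (a Python sketch; trial division is used only to keep the example short, and note the new prime need not be $\prod p_i + 1$ itself):</p>

```python
def new_prime(primes):
    """Given a finite list of primes, construct a prime not in the list."""
    n = 1
    for p in primes:
        n *= p
    n += 1
    # n leaves remainder 1 on division by every prime in the list,
    # so its smallest divisor d > 1 is a prime not in the list
    d = 2
    while n % d != 0:
        d += 1
    return d

assert new_prime([2, 3, 5]) == 31   # 2*3*5 + 1 = 31 is itself prime
assert new_prime([2, 7]) == 3       # 2*7 + 1 = 15 = 3 * 5
```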
differentiation
<p>While I do know that $\frac{dy}{dx}$ isn't a fraction and shouldn't be treated as such, in many situations, doing things like multiplying both sides by $dx$ and integrating, cancelling terms, doing things like $\frac{dy}{dx} = \frac{1}{\frac{dx}{dy}}$ works out just fine.</p> <p>So I wanted to know: Are there any particular cases (in single-variable calculus) we have to look out for, where treating $\frac{dy}{dx}$ as a fraction gives incorrect answers, in particular, at an introductory level?</p> <p><strong>Note: Please provide specific instances and examples where treating $\frac{dy}{dx}$ as a fraction fails</strong></p>
<p>It is because of the extraordinary power of Leibniz's differential notation, which allows you to treat them as fractions while solving problems. The justification for this mechanical process is apparent from the following general result:</p> <blockquote> <p>Let $ y=h(x)$ be any solution of the separated differential equation $A(y)\dfrac{dy}{dx} = B(x)$... (i) such that $h'(x)$ is continuous on an open interval $I$, where $B(x)$ and $A(h(x))$ are assumed to be continuous on $I$. If $g$ is any primitive of $A$ (i.e. $g'=A$) on $I$, then $h$ satisfies the equation $g(y)=\int {B(x)dx} + c$...(ii) for some constant $c$. Conversely, if $y$ satisfies (ii) then $y$ is a solution of (i).</p> </blockquote> <p>Also, it would be advisable to say $\dfrac{dy}{dx}=\dfrac{1}{\dfrac{dx}{dy}}$ only when the function $y(x)$ is invertible.</p> <p>Say you are asked to find the equation of the normal to a curve $y(x)$ at a particular point $(x_1,y_1)$. In general you should write the slope of the equation as $-\dfrac{1}{\dfrac{dy}{dx}}\big|_{(x_1,y_1)}$ instead of simply writing it as $-\dfrac{dx}{dy}\big|_{(x_1,y_1)}$ without checking for the invertibility of the function (which would be redundant here). However, the numerical calculations will remain the same in any case.</p> <p><strong>EDIT.</strong> </p> <p>The Leibniz notation ensures that no problem will arise if one treats the differentials as fractions because it beautifully works out in single-variable calculus. But explicitly stating them as 'fractions' in any exam/test could cost one the all-important marks. One could be criticised in this case for not being formal enough in his/her approach.</p> <p>Also have a look at <a href="https://math.stackexchange.com/a/1784701/321264">this answer</a> which explains the likely pitfalls of the fraction treatment.</p>
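<p>As a concrete instance of the quoted result, take $A(y)=y$ and $B(x)=x$: "multiplying by $dx$" gives $y\,dy=x\,dx$, hence $y^2/2=x^2/2+c$, and SymPy confirms that this implicit solution satisfies (i) (a sketch):</p>

```python
import sympy as sp

x = sp.symbols('x')
C = sp.symbols('C', positive=True)

# A(y) dy/dx = B(x) with A(y) = y, B(x) = x separates to y dy = x dx,
# i.e. y^2/2 = x^2/2 + c; take the positive branch of the solution
y = sp.sqrt(x**2 + C)

# check that y really solves y * dy/dx = x
assert sp.simplify(y * sp.diff(y, x) - x) == 0
```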
<p>In calculus we have this relationship between differentials: $dy = f^{\prime}(x) dx$ which could be written $dy = \frac{dy}{dx} dx$. If you have $\frac{dy}{dx} = \sin x$, then it's legal to multiply both sides by $dx$. On the left you have $\frac{dy}{dx} dx$. When you replace it with $dy$ using the above relationship, it looks just like you've cancelled the $dx$'s. Such a replacement is so much like division we can hardly tell the difference. </p> <p>However if you have an implicitly defined function $f(x,y) = 0$, the total differential is $f_x \;dx + f_y \;dy = 0$. "Solving" for $\frac{dy}{dx}$ gives $$\frac{dy}{dx} = -\frac{f_x}{f_y} = -\frac{\partial f / \partial x}{\partial f /\partial y}.$$ This is the correct formula for implicit differentiation, which we arrived at by treating $\frac{dy}{dx}$ as a ratio, but then look at the last fraction. If you simplify it by naively cancelling the $\partial f$'s, it makes the equation $$\frac{dy}{dx} = -\frac{dy}{dx}.$$ That pesky minus sign sneaks in because we reversed the roles of $x$ and $y$ between the two partial derivatives. Maddening.</p>
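<p>The formula $\frac{dy}{dx}=-f_x/f_y$ itself is correct, as can be cross-checked for any concrete relation with SymPy (a sketch; the polynomial below is an arbitrary example):</p>

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + x*y + y**3 - 7        # an arbitrary implicit relation f(x, y) = 0

# the formula from the answer: dy/dx = -f_x / f_y
dydx = -sp.diff(f, x) / sp.diff(f, y)

# cross-check by differentiating f(x, Y(x)) = 0 with the chain rule
# and solving for Y'(x)
Y = sp.Function('Y')
dY = sp.solve(sp.diff(f.subs(y, Y(x)), x), sp.Derivative(Y(x), x))[0]
assert sp.simplify(dY - dydx.subs(y, Y(x))) == 0
```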
logic
<p>Russell and Whitehead famously tried to actually create and use a formal system to explicitly develop formal mathematics in their work, "Principia Mathematica."</p> <p>Much more recently, with the aid of computers, there has been much work done related to the development of proof assistant software, formal verification software, and automated theorem proving software. </p> <p>However, even though extensive libraries of formal proofs have been developed with all this research, I have not been able to find any attempts made to present the contents of a given library of proofs in an "updated Principia Mathematica," as a formal development of math. </p> <p>Have I just not done an extensive enough literature search?</p> <p>Thanks in advance!</p>
<p>On the one hand, there has been a lot of success recently in creating fully formalized and computer-verified proofs of nontrivial theorems, including Hales' theorem, the prime number theorem, the Jordan curve theorem, and Gödel's incompleteness theorems. The sense I get from experts in the field is that the main challenge is time, not theory. It takes a long time to formalize human-readable proofs into computer-verifiable proofs with the present systems, but there should be no theoretical obstacles to formalizing any theorem one wishes to study. </p> <p>On the other hand, there are other reasons that nothing explicitly like <em>Principia Mathematica</em> has been developed. The first of these is that <em>Principia</em> is virtually unreadable. As a means of conveying mathematical information from one person to another, fully formal or even mostly formal proofs (the kind that a proof assistant can verify) are not as efficient as ordinary natural-language proofs. This means that few mathematicians have a desire to work with any "new" system of this kind. We already realize that virtually all mathematical theorems can be formalized in ZFC set theory, but instead we write proofs in a way that tries to convey the mathematical insight more than the technical details of a formal system, unless the technical details are somehow important. </p> <p>There has been a lot of recent work on a different foundational system called "<a href="https://en.wikipedia.org/wiki/Homotopy_type_theory">homotopy type theory</a>", which could be used instead of ZFC to formalize theorems. It remains to be seen, however, whether this new system ends up being widely adopted. There are other foundational systems, such as second-order arithmetic, which could also be used to fully formalize large parts of mathematics. 
I believe that a significant number of mathematicians don't really worry much about the foundational system they use, because the objects they deal with are sufficiently concrete that the foundations make little difference.</p> <p>The other goal of <em>Principia</em> was to support the <a href="http://plato.stanford.edu/entries/logicism/">logicist program</a> that all of mathematics can be reduced to logic. The idea that mathematics can be formalized and presented in full detail is no longer in question, as it might have been at the time. But the idea that the axioms of a foundational theory would all be fully <em>logical</em> is far from clear - in fact, it is generally considered false, because axioms such as the axiom of infinity or the axiom of replacement do not seem purely "logical" to many mathematicians. </p>
<p>I think that <a href="http://us.metamath.org/" rel="nofollow noreferrer">Metamath</a> (<a href="http://metamath.org/" rel="nofollow noreferrer">list of mirrors</a>) is likely the best candidate for a modern version of Principia Mathematica. The <a href="http://us.metamath.org/mpegif/mmtheorems.html" rel="nofollow noreferrer">theorem list</a> is broken up into many parts and develops:</p> <ul> <li>Propositional calculus</li> <li><a href="https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory" rel="nofollow noreferrer">ZF</a>, ZFC, and <a href="https://en.wikipedia.org/wiki/Tarski%E2%80%93Grothendieck_set_theory" rel="nofollow noreferrer">TG</a> (roughly the type of set theory that <a href="http://wiki.mizar.org/twiki/bin/view/Mizar/Tarski-GrothendieckSetTheory" rel="nofollow noreferrer">Mizar uses</a> as well)</li> <li>Real and complex numbers</li> <li>Abstract algebraic structures (including category theory)</li> <li>Topology (a lot of pointset, and some basic definitions for algebraic)</li> <li>Precalculus and Calculus concepts</li> <li>and various other assorted things</li> </ul> <p>Additionally, <a href="http://us.metamath.org/mpegif/mmset.html#overview" rel="nofollow noreferrer">Principia Mathematica was an inspiration for Metamath</a>, and a major contributor made <a href="https://www.youtube.com/watch?v=8WH4Rd4UKGE" rel="nofollow noreferrer">a talk on youtube</a> a couple of months after the OP's question was posted titled "Metamath Proof Explorer: A Modern Principia Mathematica".</p>
That said, some of those few mathematicians have been working on Metamath, regardless of how it is certainly more difficult to read in some ways than a textbook or paper.</p>
linear-algebra
<p>I have a couple of questions about tensor products:</p> <p>Why is $\text{Hom}(V,W)$ the same thing as $V^* \otimes W$? </p> <p>Why is an element of $V^{*\otimes m}\otimes V^{\otimes n}$ the same thing as a multilinear map $V^m \to V^{\otimes n}$? </p> <p>What is the general formulation of this principle?</p>
<p>The result is generally wrong for infinite-dimensional spaces: see <a href="https://math.stackexchange.com/questions/573378/u-otimes-v-versus-lu-v-for-infinite-dimensional-spaces/573416#573416">this question</a>.</p> <p>For finite dimensional space $V$, let's build an isomorphism $f : V^* \otimes W \to \hom(V,W)$ by defining</p> <p>$$f(\phi \otimes w)(v) = \phi(v) w$$</p> <p>This clearly defines a linear map $V^* \otimes W \to \hom(V,W)$ (it's bilinear in $V^* \times W$). Reciprocally, take a basis $(e_i)$ of $V$, then define $g : \hom(V,W) \to V^* \otimes W$ by:</p> <p>$$g(u) = \sum_i e_i^* \otimes u(e_i)$$</p> <p>Where $(e_i^*)$ is the dual basis to $(e_i)$ (I will use a few of its properties in $\color{red}{red}$ below). This is well-defined because $V$ is finite-dimensional (the sum is finite). Let's check that $f$ and $g$ are inverse to each other:</p> <ul> <li><p>For $u : V \to W$, $$f(g(u))(v) = \sum e_i^*(v) u(e_i) = u \left( \sum e_i^*(v) e_i \right) \color{red}{=} u(v)$$ and so $f(g(u)) = u$.</p></li> <li><p>For $\phi \otimes w \in V^* \otimes W$, $$g(f(\phi \otimes w)) = \sum e_i^* \otimes f(\phi \otimes w)(e_i) = \sum e_i^* \otimes \phi(e_i) w = \sum \phi(e_i) e_i^* \otimes w \color{red}{=} \phi \otimes w$$</p></li> </ul> <p>And so $f$ and $g$ are isomorphisms, inverse to each other.</p> <hr> <p>It is known that for finite dimensional $V$, then $(V^*)^{\otimes m} = (V^{\otimes m})^*$. Then an element of $V^{* \otimes m} \otimes V^{\otimes n}$ is an element of $(V^{\otimes m})^* \otimes V^{\otimes n} = \hom(V^{\otimes m}, V^{\otimes n})$. So by definition / universal property of the tensor product, it's a multilinear map $V^m \to V^{\otimes n}$.</p>
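<p>In coordinates (taking $V=\Bbb R^3$, $W=\Bbb R^2$), the map $f$ above sends $\phi\otimes w$ to the rank-one matrix $w\phi^T$, and $g$ rebuilds a matrix from its columns; a NumPy sketch with arbitrary test vectors:</p>

```python
import numpy as np

# f(phi ⊗ w) is the rank-one map v ↦ phi(v) w, i.e. the outer
# product w phi^T as a 2x3 matrix
phi = np.array([1.0, -2.0, 3.0])     # a functional on V = R^3
w = np.array([4.0, 5.0])             # a vector in W = R^2
M = np.outer(w, phi)

v = np.array([0.5, 1.0, -1.0])
assert np.allclose(M @ v, (phi @ v) * w)

# conversely, g decomposes any u ∈ Hom(V, W) as sum_i e_i^* ⊗ u(e_i);
# in coordinates, u is the sum of the outer products of its columns
# with the standard dual basis
u = np.array([[1., 2., 0.], [0., -1., 5.]])
recovered = sum(np.outer(u[:, i], np.eye(3)[i]) for i in range(3))
assert np.allclose(recovered, u)
```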
<p>One general form of that assertion (noting that it cannot be quite as simple as one might imagine) is the Cartan-Eilenberg adjunction <span class="math-container">$$ \mathrm{Hom}(X\otimes Y,Z)\;\approx\;\mathrm{Hom}(X,\mathrm{Hom}(Y,Z)) $$</span> in some reasonable additive (or whatever) category, where, significantly, the tensor product must be a <em>genuine</em> categorical tensor product, as opposed to a &quot;projective&quot; or &quot;injective&quot; tensor product, which have only half the properties of a genuine tensor product. So, for example, there is <em>no</em> genuine tensor product of (infinite-dimensional) Hilbert spaces in <em>any</em> reasonable category of topological vector spaces. (The thing often called the &quot;Hilbert-space tensor product&quot; has only half the requisite properties.)</p>
logic
<p>All the definitions I came across so far stated that if a statement is true, then its dual statement is also true, and this dual statement is obtained by changing <code>+</code> for <code>.</code>, <code>0</code> for <code>1</code> and vice versa.</p> <p>However when I say <code>1+1</code>, whose dual statement according to the above is <code>0.0</code>, I get opposite results, that is:</p> <pre><code>1 + 1 = 1
0 . 0 = 0
</code></pre> <p>How should I understand this duality principle?</p>
<p>"$1 + 1 = 1$" is a <em>statement</em> (a boolean statement, in fact), and indeed, $1 + 1 = 1$ happens to be a true statement.</p> <p>Likewise, the entire statement "$0 \cdot 0 = 0$" is a <em>true statement</em>, since $0 \cdot 0$ <em>correctly</em> evaluates to false: and this is exactly what "$0 \cdot 0 = 0$" asserts, so it is a correct (true) statement about the falsity of $0 \cdot 0$. </p> <p>The duality principle ensures that "if we exchange <strong>every</strong> symbol by its dual in a formula, we get the <strong>dual</strong> result".</p> <ul> <li>Everywhere we see 1, change to 0. </li> <li>Everywhere we see 0, change to 1. </li> <li>Similarly, + to $\cdot$, and $\cdot$ to +.</li> </ul> <hr> <p>More examples:</p> <p>(a) <code>0 . 1 = 0</code>: is a true statement asserting that "false and true evaluates to false"</p> <p>(b) <code>1 + 0 = 1</code>: is the dual of (a): it is a true statement asserting that "true or false evaluates true."</p> <hr> <p>(c) <code>1 . 1 = 1</code>: it is a true statement asserting that "true and true evaluates to true".</p> <p>(d) <code>0 + 0 = 0</code>: (d) is the dual of (c): it is a true statement asserting, correctly, that "false or false evaluates to false".</p>
<p>The statement is the full equation, including the = sign. <code>1+1</code> is neither true nor false: it takes the value <code>1</code>, but it is not actually saying anything. Analogously, the expression "Tom has a cat" is neither true nor false (without specifying who Tom is) - it is an expression which could be true or false, depending on who we mean when we say "Tom".</p> <p>On the other hand, the statement <code>1+1=0</code> is a false. Analogously, the statement "If Tom has a cat then Tom has no cats" is false, no matter who we mean when we say "Tom".</p> <p>In this case, <code>1+1=1</code> is the true statement. Its dual is <code>0.0=0</code>, which is also a true statement.</p>
logic
<p><a href="http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems">Gödel's first incompleteness theorem</a> states that "...For any such system, there will always be statements about the natural numbers that are true, but that are unprovable within the system".</p> <p>What does it mean that a statement is true if it's not provable?</p> <p>What is the difference between true and provable?</p>
<p>Consider this claim: <strong>John Smith will never be able to prove this statement is true.</strong></p> <p>If the statement is false, then John Smith will be able to prove it's true. But clearly that can't be, since it's impossible to prove that a false statement is true. (Assuming John Smith is sensible.)</p> <p>If it's true, there's no contradiction. It just means John Smith won't be able to prove it's true. So it's true, and John Smith won't be able to prove it's true. This is a limit on what John Smith can do. (So if John Smith is sensible, there are truths he cannot prove.)</p> <p>What Goedel showed is that for any sensible formal axiom system, there will be formal versions of "this axiom system cannot prove this claim is true". It will be a statement expressible in that formal system but, obviously, not provable within that axiom system.</p>
<p>Provable means that there is a formal proof using the axioms that you want to use. The set of axioms (in this case axioms for arithmetic, i.e., natural numbers) is the "system" that your quote mentions. True statements are those that hold for the natural numbers (in this particular situation).</p> <p>The point of the incompleteness theorems is that for every reasonable system of axioms for the natural numbers there will always be true statements that are unprovable. You can of course cheat and say that your axioms are simply all true statements about the natural numbers, but this is not a reasonable system since there is no algorithm that decides whether or not a given statement is one of your axioms or not.</p> <p>As a side note, your quote is essentially the first incompleteness theorem, in the form in which it easily follows from the second.</p> <p>In general (not speaking about natural numbers only) given a structure, i.e., a set together with relations on the set, constants in the set, and operations on the set, there is a natural way to define when a (first order) sentence using the corresponding symbols for your relations, constants, and operations holds in the structure. ($\forall x\exists y(x\cdot y=e)$ is a sentence using symbols corresponding to constants and operations that you find in a group, and the sentence says that every element has an inverse.)</p> <p>So this defines what is true in a structure. In order to prove a statement (sentence), you need a set of axioms (like the axioms of group theory) and a notion of formal proof from these axioms. I won't elaborate here, but the important connection between true statements and provable statements is the completeness theorem:</p> <p>A sentence is provable from a set of axioms iff every structure that satisfies the axioms also satisfies the sentence. 
</p> <p>This theorem tells you what the deal with the incompleteness theorems is: We consider true statements about the natural numbers, but a statement is provable from a set of axioms only if it holds for all structures satisfying the axioms. And there will be structures that are not isomorphic to the natural numbers.</p>
number-theory
<p>I have been fascinated by the <a href="http://en.wikipedia.org/wiki/Collatz_problem" rel="noreferrer">Collatz problem</a> since I first heard about it in high school.</p> <blockquote> <p>Take any natural number <span class="math-container">$n$</span>. If <span class="math-container">$n$</span> is even, divide it by <span class="math-container">$2$</span> to get <span class="math-container">$n / 2$</span>, if <span class="math-container">$n$</span> is odd multiply it by <span class="math-container">$3$</span> and add <span class="math-container">$1$</span> to obtain <span class="math-container">$3n + 1$</span>. Repeat the process indefinitely. The conjecture is that no matter what number you start with, you will always eventually reach <span class="math-container">$1$</span>. [...]</p> <p>Paul Erdős said about the Collatz conjecture: &quot;Mathematics is not yet ready for such problems.&quot; He offered $500 USD for its solution.</p> </blockquote> <p><em><strong>QUESTIONS</strong></em>:</p> <p>How important do you consider the answer to this question to be? Why?</p> <p>Would you speculate on what might have possessed Paul Erdős to make such an offer?</p> <p>EDIT: Is there any reason to think that a proof of the Collatz Conjecture would be complex (like the FLT) rather than simple (like PRIMES is in P)? And can this characterization of FLT vs. PRIMES is in P be made more specific than a bit-length comparison?</p>
<p>Most of the answers so far have been along the general lines of 'Why hard problems are important', rather than 'Why the Collatz conjecture is important'; I will try to address the latter.</p> <p>I think the basic question being touched on is:</p> <blockquote> <p>In what ways does the prime factorization of <span class="math-container">$a$</span> affect the prime factorization of <span class="math-container">$a+1$</span>?</p> </blockquote> <p>Of course, one can always multiply out the prime factorization, add one, and then factor again, but this throws away the information of the prime factorization of <span class="math-container">$a$</span>. Note that this question is also meaningful in other UFDs, like <span class="math-container">$\mathbb{C}[x]$</span>.</p> <p>It seems very hard to come up with answers to this question that don't fall under the heading of 'immediate', such as distinct primes in each factorization. This seems to be in part because a small change in the prime factorization for <span class="math-container">$a$</span> (multiplication by a prime, say) can have a huge change in the prime factorization for <span class="math-container">$a+1$</span> (totally distinct prime support perhaps). Therefore, it is tempting to regard the act of adding 1 as an essentially-random shuffling of the prime factorization.</p> <p>The most striking thing about the Collatz conjecture is that it seems to be making a deep statement about a subtle relation between the prime factorizations of <span class="math-container">$a$</span> and <span class="math-container">$a+1$</span>. 
Note that the Collatz iteration consists of three steps; two of which are 'small' in terms of the prime factorization, and the other of which is adding one:</p> <ul> <li>multiplying by 3 has a small effect on the factorization.</li> <li>adding 1 has a (possibly) huge effect on the factorization.</li> <li>factoring out a power of 2 has a small effect on the factorization (in that it doesn't change the other prime powers in the factorization).</li> </ul> <p>So, the Collatz conjecture seems to say that there is some sort of abstract quantity like 'energy' which cannot be arbitrarily increased by adding 1. That is, no matter where you start, and no matter where this weird prime-shuffling action of adding 1 takes you, eventually the act of pulling out 2s takes enough energy out of the system so that you reach 1. I think it is for reasons like this that mathematicians suspect that a solution of the Collatz conjecture will open new horizons and develop new and important techniques in number theory.</p>
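The three-step iteration described in this answer is easy to state in code. Here is a minimal sketch (the function name is mine, not from the answer), which also illustrates how erratically the "energy" drains away: neighbouring starting values can take wildly different numbers of steps to reach 1.

```python
def collatz_steps(n):
    """Count Collatz iterations until n reaches 1 (assuming it does)."""
    steps = 0
    while n != 1:
        if n % 2 == 0:
            n //= 2        # factor out a 2: a 'small' change to the factorization
        else:
            n = 3 * n + 1  # multiply by 3 (small change), then add 1 (possibly huge change)
        steps += 1
    return steps

# Neighbouring starting points behave very differently:
print(collatz_steps(26))  # 10 steps
print(collatz_steps(27))  # 111 steps
```

Of course, nothing in this loop guarantees termination; that the `while` always exits is exactly the conjecture.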
<p>The Collatz conjecture is <em><strong>the</strong></em> simplest open problem in mathematics. You can explain it to all your non-mathematical friends, and even to small children who have just learned to divide by 2. It doesn't require understanding divisibility, just evenness.</p> <p>The lack of connections between this conjecture and existing mathematical theories (as complained of in some other answers) is not an inadequacy of this conjecture, but of our theories.</p> <p>This problem has led directly to theoretical work by Conway showing that <strong>very similar questions are formally undecidable</strong>, certainly a surprising result.</p> <p>The problem also relates directly to chaotic cellular automata. If you look at a number in base 6, you will see that multiplying by 3 and dividing by 2 are the same operation (differing only by a factor of 6, <em>i.e.</em> the location of the decimal point), and the operation is local: each new digit only depends on two of the previous step's digits. Using a 7th state for cells that are not part of the number, a very simple cellular automaton is obtained where each cell only needs to look at <em>one</em> neighbor to compute its next value. <sub>(Wolfram Mathworld has some nonsense about a CA implementation being difficult due to carries, but there are no carries when you add 1, because after multiplying by 3 the last digit is either 0 (becomes a non-digit because number was even so we should divide by 6) or 3 (becomes 4), so there are never any carries.)</sub></p> <p>It is easy to prove that this CA is chaotic: If you change the interior digits in <em>any</em> way, the region of affected digits always grows linearly with time (by <span class="math-container">$\log_6 3$</span> digits per step). This prevents any engineering of the digit patterns, which are quickly randomized. If the final digit behaves randomly, then the conjecture is true. 
Clearly <strong>any progress on the Collatz conjecture would immediately have consequences for symbolic dynamics</strong>.</p> <p>Emil Post's <em>tag systems</em> (which he created in 1920 expressly for studying the <strong>foundations of mathematics</strong>) have been studied for many decades, and they have been the foundation of the smallest universal Turing machines (as well as other universal systems) since 1961. In 2007, Liesbeth De Mol discovered that the Collatz problem can be encoded as the following 2-tag system:</p> <p><span class="math-container">$\begin{eqnarray} \hspace{2cm} \alpha &amp; \longrightarrow &amp; c \, y \\ \hspace{2cm} c &amp; \longrightarrow &amp; \alpha \\ \hspace{2cm} y &amp; \longrightarrow &amp; \alpha \alpha \alpha \\ \end{eqnarray}$</span></p> <p>In two passes, this tag system processes the word <span class="math-container">$\alpha^{n}$</span> into either <span class="math-container">$\alpha^{n/2}$</span> or <span class="math-container">$\alpha^{(3n+1)/2}$</span> depending on the parity of <span class="math-container">$n$</span>. Larger tag systems are known to be universal, and any progress on the 3x+1 problem will be followed with close attention by this field.</p> <p>In short the Collatz problem is simple enough that anyone can understand it, and yet relates not just to number theory (as described in other answers) but to issues of decidability, chaos, and the foundations of mathematics and of computation. That's about as good as it gets for a problem even a small child can understand.</p>
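De Mol's encoding can be checked by direct simulation. The sketch below assumes the usual 2-tag semantics (at each step, append the production of the first symbol, then delete the first two symbols); the function names are mine, and `a`, `c`, `y` stand for $\alpha$, $c$, $y$.

```python
RULES = {"a": "cy", "c": "a", "y": "aaa"}  # alpha -> c y,  c -> alpha,  y -> alpha alpha alpha

def tag_step(word):
    """One step of a 2-tag system: append the production of the
    first symbol, then delete the first two symbols."""
    return word[2:] + RULES[word[0]]

def collatz_via_tag(n):
    """Run the tag system on alpha^n (n >= 2) until the word is again a
    pure block of alpha's; its length is n/2 or (3n+1)/2."""
    word = "a" * n
    while not (set(word) == {"a"} and len(word) != n):
        word = tag_step(word)
    return len(word)

print(collatz_via_tag(4))  # even: 4 // 2 = 2
print(collatz_via_tag(3))  # odd: (3*3 + 1) // 2 = 5
```

Tracing $n = 3$ by hand gives `aaa` → `acy` → `ycy` → `yaaa` → `aaaaa`, i.e. $\alpha^3 \mapsto \alpha^5$, matching $(3n+1)/2$.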
differentiation
<p>Alan Turing's notebook has recently been sold at an auction house in London. In it he says this:</p> <p><img src="https://i.sstatic.net/6w4uC.png" alt="enter image description here"> Written out:</p> <blockquote> <p>The Leibniz notation $\frac{\mathrm{d}y}{\mathrm{d}x}$ I find extremely difficult to understand in spite of it having been the one I understood best once! It certainly implies that some relation between $x$ and $y$ has been laid down e.g. \begin{equation} y = x^2 + 3x \end{equation}</p> </blockquote> <p>I am trying to get an idea of what he meant by this. I imagine he was a dab hand at differentiation from first principles so he is obviously hinting at something more subtle but I can't access it.</p> <ul> <li>What is the depth of understanding he was trying to acquire about this mathematical operation?</li> <li>What does intuitive differentiation notation require?</li> <li>Does this notation bring out the full subtlety of differentiation?</li> <li>What did he mean?</li> </ul> <p>See <a href="https://www.bonhams.com/auctions/22795/lot/1/" rel="noreferrer">here</a> for more pages of the notebook.</p>
<p>I'm guessing too, but my guess is that it has something to do with the fact that beyond his initial introduction to calculus Turing (in common with many of us) thinks of a <em>function</em> as the entity of which a derivative is taken, either universally or at a particular point. In Leibnitz's notation, $y$ isn't explicitly a function. It's something that has been previously related to $x$, but it actually <em>is</em> a variable, or an axis of a graph, or the output of the function, not the function or the relation <em>per se</em>.</p> <p>Defining $y$ as being related to $x$ by $y = x^2 + 3x$, and then writing $\frac{\mathrm{d}y}{\mathrm{d}x}$ to be "the derivative of $y$ with respect to $x$", quite reasonably might seem unintuitive and confusing to Turing once he's habitually thinking about functions as complete entities. That's not to say he can't figure out what the notation refers to, of course he can, but he's remarking that he finds it difficult to properly grasp.</p> <p>I don't know what notation Turing preferred, but Lagrange's notation was to define a function $f$ by $f(x) = x^2 + 3x$ and then write $f'$ for the derivative of $f$. This then is implicitly with respect to $f$'s single argument. We have no $y$ that we need to understand, nor do we need to deal with any urge to understand what $\mathrm{d}y$ might be in terms of a rigorous theory of infinitesimals. The mystery is gone. But it's hard to deal with the partial derivatives of multivariate functions in that notation, so you pay your money and take your choice.</p>
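The "function as the entity of which a derivative is taken" view is easy to make concrete. Below is a small sketch (the names are mine) of a differentiation operator $D$ that maps a function to a function, with no $y$ anywhere; it is realized numerically by a central difference, which for the quadratic from the question is exact up to rounding.

```python
def D(f, h=1e-6):
    """A differentiation operator in the Lagrange spirit: it takes
    the function f and returns a new function approximating f'."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x**2 + 3*x  # the relation y = x^2 + 3x, packaged as a function
g = D(f)                  # g plays the role of f'; no variable y is involved

print(round(g(2.0), 6))   # f'(x) = 2x + 3, so f'(2) = 7
```

Here the derivative is taken of `f` itself, not of a variable previously related to `x`, which is exactly the shift in viewpoint this answer attributes to Turing.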
<p>I'll try my hand and give one possible rationale behind Turing's confusion over the notation $ \tfrac{\mathrm{d}y}{\mathrm{d}x} $. The short answer is that he appears to take issue with the notation on the grounds that differentiation is a mapping between two function spaces but $y$ looks like a variable.</p> <p>To answer the first part of your question regarding depth and his meaning, I base my answer on the discussion in the linked webpage. Based on the dating of the notes and the discussion in the pictures, I doubt that his contention is regarding actual interpretation in terms of differentials $\mathrm{d}x, \mathrm{d}y$ and is instead more pedantic in nature. Earlier in the discussion, he talks about indeterminates and the difference between an indeterminate and a variable. Later, he writes "What is the way out? The notation $\tfrac{\mathrm{d}}{\mathrm{d}x} f(x, y)_{x=y,y=x}$ hardly seems to help in this difficult case". From this I gather that it's primarily the fact that he doesn't like the use of $y$ as something being differentiated. He states that $y = x^2 + 3x$ as if to say that you could alternatively just rearrange the equation in terms of $x$; however, in taking the derivative, you get another function - that is, you have a function $f(x)$ that becomes $g(x)$ as a result of differentiation $D:f(x) \rightarrow g(x)$. Thus, $y$ can't be a variable but must rather be a function if we want differentiation to be well-defined. Think of it as something akin to abuse of notation.</p> <p>Regarding intuition and subtlety, Leibniz's notation does indeed provide both, although in truth $$ \dfrac{\mathrm{d}}{\mathrm{d}x} f(x) = g(x)$$ is clearer than using $y$. From a more intuitive point of view, you can think of the derivative as a variation in $f$ with respect to $x$, and indeed it's this notion that is more important than the one regarding secants and tangents.
The subtlety of the notation is readily apparent when people use the chain rule, where the derivatives appear to cancel even though you're not really cross-multiplying (for instance, the apparent cancellation does not work with second order derivatives). The subtlety becomes more apparent when you abstract even further to vector calculus, or further still to differentiable manifolds. </p>
linear-algebra
<p>I am reading the book &quot;Introduction to Linear Algebra&quot; by Gilbert Strang and couldn't help wondering about the advantages of LU decomposition over Gaussian Elimination!</p> <p>For a system of linear equations in the form <span class="math-container">$Ax = b$</span>, one of the methods to solve the unknowns is Gaussian Elimination, where you form an upper triangular matrix <span class="math-container">$U$</span> by forward elimination and then figure out the unknowns by backward substitution. This serves the purpose of solving a system of linear equations. What was the necessity for LU Decomposition, i.e. after finding <span class="math-container">$U$</span> by forward elimination, why do we go about finding <span class="math-container">$L$</span> (the lower triangular matrix) when you already had <span class="math-container">$U$</span> and could have done a backward substitution?</p>
<p>In many engineering applications, when you solve $Ax = b$, the matrix $A \in \mathbb{R}^{N \times N}$ remains unchanged, while the right hand side vector $b$ keeps changing.</p> <p>A typical example is when you are solving a partial differential equation for different forcing functions. For these different forcing functions, the meshing is usually kept the same. The matrix $A$ only depends on the mesh parameters and hence remains unchanged for the different forcing functions. However, the right hand side vector $b$ changes for each of the forcing functions.</p> <p>Another example is when you are solving a time dependent problem, where the unknowns evolve with time. In this case again, if the time stepping is constant across different time instants, then again the matrix $A$ remains unchanged and only the right hand side vector $b$ changes at each time step.</p> <p>The key idea behind solving using the $LU$ factorization (for that matter any factorization) is to decouple the factorization phase (usually computationally expensive) from the <em>actual</em> solving phase. The factorization phase only needs the matrix $A$, while the <em>actual</em> solving phase makes use of the factored form of $A$ and the right hand side to solve the linear system. Hence, once we have the factorization, we can make use of the factored form of $A$ to solve for different right hand sides at a relatively moderate computational cost.</p> <p>The cost of factorizing the matrix $A$ into $LU$ is $\mathcal{O}(N^3)$. Once you have this factorization, the cost of solving $LUx = b$ is just $\mathcal{O}(N^2)$, since the cost of solving a triangular system scales as $\mathcal{O}(N^2)$.</p> <p>(Note that to solve $LUx = b$, you first solve $Ly = b$ and then $Ux = y$.
Solving $Ly = b$ and $Ux=y$ costs $\mathcal{O}(N^2).$)</p> <p>Hence, if you have '$r$' right hand side vectors $\{b_1,b_2,\ldots,b_r\}$, once you have the $LU$ factorization of the matrix $A$, the total cost to solve $$Ax_1 = b_1, Ax_2 = b_2 , \ldots, Ax_r = b_r$$ scales as $\mathcal{O}(N^3 + rN^2)$.</p> <p>On the other hand, if you do Gauss elimination separately for each right hand side vector $b_j$, then the total cost scales as $\mathcal{O}(rN^3)$, since each Gauss elimination independently costs $\mathcal{O}(N^3)$.</p> <p>However, typically when people say Gauss elimination, they usually refer to $LU$ decomposition and not to the method of solving each right hand side completely independently.</p>
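The factor-once / solve-many pattern described above can be sketched as follows. This is a toy Doolittle factorization without pivoting (so it assumes nonzero pivots); the names are mine, and in practice you would use a library routine such as SciPy's `lu_factor`/`lu_solve`.

```python
import numpy as np

def lu_decompose(A):
    """Doolittle LU factorization without pivoting (assumes nonzero pivots)."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]     # the Gauss-elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]  # eliminate below the pivot
    return L, U

def lu_solve(L, U, b):
    """Solve LUx = b: forward substitution for Ly = b, then back substitution for Ux = y."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                      # Ly = b, O(n^2)
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in reversed(range(n)):            # Ux = y, O(n^2)
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

A = np.array([[4., 3.], [6., 3.]])
L, U = lu_decompose(A)                      # O(n^3), done once
for b in (np.array([10., 12.]), np.array([7., 9.])):
    print(lu_solve(L, U, b))                # O(n^2) per right hand side
```

The expensive `lu_decompose` call sits outside the loop, while each new right hand side only pays for the two cheap triangular solves.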
<p>The computational cost of solving <span class="math-container">$x$</span> for <span class="math-container">$Ax = b$</span> via Gaussian elimination or <span class="math-container">$LU-$</span>decomposition is the same. Solving <span class="math-container">$Ax = b$</span> via Gaussian elimination with partial pivoting on the augmented matrix <span class="math-container">$[A | b]$</span> transforms <span class="math-container">$[A|b]$</span> into an upper-triangular system <span class="math-container">$[U | b^{'}]$</span>. Thereafter, backward substitution is performed to determine <span class="math-container">$x$</span>. Furthermore, in the <span class="math-container">$LU-$</span>decomposition, the matrix <span class="math-container">$L$</span> is nothing but the &quot;multipliers&quot; obtained during the Gauss elimination process. So there is no additional cost of computing <span class="math-container">$L$</span>.</p> <p>But consider a scenario where the right-hand side of <span class="math-container">$Ax = b $</span> keeps on changing and <span class="math-container">$A$</span> is fixed. That means you deal with the systems of equations <span class="math-container">$Ax = b_1, Ax=b_2, ...Ax=b_k$</span>. In this case, <span class="math-container">$LU-$</span>decomposition works more efficiently than Gauss elimination, as you do not need to repeat the elimination for each augmented matrix. You just have to decompose <span class="math-container">$A$</span> into <span class="math-container">$LU$</span> (or Gaussian elimination with cost ~ <span class="math-container">$n^3$</span>) once.</p>