probability
<p>I'm studying Probability theory, but I can't fully understand what Borel sets are. In my understanding, an example would be if we have a line segment [0, 1], then a Borel set on this interval is a set of all intervals in [0, 1]. Am I wrong? I just need more examples. </p> <p>Also I want to understand what the Borel $\sigma$-algebra is.</p>
<p>To try and motivate the technical answers, I'm ploughing through this stuff myself, so, people, do correct me:</p> <p>Imagine <a href="https://en.wikipedia.org/wiki/Arnold_Schwarzenegger" rel="noreferrer">Arnold Schwarzenegger</a>'s height was recorded to <em>infinite</em> precision. Would you prefer to try and guess Arnie's <em>exact</em> height, or some <em>interval</em> containing it?</p> <p>But what if there was a website for this game, which provided some pre-defined intervals? That could be quite annoying, if say, the bands offered were <span class="math-container">$[0,1m)$</span> and <span class="math-container">$[1m,\infty)$</span>. I suspect most of us could improve on those.</p> <p>Wouldn't it be better to be able to choose an arbitrary interval? That's what the Borel <span class="math-container">$\sigma$</span>-algebra offers: a choice of all the possible intervals you might need or want. </p> <p>It would make for a seriously (infinitely) long drop down menu, but it's conceptually equivalent: all the members are predefined. But you still get the convenience of choosing an arbitrary interval. </p> <p>The Borel sets just function as the building blocks for the menu that is the Borel <span class="math-container">$\sigma$</span>-algebra.</p>
<p>First let me clear one misconception.</p> <p>The set <em>of all subintervals</em> is <strong>not</strong> a Borel set, but rather a collection of Borel sets. Every subinterval <em>is</em> a Borel set in its own right.</p> <p>To understand the Borel sets and their connection with probability one first needs to bear in mind two things:</p> <ol> <li><p>Probability is $\sigma$-additive, namely if $\{X_i\mid i\in\mathbb N\}$ is a list of mutually exclusive events then $P(\bigcup X_i)=\sum P(X_i)$.</p> <p>Therefore the collection of all events whose probability we can measure must have the property that it is closed under countable unions; trivially we require closure under complements (i.e. negation) and thus by De Morgan's laws we also have closure under countable intersections.</p> <p>If so, the collection of all events whose probability we can measure is a $\sigma$-algebra.</p></li> <li><p>We wish to extend the idea that the probability that $x\in (a,b)$, where $(a,b)$ is a subinterval of $[0,1]$, is exactly $b-a$. Namely the length of the interval is the probability that we choose a point from it.</p></li> </ol> <p>Combining these two requirements, we have that the Borel sets of $[0,1]$ form a collection which is a $\sigma$-algebra, and it contains all the subintervals of $[0,1]$. Since we do not want to add more than we need, the Borel sets are defined to be <strong>the smallest $\sigma$-algebra which contains all the subintervals</strong>.</p>
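<p>For a concrete instance of how the two requirements interact, take the pairwise disjoint intervals $A_n=\left(\frac{1}{2^{n+1}},\frac{1}{2^n}\right]$ for $n\geq 1$. Each has length $\frac{1}{2^n}-\frac{1}{2^{n+1}}=\frac{1}{2^{n+1}}$, their union is $\left(0,\frac12\right]$, and $\sigma$-additivity forces $$P\left(\bigcup_{n\geq 1}A_n\right)=\sum_{n\geq 1}\frac{1}{2^{n+1}}=\frac12,$$ which is exactly the length of $\left(0,\frac12\right]$. Asking for a $\sigma$-algebra containing all the subintervals is what keeps such countable constructions measurable.</p>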
number-theory
<p>A few days ago I was recalling some facts about the $p$-adic numbers, for example the fact that the $p$-adic metric is an ultrametric implies very strongly that there is no order on $\mathbb{Q}_p$, as any number in the interior of an open ball is in fact its center.</p> <p>I know that if you take the completion of the algebraic closure of the $p$-adic completion you get something which is isomorphic to $\mathbb{C}$ (this result was very surprising until I studied model theory, then it became obvious).</p> <p>Furthermore, if the algebraic closure is of an extension of dimension $2$ then the field is orderable, or even real closed. Either way, it implies that the $p$-adic numbers don't have this property.</p> <p>So I was thinking, is there a $p$-adic number whose square equals $2$? $3$? $2011$? For which prime numbers $p$? How far down the rabbit hole of algebraic numbers can you go inside the $p$-adic numbers? Are there general results connecting the choice (or rather properties) of $p$ to the "amount" of algebraic closure it gives?</p>
<blockquote> <p>A few days ago I was recalling some facts about the p-adic numbers, for example the fact that the p-adic metric is an ultrametric implies very strongly that there is no order on <span class="math-container">$\mathbb{Q}_p$</span>, as any number in the interior of an open ball is in fact its center.</p> </blockquote> <p>This argument is not correct. For instance, why does it not apply to <span class="math-container">$\mathbb{Q}$</span> with the <span class="math-container">$p$</span>-adic metric? In fact any field which admits an ordering also admits a nontrivial non-Archimedean metric.</p> <p>It is true though that <span class="math-container">$\mathbb{Q}_p$</span> cannot be ordered. By the Artin-Schreier theorem, this is equivalent to the fact that <span class="math-container">$-1$</span> is a sum of squares. Using Hensel's Lemma and a little quadratic form theory it is not hard to show that <span class="math-container">$-1$</span> is a sum of four squares in <span class="math-container">$\mathbb{Q}_p$</span>.</p> <blockquote> <p>I know that if you take the completion of the algebraic closure of the p-adic completion you get something which is isomorphic to <span class="math-container">$\mathbb{C}$</span> (this result was very surprising until I studied model theory, then it became obvious).</p> </blockquote> <p>I don't mean to pick, but I am familiar with basic model theory and I don't see how it helps to establish this result. Rather it is basic field theory: any two algebraically closed fields of equal characteristic and absolute transcendence degree are isomorphic. (From this the completeness of the theory of algebraically closed fields of any given characteristic follows easily, by Vaught's test.)</p> <blockquote> <p>So I was thinking, is there a <span class="math-container">$p$</span>-adic number whose square equals 2? 3? 2011? For which prime numbers <span class="math-container">$p$</span>?</p> </blockquote> <p>All of these answers depend on <span class="math-container">$p$</span>. The general situation is as follows: for any odd <span class="math-container">$p$</span>, the group of <strong>square classes</strong> <span class="math-container">$\mathbb{Q}_p^{\times}/\mathbb{Q}_p^{\times 2}$</span> -- which parameterizes quadratic extensions -- has order <span class="math-container">$4$</span>, meaning there are exactly three quadratic extensions of <span class="math-container">$\mathbb{Q}_p$</span> inside any algebraic closure. If <span class="math-container">$u$</span> is any integer which is not a square modulo <span class="math-container">$p$</span>, then these three extensions are given by adjoining <span class="math-container">$\sqrt{p}$</span>, <span class="math-container">$\sqrt{u}$</span> and <span class="math-container">$\sqrt{up}$</span>. When <span class="math-container">$p = 2$</span> the group of square classes has cardinality <span class="math-container">$8$</span>, meaning there are <span class="math-container">$7$</span> quadratic extensions.</p> <blockquote> <p>How far down the rabbit hole of algebraic numbers can you go inside the p-adic numbers? Are there general results connecting the choice (or rather properties) of <span class="math-container">$p$</span> to the &quot;amount&quot; of algebraic closure it gives?</p> </blockquote> <p>I don't know exactly what you are looking for as an answer here. 
The absolute Galois group of <span class="math-container">$\mathbb{Q}_p$</span> is in some sense rather well understood: it is an infinite profinite group but it is &quot;small&quot; in the technical sense that there are only finitely many open subgroups of any given index. Also every finite extension of <span class="math-container">$\mathbb{Q}_p$</span> is solvable. All in all it is vague -- but fair -- to say that the fields <span class="math-container">$\mathbb{Q}_p$</span> are &quot;much closer to being algebraically closed&quot; than the field <span class="math-container">$\mathbb{Q}$</span> but &quot;not as close to being algebraically closed&quot; as the finite field <span class="math-container">$\mathbb{F}_p$</span>. This can be made precise in various ways.</p> <p>If you are interested in the <span class="math-container">$p$</span>-adic numbers you should read intermediate level number theory texts on local fields. For instance <a href="http://alpha.math.uga.edu/%7Epete/MATH8410.html" rel="nofollow noreferrer">this page</a> collects notes from a course on (in part) local fields that I taught last spring. I also highly recommend books called <em>Local Fields</em>: one by Cassels and one by Serre.</p> <p><b>Added</b>: see in particular Sections 5.4 and 5.5 <a href="http://alpha.math.uga.edu/%7Epete/8410Chapter5.pdf" rel="nofollow noreferrer">of this set of notes</a> for information about the number of <span class="math-container">$n$</span>th power classes and the number of field extensions of a given degree.</p>
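<p>To make the statement about squares concrete computationally, here is a small sketch of my own (Python 3.8+ for the modular inverse): for an odd prime $p$ and an integer $a$ prime to $p$, $a$ is a square in $\mathbb{Q}_p$ exactly when it is a square mod $p$ (Hensel's Lemma), and a Newton step then lifts the square root modulo higher and higher powers of $p$.</p> <pre><code>def is_square_mod_p(a, p):
    """Euler's criterion, for an odd prime p not dividing a."""
    return pow(a, (p - 1) // 2, p) == 1

def padic_sqrt(a, p, k):
    """Return x with x*x == a (mod p**k); assumes odd p, p not dividing a,
    and a a square mod p (otherwise there is no square root in Q_p)."""
    x = next(t for t in range(1, p) if (t * t - a) % p == 0)     # root mod p
    pe = p
    for _ in range(k - 1):
        pe *= p
        x = (x - (x * x - a) * pow(2 * x, -1, pe)) % pe          # Hensel/Newton step
    return x

# which of 2, 3, 2011 are squares in Q_p, for a few small odd primes p?
for p in (3, 5, 7, 11, 13):
    print(p, [a for a in (2, 3, 2011) if is_square_mod_p(a, p)])

print(padic_sqrt(2, 7, 5) ** 2 % 7 ** 5)   # prints 2: a square root of 2 modulo 7^5
</code></pre>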
<p>Suppose that $K$ is an algebraic number field, i.e. a finite extension of $\mathbb Q$. It has a ring of integers $\mathcal O_K$ (the integral closure of $\mathbb Z$ in $K$). Suppose that there is a prime ideal $\wp \subset \mathcal O_K$ such that:</p> <ol> <li><p>$p \in \wp,$ but $p \not\in \wp^2$.</p></li> <li><p>The order of $\mathcal O_K/\wp = p.$ (Note that (1) implies in particular that $\wp \cap \mathbb Z = p \mathbb Z$, so that $\mathcal O_K/\wp$ is an extension of $\mathbb Z/p\mathbb Z$. We are now requiring that it in fact be the trivial extension.)</p></li> </ol> <p>Then the number field $K$ embeds into $\mathbb Q_p$. The converse also holds.</p> <p>So if you want to know whether you can solve the equation $f(x) = 0$ in $\mathbb Q_p$ (where $f(x)$ is some irreducible polynomial in $\mathbb Q[x]$), then set $K = \mathbb Q[x]/f(x)$ and apply this criterion. This is easiest to do when $f(x)$ has integral coefficients, and remains separable when reduced mod $p$ (something that you can check by computing the discriminant and seeing whether or not it is divisible by $p$), because in this case the criterion is equivalent to asking that $f(x)$ have a root mod $p$.</p> <p>Incidentally, there are many $f(x)$ that satisfy this criterion (because, among other things, the algebraic closure of $\mathbb Q$ in $\mathbb Q_p$ has infinite degree over $\mathbb Q$), but there are also many $f(x)$ that don't.</p>
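<p>In the easy case singled out in the previous paragraph (integral coefficients, $f$ still separable mod $p$), the criterion is mechanical to test. A small illustrative sketch (the polynomial and the primes are my own choices):</p> <pre><code>def has_root_mod_p(coeffs, p):
    """coeffs = [a_0, a_1, ..., a_n] encodes f(x) = a_0 + a_1*x + ... + a_n*x**n."""
    return any(sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p == 0
               for x in range(p))

# f(x) = x^3 - 2 has discriminant -108 = -(2^2)(3^3), so it stays separable mod any p &gt; 3;
# for such p, Q[x]/(x^3 - 2) embeds into Q_p exactly when x^3 = 2 is solvable mod p.
f = [-2, 0, 0, 1]
print([p for p in (5, 7, 11, 13, 17, 19) if has_root_mod_p(f, p)])
# e.g. 5 and 11 are in the list (3^3 = 27 = 2 mod 5), while 7 is not
</code></pre>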
linear-algebra
<p>What is the importance of eigenvalues/eigenvectors? </p>
<h3>Short Answer</h3> <p><em>Eigenvectors make understanding linear transformations easy</em>. They are the "axes" (directions) along which a linear transformation acts simply by "stretching/compressing" and/or "flipping"; eigenvalues give you the factors by which this stretching/compressing occurs. </p> <p>The more directions you have along which you understand the behavior of a linear transformation, the easier it is to understand the linear transformation; so you want to have as many linearly independent eigenvectors as possible associated to a single linear transformation.</p> <hr> <h3>Slightly Longer Answer</h3> <p>There are a lot of problems that can be modeled with linear transformations, and the eigenvectors give very simple solutions. For example, consider the system of linear differential equations \begin{align*} \frac{dx}{dt} &amp;= ax + by\\\ \frac{dy}{dt} &amp;= cx + dy. \end{align*} This kind of system arises when you describe, for example, the growth of the populations of two species that affect one another. For example, you might have that species $x$ is a predator on species $y$; the more $x$ you have, the fewer $y$ will be around to reproduce; but the fewer $y$ that are around, the less food there is for $x$, so fewer $x$s will reproduce; but then fewer $x$s are around so that takes pressure off $y$, which increases; but then there is more food for $x$, so $x$ increases; and so on and so forth. It also arises when you have certain physical phenomena, such as a particle in a moving fluid, where the velocity vector depends on the position along the fluid.</p> <p>Solving this system directly is complicated. But suppose that you could do a change of variable so that instead of working with $x$ and $y$, you could work with $z$ and $w$ (which depend linearly on $x$ and also $y$; that is, $z=\alpha x+\beta y$ for some constants $\alpha$ and $\beta$, and $w=\gamma x + \delta y$, for some constants $\gamma$ and $\delta$) and the system transformed into something like \begin{align*} \frac{dz}{dt} &amp;= \kappa z\\\ \frac{dw}{dt} &amp;= \lambda w \end{align*} that is, you can "decouple" the system, so that now you are dealing with two <em>independent</em> functions. Then solving this problem becomes rather easy: $z=Ae^{\kappa t}$, and $w=Be^{\lambda t}$. Then you can use the formulas for $z$ and $w$ to find expressions for $x$ and $y$.</p> <p>Can this be done? Well, it amounts <em>precisely</em> to finding two linearly independent eigenvectors for the matrix $\left(\begin{array}{cc}a &amp; b\\c &amp; d\end{array}\right)$! $z$ and $w$ correspond to the eigenvectors, and $\kappa$ and $\lambda$ to the eigenvalues. By taking an expression that "mixes" $x$ and $y$, and "decoupling it" into one that acts independently on two different functions, the problem becomes a lot easier. </p> <p>That is the essence of what one hopes to do with the eigenvectors and eigenvalues: "decouple" the ways in which the linear transformation acts into a number of independent actions along separate "directions", that can be dealt with independently. A lot of problems come down to figuring out these "lines of independent action", and understanding them can really help you figure out what the matrix/linear transformation is "really" doing. </p>
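<p>Here is a minimal numerical sketch of that decoupling (the matrix entries and initial condition are arbitrary illustrative choices; scipy is used only as an independent check):</p> <pre><code>import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])              # the coefficient matrix (a, b; c, d)
eigvals, T = np.linalg.eig(A)           # columns of T are eigenvectors

x0 = np.array([1.0, 0.0])               # initial values (x(0), y(0))
t = 0.5

# decoupled coordinates: z(0) = T^{-1} x0, and each z_i evolves as z_i(0) * exp(lambda_i * t)
z0 = np.linalg.solve(T, x0)
x_t = T @ (z0 * np.exp(eigvals * t))

print(x_t)
print(expm(A * t) @ x0)                 # same result via the matrix exponential
</code></pre>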
<h3>A short explanation</h3> <p>Consider a matrix <span class="math-container">$A$</span>, for example one representing a physical transformation (e.g. a rotation). When this matrix is used to transform a given vector <span class="math-container">$x$</span> the result is <span class="math-container">$y = A x$</span>.</p> <p>Now an interesting question is</p> <blockquote> <p>Are there any vectors <span class="math-container">$x$</span> which do not change their direction under this transformation, but only have their magnitude scaled by a scalar <span class="math-container">$ \lambda $</span>?</p> </blockquote> <p>Such a question takes the form <span class="math-container">$$A x = \lambda x $$</span></p> <p>So, such special <span class="math-container">$x$</span> are called <em>eigenvectors</em>, and the change in magnitude is given by the <em>eigenvalue</em> <span class="math-container">$ \lambda $</span>.</p>
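<p>A two-line numerical check of that defining equation (the example matrix is an arbitrary choice):</p> <pre><code>import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)          # eigenvectors are the columns of eigvecs
for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ v, lam * v))       # True: direction kept, magnitude scaled by lam
</code></pre>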
logic
<p>I expect that nearly everyone here at stackexchange is by now familiar with <a href="http://www.google.com/search?q=Cheryls+birthday" rel="nofollow noreferrer">Cheryl's birthday problem</a>, which spawned many variant problems, including a <a href="http://web.archive.org/web/20150509214648/https://plus.google.com/+TimothyGowers0/posts/Ak3Fnw8dvBk" rel="nofollow noreferrer">transfinite version</a> due to Timothy Gowers.</p> <p>In response, I have made my own transfinite epistemic logic puzzle, <a href="http://jdh.hamkins.org/transfinite-epistemic-logic-puzzle-challenge/" rel="nofollow noreferrer">Cheryl's rational gift</a>, which appears below. Can you solve it?</p> <p><img src="https://i.sstatic.net/Lq7C0.png" alt="" /></p> <em> <p><strong><em>Cheryl</em>  </strong> Welcome, Albert and Bernard, to my birthday party, and I thank you for your gifts. To return the favor, as you entered my party, I privately made known to each of you a rational number of the form <span class="math-container">$$n-\frac{1}{2^k}-\frac{1}{2^{k+r}},$$</span> where <span class="math-container">$n$</span> and <span class="math-container">$k$</span> are positive integers and <span class="math-container">$r$</span> is a non-negative integer; please consider it my gift to each of you. Your numbers are different from each other, and you have received no other information about these numbers or anyone's knowledge about them beyond what I am now telling you. Let me ask, who of you has the larger number?</p> <p><strong><em>Albert</em>   </strong> I don't know.</p> <p><strong><em>Bernard</em>   </strong> Neither do I.</p> <p><strong><em>Albert</em>   </strong> Indeed, I still do not know.</p> <p><em><strong>Bernard</strong></em>    And still neither do I.</p> <p><em><strong>Cheryl</strong></em>    Well, it is no use to continue that way! I can tell you that no matter how long you continue that back-and-forth, you shall not come to know who has the larger number.</p> <p><em><strong>Albert</strong></em>    What interesting new information! But alas, I still do not know whose number is larger.</p> <p><em><strong>Bernard</strong></em>    And still also I do not know.</p> <p><em><strong>Albert</strong></em>    I continue not to know.</p> <p><em><strong>Bernard</strong></em>    I regret that I also do not know.</p> <p><em><strong>Cheryl</strong></em>    Let me say once again that no matter how long you continue truthfully to tell each other in succession that you do not yet know, you will not know who has the larger number.</p> <p><em><strong>Albert</strong></em>    Well, thank you very much for saving us from that tiresome trouble! But unfortunately, I still do not know who has the larger number.</p> <p><em><strong>Bernard</strong></em>    And also I remain in ignorance. However shall we come to know?</p> <p><em><strong>Cheryl</strong></em>    Well, in fact, no matter how long we three continue from now in the pattern we have followed so far---namely, the pattern in which you two state back-and-forth that still you do not yet know whose number is larger and then I tell you yet again that no further amount of that back-and-forth will enable you to know---then still after as much repetition of that pattern as we can stand, you will not know whose number is larger! Furthermore, I could make that same statement a second time, even after now that I have said it to you once, and it would still be true. And a third and fourth as well! 
Indeed, I could make that same pronouncement a hundred times altogether in succession (counting my first time as amongst the one hundred), and it would be true every time. And furthermore, even after my having said it altogether one hundred times in succession, you would still not know who has the larger number!</p> <p><em><strong>Albert</strong></em>    Such powerful new information! But I am very sorry to say that still I do not know whose number is larger.</p> <p><em><strong>Bernard</strong></em>    And also I do not know.</p> <p><em><strong>Albert</strong></em>    But wait! It suddenly comes upon me after Bernard's last remark, that finally I know who has the larger number!</p> <p><em><strong>Bernard</strong> </em>   Really? In that case, then I also know, and what is more, I know both of our numbers!</p> <p><em><strong>Albert</strong></em>    Well, now I also know them!</p> </em> <hr /> <p><strong>Question.</strong> What numbers did Cheryl give to Albert and Bernard?</p> <p>There are many remarks and solution proposals posted already as comments on <a href="http://jdh.hamkins.org/transfinite-epistemic-logic-puzzle-challenge/" rel="nofollow noreferrer">my blog</a>, but the solutions proposed there do not all agree with one another.</p> <p>I shall post a solution there in a few days time, but I thought people here at Math.SE might enjoy the puzzle. I apologize if some find this question to be an inappropriate use of this site.</p> <p>You may also want to see my earlier <a href="http://jdh.hamkins.org/now-i-know" rel="nofollow noreferrer">transfinite epistemic logic puzzles</a>, with solutions there.</p>
<p>We can map $n - 2^{-k} - 2^{-(k+r)}$ onto the ordinal $\omega^2(n-1) + \omega(k-1) + r$; this preserves order. So Cheryl effectively says "I have given you both distinct ordinals $a$ and $b$, both less than $\omega^3$. Which is bigger?"</p> <p>Albert says "$a \geq 1$", Bernard says "$b \geq 2$", Albert says "$a \geq 3$", Bernard says "$b \geq 4$".</p> <p>Cheryl says "Actually $a$ and $b$ are both $\geq \omega$".</p> <p>Albert says "$a \geq \omega + 1$", Bernard says "$b \geq \omega + 2$", Albert says "$a \geq \omega + 3$", Bernard says "$b \geq \omega + 4$".</p> <p>Eventually Cheryl says "Actually both are $\geq \omega^2 100 + 1$".</p> <p>Albert says "$a \geq \omega^2 100 + 2$".</p> <p>Bernard says "$b \geq \omega^2 100 + 3$".</p> <p>Albert says "Aha! In that case I know the answer, which tells you that $a &lt; \omega^2 100 + 4$".</p> <p>From that, Bernard can work out Albert's number. How does he know whether Albert's number is $\omega^2 100 + 2$ or $\omega^2 100 + 3$? It can only be that $\omega^2 100 + 3$ is Bernard's number, so he knows Albert's must be $\omega^2 100 + 2$.</p> <p>Translating back into the language of the original question, this means that Albert's number is $101 - 2^{-1}- 2^{-3} = 100.375$ and Bernard's is $101 - 2^{-1}- 2^{-4} = 100.4375$.</p>
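<p>A quick mechanical check of the translation in the last paragraph (the helper names are mine, purely for illustration):</p> <pre><code>def value(n, k, r):
    return n - 2**(-k) - 2**(-(k + r))

def ordinal_code(n, k, r):
    # omega^2*(n-1) + omega*(k-1) + r, encoded as a lexicographically ordered triple
    return (n - 1, k - 1, r)

albert  = (101, 1, 2)
bernard = (101, 1, 3)
print(value(*albert), value(*bernard))                  # 100.375 100.4375
print((ordinal_code(*albert) &lt; ordinal_code(*bernard)) ==
      (value(*albert) &lt; value(*bernard)))               # True: the coding preserves order
</code></pre>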
<p>Every time Albert and Bernard go back and forth they are eliminating a new ‘r’, and every time she tells them that no matter how many times they go back and forth they won’t get it, they are eliminating a new ‘k.’</p> <p>When Cheryl makes her big rant, she is effectively telling them that for all r and k when n = 1, they won’t find it, and furthermore that the first 100 times she makes that statement they won’t find it.</p> <p>She also says that after saying it 100 times, neither will know which number is larger. This means that neither Albert nor Bernard has the number 100. (something a couple other solutions missed I think.) Albert says he does not know, ruling out him having 100 + 1/4, and then Bernard says he still does not know, ruling out Bernard having 100 + 3/8 and 100 + 1/4.</p> <p>When Albert says that he suddenly knows, he can either have 100 + 3/8 or 100 + 7/16. The only way for Bernard to know both of their numbers is if he has 100 + 7/16, which gives us the answer:</p> <p>Albert: 100 + 3/8; Bernard: 100 + 7/16</p>
number-theory
<p>I've recently been reading about the Millennium Prize problems, specifically the Riemann Hypothesis. I'm not near qualified to even fully grasp the problem, but seeing the hypothesis and the other problems I wonder: what practical use will a solution have?</p> <p>Many researchers have spent a lot of time on it, trying to prove it, but why is it important to solve the problem?</p> <p>I've tried relating the situation to problems in my field. For instance, solving the <span class="math-container">$P \ vs. NP$</span> problem has important implications if <span class="math-container">$P = NP$</span> is shown, and important implications if <span class="math-container">$P \neq NP$</span> is shown. For instance, there would be implications regarding the robustness or security of cryptographic protocols and algorithms. However, it's hard to say WHY the Riemann Hypothesis is important.</p> <p>Given that the Poincaré Conjecture has been resolved, perhaps a hint about what to expect if and when the Riemann Hypothesis is resolved could be obtained by seeing what a proof of the Poincaré Conjecture has led to.</p>
<p>Proving the Riemann Hypothesis will get you tenure, pretty much anywhere you want it. </p>
<p>The Millennium problems are not necessarily problems whose solution will lead to curing cancer. These are problems in mathematics and were chosen for their importance in mathematics rather than for their potential in applications.</p> <p>There are plenty of important open problems in mathematics, and the Clay Institute had to narrow it down to seven. Whatever the reasons may be, it is clear such a short list is incomplete and does not claim to be a comprehensive list of the most important problems to solve. However, each of the problems chosen is extremely central, important, interesting, and hard. Some of these problems have direct consequences, for instance the Riemann hypothesis. There are many (many many) theorems in number theory that go like &quot;if the Riemann hypothesis is true, then blah blah&quot;, so knowing it is true will immediately validate the consequences in these theorems as true.</p> <p>In contrast, a solution to some of the other Millennium problems is (highly likely) not going to lead to anything dramatic. For instance, the <span class="math-container">$P$</span> vs. <span class="math-container">$NP$</span> problem. I personally doubt that <span class="math-container">$P=NP$</span>. The reason it's an important question is not because we don't (philosophically) already know the answer, but rather that we don't have a bloody clue how to prove it. It means that there are fundamental issues in computability (which is a hell of an important subject these days) that we just don't understand. Proving <span class="math-container">$P \ne NP$</span> will be important not for the result but for the techniques that will be used. (Of course, in the unlikely event that <span class="math-container">$P=NP$</span>, enormous consequences will follow. But that is about as likely as it is that the Hitchhiker's Guide to the Galaxy is based on true events.)</p> <p>The Poincaré conjecture is an extremely basic problem about three-dimensional space. I think three-dimensional space is very important, so if we can't answer a very fundamental question about it, then we don't understand it well. I'm not an expert on Perelman's solution, nor the field to which it belongs, so I can't tell what consequences his techniques have for better understanding three-dimensional space, but I'm sure there are.</p>
matrices
<p>I know that matrix multiplication in general is not commutative. So, in general:</p> <p>$A, B \in \mathbb{R}^{n \times n}: A \cdot B \neq B \cdot A$</p> <p>But for some matrices, the equality $A \cdot B = B \cdot A$ does hold, e.g. $A =$ identity or $A =$ null matrix, $\forall B \in \mathbb{R}^{n \times n}$.</p> <p>I think I remember that a group of special matrices (was it $O(n)$, the <a href="http://en.wikipedia.org/wiki/Orthogonal_group">group of orthogonal matrices</a>?) exists for which matrix multiplication is commutative.</p> <p><strong>For which matrices $A, B \in \mathbb{R}^{n \times n}$ is $A\cdot B = B \cdot A$?</strong></p>
<p>Two matrices that are simultaneously diagonalizable always commute.</p> <p>Proof: Let $A$, $B$ be two such $n \times n$ matrices over a base field $\mathbb K$, and $v_1, \ldots, v_n$ a basis of eigenvectors for $A$. Since $A$ and $B$ are simultaneously diagonalizable, such a basis exists and is also a basis of eigenvectors for $B$. Denote the corresponding eigenvalues of $A$ by $\lambda_1,\ldots,\lambda_n$ and those of $B$ by $\mu_1,\ldots,\mu_n$. </p> <p>Then it is known that there is a matrix $T$ whose columns are $v_1,\ldots,v_n$ such that $T^{-1} A T =: D_A$ and $T^{-1} B T =: D_B$ are diagonal matrices. Since $D_A$ and $D_B$ trivially commute (explicit calculation shows this), we have $$AB = T D_A T^{-1} T D_B T^{-1} = T D_A D_B T^{-1} =T D_B D_A T^{-1}= T D_B T^{-1} T D_A T^{-1} = BA.$$</p>
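<p>A quick numerical illustration of the mechanism in the proof (the random $T$ and the diagonal matrices below are arbitrary test data):</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(4, 4))                 # generically invertible; its columns play the role of v_1, ..., v_n
D_A = np.diag(rng.normal(size=4))
D_B = np.diag(rng.normal(size=4))

A = T @ D_A @ np.linalg.inv(T)              # A and B are diagonalized by the same T
B = T @ D_B @ np.linalg.inv(T)
print(np.allclose(A @ B, B @ A))            # True
</code></pre>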
<p>The only matrices that commute with <em>all</em> other matrices are the multiples of the identity.</p>
logic
<p>The way I used to understand it was that circular reasoning occurs when a proof contains its thesis within its assumptions. Then all such a proof "proves" is that this particular statement entails itself; which is trivial since any statement entails itself.</p> <p>But I witnessed a conversation that made me think I'm not getting this at all.</p> <p>In short, Bob accused Alice of circular reasoning. But Alice responded in a way that perplexed me:</p> <blockquote> <p><em>Of course</em> my proof contains its thesis within its assumptions. Each and every proof must be based on axioms, which are assumptions that are not to be proved. Thus each set of axioms implicitly contains all theses that can be proven from this set of axioms. As we know, each theorem in mathematics and logic is little more than a tautology: so is mine.</p> </blockquote> <p>I'm not sure what to think. On the one hand, Alice's reasoning seems correct. I, at least, can't find any error there. On the other hand, this entails that... every valid proof must be circular! Which is absurd.</p> <p>What is a circular proof? And what is wrong with the reasoning above?</p>
<blockquote> <p>Of course my proof contains its thesis within its assumptions. Each and every proof must be based on axioms, which are assumptions that are not to be proved. </p> </blockquote> <p>Hold it right there, Alice. These <em>specific</em> axioms are to be accepted without proof but nothing else is. For anything that is true that is <em>not</em> one of these axioms, the role of proof must be to demonstrate that such a truth <em>can</em> be derived from these axioms and <em>how</em> it would be so derived.</p> <blockquote> <p>Thus each set of axioms implicitly contains all theses that can be proven from this set of axioms.</p> </blockquote> <p><em>Implicitly</em>. But the role of a proof is to make the implicit explicit. I can claim that Fermat's last theorem is true. That is a true statement. But merely <em>claiming</em> it is not the same as a proof. I can claim the axioms of mathematics imply Fermat's last theorem and that would be true. But that's still not a proof. To prove it, I must <em>demonstrate</em> <strong><em>how</em></strong> the axioms imply it. And in doing so I cannot base any of the implications in my demonstration upon the knowledge that I know it to be true.</p> <blockquote> <p>As we know, each theorem in mathematics and logic is little more than a tautology:</p> </blockquote> <p>That's not actually what a tautology is. But I'll assume you mean a true statement.</p> <blockquote> <p>so is mine.</p> </blockquote> <p>No one cares if your statement is true. We care if you can demonstrate <em>how</em> it is true. You did not do that.</p>
<p>All reasoning (whether formal or informal, mathematical, scientific, everyday life, etc.) needs to satisfy two basic criteria in order to be considered good (sound) reasoning:</p> <ol> <li><p>The steps in the argument need to be logical (valid: the conclusion follows from the premises)</p></li> <li><p>The assumptions (premises) need to be acceptable (true or at least agreed upon by the parties involved in the debate within which the argument is offered)</p></li> </ol> <p>Now, what Alice is pointing out is that in the domain of deductive reasoning (which includes mathematical reasoning), the information contained in the conclusion is already contained in the premises ... in a way, the conclusion thus 'merely' pulls this out ... Alice thus seems to be saying: "all mathematical reasoning is circular ... so why attack my argument for being circular?"</p> <p>However, this is not a good defense against the charge of circular reasoning. First of all, there is a big difference between 'pulling', say, some complicated theorem of arithmetic out of the Peano Axioms on the one hand, and simply taking that very theorem as an assumption on the other: </p> <p>In the former scenario, contrary to Alice's claim, we really do not say that circular reasoning is taking place: as long as the assumptions of the argument are nothing more than the agreed upon Peano Axioms, and as long as each inference leading up to the theorem is logically valid, then such an argument satisfies the two aforementioned criteria, and is therefore perfectly acceptable.</p> <p>In the latter case, however, circular reasoning <em>is</em> taking place: if all we agreed upon were the Peano axioms, but if the argument uses the conclusion (which is not part of those axioms) as an assumption, then that argument violates the second criterion. It can be said to 'beg the question' ... as it 'begs' the answer to the very question (is the theorem true?) we had in the first place.</p>
combinatorics
<p>A common way to define a group is as the group of structure-preserving transformations on some structured set. For example, the symmetric group on a set $X$ preserves no structure: or, in other words, it preserves only the structure of being a set. When $X$ is finite, what structure can the <em>alternating</em> group be said to preserve?</p> <p>As a way of making the question precise, is there a natural definition of a category $C$ equipped with a faithful functor to $\text{FinSet}$ such that the skeleton of the underlying groupoid of $C$ is the groupoid with objects $X_n$ such that $\text{Aut}(X_n) \simeq A_n$? </p> <p><strong>Edit:</strong> I've been looking for a purely combinatorial answer, but upon reflection a geometric answer might be more appropriate. If someone can provide a convincing argument why a geometric answer is more natural than a combinatorial answer I will be happy to accept that answer (or Omar's answer).</p>
<p>The alternating group preserves orientation, more or less by definition. I guess you can take $C$ to be the category of simplices together with an orientation. I.e., the objects of $C$ are affinely independent sets of points in some $\mathbb R^n$ together with an orientation and the morphisms are affine transformations taking the vertices of one simplex to the vertices of another. Of course this is cheating since if you actually try to define orientation you'll probably wind up with something like "coset of the alternating group" as the definition. On the other hand, some people find orientations of simplices to be a geometric concept, so this might conceivably be reasonable to you.</p>
<p>$A_n$ is the symmetry group of the chamber of the Tits building of $\mathbb{P}GL_n$. The shape of this chamber is independent of what coefficients you insert into the group scheme $\mathbb{P}GL_n$, just the number and configuration of chambers changes. If you insert the finite fields $\mathbb{F}_p$ then you get finite simplicial complexes as buildings, and the smaller $p$ gets, the fewer chambers you have. You can analyse and even reconstruct the group in terms of its action on this building. The natural limit case would be just having one chamber, and the symmetry group of this chamber - the Weyl group - is $A_n$. This is how Tits first thought that there is a limit case to the sequence of finite fields - which he called the field with one element.</p> <p>Maybe somewhat more algebraically you can think in terms of Lie algebras - as I said, the shape of the chamber does not change with different coefficients. The reason is that it is determined just by the Lie algebra of the group and thus describable by a Dynkin diagram or by a root system (ok, geometry creeps in again). The Wikipedia page about <a href="http://en.wikipedia.org/wiki/Weyl_group" rel="nofollow noreferrer">Weyl groups</a> tells you that the Weyl group of the Lie algebra $sl_n$ is $S_n$. I have no experience with Lie algebras, but maybe you can get $A_n$ the same way.</p> <p>If you can get hold of it, you can read Tits' original account; it's nice to read (but geometric), see the reference on this <a href="http://en.wikipedia.org/wiki/Field_with_one_element" rel="nofollow noreferrer">Wikipedia page</a>. </p> <p>Edit: Aha, I found a link now: Lieven Le Bruyn's <a href="http://cage.ugent.be/~kthas/Fun/index.php/kapranov-smirnov-on-f_un.html" rel="nofollow noreferrer">F_un</a> is back online. You can look there under "papers" and find Tits' article. And, since you are picking up the determinant ideas, you should definitely take a look at Kapranov/Smirnov!</p>
differentiation
<p>I think I read somewhere that Newton tried to find derivatives of basic functions like $x^2$ before formulating systematic calculus; how did/would he do it?</p>
<p>The earliest formulations of something that looks like calculus were not in terms of <em>functions</em> but of <em>curves</em> given by algebraic relations. So if we have $$ y=x^2 $$ and $(y+p,x+q)$ is another point on the curve, we would also have $$ (y+p) = (x+q)^2 $$ and if we multiply these equations out and subtract we get $$ p = 2qx + q^2 $$</p> <p>Now what happens in the earliest sources is that it is simply <em>postulated</em>, without any backing by theoretical definitions or limits that <strong>when $p$ and $q$ are small, we can remove their higher powers</strong>, so ignoring the $q^2$ term yields us $$ p = 2qx \qquad\text{or, in other words,}\qquad p:q = 2x:1 $$ which can be used to draw a tangent.</p> <p>The justification for this procedure was initially just that it worked in practice, but the unexplained ignoring of terms like $q^2$ drew quite a lot of contemporary criticism.</p> <p>This was in the generation before Newton (chiefly Fermat and Descartes). They could produce painstaking <em>geometric</em> proofs in the Euclidean tradition (which was the gold standard for proofs in those days) that <em>for each of the actual curves they considered</em> what came out of this procedure was the right result -- but they didn't have the definitions and machinery to explain or prove rigorously why the procedure <em>always</em> works.</p> <p>Newton brought in the new idea of a quantity $x$ that varied with <em>time</em> and its time rate of change $\dot x$, but he was still building on the "ignore higher powers of the increments" method. But he did it masterfully and was able to get more out of it than his predecessors did.</p>
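<p>For concreteness, the same recipe run on the cubic: if $y=x^3$ and $(y+p,x+q)$ is another point on the curve, then multiplying out and subtracting gives $$ p = 3x^2q+3xq^2+q^3, $$ and discarding the higher powers of $q$ leaves $p:q = 3x^2:1$, i.e. the familiar tangent slope $3x^2$.</p>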
<p>Some of these were likely found through a combination of intuition and geometric reasoning.</p> <p>Consider $y = x^2$. Notice that the square of side length $x$ has area $y$. Suppose that we change the side length by $dx$. Then what is the resulting change $dy$ in the area?</p> <p><a href="https://i.sstatic.net/xJPki.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xJPki.png" alt="enter image description here"></a></p> <p>I apologize for the crudely drawn picture, but notice that the change in area that we get is</p> <p>$$dy = x\,dx+x\,dx+(dx)^2 = 2x\,dx+(dx)^2 \approx 2x\,dx$$</p> <p>for small enough $dx$ (i.e. the little black square is insignificantly small). And thus,</p> <p>$$\frac{dy}{dx} = \frac{2x\,dx}{dx} = 2x.$$</p> <p>Similar reasoning can be used to show that $\frac{d}{dx} (x^3) = 3x^2$ by considering the change in volume of a cube.</p> <p>EDIT: Figured it was worth mentioning that this same square technique can be used to intuitively see why $\frac{d}{dx}(f(x) \cdot g(x)) = f(x)g'(x)+f'(x)g(x)$.</p> <p>EDIT 2: After you have the product rule, it is easy to derive the power rule (over the natural numbers) using induction. The base case $\frac{d}{dx}(x^1) = 1$ is obvious. If we assume $\frac{d}{dx}(x^k) = kx^{k-1}$, then, $$\frac{d}{dx}(x^{k+1}) = \frac{d}{dx}(x^{k} \cdot x) = \frac{d}{dx}(x^{k})\cdot x + x^{k} \cdot \frac{d}{dx}(x)$$$$ = kx^{k-1}\cdot x+x^k\cdot 1 = kx^k+x^k = (k+1)x^k.$$</p>
differentiation
<p>Speaking about ALL differential equations, it is extremely rare to find analytical solutions. Further, simple differential equations made of basic functions usually tend to have ludicrously complicated solutions or be unsolvable. Is there some deeper reasoning behind why it is so rare to find solutions? Or is it just that every time we can solve a differential equation, it is just an algebraic coincidence?</p> <p>I reviewed the existence and uniqueness theorems for differential equations and did not find any insight. Nonetheless, perhaps the answer can be found among these?</p> <p>A huge thanks to anyone willing to help!</p> <p>Update: I believe I have come up with an answer to this odd problem. It is the bottom-voted one just because I posted it about a month after I started thinking about this question and all your input, but I have taken all the responses on this page into consideration. Thanks everyone!</p>
<p>Let's consider the following, very simple, differential equation: <span class="math-container">$f'(x) = g(x)$</span>, where <span class="math-container">$g(x)$</span> is some given function. The solution is, of course, <span class="math-container">$f(x) = \int g(x) dx$</span>, so for this specific equation the question you're asking reduces to the question of &quot;which simple functions have simple antiderivatives&quot;. Some famous examples (such as <span class="math-container">$g(x) = e^{-x^2}$</span>) show that even simple-looking expressions can have antiderivatives that can't be expressed in such a simple-looking way.</p> <p>There's a theorem of Liouville that puts the above into a precise setting: <a href="https://en.wikipedia.org/wiki/Liouville%27s_theorem_(differential_algebra)" rel="noreferrer">https://en.wikipedia.org/wiki/Liouville%27s_theorem_(differential_algebra)</a>. For more general differential equations you might be interested in differential Galois theory.</p>
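<p>For a concrete feel of that reduction, a quick sympy session (the library choice is mine) shows how even one-line integrands get pushed outside the elementary functions:</p> <pre><code>import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.exp(-x**2), x))   # sqrt(pi)*erf(x)/2 : not elementary
print(sp.integrate(sp.sin(x)/x, x))     # Si(x)             : again a new special function
</code></pre>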
<p>Compare Differential Equations to Polynomial Equations. Polynomial Equations are, arguably, much, <strong>much</strong> more simple. The solution space is smaller, and the fundamental operations that build the equations (multiplication, addition and subtraction) are extremely simple and well understood. Yet (and we can even prove this!) <strong>there are Polynomial Equations for which we cannot find an analytical solution</strong>. In this way - I don't think it's any surprise that we cannot find nice analytical solutions to almost all Differential Equations. It would be a shock if we could!</p> <hr /> <p><strong>Edit</strong>: in fact, users @Winther and @mlk noted that Polynomial Equations are actually &quot;embedded&quot; into a very small subsection of Differential Equations. Namely, Linear Homogeneous Constant Coefficient Ordinary Differential Equations, which take the form</p> <p><span class="math-container">$${c_ny^{(n)}(x) + c_{n-1}y^{(n-1)}(x) + ... + c_1y^{(1)}(x) + c_0y(x) = 0}$$</span></p> <p>The solution to such an ODE in fact will utilise the roots of the polynomial:</p> <p><span class="math-container">$${c_nx^n + c_{n-1}x^{n-1} + ... + c_1x + c_0 = 0}$$</span></p> <p>The point to make is that Differential Equations of this form are clearly just a <em>teeny tiny</em> small subsection of all possible Differential Equations - proving that both the solution space of Differential Equations is <em>&quot;much, <strong>much</strong> larger&quot;</em> than Polynomial Equations and already, even for such a small subsection - we begin to struggle (since any Polynomial Equation we cannot analytically solve will correspond to an ODE that we are forced to either (a) approximate the root and use it or (b) leave the root in symbolic form!)</p> <hr /> <p>Another thing to note is that solving equations in Mathematics is, in general, <strong>not</strong> a nice and easy mechanical process. The majority of equations we can solve usually do require methods to be built based on exploiting some beautiful, nifty trick. Going back to Polynomial Equations - the Quadratic Formula comes from completing the square! Completing the square is just a nifty trick, and by using it in a general case we built a formula. Similar things happen in Differential Equations - you can find a solution using a nice nifty trick, and then apply this trick to some general case. It's not as though these methods or formulas come from nowhere - it's not an easy process!</p> <p>The last thing to mention in regards specifically to Differential Equations - as Mathematicians, we only deal with a very small subset of all possible Analytical Functions on a regular basis. <span class="math-container">${\sin(x),\cos(x),e^x,x^2}$</span>... all nice Analytical Functions that we have given symbols for. But this is only a small list! There will be an <strong>almighty infinite</strong> number of possible Analytical Functions out there - so it's again no surprise that the solution to a Differential Equation may not be able to be rewritten nicely in terms of our small, pathetic list.</p>
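<p>Here is a tiny sketch of that embedding (the coefficients are an arbitrary example): the solutions of a constant-coefficient linear ODE are built from exponentials $e^{rt}$ of the roots $r$ of its characteristic polynomial (with extra polynomial factors when roots repeat), so solving the ODE is at least as hard as solving the polynomial.</p> <pre><code>import numpy as np

# y'' - 3y' + 2y = 0  has characteristic polynomial  x^2 - 3x + 2
char_poly = [1.0, -3.0, 2.0]        # coefficients, highest degree first, as numpy expects
print(np.roots(char_poly))          # roots 2 and 1: the general solution is C1*exp(2t) + C2*exp(t)
</code></pre>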
differentiation
<p>Consider the floor function:</p> <p>$$f(x) = \lfloor x \rfloor$$</p> <p>The integral of $f$ from $0$ to $x$ is:</p> <p>$$\int_0^x f(t)\, dt = x\lfloor x \rfloor - \frac {\lfloor x \rfloor^2 + \lfloor x \rfloor} 2$$</p> <p>This <em>should</em> be an antiderivative of floor, right?</p> <p>Nope! If you take the derivative of the integral you find that sharp corners cause the derivative to not exist.</p> <p>So then this would mean that the integral of floor is not an antiderivative, right?</p> <p>Therefore, I have found a case where the antiderivative does not equal the indefinite integral.</p> <p>In that case my logic must be flawed, as that violates the first fundamental theorem of calculus.</p> <blockquote> <p>Where is the mistake in my logic and why does it appear to disprove the first fundamental theorem's relationship between integral and derivative?</p> </blockquote> <p>This isn't part of the above question per se, but I noticed, interestingly enough, that the derivative of the integral above is:</p> <p>$$\lfloor x \rfloor \frac {x - \lfloor x \rfloor}{x - \lfloor x \rfloor}$$</p> <p>I wonder what sort of properties would be altered within integration/differentiation if one were to IDK... redefine the derivative by canceling the terms in that fraction? Or for that matter, canceling all fractions of that nature?</p>
<p>The fundamental theorem of calculus has a crucial hypothesis: if $$F(x)=\int_{a}^x f(t)\,dt,$$ then $F'(x_0)=f(x_0)$ wherever $f$ is continuous; here we are assuming $f$ is continuous at the point $x_0$. The floor function is very much not continuous at the integers. When you see theorems, it is very important that you check what the hypotheses are. </p>
<p>The first fundamental theorem of calculus is stated as follows:</p> <blockquote> <p>For any continuous function $f:[a,b]\rightarrow \mathbb R$ the function $F(x)=\int_{a}^x f(t)\,dt$ has $F'(x)=f(x)$ for all $x\in (a,b)$.</p> </blockquote> <p>Notice that $f(x)=\lfloor x\rfloor$ is not continuous at the integers, so this theorem says nothing about those points. Note that $F'$ and $f$ <em>do</em> agree at points where $f$ is continuous.</p> <p>An interesting thing to note is that there is another version of the first fundamental theorem of calculus called <a href="https://en.wikipedia.org/wiki/Lebesgue_differentiation_theorem">the Lebesgue differentiation theorem</a> which loosens the restriction on $f$, but only says that $F'(x)=f(x)$ almost everywhere. It relies on measure theory to state, so I won't reproduce it here, but it's worth noting that one has a trade-off between conditions on $f$ and results for $F$.</p>
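<p>A quick numerical sanity check of the point both answers make (my own sketch; $F$ below is the closed form quoted in the question):</p> <pre><code>import math

def F(x):
    """Integral of floor(t) from 0 to x, via the closed form in the question."""
    n = math.floor(x)
    return n * x - (n * n + n) / 2

def diff_quotient(x, h=1e-6):
    return (F(x + h) - F(x - h)) / (2 * h)

print(diff_quotient(2.5))   # ~2.0 = floor(2.5): f is continuous there, so F' = f
print(diff_quotient(2.0))   # ~1.5: the one-sided slopes are 1 and 2, so F'(2) does not exist
</code></pre>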
probability
<p>Consider a population of nodes arranged in a triangular configuration as shown in the figure below, where each level $k$ has $k$ nodes. Each node, except the ones in the last level, is a parent node to two child nodes. Each node in levels $2$ and below has $1$ parent node if it is at the edge, and $2$ parent nodes otherwise.</p> <p>The single node in level $1$ is infected (red). With some probability $p_0$, it does not infect either of its child nodes in level $2$. With some probability $p_1$, it infects exactly one of its child nodes, with equal probability. With the remaining probability $p_2=1-p_0-p_1$, it infects both of its child nodes.</p> <p>Each infected node in level $2$ then acts in a similar manner on its two child nodes in level $3$, and so on down the levels. It makes <em>no</em> difference whether a node is infected by one or two parent nodes - it's still just infected.</p> <p>The figure below shows one possibility of how the disease may spread up to level $6$.</p> <p><a href="https://i.sstatic.net/NMxL0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NMxL0.png" alt="One possible spread of the disease up to level $6$."></a></p> <p>The question is: <strong>what is the expected number of infected nodes at level $k$?</strong></p> <p>Simulations suggest that this is (at least asymptotically) linear in $k$, i.e.,</p> <p>$$ \mathbb{E}(\text{number of infected nodes in level } k) = \alpha k $$</p> <p>where $\alpha = f(p_0, p_1,p_2)$.</p> <hr> <p>This question arises out of a practical scenario in some research I'm doing. Unfortunately, the mathematics involved is beyond my current knowledge, so I'm kindly asking for your help. Pointers to relevant references are also appreciated. </p> <p>I asked a <a href="https://math.stackexchange.com/questions/1500829/a-disease-spreading-through-a-triangular-population">different version</a> of this question some time ago, which did not have the possibility of a node not infecting either of its child nodes. It now turns out that in the system I'm looking at, the probability of this happening is not negligible.</p>
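<p>A Monte Carlo sketch along these lines (my own reconstruction, with arbitrary parameter values; node $i$ at level $k$ has children $i$ and $i+1$ at level $k+1$) reproduces the roughly linear growth:</p> <pre><code>import random

def simulate(levels, p0, p1, trials=20000):
    p2 = 1 - p0 - p1
    totals = [0.0] * (levels + 1)
    for _ in range(trials):
        infected = {0}                             # the single node at level 1
        totals[1] += 1
        for k in range(1, levels):
            nxt = set()
            for pos in infected:
                u = random.random()
                if u &lt; p2:                         # infect both children
                    nxt.update((pos, pos + 1))
                elif u &lt; p2 + p1:                  # infect exactly one child, chosen uniformly
                    nxt.add(pos + random.randint(0, 1))
            infected = nxt
            totals[k + 1] += len(infected)
    return [t / trials for t in totals[1:]]        # estimates of the expectation at levels 1..levels

print(simulate(levels=30, p0=0.1, p1=0.3))         # grows roughly linearly with the level
</code></pre>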
<p><strong>Note:</strong> Of course, a most interesting approach would be to derive a generating function describing the probabilities of infected nodes at level $k$ with respect to the probabilities $p_0,p_1$ and $p_2$ and to derive an asymptotic estimation from it.</p> <p>This seems to be a rather tough job, currently out of reach for me and so this is a much more humble approach trying to obtain at least for small values of $k$ the expectation value $E(k)$ of the number of infected nodes at this level.</p> <p>Here I give the expectation value $E(k)$ for the number of infected nodes at level $k=1,2$ and propose an algorithm to derive the expectation values for greater values of $k$. Maybe someone with computer based assistance could use it to provide $E(k)$ for some larger values of $k$.</p> <blockquote> <p><strong>The family of graphs of infected nodes:</strong></p> <p>We focus at graphs containing infected nodes only which can be derived from one infected root node. The idea is to iteratively find at step $k$ a manageable representation of <em>all</em> graphs of this kind with diameter equal to the level $k$ based upon the graphs from step $k-1$.</p> <p><strong>Expectation value $E(k=1)$</strong></p> <p>We encode a left branch by $x$, a right branch by $y$ and a paired branch by $tx+ty$, which is marked with $t$ in order to differentiate it from single branching. No branching at all is encoded by $1$. So, starting from a root node denoted with $1$ we obtain four different graphs: \begin{array}{cccc} 1\qquad&amp;x\qquad&amp;y\qquad&amp;tx+ty\\ p_0\qquad&amp;\frac{1}{2}p_1\qquad&amp;\frac{1}{2}p_1\qquad&amp; p_2 \end{array}</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>Three of the graphs $x,y,tx+ty$ have diameter equal to $1$ which means they have <em>leaves</em> at level $k=1$.</p></li> <li><p>These three graphs are of interest for further generations of graphs with higher level. </p></li> <li><p>Let the polynomial $G(x,y,t)$ represent a graph. The number of terms $x^my^n$ in $G(x,y,t)$ with $m+n=k$ gives the number of nodes at level $k$.</p></li> <li><p>A node without successor nodes is weighted with $p_0$. We associate the weight $\frac{1}{2}p_1$ to $x$ resp. $y$ if they are not marked and the weight $p_2$ to $tx+ty$. </p></li> </ul> <blockquote> <p><strong>Description:</strong> The first generation of graphs corresponding to $k=1$ is obtained from a root node $1$ by <em>multiplication</em> with $(1|x|y|tx+ty)$ whereby the bar $|$ denotes alternatives. \begin{align*} 1(1|x|y|tx+ty)\qquad\rightarrow\qquad 1,x,y,tx+ty \end{align*}</p> <p>We obtain \begin{array}{c|c|c} &amp;&amp;\text{nodes at level }\\ \text{graph}&amp;\text{prob}&amp;k=1\\ \hline 1&amp;p_0&amp;0\\ x&amp;\frac{1}{2}p_1&amp;1\\ y&amp;\frac{1}{2}p_1&amp;1\\ tx+ty&amp;p_2&amp;2\\ \end{array}</p> <p>We conclude \begin{align*} E(1)&amp;=0\cdot p_0 +1\cdot\frac{1}{2}p_1 + 1\cdot\frac{1}{2}p_1+2\cdot p_2\\ &amp;=p_1+2p_2 \end{align*}</p> </blockquote> <p>$$ $$</p> <blockquote> <p><strong>Expectation value $E(k=2)$</strong></p> <p>For the next step $k=2$ we consider all graphs from the step before having diameter equal to $k-1=1$.</p> <p>These are $x,y,tx+ty$. Each of them generates graphs for the next generation by appending at nodes at level $k$ the subgraphs $1|x|y|tx+ty$. 
If a graph has $n$ nodes at level $k$ we get $4^n$ graphs to analyze for the next generation.</p> <p>But, we will see that due to symmetry we can identify graphs and be able to reduce roughly by a factor two the variety of different graphs.</p> </blockquote> <p><strong>Intermezzo: Symmetry</strong></p> <p>Note the contribution of graphs which are symmetrical with respect to $x$ and $y$ is the same. They both show the same probability of occurrence.</p> <p>Instead of considering the three graphs \begin{align*} x,y,tx+ty \end{align*} we can identify $x$ and $y$. We arrange the family $\mathcal{F}_1=\{x,y,tx+ty\}$ of graphs of level $k=1$ in two equivalence classes \begin{align*} [x],[tx+ty] \end{align*} and describe the family more compactly by their equivalence classes together with a multiplication factor giving the number of elements in that class. In order to uniquely describe the different equivalence classes it is convenient to also add the probability of occurrence of a representative in the description. We use a semicolon to separate the probability weight from the polynomial representation of the graph. \begin{align*} \mathcal{[F}_1]=\{2[x;\frac{1}{2}p_1],[tx+ty;p_2]\} \end{align*}</p> <blockquote> <p>The second generation of graphs corresponding to $k=2$ is obtained from $[\mathcal{F}_1]$ via selecting a representative from each equivalence class and each node at level $k=2$ is multiplied by $(1|x|y|tx+ty)$. The probability of this graph has to be multiplied accordingly with $(p_0|\frac{1}{2}p_1|\frac{1}{2}p_1|p_2)$. We obtain this way $4+4^2=20$ graphs</p> <p>We calculate from the representative $x$ of $2[x;\frac{1}{2}p_1]$</p> <p>\begin{align*} x(1|x|y|tx+ty) &amp;\rightarrow (x|x^2|xy|tx^2+txy)\\ \frac{1}{2}p_1\left(p_0\left|\frac{1}{2}p_1\right.\left|\frac{1}{2}p_1\right.\left|\phantom{\frac{1}{2}}p_2\right.\right) &amp;\rightarrow \left(\frac{1}{2}p_0p_1\left|\frac{1}{4}p_1^2\right.\left|\frac{1}{4}p_1^2\right.\left|\frac{1}{2}p_1p_2\right.\right)\\ \end{align*}</p> <p>We obtain from $2[x;\frac{1}{2}p_1]\in[\mathcal{F}_1]$ the first part of equivalence classes of $[\mathcal{F}_2]$ with multiplicity denoting the number of graphs within an equivalence class. We list the representative and the graphs for each class. \begin{array}{c|l|c|c|l} &amp;&amp;&amp;\text{nodes at}\\ \text{mult}&amp;\text{repr}&amp;\text{prob}&amp;\text{level }2&amp;graphs\\ \hline 2&amp;x&amp;\frac{1}{2}p_0p_1&amp;0&amp;x,y\\ 2&amp;x^2&amp;\frac{1}{4}p_1^2&amp;1&amp;x^2,y^2\\ 2&amp;xy&amp;\frac{1}{4}p_1^2&amp;1&amp;xy,yx\\ 2&amp;tx^2+txy&amp;\frac{1}{2}p_1p_2&amp;2&amp;tx^2+txy,txy+ty^2\tag{1}\\ \end{array}</p> <p>We calculate from the representative $tx+ty$ of $[tx+ty;p_2]$ using a somewhat informal notation</p> <p>\begin{align*} tx&amp;(1|x|y|tx+ty)+ty(1|x|y|tx+ty)\\ &amp;\rightarrow (tx|tx^2|txy|t^2x^2+t^2xy)+(ty|tyx|ty^2|t^2xy+t^2y^2)\tag{2}\\ \end{align*}</p> </blockquote> <p>We arrange the resulting graphs in groups and associate the probabilities accordingly. The graphs are created by adding a left alternative from (2) with a right alternative from (2). 
The probabilities are the product of $p_2$ from $[tx+ty;p_2]$ and the corresponding probabilities of the left and right selected alternatives.</p> <blockquote> <p>\begin{array}{ll} tx+ty\qquad&amp;\qquad p_2p_0p_0\\ tx+tyx\qquad&amp;\qquad p_2p_0\frac{1}{2}p_1\\ tx+ty^2\qquad&amp;\qquad p_2p_0\frac{1}{2}p_1\\ tx+t^2xy+t^2y^2\qquad&amp;\qquad p_2p_0p_2\\ \\ tx^2+ty\qquad&amp;\qquad p_2\frac{1}{2}p_1p_0\\ tx^2+tyx\qquad&amp;\qquad p_2\frac{1}{2}p_1\frac{1}{2}p_1\\ tx^2+ty^2\qquad&amp;\qquad p_2\frac{1}{2}p_1\frac{1}{2}p_1\\ tx^2+t^2xy+t^2y^2\qquad&amp;\qquad p_2\frac{1}{2}p_1p_2\\ \\ txy+ty\qquad&amp;\qquad p_2\frac{1}{2}p_1p_0\\ txy+tyx\qquad&amp;\qquad p_2\frac{1}{2}p_1\frac{1}{2}p_1\\ txy+ty^2\qquad&amp;\qquad p_2\frac{1}{2}p_1\frac{1}{2}p_1\\ txy+t^2xy+t^2y^2\qquad&amp;\qquad p_2\frac{1}{2}p_1p_2\\ \\ t^2x^2+t^2y^2+ty\qquad&amp;\qquad p_2p_2p_0\\ t^2x^2+t^2y^2+tyx\qquad&amp;\qquad p_2p_2\frac{1}{2}p_1\\ t^2x^2+t^2y^2+ty^2\qquad&amp;\qquad p_2p_2\frac{1}{2}p_1\\ t^2x^2+t^2y^2+t^2xy+t^2y^2\qquad&amp;\qquad p_2p_2p_2\tag{3}\\ \end{array}</p> </blockquote> <p>A few words to terms like $txxytyxy$. This term can be replaced with $t^2x^3y^2$. In fact <em>any</em> walk in a graph containing $m$ $x$'s, $n$ $y$'s and $r$ $t$'s (with $t\leq m+n$) can be normalised to \begin{align*} t^rx^my^n \end{align*} If we map this walk to a lattice path in $\mathbb{Z}^2$ with the root at $(0,0)$ and with $x$ moving one step horizontally and $y$ moving one step vertically we always describe a path from $(0,0)$ to $(m,n)$. We could represent each graph as union of lattice paths.</p> <blockquote> <p>We create now a table as we did in (1). We identify graphs in (3) which belong to the same equivalence class.</p> <p>\begin{array}{c|l|c|c|l} &amp;&amp;&amp;\text{nodes at}\\ \text{mult}&amp;\text{repr}&amp;\text{prob}&amp;\text{level }2&amp;graphs\\ \hline 1&amp;tx+ty&amp;p_0^2p^2&amp;0&amp;tx+ty\\ 2&amp;tx+txy&amp;\frac{1}{2}p_0p_1p_2&amp;1&amp;tx+txy,txy+ty\\ 2&amp;tx+ty^2&amp;\frac{1}{2}p_0p_1p_2&amp;1&amp;tx+ty^2,tx^2+ty\\ 2&amp;tx+t^2xy+t^2y^2&amp;p_0p_2^2&amp;2&amp;tx+t^2xy+t^2y^2,\\ &amp;&amp;&amp;&amp;t^2x^2+t^2xy+ty\\ 2&amp;tx^2+txy&amp;\frac{1}{4}p_1^2p_2&amp;2&amp;tx^2+txy,txy+ty^2\\ 1&amp;tx^2+ty^2&amp;\frac{1}{4}p_1^2p_2&amp;2&amp;tx^2+ty^2\\ 2&amp;tx^2+t^2xy+t^2y^2&amp;\frac{1}{2}p_1p_2^2&amp;2&amp;tx^2+t^2xy+t^2y^2,\\ &amp;&amp;&amp;&amp;t^2x^2+t^2xy+ty^2\\ 1&amp;2txy&amp;\frac{1}{4}p_1^2p_2&amp;1&amp;txy+txy\\ 2&amp;txy+t^2xy+t^2y^2&amp;\frac{1}{2}p_1p_2^2&amp;3&amp;txy+t^2xy+t^2y^2,\\ &amp;&amp;&amp;&amp;t^2x^2+t^2xy+txy\\ 1&amp;t^2x^2+2t^2xy+t^2y^2&amp;p_2^3&amp;3&amp;t^2x^2+2t^2xy+t^2y^2\tag{4} \end{array}</p> <p>Combining the classes from (1) and (4) gives $[F_2]$.</p> <p>We calculate the expectation value $E(2)$ from the tables in (1) and (4) \begin{align*} E(2)&amp;=0\cdot p_0p_1 +1\cdot\frac{1}{2}p_1^2 + 1\cdot\frac{1}{2}p_1^2+2\cdot p_1p_2\\ &amp;\qquad+0\cdot p_0^2p_2+1\cdot p_0p_1p_2+1\cdot p_0p_1p_2+2\cdot 2p_0p_2^2\\ &amp;\qquad+2\cdot\frac{1}{2}p_1^2p_2+2\cdot\frac{1}{4}p_1^2p_2+2\cdot p_1p_2^2+1\cdot\frac{1}{4}p_1^2p_2\\ &amp;\qquad+3\cdot p_1p_2^2+3\cdot p_2^3\\ &amp;=p_1^2+2p_1p_2+2p_0p_1p_2+4p_0p_2^2+\frac{7}{4}p_1^2p_2+5p_1p_2^2+3p_2^3 \end{align*}</p> </blockquote> <p><strong>Algorithm for $E(k)$</strong></p> <p>Here is a short summary how $E(k)$ can be derived when $[F_{k-1}]$, the family of equivalence classes of graphs with diameter $k-1$ is already known. 
</p> <ul> <li><p>Take a representative $G(x,y,t)$ from each equivalence class of $[F_{k-1}]$</p></li> <li><p>Multiply each leaf which is at level $k-1$ by $(1|x|y|tx+ty)$</p></li> <li><p>Multiply the probability of the representative by $(p_0|\frac{1}{2}p_1|\frac{1}{2}p_1|p_2)$ accordingly</p></li> <li><p>Use the $xy$-symmetry of graphs and the normalization $t^rx^my^n$ to find new equivalence classes as we did for $k=2$ above. <em>Attention</em>: There may be equivalence classes with <strong>equal</strong> polynomial representatives but with <strong>different</strong> probabilities.</p></li> <li><p>The number of nodes at level $k$ of a graph $G(x,y,t)$ is the number of terms \begin{align*} t^rx^my^n\qquad\qquad m,n\geq 0, 0\leq r\leq m+n=k \end{align*}</p></li> <li><p>Build $[F_k]$ by collecting all equivalence classes, respecting the multiplicity (number of graphs) in each equivalence class</p></li> <li><p>Calculate $E(k)$</p></li> </ul>
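<p>To make the procedure concrete, here is a minimal Monte Carlo sketch (an illustration only, not part of the original derivation). It assumes the model discussed above: level $k$ consists of $k+1$ nodes, the root is infected, and every infected node independently infects both of its children with probability $p_2$, exactly one of them (left or right, each with probability $\frac{1}{2}p_1$), and neither with probability $p_0$. The function name and the sample values of $p_0,p_1,p_2$ are illustrative.</p>
<pre><code>import random

def expected_infected(k, p1, p2, trials=200000, seed=1):
    """Estimate E(k), the expected number of infected nodes at level k."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        level = [True]                        # level 0: the infected root
        for _ in range(k):
            nxt = [False] * (len(level) + 1)
            for j, infected in enumerate(level):
                if not infected:
                    continue
                u = rng.random()
                if u &lt; p2:                    # infect both children
                    nxt[j] = nxt[j + 1] = True
                elif u &lt; p2 + p1 / 2:         # infect the left child only
                    nxt[j] = True
                elif u &lt; p2 + p1:             # infect the right child only
                    nxt[j + 1] = True
            level = nxt
        total += sum(level)
    return total / trials

# Compare with E(1) = p1 + 2*p2 and with the E(2) polynomial derived above.
p0, p1, p2 = 0.2, 0.5, 0.3
E2 = (p1**2 + 2*p1*p2 + 2*p0*p1*p2 + 4*p0*p2**2
      + 7/4*p1**2*p2 + 5*p1*p2**2 + 3*p2**3)
print(expected_infected(1, p1, p2), p1 + 2*p2)
print(expected_infected(2, p1, p2), E2)
</code></pre>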
<p>According to the discussion in the comments of the question we can conclude that infections of the children of the same node are not independent. But we don't know if that dependence is horizontal or vertical. That is, maybe the infection of one node has an influence on its sibling, or maybe the parent statistically has a higher or lower influence on its children, so that sometimes it infects both, one or none of them.</p> <p>Since the disease spreads vertically only, I will consider the second case.</p> <p>Let parents be named B,L,R or N if they infect both, left, right or none of the children, where $p(L)=p(R)=\frac{p_1}2$. We can draw all possible trees and count their probabilities. For instance $$B$$$$B\_\_R$$$$N\_\_L\_\_R$$$$0\_\_L\_\_0\_\_B$$ $$0\_\_\_1\_\_0\_\_1\_\_\_1$$ has probability $B^3R^2L^2N$ (0-not infected, 1-infected).</p> <p>Let's discuss all the graphs that end with $01011$; then the previous row has to be one of $\{0L0B,R00B,...\}=$ $(\{R\}\times\{0,L,N\}\bigcup \{0,N\}\times\{L\})\times(\{R\}\times\{R\}\bigcup\{0,R,N\}\times\{B\})$</p> <p>The probability that $01011$ is preceded by $0L0B$ is equal to the probability for $0101$ to happen, times $p(L)p(B)$.</p> <p>So $p(01011)=p(1001)p(R)p(B)+p(0101)p(L)p(B) +p(1011)p(R)(p(R)^2+p(R)p(B)+p(N)p(B))+p(0111)p(L)(p(R)^2+p(R)p(B)+p(N)p(B)) +p(1101)(p(R)p(L)+p(N)p(R)+p(N)p(L))p(B) +p(1111)(p(R)p(L)+p(N)p(R)+p(N)p(L))(p(R)^2+p(R)p(B)+p(N)p(B))$</p> <p>Also $p(1011)=p(1101)$ as a mirror pair.</p> <p>So it remains to find a way of determining the preceding combinations for a pattern like 011011110... It is made of several groups of consecutive $1$s which are independent, so they can be attached to each other by a Cartesian product, also multiplied by $\{0,N\}$ for each pair of consecutive $0$s in between. </p> <p>Let $f(n)$ be the set of all combinations that precede $n$ consecutive $1$s, $f(1)=\{RN,R0,RL,0L,NL\}$, and let's define $g(n)$ as the set of all elements from $f(n)$ that start with $R$ or $0$ but with the leading $R$ substituted by $B$ and the leading $0$ substituted by $L$, $g(1)=\{BN,B0,BL,LL\}$. Then $f(n+1)=R\times f(n)\bigcup \{R,0,N\}\times g(n)$</p> <p>Exceptions are the groups at the end or start of the level, where you pick only combinations with a leading or ending zero and remove that zero. Let's define the sets for leading groups as $lf(n)$, the sets for ending groups as $ef(n)$ and the groups that are both leading and ending as $lef(n)$.</p> <p>$f(1)=\{RN,R0,RL,0L,NL\}$<br> $g(1)=\{BN,B0,BL,LL\}$<br> $lf(1)=\{L\}$, $ef(1)=\{R\}$<br> $f(2)=\{RRN,RR0,R0L,RNL,RBN,RB0,RBL,RLL,NBN,NB0,NBL,NLL,0BN,0B0,0BL,0LL\}$<br> $g(2)=\{BRN,BR0,B0L,BNL,BBN,BB0,BBL,BLL,LBN,LB0,LBL,LLL\}$<br> $lf(2)=\{BN,B0,BL,LL\}$, $ef(2)=\{RR,RB,NB,0B\}$, $lef(2)=\{B\}$<br> $f(3)=\{RRRN,RRR0,RR0L,RRNL,RRBN,RRB0,RRBL,RRLL,RNBN,RNB0,RNBL,RNLL,R0BN,R0B0,R0BL,R0LL,RBRN,RBR0,RB0L,RBNL,RBBN,RBB0,RBBL,RBLL,RLBN,RLB0,RLBL,RLLL,NBRN,NBR0,NB0L,NBNL,NBBN,NBB0,NBBL,NBLL,NLBN,NLB0,NLBL,NLLL,0BRN,0BR0,0B0L,0BNL,0BBN,0BB0,0BBL,0BLL,0LBN,0LB0,0LBL,0LLL\}$<br> $lf(3)=\{BRN,BR0,B0L,BNL,BBN,BB0,BBL,BLL,LBN,LB0,LBL,LLL\}$, $ef(3)=\{RRR,RRB,RNB,R0B,RBR,RBB,RLB,NBR,NBB,NLB,0BR,0BB,0LB\}$, $lef(3)=\{BR,BB,LB\}$<br></p> <p>Let's calculate p(111),p(110),p(101),p(100),p(010),p(000). 
Note that $p(L)=p(10)=\frac {p_1} 2=p(01)=p(R)$, $p(N)=p(00)=p_0$, $p(B)=p(11)=p_2$.</p> <p>$prec(111)=lef(3)=\{BR,BB,LB\}$ is the set of rows that can precede $111$, so $p(111)=p(11)p(B)(p(B)+p(R)+p(L))$; from here on I will write just $B$ instead of $p(B)$<br> $prec(110)=lf(2)$,<br> $p(110)=p(11)(BN+BL+LL)+p(10)B=B(BN+BL+LL+L)$<br> $prec(101)=lf(1)\times ef(1)=\{LR\}$,<br> $p(101)=p(11)LR=BLR$<br> $prec(100)=lf(1)\times \{0,N\}=\{LN,L0\}$,<br> $p(100)=p(11)LN+p(10)L=BLN+LL$<br> $prec(010)=f(1)$,<br> $p(010)=p(11)(RN+RL+NL)+p(10)R+p(01)L=B(RN+RL+NL)+2RL$</p> <p>$p(000)$ isn't needed and $p(100)=p(001)$, $p(110)=p(011)$</p> <p>$E(2)=3p(111)+2(p(101)+2p(110))+2p(100)+p(010)= 3p(111)+2p(101)+4p(110)+2p(100)+p(010)= 3p_2^2(p_2+p_1)+2p_2p_1^2/4+4p_2(p_2p_0+p_2p_1/2+p_1^2/4+p_1/2)+p_0p_1p_2+p_1^2/2+p_0p_1p_2+p_2p_1^2/4+p_1^2/2=3p_2^3+5p_2^2p_1+\frac{7}{4}p_1^2p_2+4p_0p_2^2+2p_1p_2+2p_0p_1p_2+p_1^2$</p> <p>Now that you have the probabilities on the 3rd level you can "easily" calculate them on the 4th level</p> <p>$prec(1111)=lef(4)$<br> $prec(1110)=lf(3)$,<br> $prec(1101)=lf(2)\times ef(1)$,<br> $prec(1100)=lf(2)\times \{0,N\}$,<br> $prec(1010)=lf(1)\times f(1)$,<br> $prec(1001)=lf(1)\times \{0,N\}\times ef(1)$,<br> $prec(0110)=f(2)$,<br> $prec(1000)=lf(1)\times \{0,N\}\times \{0,N\}$,<br> $prec(0100)=f(1)\times \{0,N\}$,<br></p>
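<p>As a cross-check of the level-$2$ computation, here is a small exact-enumeration sketch (an illustration only, under the same model assumptions as above). It enumerates every way the root and the infected level-$1$ nodes can be labelled B, L, R or N, accumulates the probability of each infection pattern on level $2$, and compares $p(101)$ and $E(2)$ with the expressions derived above. The function name and the sample probabilities are illustrative.</p>
<pre><code>from itertools import product

def level2_pattern_probs(p0, p1, p2):
    """Exact probabilities of the infection patterns on level 2."""
    prob = {'B': p2, 'L': p1 / 2, 'R': p1 / 2, 'N': p0}
    patterns = {}
    for root in 'BLRN':                         # label of the infected root
        lvl1 = [root in 'BL', root in 'BR']     # infection of the two level-1 nodes
        options = ['BLRN' if inf else 'N' for inf in lvl1]
        for labels in product(*options):
            p = prob[root]
            lvl2 = [False, False, False]
            for j, (inf, lab) in enumerate(zip(lvl1, labels)):
                if not inf:
                    continue                    # uninfected nodes carry no label
                p *= prob[lab]
                if lab in 'BL':
                    lvl2[j] = True              # left child of level-1 node j
                if lab in 'BR':
                    lvl2[j + 1] = True          # right child of level-1 node j
            key = ''.join('1' if b else '0' for b in lvl2)
            patterns[key] = patterns.get(key, 0.0) + p
    return patterns

p0, p1, p2 = 0.2, 0.5, 0.3
pats = level2_pattern_probs(p0, p1, p2)
print(pats['101'], p2 * (p1 / 2) * (p1 / 2))    # p(101) = B*L*R as claimed
E2 = sum(key.count('1') * p for key, p in pats.items())
print(E2)                                       # matches the E(2) polynomial above
</code></pre>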
linear-algebra
<p>Singular value decomposition (<a href="http://en.wikipedia.org/wiki/Singular_value_decomposition" rel="noreferrer">SVD</a>) and principal component analysis (<a href="https://en.wikipedia.org/wiki/Principal_component_analysis" rel="noreferrer">PCA</a>) are two eigenvalue methods used to reduce a high-dimensional data set into fewer dimensions while retaining important information. Online articles say that these methods are 'related' but never specify the exact relation.</p> <p>What is the intuitive relationship between PCA and SVD? As PCA uses the SVD in its calculation, clearly there is some 'extra' analysis done. What does PCA 'pay attention' to differently than the SVD? What kinds of relationships do each method utilize more in their calculations? Is one method 'blind' to a certain type of data that the other is not?</p>
<p>(I assume for the purposes of this answer that the data has been preprocessed to have zero mean.)</p> <p>Simply put, the PCA viewpoint requires that one compute the eigenvalues and eigenvectors of the covariance matrix, which is the product <span class="math-container">$\frac{1}{n-1}\mathbf X\mathbf X^\top$</span>, where <span class="math-container">$\mathbf X$</span> is the data matrix. Since the covariance matrix is symmetric, the matrix is diagonalizable, and the eigenvectors can be normalized such that they are orthonormal:</p> <p><span class="math-container">$\frac{1}{n-1}\mathbf X\mathbf X^\top=\frac{1}{n-1}\mathbf W\mathbf D\mathbf W^\top$</span></p> <p>On the other hand, applying SVD to the data matrix <span class="math-container">$\mathbf X$</span> as follows:</p> <p><span class="math-container">$\mathbf X=\mathbf U\mathbf \Sigma\mathbf V^\top$</span></p> <p>and attempting to construct the covariance matrix from this decomposition gives <span class="math-container">$$ \frac{1}{n-1}\mathbf X\mathbf X^\top =\frac{1}{n-1}(\mathbf U\mathbf \Sigma\mathbf V^\top)(\mathbf U\mathbf \Sigma\mathbf V^\top)^\top = \frac{1}{n-1}(\mathbf U\mathbf \Sigma\mathbf V^\top)(\mathbf V\mathbf \Sigma\mathbf U^\top) $$</span></p> <p>and since <span class="math-container">$\mathbf V$</span> is an orthogonal matrix (<span class="math-container">$\mathbf V^\top \mathbf V=\mathbf I$</span>),</p> <p><span class="math-container">$\frac{1}{n-1}\mathbf X\mathbf X^\top=\frac{1}{n-1}\mathbf U\mathbf \Sigma^2 \mathbf U^\top$</span></p> <p>and the correspondence is easily seen (the square roots of the eigenvalues of <span class="math-container">$\mathbf X\mathbf X^\top$</span> are the singular values of <span class="math-container">$\mathbf X$</span>, etc.)</p> <p>In fact, using the SVD to perform PCA makes much better sense numerically than forming the covariance matrix to begin with, since the formation of <span class="math-container">$\mathbf X\mathbf X^\top$</span> can cause loss of precision. This is detailed in books on numerical linear algebra, but I'll leave you with an example of a matrix that can be stable SVD'd, but forming <span class="math-container">$\mathbf X\mathbf X^\top$</span> can be disastrous, the <a href="https://doi.org/10.1007/BF01386022" rel="noreferrer">Läuchli matrix</a>:</p> <p><span class="math-container">$\begin{pmatrix}1&amp;1&amp;1\\ \epsilon&amp;0&amp;0\\0&amp;\epsilon&amp;0\\0&amp;0&amp;\epsilon\end{pmatrix}^\top,$</span></p> <p>where <span class="math-container">$\epsilon$</span> is a tiny number.</p>
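<p>A quick numerical illustration of this correspondence (a sketch only; the data and variable names are arbitrary): the eigenvalues of the sample covariance matrix should coincide with the squared singular values of the centered data matrix divided by $n-1$.</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 200))             # 5 variables, 200 observations
X = X - X.mean(axis=1, keepdims=True)     # center each variable (zero mean)
n = X.shape[1]

# PCA route: eigenvalues of the covariance matrix
C = X @ X.T / (n - 1)
evals = np.linalg.eigvalsh(C)             # ascending order

# SVD route: singular values of the centered data matrix itself
s = np.linalg.svd(X, compute_uv=False)    # descending order

print(np.allclose(np.sort(s**2 / (n - 1)), evals))   # True
</code></pre>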
<p><em><a href="https://arxiv.org/pdf/1404.1100.pdf" rel="nofollow noreferrer">A tutorial on Principal Component Analysis</a></em> by Jonathon Shlens is a good tutorial on PCA and its relation to SVD. Specifically, section VI: A More General Solution Using SVD.</p>
combinatorics
<p>Do men or women have more brothers?</p> <p>I think women have more as no man can be his own brother. But how one can prove it rigorously?</p> <hr> <p>I am going to suggest some reasonable background assumptions:</p> <ol> <li>There are a large number of individuals, of whom half are men and half are women.</li> <li>The individuals are partitioned into nonempty families.</li> <li>The distribution of the sizes of the families is deliberately not specified.</li> <li>However, in each family, the sex of each member is independent of the sexes of the other members.</li> </ol> <p>I believe these assumptions are roughly correct for the world we actually live in.</p> <p>Even in the absence of any information about point 3, what can one say about relative expectation of the random variables “Number of brothers of individual $I$, given that $I$ is female” and “Number of brothers of individual $I$, given that $I$ is male”?</p> <p>And how can one directly refute the argument that claims that the second expectation should almost certainly be smaller than the first, based on the observation that in any single family, say with two girls and one boy, the girls have at least as many brothers as do the boys, and usually more.</p>
<p>So many long answers! But really it's quite simple. </p> <ul> <li>Mathematically, the expected number of brothers is the same for men and women.</li> <li>In real life, we can expect men to have slightly <em>more</em> brothers than women. </li> </ul> <p><strong>Mathematically:</strong></p> <p>Assume, as the question puts it, that "in each family, the sex of each member is independent of the sexes of the other members". This is all we assume: we don't get to pick a particular set of families. (This is essential: If we were to choose the collection of families we consider, we can find collections where the men have more brothers, collections where the women have more brothers, or where the numbers are equal: we can get the answer to come out any way at all.) </p> <p>I'll write $p$ for the gender ratio, i.e. the proportion of all people who are men. In real life $p$ is close to 0.5, but this doesn't make any difference. In any random set of $n$ persons, the expected (average) number of men is $n\cdot p$.</p> <ol> <li>Take an arbitrary child $x$, and let $n$ be the number of children in $x$'s family. </li> <li>Let $S(x)$ be the set of $x$'s siblings. Note that there are <em>no</em> gender-related restrictions on $S(x)$: It's just the set of children other than $x$.</li> <li>Obviously, <strong>the expected number of $x$'s brothers is the expected number of men in $S(x)$.</strong> </li> <li>So what is the expected number of men in this set? Since $x$ has $n-1$ siblings, it's just $(n-1)\cdot p$, or approximately $(n-1)\div 2$, regardless of $x$'s gender. That's all there is to it.</li> </ol> <p>Note that the gender of $x$ didn't figure in this calculation at all. If we were to choose an arbitrary boy or an arbitrary girl in step 1, the calculation would be exactly the same, since $S(x)$ is not dependent on $x$'s gender.</p> <p><strong>In real life:</strong> </p> <p>In reality, the gender distribution of children does depend on the parents a little bit (for biological reasons that are beyond the scope of math.se). I.e., the distribution of genders in families <a href="http://www.genetics.org/content/genetics/15/5/445.full.pdf" rel="noreferrer">is not completely random.</a> Suppose some couples cannot have boys, some might be unable to have girls, etc. In such a case, being male is evidence that your parents <em>can</em> have a boy, which (very) slightly raises the odds that you can have a brother. </p> <p>In other words: <strong>If the likelihood of having boys does depend on the family, men on average have <em>more</em> brothers, not fewer.</strong> (I am expressly putting aside the "family planning" scenario where people choose to have more children depending on the gender of the ones they have. If you allow this, <strong>anything could happen.</strong>)</p>
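<p>A small simulation sketch of the mathematical claim (an illustration only; the family-size distribution below is arbitrary, and sexes are independent fair coin flips as in assumption 4 of the question):</p>
<pre><code>import random

def average_brothers(trials=200000, seed=0):
    """Average number of brothers of a randomly chosen man and woman."""
    rng = random.Random(seed)
    total = {"M": 0, "F": 0}
    count = {"M": 0, "F": 0}
    for _ in range(trials):
        size = rng.choice([1, 2, 3, 4, 5, 6])            # an arbitrary size distribution
        sexes = [rng.choice("MF") for _ in range(size)]
        boys = sexes.count("M")
        for s in sexes:
            total[s] += boys - (1 if s == "M" else 0)    # brothers of this child
            count[s] += 1
    return total["M"] / count["M"], total["F"] / count["F"]

print(average_brothers())    # the two averages agree up to simulation noise
</code></pre>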
<p><strong>Edit, 5/24/16:</strong> After some thought I don't particularly like this answer anymore; please take a look at my second answer below instead. </p> <hr> <p>Here's a simple version of the question. Suppose there is exactly one family which has $n$ children, of which $k$ are male with some probability $p_k$. When this happens, the men each have $k-1$ brothers, while the women have $k$ brothers. So it would seem that no matter what the probabilities $p_k$ are, the women will always have more brothers on average. </p> <p>However, this is not true, and the reason is that sometimes we might have $k = 0$ (no males) or $k = n$ (no females). In the first case the women have no brothers and the men don't exist, and in the second case the men have $n-1$ brothers and the women don't exist. In these cases it's unclear whether the question even makes sense.</p> <hr> <p>Another simple version of the question, which avoids the previous problem and which I think is more realistic, is to suppose that there are two families with a total of $2n$ children between them, $n$ of which are male and $n$ of which are female, but now the children are split between the families in some random way. If there are $m$ male children in the first family and $f$ female children, then the average number of brothers a man has is</p> <p>$$\frac{m(m-1) + (n-m)(n-m-1)}{n}$$</p> <p>while the average number of brothers a woman has is</p> <p>$$\frac{mf + (n-m)(n-f)}{n}.$$</p> <p>The first quantity is big when $m$ is either big or small (in other words, when the distribution of male children is lopsided between the two families) while the second quantity is big when $m$ and $f$ are either both big or both small (in other words, when the distribution of male and female children are similar in the two families). If we suppose that "big" and "small" are disjoint and both occur with some probability $p \le \frac{1}{2}$ (say $p = \frac{1}{3}$ to be concrete), then the first case occurs with probability $2p$ (say $2 \frac{1}{3} = \frac{2}{3}$) while the second case occurs with probability $2p^2$ (say $2 \frac{1}{9} = \frac{2}{9}$). So heuristically, in this version of the question:</p> <blockquote> <p>If it's easy for there to be many or few men in a family, men could have more brothers than women because it's easier for men to correlate with themselves than for women to correlate with men.</p> </blockquote> <p>But you don't have to take my word for it: we can actually do the computation. Let me write $M$ for the random variable describing the number of men in the first family and $F$ for the random variable describing the number of women in the first family, and let's assume that they are 1) independent and 2) symmetric about $\frac{n}{2}$, so that in particular</p> <p>$$\mathbb{E}(M) = \mathbb{E}(F) = \frac{n}{2}.$$ </p> <p>$M$ and $F$ are independent, so</p> <p>$$\mathbb{E}(MF) = \mathbb{E}(M) \mathbb{E}(F) = \frac{n^2}{4}.$$</p> <p>and similarly for $n-M$ and $n-F$. This is already enough to compute the expected number of brothers a woman has, which is (because $MF$ and $(n-M)(n-F)$ have the same distribution by assumption)</p> <p>$$\frac{2}{n} \left( \mathbb{E}(MF) \right) = \frac{n}{2}.$$</p> <p>In other words, the expected number of brothers a woman has is precisely the expected number of men in one family. This also follows from linearity of expectation.</p> <p>Next we'll compute the expected number of brothers a man has. 
This is (again because $M(M-1)$ and $(n-M)(n-M-1)$ have the same distribution by assumption)</p> <p>$$\frac{2}{n} \left( \mathbb{E}(M(M-1)) \right) = \frac{2}{n} \left( \mathbb{E}(M^2) - \frac{n}{2} \right) = \frac{2}{n} \left( \text{Var}(M) + \frac{n^2}{4} - \frac{n}{2} \right) = \frac{n}{2} - 1 + \frac{2 \text{Var}(M)}{n}$$</p> <p>where we used $\text{Var}(M) = \mathbb{E}(M^2) - \mathbb{E}(M)^2$. As in Donkey_2009's answer, this computation reveals that the answer depends delicately on the variance of the number of men in one family (although be careful comparing these two answers: in Donkey_2009's answer he's choosing a random family to inspect while I'm choosing a random distribution of males and females among two families). More precisely,</p> <blockquote> <p>Men have more brothers than women on average if and only if $\text{Var}(M)$ is strictly larger than $\frac{n}{2}$.</p> </blockquote> <p>For example, if the men are distributed by independent coin flips, then we can compute that $\text{Var}(M) = \frac{n}{4}$, so in fact in this case women have more brothers than men (and this doesn't depend on the distribution of $F$ at all, as long as it's independent of $M$). Here the heuristic argument about bigness and smallness doesn't apply because the probability of $M$ deviating from its mean is quite small. </p> <p>But if, for example, $m$ is instead chosen uniformly at random among the possible values $0, 1, 2, \dots n$, then $\mathbb{E}(M^2) = \frac{n(2n+1)}{6}$, so $\text{Var}(M) = \frac{n(2n+1)}{6} - \frac{n^2}{4} = \frac{n^2}{12} + \frac{n}{6}$, which is quite a bit larger than in the previous case, and this gives about $\frac{2n}{3}$ expected brothers for men. </p> <p>One quibble you might have with the above model is that you might not think it's reasonable for $M$ and $F$ to be independent. On the one hand, some families just like having lots of children, so you might expect $M$ and $F$ to be correlated. On the other hand, some families don't like having lots of children, so you might expect $M$ and $F$ to be anticorrelated. Without the independence assumption the computation for women acquires an extra term, namely $\frac{2 \text{Cov}(M, F)}{n}$ (as in Donkey_2009's answer), and now the answer also depends on how large this is relative to $\text{Var}(M)$. </p> <p>Note that the argument in the OP that "no man can be his own brother" (basically, the $-1$ in $m(m-1)$) ought to imply, if it worked, that the difference between expected number of brothers for men and women is exactly $1$: this happens iff we are allowed to write $\mathbb{E}(M(M-1)) = \mathbb{E}(M) \mathbb{E}(M-1)$ iff $M$ is independent of itself iff it's constant iff $\text{Var}(M) = 0$. </p> <hr> <p><strong>Edit:</strong> Perhaps the biggest objection you might have to the model above is that a given person's gender is not independent of the gender of their siblings; that is, as Greg Martin points out in the comments below, requirement 4 in the OP is not satisfied. This is easiest to see in the extreme case that $n = 1$: in that case we're only distributing one male and one female child, and so any siblings you have must have opposite gender from you. In general the fact that the number of male and female children is fixed here means that your siblings are slightly more likely to be a different gender from you. </p> <p>A more realistic model would be to both distribute the children randomly and to assign their genders randomly. Beyond that we should think more about how to model family sizes. </p>
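<p>A quick numerical check of the variance criterion (a sketch only), using the model above with $M$ and $F$ independent and uniform on $\{0,1,\dots,n\}$ and an arbitrary value of $n$:</p>
<pre><code>import random

n, trials = 12, 500000
rng = random.Random(1)
men = women = 0.0
for _ in range(trials):
    m = rng.randint(0, n)                  # males in the first family
    f = rng.randint(0, n)                  # females in the first family
    men += (m*(m - 1) + (n - m)*(n - m - 1)) / n
    women += (m*f + (n - m)*(n - f)) / n

var_M = n*n/12 + n/6                       # variance of the uniform distribution on {0..n}
print(men / trials, n/2 - 1 + 2*var_M/n)   # men: agrees with the formula above
print(women / trials, n/2)                 # women: n/2
</code></pre>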
geometry
<p>I have an ellipse centered at $(h,k)$, with semi-major axis $r_x$, semi-minor axis $r_y$, both aligned with the Cartesian plane.</p> <p>How do I determine if a point $(x,y)$ is within the area bounded by the ellipse? </p>
<p>The region (disk) bounded by the ellipse is given by the equation: $$ \frac{(x-h)^2}{r_x^2} + \frac{(y-k)^2}{r_y^2} \leq 1. \tag{1} $$ So given a test point $(x,y)$, plug it in $(1)$. If the inequality is satisfied, then it is inside the ellipse; otherwise it is outside the ellipse. Moreover, the point is on the boundary of the region (i.e., on the ellipse) if and only if the inequality is satisfied tightly (i.e., the left hand side evaluates to $1$). </p>
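<p>In code this is a one-line test; here is a small sketch (the function name and sample values are illustrative):</p>
<pre><code>def in_ellipse(x, y, h, k, rx, ry):
    """True if (x, y) lies inside or on the axis-aligned ellipse."""
    return (x - h)**2 / rx**2 + (y - k)**2 / ry**2 &lt;= 1

print(in_ellipse(1.0, 0.5, 0.0, 0.0, 2.0, 1.0))   # True
print(in_ellipse(2.5, 0.0, 0.0, 0.0, 2.0, 1.0))   # False
</code></pre>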
<p>Another way uses the definition of the ellipse as the points whose sum of distances to the foci is constant.</p> <p>Get the foci at $(h+f, k)$ and $(h-f, k)$, where $f = \sqrt{r_x^2 - r_y^2}$.</p> <p>The sum of the distances (by looking at the lines from $(h, k+r_y)$ to the foci) is $2\sqrt{f^2 + r_y^2} = 2 r_x $.</p> <p>So, for any point $(x, y)$, compute $\sqrt{(x-(h+f))^2 + (y-k)^2} + \sqrt{(x-(h-f))^2 + (y-k)^2} $ and compare this with $2 r_x$.</p> <p>This takes more work, but I like using the geometric definition.</p> <p>Also, for both methods, if speed is important (i.e., you are doing this for many points), you can immediately reject any point $(x, y)$ for which $|x-h| &gt; r_x$ or $|y-k| &gt; r_y$.</p>
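<p>A sketch of this focal-distance test in code (an illustration only; it assumes $r_x \ge r_y$ as in the question and includes the cheap bounding-box rejection mentioned above):</p>
<pre><code>from math import hypot, sqrt

def in_ellipse_foci(x, y, h, k, rx, ry):
    """Focal-distance test; assumes rx &gt;= ry (major axis along x)."""
    if abs(x - h) &gt; rx or abs(y - k) &gt; ry:
        return False                               # cheap bounding-box rejection
    f = sqrt(rx**2 - ry**2)                        # distance from center to each focus
    d = hypot(x - (h + f), y - k) + hypot(x - (h - f), y - k)
    return d &lt;= 2 * rx                             # sum of distances to the foci

print(in_ellipse_foci(1.0, 0.5, 0.0, 0.0, 2.0, 1.0))   # True
</code></pre>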
combinatorics
<p>I just came up with the following identity while solving some combinatorial problem, but I am not sure if it's correct. I've done some numerical computations and they coincide. $$\lim_{n\to \infty}{\frac{1}{2^n}\sum_{k=0}^{n}\binom{n}{k}\frac{an+bk}{cn+dk}}\;\stackrel?=\;\frac{2a+b}{2c+d}$$ Here $a$, $b$, $c$, and $d$ are reals, except that $c$ must not be $0$ and $2c+d\neq0$. I wish I could explain how I came up with it, but I did nothing but compare numbers with the answer, then formulated the identity and just did numerical computations.</p>
<p>The reason this is true is that the binomial coefficients are strongly concentrated around the mean, when $k \sim \frac{n}{2}$. Using some standard concentration inequalities (Chernoff is strong, but Chebyshev's inequality suffices too), you can show that for any constant $\epsilon &gt; 0$, $2^{-n} \sum_{ k \in [0, (1/2 - \epsilon ) n ] \cup [(1/2 + \epsilon)n, n]} \binom{n}{k} \rightarrow 0$ as $n \rightarrow \infty$.</p> <p>Hence in your limit, the only terms that survive are those with $k \sim \frac{n}{2}$, in which case you can cancel the $n$ throughout and get the right-hand side.</p> <p>For the limit to hold, you clearly need $2c + d \neq 0$. You would also need $c \neq 0$, as otherwise the $k = 0$ term of the sum will present difficulties.</p>
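<p>A quick numerical sanity check of the limit for one admissible choice of $a,b,c,d$ (a sketch only; the values are arbitrary, and <code>math.comb</code> requires Python 3.8+):</p>
<pre><code>from math import comb

a, b, c, d = 1.0, 2.0, 3.0, -1.0         # c != 0 and 2c + d = 5 != 0
for n in (10, 100, 1000):
    s = sum(comb(n, k) * (a*n + b*k) / (c*n + d*k) for k in range(n + 1))
    print(n, s / 2**n)
print("limit:", (2*a + b) / (2*c + d))   # 0.8
</code></pre>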
<p>$\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{\mathrm{i}} \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\Li}[1]{\,\mathrm{Li}} \newcommand{\ol}[1]{\overline{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} &amp;\color{#f00}{\lim_{n\to \infty}\,{1 \over 2^{n}} \sum_{k = 0}^{n}{n \choose k}{an + bk \over cn + dk}} = \lim_{n\to \infty}\,{1 \over 2^{n}} \sum_{k = 0}^{n}{n \choose k}\pars{an + bk}\int_{0}^{1}t^{cn + dk - 1}\,\,\,\dd t \\[3mm] = &amp;\ \lim_{n\to \infty}\,{1 \over 2^{n}}\int_{0}^{1}t^{cn - 1}\bracks{% an\sum_{k = 0}^{n}{n \choose k}\pars{t^{d}}^{k} + b\sum_{k = 0}^{n}{n \choose k}k\pars{t^{d}}^{k}}\dd t \end{align} <hr> However, $\ds{\sum_{k = 0}^{n}{n \choose k}\xi^{k} = \pars{1 + \xi}^{n} \quad\imp\quad \sum_{k = 0}^{n}{n \choose k}k\,\xi^{k} = n\xi\pars{1 + \xi}^{n - 1}}$. <hr> Then, \begin{align} &amp;\color{#f00}{\lim_{n\to \infty}\,{1 \over 2^{n}} \sum_{k = 0}^{n}{n \choose k}{an + bk \over cn + dk}} = \\[3mm] = &amp;\ \lim_{n\to \infty}\,{1 \over 2^{n}}\int_{0}^{1}t^{cn - 1}\bracks{% an\pars{1 + t^{d}}^{n} + bn\,t^{d}\pars{1 + t^{d}}^{n - 1}}\dd t \\[3mm] = &amp;\ \color{#f00}{\lim_{n \to \infty}\braces{\vphantom{\LARGE A}% {n \over 2^{n}}\bracks{\vphantom{\Large A}a\,\mathrm{f}_{n}\pars{cn,d} + b\,\mathrm{f}_{n - 1}\pars{cn + d,d}}}} \end{align} <hr> $$ \mbox{where}\quad \begin{array}{|c|}\hline\\ \ds{\quad\mathrm{f}_{n}\pars{\mu,\nu} \equiv \int_{0}^{1}t^{\mu - 1}\pars{1 + t^{\nu}}^{n}\,\dd t\quad} \\ \\ \hline \end{array} $$</p> <blockquote> <p>Can you take it from here ?. Maybe, an asymptotic study of the integral could be somehow reasonable. Otherwise, the integral is related to hypergeometric functions.</p> </blockquote>
logic
<p>Why can't some mathematical statement (or whatever is the correct term) be both true and false?</p> <p>For example we can prove (e.g. by induction) that <span class="math-container">$1+2+3+\cdots+n=\frac{n(n+1)}{2}$</span> for all positive integers <span class="math-container">$n$</span>. But how can we be sure that no one will ever find a counter example? What if someone claims that <span class="math-container">$1+2+3+\cdots+1000$</span> equals (e.g.) 500567 and not 500500, which is what the above formula claims.</p> <p>Another example: Why is it impossible for someone to come up with three integer <span class="math-container">$a$</span>, <span class="math-container">$b$</span> and <span class="math-container">$c$</span>, for which <span class="math-container">$a^3+b^3=c^3$</span> (contradicting Fermat's Last Theorem)? This bothers me even in the simple intuitive level.</p> <p>Then I have heard about Gödel's incompleteness theorems, second of which says (at least this is how I have interpreted it) that an axiomatic system cannot prove its own consistency. So doesn't Gödel's second incompleteness theorem say basically that "anything is possible"? ...that there can be an integer <span class="math-container">$n$</span> for which <span class="math-container">$1+2+3+\cdots+n \neq \frac{n(n+1)}{2}$</span> or that there can be integers <span class="math-container">$a$</span>, <span class="math-container">$b$</span> and <span class="math-container">$c$</span> for which <span class="math-container">$a^3+b^3=c^3$</span>?</p>
<p>Gödel's theorem could be more accurately interpreted as saying that we can never be sure of the consistency of a sufficiently complex system. We can't be <em>sure</em>, for instance, that the Peano Axioms don't prove $1+1=3$. We sure hope this isn't the case, but no proof would convince us otherwise (and it's probably not, since the Peano Axioms have an intuitive model, namely the natural numbers with addition and multiplication).</p> <p>However, it's still true that $1+1=2$ even if the Peano Axioms say otherwise (indeed, if they proved $1+1=3$, they would also have to prove $1+1\neq 3$, and also every <em>other</em> statement you could possibly make within that system). In fact, we can say that, if a (suitably complex) system is inconsistent, then it admits both a proof and a disproof of every statement - this is the principle of explosion.</p> <p>The difference is that there is an intended model of the Peano Axioms - the natural numbers with addition and multiplication. This is clearly well-defined and certain things are undeniably true of them. We would therefore expect that the Peano Axioms are, in fact, consistent (though we can't prove it) - and, if it is consistent, everything it proves <em>is true</em> and undeniably so. Even if PA were inconsistent, we would still expect proofs like the one for $1+2+\ldots+n=\frac{n(n+1)}2$ to work since they are leveraging such simple properties of the structure of the natural numbers.</p> <p>The point here is that "truth" and "proof" are distinct notions - but we tend to identify them because we assume our logical systems are consistent, or at least assume that the bits of them we actually use are consistent.</p>
<p>I have two answers for you.</p> <p>One has already been said so I will only say it briefly: if we exhaustively prove something to be true, there can be no counter example - i.e., in the case of an induction argument like you provided. </p> <p>Second, I will answer you with Godel as well. He is mainly known for his Incompleteness Theorems (because they are far more interesting), but his first famous work was for proving his Completeness Theorem. This tells us many things, among which is that a consistent and sound system will not have the counter examples you suggest may exist (to quote Wikipedia, a more general formulation would be "It says that for any first-order theory T with a well-orderable language, and any sentence S in the language of the theory, there is a formal proof of S in T if and only if S is satisfied by every model of T (S is a semantic consequence of T)."). </p> <p>Further, you misunderstand his Incompleteness theorems. They do not say that no theory is consistent; they simply say that no consistent theory can prove its own consistency. Indeed, Godel proved the consistency of Peano Arithmetic using Type Theory, which could not prove its own consistency.</p> <p>However, (depending on your level) all of the theorems you encounter can be <em>assured</em> to be true if proven because ZFC (the most common foundation of mathematics) had its consistency proven by only assuming the existence of a Weakly Inaccessible Cardinal. I think this is widely accepted, so if you can accept the existence of that, you are safe.</p> <p><strong>EDIT:</strong> It has been made known to me in the comments that making the existence of a Weakly Inaccessible Cardinal an axiom creates a far stronger theory than needed to prove the consistency of ZFC. In any case, the point remains that any system of mathematics you're working with has likely been proven consistent by assuming something only slightly stronger - for whatever that is worth.</p> <p>It also occurs to me that first order logic includes the Law of Noncontradiction (and the Law of Excluded Middle) as an axiom, which is also known to be consistent by Godel's Completeness theorem. So, with this, your more general question of "Why not both true and false?" is answered because we take it as axiomatic and show that including such an axiom is consistent.</p>
combinatorics
<p>TLDR; I go on a math adventure and get overwhelmed :)</p> <p>Some background:</p> <p>My maths isn't great (I can't read notation) but I'm a competent programmer and reasonable problem solver. I've done the first dozen or so Euler problems and intend to continue with that when I have time.</p> <p>The problem:</p> <p>In Arthur C Clarke's story "The 9 Billion Names of God" the names of God are all possible sequences in an unspecified alphabet, having no more than nine characters, where no letter occurs more than three times in succession.</p> <p>Out of curiosity, I started playing around with determining how many valid sequences there are in a range.</p> <p>I started with digits repeating in base 10 numbers; at heart it's the same problem as letters repeating in an alphabet.</p> <p>Not being very knowledgeable about math, I thought I'd write a program to iterate over ranges and count all the elements that match the above condition, then put the results in a spreadsheet to see if a clear pattern of some kind emerged that would let me write an algorithm to determine the number of valid sequences in a given range.</p> <p>I started with the constraint that a digit could not appear twice in succession, so in the range 0-99 there are 9 invalid sequences, 11, 22, 33 etc., leaving 91 valid 'names of God'.</p> <p>Here's the table for 0-99 through 0-99999999. I stopped there because beyond that it started taking too long to calculate and I didn't want to get sidetracked optimizing.</p> <pre><code>0-99        91
0-999       820
0-9999      7381
0-99999     66430
0-999999    597871
0-9999999   5380840
0-99999999  48427561
</code></pre> <p>I also generated a table for digits appearing no more than twice or thrice:</p> <pre><code>0-999       991
0-9999      9820
0-99999     97300
0-999999    964081
0-9999999   9552430
0-99999999  94648600

0-9999      9991
0-99999     99820
0-999999    997300
0-9999999   9964000
0-99999999  99550081
</code></pre> <p>I haven't got around to looking into these yet, because I got fascinated by the first table.</p> <p>The first table appears in <a href="http://oeis.org/A002452">OEIS as A002452</a>.</p> <p>Going from there, I looked at all sorts of different things, amongst them the sequences of numbers in different placeholder columns in the tables, differences between numbers in different columns and/or tables, etc. I wish I'd documented it more; I was just idly mucking around with a spreadsheet and Googling sequences. With a quick Google search I found some of these sequences in all sorts of strange places, some examples include transformations of Lucas Numbers, solutions to Kakuro / Addoku / Sudoku puzzles, repunits, the coordinates of geodesic faces, even the Ishango bone, which I'd never heard of before. It just goes on and on. </p> <p>Maths is full of this sort of thing, isn't it? And I'm just looking at one little problem from one very specific angle, this is just the tip of the iceberg here, isn't it? </p> <p>Questions/requests for comments:</p> <ol> <li><p>I'm presuming that my adventure isn't anything extraordinary at all and maths is full of this unexpected relationships stuff, true?</p></li> <li><p>What is the right way to describe the problem outlined in the first few paragraphs, and what do I need to learn to figure it out?</p></li> <li><p>I'd love to hear any comments/trivia etc. relating to this little adventure please!</p></li> </ol>
<p>The names you describe can be described by a <a href="http://en.wikipedia.org/wiki/Regular_expression">regular expression</a>, hence the set of all names is a <a href="http://en.wikipedia.org/wiki/Regular_language">regular language</a>. Equivalently, names can be recognized by a <a href="http://en.wikipedia.org/wiki/Deterministic_finite_state_machine">deterministic finite state machine</a> (I can think of one with $28$ states, but this is probably not optimal). If $G_n$ denotes the number of names of length $n$, it follows that the <a href="http://en.wikipedia.org/wiki/Generating_function">generating function</a> $\sum G_n x^n$ is <a href="http://en.wikipedia.org/wiki/Rational_function">rational</a> and can be calculated fairly explicitly (in several different ways), which leads to a closed form for $G_n$ as a sum of exponentials $\alpha^n$ for various $\alpha$ (possibly with polynomial coefficients) via <a href="http://en.wikipedia.org/wiki/Partial_fraction">partial fraction decomposition</a>.</p> <p>In other words, such sequences are well-understood from several related perspectives. Unfortunately I don't know a particularly elementary introduction to this material. The simplest nontrivial example of a sequence of this kind is a sequence counted by the <a href="http://en.wikipedia.org/wiki/Fibonacci_number">Fibonacci numbers</a>: the words are words over an alphabet of two letters $A, B$ with the restriction that the letter $B$ can never appear twice in a row. Here the generating function is $\sum F_n x^n = \frac{x}{1 - x - x^2}$ and this gives the closed form</p> <p>$$F_n = \frac{\phi^n - \varphi^n}{\phi - \varphi}$$</p> <p>where $\phi, \varphi$ are the positive and negative roots of $x^2 = x + 1$. A similar, but more complicated, closed form exists for the sequence you're interested in.</p> <p>The closest thing I know to a <em>complete</em> reference is Chapter 4 of Stanley's <a href="http://www-math.mit.edu/~rstan/ec/">Enumerative Combinatorics</a>, but this is not easy reading. Sipser's <a href="http://books.google.com/books?id=SV2DQgAACAAJ&amp;dq=sipser">Introduction to the Theory of Computation</a> discusses regular languages and finite state machines, but does not address the enumerative aspects of the theory. There is also a discussion of these issues (and much, much more) in Flajolet and Sedgewick's <a href="http://algo.inria.fr/flajolet/Publications/books.html">Analytic Combinatorics</a> (also not easy reading). </p> <hr> <p>Since regular languages are in some sense the simplest languages, sequences counting words in regular languages appear frequently in many situations. For example, pick any word $w$. The set of all words in which $w$ doesn't appear as a subword is a regular language, and so using the machinery I describe above one can compute the probability that a random sequence of letters of a certain length does or doesn't contain $w$. This has applications, for example, to the study of DNA sequences, if you want to ascertain how likely it is that a certain sequence of nucleotides $w$ could occur in a strand of DNA however many nucleotides long by chance. More prosaically, you can compute, for example, the probability of <a href="http://qchu.wordpress.com/2010/09/21/test-your-intuition-consecutive-tails/">flipping $7$ tails at some point out of $150$ coin flips</a>. </p>
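<p>For the simplest example above (words over $\{A,B\}$ in which $B$ never appears twice in a row), the finite-state description translates directly into a two-state recursion. Here is a small sketch (an illustration only) comparing it with the Binet-style closed form; the count for length $n$ equals $F_{n+2}$ in the convention $F_1=F_2=1$, i.e. a Fibonacci number with the indexing shifted.</p>
<pre><code>from math import sqrt

def count_no_BB(n):
    """Words of length n over {A, B} with no two consecutive B's."""
    a, b = 1, 1                  # length-1 words ending in A, in B
    for _ in range(n - 1):
        a, b = a + b, a          # append A to anything; append B only after A
    return a + b

phi = (1 + sqrt(5)) / 2
psi = (1 - sqrt(5)) / 2
for n in range(1, 9):
    binet = round((phi**(n + 2) - psi**(n + 2)) / (phi - psi))
    print(n, count_no_BB(n), binet)      # both columns agree
</code></pre>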
<p>These are pretty vague questions, but here goes:</p> <ol> <li><p>True. There's lots of unexpected connections and relationships, and part of the fun is unraveling the mystery of their occurrence (see e.g. "bijective proofs"). </p></li> <li><p>Combinatorics. More specifically, generating functions. Try <a href="http://www.math.upenn.edu/~wilf/DownldGF.html" rel="noreferrer">generatingfunctionology</a>. You may find it not too easy. </p></li> <li><p>The number of sequences of length $n$ comprising of bits (0 and 1) and no two consecutive 1's is $F_n$, the $n$th Fibonacci number.</p></li> </ol>
combinatorics
<p>A somewhat information theoretical paradox occurred to me, and I was wondering if anyone could resolve it.</p> <p>Let <span class="math-container">$p(x) = x^n + c_{n-1} x^{n-1} + \cdots + c_0 = (x - r_0) \cdots (x - r_{n-1})$</span> be a degree <span class="math-container">$n$</span> polynomial with leading coefficient <span class="math-container">$1$</span>. Clearly, the polynomial can be specified exactly by its <span class="math-container">$n$</span> coefficients <span class="math-container">$c=\{c_{n-1}, \ldots, c_0\}$</span> <strong>OR</strong> by its <span class="math-container">$n$</span> roots <span class="math-container">$r=\{r_{n-1}, \ldots, r_0\}$</span>.</p> <p>So the roots and the coefficients contain the same information. However, it takes less information to specify the roots, because their <strong>order doesn't matter</strong>. (i.e. the roots of the polynomial require <span class="math-container">$\lg(n!)$</span> bits less information to specify than the coefficients).</p> <p>Isn't this a paradox? Or is my logic off somewhere?</p> <p>Edit: To clarify, all values belong to any <a href="https://en.wikipedia.org/wiki/Algebraically_closed_field" rel="noreferrer">algebraically closed field</a> (such as the complex numbers). And note that the leading coefficient is specified to be 1, meaning that there is absolutely a one-to-one correspondence between the <span class="math-container">$n$</span> remaining coefficients <span class="math-container">$c$</span> and the <span class="math-container">$n$</span> roots <span class="math-container">$r$</span>.</p>
<p>What is happening here is just a consequence of the fact that an infinite set and a proper subset of it can be in bijective correspondence. That's a well-known fact about infinite sets. And it is a paradox in the sense that it is counter-intuitive, but not in the sense that it leads to a contradiction.</p>
<p>The ordering of the roots doesn't give you any new information about the polynomial. You have a map <span class="math-container">$$ {\mathbb C}^n \ni (r_1,r_2,\dots r_n) \mapsto (c_0,c_1\dots c_{n-1}) \in \mathbb{C}^n$$</span> This map is surjective, but not injective. That kind of thing may happen because <span class="math-container">$\mathbb{C}^n$</span> is an infinite set. It is also true for other algebraically closed fields - <a href="https://math.stackexchange.com/questions/56397/do-finite-algebraically-closed-fields-exist">no algebraically closed field is finite</a>.</p>
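<p>A two-line illustration of the many-to-one map (a sketch only): permuting the roots changes the input tuple but not the resulting monic polynomial.</p>
<pre><code>import numpy as np

print(np.poly([1.0, 2.0, 3.0]))   # coefficients of (x-1)(x-2)(x-3)
print(np.poly([3.0, 1.0, 2.0]))   # same coefficients from reordered roots
</code></pre>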
logic
<p>What is the difference between a "proof by contradiction" and "proving the contrapositive"? Intuitively, it feels like doing the exact same thing. And when I compare an exercise, one person proves by contradiction, and the other proves the contrapositive, the proofs look almost exactly the same.</p> <p>For example, say I want to prove $P \implies Q$. When I want to prove by contradiction, I would say assume this is not true. Assume $Q$ is not true, and $P$ is true. Blabla, but this implies $P$ is not true, which is a contradiction.</p> <p>When I want to prove the contrapositive, I say: Assume $Q$ is not true. Blabla, this implies $P$ is not true.</p> <p>The only difference in the proof is that I assume $P$ is true in the beginning, when I want to prove by contradiction. But this feels almost redundant, as in the end I always get that this is not true. The only other way that I could get a contradiction is by proving that $Q$ is true. But this would be the exact same thing as a direct proof. </p> <p>Can somebody enlighten me a little bit here? For example: Are there statements that can be proven by contradiction but not by proving the contrapositive?</p>
<p>To prove $P \rightarrow Q$, you can do the following:</p> <ol> <li>Prove directly, that is assume $P$ and show $Q$;</li> <li>Prove by contradiction, that is assume $P$ and $\lnot Q$ and derive a contradiction; or</li> <li>Prove the contrapositive, that is assume $\lnot Q$ and show $\lnot P$.</li> </ol> <p>Sometimes the contradiction one arrives at in $(2)$ is merely contradicting the assumed premise $P$, and hence, as you note, is essentially a proof by contrapositive $(3)$. However, note that $(3)$ allows us to assume <em>only</em> $\lnot Q$; if we can then derive $\lnot P$, we have a <em>clean</em> proof by contrapositive.</p> <p>However, in $(2)$, the aim is to derive a <em>contradiction</em>: the contradiction might <em>not</em> be arriving at $\lnot P$, if one has assumed ($P$ <em>and</em> $\lnot Q$). Arriving at <em>any contradiction</em> counts in a proof by contradiction: say we assume $P$ <em>and</em> $\lnot Q$ and derive, say, $Q$. Since $Q \land \lnot Q$ is a contradiction (can never be true), we are then forced to conclude that it <strong>cannot</strong> be that <strong>both</strong> $P$ and $\lnot Q$ hold.</p> <p>But note that $\lnot (P \land \lnot Q) \equiv \lnot P \lor Q\equiv P\rightarrow Q.$</p> <p>So a proof by contradiction usually looks something like this ($R$ is often $Q$, or $\lnot P$, or any other statement):</p> <ul> <li>$P \land \lnot Q$ Premise <ul> <li>$P$</li> <li>$\lnot Q$</li> <li>$\vdots$</li> <li>$R$</li> <li>$\vdots$</li> <li>$\lnot R$</li> <li>$\lnot R \land R$ Contradiction</li> </ul></li> </ul> <p>$\therefore \lnot (P \land \lnot Q) \equiv P \rightarrow Q$</p> <hr>
<p>There is a useful rule of thumb, when you have a proof by contradiction, to see whether it is "really" a proof by contrapositive.</p> <p>In a proof by contrapositive, you prove $P \to Q$ by assuming $\lnot Q$ and reasoning until you obtain $\lnot P$.</p> <p>In a "genuine" proof by contradiction, you assume <em>both</em> $P$ and $\lnot Q$, and deduce some other contradiction $R \land \lnot R$.</p> <p>So, at the end of your proof, ask yourself: Is the "contradiction" just that I have deduced $\lnot P$, when the implication was $P \to Q$? Did I never use $P$ as an assumption? If both answers are "yes" then your proof is a proof by contraposition, and you can rephrase it in that way. </p> <p>For example, here is a proof by "contradiction":</p> <blockquote> <p>Proposition: Assume $A \subseteq B$. If $x \not \in B$ then $x \not \in A$.</p> <p>Proof. We proceed by contradiction. Assume $x \not \in B$ and $x \in A$. Then, since $A \subseteq B$, we have $x \in B$. This is a contradiction, so the proof is complete.</p> </blockquote> <p>That proof can be directly rephrased into a proof by contrapositive:</p> <blockquote> <p>Proposition: Assume $A \subseteq B$. If $x \not \in B$ then $x \not \in A$.</p> <p>Proof. We proceed by contraposition. Assume $x \in A$. Then, since $A \subseteq B$, we have $x \in B$. This is what we wanted to prove, so the proof is complete.</p> </blockquote> <p>Proof by contradiction can be applied to a much broader class of statements than proof by contraposition, which only works for implications. But there are proofs of implications by contradiction that cannot be directly rephrased into proofs by contraposition.</p> <blockquote> <p>Proposition: If $x$ is a multiple of $6$ then $x$ is a multiple of $2$.</p> <p>Proof. We proceed by contradiction. Let $x$ be a number that is a multiple of $6$ but not a multiple of $2$. Then $x = 6y$ for some $y$. We can rewrite this equation as $1\cdot x = 2\cdot (3y)$. Because the right hand side is a multiple of $2$, so is the left hand side. Then, because $2$ is prime, and $1\cdot x $ is a multiple of $2$, either $x$ is a multiple of $2$ or $1$ is a multiple of $2$. Since we have assumed that $x$ is not a multiple of $2$, we see that $1$ must be a multiple of $2$. But that is impossible: we know $1$ is not a multiple of $2$. So we have a contradiction: $1$ is a multiple of $2$ and $1$ is not a multiple of $2$. The proof is complete.</p> </blockquote> <p>Of course that proposition can be proved directly as well: the point is just that the proof given is genuinely a proof by contradiction, rather than a proof by contraposition. The key benefit of proof by contradiction is that you can stop when you find <em>any</em> contradiction, not only a contradiction directly involving the hypotheses.</p>
probability
<blockquote> <p>If a $1$ meter rope is cut at two uniformly randomly chosen points (to give three pieces), what is the average length of the smallest piece?</p> </blockquote> <p>I got this question as a mathematical puzzle from a friend. It looks similar to MathOverflow question <a href="https://mathoverflow.net/q/2014/">If you break a stick at two points chosen uniformly, the probability the three resulting sticks form a triangle is 1/4.</a></p> <p>However, in this case, I have to find the expected length of the smallest segment. The two points where the rope is cut are selected uniformly at random. </p> <p>I tried simulating it and I got an average value of $0.1114$. I suspect the answer is $1/9$ but I don't have any rigorous math to back it up.</p> <p>How do I solve this problem?</p>
<p>Update: The current version of this answer is more intuitive (IMHO) than the previous one. See also <a href="https://math.stackexchange.com/questions/14190/average-length-of-the-longest-segment/14194#14194">this similar answer</a> to a similar question.</p> <p>The generalization of this problem to <span class="math-container">$n$</span> pieces is discussed extensively in David and Nagaraja's <em><a href="http://books.google.com/books?id=bdhzFXg6xFkC&amp;printsec=frontcover&amp;dq=david+and+nagaraja+order+statistics&amp;source=bl&amp;ots=OaK_k-BbTf&amp;sig=vAtEAURNfZTe3NDV9WodQZ-3ip4&amp;hl=en&amp;ei=DfsDTdiXPML88AafsKnqAg&amp;sa=X&amp;oi=book_result&amp;ct=result&amp;resnum=1&amp;ved=0CBgQ6AEwAA#v=onepage&amp;q&amp;f=false" rel="noreferrer">Order Statistics</a></em> (pp. 133-135, and p. 153). </p> <p>If <span class="math-container">$X_1, X_2, \ldots, X_{n-1}$</span> denote the positions on the rope where the cuts are made, let <span class="math-container">$V_i = X_i - X_{i-1}$</span>, where <span class="math-container">$X_0 = 0$</span> and <span class="math-container">$X_n = 1$</span>. So the <span class="math-container">$V_i$</span>'s are the lengths of the pieces of rope. <BR><BR> The key idea is that the probability that any particular <span class="math-container">$k$</span> of the <span class="math-container">$V_i$</span>'s simultaneously have lengths longer than <span class="math-container">$c_1, c_2, \ldots, c_k$</span>, respectively (where <span class="math-container">$\sum_{i=1}^k c_i \leq 1$</span>), is <span class="math-container">$$(1-c_1-c_2-\ldots-c_k)^{n-1}.$$</span> This is proved formally in David and Nagaraja's <em><a href="http://books.google.com/books?id=bdhzFXg6xFkC&amp;pg=PA133&amp;lpg=PA133&amp;dq=david+and+nagaraja+order+statistics+division+of+random+interval&amp;source=bl&amp;ots=OaK_l7B9-j&amp;sig=por_GsntbxBDII72xTWISQ9Keas&amp;hl=en&amp;ei=urkGTevVIZTmsQOM_7HsBw&amp;sa=X&amp;oi=book_result&amp;ct=result&amp;resnum=1&amp;ved=0CBMQ6AEwAA#v=onepage&amp;q&amp;f=false" rel="noreferrer">Order Statistics</a></em>, p. 135. Intuitively, the idea is that in order to have pieces of size at least <span class="math-container">$c_1, c_2, \ldots, c_k$</span>, all <span class="math-container">$n-1$</span> of the cuts have to occur in intervals of the rope of total length <span class="math-container">$1 - c_1 - c_2 - \ldots - c_k$</span>. For example, <span class="math-container">$P(V_1 &gt; c_1)$</span> is the probability that all <span class="math-container">$n-1$</span> cuts occur in the interval <span class="math-container">$(c_1, 1]$</span>, which, since the cuts are randomly distributed in <span class="math-container">$[0,1]$</span>, is <span class="math-container">$(1-c_1)^{n-1}$</span>. <BR><BR> If <span class="math-container">$V_{(1)}$</span> denotes the shortest piece of rope, then for <span class="math-container">$x \leq \frac{1}{n}$</span>, (following Raskolnikov's comment) <span class="math-container">$$P(V_{(1)} &gt; x) = P(V_1 &gt; x, V_2 &gt; x, \ldots, V_n &gt; x) = (1 - nx)^{n-1}.$$</span></p> <p>Therefore, <span class="math-container">$$E[V_{(1)}] = \int_0^{\infty} P(V_{(1)} &gt; x) dx = \int_0^{1/n} (1-nx)^{n-1} dx = \frac{1}{n^2}.$$</span></p> <p>David and Nagaraja also give the formula Yuval Filmus mentions (as Problem 6.4.2):</p> <p><span class="math-container">$$E[V_{(r)}] = \frac{1}{n} \sum_{j=1}^r \frac{1}{n-j+1}.$$</span></p>
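<p>A quick Monte Carlo sketch of the $n=3$ case from the question (an illustration only), which should reproduce $1/n^2 = 1/9 \approx 0.111$:</p>
<pre><code>import random

def mean_shortest_piece(n, trials=200000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        cuts = sorted(rng.random() for _ in range(n - 1))
        pieces = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
        total += min(pieces)
    return total / trials

print(mean_shortest_piece(3), 1 / 9)   # both close to 0.111
</code></pre>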
<p>My approach is maybe more naive than the others posted.</p> <p>Break the unit interval at $x$ and $y$ where $x &lt; y$. Our lengths are then $x$, $y - x$, and $1 - y$. It's not hard to show that they all have probability $1/3$ of being the shortest. In any case, our joint PDF is given by $f(x,y) = 6$ (since $x$ and $y$ remain uniform random variables on $1/6$th of the square $[0,1] \times [0,1]$). Each triangle in the diagram below corresponds to the domain of the PDF for one of the three cases.</p> <p><img src="https://i.sstatic.net/obVJs.png" alt="Each triangle corresponds to which length is shortest"></p> <p>I'll take care of the case when $x$ is shortest, that is, $x \leq y - x$ and $x \leq 1 - y$. This is the leftmost triangle. Since we're assuming $x$ is least, we are looking for $$E[x] = \int_0^{1/3} \int_{2x}^{1 - x} 6x \;dy \;dx = 1/9$$</p> <p>The cases when $y - x$ and $1 - y$ are shortest are similar.</p>
probability
<p>Let $X_{i}$, $i=1,2,\dots, n$, be independent random variables with a geometric distribution, that is, $P(X_{i}=m)=p(1-p)^{m-1}$. How can one compute the probability mass function of their sum $\sum_{i=1}^{n}X_{i}$?</p> <p>I know intuitively that it's a negative binomial distribution, $$P\left(\sum_{i=1}^{n}X_{i}=m\right)=\binom{m-1}{n-1}p^{n}(1-p)^{m-n}$$ but how does one derive this? </p>
<p>Let $X_{1},X_{2},\ldots$ be independent rvs having the geometric distribution with parameter $p$, i.e. $P\left[X_{i}=m\right]=pq^{m-1}$ for $m=1,2.\ldots$ (here $p+q=1$). </p> <p>Define $S_{n}:=X_{1}+\cdots+X_{n}$.</p> <p>With induction on $n$ it can be shown that $S_{n}$ has a negative binomial distribution with parameters $p$ and $n$, i.e. $P\left\{ S_{n}=m\right\} =\binom{m-1}{n-1}p^{n}q^{m-n}$ for $m=n,n+1,\ldots$. </p> <p>It is obvious that this is true for $n=1$ and for $S_{n+1}$ we find for $m=n+1,n+2,\ldots$: </p> <blockquote> <p>$P\left[S_{n+1}=m\right]=\sum_{k=n}^{m-1}P\left[S_{n}=k\wedge X_{n+1}=m-k\right]=\sum_{k=n}^{m-1}P\left[S_{n}=k\right]\times P\left[X_{n+1}=m-k\right]$</p> </blockquote> <p>Working this out leads to $P\left[S_{n+1}=m\right]=p^{n+1}q^{m-n-1}\sum_{k=n}^{m-1}\binom{k-1}{n-1}$ so it remains to be shown that $\sum_{k=n}^{m-1}\binom{k-1}{n-1}=\binom{m-1}{n}$.</p> <p>This can be done with induction on $m$: </p> <blockquote> <p>$\sum_{k=n}^{m}\binom{k-1}{n-1}=\sum_{k=n}^{m-1}\binom{k-1}{n-1}+\binom{m-1}{n-1}=\binom{m-1}{n}+\binom{m-1}{n-1}=\binom{m}{n}$</p> </blockquote>
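<p>A small numerical cross-check of this result (a sketch only; the parameter values are arbitrary): convolve the geometric probabilities $n-1$ times and compare with the negative binomial formula.</p>
<pre><code>from math import comb

p, n, M = 0.3, 4, 40          # sample parameters; M truncates the support
q = 1 - p
geo = [0.0] + [p * q**(m - 1) for m in range(1, M + 1)]   # P(X_i = m)

dist = geo[:]                              # distribution of X_1
for _ in range(n - 1):                     # add one more geometric variable
    new = [0.0] * (M + 1)
    for s in range(M + 1):
        for m in range(1, s):              # both summands are at least 1
            new[s] += dist[s - m] * geo[m]
    dist = new

for m in range(n, n + 5):
    print(m, dist[m], comb(m - 1, n - 1) * p**n * q**(m - n))
</code></pre>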
<p>Another way to do this is by using moment-generating functions. In particular, we use the theorem that a probability distribution is uniquely determined by its moment-generating function (MGF).<br/> Calculation of the MGF of the negative binomial distribution: <br/></p> <p><span class="math-container">$$X\sim \text{NegBin}(r,p),\ P(X=x) = p^rq^x\binom {x+r-1}{r-1}.$$</span></p> <p>Then, using the definition of the MGF:</p> <p><span class="math-container">$$E[e^{tX}]=\sum_{x=0}^{\infty}p^rq^x\binom {x+r-1}{r-1}\cdot e^{tx} = p^r(1-qe^t)^{-r}=M(t)^r,$$</span></p> <p>where <span class="math-container">$M(t)$</span> denotes the moment-generating function of a random variable <span class="math-container">$Y \sim \text{Geo}(p)$</span> (here counting failures, so the support starts at $0$). Since the $X_i$ are independent,</p> <p><span class="math-container">$$E[e^{t(X_1+X_2+\dots+X_n)}]=\prod_{i=1}^nE[e^{tX_i}]$$</span></p> <p>and we are done.</p>
logic
<p>I'm wondering if there are any non-standard theories (built upon ZFC with some axioms weakened or replaced) that make formal sense of hypothetical set-like objects whose "cardinality" is "in between" the finite and the infinite. In a world like that non-finite may not necessarily mean infinite and there might be a "set" with countably infinite "power set".</p>
<p>There's a few things I can think of which might fit the bill:</p> <ul> <li><p>We could work in a <em>non-<span class="math-container">$\omega$</span> model</em> of ZFC. In such a model, there are sets the model <em>thinks</em> are finite, but which are actually infinite; so there's a distinction between &quot;internally infinite&quot; and &quot;externally infinite.&quot; (A similar thing goes on in <em>non-standard analysis</em>.)</p> </li> <li><p>Although their existence is ruled out by the axiom of choice, it is consistent with ZF that there are sets which are not finite but are <strong>Dedekind-finite</strong>: they don't have any non-trivial self-injections (that is, Hilbert's Hotel doesn't work for them). Such sets are similar to genuine finite sets in a number of ways: for instance, you can show that a Dedekind-finite set can be even (= partitionable into pairs) or odd (= partitionable into pairs and one singleton) or neither but not both. And in fact it is consistent with ZF that the Dedekind-finite cardinalities are linearly ordered, in which case they form a nonstandard model of true arithmetic; see <a href="https://mathoverflow.net/questions/172329/does-sageevs-result-need-an-inaccessible">https://mathoverflow.net/questions/172329/does-sageevs-result-need-an-inaccessible</a>.</p> </li> <li><p>You could also work in <strong>non-classical logic</strong> - for instance, in a <strong>topos</strong>. I don't know much about this area, but lots of subtle distinctions between classically-equivalent notions crop up; I strongly suspect you'd find some cool stuff here.</p> </li> </ul>
<p>Well, there are a few notions of "infinite" sets that aren't equivalent in $\mathsf{ZF}.$ One sort is called Dedekind-infinite ("D-infinite", for short) which is a set with a countably infinite subset, or equivalently, a set which has a proper subset of the same cardinality. So, a set is D-finite if and only if the Pigeonhole Principle holds on that set. The more common notion is Tarski-infinite (usually just called "infinite"), which describes sets for which there is no injection into any set of the form $\{0,1,2,...,n\}.$</p> <p>It turns out, then, that the following are equivalent in $\mathsf{ZF}$:</p> <ol> <li>Every D-finite set is finite.</li> <li>D-finite unions of D-finite sets are D-finite.</li> <li>Images of D-finite sets are D-finite.</li> <li>Power sets of D-finite sets are D-finite.</li> </ol> <p>Without a weak Choice principle (anything that implies $\aleph_0$ to be the smallest infinite cardinality, rather than simply a minimal infinite cardinality), the following may occur: </p> <ol> <li>There may be infinite, D-finite sets. In particular, there may be infinite sets whose cardinality is not comparable to $\aleph_0.$ Put another way, there may be infinite sets such that removing an element from such a set makes a set with strictly smaller cardinality.</li> <li>There may be a D-finite set of D-finite sets whose union is D-infinite.</li> <li>There may be a surjective function from a D-finite set to a D-infinite set.</li> <li>There may be a D-finite set whose power set is D-infinite.</li> </ol>
probability
<p>In <a href="https://math.stackexchange.com/a/656426/25554">this math.se post</a> I described in some detail a certain paradox, which I will summarize:</p> <blockquote> <p>$A$ writes two distinct numbers on slips of paper. $B$ selects one of the slips at random (equiprobably), examines its number, and then, without having seen the other number, predicts whether the number on her slip is the larger or smaller of the two. $B$ can obviously achieve success with probability $\frac12$ by flipping a coin, and it seems impossible that she could do better. However, there is a strategy $B$ can follow that is guaranteed to produce a correct prediction with probability strictly greater than $\frac12$.</p> </blockquote> <p>The strategy, in short, is:</p> <ul> <li>Prior to selecting the slip, $B$ should select some probability distribution $D$ on $\Bbb R$ that is everywhere positive. A normal distribution will suffice.</li> <li>$B$ should generate a random number $y\in \Bbb R$ distributed according to $D$. </li> <li>Let $x$ be the number on the slip selected by $B$. If $x&gt;y$, then $B$ predicts that $x$ is the larger of the two numbers; if $x&lt;y$ she predicts that $x$ is the smaller of the two numbers. ($y=x$ occurs with probability $0$ and can be disregarded.)</li> </ul> <p>I omit the analysis that shows that this method predicts correctly with probability <em>strictly</em> greater than $\frac12$; the details are in the other post.</p> <p>I ended the other post with “I have heard this paradox attributed to Feller, but I'm afraid I don't have a reference.”</p> <p>I would like a reference.</p>
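<p>For readers who want to see the edge numerically, here is a small Monte Carlo sketch of the strategy just described (my own illustration, with <span class="math-container">$D$</span> taken to be the standard normal and the two hidden numbers hard-coded); the advantage over <span class="math-container">$\frac12$</span> is <span class="math-container">$\tfrac12 P(a&lt;Y&lt;b)$</span>, so it is noticeable when the two numbers straddle the bulk of <span class="math-container">$D$</span> and tiny otherwise.</p> <pre><code>import random

def play(a, b, trials=200_000):
    """Estimate B's success probability against fixed hidden numbers a &lt; b."""
    wins = 0
    for _ in range(trials):
        x = random.choice((a, b))       # B sees one of the two slips at random
        y = random.gauss(0.0, 1.0)      # B's threshold, drawn from D = N(0,1)
        predict_larger = x &gt; y          # predict "x is the larger" iff x exceeds y
        if predict_larger == (x == b):
            wins += 1
    return wins / trials

print(play(0.3, 0.7))   # clearly above 0.5: D puts real mass between 0.3 and 0.7
print(play(5.0, 6.0))   # still at least 0.5, but the edge is negligible out here
</code></pre>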
<p>Thanks to a helpful comment, since deleted, by user <a href="https://math.stackexchange.com/users/128037/stefanos">Stefanos</a>, I was led to this (one-page) paper of <a href="https://en.wikipedia.org/wiki/Thomas_M._Cover">Thomas M. Cover</a>, “<a href="http://www-isl.stanford.edu/~cover/papers/paper73.pdf">Pick the largest number</a>”, <em>Open Problems in Communication and Computation</em>, Springer-Verlag, 1987, p. 152.</p> <p>Stefanos pointed out that there is an extensive discussion of related paradoxes in the Wikipedia article on the ‘<a href="https://en.wikipedia.org/wiki/Two_envelopes_problem">Two envelope problem</a>’. Note that the paradox I described above does not appear until late in the article, in the section "<a href="https://en.wikipedia.org/wiki/Two_envelopes_problem#Randomized_solutions">randomized solutions</a>".</p> <p>Note also that the main subject of that article involves a paradox that arises from incorrect reasoning, whereas the variation I described above is astonishing but sound.</p> <p>I would still be interested to learn if this paradox predates 1987; I will award the "accepted answer" check and its 15 points to whoever posts the earliest appearance.</p>
<p><a href="https://johncarlosbaez.wordpress.com/2015/07/20/the-game-of-googol/" rel="noreferrer">In his blog post</a>, mathematical physicist John Baez considered this problem described by Thomas M. Cover as the special case $n = 2$ of what was called "the game of googol" by Martin Gardner in his column in the Feb. 1960 issue of Scientific American.</p> <p>That is, John Baez accepted (the suggestion by one of his blog readers) that this is a variant of the famous secretary problem, which <a href="https://en.wikipedia.org/wiki/Secretary_problem#The_game_of_googol" rel="noreferrer">wiki entry actually includes this "game of googol".</a></p> <p>If you are willing to take this point of view (Thomas M. Cover sort of did, in his closing remark), then the quest becomes the history of (this variant of) the secretary problem.</p> <p>John Baez's blog post is a good expository piece with some good discussion in the comments there. However, he didn't really do the literature search that pertains to Thomas M. Cover's approach going backwards in time.</p> <p>Similarly, there's no citation in that <a href="https://www.quantamagazine.org/information-from-randomness-puzzle-20150707" rel="noreferrer">Quanta magazine article</a> that got John Baez's notice. FWIW, the analysis there is thorough enough while being easily accessible to the general public.</p> <p>My personal stance at this point is that, nobody before Thomas M. Cover looked at the secretary problem in this particular way. He passed away in 2012 so one can only learn how he encountered this problem by studying his notes or asking his collaborators, friends, etc.</p> <p>I have to admit that I didn't go through <a href="https://scholar.google.com.tw/scholar?hl=en&amp;as_sdt=0,5&amp;sciodt=0,5&amp;cites=10356219223560197333&amp;scipsc=" rel="noreferrer">all the publications (27 so far) that cited that short paper</a> by Thomas M. Cover to see if they found anything preceding that.</p> <p>For the record, if one wants to see Martin Gardner's original text, I cannot find an open access of the Feb.1960 Scientific American nor the earliest reprint of the column in his 1966 book.${}^2$ The best thing I've got is the <a href="https://books.google.com.tw/books?id=sUuBCzazfYUC&amp;pg=PA17&amp;lpg=PA17&amp;dq=martin+gardner+googol&amp;source=bl&amp;ots=4Mr5gjyJgn&amp;sig=awPckIsv5Oi5Wm6sTuMydqFIicg&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwixsIv40OjXAhUFHpQKHbewC1oQ6AEIQTAE#v=onepage&amp;q=martin%20gardner%20googol&amp;f=false" rel="noreferrer">1994 google book</a>. See the 2nd paragraph on p.18 for the case $n=2$. Martin Gardner certainly didn't know what he was missing.</p> <hr> <p><strong>Footnote 1:</strong>$\quad$ <em>"New Mathematical Diversions from Scientific American". Simon and Schuster, 1966, Chapter 3, Problem 3</em>. This is a relatively long chapter in the book, as it essentially deals with the secretary problem.</p>
logic
<p>I just started studying logic (high school). Anyway, for the truth table of logical implication:</p> <p>If sentence $A$ is true and $B$ is true, then $A\implies B$ is true.</p> <p>Does that mean that whenever $A$ and $B$ are both true, there is always a way to prove $B$ from $A$?</p> <p>Similarly, if $A$ is false, can anything (either true or false) be proved from $A$?</p>
<p>As a logical proposition, the <a href="//en.wikipedia.org/wiki/Material_conditional" rel="noreferrer">material conditional</a> $A \implies B$ is a very weak one: as you've noticed, it's very easy to satisfy it <a href="//en.wikipedia.org/wiki/Contingency_(philosophy)" rel="noreferrer">just by accident</a>. In fact, this happens whenever $A$ is false, or whenever $B$ is true. Thus, merely observing that $A \implies B$, for some specific $A$ and $B$, says very little.</p> <p>Instead, the usefulness of implication lies in the fact that, precisely because of its weakness, it is often possible to <em>assert</em> $A \implies B$ as a universal statement (either an axiom or a provable theorem) that holds for <em>any</em> valuation of any free variables mentioned in the propositions $A$ and $B$.</p> <p>For example, consider the statement: $$x &gt; 2 \;\land\; x \text{ is prime} \implies x \text{ is odd}.$$ Merely observing that this statement holds for <em>some</em> $x$ says very little &mdash; there are plenty of numbers for which it is trivially true, either because they are odd, or because they are not primes greater than 2. What makes this statement useful is that we can prove that it holds for <em>all</em> $x$ &mdash; there isn't a single number which would be greater than 2 and prime, but not odd.</p>
<p>One way to understand implication is to remember that $A\Rightarrow B$ is equivalent to $\neg A \lor B$. If you understand negation ($\neg$) and disjunction ($\lor$), then you understand implication.</p>
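<p>Since the last answer reduces the conditional to <span class="math-container">$\neg A\lor B$</span>, a three-line script can confirm that the two agree on all four rows of the truth table (a trivial check, included only to make the table concrete):</p> <pre><code>from itertools import product

def implies(a, b):
    return (not a) or b   # the material conditional, defined as not-A or B

for a, b in product((True, False), repeat=2):
    print(f"A={a}  B={b}  (A implies B) = {implies(a, b)}")
</code></pre>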
combinatorics
<p>The goal of the <a href="http://en.wikipedia.org/wiki/Four_fours" rel="noreferrer">four fours</a> puzzle is to represent each natural number using four copies of the digit $4$ and common mathematical symbols.</p> <p>For example, $165=\left(\sqrt{4} + \sqrt{\sqrt{{\sqrt{4^{4!}}}}}\right) \div .4$.</p> <blockquote> <p>If we remove the restriction on the number of fours, let $f(N)$ be the number of fours required to be able to represent all positive integers no greater than $N$. What is the asymptotic behaviour of $f(N)$? Can it be shown that $f(N) \sim r \log N$ for some $r$?</p> </blockquote> <p>To be specific, let’s restrict the operations to the following: </p> <ul> <li>addition: $x+y$</li> <li>subtraction: $x-y$</li> <li>multiplication: $x\times y$</li> <li>division: $x\div y$</li> <li>exponentiation: $y^x$</li> <li>roots: $\sqrt[x]{y}$</li> <li>square root: $\sqrt{x}$</li> <li>factorial $n!$</li> <li>decimal point: $.4$</li> <li>recurring decimal: $. \overline 4$</li> </ul> <p>It is easy to see that $f(N)$ is $O(\log N)$. For example, with four fours, numbers up to $102$ can be represented (see <a href="https://math.stackexchange.com/questions/92230/proving-you-cant-make-2011-out-of-1-2-3-4-nice-twist-on-the-usual/93188#93188">here</a> for a tool for generating solutions), so, since $96 = 4\times4!$, we can use $6k-2$ fours in the form $(\dots((a_1\times 96+a_2)\times 96+a_3)\dots)\times96+a_k$ to represent every number up to $96^k$.</p> <p>On the other hand, we can try to count the number of distinct expressions that can be made with $k$ fours. For example, if we (arbitrarily) permit factorial only to be applied to the digit $4$, and allow no more than two successive applications of the square root operation, we get $\frac{216^k}{18}C_{k-1}$ distinct expressions where $C_k$ is the $k$th Catalan number. (Of course, many of these expressions won’t represent a positive integer, many different expressions will represent the same number, and the positive integers generated won’t consist of a contiguous range from $1$ to some $N$.) </p> <p>Using Stirling’s formula, for large $k$, this is approximately $\frac{864^k}{72k\sqrt{\pi k}}$. So for $f(N)$ to grow slower than $r\log N$, we’d need to remove the restrictions on the use of unary operations. (It is <a href="http://en.wikipedia.org/wiki/Four_fours#Rules" rel="noreferrer">well-known</a> that the use of logs enables <em>any</em> number to be represented with only <em>four</em> fours.)</p> <blockquote> <p>Can this approach be extended to show that $f(N)$ is $\Omega(\log N)$? Or does unrestricted use of factorial and square roots mean that $f(N)$ is actually $o(\log N)$? Is the answer different if the use of $x\%$ (percentages) is also permitted?</p> </blockquote>
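<p>As a companion to the counting argument above, here is a small exact-arithmetic search (my own sketch, not part of the puzzle's standard rules): it restricts to the tokens <span class="math-container">$4$</span>, <span class="math-container">$44$</span>, <span class="math-container">$.4$</span>, <span class="math-container">$.\overline4$</span> and the four binary operations, omitting factorials, roots and powers entirely, so it only finds a subset of what four fours can really reach.</p> <pre><code>from fractions import Fraction
from functools import lru_cache

ATOMS = {
    1: (Fraction(4), Fraction(2, 5), Fraction(4, 9)),   # 4, .4, .4 recurring
    2: (Fraction(44),),                                  # the concatenation "44"
}

@lru_cache(maxsize=None)
def values(k):
    """Exact rationals expressible with exactly k fours, binary operations only."""
    out = set(ATOMS.get(k, ()))
    for i in range(1, k):
        for x in values(i):
            for y in values(k - i):
                out.update((x + y, x - y, x * y))
                if y != 0:
                    out.add(x / y)
    return frozenset(out)

reachable = set()
for k in range(1, 5):
    reachable.update(v for v in values(k) if v.denominator == 1 and v &gt; 0)

n = 1
while n in reachable:
    n += 1
print("first positive integer missed with at most four 4s:", n)
</code></pre>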
<p>I'm one of the authors of the paper referenced by David Bevan in his comment. The four-fours puzzle was one inspiration for that problem, although others have thought about it also. The specific version of the problem there looks at the minimum number of <span class="math-container">$1$</span>s needed to represent <span class="math-container">$n$</span> where one is allowed only addition and multiplication but any number of parentheses. Call this <span class="math-container">$g(n)$</span>. For example, <span class="math-container">$g(6) \le 5$</span>, since <span class="math-container">$6=(1+1)(1+1+1)$</span>, and it isn't hard to show that <span class="math-container">$g(6)=5$</span>. Even in this limited version of the problem, it is generally difficult even to get asymptotics.</p> <p>In some sense most natural questions of asymptotic growth are somewhat contained in this question, since one can write any given <span class="math-container">$k$</span> as <span class="math-container">$1+1+1...+1$</span> <span class="math-container">$k$</span> times, and <span class="math-container">$1=k/k$</span>. Thus starting with some <span class="math-container">$k$</span> other than <span class="math-container">$1$</span> (such as <span class="math-container">$k=4$</span>), the asymptotics stay bounded within a constant factor, assuming that addition and division are allowed.</p> <p>However, actually calculating this sort of thing for any set of operations is generally difficult. In the case of integer complexity one has a straightforward way of doing so, since if one calculates <span class="math-container">$g(i)$</span> for all <span class="math-container">$i &lt; n$</span>, calculating <span class="math-container">$g(n)$</span> is then doable. This doesn't apply when one has other operations generally, with division and subtraction already making an algorithm difficult. In this case, one can make such an algorithm but exactly how to do so is more subtle. In fact, as long as one is restricted to binary operations this is doable (proof sketch: do what you did to look at all distinct expressions).</p> <p>Adding in non-binary operations makes everything even tougher. Adding in square roots won't make things that much harder, nor will adding factorial by itself. The pair of them together makes calculating specific values much more difficult. My guess would be that even with factorial, square root and the four binary operations there are numbers which require arbitrarily large numbers of <span class="math-container">$1$</span>s, but I also suspect that this would be extremely difficult to prove. Note that this is already substantially weaker than what you are asking: whether the order of growth is <span class="math-container">$\log n$</span>. Here though square roots probably don't alter things at all; in order for it to matter one needs to have a lot of numbers of the form <span class="math-container">$n^{2^k}$</span> with surprisingly low complexity. This seems unlikely.</p>
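<p>The function <span class="math-container">$g(n)$</span> described in the first paragraph (least number of <span class="math-container">$1$</span>s using only addition and multiplication) is easy to tabulate with a textbook dynamic program; this is just an illustration of that remark, not code from the paper.</p> <pre><code>def integer_complexity(limit):
    """g[n] = least number of 1s needed to write n using only + and *."""
    g = [0] * (limit + 1)
    for n in range(1, limit + 1):
        best = n                          # n = 1 + 1 + ... + 1 always works
        for a in range(1, n // 2 + 1):    # additive splits n = a + (n - a)
            best = min(best, g[a] + g[n - a])
        d = 2
        while d * d &lt;= n:                 # multiplicative splits n = d * (n // d)
            if n % d == 0:
                best = min(best, g[d] + g[n // d])
            d += 1
        g[n] = best
    return g

g = integer_complexity(100)
print(g[6])    # 5, matching 6 = (1+1)(1+1+1)
print(g[100])  # complexity of 100 under + and * alone
</code></pre>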
<p>You can get <span class="math-container">$103$</span> with five <span class="math-container">$4$</span>s as <span class="math-container">$$\frac {\sqrt{\sqrt{\sqrt{4^{4!}}}}+4+\sqrt{.\overline4}}{\sqrt{.\overline4}}=103$$</span> </p> <p>For four <span class="math-container">$4$</span>s, we have <span class="math-container">$\dfrac {44}{.\overline 4}+4=103$</span>.</p> <p>In fact, <span class="math-container">$113$</span> is the first number I can't get with four <span class="math-container">$4$</span>s.</p>
logic
<p>Do I get this right? Gödel's incompleteness theorem applies to first order logic as it applies to second order and any higher order logic. So there is essentially <em>no way</em> of pinning down the natural numbers that we think of in everyday life?</p> <ul> <li>First order logic fails to be categorical, i.e. there are always non-standard models.</li> <li>Second order logic is categorical here, but does not allow us to prove all its true statements?</li> <li>Defining the numbers from set theory (e.g. ZFC) suffers from the same problem as all first order theories, i.e. there are non-standard models of ZFC which induce non-standard natural numbers?</li> </ul> <p>How do we even <em>know</em> what natural numbers <em>are</em>, if we have no way to pin down a definition? What are the <em>standard natural numbers</em>? Or do we accept that second-order logic gives us this definition and we just cannot prove all there is?</p> <hr> <p><strong>LATER:</strong> </p> <p>I think I can summarize my question as follows:</p> <blockquote> <p>How do the mathematicians that write <em>standard natural numbers</em> have formal consensus on what they are talking about?</p> </blockquote>
<blockquote> <p>How do the mathematicians that write standard natural numbers have formal consensus on what they are talking about?</p> </blockquote> <p>Mathematicians work in a meta-system (which is usually ZFC unless otherwise stated). ZFC has a collection of natural numbers that is automagically provided for by the axiom of infinity. One can easily define over ZFC the language of arithmetic (as in any standard logic textbook), and also that the standard natural numbers are terms of the form "$0$" or "$1+\cdots+1$" where the number of "$1$"s is a natural number. To disambiguate these two 'kinds' of natural numbers, some authors call the standard natural numbers "standard numerals".</p> <blockquote> <p>Do I get this right? Gödel's incompleteness theorem applies to first order logic as it applies to second order and any higher order logic. So there is essentially no way pinning down the natural numbers that we think of in everyday life?</p> </blockquote> <p>Yes. See <a href="http://math.stackexchange.com/a/1895288/21820">this post</a> for the generalization and proof of the incompleteness theorems that applies to every conceivable formal system, even if it is totally different from first-order or higher-order logic.</p> <blockquote> <p>First order logic fails to be categorical, i.e. there are always non-standard models.</p> </blockquote> <p>Yes, so first-order PA does not pin down the natural numbers.</p> <blockquote> <p>Second order logic is categorical here, but does not allow us to prove all its true statements?</p> </blockquote> <p>Yes; there is no (computably) effective deductive system for second-order logic, so we cannot use second-order PA as a practical formal system. In the first place the second-order induction axiom is useless if you do not add some set-existence axioms. In any case, any effective formal system that describes the natural numbers will be incomplete, by the incompleteness theorem.</p> <p>So although second-order PA is categorical (from the perspective of a strong enough meta-system), the categoricity does not solve the philosophical problem at all since such a meta-system is itself necessarily incomplete and hence the categoricity of second-order PA only ensures uniqueness of the natural numbers within each model of the meta-system, and cannot establish any kind of absolute categoricity.</p> <blockquote> <p>Defining the numbers from set theory (e.g. ZFC) suffers from the same problem as all first order theories, i.e. there are non-standard models of ZFC which induce non-standard natural numbers?</p> </blockquote> <p>Exactly; see previous point.</p> <blockquote> <p>How do we even know what natural numbers are, if we have no way to pin down a definition.</p> </blockquote> <p>We can only describe what we would like them to be like, and our description must be incomplete because we cannot convey any non-effective description. PA is one (incomplete) characterization. ACA is another. ZFC's axiom of infinity is a much stronger characterization. But there will never be an absolute categorical characterization.</p> <p>You might hear a common attempt at defining natural numbers as those that can be obtained from 0 by adding 1 repeatedly. This is <strong>circular</strong>, because "repeatedly" cannot be defined without essentially knowing natural numbers. We are stuck; we <strong>must already know what are natural numbers</strong> before we can talk about iteration. 
This is why every useful foundational system for mathematics already has something inbuilt to provide such a collection. In the case of ZFC it is the axiom of infinity.</p> <blockquote> <p>What are the standard natural numbers?</p> </blockquote> <p>Good question. But this is highly philosophical, so I'll answer it later.</p> <blockquote> <p>Or do we accept that second-order logic gives us this definition and we just cannot prove all there is?</p> </blockquote> <p>No, second-order PA does not actually help us define natural numbers. The second-order induction axiom asserts "For every <strong>set</strong> of natural numbers, ...", but leaves undefined what "set" means. And it cannot possibly define "set" because it is <a href="http://math.stackexchange.com/a/1334753/21820">circular</a> as usual, and it does not help that the circularity is tied up with natural numbers...</p> <hr> <p>Now for the philosophical part.</p> <p>We have seen that mathematically we cannot uniquely define the natural numbers. Worse still, there does not seem to be ontological reason for believing in the existence of a perfect physical representation of any collection that satisfies PA under a suitable interpretation.</p> <p>Even if we discard the arithmetical properties of natural numbers, there is not even a complete theory of finite strings, in the sense that <a href="https://www.impan.pl/~kz/files/AGKZ_Con.pdf" rel="noreferrer">TC (the theory of concatenation)</a> is essentially incomplete, despite having just the concatenation operation and no arithmetic operations, so we cannot pin down even the finite strings!</p> <p>So we do not even have hope of giving a description that <strong>uniquely identifies</strong> the collection of finite strings, which naturally precludes doing the same for natural numbers. This fact holds under very weak assumptions, such as those required to prove Godel's incompleteness theorems. If one rejects those... Well one reason to reject them is that there is no apparent physical model of PA...</p> <p>As far as we know in modern physics, one cannot store finite strings in any physical medium with high fidelity beyond a certain length, for which I can safely give an upper bound of $2^{10000}$ bits. This is not only because <strong>the observable universe is finite</strong>, but also because a physical storage device with extremely large capacity (on the order of the size of the observable universe) will degrade faster than you can use it.</p> <p>So description aside, we do not have any reason to even believe that finite strings have actual physical representation in the real world. This problem cannot be escaped by using conceptual strings, such as iterations of some particular process, because we have no basis to assume the existence of a process that can be iterated indefinitely, pretty much due to the finiteness of the observable universe, again.</p> <p>Therefore we are stuck with the <strong>physical inability</strong> to even generate all finite strings, or to generate all natural numbers in a physical representation, even if we define them using circular natural-language definitions!</p> <hr> <p>Now I am not saying that there is absolutely no real-world relevance of arithmetical facts.</p> <p>Despite the fact that PA (Peano arithmetic) is based on the assumption of an infinite collection of natural numbers, which as explained above cannot have a perfect physical representation, PA still generates theorems that seem to be <strong>true at least at human scales</strong>. 
My favourite example is HTTPS, whose decryption process relies crucially on the correctness of Fermat's little theorem applied to natural numbers with length on the order of thousands of bits. So there is some truth in PA at human scales.</p> <p>This may even suggest one way to <strong>escape</strong> the incompleteness theorems, because they only apply to deterministic formal systems that roughly speaking have certain unbounded closure properties (see <a href="https://pdfs.semanticscholar.org/c278/147b7a68385836a90939a175a9959cabbf0b.pdf" rel="noreferrer">this paper about self-verifying theories</a> for sharp results about the incompleteness phenomenon). Perhaps the real world may even be governed by some kind of system that is syntactically complete, since it has physical 'fuzziness' due to quantum mechanics or spacetime limitations, but anyway such systems will not have arithmetic in full as we know it!</p>
<p>Yes, as you phrased it in a comment:</p> <blockquote> <p>So even the academic usage of the term standard natural number trusts in our intuitive understanding from preschool?</p> </blockquote> <p>That's exactly how it is.</p> <p>We believe, based on experience, that the intuitive concept we learned in preschool has meaning, and mathematics is an endeavor to <em>explore <strong>that</strong> concept</em> with more powerful tools than we had available in preschool -- not to construct it from scratch.</p> <p>There are the Peano axioms either in their first- or second-order guise, of course. However, even if Gödel hadn't sunk the hope that they would tell us <em>all</em> truth about our intuitive natural numbers, they would still just be ink. The fundamental reason why we <em>care</em> about those axioms is that we believe they capture truth (only some of the truth, but truth nonetheless) about our intuitive conception of number.</p> <p>Indeed it is hard to imagine how one <em>could</em> construct the natural numbers from scratch. To demand that, one would need to build on something else that we <em>already</em> know -- but it can't be turtles all the way down, and somewhere the buck has to stop. We can kick it around a bit and say, for example, that our fundamental concept is not numbers, but the finite strings of symbols that make up formal proofs -- but that's not really progress, because the natural numbers are implicit even there: If we can speak of strings, then we can speak of tally marks, and there's the natural numbers already!</p>
probability
<p>There's an 80% probability of a certain outcome, we get some new information that means that outcome is 4 times more likely to occur.</p> <p>What's the new probability as a percentage and how do you work it out?</p> <p>As I remember it the question was posed like so:</p> <blockquote> <p>Suppose there's a student, Tom W, if you were asked to estimate the probability that Tom is a student of computer science. Without any other information you would only have the base rate to go by (percentage of total students enrolled on computer science) suppose this base rate is 80%.</p> <p>Then you are given a description of Tom W's personality, suppose from this description you estimate that Tom W is 4 times more likely to be enrolled on computer science.</p> <p>What is the new probability that Tom W is enrolled on computer science.</p> </blockquote> <p>The answer given in the book is 94.1% but I couldn't work out how to calculate it!</p> <p>Another example in the book is with a base rate of 3%, 4 times more likely than this is stated as 11%.</p>
<p>The most reasonable way to match the answer in the book would be to define the likelihood to be the ratio of success over failure (aka odds): $$ q=\frac{p}{1-p} $$ then the probability as a function of the odds is $$ p=\frac{q}{1+q} $$ In your case the odds are $4:1$ so $4$ times as likely would be $16:1$ odds which has a probability of $$ \frac{16}{17}=94.1176470588235\% $$ This matches the $3\%$ to $11.0091743119266\%$ transformation, as well.</p> <hr> <p><strong>Bayes' Rule</strong></p> <p><a href="http://en.wikipedia.org/wiki/Bayes%27_rule#Single_event">Bayes' Rule for a single event</a> says that $$ O(A\mid B)=\frac{P(B\mid A)}{P(B\mid\neg A)}\,O(A) $$ where the odds of $X$ is defined as earlier $$ O(X)=\frac{P(X)}{P(\neg X)}=\frac{P(X)}{1-P(X)} $$ This is exactly what is being talked about in the later addition to the question, where it is given that $$ \frac{P(B\mid A)}{P(B\mid\neg A)}=4 $$</p>
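<p>The odds-update rule above is two lines of arithmetic; the following snippet just evaluates it for both of the book's examples (the function name is mine).</p> <pre><code>def update_by_likelihood_ratio(p, ratio):
    """Convert probability to odds, multiply by the likelihood ratio, convert back."""
    odds = p / (1 - p)
    new_odds = ratio * odds
    return new_odds / (1 + new_odds)

print(update_by_likelihood_ratio(0.80, 4))   # 0.941176... = 16/17
print(update_by_likelihood_ratio(0.03, 4))   # 0.110091... = 12/109
</code></pre>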
<p>Daniel Kahneman's book mentions Bayesian reasoning. An answer using Bayesian reasoning is as follows:</p> <p>Let $C$ be the event that Tom is compsci, $N$ be the event that he has a "nerdy" personality.</p> <p>We are given $P(N|C)/P(N|\neg C)= 4$, which implies that $P(N|\neg C) = P(N|C)/4$.</p> <p>By Bayes Theorem (and using the theorem of total probability to expand the denominator)</p> <p>$$\begin{eqnarray*} P(C|N) &amp;=&amp; \frac{P(N|C) P(C)}{ P(N)} \\ &amp;=&amp; \frac{P(N|C) P(C)}{P(N|C)P(C) + P(N|\neg C) P(\neg C)} \\ &amp;=&amp; \frac{P(N|C) P(C)}{P(N|C)P(C) + 0.25 P(N|C)P(\neg C)} \\ &amp;=&amp; \frac{P(C)}{P(C) + 0.25 P(\neg C)} \\ &amp;=&amp; \frac{0.8}{0.8 + 0.25 \times 0.2} \\ &amp;\approx&amp; 0.9411765 \end{eqnarray*}$$</p> <p>Similar reasoning in the 3% case leads to $P(C|N) = 0.03 / (0.03 + .25*.97) \approx 0.1100917$.</p>
probability
<p>$X \sim \mathcal{P}( \lambda) $ and $Y \sim \mathcal{P}( \mu)$ meaning that $X$ and $Y$ are Poisson distributions. What is the probability distribution law of $X + Y$. I know it is $X+Y \sim \mathcal{P}( \lambda + \mu)$ but I don't understand how to derive it.</p>
<p>This only holds if $X$ and $Y$ are independent, so we suppose this from now on. We have for $k \ge 0$: \begin{align*} P(X+ Y =k) &amp;= \sum_{i = 0}^k P(X+ Y = k, X = i)\\ &amp;= \sum_{i=0}^k P(Y = k-i , X =i)\\ &amp;= \sum_{i=0}^k P(Y = k-i)P(X=i)\\ &amp;= \sum_{i=0}^k e^{-\mu}\frac{\mu^{k-i}}{(k-i)!}e^{-\lambda}\frac{\lambda^i}{i!}\\ &amp;= e^{-(\mu + \lambda)}\frac 1{k!}\sum_{i=0}^k \frac{k!}{i!(k-i)!}\mu^{k-i}\lambda^i\\ &amp;= e^{-(\mu + \lambda)}\frac 1{k!}\sum_{i=0}^k \binom ki\mu^{k-i}\lambda^i\\ &amp;= \frac{(\mu + \lambda)^k}{k!} \cdot e^{-(\mu + \lambda)} \end{align*} Hence, $X+ Y \sim \mathcal P(\mu + \lambda)$.</p>
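<p>A quick empirical check of the convolution result (purely illustrative, with arbitrary parameters): the empirical pmf of <span class="math-container">$X+Y$</span> should match that of a single <span class="math-container">$\mathcal P(\lambda+\mu)$</span> sample.</p> <pre><code>import numpy as np

rng = np.random.default_rng(1)
lam, mu, n = 2.0, 3.5, 500_000

s = rng.poisson(lam, n) + rng.poisson(mu, n)   # X + Y, independent Poissons
t = rng.poisson(lam + mu, n)                   # a single Poisson(lam + mu)

for k in range(8):
    print(k, round((s == k).mean(), 4), round((t == k).mean(), 4))
</code></pre>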
<p>Another approach is to use characteristic functions. If $X\sim \mathrm{po}(\lambda)$, then the characteristic function of $X$ is (if this is unknown, just calculate it) $$ \varphi_X(t)=E[e^{itX}]=e^{\lambda(e^{it}-1)},\quad t\in\mathbb{R}. $$ Now suppose that $X$ and $Y$ are <em>independent</em> Poisson distributed random variables with parameters $\lambda$ and $\mu$ respectively. Then due to the independence we have that $$ \varphi_{X+Y}(t)=\varphi_X(t)\varphi_Y(t)=e^{\lambda(e^{it}-1)}e^{\mu(e^{it}-1)}=e^{(\mu+\lambda)(e^{it}-1)},\quad t\in\mathbb{R}. $$ As the characteristic function completely determines the distribution, we conclude that $X+Y\sim\mathrm{po}(\lambda+\mu)$.</p>
logic
<p>Me and my friend were arguing over this &quot;fact&quot; that we all know and hold dear. However, I do know that <span class="math-container">$1+1=2$</span> is an axiom. That is why I beg to differ. Neither of us have the required mathematical knowledge to convince each other.</p> <p>And that is why, we decided to turn to Math Stackexchange for help.</p> <p>What would be stack's opinion?</p>
<p>It seems that you and your friend lack the mathematical knowledge to handle this delicate point. What is a proof? What is an axiom? What are $1,+,2,=$?</p> <p>Well, let me try and be concise about things.</p> <ul> <li><p>A proof is a short sequence of deductions from axioms and assumptions, where at every step we deduce information from our axioms, our assumptions and previously deduced sentences.</p></li> <li><p>An axiom is simply an assumption.</p></li> <li><p>$1,+,2,=$ are just letters and symbols. We usually associate $=$ with equality; that is, two things are equal if and only if they are the same thing. As for $1,2,+$, we have a natural understanding of what they are but it is important to remember those are just letters which can be used elsewhere (and they are used elsewhere, often).</p></li> </ul> <p>You want to prove to your friend that $1+1=2$, where those symbols are interpreted as they are naturally perceived. $1$ is the number of hands attached to a healthy arm of a human being; $2$ is the number of arms attached to a healthy human being; and $+$ is the natural sense of addition. </p> <p>From the above, what you want to show, mathematically, is that if you are a healthy human being then you have exactly two hands.</p> <p>But in mathematics we don't talk about hands and arms. We talk about mathematical objects. We need a suitable framework, and we need axioms to define the properties of these objects. For the sake of the natural numbers which include $1,2,+$ and so on, we can use the <strong>Peano Axioms</strong> (PA). These axioms are commonly accepted as the definition of the natural numbers in mathematics, so it makes sense to choose them.</p> <p>I don't want to give a full exposition of PA, so I will only use the part I need from the axioms, the one discussing addition. We have three primary symbols in the language: $0, S, +$. And our axioms are:</p> <ol> <li>For every $x$ and for every $y$, $S(x)=S(y)$ if and only if $x=y$.</li> <li>For every $x$ either $x=0$ or there is some $y$ such that $x=S(y)$.</li> <li>There is no $x$ such that $S(x)=0$.</li> <li>For every $x$ and for every $y$, $x+y=y+x$.</li> <li>For every $x$, $x+0=x$.</li> <li>For every $x$ and for every $y$, $x+S(y)=S(x+y)$.</li> </ol> <p>These axioms tell us that $S(x)$ is to be thought of as $x+1$ (the successor of $x$), and they tell us that addition is commutative and what relation it bears to the successor function.</p> <p>Now we need to define what $1$ and $2$ are. Well, $1$ is a shorthand for $S(0)$ and $2$ is a shorthand for $S(1)$, or $S(S(0))$.</p> <p><strong>Finally!</strong> We can write a proof that $1+1=2$:</p> <blockquote> <ol> <li>$S(0)+S(0)=S(S(0)+0)$ (by axiom 6).</li> <li>$S(0)+0 = S(0)$ (by axiom 5).</li> <li>$S(S(0)+0) = S(S(0))$ (by the second deduction and axiom 1).</li> <li>$S(0)+S(0) = S(S(0))$ (from the first and third deductions).</li> </ol> </blockquote> <p>And that is what we wanted to prove.</p> <hr> <p>Note that the context is quite important. We are free to define the symbols to mean whatever it is we want them to mean. We can easily define a new context, and a new framework in which $1+1\neq 2$.
Much like we can invent a whole new language in which <em>Bye</em> is a word for greeting people when you meet them, and <em>Hi</em> is a word for greeting people as they leave.</p> <p>To see that $1+1\neq2$ in <em>some</em> context, simply define the following axioms:</p> <ol> <li>$1\neq 2$</li> <li>For every $x$ and for every $y$, $x+y=x$.</li> </ol> <p>Now we can write a proof that $1+1\neq 2$:</p> <ol> <li>$1+1=1$ (axiom 2 applied for $x=1$).</li> <li>$1\neq 2$ (axiom 1).</li> <li>$1+1\neq 2$ (from the first and second deductions).</li> </ol> <hr> <p><strong>If you read this far, you might also be interested to read these:</strong></p> <ol> <li><a href="https://math.stackexchange.com/questions/95069/how-would-one-be-able-to-prove-mathematically-that-11-2/">How would one be able to prove mathematically that $1+1 = 2$?</a></li> <li><a href="https://math.stackexchange.com/questions/190690/what-is-the-basis-for-a-proof/">What is the basis for a proof?</a></li> <li><a href="https://math.stackexchange.com/questions/182303/how-is-a-system-of-axioms-different-from-a-system-of-beliefs">How is a system of axioms different from a system of beliefs?</a></li> </ol>
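<p>The four-step derivation above can be mechanized in a few lines; here is a toy model of axioms 5 and 6 in which numerals are nested tuples (the encoding is an arbitrary choice made for this illustration, not part of PA).</p> <pre><code>ZERO = ()

def S(x):
    """Successor: wrap the numeral in one more layer."""
    return (x,)

def add(x, y):
    """Addition via the recursion x + 0 = x (axiom 5) and x + S(y) = S(x + y) (axiom 6)."""
    if y == ZERO:
        return x
    return S(add(x, y[0]))

ONE = S(ZERO)      # shorthand for S(0)
TWO = S(ONE)       # shorthand for S(S(0))

print(add(ONE, ONE) == TWO)   # True: S(0) + S(0) = S(S(0))
</code></pre>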
<p>Those interested in pushing this question back further than Asaf Karagila did (well past logic and into the morass of philosophy) may be interested in the following comments that were written in 1860 (full reference below). Also, although Asaf's treatment here avoids this, there are certain issues when defining addition of natural numbers in terms of the successor operation that are often overlooked. See my <a href="http://mathforum.org/kb/message.jspa?messageID=7614303" rel="noreferrer">22 November 2011</a> and <a href="http://mathforum.org/kb/message.jspa?messageID=7617938" rel="noreferrer">28 November 2011</a> posts in the Math Forum group math-teach.</p> <blockquote> <p><span class="math-container">$[\ldots]$</span> consider this case. There is a world in which, whenever two pairs of things are either placed in proximity or are contemplated together, a fifth thing is immediately created and brought within the contemplation of the mind engaged in putting two and two together. This is surely neither inconceivable, for we can readily conceive the result by thinking of common puzzle tricks, nor can it be said to be beyond the power of Omnipotence, yet in such a world surely two and two would make five. That is, the result to the mind of contemplating two two’s would be to count five. This shows that it is not inconceivable that two and two might make five; but, on the other hand, it is perfectly easy to see why in this world we are absolutely certain that two and two make four. There is probably not an instant of our lives in which we are not experiencing the fact. We see it whenever we count four books, four tables or chairs, four men in the street, or the four corners of a paving stone, and we feel more sure of it than of the rising of the sun to-morrow, because our experience upon the subject is so much wider and applies to such an infinitely greater number of cases.</p> </blockquote> <p>The above passage comes from:</p> <p><a href="http://en.wikipedia.org/wiki/James_Fitzjames_Stephen" rel="noreferrer">James Fitzjames Stephen</a> (1829-1894), Review of <a href="http://en.wikipedia.org/wiki/Henry_Longueville_Mansel" rel="noreferrer">Henry Longueville Mansel</a> (1820-1871), <a href="http://books.google.com/books?id=59UNAAAAYAAJ" rel="noreferrer"><strong>Metaphysics; or, the Philosophy of Consciousness, Phenomenal and Real</strong></a> (1860), <strong>The Saturday Review</strong> 9 #244 (30 June 1860), pp. 840-842. [see <a href="http://books.google.com/books?id=yVwwAQAAMAAJ&amp;pg=PA842" rel="noreferrer">page 842</a>]</p> <p>Stephen’s review of Mansel's book is reprinted on pp. 320-335 of Stephen's 1862 book <a href="http://books.google.com/books?id=RssBAAAAQAAJ" rel="noreferrer"><strong>Essays</strong></a>, where the quote above can be found on <a href="http://books.google.com/books?id=RssBAAAAQAAJ&amp;pg=PA333" rel="noreferrer">page 333</a>.</p> <p><em>(ADDED 2 YEARS LATER)</em> Because my answer continues to receive sporadic interest and because I came across something this weekend related to it, I thought I would extend my answer by adding a couple of items.</p> <p>The first new item, <strong>[A]</strong>, is an excerpt from a 1945 paper by Charles Edward Whitmore. I came across Whitmore's paper several years ago when I was looking through all the volumes of the journal <strong>Journal of the History of Ideas</strong> at a nearby university library. Incidentally, Whitmore's paper is where I learned about speculations of James Fitzjames Stephen that are given above. 
The second new item, <strong>[B]</strong>, is an excerpt from an essay by Augustus De Morgan that I read this last weekend. De Morgan's essay is item <strong>[15]</strong> in my answer to the History of Science and Math StackExchange question <a href="https://hsm.stackexchange.com/questions/451/did-galileos-writings-on-infinity-influence-cantor">Did Galileo's writings on infinity influence Cantor?</a>, and his essay is also mentioned in item <strong>[8]</strong>. I've come across references to De Morgan's essay from time to time over the years, but I've never read it because I never bothered trying to look it up in a university library. However, when I found to my surprise (but I really shouldn't have been surprised) that a digital copy of the essay was freely available on the internet when I searched for it about a week ago, I made a print copy, which I then read through when I had some time (this last weekend).</p> <p><strong>[A]</strong> Charles Edward Whitmore (1887-1970), <a href="http://www.jstor.org/stable/2707061" rel="noreferrer"><em>Mill and mathematics: An historical note</em></a>, <strong>Journal of the History of Ideas</strong> 6 #1 (January 1945), 109-112. MR 6,141n; Zbl 60.01622</p> <blockquote> <p><strong>(first paragraph of the paper, on p. 109)</strong> In various philosophical works one encounters the statement that J. S. Mill somewhere asserted that two and two might conceivably make five. Thus, Professor Lewis says<span class="math-container">$^1$</span> that Mill &quot;asked us to suppose a demon sufficiently powerful and maleficent so that every time two things were brought together with two other things, this demon should always introduce a fifth&quot;; but he gives no specific reference. <strong>{{footnote:</strong> <span class="math-container">$^1$</span>C. I. Lewis, <em>Mind and the World Order</em> (1929), 250.<strong>}}</strong> C. S. Peirce<span class="math-container">$^2$</span> puts it in the form, &quot;when two things were put together a third should spring up,&quot; calling it a doctrine usually attributed to Mill. <strong>{{footnote:</strong> <span class="math-container">$^2$</span><em>Collected Papers</em>, IV, 91 (dated 1893). The editors supply a reference to <em>Logic</em>, II, vi, 3.<strong>}}</strong> Albert Thibaudet<span class="math-container">$^3$</span> ascribes to &quot;a Scottish philosopher cited by Mill&quot; the doctrine that the addition of two quantities might lead to the production of a third. <strong>{{footnote:</strong> <span class="math-container">$^3$</span>Introduction to <em>Les Idées de Charles Maurras</em> (1920), 7.<strong>}}</strong> Again, Professor Laird remarks<span class="math-container">$^4$</span> that &quot;Mill suggested, we remember, that two and two might not make four in some remote part of the stellar universe,&quot; referring to <em>Logic</em> III, xxi, 4 and II, vi, 2. <strong>{{footnote:</strong> <span class="math-container">$^4$</span>John Laird, Knowledge, Belief, and Opinion (1930), 238.<strong>}}</strong> These instances, somewhat casually collected, suggest that there is some confusion in the situation.</p> <p><strong>(from pp. 109-111)</strong> Moreover, the notion that two and two should [&quot;could&quot; intended?] make five is entirely opposed to the general doctrine of the <em>Logic</em>. <span class="math-container">$[\cdots]$</span> Nevertheless, though these views stand in the final edition of the <em>Logic</em>, it is true that Mill did, in the interval, contrive to disallow them. 
After reading through the works of Sir William Hamilton three times, he delivered himself of a massive Examination of that philosopher, in the course of which he reverses his position--but at the suggestion of another thinker. In chapter VI he falls back on the inseparable associations generated by uniform experience as compelling us to conceive two and two as four, so that &quot;we should probably have no difficulty in putting together the two ideas supposed to be incompatible, if our experience had not first inseparably associated one of them with the contradictory of the other.&quot; To this he adds, &quot;That the reverse of the most familiar principles of arithmetic and geometry might have been made conceivable even to our present mental faculties, if those faculties had coexisted with a totally different constitution of external nature, is ingeniously shown in the concluding paper of a recent volume, anonymous, but of known authorship, Essays, by a Barrister.&quot; The author of the work in question was James Fitzjames Stephen, who in 1862 had brought together various papers which had appeared in <em>Saturday Review</em> during some three previous years. Some of them dealt with philosophy, and it is from a review of Mansel's <em>Metaphysics</em> that Mill proceeds to quote in support of his new doctrine <span class="math-container">$[\cdots]$</span></p> <p><strong>Note:</strong> On p. 111 Whitmore argues against Mill's and Stephen's empirical viewpoint of &quot;two plus two equals four&quot;. Whitmore's arguments are not very convincing to me.</p> <p><strong>(from p. 112)</strong> Mill, then, did not originate the idea, but adopted it from Stephen, in the form that two and two might make five to our present faculties, if external nature were differently constituted. He did not assign it to some remote part of the universe, nor did he call in the activity of some maleficent demon; neither did he say that one and one might make three. He did not explore its implications, or inquire how it might be reconciled with what he had said in other places; but at least he is entitled to a definite statement of what he did say. I confess that I am somewhat puzzled at the different forms in which it has been quoted, and at the irrelevant details which have been added.</p> </blockquote> <p><strong>[B]</strong> Augustus De Morgan (1806-1871), <em>On infinity; and on the sign of equality</em>, <strong>Transactions of the Cambridge Philosophical Society</strong> 11 Part I (1871), 145-189.</p> <blockquote> <p><a href="http://catalog.hathitrust.org/Record/000125921" rel="noreferrer">Published separately as a booklet</a> by Cambridge University Press in 1865 (same title; i + 45 pages). The following excerpt is from the version published in 1865.</p> <p><strong>(footnote 1 on p. 14)</strong> We are apt to pronounce that the admirable <em>pre-established harmony</em> which exists between the subjective and objective is a necessary property of mind. It may, or may not, be so. Can we not grant to omnipotence the power to fashion a mind of which the primary counting is by twos, <span class="math-container">$0,$</span> <span class="math-container">$2,$</span> <span class="math-container">$4,$</span> <span class="math-container">$6,$</span> &amp;c.; a mind which always finds its first indicative notion in <em>this and that</em>, and only with effort separates <em>this</em> from <em>that</em>. 
I cannot invent the fundamental forms of language for this mind, and so am obliged to make it contradict its own nature by using our terms. The attempt to think of such things helps towards the habit of distinguishing the subjective and objective.</p> <p><strong>Note:</strong> Those interested in such speculations will also want to look at De Morgan's lengthy footnote on p. 20.</p> </blockquote> <p><em>(ADDED 6 YEARS LATER)</em> I recently read Ian Stewart's 2006 book <a href="https://rads.stackoverflow.com/amzn/click/com/0465082319" rel="noreferrer" rel="nofollow noreferrer"><strong>Letters to a Young Mathematician</strong></a> and in this book there is a passage (see below) that I think is worth including here.</p> <blockquote> <p><strong>(from pp. 30-31)</strong> I think human math is more closely linked to our particular physiology, experiences, and psychological preferences than we imagine. It is parochial, not universal. Geometry's points and lines may seem the natural basis for a theory of shape, but they are also the features into which our visual system happens to dissect the world. An alien visual system might find light and shade primary, or motion and stasis, or frequency of vibration. An alien brain might find smell, or embarrassment, but not shape, to be fundamental to its perception of the world. And while discrete numbers like <span class="math-container">$1,$</span> <span class="math-container">$2,$</span> <span class="math-container">$3,$</span> seem universal to us, they trace back to our tendency to assemble similar things, such as sheep, and consider them property: has one of <em>my</em> sheep been stolen? Arithmetic seems to have originated through two things: the timing of the seasons and commerce. But what of the blimp creatures of distant Poseidon, a hypothetical gas giant like Jupiter, whose world is a constant flux of turbulent winds, and who have no sense of individual ownership? Before they could count up to three, whatever they were counting would have blown away on the ammonia breeze. They would, however, have a far better understanding than we do of the math of turbulent fluid flow.</p> </blockquote>
linear-algebra
<p>This question aims to create an &quot;<a href="http://meta.math.stackexchange.com/q/1756/18880">abstract duplicate</a>&quot; of numerous questions that ask about determinants of specific matrices (I may have missed a few):</p> <ul> <li><a href="https://math.stackexchange.com/q/153457/18880">Characteristic polynomial of a matrix of $1$&#39;s</a></li> <li><a href="https://math.stackexchange.com/q/55165/18880">Eigenvalues of the rank one matrix $uv^T$</a></li> <li><a href="https://math.stackexchange.com/q/577937/18880">Calculating $\det(A+I)$ for matrix $A$ defined by products</a></li> <li><a href="https://math.stackexchange.com/q/84206/18880">How to calculate the determinant of all-ones matrix minus the identity?</a></li> <li><a href="https://math.stackexchange.com/q/86644/18880">Determinant of a specially structured matrix ($a$&#39;s on the diagonal, all other entries equal to $b$)</a></li> <li><a href="https://math.stackexchange.com/q/629892/18880">Determinant of a special $n\times n$ matrix</a></li> <li><a href="https://math.stackexchange.com/q/689111/18880">Find the eigenvalues of a matrix with ones in the diagonal, and all the other elements equal</a></li> <li><a href="https://math.stackexchange.com/q/897469/18880">Determinant of a matrix with $t$ in all off-diagonal entries.</a></li> <li><a href="https://math.stackexchange.com/q/227096/18880">Characteristic polynomial - using rank?</a></li> <li><a href="https://math.stackexchange.com/q/3955338/18880">Caclulate $X_A(x) $ and $m_A(x) $ of a matrix $A\in \mathbb{C}^{n\times n}:a_{ij}=i\cdot j$</a></li> <li><a href="https://math.stackexchange.com/q/219731/18880">Determinant of rank-one perturbations of (invertible) matrices</a></li> </ul> <p>The general question of this type is</p> <blockquote> <p>Let <span class="math-container">$A$</span> be a square matrix of rank<span class="math-container">$~1$</span>, let <span class="math-container">$I$</span> the identity matrix of the same size, and <span class="math-container">$\lambda$</span> a scalar. What is the determinant of <span class="math-container">$A+\lambda I$</span>?</p> </blockquote> <p>A clearly very closely related question is</p> <blockquote> <p>What is the characteristic polynomial of a matrix <span class="math-container">$A$</span> of rank<span class="math-container">$~1$</span>?</p> </blockquote>
<p>The formulation in terms of the characteristic polynomial leads immediately to an easy answer. For once one uses knowledge about the eigenvalues to find the characteristic polynomial instead of the other way around. Since $A$ has rank$~1$, the kernel of the associated linear operator has dimension $n-1$ (where $n$ is the size of the matrix), so there is (unless $n=1$) an eigenvalue$~0$ with geometric multiplicity$~n-1$. The algebraic multiplicity of $0$ as eigenvalue is then at least $n-1$, so $X^{n-1}$ divides the characteristic polynomial$~\chi_A$, and $\chi_A=X^n-cX^{n-1}$ for some constant$~c$. In fact $c$ is the trace $\def\tr{\operatorname{tr}}\tr(A)$ of$~A$, since this holds for the coefficient of $X^{n-1}$ of <em>any</em> square matrix of size$~n$. So the answer to the second question is</p> <blockquote> <p>The characteristic polynomial of an $n\times n$ matrix $A$ of rank$~1$ is $X^n-cX^{n-1}=X^{n-1}(X-c)$, where $c=\tr(A)$.</p> </blockquote> <p>The nonzero vectors in the $1$-dimensional image of$~A$ are eigenvectors for the eigenvalue$~c$, in other words $A-cI$ is zero on the image of$~A$, which implies that $X(X-c)$ is an annihilating polynomial for$~A$. Therefore</p> <blockquote> <p>The minimal polynomial of an $n\times n$ matrix $A$ of rank$~1$ with $n&gt;1$ is $X(X-c)$, where $c=\tr(A)$. In particular a rank$~1$ square matrix $A$ of size $n&gt;1$ is diagonalisable if and only if $\tr(A)\neq0$.</p> </blockquote> <p>See also <a href="https://math.stackexchange.com/q/52395/18880">this question</a>.</p> <p>For the first question we get from this (replacing $A$ by $-A$, which is also of rank$~1$)</p> <blockquote> <p>For a matrix $A$ of rank$~1$ one has $\det(A+\lambda I)=\lambda^{n-1}(\lambda+c)$, where $c=\tr(A)$.</p> </blockquote> <p>In particular, for an $n\times n$ matrix with diagonal entries all equal to$~a$ and off-diagonal entries all equal to$~b$ (which is the most popular special case of a linear combination of a scalar and a rank-one matrix) one finds (using for $A$ the all-$b$ matrix, and $\lambda=a-b$) as determinant $(a-b)^{n-1}(a+(n-1)b)$.</p>
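<p>A quick numerical spot-check of the boxed determinant formula (random rank-one matrix, arbitrary <span class="math-container">$\lambda$</span>; floating-point only, so expect agreement up to rounding):</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)
n, lam = 6, 2.3

u, v = rng.standard_normal(n), rng.standard_normal(n)
A = np.outer(u, v)                        # a generic rank-one matrix

lhs = np.linalg.det(A + lam * np.eye(n))
rhs = lam ** (n - 1) * (lam + np.trace(A))
print(lhs, rhs)                           # agree up to floating-point error
</code></pre>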
<p>Here’s an answer without using eigenvalues: the rank of <span class="math-container">$A$</span> is <span class="math-container">$1$</span> so its image is spanned by some nonzero vector <span class="math-container">$v$</span>. Let <span class="math-container">$\mu$</span> be such that <span class="math-container">$$Av=\mu v$$</span> (such a <span class="math-container">$\mu$</span> exists because <span class="math-container">$Av$</span> lies in the image of <span class="math-container">$A$</span>, which is spanned by <span class="math-container">$v$</span>).</p> <p>We can extend this vector <span class="math-container">$v$</span> to a basis of <span class="math-container">$\mathbb{C}^n$</span>. With respect to this basis now, we have that the matrix of <span class="math-container">$A$</span> has all rows except the first one equal to <span class="math-container">$0$</span>. Since determinant and trace are basis-independent, it follows by expanding <span class="math-container">$\det(A-\lambda I)$</span> along the first column with respect to this basis that <span class="math-container">$$\det(A-\lambda I)= (-1)^n(\lambda -\mu)\lambda^{n-1}.$$</span> Using this same basis as above we also see that <span class="math-container">$\text{Tr}(A) =\mu$</span>, so the characteristic polynomial of <span class="math-container">$A$</span> turns out to be</p> <p><span class="math-container">$$(-1)^n(\lambda -\text{Tr}(A))\lambda^{n-1}.$$</span></p>
logic
<p>The standard proof by contradiction goes like</p> <ol> <li>It is known that $P$ is true.</li> <li>Assume that $Q$ is true.</li> <li>Using the laws of logic, deduce that $P$ is false.</li> <li>Rejecting this contradiction, we are forced to accept the falsity of $Q$.</li> </ol> <p>In rejecting the contradiction we implicitly assume that mathematics is consistent. However, doesn't <a href="http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems#Second_incompleteness_theorem">Godel's (Second) Incompleteness Theorem</a> tell us that the consistency of mathematics cannot be proven? Does this pose a problem?</p>
<p>Godel's Incompleteness Theorem does not apply to every mathematical system. However, let us suppose we are working in a system to which it applies. If our system is consistent, your proof of $\neg Q$ is meaningful. If our system is inconsistent, then everything can be proven from it. So, either way, your proof tells us that $\neg Q$ is a consequence of the axioms of our system.</p>
<p>If logic is consistent, we have proven Q false.</p> <p>If logic is inconsistent, then <em>all</em> statements are false (and true, simultaneously).</p> <p>Either way, Q is false.</p>
logic
<p><em>Background: I'm a logic student with very little background in cohomology etc., so this question is fairly naive.</em></p> <hr> <p>Although mathematical logic is generally perceived as sitting off on its own, there are some striking applications of algebraic/geometric/combinatorial ideas to logic. In general, I'm very interested in the following broad question: </p> <blockquote> <p>"How should I go about looking for pieces of mathematics far from mathematical logic, which have bearing on some piece of mathematical logic?"</p> </blockquote> <p>Right now, I'm specifically interested in the following: </p> <blockquote> <p>"When should I think 'cohomology!'?"</p> </blockquote> <p>The specific example I'm motivated by is a pair of papers by Dan Talayco (<a href="http://arxiv.org/pdf/math/9311205.pdf">http://arxiv.org/pdf/math/9311205.pdf</a>, <a href="http://www.sciencedirect.com/science/article/pii/0168007295000240">http://www.sciencedirect.com/science/article/pii/0168007295000240</a>) in which he develops cohomology theories for two purely set-theoretic objects: Hausdorff gaps, and particularly weird infinite trees ("Todorcevic trees").</p> <p>At the beginning of his paper on Hausdorff gaps, Talayco mentions that</p> <blockquote> <p>"the original observation that gaps are cohomological in nature is due to Blass."</p> </blockquote> <p>This is something I want to be able to do! I can tell that e.g. Hausdorff gaps are all about "not being able to fill something in," but it's a long way between that vague statement and the intuition that there should be a cohomology theory around it, let alone coming up with the specifics. So my question is:</p> <blockquote> <p><strong>Question.</strong> When should I suspect that some piece of mathematics (ideally far from algebra/geometry) has a cohomological interpretation, and how should I go about figuring out what the specifics should be?</p> </blockquote> <p>To clarify: although 'useful' is always good, I'm just asking how I can tell that cohomology <em>can</em> be attached to some piece of mathematics (especially logic), regardless of whether it yields new results.</p>
<p>I guess I will take a crack at this.</p> <p>First of all, it is probably worthwhile for you to learn some cohomology in its original home so that you have some intuition for it, and some knowledge of what theorems there are, how to compute it, etc. You do not give much indication in your question as to how much knowledge of cohomology you have currently.</p> <p>Generally the intuition behind cohomology groups is that they measure the failure of "locally consistent" things to be "globally consistent".</p> <p>Examples:</p> <p>The first de Rham cohomology group of the punctured plane is 1-dimensional since, up to adding the gradient of a global function and rescaling, there is essentially only one vector field on this space which is locally a gradient of a function but not globally the gradient of a function.</p> <p>The <a href="http://en.wikipedia.org/wiki/Penrose_triangle" rel="noreferrer">Penrose triangle</a> represents a nontrivial cohomology class over the multiplicative group of positive reals, since it "locally" looks like a perspective drawing, but there is no "global" object realizing that.</p> <p>If local exchange rates between countries allow arbitrage, then there is no globally consistent exchange rate, so current exchange rates give a nontrivial cohomology class.</p> <p>The axiom of choice says every surjection splits. In fact, even without the axiom of choice, every surjection splits locally (for every point in the codomain, I can find an inverse image), and so the axiom of choice is a local-to-global statement: these local inverses can be assembled into a global section. Blass has written a bit about this in <a href="http://www.ams.org/journals/tran/1983-279-01/S0002-9947-1983-0704615-7" rel="noreferrer">Blass - Cohomology detects failure of the axiom of choice</a>, but there is still a lot more work to be done with this concept.</p> <p>The moral is just to be on the lookout for situations where things seem to fit together in small bits, but somehow the whole does not work out. There is, more than likely, cohomology playing into this somehow. </p> <p>I will mention that (from my perspective) sheaf cohomology probably formalizes this intuitive perspective the best, since you do not have to start with a sequence of maps with differentials (where do those come from?), just a notion of local objects and how to patch them. So I would recommend learning some sheaf cohomology if you are planning on looking for cohomology far from algebraic topology.</p>
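<p>The exchange-rate example in the answer above can be made concrete with a toy computation (numbers invented for the illustration): rates are globally consistent exactly when they come from a single price vector, i.e. when every directed cycle of rates multiplies to <span class="math-container">$1$</span>; a cycle product different from <span class="math-container">$1$</span> is the "nontrivial cohomology class", alias an arbitrage opportunity.</p> <pre><code>import numpy as np

# r[i, j] = units of currency j received per unit of currency i.
consistent = np.array([[1.0,   2.0,  8.0],
                       [0.5,   1.0,  4.0],
                       [0.125, 0.25, 1.0]])   # comes from prices p = (1, 2, 8)

arbitrage = consistent.copy()
arbitrage[1, 2] = 3.75                        # perturb one local rate

def cycle_product(r):
    """Product of rates around the 3-cycle through currencies 0, 1, 2."""
    return r[0, 1] * r[1, 2] * r[2, 0]

print(cycle_product(consistent))   # 1.0:    a consistent global price vector exists
print(cycle_product(arbitrage))    # 0.9375: no global prices, i.e. arbitrage
</code></pre>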
<p>As Steven Gubkin and Ryan Budney have already pointed out, cohomology is often used to measure how far a "locally consistent" object is from being "globally consistent". I thought I might describe another example of this in set theory, which it turns out contains a lot of open problems. As far as I know, this example hasn't been written down anywhere, so I'll have to be a bit verbose. I won't attempt an actual answer to the OP's question, but I hope that what I write down might be helpful.</p> <p>Disclaimer: Justin Moore told me about this problem many years ago, when I was just beginning grad school. What I'm writing down is what I've been able to reconstruct from my memory of that conversation; it might not be the most up-to-date information on the problem, or the best description of it.</p> <p>Given a subset $A$ of $\omega$, let $\Gamma(A) = \prod_{n\in A} \mathbb{Z} / \bigoplus_{n\in A} \mathbb{Z}$. Thus, an element of $\Gamma(A)$ can be described as the equivalence class of a function $f : A\to\mathbb{Z}$, where the equivalence is "$f$ and $g$ differ on at most finitely-many coordinates."</p> <p>Now given a family $\mathcal{A}\subseteq\mathcal{P}(\omega)$, and an $n &lt; \omega$, we define the groups $$ C_n(\mathcal{A}) = \prod_{A_1,\ldots,A_n} \Gamma(A_1\cap\cdots \cap A_n)$$</p> <p>When $n = 0$, this definition is a little ambiguous, so we take the opportunity to identify $C_0(\mathcal{A})$ with $\Gamma(\omega)$. (One can argue that that's the correct way of doing things, but I'll leave it to the reader.) Now we can define coboundary maps $\delta_n : C_n\to C_{n+1}$ (for all $n$, <em>including</em> $n = 0$) by $$\delta_n(F)(A_1,\ldots,A_{n+1}) = \sum_{k=0}^n (-1)^k F(A_1,\ldots,\widehat{A_{k+1}},\ldots,A_{n+1})$$ where as usual $\widehat{A_{k+1}}$ means we drop $A_{k+1}$ from the list. One can prove as usual that $\delta_{n+1}\circ \delta_n = 0$, so $H_n(\mathcal{A}) = \ker{\delta_{n+1}} / \textrm{im}\;\delta_n$ makes sense.</p> <p>If you work through the definitions, you can see that $\delta_0$ maps a function (or more accurately, its equivalence class) to its restrictions to elements of $\mathcal{A}$. $\delta_1$ takes a collection of functions defined on members of $\mathcal{A}$ to their differences (on the pairwise intersections).</p> <p>Hence, a member of $\ker{\delta_1}$ is a family of functions $f_A : A\to \mathbb{Z}$ ($A\in\mathcal{A}$) such that $f_A\upharpoonright A\cap B$ and $f_B\upharpoonright A\cap B$ agree mod-finite for every $A,B$. Such families have been studied before by set theorists, and are called <em>coherent families</em>. The question of whether such a coherent family is in $\textrm{im}\;\delta_0$ is exactly what comes up in Dow, Simon and Vaughan's paper "Strong homology and the proper forcing axiom". They prove there the following (I'm paraphrasing a little bit):</p> <p><strong>Theorem 1</strong>: Assume $\mathfrak{d} = \omega_1$. Then there is a $P$-ideal $\mathcal{I}\subseteq\omega$ (in fact, $\mathcal{I}$ is just $\emptyset\times\textrm{fin}$) such that $H_0(\mathcal{I}) \neq 0$.</p> <p><strong>Theorem 2</strong>: Assume the Proper Forcing Axiom. Then $H_0(\mathcal{I}) = 0$ for every $P_{\aleph_1}$-ideal $\mathcal{I}$.</p> <p>Actually, it's not hard to prove that $2^\omega &lt; 2^{\omega_1}$ implies that $H_0(\mathcal{A})$ has size $2^{\omega_1}$, whenever $\mathcal{A}$ is a $\subset^*$-increasing $\omega_1$-sequence in $\mathcal{P}(\omega)$. 
Moreover, Velickovic proves Theorem 2 from just OCA in his paper "OCA and automorphisms of $\mathcal{P}(\omega)/\mathrm{fin}$".</p> <p>Okay, so we have a lot of natural questions!</p> <p><strong>Question</strong>: What is the possible behavior of $H_n(\mathcal{A})$ for various sets $\mathcal{A}$, and $n\ge 1$? Does PFA (or MM, or whatever) imply that they're all trivial, whenever (say) $\mathcal{A}$ is a $P$-ideal? Can we consistently get $H_n(\mathcal{A}) \neq 0$ for all $n\ge 1$? All at the same time? Etc.</p> <p>Here's another, entirely unrelated problem. It's known that for every $n$, there is a $\sigma$-$n$-linked poset of size $\mathfrak{b}$, which has no $n+1$-linked subset of size $\mathfrak{b}$. (See Todorcevic, "Remarks on cellularity in products.") Can you express this using cohomology (or maybe homology)?</p>
geometry
<p>How can one see that a dot product gives the cosine of the angle between two vectors (assuming they are normalized)?</p> <p>Thinking about how to prove this in the most intuitive way resulted in proving a trigonometric identity: $\cos(a+b)=\cos(a)\cos(b)-\sin(a)\sin(b)$.</p> <p>But even after proving this successfully, the connection between the cosine and the dot product does not immediately stick out; instead I rely on remembering that this is valid while taking comfort in the fact that I've seen the proof in the past.</p> <p>My questions are:</p> <ol> <li><p>How do you see this connection?</p></li> <li><p>How do you extend the notion of dot product vs. angle to higher dimensions - 4 and higher?</p></li> </ol>
<p>The dot product is basically a more flexible way of working with the Euclidean norm. You know that if you have the dot product $\langle x, y \rangle$, then you can define the Euclidean norm via $$\lVert x\rVert = \sqrt{\langle x, x \rangle}.$$</p> <p>Conversely, it turns out that you can recover the dot product from the Euclidean norm using the <a href="http://en.wikipedia.org/wiki/Polarization_identity" rel="noreferrer">polarization identity</a> $$\langle x, y \rangle = \frac{1}{4} \left(\lVert x + y\rVert^2 - \lVert x - y\rVert^2 \right).$$</p> <p>Okay, so how can you see the relationship between the dot product and cosines? The key is the <strong>law of cosines</strong>, which in vector language says that $$\lVert a - b\rVert^2 = \lVert a\rVert^2 + \lVert b\rVert^2 - 2 \lVert a\rVert \lVert b\rVert \cos \theta$$</p> <p>where $\theta$ is the angle between $a$ and $b$. On the other hand, by bilinearity and symmetry we see that $$\lVert a - b\rVert^2 = \langle a - b, a - b \rangle = \lVert a\rVert^2 + \lVert b\rVert^2 - 2 \langle a, b \rangle$$</p> <p>so it follows that $$\langle a, b \rangle = \lVert a\rVert \lVert b\rVert \cos \theta$$</p> <p>as desired. </p> <p>Any two vectors in an $n$-dimensional Euclidean space together span a Euclidean space of dimension at most $2$, so the connection between the dot product and angles in general reduces to the case of $2$ dimensions. </p>
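<p>For a quick numerical look at these identities, here is a short Python sketch (numpy assumed; the two vectors are arbitrary choices). It recovers the dot product from norms alone via the polarization identity, and recovers $\cos\theta$ from the law of cosines, then checks $\langle a,b\rangle=\lVert a\rVert\lVert b\rVert\cos\theta$.</p>
<pre><code>import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, -1.0, 0.5])

# Polarization identity: the dot product recovered from norms alone.
polar = 0.25 * (np.linalg.norm(a + b)**2 - np.linalg.norm(a - b)**2)

# Law of cosines rearranged for cos(theta), again using lengths only.
cos_theta = (np.linalg.norm(a)**2 + np.linalg.norm(b)**2
             - np.linalg.norm(a - b)**2) / (2 * np.linalg.norm(a) * np.linalg.norm(b))

print(np.dot(a, b), polar)                                        # both roughly 2.0
print(np.dot(a, b), np.linalg.norm(a) * np.linalg.norm(b) * cos_theta)
</code></pre>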
<p>Here's one way to remember it easily: assume one of the two unit vectors is $(1,0)$ (by an appropriate choice of coordinates we may assume we are working in $2$ dimensions, and then that one of the vectors is the standard basis vector). Then the dot product is just the $x$-coordinate of the other, which is by definition the cosine of the angle between them.</p>
differentiation
<p>Does a function, $f(x)$, exist such that $\int f(x) dx $ can be found but $f' (x)$ cannot be found in terms of elementary functions?</p> <p>For example, if $f(x)=e^{x^2}$, then the derivative is easily calculated by using the chain rule. However, there does not exist an anti-derivative in terms of elementary functions. </p> <p>Does a function exist with the opposite property?</p>
<p>If the antiderivative $F$ of $f$ is elementary, then so is $f' = F''$ (for any reasonable definition of "elementary function"). Thus, no such example can be found.</p> <hr> <p><strong>EDIT</strong></p> <p>Here are some more details which were adressed in the comments and/or other answers:</p> <ol> <li><p>What I assume here is that for your favorite definition of "elementary function", the following is true: Every elementary function is differentiable and the derivative is again an elementary function.</p> <p>This is indeed fulfilled (on the respective domains) if you take as your elementary functions all functions which can be obtained from $\exp, \ln, \sin, \cos$ and polynomials by taking sums/quotients/products and compositions of these functions. This is a consequence of the chain rule.</p> <p>It is <strong>not</strong> fulfilled, however, if you also want to include roots, since e.g. $x \mapsto \sqrt{x}$ is not differentiable at the origin. But note that it is true if you only consider the roots as functions on $(0,\infty)$ instead of $[0,\infty)$.</p></li> <li><p>I assume that if your function $f$ has a continuous version (with respect to equality a.e.), you identify it with its continuous version.</p> <p>As noted in the answer of @RossMillikan, the Dirichlet function $f = 1_\Bbb{Q}$ is (Lebesgue)-integrable with "antiderivative" $x \mapsto 0$, but not differentiable. But note that $f = 0$ almost everywhere, which is elementary and has an elementary derivative.</p> <p>Finally, if $F(x) = \int_a^x f(t) \, dt$ is elementary, then (by Lebesgue's differentiation theorem) you have $f(x) = F'(x)$ almost everywhere. Hence, if $F$ is elementary (as outlined in point 1), then $F'$ is elementary and hence continuous, so that we get $f = F'$ almost everywhere. Since we agreed to identify $f$ with its continuous version, we get $f = F'$ everywhere, so that $f$ is differentiable with $f' = F''$ elementary, as claimed above.</p></li> </ol>
<p>Although the other answers say differently, I would put up the example of the <a href="http://mathworld.wolfram.com/WeierstrassFunction.html" rel="nofollow noreferrer">Weierstrass function</a> which is a <a href="http://mathworld.wolfram.com/Pathological.html" rel="nofollow noreferrer">pathological</a> mathematical idea.</p> <p>Quoting from <a href="https://en.wikipedia.org/w/index.php?title=Weierstrass_function&amp;oldid=692163780" rel="nofollow noreferrer">Wikipedia</a>:</p> <blockquote> <p>In mathematics, the Weierstrass function is an example of a <a href="https://en.wikipedia.org/wiki/Pathological_(mathematics)" rel="nofollow noreferrer">pathological</a> real-valued function on the real line. The function has the property of being continuous everywhere but differentiable nowhere. It is named after its discoverer Karl Weierstrass.</p> <p>Historically, the Weierstrass function is important because it was the first published example (1872) to challenge the notion that every continuous function was differentiable except on a set of isolated points.</p> <p>In Weierstrass' original paper, the function was defined as the sum of a Fourier series:</p> <p>$$f(x)=\sum_{n=0} ^\infty a^n \cos(b^n \pi x)$$ where $0&lt;a&lt;1$, $b$ is a positive odd integer, and</p> <p>$$ab &gt; 1+\frac{3}{2} \pi$$</p> <p>The minimum value of $b$ which satisfies these constraints is $b=7$. This construction, along with the proof that the function is nowhere differentiable, was first given by Weierstrass in a paper presented to the Königliche Akademie der Wissenschaften on 18 July 1872.</p> <p>The proof that this function is continuous everywhere is not difficult. Since the terms of the infinite series which defines it are bounded by $\pm a^n$ and this has finite sum for $0 &lt; a &lt; 1$, convergence of the sum of the terms is uniform by the Weierstrass M-test with $M_n = a^n$. Since each partial sum is continuous and the uniform limit of continuous functions is continuous, it follows $f$ is continuous.</p> </blockquote> <p>As is evident from the functional form of this special function, it does have an antiderivative. So this can be considered a valid example.</p>
logic
<h2>Uncomputable functions: Intro</h2> <p>The last month I have been going down the rabbit hole of googology (the mathematical study of large numbers) in my free time. I am still trying to wrap my head around the seeming paradox of the existence of natural numbers that are <strong>well-defined</strong> but <strong>uncomputable</strong> (in the sense that it has been proven that they can never be calculated by a human / a Turing machine). Let me give two of the most famous examples:</p> <p><strong>Busy beaver function <span class="math-container">$\Sigma(n,m)$</span></strong></p> <p><span class="math-container">$\Sigma(n,m)$</span> &quot;is defined as the maximum number of non-blank symbols that can be written (in the finished tape) with an <span class="math-container">$n$</span>-state, <span class="math-container">$m$</span>-color halting Turing machine starting from a blank tape before halting.&quot; It has been shown that <span class="math-container">$\Sigma$</span> grows faster than all computable functions and, thus, is uncomputable. Calculating <span class="math-container">$\Sigma$</span> for sufficiently large inputs would require an oracle Turing machine as it would literally be a solution to the halting problem. Thus, it is uncomputable, although the formulation of <span class="math-container">$\Sigma$</span> in set theory is precise and clear. <a href="https://googology.fandom.com/wiki/Busy_beaver_function" rel="noreferrer">More details here.</a></p> <p><strong>Rayo's number <span class="math-container">$\text{Rayo}\left(10^{100}\right)$</span></strong></p> <p>Rayo's number was the record holder in the googology community for a long time and it is defined as &quot;the smallest positive integer bigger than any finite positive integer named by an expression in the language of first-order set theory with googol symbols or less.&quot; It is defined in the language of an (unspecified) second-order set theory <a href="https://googology.fandom.com/wiki/Rayo%27s_number#Definition" rel="noreferrer">here</a>. (Its well-definedness is thus a bit controversial, but it would outgrow <span class="math-container">$\Sigma$</span> by a huge margin if resolved.)</p> <h2>My mathematical / existential questions</h2> <ul> <li><p>Does a number like <span class="math-container">$x=\Sigma\left(10^{100},10^{100}\right)$</span> &quot;exist&quot; in set theory in the same sense as the number <span class="math-container">$4$</span>? Does it even make sense to include it in a mathematical operation like <span class="math-container">$(x$</span> mod <span class="math-container">$4)$</span> or <span class="math-container">$x^x$</span> if we cannot even write it down in a decimal expansion?</p> </li> <li><p>I am well aware of Gödel's incompleteness theorems and the existence of unprovable statements like the continuum hypothesis, which can neither be proven nor disproven from the ZFC axioms in any finite number of steps. Is there some parallel between that and the existence of numbers that cannot be computed in any finite amount of time?</p> </li> <li><p>Is there some version of mathematics or system of axioms which resolves this problem? (i.e. where well-definedness of an object is equivalent to computability?)</p> </li> </ul> <p>I would be very happy if anyone could answer or point me in the right direction.</p>
<p>Replying to your three mathematical/existential questions in order:</p> <ul> <li>Yes. There exists a TM that, when started on a blank tape, eventually halts (after finitely many steps) with the exact decimal expansion of <span class="math-container">$x=\Sigma\left(10^{100},10^{100}\right)$</span> on the tape. The same is true for <span class="math-container">$x\bmod 4$</span> or <span class="math-container">$x^x$</span> <em>or any other natural number</em>.</li> <li>There does not exist a natural number &quot;that cannot be computed in any finite amount of time&quot;.</li> <li>There is no such problem for natural numbers.</li> </ul> <p>On the other hand, <em>provability</em> is another kettle o' fish: <a href="https://math.stackexchange.com/q/3854667/16397">There is no natural number <span class="math-container">$n$</span> such that <strong>ZFC proves</strong> <span class="math-container">$\ BB(7918)=n,$</span></a> and more recently we have that <a href="https://www.scottaaronson.com/papers/bb.pdf" rel="noreferrer">for any <span class="math-container">$m\ge 748,$</span> there is no natural number <span class="math-container">$n$</span> such that ZFC proves <span class="math-container">$BB(m)=n.$</span></a> (Here <span class="math-container">$BB$</span> is the Busy Beaver function w.r.t. the number of steps taken before halting.)</p> <hr /> <p><strong>NB</strong>: As your questions (and your entire Intro) seemed to be about <strong>natural numbers</strong>, that is the context of my replies above. The situation is quite different in the larger context of <strong>real numbers</strong>. Note that every natural number has a <em>finite</em> representation, which is the basic reason it is computable. In contrast, a real number typically requires an <em>infinite</em> representation, which opens the <em>possibility</em> of not being computable. (It turns out that almost all reals are not computable.)</p>
<p>My view is that questions such as these arise from a failure to distinguish between intension (a description of a thing, an arithmetic expression, program source code, etc.) and extension (the thing described, the result of evaluating the expression, the observable behaviour of the compiled program, etc.). Maybe the following &quot;theorem&quot; will throw the difference into sharp relief.</p> <p><strong>Theorem.</strong> Every natural number is computable.</p> <p><em>Proof.</em> We must show that for every natural number <span class="math-container">$n$</span>, there is a Turing machine whose output is (say) a unary representation of <span class="math-container">$n$</span>. We proceed by induction. There is certainly a Turing machine whose output is empty, i.e. a representation of <span class="math-container">$0$</span>. If we have a Turing machine whose output is a unary representation of <span class="math-container">$n$</span> it is clear that we can modify it to output just one more unit, so as to obtain a unary representation of <span class="math-container">$n + 1$</span>. Hence every natural number is indeed computable. ◼</p> <p>There is nothing wrong with the proof above, but nonetheless you may feel unable to accept the conclusion. The only way out is to conclude that there is a problem with the interpretation of the statement of the theorem. What this is actually proving is that every natural-number-in-extension is computable. This proof has nothing to do with natural-numbers-in-intension.</p> <p>In order to say anything mathematically rigorous about natural-numbers-in-intension we must first choose a mathematical model for &quot;describing natural numbers&quot;. Unfortunately, there is no canonical choice, and different choices have different expressive power. If you choose to define &quot;description of a natural number&quot; as &quot;Turing machine that computes it&quot;, then tautologically every natural-number-in-intension is computable. But you could also choose to define it as &quot;formula <span class="math-container">$\phi$</span> in the language of set theory with at least one free variable <span class="math-container">$n$</span> such that ZFC proves <span class="math-container">$\exists ! n \in \mathbb{N} . \phi$</span>&quot;. In this case it is true that not every natural-number-in-intension is computable – or, to put it another way, there is no procedure (computable or otherwise!) that will convert the formula <span class="math-container">$\phi$</span> into a Turing machine that (ZFC proves) computes the unique natural number that <span class="math-container">$\phi$</span> describes.</p>
combinatorics
<p>In an exam with <span class="math-container">$12$</span> yes/no questions with <span class="math-container">$8$</span> correct needed to pass, is it better to answer randomly or answer exactly <span class="math-container">$6$</span> times yes and 6 times no, given that the answer 'yes' is correct for exactly <span class="math-container">$6$</span> questions?</p> <p>I have calculated the probability of passing by guessing randomly and it is</p> <p><span class="math-container">$$\sum_{k=8}^{12} {{12}\choose{k}}0.5^k0.5^{12-k}=0.194$$</span></p> <p>Now given that the answer 'yes' is right exactly <span class="math-container">$6$</span> times, is it better to guess 'yes' and 'no' <span class="math-container">$6$</span> times each? </p> <p>My idea is that it can be modelled by drawing balls without replacement. The balls we draw are the correct answers to the questions.</p> <p>Looking at the first question, we still know that there are <span class="math-container">$6$</span> yes's and <span class="math-container">$6$</span> no's that are correct. The chance that a yes is right is <span class="math-container">$\frac{6}{12}$</span> and the chance that a no is right is also <span class="math-container">$\frac{6}{12}$</span>. </p> <p>Of course the probability in the next question depends on what the first right answer was. If yes was right, yes will be right with a probability of <span class="math-container">$5/11$</span> and a no is right with the chance <span class="math-container">$6/11$</span>. If no was right, the probabilities would change places.</p> <p>Now that we have to make the choice <span class="math-container">$12$</span> times and make the distinction which one was right, we get <span class="math-container">$2^{12}$</span> paths total. We cannot know what the correct answers to the previous questions were. So we are drawing <span class="math-container">$12$</span> balls at once, but from what urn? It cannot contain <span class="math-container">$24$</span> balls with <span class="math-container">$12$</span> yes and <span class="math-container">$12$</span> no's. Is this model even correct?</p> <p>Is there a more elegant way to approach that?</p> <p>I am asking for hints, not solutions, as I'm feeling stuck. Thank you.</p> <hr> <p><strong>Edit</strong>: After giving @David K's answer more thought, I noticed that the question can be described by the <a href="https://en.wikipedia.org/wiki/Hypergeometric_distribution" rel="noreferrer">hypergeometric distribution</a>, which yields the desired result.</p>
<p>We are given the fact that there are $12$ questions, that $6$ have the correct answer "yes" and $6$ have the correct answer "no."</p> <p>There are $\binom{12}{6} = 924$ different sequences of $6$ "yes" answers and $6$ "no" answers. If we know nothing that will give us a better chance of answering any question correctly than sheer luck, the most reasonable assumption is that every possible sequence of answers is equally likely, that is, each one has $\frac{1}{924}$ chance to occur.</p> <p>So guess "yes" $6$ times and "no" $6$ times. I do not care how you do that: you may guess "yes" for the first $6$, or flip a coin and answer "yes" for heads and "no" for tails until you have used up either the $6$ "yeses" or the $6$ "noes" and the rest of your answers are forced, or you can put $6$ balls labeled "yes" and $6$ labeled "no" in an urn, draw them one at a time, and answer the questions in that sequence.</p> <p>No matter <em>what</em> you do, you end up with some sequence of "yes" $6$ times and "no" $6$ times. You get $12$ correct if and only if the sequence of correct answers is exactly the same as your sequence. That probability is $\frac{1}{924}.$</p> <p>There is no way for you to get $11$ correct. You get $10$ correct if and only if the correct answers are "yes" on $5$ of your "yes" answers and "no" on your other "yes" answers. The number of ways this can happen is the number of ways to choose $5$ correct answers from your $6$ "yes" answers, times the number of ways to choose $5$ correct answers from your $6$ "no" answers: $\binom 65 \times \binom 65 = 36.$</p> <p>There is no way for you to get $9$ correct. You get $8$ correct if and only if the correct answers are "yes" on $4$ of your "yes" answers and "no" on your other "yes" answers. The number of ways this can happen is the number of ways to choose $4$ correct answers from your $6$ "yes" answers, times the number of ways to choose $4$ correct answers from your $6$ "no" answers: $\binom 64 \times \binom 64 = 225.$</p> <p>In any other case you fail. So the chance to pass is $$ \frac{1 + 36 + 225}{924} = \frac{131}{462} \approx 0.283550, $$ which is much better than the chance of passing if you simply toss a coin for each individual question but not nearly as good as getting $4$ or more heads in $6$ coin tosses.</p> <hr> <p>Just to check, we can compute the chance of failing in the same way: $6$ answers correct ($3$ "yes" and $3$ "no"), $4$ answers correct, $2$ correct, $0$ correct. This probability comes to $$ \frac{\binom 63^2 + \binom 62^2 + \binom 61^2 + 1}{924} = \frac{400 + 225 + 36 + 1}{924} = \frac{331}{462} \approx 0.716450, $$ which is the value needed to confirm the answer above.</p>
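<p>A short Python sketch reproduces these numbers (nothing here beyond the counts already derived above; the ranges just encode "at least $8$ correct"):</p>
<pre><code>from fractions import Fraction
from math import comb

# Guessing 6 "yes" / 6 "no": hypergeometric-style count, as in the answer above.
p_balanced = Fraction(sum(comb(6, j) * comb(6, j) for j in (4, 5, 6)), comb(12, 6))

# Guessing each question independently with a fair coin.
p_coin = Fraction(sum(comb(12, k) for k in range(8, 13)), 2**12)

print(p_balanced, float(p_balanced))   # 131/462, about 0.28355
print(p_coin, float(p_coin))           # 397/2048, about 0.19385
</code></pre>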
<p>In terms of balls and urns: Maybe it helps to think about it as follows:</p> <p>You have a red urn and a blue urn, and you have $6$ red balls and $6$ blue balls. You randomly put $6$ of the twelve balls in the red urn, and the other $6$ in the blue urn. Now: what it the chance that at least $8$ balls are in the 'right' (i.e. same colored) urn? </p> <p>Well, to get $8$ correct, you either need to get all $6$ red balls in the red urn ($1$ possibility), or $5$ red ones and $1$ blue in the red urn (${6 \choose 5} \cdot {6 \choose 1} = 6 \cdot 6 = 36$ possibilities), or $4$ red ones and $2$ blue ones (${6 \choose 4} \cdot {6 \choose 2} = 15 \cdot 15 = 225$ possibilities). This is out of a total of ${12 \choose 6} = 924$ possibilities, and so the probability is $\frac{1+36+225}{924}$</p> <p>NOTE: Thanks to @DavidK for pointing out my initial answer was wrong! Everyone please upvote his answer!</p>
logic
<p>I wanted to give an easy example of a non-constructive proof, or, more precisely, of a proof which states that an object exists, but gives no obvious recipe to create/find it.</p> <p>Euclid's proof of the infinitude of primes came to mind, however there is an obvious way to "fix" it: just try all the numbers between the biggest prime and the constructed number, and you'll find a prime in a finite number of steps.</p> <p>Are there good examples of <strong>simple</strong> non-constructive proofs which would require a substantial change to be made constructive? (Or better yet, can't be made constructive at all).</p>
<p>Some digit occurs infinitely often in the decimal expansion of $\pi$.</p>
<p>Claim: There exist irrational $x,y$ such that $x^y$ is rational.</p> <p>Proof: If $\sqrt2^{\sqrt2}$ is rational, take $x=y=\sqrt 2$. Otherwise take $x=\sqrt2^{\sqrt2}, y=\sqrt2$, so that $x^y=2$.</p>
linear-algebra
<p>Let <span class="math-container">$A$</span> be an <span class="math-container">$n \times n$</span> matrix and let <span class="math-container">$\Lambda$</span> be an <span class="math-container">$n \times n$</span> diagonal matrix. Is it always the case that <span class="math-container">$A\Lambda = \Lambda A$</span>? If not, when is it the case that <span class="math-container">$A \Lambda = \Lambda A$</span>?</p> <p>If we restrict the diagonal entries of <span class="math-container">$\Lambda$</span> to being equal (i.e. <span class="math-container">$\Lambda = \text{diag}(a, a, \dots, a)$</span>), then it is clear that <span class="math-container">$A\Lambda = AaI = aIA = \Lambda A$</span>. However, I can't seem to come up with an argument for the general case.</p>
<p>A diagonal matrix $\Lambda$ commutes with a matrix $A$ whenever $A$ is symmetric and $A \Lambda$ is also symmetric. Indeed, we have</p> <p>$$ \Lambda A = (A^{\top}\Lambda^\top)^{\top} = (A\Lambda)^\top = A\Lambda $$</p> <p>where the first equality just transposes twice, the second uses $A^\top=A$ and $\Lambda^\top=\Lambda$ (a diagonal matrix is symmetric), and the last uses the assumed symmetry of $A\Lambda$.</p> <p>The above trivially holds when $A$ and $\Lambda$ are both diagonal.</p>
<p>A diagonal matrix will not commute with every matrix.</p> <p>$$ \begin{pmatrix} 1 &amp; 0 \\ 0 &amp; 2 \end{pmatrix}*\begin{pmatrix} 0 &amp; 1 \\ 0 &amp; 0 \end{pmatrix}=\begin{pmatrix} 0 &amp; 1 \\ 0 &amp; 0 \end{pmatrix}$$</p> <p>But:</p> <p>$$\begin{pmatrix} 0 &amp; 1 \\ 0 &amp; 0 \end{pmatrix} * \begin{pmatrix} 1 &amp; 0 \\ 0 &amp; 2 \end{pmatrix} = \begin{pmatrix} 0 &amp; 2 \\ 0 &amp; 0 \end{pmatrix}.$$</p>
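<p>Entrywise, $(\Lambda A)_{ij}=\lambda_i a_{ij}$ while $(A\Lambda)_{ij}=\lambda_j a_{ij}$, so the two products agree exactly when $a_{ij}=0$ for every pair with $\lambda_i\neq\lambda_j$. A small numpy sketch illustrating this on the matrices above:</p>
<pre><code>import numpy as np

L = np.diag([1.0, 2.0])
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

print(L @ A)          # [[0. 1.], [0. 0.]]
print(A @ L)          # [[0. 2.], [0. 0.]]

# The commutator picks up the factor (lambda_i - lambda_j) on entry (i, j):
lam = np.diag(L)
print(L @ A - A @ L)                          # [[0. -1.], [0. 0.]]
print((lam[:, None] - lam[None, :]) * A)      # same matrix
</code></pre>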
logic
<p>I know this seems like an obvious question, but I haven't been able to find any examples of sentences in logic higher than second order, so my intuition on how it's supposed to behave is failing me. There are descriptions describing third order logic as 'properties of properties' but without example syntax, I'm not sure if I'm on the wrong track or not.</p> <p>Propositional logic sentences are simple propositions connected by logical connectives:</p> <p>$\phi$ $\land$ $\psi$</p> <p>$\phi \lor \psi$</p> <p>$\lnot \phi \rightarrow \psi$</p> <p>First order logic sentences are quantified objects with free predicates and functions:</p> <p>$\exists$x $\forall$y P(f(x)) $\rightarrow$ Q(y)</p> <p>Second order logic sentences don't just quantify the objects, but the functions and predicates:</p> <p>$\forall$Q $\exists$P $\exists$f $\exists$x $\forall$y P(f(x)) $\rightarrow$ Q(y)</p> <p>How do we go higher than second order? What are some examples of third, fourth, or fifth order logic sentences?</p>
<p>The axioms of topology, for example, can be seen as third-order axioms, simply because of the axiom that a topology is closed under unions:</p> <p>$$\forall\mathcal U\,\Bigl(\forall U\,(U\in\mathcal U\rightarrow U\in\tau)\rightarrow\exists V\,\bigl(\forall x\,(x\in V\leftrightarrow\exists U\in\mathcal U\,(x\in U))\land V\in\tau\bigr)\Bigr)$$</p> <p>In the language of arithmetic, a well-order of the second-order predicates (namely, of $\mathcal P(\Bbb N)$), or even the existence thereof, is a third-order sentence about the numbers themselves.</p> <p>To some extent this is the great thing about set theory here. It allows us to take any of these higher-order sentences and make them first-order in the language of sets. Of course we can make them first-order in a two/three/four-sorted logic, which acts a bit like type theory, but you do run into issues there (for example, the characterization of $\Bbb R$ as the unique complete ordered field won't translate well into first-order logic).</p>
<p>In the context of higher-order arithmetic, there are many natural third-order statements. In arithmetic, quantifiers over natural numbers are first-order, quantifiers over sets of natural numbers are second-order, and quantification over sets of sets of natural numbers is third-order. </p> <p>Using standard coding methods, quantifying over real numbers is second-order, so quantifying over sets of real numbers is third-order. </p> <p>Some English sentences that are expressed as third-order statements in the language of arithmetic, but not as second-order statements, include:</p> <ul> <li><p>There is a nonprincipal ultrafilter on $\mathbb{N}$.</p></li> <li><p>Every infinite subset of the unit interval $[0,1]$ has a cluster point. </p></li> <li><p>There is a discontinuous function from $\mathbb{R}$ to $\mathbb{R}$. </p></li> </ul> <p>Similarly, one can obtain fourth-order statements by quantifying over arbitrary subsets of $\mathbb{R}$. </p>
logic
<p>It is said (and I myself have said) that in some cases the easiest way to prove a statement by mathematical induction is to prove a stronger statement by mathematical induction, because then one has a stronger induction hypothesis to use.</p> <p>But then I ask myself: if a bright student were to ask me for some typical examples of that phenomenon, what would I say?</p> <p>The only example that came to mind immediately when I thought of this question is <b>Łos's theorem:</b> A first-order sentence $\varphi$ is true in an ultraproduct $\left(\prod_{i\in I} A_i\right)/F$, where $F$ is an ultrafilter on the index set $I$, if and only if the set $\{i\in I : \varphi\text{ is true in}A_i\}$ is a member of $F$. The stronger statement speaks of first-order formulas (which may contain free variables) rather than of first-order sentences (which have no free variables). The proof is by induction on the formation of first order formulas, and it works since the class of first-order formulas is closed under certain operations and the class of first-order sentences is not.</p> <p>That's not a great example for the situation I imagined.</p> <p>Looking around m.s.e. a bit, I find Steven Stadnicki's answer to <a href="https://math.stackexchange.com/questions/174828/a-probably-trivial-induction-problem-sum-2nk-2-lt1">this question</a> and my answer to <a href="https://math.stackexchange.com/questions/163527/prove-that-pascals-triangle-contains-only-natural-numbers-using-induction">this question</a>, and maybe Martin Brandenburg's answer to <a href="https://math.stackexchange.com/questions/150744/generating-elements-of-galois-group">this question</a>.</p> <p>This is not a great list of examples for illustrative purposes at an elementary level (although Steven Stadnick's answer would fit into such a list).</p> <ul> <li>If the purpose is to illustrate this phenomenon, which examples should be used, both at the most elementary levels and at more advanced levels?</li> <li>Is there a logician's viewpoint on this phenomenon? Might there be, for example, some idempotent mapping $T$ from the class of statements-that-are-weaker-versions-of-things-provable-by-induction to the class of things-provable-by-induction, where $T\varphi$ is in each case a generalization of $\varphi$?</li> </ul>
<p>Consider the simple continued fraction $\langle a_0;a_1,a_2,\dots\rangle$, where all the $a_i$ are integers, and all positive except possibly $a_0$.</p> <p>Define the sequences $p_i$, $q_i$ by </p> <p>$p_{-1}=1$, $p_0=a_0$, and $p_k=a_kp_{k-1}+p_{k-2}$, and</p> <p>$q_{-1}=0$, $q_0=1$, and $q_k=a_kq_{k-1}+q_{k-2}$.</p> <p>We want to show that $\langle a_0;a_1,\dots,a_k\rangle=\frac{p_k}{q_k}$.</p> <p>The standard way to prove the result by induction is to prove the stronger result that for any positive $x$, $$\langle a_0;a_1,\dots,a_{k-1},x\rangle=\frac{xp_{k-1}+p_{k-2}}{xq_{k-1}+q_{k-2}}.$$</p> <p>As a small additional example, a <a href="https://math.stackexchange.com/questions/447543/rationalizing-radicals">recent question</a> asked for a proof that the denominator of $\frac{1}{\sqrt{a_1}+\sqrt{a_2}+\cdots+\sqrt{a_n}}$ can be rationalized. An induction proof used the stronger induction hypothesis that $\frac{1}{\sqrt{a_1}+\sqrt{a_2}+\cdots+\sqrt{a_n}+t}$ can be rationalized, where $t$ is a free parameter.</p> <p>For early induction arguments, however, I think inequalities are quite persuasive, since it is clear that, for example, knowing that $1+\frac{1}{2^2}+\cdots+\frac{1}{n^2}\lt 2$ cannot by itself imply that $1+\frac{1}{2^2}+\cdots+\frac{1}{n^2}+\frac{1}{(n+1)^2}\lt 2$.</p>
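<p>A short Python sketch checking the recursion against direct evaluation (the continued fraction of $\sqrt 2$, namely $\langle 1;2,2,2,\dots\rangle$, is just a convenient test case):</p>
<pre><code>from fractions import Fraction

def convergents(a):
    """Convergents p_k/q_k of [a0; a1, a2, ...] via the standard recursion."""
    p_prev, p = 1, a[0]          # p_{-1} = 1, p_0 = a_0
    q_prev, q = 0, 1             # q_{-1} = 0, q_0 = 1
    yield Fraction(p, q)
    for ak in a[1:]:
        p_prev, p = p, ak * p + p_prev
        q_prev, q = q, ak * q + q_prev
        yield Fraction(p, q)

def evaluate(a):
    """Evaluate [a0; a1, ..., ak] directly, from the inside out."""
    x = Fraction(a[-1])
    for ai in reversed(a[:-1]):
        x = ai + 1 / x
    return x

a = [1, 2, 2, 2, 2, 2]           # truncations of the expansion of sqrt(2)
for k, c in enumerate(convergents(a)):
    assert c == evaluate(a[:k + 1])
    print(c, float(c))
</code></pre>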
<p><strong>Example:</strong></p> <p>You can prove</p> <p>$$ \frac{1}{2}\cdot\frac{3}{4}\cdot\ldots\cdot\frac{2n-1}{2n} &lt; \sqrt{\frac{1}{3n}} $$</p> <p>by strengthening to </p> <p>$$ \frac{1}{2}\cdot\frac{3}{4}\cdot\ldots\cdot\frac{2n-1}{2n} &lt; \sqrt{\frac{1}{3n+1}} $$</p> <p>and using plain induction; this is the simplest example I know (you can make the inequality even shorter using the <a href="http://en.wikipedia.org/wiki/Double_factorial" rel="nofollow noreferrer">double factorial</a> notation). In fact some time ago there was a post here about it, but I couldn't find it.</p> <p>I hope this helps ;-)</p> <p><strong>Edit:</strong></p> <p>I've just seen even simpler example in <a href="https://math.stackexchange.com/q/1032535/26306">this question</a>, that is,</p> <blockquote> <p>Prove that $0 \leq a_n &lt; 1$ for all $n \in \mathbb{N}$ where $a_0 = 0$ and $ a_n = a_{n-1}^2 + \frac{1}{4} $ for $n &gt; 0$.</p> </blockquote> <p>which can be easily solved by proving $0 \leq a_n &lt; \frac{1}{2}$ (in the original post @DanielFischer was first to give this hint).</p>
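<p>A few lines of Python make it visible how tight the strengthened bound is (the stronger bound is attained with equality at $n=1$ and is strict from $n=2$ on, which is what the induction step uses):</p>
<pre><code>from math import sqrt

prod = 1.0
for n in range(1, 11):
    prod *= (2 * n - 1) / (2 * n)
    # product, strengthened bound sqrt(1/(3n+1)), original bound sqrt(1/(3n))
    print(n, round(prod, 6), round(sqrt(1 / (3 * n + 1)), 6), round(sqrt(1 / (3 * n)), 6))
</code></pre>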
logic
<p>Why can't some mathematical statement (or whatever is the correct term) be both true and false?</p> <p>For example we can prove (e.g. by induction) that <span class="math-container">$1+2+3+\cdots+n=\frac{n(n+1)}{2}$</span> for all positive integers <span class="math-container">$n$</span>. But how can we be sure that no one will ever find a counter example? What if someone claims that <span class="math-container">$1+2+3+\cdots+1000$</span> equals (e.g.) 500567 and not 500500, which is what the above formula claims.</p> <p>Another example: Why is it impossible for someone to come up with three integer <span class="math-container">$a$</span>, <span class="math-container">$b$</span> and <span class="math-container">$c$</span>, for which <span class="math-container">$a^3+b^3=c^3$</span> (contradicting Fermat's Last Theorem)? This bothers me even in the simple intuitive level.</p> <p>Then I have heard about Gödel's incompleteness theorems, second of which says (at least this is how I have interpreted it) that an axiomatic system cannot prove its own consistency. So doesn't Gödel's second incompleteness theorem say basically that "anything is possible"? ...that there can be an integer <span class="math-container">$n$</span> for which <span class="math-container">$1+2+3+\cdots+n \neq \frac{n(n+1)}{2}$</span> or that there can be integers <span class="math-container">$a$</span>, <span class="math-container">$b$</span> and <span class="math-container">$c$</span> for which <span class="math-container">$a^3+b^3=c^3$</span>?</p>
<p>Gödel's theorem could be more accurately interpreted as saying that we can never be sure of the consistency of a sufficiently complex system. We can't be <em>sure</em>, for instance, that the Peano Axioms don't prove $1+1=3$. We certainly hope this isn't the case, but no proof would convince us otherwise (and it's probably not the case, since the Peano Axioms have an intuitive model, namely the natural numbers with addition and multiplication).</p> <p>However, it's still true that $1+1=2$ even if the Peano Axioms say otherwise (indeed, if they proved $1+1=3$, they would also have to prove $1+1\neq 3$, and also every <em>other</em> statement you could possibly make within that system). In fact, we can say that, if a (suitably complex) system is inconsistent, then it admits both a proof and a disproof of every statement - this is the principle of explosion.</p> <p>The difference is that there is an intended model of the Peano Axioms - the natural numbers with addition and multiplication. This is clearly well-defined and certain things are undeniably true of it. We would therefore expect that the Peano Axioms are, in fact, consistent (though we can't prove it) - and, if they are consistent, everything they prove <em>is true</em> and undeniably so. Even if PA were inconsistent, we would still expect proofs like the one for $1+2+\ldots+n=\frac{n(n+1)}2$ to work, since they leverage such simple properties of the structure of the natural numbers.</p> <p>The point here is that "truth" and "proof" are distinct notions - but we tend to identify them because we assume our logical systems are consistent, or at least assume that the bits of them we actually use are consistent.</p>
<p>I have two answers for you.</p> <p>One has already been said so I will only say it briefly: if we exhaustively prove something to be true, there can be no counterexample - i.e., in the case of an induction argument like the one you provided. </p> <p>Second, I will answer you with Gödel as well. He is mainly known for his Incompleteness Theorems (because they are far more interesting), but his first famous work was his proof of the Completeness Theorem. This tells us many things, among which is that a consistent and sound system will not have the counterexamples you suggest may exist (to quote Wikipedia, a more general formulation would be "It says that for any first-order theory T with a well-orderable language, and any sentence S in the language of the theory, there is a formal proof of S in T if and only if S is satisfied by every model of T (S is a semantic consequence of T)."). </p> <p>Further, you misunderstand his Incompleteness theorems. They do not say that no theory is consistent; they simply say that no consistent theory (of the relevant kind) can prove its own consistency. Indeed, Gödel proved the consistency of Peano Arithmetic using Type Theory, which could not prove its own consistency.</p> <p>However, (depending on your level) all of the theorems you encounter can be <em>assured</em> to be true if proven, because ZFC (the most common foundation of mathematics) had its consistency proven by only assuming the existence of a Weakly Inaccessible Cardinal. I think this is widely accepted, so if you can accept the existence of that, you are safe.</p> <p><strong>EDIT:</strong> It has been made known to me in the comments that making the existence of a Weakly Inaccessible Cardinal an axiom creates a far stronger theory than needed to prove the consistency of ZFC. In any case, the point remains that any system of mathematics you're working with has likely been proven consistent by assuming something only slightly stronger - for whatever that is worth.</p> <p>It also occurs to me that first order logic includes the Law of Noncontradiction as an axiom, and this is also known to be consistent by Gödel's Completeness theorem. So, with this, your more general question of "Why not both true and false?" is answered because we take it as axiomatic and show that including such an axiom is consistent.</p>
game-theory
<p>Can someone please guide me to a way by which I can solve the following problem. There is a die and 2 players. Rolling stops as soon as the running total exceeds 100 (not including 100 itself). Hence you have the following choices for the final total: 101, 102, 103, 104, 105, 106. Which should I choose, given first choice? I'm thinking Markov chains, but is there a simpler way? </p> <p>Thanks.</p> <p>EDIT: I wrote dice instead of die. There is just one die being rolled.</p>
<p>My attempt: a combination of my comment and Shai's comment.</p> <p>Here a composition of $n$ means an ordered way of writing $n$ as a sum of terms from $\{1,2,3,4,5,6\}$, i.e. a possible sequence of rolls adding up to $n$; let $a_n$ be the number of such compositions.</p> <p>Number of ways to get to 101: number of compositions of 101.</p> <p>Number of ways to get to 102: compositions of 96 + compositions of 97 + ... + compositions of 100, since the last roll must take us from one of 96, ..., 100 to 102 via 96+6, 97+5, ..., 100+2.</p> <p>...</p> <p>Number of ways to get to 106: compositions of 100, since we can get to 106 only by reaching exactly 100 and then rolling a 6.</p> <p>The number of compositions of $n$ is the coefficient of $x^n$ in $\frac{1}{1-(x+x^2+\cdots+x^6)}$, using the geometric series. But an easier observation is that $a_n = a_{n-1}+a_{n-2}+a_{n-3}+a_{n-4}+a_{n-5}+a_{n-6}$ (condition on the last roll). Using this:</p> <p>Number of ways to get to 101 is $a_{101}=a_{100}+a_{99}+a_{98}+a_{97}+a_{96}+a_{95}$; number of ways to get to 102 is $a_{100}+a_{99}+a_{98}+a_{97}+a_{96}$; </p> <p>...</p> <p>Number of ways to get to 106 is $a_{100}$. </p> <p>Since $a_n &gt; 0$ we see that 101 will occur the most frequently.</p>
<p>It's a nice problem. The chance of hitting $101$ first is surprisingly large. Let $a(x)$ be the chance of hitting $101$ first, starting at $x$. </p> <p>We solve recursively by setting $a(106)=a(105)=a(104)=a(103)=a(102)=0$ and $a(101)=1$. Then, for $x$ from $100$ to $0$, put $a(x)={1\over 6}\sum_{j=1}^6 a(x+j)$. By the renewal theorem, $a(x)$ converges to $1/\mu$, where $\mu=(1+2+3+4+5+6)/6=7/2$. The value $a(0)$ is very close to this limit. </p> <p>In fact, the whole hitting distribution is approximately $[6/21,5/21,4/21,3/21,2/21,1/21]$.</p> <hr> <p><strong>Added:</strong> Let $a^\prime(x)$ be the chance of hitting $102$ first, starting at $x$. Then $a^\prime(x)$ satisfies the same recurrence as $a(x)$ for $x\leq 100$, but with different boundary conditions $a^\prime(106)=a^\prime(105)=a^\prime(104)=a^\prime(103)=a^\prime(101)=0$ and $a^\prime(102)=1$. Therefore<br> $$a^\prime(x)=a(x-1)-a(100)a(x)$$ and $$a^\prime(0)\approx {1\over \mu}-{1\over 6}{1\over \mu}={5\over 21}. $$ The rest of the hitting distribution can be analyzed in a similar way. </p>
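<p>The recursion takes only a few lines of Python to run (a sketch; the helper name and the parameter <code>top</code> for the threshold 100 are mine):</p>
<pre><code>def hitting_probability(target, top=100):
    """P(final total == target), starting from 0 and rolling a fair die until the total exceeds top."""
    a = {s: 0.0 for s in range(top + 1, top + 7)}
    a[target] = 1.0
    for s in range(top, -1, -1):
        a[s] = sum(a[s + j] for j in range(1, 7)) / 6
    return a[0]

dist = [hitting_probability(t) for t in range(101, 107)]
print([round(p, 4) for p in dist])   # roughly [0.2857, 0.2381, 0.1905, 0.1429, 0.0952, 0.0476]
print(sum(dist))                     # 1, up to floating-point rounding
</code></pre>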
geometry
<p>In <a href="https://www.penguinrandomhouse.com/books/230949/the-secrets-of-triangles-by-alfred-s-posamentier-and-ingmar-lehmann/9781616145873/" rel="noreferrer"><em>The Secrets of Triangles</em></a> a remarkable theorem is attributed to Jakob Steiner. </p> <p>Each side of a triangle is cut into two segments by an altitude. Build squares on each of those segments, and the alternating squares sum to each other. <a href="https://i.sstatic.net/Cqc7A.png" rel="noreferrer"><img src="https://i.sstatic.net/Cqc7A.png" alt="Steiner&#39;s altitude theorem"></a></p> <p>The book doesn't include a proof, and I'm not sure how to start.</p> <p>Does this theorem have a name? How could one go about proving this beautiful relationship?</p>
<p>Label the squares' side lengths $a, b, c, d, e, f $ (clockwise from $A$). The claim is that $$a^2+c^2+e^2=b^2+d^2+f^2$$</p> <p>Let $x$ be the altitude from $A$. </p> <p>Let $y$ be the altitude from $B$.</p> <p>Let $z$ be the altitude from $C$.</p> <hr> <p>By the Pythagorean theorem applied to the two right triangles that include the altitude from $A$, we have:</p> <p>$$x^2+c^2=(a+b)^2$$</p> <p>$$x^2+d^2=(e+f)^2$$</p> <p>By the Pythagorean theorem applied to the two right triangles that include the altitude from $B$, we have:</p> <p>$$y^2+a^2=(e+f)^2$$</p> <p>$$y^2+b^2=(c+d)^2$$</p> <p>By the Pythagorean theorem applied to the two right triangles that include the altitude from $C$, we have:</p> <p>$$z^2+e^2=(c+d)^2$$</p> <p>$$z^2+f^2=(a+b)^2$$</p> <hr> <p>Labeling the six Pythagorean equations above $(1)$ through $(6)$, we can add $(1)$, $(3)$, and $(5)$ to get:</p> <p>$$ x^2+y^2+z^2 +a^2+c^2+e^2=(a+b)^2+ (c+d)^2 + (e+f)^2$$</p> <p>Add $(2)$, $(4)$, and $(6)$:</p> <p>$$ x^2+y^2+z^2 +b^2+d^2+f^2=(a+b)^2+ (c+d)^2 + (e+f)^2$$</p> <p>Notice that the right sides of the above two equations are equal, so we may equate the left sides:</p> <p>$$ x^2+y^2+z^2+a^2+c^2+e^2= x^2+y^2+z^2+b^2+d^2+f^2 $$</p> <p>Now subtract $x^2+y^2+z^2$ from both sides, and we are done. </p> <p>$$a^2+c^2+e^2=b^2+d^2+f^2$$</p>
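<p>Since everything reduces to the Pythagorean theorem, the identity is easy to spot-check numerically. Here is a small numpy sketch for a randomly chosen triangle (the seed and the <code>foot</code> helper are just for illustration); it compares the two alternating sums of squared segments, as in the labels above.</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.random((3, 2)) * 10          # a random triangle in the plane

def foot(P, Q, R):
    """Foot of the perpendicular from P onto the line QR."""
    t = np.dot(P - Q, R - Q) / np.dot(R - Q, R - Q)
    return Q + t * (R - Q)

Ha, Hb, Hc = foot(A, B, C), foot(B, C, A), foot(C, A, B)

# Alternating segments around the triangle.
lhs = np.linalg.norm(B - Ha)**2 + np.linalg.norm(C - Hb)**2 + np.linalg.norm(A - Hc)**2
rhs = np.linalg.norm(C - Ha)**2 + np.linalg.norm(A - Hb)**2 + np.linalg.norm(B - Hc)**2
print(lhs, rhs)     # equal up to floating-point error
</code></pre>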
<p>Taking a cue from <a href="https://math.stackexchange.com/a/2483937/409">my Law of Cosines trigonograph</a>, we have a straightforward arithmetic of areas:</p> <p><a href="https://i.sstatic.net/XMT64.png" rel="noreferrer"><img src="https://i.sstatic.net/XMT64.png" alt="enter image description here"></a></p> <p>$$\begin{align} \\ \\ \\ \color{red}{X_1} + \color{blue}{[\bullet\bullet\phantom{\bullet}]} &amp;\quad=\quad \color{red}{X_2} + \color{green}{[\bullet\bullet\bullet]} &amp;=\quad b c \cos A \\ \color{blue}{Y_1}\, + \color{green}{[\bullet\bullet\bullet]} &amp;\quad=\quad \color{blue}{Y_2}\, + \color{red}{[\bullet\phantom{\bullet}\;\;\phantom{\bullet}]} &amp;=\quad c a \cos B \\ \color{green}{Z_1} + \color{red}{[\bullet\phantom{\bullet}\;\;\phantom{\bullet}]} &amp;\quad=\quad \color{green}{Z_2} + \color{blue}{[\bullet\bullet\phantom{\bullet}]} &amp;=\quad a b \cos C \\ \hline \\ \color{red}{X_1} + \color{blue}{Y_1} + \color{green}{Z_1} &amp;\quad=\quad \color{red}{X_2} + \color{blue}{Y_2} + \color{green}{Z_2} \end{align}$$</p>
probability
<p><strong>I would like to generate a random axis or unit vector in 3D</strong>. In 2D it would be easy: I could just pick an angle between 0 and 2*Pi and use the unit vector pointing in that direction. </p> <p>But in 3D <strong>I don't know how I can pick a random point on the surface of a sphere</strong>.</p> <p>If I pick two angles the distribution won't be uniform on the surface of the sphere. There would be more points at the poles and fewer points at the equator.</p> <p>If I pick a random point in the (-1,-1,-1):(1,1,1) cube and normalise it, then there would be more chance that a point gets chosen along the diagonals than near the centers of the faces. So that's not good either.</p> <p><strong>But then what's the good solution?</strong> </p>
<p>You need to use an <a href="http://en.wikipedia.org/wiki/Equal-area_projection#Equal-area">equal-area projection</a> of the sphere onto a rectangle. Such projections are widely used in cartography to draw maps of the earth that represent areas accurately.</p> <p>One of the simplest such projections is the axial projection of a sphere onto the lateral surface of a cylinder, as illustrated in the following figure:</p> <p><img src="https://i.sstatic.net/GDe76.png" alt="Cylindrical Projection"></p> <p>This projection is area-preserving, and was used by Archimedes to compute the surface area of a sphere.</p> <p>The result is that you can pick a random point on the surface of a unit sphere using the following algorithm:</p> <ol> <li><p>Choose a random value of $\theta$ between $0$ and $2\pi$.</p></li> <li><p>Choose a random value of $z$ between $-1$ and $1$.</p></li> <li><p>Compute the resulting point: $$ (x,y,z) \;=\; \left(\sqrt{1-z^2}\cos \theta,\; \sqrt{1-z^2}\sin \theta,\; z\right) $$</p></li> </ol>
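<p>As a concrete sketch of this recipe in Python (numpy assumed):</p>
<pre><code>import numpy as np

def random_unit_vector(rng=np.random.default_rng()):
    theta = rng.uniform(0.0, 2.0 * np.pi)
    z = rng.uniform(-1.0, 1.0)
    r = np.sqrt(1.0 - z * z)
    return np.array([r * np.cos(theta), r * np.sin(theta), z])

pts = np.array([random_unit_vector() for _ in range(5)])
print(pts)
print(np.linalg.norm(pts, axis=1))   # all 1.0 up to rounding
</code></pre>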
<p>Another commonly used convenient method of generating a uniform random point on the sphere in $\mathbb{R}^3$ is this: Generate a standard multivariate normal random vector $(X_1, X_2, X_3)$, and then normalize it to have length 1. That is, $X_1, X_2, X_3$ are three independent standard normal random numbers. There are many well-known ways to generate normal random numbers; one of the simplest is the <a href="http://en.wikipedia.org/wiki/Box-Muller">Box-Muller algorithm</a> which produces two at a time.</p> <p>This works because the standard multivariate normal distribution is invariant under rotation (i.e. orthogonal transformations).</p> <p>This has the nice property of generalizing immediately to any number of dimensions without requiring any more thought.</p>
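<p>A sketch of this second recipe in numpy; note that it works verbatim in any dimension:</p>
<pre><code>import numpy as np

def random_unit_vector(dim=3, rng=np.random.default_rng()):
    v = rng.standard_normal(dim)      # rotation-invariant by construction
    return v / np.linalg.norm(v)

print(random_unit_vector())           # a uniform point on the 2-sphere
print(random_unit_vector(dim=7))      # ... or on the 6-sphere
</code></pre>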
logic
<p>A theorem is defined to be a mathematical statement that is proven to be true. The statement $1+1=2$ has definitely been proven in the history of mankind (Russell and Whitehead proved it in the book <em>Principia Mathematica</em>). </p> <p>So can it be considered a theorem? What determines something to be a theorem (besides it being proven to be true)?</p>
<p>The only thing that makes something a theorem (in a particular deduction system) is if a proof of it is known. </p> <p>Now, as for $1+1=2$, you first must be very precise about what all the symbols mean and what the deduction system is that you allow your proofs to be written in. Once one gets into the details, things get less and less trivial, and far from obvious or straightforward. </p> <p>So, what do you mean by $1$? what do you mean by $2$? and what do you mean by $+$? and most importantly, what do you mean by "proof"? Different answers to these questions will lead to different answers to the question "is $1+1=2$ a theorem?". You can learn more about these issues by studying logic (model theory) and in particular the set of axioms known as the Peano axioms. </p> <p>Just to illustrate using two possible interpretations (and avoiding a precise definition of proof, thus relying on some intuitive understanding of what that is). If you define $2$ to be an abbreviation for $1+1$, assuming we know what $+$ is, then $1+1=2$ is certainly a theorem, with a very short proof. However, a more refined possibility is to define the natural numbers as certain sets, and then define the plus operation by induction. Then (commonly) $0=\emptyset$, $1=\{0\}$, and $2=\{0,1\}$. The actual definition of $+$ is a bit more difficult, but then it can, rather easily, be shown that $1+1=2$. I hope this explains things better. </p>
<p>Theorems are proved from axioms; definitions <em>are</em> axioms.</p> <p>If you define $2$ to be a symbol standing for $1+1$, then you have implicitly written an axiom which connects the symbols, and which proves that $1+1=2$ is a true sentence.</p> <p>Often, however, we use the word "theorem" for statements whose proofs are not <em>trivial</em>. In this case, if you <em>define</em> $2$ as $1+1$ then this is not a theorem, this is a definition.</p> <p>Lastly, as Ittay wrote, you have to be very careful about this. Mathematics requires precision: what are the axioms you are assuming? What is the meaning of the symbols? And so on.</p> <p>For more, see the following:</p> <ol> <li><a href="https://math.stackexchange.com/questions/243049/how-do-i-convince-someone-that-11-2-may-not-necessarily-be-true/">How do I convince someone that $1+1=2$ may not necessarily be true?</a></li> <li><a href="https://math.stackexchange.com/questions/278974/prove-that-11-2">Prove that 1+1=2</a></li> <li><a href="https://math.stackexchange.com/questions/95069/how-would-one-be-able-to-prove-mathematically-that-11-2">How would one be able to prove mathematically that $1+1 = 2$?</a></li> </ol>
geometry
<p>Given a surface $f(x,y,z)=0$, how would you determine whether or not it's a surface of revolution, and find the axis of rotation?</p> <p>The special case where $f$ is a polynomial is also of interest.</p> <p>A few ideas that might lead somewhere, maybe:</p> <p><strong>(1) For algebra folks:</strong> Surfaces of the form $z = g(x^2 + y^2)$ are always surfaces of revolution. I don't know if the converse is true. If it is, then we just need to find a coordinate system in which $f$ has this particular form. Finding special coordinate systems that simplify things often involves finding eigenvalues. You use eigenvalues to tackle the special case of quadric surfaces, anyway.</p> <p><strong>(2) For differential geometry folks:</strong> Surfaces of revolution have a very special pattern of lines of curvature. One family of lines of curvature is a set of coaxial circles. I don't know if this property characterizes surfaces of revolution, but it sounds promising.</p> <p><strong>(3) For physicists &amp; engineers:</strong> The axis of rotation must be one of the principal axes for the centroidal moments of inertia, according to physics notes I have read. So, we should compute the centroid and the inertia tensor. I'm not sure how. Then, diagonalize, so eigenvalues, again. Maybe this is actually the same as idea #1.</p> <p><strong>(4) For classical geometry folks:</strong> What characterizes surfaces of revolution (I think) is that every line that's normal to the surface intersects some fixed line, which is the axis of rotation. So, construct the normal lines at a few points on the surface (how many??), and see if there is some line $L$ that intersects all of these normal lines (how?). See <a href="https://math.stackexchange.com/questions/607348/line-intersecting-three-or-four-given-lines?lq=1">this related question</a>. If there is, then this line $L$ is (probably) the desired axis of rotation. This seems somehow related to the answers given by Holographer and zyx.</p> <p>Why is this is interesting/important? Because surfaces of revolution are easier to handle computationally (e.g. area and volume calculations), and easy to manufacture (using a lathe). So, it's useful to be able to identify them, so that they can be treated as special cases. </p> <p>The question is related to <a href="https://math.stackexchange.com/questions/593478/how-to-determine-that-a-surface-is-symmetric/603976#comment1274516_603976">this one about symmetric surfaces</a>, I think. The last three paragraphs of that question (about centroids) apply here, too. Specifically, if we have a bounded (compact) surface of revolution, then its axis of rotation must pass through its centroid, so some degrees of freedom disappear.</p> <p>If you want to test out your ideas, you can try experimenting with $$ f(x,y,z) = -6561 + 5265 x^2 + 256 x^4 + 4536 x y - 1792 x^3 y + 2592 y^2 + 4704 x^2 y^2 - 5488 x y^3 + 2401 y^4 + 2592 x z - 1024 x^3 z - 4536 y z + 5376 x^2 y z - 9408 x y^2 z + 5488 y^3 z + 5265 z^2 + 1536 x^2 z^2 - 5376 x y z^2 + 4704 y^2 z^2 - 1024 x z^3 + 1792 y z^3 + 256 z^4 $$ This is a surface of revolution, and it's compact. Sorry it's such a big ugly mess. 
It was the simplest compact non-quadric example I could invent.</p> <p>If we rotate to a $(u,v,w)$ coordinate system, where \begin{align} u &amp;= \tfrac{1}{9}( x - 4 y + 8 z) \\ v &amp;= \tfrac{1}{9}(8 x + 4 y + z) \\ w &amp;= \tfrac{1}{9}(-4 x + 7 y + 4 z) \end{align} then the surface becomes $$ u^2 + v^2 + w^4 - 1 = 0 $$ which is a surface of revolution having the $w$-axis as its axis of rotation.</p>
<p>You can reduce it to an algebraic problem as follows:</p> <p>The definition of a surface of revolution is that there is some axis such that rotations about this axis leave the surface invariant. Let's denote the action of a rotation by angle $\theta$ about this axis by the map $\vec x\mapsto \vec R_\theta(\vec x)$. With your surface given as a level set $f(\vec{x})=0$, the invariance condition is that $f(\vec R_\theta(\vec x))=0$ whenever $f(\vec{x})=0$, for all $\theta$.</p> <p>In particular, we can differentiate this with respect to $\theta$ to get $\vec\nabla f(\vec R_\theta(\vec x))\cdot \frac{\partial}{\partial\theta}\vec R_\theta(\vec x)=0$, which at $\theta=0$ gives $\vec\nabla f(\vec x)\cdot \vec k(\vec x)=0$, where $\vec k(\vec x)=\left.\frac{\partial}{\partial\theta}\vec R_\theta(\vec x)\right|_{\theta=0}$ is the vector field representing the action of an infinitesimal rotation. (If the language of differential geometry is familiar, this is a Killing vector field).</p> <p>So with this language established, what we need to check is whether there is any Killing field $\vec k(\vec x)$ associated to a rotation, which is orthogonal to the gradient of $f$ everywhere on the surface (i.e., whenever $f(\vec x)=0$). In fact, this will be not just necessary, but also sufficient, since (intuitively) any rotation can be built from many small ones.</p> <p>Luckily, it's quite straightforward to write down the most general Killing field: if $\vec x_0$ is a point on the axis of rotation, and $\vec a$ a unit vector pointing along the axis, we have $\vec k(\vec x)=\vec a \times (\vec x-\vec x_0)$. (Note that this is a degenerate parametrization, since we can pick any point on the axis, corresponding to shifting $\vec x_0$ by a multiple of $\vec a$, and also send $\vec a$ to $-\vec a$, to give the same rotation).</p> <p>To summarize: the question has been recast as "are there any solutions $\vec x_0, \vec a$ to $\vec a \times (\vec x-\vec x_0)\cdot\vec\nabla f(\vec x)=0$, which hold for all $\vec x$ such that $f(\vec x)=0$?".</p> <p>(You could also write the equation as $\det[\vec a, \, \vec x-\vec x_0, \,\vec\nabla f(\vec x)]=0$).</p> <p>For your example, I got Mathematica to use this method. I let $\vec x_0=(x_0,y_0,0)$, taking $z_0=0$ to remove some degeneracy, and $\vec a=(a_x,a_y,a_z)$, giving 5 unknowns for the axis. I then found four points on the surface, by solving $f(x,y,z)=0$ with $y=z=0$ and $x=z=0$. Then I got it to solve $\vec a \times (\vec x-\vec x_0)\cdot\vec\nabla f(\vec x)=0$ for the four points, and $|\vec a|^2=1$ (5 equations for the 5 unknowns), getting a unique solution up to the sign of $\vec a$. I then substituted the solution into $\vec a \times (\vec x-\vec x_0)\cdot\vec\nabla f(\vec x)$ for general $\vec x$, getting zero, so the solution indeed gave an axis of revolution. It is simplified in this case because all the level sets of $f$ give a surface of revolution about the same axis, so this last step did not require assuming $f(\vec x)=0$.</p>
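<p>For the quartic example in the question this check can also be run symbolically. The following sympy sketch (taking $\vec x_0=0$ and candidate direction $\vec a=(-4,7,4)$, read off from the rotated coordinates given there) confirms that $\vec a\times\vec x\cdot\vec\nabla f(\vec x)$ simplifies to zero; for this particular surface it even vanishes identically, not just on $f=0$.</p>
<pre><code>import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
X = sp.Matrix([x, y, z])

f = (-6561 + 5265*x**2 + 256*x**4 + 4536*x*y - 1792*x**3*y + 2592*y**2
     + 4704*x**2*y**2 - 5488*x*y**3 + 2401*y**4 + 2592*x*z - 1024*x**3*z
     - 4536*y*z + 5376*x**2*y*z - 9408*x*y**2*z + 5488*y**3*z + 5265*z**2
     + 1536*x**2*z**2 - 5376*x*y*z**2 + 4704*y**2*z**2 - 1024*x*z**3
     + 1792*y*z**3 + 256*z**4)

a = sp.Matrix([-4, 7, 4])                  # candidate axis direction (through x0 = 0)
k = a.cross(X)                             # infinitesimal rotation about that axis
grad_f = sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

print(sp.simplify(k.dot(grad_f)))          # 0
</code></pre>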
<p>If you are working numerically you are probably not interested in degenerate cases, so let's assume that the surface is "generic" in a suitable sense (this rules out the sphere, for example). As you point out, it is helpful to use Gaussian curvature $K$ because $K$ is independent of presentation, such as the particular choice of the function $f(x,y,z)$. The good news is that it is possible to calculate $K$ directly from $f$ using a suitable differential operator. For curves in the plane this is the Riess operator from algebraic geometry; for surfaces it is more complicated. This was treated in detail in an article by Goldman:</p> <p>Goldman, Ron: Curvature formulas for implicit curves and surfaces. Comput. Aided Geom. Design 22 (2005), no. 7, 632-658. </p> <p>See <a href="http://u.math.biu.ac.il/~katzmik/goldman05.pdf" rel="nofollow">here</a></p> <p>Now the punchline is that if this is a surface of revolution then the lines $\gamma$ of curvature are very special: they are all circles (except for degenerate cases like the sphere). In particular, $\gamma$ has constant curvature and zero torsion $\tau$ as a curve in 3-space. If $f$ is a polynomial, one should get a rational function, say $g$, for curvature, and then the condition of constant curvature for $\gamma$ should be expressible by a pair of equations.</p>
probability
<p>I would like your help with proving that for every $0 \leq k \leq n$, </p> <blockquote> <p>$$\binom{n}{k}^{-1}=(n+1)\int_{0}^{1}x^{k}(1-x)^{n-k}dx . $$</p> </blockquote> <p>I tried integration by parts, looking for a pattern, and using the binomial formula somehow, but it didn't go well.</p> <p>Thanks a lot!</p>
<p>Let's do it somewhat like the way the Rev. Thomas Bayes did it in the 18th century (but I'll phrase it in modern probabilistic terminology).</p> <p>Suppose $n+1$ independent random variables $X_0,X_1,\ldots,X_n$ are uniformly distributed on the interval $[0,1]$.</p> <p>Suppose for $i=1,\ldots,n$ (starting with $1$, not with $0$) we have: $$Y_i = \begin{cases} 1 &amp; \text{if }X_i&lt;X_0 \\ 0 &amp; \text{if }X_i&gt;X_0\end{cases}$$</p> <p>Then $Y_1,\ldots,Y_n$ are conditionally independent given $X_0$, and $\Pr(Y_i=1\mid X_0)= X_0$.</p> <p>So $\Pr(Y_1+\cdots+Y_n=k\mid X_0) = \dbinom{n}{k} X_0^k (1-X_0)^{n-k},$ and hence $$\Pr(Y_1+\cdots+Y_n=k) = \operatorname{E}\left(\dbinom{n}{k} X_0^k (1-X_0)^{n-k}\right).$$</p> <p>This is equal to $$ \int_0^1 \binom nk x^k(1-x)^{n-k}\;dx. $$</p> <p>But the event is the same as saying that the index $i$ for which $X_i$ is in the $(k+1)$th position when $X_0,X_1,\ldots,X_n$ are sorted into increasing order is $0$.</p> <p>Since all $n+1$ indices are equally likely to be in that position, this probability is $1/(n+1)$.</p> <p>Thus $$\int_0^1\binom nk x^k(1-x)^{n-k}\;dx = \frac{1}{n+1}.$$</p>
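<p>A quick Monte Carlo check of this construction (a Python sketch of my own, not part of the original argument): draw $X_0,\dots,X_n$ uniformly, count how often exactly $k$ of $X_1,\dots,X_n$ fall below $X_0$, and compare with $1/(n+1)$.</p> <pre><code>import numpy as np

def estimate(n, k, trials=200_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.random((trials, n + 1))                 # columns are X_0, X_1, ..., X_n
    below = (x[:, 1:] &lt; x[:, [0]]).sum(axis=1)      # how many X_i fall below X_0
    return (below == k).mean()

n = 7
print([round(estimate(n, k), 4) for k in range(n + 1)])   # each entry is close to 1/(n+1) = 0.125
</code></pre>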
<p>The method in Michael Hardy's answer is my favorite, but here's my second favorite (using the binomial theorem as you suggested): \begin{align} &amp; \sum_{k=0}^n t^k \int_0^1 {n \choose k} x^k (1 - x)^{n-k} \, dx = \int_0^1 (1 + (t-1)x)^n \, dx \\[10pt] = {} &amp; \frac{t^{n+1} - 1}{(t-1)(n+1)} = \frac{1 + t + \cdots + t^n}{n+1}. \end{align}</p> <p>Generating functions are surprisingly useful for evaluating certain types of discrete families of integrals. For example, to evaluate $$I_k(x) = \int_0^x u^k e^u \, du$$</p> <p>simply observe that $$\sum I_k(x) \frac{t^k}{k!} = \int_0^x e^{ut} e^u \, du = \frac{e^{(t+1)x} - 1}{t+1}.$$</p> <p>See also <a href="https://mathoverflow.net/questions/64556/need-help-evaluating-the-integral-int-1-infty-frac-u-u2-left-log/64568#64568">this MO question</a>. </p>
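<p>For a concrete value of $n$, the first identity can be verified symbolically (a sketch using SymPy, which is my own addition):</p> <pre><code>from sympy import symbols, binomial, integrate, simplify

x, t = symbols('x t')
n = 5
lhs = sum(t**k * integrate(binomial(n, k) * x**k * (1 - x)**(n - k), (x, 0, 1))
          for k in range(n + 1))
rhs = sum(t**k for k in range(n + 1)) / (n + 1)
print(simplify(lhs - rhs))   # 0, so both sides agree for n = 5
</code></pre>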
linear-algebra
<p>How do I find a vector perpendicular to a vector like this: $$3\mathbf{i}+4\mathbf{j}-2\mathbf{k}?$$ Could anyone explain this to me, please?</p> <p>I have a solution for the two-component case $3\mathbf{i}+4\mathbf{j}$, but could not solve it when there are $3$ components...</p> <p>When I googled, I found the direct solution but not a process or method to follow. Kindly let me know how to do it. Thanks.</p>
<p>There exist infinitely many vectors in $3$ dimensions that are perpendicular to a fixed one. They only need to satisfy the equation $$(3\mathbf{i}+4\mathbf{j}-2\mathbf{k}) \cdot v=0$$</p> <p>To find all of them, just choose $2$ such perpendicular vectors that are not parallel to each other, like $v_1=(4\mathbf{i}-3\mathbf{j})$ and $v_2=(2\mathbf{i}+3\mathbf{k})$; then any linear combination of them is also perpendicular to the original vector: $$v=((4a+2b)\mathbf{i}-3a\mathbf{j}+3b\mathbf{k}) \hspace{10 mm} a,b \in \mathbb{R}$$</p>
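<p>A quick numerical sanity check of this family (a small Python sketch of mine): every choice of $a,b$ gives a vector orthogonal to $3\mathbf{i}+4\mathbf{j}-2\mathbf{k}$.</p> <pre><code>import numpy as np

u = np.array([3, 4, -2])
rng = np.random.default_rng(0)
for a, b in rng.uniform(-5, 5, size=(5, 2)):
    v = np.array([4*a + 2*b, -3*a, 3*b])   # the general combination from the answer
    print(np.dot(u, v))                    # 0 (up to rounding) for every a, b
</code></pre>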
<p>Take the cross product with any vector that is not parallel to the given one; the result is perpendicular to both. (The cross product with a parallel vector is the zero vector, so avoid that case.)</p>
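<p>For instance (a short NumPy sketch of my own):</p> <pre><code>import numpy as np

u = np.array([3, 4, -2])
w = np.array([1, 0, 0])        # any vector not parallel to u will do
v = np.cross(u, w)
print(v, np.dot(u, v))         # v = [0, -2, -4] and u . v = 0, so v is perpendicular to u
</code></pre>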
probability
<p>My friend was asked the following problem in an interview a while back, and it has a nice answer, leading me to believe that there is an equally nice solution.</p> <blockquote> <p>Suppose that there are 42 bags, labeled $0$ though $41$. Bag $i$ contains $i$ red balls and $42-(i+1)$ blue balls. Suppose that you pick a bag, then pull out three balls without replacement. What is the probability that all 3 balls are the same color?</p> </blockquote> <p>The problem can be solved easily by using some basic identities with binomial coefficients, and the answer is $1/2$. Moreover, if $42$ is replaced by $n$, the answer does not change, assuming $n&gt;3$. However, this computational approach obscures any hidden structure there might be. Ideally, I would like a simple and direct proof that the probability of getting RRR is the same as the probability of getting RBB.</p> <p>So, is there a nice solution to the problem, one that could be explained fully to someone without the use of paper? Or is there no good way to explain this beyond computational coincidence?</p>
<p>The game can be reformulated in the following way: There is one urn with 42 balls numbered 0 through 41. You start by drawing and keeping one ball (this corresponds to picking a bag in your version). The urn is then taken away while an assistant paints all balls with a <em>lower</em> number than your first draw red and the rest of them blue. Then you draw three more balls and see whether they have the same color.</p> <p>Now, this is equivalent to first drawing <em>four</em> of the numbered balls, then among those four pick a random one to be the "bag" ball and coloring the other three according to that choice. But doing it that way, we can see that the initial drawing of four balls is entirely superfluous -- only the order relation between them matters, and any ordered set of four balls have the same structure. So we might as well forgo the initial draw and just start out with four balls numbered 1, 2, 3, and 4. Then you win if the one you pick in the second draw is either 1 or 4, and the probability of that is, naturally, 1/2.</p>
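<p>A direct simulation of the original game (a Python sketch of my own, not part of the argument) lands right at $1/2$:</p> <pre><code>import random

def trial(n=42):
    i = random.randrange(n)                          # bag i holds i red and n - 1 - i blue balls
    balls = ['R'] * i + ['B'] * (n - 1 - i)
    return len(set(random.sample(balls, 3))) == 1    # all three drawn balls share a colour

random.seed(0)
trials = 200_000
print(sum(trial() for _ in range(trials)) / trials)  # close to 0.5
</code></pre>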
<p>While I really like Henning Makholm's solution, it also might be interesting to see a bijection from the RRR outcomes to the RRB outcomes to the RBB outcomes to the BBB outcomes. The bijection doesn't depend on the number of balls chosen (e.g., the process gives RRRR to RRRB to RRBB to RBBB to BBBB in the case of four balls), and so it shows that if we choose $k$ balls rather than $3$ from each bag, the probability of all $k$ balls being the same color is $\frac{2}{k+1}$.</p> <p>For bag $i$, number the red balls $1$ through $i$ and the blue balls $1$ through $n-(i+1)$. Then, for any outcome in any bag, draw an X on the three balls chosen. For an RRR outcome, take the highest-numbered red ball with an X, paint it (retain the X) and all higher-numbered red balls blue, and renumber them so that they are now the highest-numbered blue balls, preserving the internal ordering of the balls switched. Finally, change the label on the bag to account for the new number of red balls. This gives a mapping from the set of RRR outcomes to the set of RRB outcomes. Moreover, the mapping is reversible, since for any RRB outcome you can take the blue ball with an X, paint it and all higher-numbered blue balls red, renumber them so that they are now the highest-numbered red balls, and change the label on the bag to account for the new number of red balls. So we have a bijection between the set of RRR outcomes and the set of RRB outcomes. </p> <p>You can then apply this process again for any RRB outcome, painting the highest-numbered red ball with an X and and all higher-numbered red balls blue. This gives a bijection between the RRB outcomes and the RBB outcomes. Applied again gives a bijection between the RBB outcomes and the BBB outcomes. Thus these four sets are the same size, and so the probability that we obtain an RRR or BBB outcome is $\frac{1}{2}$.</p>
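<p>The generalization stated here, probability $\frac{2}{k+1}$ when $k$ balls are drawn, is also easy to check by simulation (a sketch of my own; with $k=4$ the value should be $2/5$):</p> <pre><code>import random

def trial(k, n=42):
    i = random.randrange(n)
    balls = ['R'] * i + ['B'] * (n - 1 - i)
    return len(set(random.sample(balls, k))) == 1

random.seed(1)
trials = 200_000
print(sum(trial(4) for _ in range(trials)) / trials)   # close to 2/5 = 0.4
</code></pre>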
differentiation
<p>In Leibniz notation, the 2nd derivative is written as $$\dfrac{\mathrm d^2y}{\mathrm dx^2}.$$</p> <p>Why is the $2$ placed differently in the $\mathrm dy$ and $\mathrm dx$ parts of the notation?</p>
<p>Purely symbolically, if we accept that $dy = f'(x)\,dx$, and treat $dx$ as a constant, then $$d^2y = d(dy) = d(f'(x)\,dx) = dx\,d(f'(x)) = dx\,f''(x)\,dx = f''(x)\,(dx)^2,$$ so dividing yields: $$\frac{d^2y}{(dx)^2} = \frac{d^2y}{dx^2} = f''(x).$$</p> <p>As to where this notation actually comes from, though: My guess is that it comes from a time when mathematicians primarily thought of $dx$ and $dy$ as "infinitesimal quantities." There are ways of doing so rigorously (via non-standard analysis), and perhaps there is a way of making this notation rigorous that way.</p> <hr> <p>However, we can still give rigorous meaning to these calculations without appealing to non-standard analysis by using the language of bilinear forms.</p> <p>If $f$ is differentiable, we can define a map \begin{align*} df\colon \mathbb{R} &amp; \to L(\mathbb{R}; \mathbb{R}) \\ df(x)(dx) &amp; = f'(x)\,dx. \end{align*} Here, $L(\mathbb{R};\mathbb{R})$ denotes the set of linear maps from $\mathbb{R} \to \mathbb{R}$, and $dx$ is simply a real number. Going one step further, we can consider the map $$d^2f = d(df)\colon \mathbb{R} \to L(\mathbb{R};L(\mathbb{R};\mathbb{R})).$$ By identifying $L(\mathbb{R}; L(\mathbb{R}; \mathbb{R}))$ with the set of bilinear maps $B(\mathbb{R} \times \mathbb{R};\mathbb{R})$, we have the bilinear map $$d^2f(x)(dx^1, dx^2) = dx^1\, f''(x) \,dx^2$$ whose associated quadratic form is $$d^2f(x)(dx) = f''(x)\,(dx)^2.$$ It is now perfectly legal to divide on both sides by $(dx)^2$, obtaining $$\frac{d^2f}{dx^2} = f''(x).$$</p>
<p>Somewhat mundanely,</p> <p>$$ \frac{d}{dx}\left(\frac{d}{dx}(y)\right) = \frac{d}{dx}\left(\frac{dy}{dx}\right) = \frac{d\,dy}{dx\,dx} = \frac{d^2 y}{dx^2} $$</p>
combinatorics
<blockquote> <p>In the fridge there is a box containing 10 expensive, high-quality Belgian chocolates, which my mum keeps for visitors. Every day, when mum leaves home for work, I secretly pick 3 chocolates at random, eat them, and replace them with ordinary cheap ones that have exactly the same wrapping. On the next day I do the same, obviously risking eating some of the cheap ones as well. How many days on average will it take for the full replacement of the expensive chocolates with cheap ones?</p> </blockquote> <p>I would say <span class="math-container">$10/3$</span> but this is very simplistic. Also, the total number of ways to pick 3 chocolates out of 10 is <span class="math-container">$\binom {10} 3=\frac {10!}{3!7!} = 120$</span>, which would suggest that after 120 days I will have replaced all the chocolates, but I don't think that is correct.</p> <p>Any help?</p>
<p>Sketch (not a complete solution, but a road map towards one):</p> <p>We proceed recursively. Let <span class="math-container">$E_i$</span> denote the expected number of days it takes given that you have exactly <span class="math-container">$i$</span> good ones left. The answer you want is <span class="math-container">$E_{10}$</span>.</p> <p>Let's compute <span class="math-container">$E_1$</span>, for example. Each day the good one gets selected with probability <span class="math-container">$\frac {3}{10}$</span>. Thus <span class="math-container">$$E_1=\frac {10}3$$</span></p> <p>Now let's consider <span class="math-container">$E_2$</span>. On the first day you get <span class="math-container">$0,1$</span> or <span class="math-container">$2$</span> of the good ones. We quickly deduce that <span class="math-container">$$\binom {10}3 \times E_2=\binom 83\times (E_2+1)+\binom 82\times \binom 21 \times (E_1+1)+ \binom 81\times \binom 22\times (1)$$</span></p> <p>This resolves to <span class="math-container">$$E_2=\frac {115}{24}\approx 4.7917$$</span></p> <p>Similarly, for <span class="math-container">$i&gt;2$</span> <span class="math-container">$$\binom {10}3\times E_i=\binom {10-i}3\times (E_i+1)+\binom {10-i}2\times \binom i1\times (E_{i-1}+1)$$</span> <span class="math-container">$$+\binom {10-i}1\times \binom i2\times (E_{i-2}+1)+\binom {10-i}0\times \binom {i}3\times (E_{i-3}+1)$$</span></p> <p>And we can solve the system step by step. The computation requires a bit of attention since in the recursion we must define <span class="math-container">$\binom nm=0$</span> when <span class="math-container">$n&lt;m$</span>.</p> <p>In the end I get about <span class="math-container">$\boxed {9.046}$</span> but as the comments will clearly indicate, a great many careless errors were made en route so I advise checking carefully.</p>
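<p>The recursion is easy to run exactly with rational arithmetic (a Python sketch of the scheme just described, taking <span class="math-container">$E_0=0$</span> and <span class="math-container">$\binom nm=0$</span> when <span class="math-container">$n&lt;m$</span>):</p> <pre><code>from fractions import Fraction
from math import comb        # comb(n, m) returns 0 when m &gt; n, as required here (Python 3.8+)

N, M = 10, 3
E = {0: Fraction(0)}
for i in range(1, N + 1):
    total = comb(N, M)
    # one day always passes; then add the expected remainder after eating j good ones (j &gt;= 1)
    rhs = total + sum(comb(i, j) * comb(N - i, M - j) * E[i - j]
                      for j in range(1, min(i, M) + 1))
    E[i] = rhs / (total - comb(N - i, M))
print(E[2], E[10], float(E[10]))   # E_2 = 115/24 and E_10 is about 9.046, matching the boxed value
</code></pre>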
<p><strong>A self-contained, closed-form solution to the problem.</strong></p> <p>Suppose there are <span class="math-container">$n$</span> expensive chocolates and each day you secretly pick <span class="math-container">$m$</span> chocolates for replacement. Let <span class="math-container">$T$</span> be a random variable denoting the number of days it takes to replace all the expensive chocolates. Observe that given any set <span class="math-container">$S$</span> of chocolates, the probability that you have managed to avoid <span class="math-container">$S$</span> in each of the first <span class="math-container">$i$</span> days is <span class="math-container">$$ \left[\binom{n-|S|}{m}/\binom{n}{m}\right]^i. $$</span> Indeed, each of the chosen sets of chocolates is independent of the others and has probability <span class="math-container">$\binom{n-|S|}{m}/\binom{n}{m}$</span> of avoiding the given set <span class="math-container">$S$</span>, since there are <span class="math-container">$\binom{n-|S|}{m}$</span> <span class="math-container">$m$</span>-element subsets avoiding <span class="math-container">$S$</span>.</p> <p>Thus, the probability that you have managed to avoid at least one chocolate during the first <span class="math-container">$i$</span> days is <span class="math-container">$$ \mathbb P(\text{avoid chocolate number 1})+\cdots +\mathbb P(\text{avoid chocolate number n}) $$</span> <span class="math-container">$$ -\mathbb P(\text{avoid chocolates number 1 and 2})-\mathbb P(\text{avoid chocolates number 1 and 3})-\cdots -\mathbb P(\text{avoid chocolates number $n-1$ and $n$}) $$</span> <span class="math-container">$$ \text{$+$ similar sum over all $3$ element subsets, etc}. $$</span> The previous calculation is using the <a href="https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle" rel="noreferrer">principle of inclusion-exclusion</a>: we start by overshooting the probability, then subtract off the double counting, then add back in the triple counting, and so on until we have nailed down the exact probability we are after.</p> <p>Notice that each of the probabilities in the previous display depends only on the number of elements in the set. Thus, we can group them up by number of elements, noting that we will have <span class="math-container">$\binom{n}{s}$</span> sets contributing to the term for the <span class="math-container">$s$</span>-element subsets. Therefore, <span class="math-container">$$ \mathbb P(T&gt;i)=\sum_{s=1}^n\binom{n}{s}\cdot (-1)^{s+1}\left[\binom{n-s}{m}/\binom{n}{m}\right]^i. $$</span></p> <p>Using the <a href="https://math.stackexchange.com/questions/63756/tail-sum-for-expectation">tail sum for expectation formula</a> which states that <span class="math-container">$$ \mathbb ET=\sum_{i=0}^{\infty}\mathbb P(T&gt;i), $$</span> we obtain that <span class="math-container">$$ \mathbb ET=\sum_{i=0}^{\infty}\sum_{s=1}^n\binom{n}{s}(-1)^{s+1}\left[\binom{n-s}{m}/\binom{n}{m}\right]^i. $$</span> All that remains is to simplify this expression. Manipulating infinite alternating sums can be subtle, but in this we can rearrange the order of summation since the series <a href="https://en.wikipedia.org/wiki/Absolute_convergence" rel="noreferrer">converges absolutely</a>. 
Therefore <span class="math-container">$$ \mathbb ET=\sum_{s=1}^n\binom{n}{s}(-1)^{s+1}\sum_{i=0}^{\infty}\left[\binom{n-s}{m}/\binom{n}{m}\right]^i=\sum_{s=1}^n\binom{n}{s}(-1)^{s+1}\frac{1}{1-\binom{n-s}{m}/\binom{n}{m}}, $$</span> where we have applied the geometric series formula <span class="math-container">$\sum_{i=0}^{\infty}x^i=\frac{1}{1-x}$</span> in the second equality.</p> <p>Finally, we can multiply through in the fraction to obtain that <span class="math-container">$$ \boxed{\mathbb ET=\binom{n}{m}\sum_{s=1}^n\frac{(-1)^{s+1}\binom{n}{s}}{\binom{n}{m}-\binom{n-s}{m}}}. $$</span></p> <p>Substituting <span class="math-container">$n=10$</span> and <span class="math-container">$m=3$</span>, we can evaluate the formula using this python snippet:</p> <pre><code>from math import factorial
from fractions import Fraction

def binom(n,k): # binomial coefficient n choose k
    return 0 if n &lt; k else factorial(n)//factorial(k)//factorial(n-k)

def e(n,m): # expected number of days to replace all n chocolates if replaced m at a time
    return binom(n,m)*sum(Fraction(pow(-1,s+1)*binom(n,s),binom(n,m)-binom(n-s,m)) for s in range(1,n+1))

e(10,3) # Fraction(8241679, 911064), approximately 9.05 days
</code></pre> <p>This is the general analytical solution to the problem. To see all the missteps and sources that inspired this solution, keep reading below for the previous iterations of this answer that led to here.</p> <hr> <p><em>See the bottom of this post for a previous incarnation of this answer, which cited an incorrect formula and was therefore incorrect.</em></p> <p>This problem has appeared many times in the literature. A closed-form solution is known and is valid with <span class="math-container">$3$</span> and <span class="math-container">$10$</span> replaced with arbitrary integers <span class="math-container">$m$</span> and <span class="math-container">$n$</span> satisfying <span class="math-container">$0\leq m\leq n$</span>.</p> <p>Theorem 2 in <a href="https://doi.org/10.2307/1427566" rel="noreferrer">Stadje [1990]</a> (specifically equation (2.15) therein) with <span class="math-container">$p=1$</span> and <span class="math-container">$l=s=n$</span> states that the desired expectation equals <span class="math-container">$$ \binom{n}{m}\sum_{j=0}^{n-1}\frac{(-1)^{n-j+1}\binom{n}{j}}{\binom{n}{m}-\binom{j}{m}}. $$</span> In this case <span class="math-container">$n=10$</span> and <span class="math-container">$m=3$</span> and therefore the expected number of days equals <span class="math-container">$$ \frac{8241679}{911064}\approx 9.05. $$</span></p> <p>A very nice and self-contained derivation of this formula appears at <a href="https://mathoverflow.net/questions/229060/batched-coupon-collector-problem/335109#335109">mathoverflow</a>, although beware that the answer containing this derivation has a minor mistake, which I have corrected both in this post and over at the linked mathoverflow post as well.</p> <hr> <p><em>Original answer, which cited an incorrect formula from mathoverflow. I have corrected that formula and posted an explanation of the mistake at mathoverflow.</em></p> <p>This problem has been solved (with <span class="math-container">$3$</span> and <span class="math-container">$10$</span> replaced by <span class="math-container">$b$</span> and <span class="math-container">$n$</span>, respectively) over at <a href="https://mathoverflow.net/questions/229060/batched-coupon-collector-problem">math overflow</a> (and in the classical literature).
The general formula for the expected number of batches required is <span class="math-container">$$ \sum_{s=1}^{n-b}\frac{(-1)^{s+1}\binom{n}{s}}{1-\binom{n-s}{b}/\binom{n}{b}}. $$</span> Plugging in <span class="math-container">$n=10$</span> and <span class="math-container">$b=3$</span> yields <span class="math-container">$$ \frac{\binom{10}{1}}{1-\binom{9}{3}/\binom{10}{3}}-\frac{\binom{10}{2}}{1-\binom{8}{3}/\binom{10}{3}}+\frac{\binom{10}{3}}{1-\binom{7}{3}/\binom{10}{3}}-\cdots +\frac{\binom{10}{7}}{1-\binom{3}{3}/\binom{10}{3}}, $$</span> which evaluates to <span class="math-container">$$ \frac{41039983}{911064}\approx 45. $$</span> Compared to the other answer I see we agree in the denominator but not in the numerator...</p> <p>[EDIT:] In fact, the incorrect answer is exactly <span class="math-container">$36$</span> greater than the correct answer. The incorrect answer equals <span class="math-container">$$ \binom{10}{3}\sum_{j=3}^{9}\frac{(-1)^{11-j}\binom{10}{j}}{\binom{10}{3}-\binom{j}{3}} $$</span> whereas the correct answer equals <span class="math-container">$$ \binom{10}{3}\sum_{j=0}^{9}\frac{(-1)^{11-j}\binom{10}{j}}{\binom{10}{3}-\binom{j}{3}} $$</span> and the difference is <span class="math-container">$$ \binom{10}{3}\sum_{j=0}^{2}\frac{(-1)^{11-j}\binom{10}{j}}{\binom{10}{3}-\binom{j}{3}}=-\binom{10}{0}+\binom{10}{1}-\binom{10}{2}=-1+10-45=-36. $$</span></p>
number-theory
<p><strong>Question:</strong></p> <blockquote> <p>Let $k$ be a positive integer. Show that there exists $n$ such that $$I=k!+(2k)!+(3k)!+\cdots+(nk)!$$ has a prime divisor $P$ such that $P&gt;k!$.</p> </blockquote> <p><strong>My idea:</strong> Let us denote by $d_{p}(n)$ the maximal power of $p$ that divides $n$. We can write $$I=k![1+(k+1)(k+2)\cdots (2k)+\cdots+(k+1)(k+2)\cdots (nk)]$$ but from here I can't finish the proof. Maybe we can use <a href="http://en.wikipedia.org/wiki/Zsigmondy%27s_theorem">Zsigmondy's theorem</a>?</p> <blockquote> <p>It is said that one can use the following inequality: $$\left(1+\dfrac{1}{12n}\right)\left(\dfrac{n}{e}\right)^n\cdot\sqrt{2n\pi}&lt;n!&lt;\left(1+\dfrac{1}{4n}\right)\left(\dfrac{n}{e}\right)^n\cdot\sqrt{2n\pi}$$ By the way, I feel this problem is very interesting, but I can't prove it. </p> </blockquote> <p>This problem is from the 2014 China national team training (the training through which China selects the 6 students who represent it at the IMO; the problem comes from the training material).</p> <p>Maybe @Ivanh and others can help me. Thank you.</p>
<p>Fix $k$ arbitrarily and denote the sum $k! + (2k)! + \cdots + (nk)!$ by $I_n$. It is enough to show that the set of primes $p$ that divide at least one of the $I_n$ is infinite—here is an argument that this is so.</p> <p>Suppose otherwise that there are only finitely many primes that divide any of the $I_n$. For a given prime $p$ and integer $a$, let $e_p(a)$ be the exponent of $p$ in $a$. We'll need the following property of $e_p$: for any pair $a,b$ of integers, \begin{align*} e_p(a+b) = \min\{e_p(a),e_p(b)\} \tag{1} \end{align*} except possibly when $e_p(a) = e_p(b)$. </p> <p>Let $U$ be the set of all primes $p$ such that $e_p(I_{n-1}) \geq e_p((nk)!)$ for all $n$ (the primes with unbounded exponent in the sequence $\{I_n\}$), and let $B$ be the complementary set, i.e., the set of all primes $p$ such that $e_p(I_{n-1})&lt;e_p((nk)!)$ for some $n$. Note that if $e_p(I_{n-1}) &lt; e_p((nk)!)$ then $e_p(I_{n-1}) = e_p(I_n) = \cdots$ by virtue of $(1)$. Since there are only finitely many primes $p \in B$ for which $e_p(I_n)$ is ever nonzero, there is an integer $N$ with the property that $e_p(I_n) = e_p(I_N)$ for each $p \in B$ and for all $n \geq N$ (any $N$ such that $Nk$ is at least as big as the largest prime that divides some $I_n$ will do). The point is that $e_p(I_n)$ is constant for sufficiently large $n$ unless $p\in U$.</p> <p>We have required only that $e_p(I_{n-1}) \geq e_p((nk)!)$ when $p \in U$. But $(1)$ implies that in certain circumstances the two exponents must be equal. Indeed, where $((n+1)k)!$ contains more factors of $p$ than $(nk)!$, we must have $e_p(I_{n-1}) = e_p((nk)!)$ lest $(1)$ give \begin{align*} e_p(I_n) = \min\{e_p(I_{n-1}),e_p((nk)!)\} = e_p((nk)!) &lt; e_p(((n+1)k)!) \end{align*} in violation of the assumption that $p \in U$. </p> <p>If we now choose $n$ carefully we can force $e_p(I_{n-1}) = e_p((nk)!)$ for all $p \in U$. The foregoing argument shows that this will happen whenever $e_p((nk)!) &lt; e_p(((n+1)k)!)$ for all $p\in U$. If $m = \prod_{p\in U}p$, then $n = am -1$ satisfies this condition for all $a\in\mathbb{N}$. Thus: \begin{align*} e_p(I_{am-2}) = e_p(((am-1)k)!) \quad\text{for $p\in U$ and $a\in\mathbb{N}$.} \end{align*}</p> <p>We just need one more fact, and that is that $((am-1)k)!$ contains no more factors of any prime $p \in U$ than $((am-2)k)!$ does. If this is true, then we get the following chain for $p\in U$ and $a\in\mathbb{N}$: \begin{align*} e_p(I_{am-3})\geq e_p(((am-2)k)!) = e_p(((am-1)k)!) = e_p(I_{am-2}). \tag{2} \end{align*} Since for large $a$ we have $e_p(I_{am-3}) = e_p(I_{am-2})$ for $p \notin U$, the chain $(2)$ implies for large $a$ that $I_{am-3}$ contains at least as many factors of every prime $p$ as $I_{am-2}$ does. This in turn means that $I_{am-3}\geq I_{am-2}$, which is our desired contradiction.</p> <p>So it remains only to show that \begin{align*} e_p(((am-1)k)!) = e_p(((am-2)k)!)\quad\text{for $p\in U$}. \tag{3} \end{align*} Since $((am-1)k)!/((am-2)k)! = (amk - k)(amk-k-1)\cdots (amk - 2k+1)$, the exponents will be unequal only if $p$ divides $amk-j$ for some $j&lt;2k$. For $p\in U$ this will be true only if $p$ divides $j$, since in that case $p$ divides $m$. In particular, we would need $p&lt;2k$. But no prime $p$ smaller than $2k$ has unbounded exponent in the sequence $\{I_n\}$ because $I_n/k!$ is not divisible by any such prime. Thus every prime $p\in U$ is bigger than $2k$, and $(3)$ is proved.</p>
<p>Here are a few observations and thoughts.</p> <p>Up to $k=30$ one needs $n=3$ for $k=7,11,12,15,16,18,19,20,21,23$ but $n=2$ is sufficient the other $20$ times.</p> <p>Let $I_{k,n}=I=k!+(2k)!+(3k)!+\cdots+(nk)!=k!J_{k,n}$ where $J_{k,n}=1+\frac{2k!}{k!}+\frac{3k!}{k!}+\cdots+\frac{nk!}{k!}$. A prime less than $2k$ divides all the summands of $J_{k,n}$ except $1$ and hence does not divide $J_{k,n}.$ For $t \gt 2$ and prime $tk \lt p \lt (t+1)k$ we have that $J_{k,n} \equiv J_{k,t} \bmod p$ for $n \ge t$ since all the summands starting with $\frac{(t+1)k!}{k!}$ are multiples of $p$. Hence $p$ divides either all of the $J_{k,n}$ with $kn+k \gt p$ or none of them. </p> <p>This feels like it is getting close to giving hope for a proof but gaps remain. It seems likely that , for fixed $k$, most primes (in a sense) divide only finitely many of the $J_{k,n}$, in fact probably most divide none of them. It turns out that $1+\frac{26!}{13!}=31\cdot 4591 \cdot 455061112081$ so $31 \mid J_{13,n}$ for all $n \ge 2.$ Similarly $31 \mid J_{15,n}$ , $41 \mid J_{18,n}$ and $23 \mid J_{11,n}$ for $n \ge 2$. Even for such primes, $p^e$ divides either all or none of the $J_{k,n}$ with $kn+n \gt ep.$ In these four cases, none get to $p^2$ except that $41^2 \mid J_{18,4}.$ However $41^3 \nmid J_{18,6}.$</p> <p>Up to $k=30$ there are three small cases of $J_{k,2}$ being a square (a square of a prime as it happens) but otherwise no repeated prime factors (also none for $J_{k,3}$ in the $10$ cases above that needed it.)</p> <p>The three cases are as follows (the expansions given look nice but I don't claim anything special about them) $$J_{3,2}=1+4\cdot 5 \cdot 6=1+4\frac{11-1}{2}\frac{11+1}{2}=11^2$$ $$J_{4,2}=1+(5 \cdot 8) (6 \cdot 7) =1+(41-1)(41+1)=41^2$$ and $$J_{7,2}=1+12 \cdot(9\cdot 11 \cdot 14)(8 \cdot 10 \cdot 13)=1+12\frac{4158}{3}\frac{4160}{4}=4159^2.$$</p> <p>As far as the phenomenon seen above (for $p=23,31$) of primes $p=2k+1$ with $p \mid J_{k,2},$ these are the primes described <a href="https://mathoverflow.net/questions/16141/primes-p-such-that-p-1-2-1-mod-p">here</a>.</p>
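<p>For small $k$ these observations are easy to reproduce computationally (a sketch using SymPy's <code>factorint</code>, which is my own addition; the range $k\le 8$ is only to keep the factorizations fast, and the output for $k=7$ should show that $n=3$ is needed there):</p> <pre><code>from math import factorial
from sympy import factorint

def I(k, n):
    # k! + (2k)! + ... + (nk)!
    return sum(factorial(j * k) for j in range(1, n + 1))

for k in range(2, 9):
    n = 2
    while max(factorint(I(k, n))) &lt;= factorial(k):   # largest prime factor vs. k!
        n += 1
    print(k, n)       # smallest n for which I has a prime divisor exceeding k!
</code></pre>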
probability
<p>Consider a two-sided coin. If I flip it $1000$ times and it lands heads up for each flip, what is the probability that the coin is unfair, and how do we quantify that if it is unfair? </p> <p>Furthermore, would it still be considered unfair for $50$ straight heads? $20$? $7$?</p>
<p>First of all, you must understand that there is no such thing as a perfectly fair coin, because there is nothing in the real world that conforms perfectly to some theoretical model. So a useful definition of "fair coin" is one, that for practical purposes behaves like fair. In other words, no human flipping it for even a very long time, would be able to tell the difference. That means, one can assume, that the probability of heads or tails on that coin, is $1/2$. </p> <p>Whether your particular coin is fair (according to above definition) or not, cannot be assigned a "probability". Instead, statistical methods must be used. </p> <p>Here, you make a so called "null-hypothesis": "the coin is fair". You then proceed to calculate the probability of the event you observed (to be precise: the event, or something at least as "strange"), assuming the null-hypothesis were true. In your case, the probability of your event, 1000 heads, or something at least as strange, is $2\times1/2^{1000}$ (that is because you also count 1000 tails).</p> <p>Now, with statistics, you can never say anything for sure. You need to define, what you consider your "confidence level". It's like saying in court "beyond a reasonable doubt". Let's say you are willing to assume confidence level of 0.999 . That means, if something that had supposedly less than 0.001 chance of happening, actually happened, then you are going to say, "I am confident enough that my assumptions must be wrong". </p> <p>In your case, if you assume the confidence level of 0.999, and you have 1000 heads in 1000 throws, then you can say, "the assumption of the null hypothesis must be wrong, and the coin must be unfair". Same with 50 heads in 50 throws, or 20 heads in 20 throws. But not with 7, not at this confidence level. With 7 heads (or tails), the probability is $2 \times 1/2 ^ {7}$ , which is more than 0.001.</p> <p>But if you assume confidence level at 95% (which is commonly done in less strict disciplines of science), then even 7 heads means "unfair". </p> <p>Notice that you can never actually "prove" the null hypothesis. You can only reject it, based on what you observe is happening, and your "standard of confidence". This is in fact what most scientists do - they reject hypotheses based on evidence and the accepted standards of confidence. </p> <p>If your events do not disprove your hypothesis, that does not necessarily mean, it must be true! It just means, it withstood the scrutiny so far. You can also say "the results are consistent with the hypothesis being true" (scientists frequently use this phrase). If a hypothesis is standing for a long time without anybody being able to produce results that disprove it, it becomes generally accepted. However, sometimes even after hundreds of years, some new results might come up which disprove it. Such was the case of General Relativity "disproving" Newton's classical theory. </p>
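<p>In code, the computation described above is only a couple of lines (a sketch of my own; the "at least as strange" event for $n$ heads out of $n$ flips has probability $2\times(1/2)^n$):</p> <pre><code>def p_value(n_heads):
    # probability, under the fair-coin null hypothesis, of n heads OR n tails in n flips
    return 2 * 0.5 ** n_heads

for n, level in [(7, 0.05), (7, 0.001), (20, 0.001), (50, 0.001), (1000, 0.001)]:
    decision = 'reject "fair coin"' if p_value(n) &lt; level else 'cannot reject'
    print(n, level, p_value(n), decision)   # for n = 1000 the p-value underflows to 0.0
</code></pre>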
<p>If you take a coin you have modified so that it always lands in heads and you get $1000$ heads then the probability of it being unfair is $100\%$.</p> <p>If you take a coin you have crafted yourself and carefully made sure that it is a fair coin and then you get $1000$ heads then the probability of it being unfair is $0\%$.</p> <p>Next, you fill a box with coins of both types, then take a random coin.</p> <ul> <li>$NF$ : fair coins in the box.</li> <li>$NU$ : unfair coins in the box</li> <li><p>$P(U)$ : probability of having taken an unfair coin $$P(U) = \frac{NU}{NF + NU}$$</p></li> <li><p>$P(F)$ : probability of having taken a fair coin $$ P(F) = \frac{NF}{NF + NU} = 1 - P(U) $$</p></li> <li>$P(H \mid{U})$ : Probability of having 1000 heads conditioned to having take an unfair coin $$P(H\mid{U}) = 1 $$</li> <li>$P(H\mid{F})$ : Probability of having 1000 heads conditioned to having taken a fair coin $$P(H\mid{F}) = \left( \tfrac{1}{2} \right)^{1000}$$</li> <li>$P(H)$ : Probability of having 1000 heads</li> </ul> <p>\begin{align} P(H) &amp;= P(U \cap H) + P(F \cap H)\\ &amp;= P(H \mid{U})P(U) + P(H \mid{F})P(F)\\ &amp;= P(U) + P(H \mid{F})P(F) \end{align}</p> <p>By applying <a href="https://en.wikipedia.org/wiki/Bayes%27_theorem" rel="noreferrer">Bayes theorem</a> :</p> <p>$P(U \mid{H})$ : probability of the coin being unfair conditioned to getting 1000 heads $$P(U\mid{H}) = \frac{P(H \mid{U})P(U)}{P(H)} = \frac{P(U)}{P(U) + P(H\mid{F})P(F)}$$</p> <p>And that is your answer.</p> <hr> <h2>In example</h2> <p>If $P(U)=1/(6 \cdot 10^{27})$ ($1$ out of every $6 \cdot 10^{27}$ coins are unfair) and you get 1000 heads then the probability of the coin being unfair is \begin{align} \mathbf{99}.&amp;999999999999999999999999999999999999999999999999999999999999999999999\\ &amp;999999999999999999999999999999999999999999999999999999999999999999999\\ &amp;999999999999999999999999999999999999999999999999999999999999999999999\\ &amp;999999999999999999999999999999999999999999999999999999999999999944\% \end{align}</p> <p>Very small coins like the USA cent have a weight of $2.5g$. We can safely assume that there are no coins with a weight less than 1 gram.</p> <p>Earth has a weight of less than $6 \cdot 10^{27}$ grams. Thus we know that there are less than $6 \cdot 10^{27}$ coins. We know that there is at least one unfair coin ( I have seen coins with two heads and zero tails) thus we know that $P(U) \ge 1/(6 \cdot 10^{27})$.</p> <p>And thus we can conclude that if you get 1000 heads then the probability of the coin being unfair is at least \begin{align} \mathbf{99}.&amp;999999999999999999999999999999999999999999999999999999999999999999999\\ &amp;999999999999999999999999999999999999999999999999999999999999999999999\\ &amp;999999999999999999999999999999999999999999999999999999999999999999999\\ &amp;999999999999999999999999999999999999999999999999999999999999999944\% \end{align}</p> <p>This analysis is only valid if you take a random coin and only if coins are either $100\%$ fair or $100\%$ unfair. It is still a good indication that yes, with $1000$ heads you can be certain beyond any reasonable doubt that the coin is unfair.</p>
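<p>The final numbers can be reproduced exactly with rational arithmetic (a Python sketch of the formula above, using the illustrative prior $P(U)=1/(6 \cdot 10^{27})$):</p> <pre><code>from fractions import Fraction

def posterior_unfair(p_u, n_heads):
    p_f = 1 - p_u
    p_h_given_f = Fraction(1, 2) ** n_heads     # P(H | F): n_heads heads in a row from a fair coin
    return p_u / (p_u + p_h_given_f * p_f)      # Bayes: P(U | H), with P(H | U) = 1

p = posterior_unfair(Fraction(1, 6 * 10**27), 1000)
print(float(1 - p))    # roughly 6e-274, so P(U | H) is 1 minus a vanishingly small number
</code></pre>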
geometry
<p>The ancient discipline of construction by straightedge and compass is both fascinating and entertaining. But what is its significance in a mathematical sense? It is still taught in high school geometry classes even today. </p> <p>What I'm getting at is this: Are the rules of construction just arbitrarily imposed restrictions, like a form of poetry, or is there a meaningful reason for prohibiting, say, the use of a protractor?</p>
<p>Okay, I seem to be ranting too much in comments, so let me try to put forth my points and opinion here (as a community wiki, since it is opinion). </p> <p>First, to address the question: What is the significance of straightedge and compass constructions within mathematics? Historically, they played an important role and led to a number of interesting material (the three famous impossibilities lead very naturally to transcendental numbers, theories of equations, and the like). They are related to fascinating stuff (numbers constructible by origami, etc). But I would say that their significance parallels a bit the <a href="https://math.stackexchange.com/questions/10029/10034#10034">significance of Cayley's Theorem</a> in Group Theory: although important historically, and relevant to understand the development of many areas of mathematics, they are not that particularly important today. As you can see from the responses, many find them "fascinating", many find them "boring", but nobody seems to have come forth with an important application.</p> <p>Now, addressing the issue of teaching it at K-12. Let me preface this by saying that I am a "survivor" of the <a href="http://en.wikipedia.org/wiki/New_Math" rel="nofollow noreferrer">New Math</a>, which came to Mexico (where I grew up) in the 70s. I would say I became a mathematician in large part <em>despite</em> having been taught with New Math, rather than <em>because</em> of it. Also, I did not attend K-12 education or undergraduate in the U.S.; I am a bit more familiar with undergraduate at the level of precalculus and above thanks to my job, but not very much with the details of curricula in sundry states in the U.S. So I may very well have a wrong impression of details in what follows.</p> <p>Now, one problem, in my view, is that when we talk about "math education", we are really talking about <em>two</em> different things: numeracy and mathematics. This is the same phenomenon we see when we think about "English class". I suspect that English Ph.D.s are nonplussed at people who think they spend their time dealing with grammar, spelling, punctuation, etc, just as mathematicians are nonplussed that people think we spend our time multiplying really big numbers by hand. English education has two distinct components, which we might call "Literacy" and "Literature". Literacy is the component where we try to teach students to read and write effectively, spelling, punctuation, etc. Making themselves understood in written form, and understanding the written word. On the other hand, Literature is the component where they are introduced to Shakespeare, novelists, book reading, short stories, poetry, creative writing, historical and world literature, etc. We consider English education at our schools a failure when it fails in its mission with regards to <strong>literacy</strong>: we don't consider it a failure if students come out not being particularly enthused with reading classic novels or don't become professional poets, or even if they don't care or like poetry (or Shakespeare). The professional English Ph.D. engages in literature, not in literacy. Literacy is the domain of the grammarian.</p> <p>Mathematics education likewise has two components; numeracy (to borrow the term from John Allen Paulos) and mathematics. Numeracy is the parallel of literacy: we want children to be able to handle and understand numbers and basic algebra, percentages, etc., because they are necessary to function in the world. 
Mathematics is the stuff that mathematicians do, and which we all find so interesting and beautiful. </p> <p>"Mathematics" also includes advanced topics that are necessary for someone who is going to go on to study areas that require mathematics: so the physicists and engineers need to know trigonometry; business and economists need to know advanced statistics; computer science needs to know discrete mathematics; etc. Much like someone going on to college likely needs more than simple command of grammar and spelling. </p> <p>Part of the problem with math education is that it so often conflates numeracy with mathematics; another part of the problem is that mathematics was included in the "classical" curriculum for much the same reason as Latin and Greek were included: historical reasons, because every "gentleman" was expected to know some Latin, some Greek, and some mathematics (and by "mathematics", people meant Euclid). To some extent, we still teach geometry in K-12 because we've always taught geometry. But it is not part of the numeracy curriculum.</p> <p>It would be wonderful if we had the time in K-12 to teach students <em>both</em> numeracy and mathematics; it is a fact that today we are failing at both. We don't perform better in teaching students numeracy by teaching them beautiful mathematics, even if they understand and appreciate them, just like making them really understand and care for Shakespeare's plays (through performance, say) will make them able to understand a set of written instructions, or write a coherent argument.</p> <p>Trying to excite students about mathematics is all well and good; but numeracy should come first. Trying to excite students about the wonderful world of books, plays, and poetry is all well and good, but we need them to be able to read and write first, because that's part of what they will need to function in society.</p> <p>Constructions by straightedge and compass can become an interesting part of <em>mathematics</em> education, just like geometry. I just don't think they have a place in <em>numeracy</em> education. But they are taking the time we need for that numeracy education. Trigonometry <em>used to be</em> part of the numeracy education that people needed; this is no longer the case today. Trigonometry, today, is a foundational science for more advanced studies, not part of numeracy. But we are still teaching trigonometry as a numeracy subject (hence the rules, recipes, mnemonics, and the like). We could try to turn trigonometry into a <em>mathematical</em> subject, sure; or we could postpone it until later and only teach it to those for whom it is an important foundation. </p> <p><em>Added.</em> To clarify: I don't mean to say that the "solution" is to do less math and more numeracy. I think the solution is likely to be complicated, but the first step is to identify exactly what parts of what is currently branded as "mathematics" are really numeracy, and which parts are mathematics. Trigonometry is taught as part of "advanced mathematics", but it is taught as rote and rules because it was really numeracy. We don't need to teach trigonometry as numeracy any more, so we shouldn't. If it is to be taught, it needs to be taught in the right context. Geometry is similar: geometry used to be taught as basic numeracy because "every educated person should know geometry"; (of course, "educated" at the time meant "rich and land owner, or with aspirations in that direction"). 
We don't need most of geometry as basic numeracy, we want it now as mathematics. So teaching geometry <em>as</em> numeracy is a waste of time, and it takes up the <em>numeracy</em> time that should be spent in other things. (It can still be taught during the <em>mathematics</em> time). I certainly don't say "drop all the math, concentrate on the numeracy". I say, "when dealing with numeracy, concentrate on the numeracy, not the math, and don't confuse the two." Nobody seems to confuse spelling rules with reading novels, because we separate literacy from literature. Too many people confuse arithmetic with mathematics, because we don't separate them.</p> <p>The reason I talk about dropping trigonometry and doing some basic statistics is precisely that: trigonometry is being taught as part of the <em>numeracy</em> curriculum, when it shouldn't. Basic statistics, say at the level of the wonderful <strong>How to Lie With Statistics</strong>, is not taught as part of numeracy. But in today's world, there is a far better case for statistics being part of the basic numeracy education than trigonometry. Everyone coming out of High School <em>should know</em> that taking a 10% pay-cut and then getting a 5% raise does not mean you are now at 95% of your old salary (go do a spot check, see how many people think you <em>are</em>). They need to know the difference between <em>average</em> and <em>median</em>, so they are not misled by statements about "the average salary of the American worker". They should understand what "false positive" and "false negative" means. They should be able to interpret graphs (even the silly ones on the cover of every <em>USA Today</em> issue) and be able to spot the distortions created by chopping axes, etc. These are <em>numeracy</em> issues. </p> <p>Likewise high school geometry: it is trying to be both numeracy and mathematics, and I think it generally fails at both. There are some components of geometry that are part of numeracy, they ought to be treated that way, but the parts that are mathematics should be separate.</p> <p>One problem I have with Lockhart's lament is that he does not make clear the distinction between numeracy and mathematical education. The nightmare he paints for the musician and the artist is precisely that musical education is being turned into the equivalent of numeracy/literacy education, thus doing a disservice to music-as-an-art. </p> <p>The science of mathematics that we all know and love has some intersection with numeracy and arithmetic, but we all know it is a limited intersection; just as the study of literature has some intersection with the study of grammar an spelling, but the intersection is limited. The main purpose of K-12 education (or at the very least, K-6 or K-8) should be <em>numeracy</em>, with some limited forays into mathematics (just as the main purpose will be <em>literacy</em> with some limited forays into <em>literature</em>). The way to teach numeracy is necessarily different from the way to teach mathematics. Numeracy requires that we memorize multiplication tables, for all the horror this will cause to modern education people; this of course is a far cry from teaching the <em>mathematics</em> of multiplication, which may very well be very interesting and awaken the child's curiosity and wonder at the world. That can be done within the context of <em>mathematical</em> education, but it shouldn't be done in the context, and at the expense of, numeracy.</p> <p>Trigonometry used to be part of numeracy; it no longer is. 
It is now either mathematical or foundational for advanced studies, so it should be treated as such. Geometry used to be taught for reasons which no longer hold, and to some extent we continue teaching it as a historical legacy; we shouldn't. Those components which are numeracy should be taught as that, and we could move the rest (including constructions with compass and straightedge) to the more creative, mathematical education side of the equation. </p> <p>Anyway, I've ranted long enough, and probably made myself a few detractors along the way...</p>
<p>I personally think straightedge and compass constructions are among the most important tools I acquired during high school, regarding mathematics.</p> <p>Other than it being, as you said, a fascinating and entertaining subject- it was the first time before college that I ever had to work under severe restrictions- and got unbelievable results. Later on, when I discovered Galois theory and field theory, the whole thing became even cooler.</p> <p>What I'm getting at is that I think subjects like constructions are extremely important for math education. A few days ago someone posted Lockhart's Lament- it mentions the fact that math became a subject of industry- kids learn math as a technical and terrifying subject. Telling them to use protractors and calculators and other tools that provide "short cuts" to solutions of problems makes math into paperwork. </p> <p>Putting on restrictions, and teaching how to come up with beautiful stuff under them- makes it into a game. I remember when I was a kid I wasn't very much interested in paperwork. But games, I liked games. And that kind of thing is what got me to enroll in a mathematics degree. At least that's my opinion on the matter :)</p>
matrices
<p>Another <a href="https://math.stackexchange.com/questions/4101200/a-3-times3-triangular-matrix-has-a-repeated-eigenvalue-if-it-is-the-square-of#4101200">question</a> brought this up. The only definition I have ever seen for a matrix being upper triangular is, written in component forms, &quot;all the components below the main diagonal are zero.&quot; But of course that property is basis dependent. It is not preserved under change of basis.</p> <p>Yet it doesn't seem as if it would be purely arbitrary because the product of upper triangular matrices is upper triangular, and so forth. It has closure. Is there some other sort of transformation besides a basis transformation that might be relevant here? It seems as if a set of matrices having this property should have some sort of invariants.</p> <p>Is there some sort of isomorphism between the sets of upper triangular matrices in different bases?</p>
<p>Many true things can be said about upper-triangular matrices, obviously... :)</p> <p>In my own experience, a useful more-functional (rather than notational) thing that can be said is that the subgroup of <span class="math-container">$GL_n$</span> consisting of upper-triangular matrices is the <em>stabilizer</em> of the <em>flag</em> (nested sequence) of subspaces consisting of the span of <span class="math-container">$e_1$</span>, the span of <span class="math-container">$e_1$</span> and <span class="math-container">$e_2$</span>, ... with standard basis vectors.</p> <p>Concretely, this means the following. The product of an upper-triangular matrix <span class="math-container">$A$</span> with <span class="math-container">$e_1$</span>, namely <span class="math-container">$Ae_1$</span>, is a multiple of <span class="math-container">$e_1$</span>, right? However, <span class="math-container">$Ae_2$</span> is more than a multiple of <span class="math-container">$e_2$</span>: it can be any linear combination of <span class="math-container">$e_1$</span> and <span class="math-container">$e_2$</span>. Generally, if you set <span class="math-container">$V_i= \operatorname{span}(e_1, \ldots, e_i) $</span>, try to show that <span class="math-container">$A$</span> is upper triangular if and only if <span class="math-container">$A(V_i) \subseteq V_i$</span> for every <span class="math-container">$i$</span>. The nested sequence of spaces</p> <p><span class="math-container">$$ 0 = V_0 \subset V_1 \subset \ldots \subset V_n = \mathbb{R}^n$$</span></p> <p>is called a <em>flag</em> of the total space.</p> <p>One proves a lemma that <em>any</em> maximal chain of subspaces can be mapped to that &quot;standard&quot; chain by an element of <span class="math-container">$GL_n$</span>. In other words, no matter which basis you are using, being triangular intrinsically means respecting a maximal flag, that is, a flag with <span class="math-container">$\dim(V_i) = i$</span> (this last condition translates the <em>maximality</em> of the flag).</p> <p>As Daniel Schepler aptly commented, while an ordered basis gives a maximal flag, a maximal flag does <em>not</em> quite specify a basis. There are more things that can be said about flags versus bases... unsurprisingly... :)</p>
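<p>The characterization that $A$ is upper triangular if and only if $A(V_i) \subseteq V_i$ for every $i$ is easy to test numerically (a small NumPy sketch of my own, not from the answer above):</p> <pre><code>import numpy as np

def preserves_standard_flag(A, tol=1e-12):
    # A(V_i) lies in V_i for V_i = span(e_1, ..., e_i)  iff  each column j has zeros below entry j
    n = A.shape[0]
    return all(np.all(np.abs(A[j+1:, j]) &lt; tol) for j in range(n))

rng = np.random.default_rng(0)
U = np.triu(rng.normal(size=(4, 4)))        # an upper-triangular matrix
M = rng.normal(size=(4, 4))                 # a generic matrix
print(preserves_standard_flag(U), preserves_standard_flag(M))   # True False
</code></pre>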
<p>Being upper triangular is not a property of linear transformations unless you have an ordered basis. Even changing the order of a basis can change an upper triangular matrix to a matrix which is not, or vice versa.</p> <p>The upper triangular <span class="math-container">$n\times n$</span> matrices are the second most simple form of an <a href="https://en.wikipedia.org/wiki/Incidence_algebra" rel="noreferrer">incidence algebra</a>, where we take the partially ordered set to be <span class="math-container">$\{1,2,\dots,n\}$</span> with the usual order.</p> <p>The most trivial incidence algebra is the algebra of diagonal matrices, where the order is <span class="math-container">$i\preccurlyeq j$</span> iff <span class="math-container">$i=j.$</span></p> <p>One interesting thing about upper-triangular matrices is that they form an algebra even when infinite-dimensional. This is because, while multiplication requires infinite sums, all but finitely many of them are zero. You can even take upper triangular matrices with rows/columns infinite in both directions by using <span class="math-container">$(\mathbb Z,\leq)$</span> for your Poset.</p> <p>(In a general infinite poset <span class="math-container">$P$</span> what is required for the algebra’s multiplication to be well-defined is for the poset to be “locally finite.”)</p>
game-theory
<p>Alice chooses a positive integer $n$ and Bob tries to guess it.</p> <p>In every turn, Bob guesses an integer $x$ $(x&gt;0)$:</p> <ol> <li><p>If $x$ equals $n$, then Alice tells Bob that he found it, and the game ends.</p></li> <li><p>If not, then Alice tells Bob whether $x+n$ is prime or not.</p></li> </ol> <p>How should Bob play this game optimally (that is to say, to find the number in the fewest turns)?</p> <p>You can assume $0&lt; n\le100$ (or any reasonable bound).</p> <p>PS: This game was invented by me.</p>
<p>It's not possible to solve the game in all cases with fewer than 7 guesses, because after 6 guesses you have at least 95 unguessed numbers and 6 bits of information and $2^6&lt;94.$</p> <p>But actually 7 is not achievable, since at most 26 primes can appear in any given block of 101. Thus the first guess, in the worst case, will have at least 74 remaining numbers, and so at least 8 guesses are required since $2^6&lt;68$.</p> <p>The goal at each step is to find numbers that split the remaining numbers into two sets of roughly equal size. Here are the beginnings of a strategy for 9. Ask about 3. If the number is 3 you're done; if n + 3 is prime it should not be hard to guess the number in 8 more tries.</p> <p>Otherwise 74 numbers remain: 1, 5, 6, 7, 9, 11, 12, 13, 15, 17, 18, 19, 21, 22, 23, 24, 25, 27, 29, 30, 31, 32, 33, 35, 36, 37, 39, 41, 42, 43, 45, 46, 47, 48, 49, 51, 52, 53, 54, 55, 57, 59, 60, 61, 62, 63, 65, 66, 67, 69, 71, 72, 73, 74, 75, 77, 78, 79, 81, 82, 83, 84, 85, 87, 88, 89, 90, 91, 92, 93, 95, 96, 97, 99. Now ask about 10. If n+10 is prime it should be easy to find the number in 7 more tries.</p> <p>Otherwise 50 numbers remain: 5, 6, 11, 12, 15, 17, 18, 22, 23, 24, 25, 29, 30, 32, 35, 36, 39, 41, 42, 45, 46, 47, 48, 52, 53, 54, 55, 59, 60, 62, 65, 66, 67, 71, 72, 74, 75, 77, 78, 81, 82, 83, 84, 85, 88, 89, 90, 92, 95, 96. Now ask about 12. If n is 12 you're done; if n+12 is prime you have 6 tries to find 16 numbers.</p> <p>Now 33 numbers remain: 6, 15, 18, 22, 23, 24, 30, 32, 36, 39, 42, 45, 46, 48, 52, 53, 54, 60, 62, 65, 66, 72, 74, 75, 78, 81, 82, 83, 84, 88, 90, 92, 96. Now ask about 1. If n+1 is prime there are 15 possibilities and you have 5 tries.</p> <p>Now you have just 18: 15, 23, 24, 32, 39, 45, 48, 53, 54, 62, 65, 74, 75, 81, 83, 84, 90, 92. This is a bit tricky, but guess 1139 (or 1148) to split the remainder into two sets of 9.</p> <p>Either way you go at least one of the sets has 5 members (4 would be possible only if you picked one of the 18).</p>
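<p>The candidate lists above can be reproduced mechanically (a Python sketch using SymPy's isprime, which is my own addition; it simply filters the survivors after each stated guess and "not prime" answer):</p> <pre><code>from sympy import isprime

def survivors_after(candidates, guess):
    # n is still possible after hearing "wrong guess, and n + guess is not prime"
    return {n for n in candidates if n != guess and not isprime(n + guess)}

candidates = set(range(1, 101))
for g in [3, 10, 12, 1]:              # the worst-case branch followed in the answer
    candidates = survivors_after(candidates, g)
    print(g, len(candidates))         # 74, 50, 33, 18 survivors, matching the lists above
</code></pre>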
<p>I have a pretty cool but probably impractical solution.</p> <p>Consider this table of primes from $1$ to $110$:</p> <p>$$ \begin{matrix} 2 &amp; 3 &amp; 5 &amp; 7 &amp; 11 \\ 13 &amp; 17 &amp; 19 &amp; 23 &amp; 29 \\ 31 &amp; 37 &amp; 41 &amp; 43 &amp; 47 \\ 53 &amp; 59 &amp; 61 &amp; 67 &amp; 71 \\ 73 &amp; 79 &amp; 83 &amp; 89 &amp; 97 \\ 101 &amp; 103 &amp; 107 &amp; 109 \\ \end{matrix} $$</p> <p>Notice the differences between all the primes. That table would look like this:</p> <p>$$ \begin{matrix} 2 &amp; 1 &amp; 2 &amp; 2 &amp; 4 \\ 2 &amp; 4 &amp; 2 &amp; 4 &amp; 6 \\ 2 &amp; 6 &amp; 4 &amp; 2 &amp; 4 \\ 6 &amp; 6 &amp; 2 &amp; 6 &amp; 4 \\ 2 &amp; 6 &amp; 4 &amp; 6 &amp; 8 \\ 4 &amp; 2 &amp; 4 &amp; 2 \\ \end{matrix} $$</p> <p>Let's look at the combinations of 6 differences in order. They're ordered from left to right and up to down by value as if the sequences represented digits of a 6-digit number:</p> <p>$$1,2,2,4,2,4 \ \ \ \ \ \ \ \ \ \ 2,1,2,2,4,2 \ \ \ \ \ \ \ \ \ \ 2,2,4,2,4,2$$ $$2,4,2,4,2,4 \ \ \ \ \ \ \ \ \ \ 2,4,2,4,6,2 \ \ \ \ \ \ \ \ \ \ 2,4,6,2,6,4$$ $$2,4,6,6,2,6 \ \ \ \ \ \ \ \ \ \ 2,6,4,2,4,6 \ \ \ \ \ \ \ \ \ \ 2,6,4,2,6,4$$ $$2,6,4,6,8,4 \ \ \ \ \ \ \ \ \ \ 4,2,4,2,4,6 \ \ \ \ \ \ \ \ \ \ 4,2,4,6,2,6$$ $$4,2,4,6,6,2 \ \ \ \ \ \ \ \ \ \ 4,2,6,4,6,8 \ \ \ \ \ \ \ \ \ \ 4,6,2,6,4,2$$ $$4,6,6,2,6,4 \ \ \ \ \ \ \ \ \ \ 4,6,8,4,2,4 \ \ \ \ \ \ \ \ \ \ 6,2,6,4,2,4$$ $$6,2,6,4,2,6 \ \ \ \ \ \ \ \ \ \ 6,4,2,4,6,6 \ \ \ \ \ \ \ \ \ \ 6,4,2,6,4,6$$ $$6,4,6,8,4,2 \ \ \ \ \ \ \ \ \ \ 6,6,2,6,4,2 \ \ \ \ \ \ \ \ \ \ 6,8,4,2,4,2$$</p> <p>Notice that no two sequences are equal in the above list</p> <p>Start out by iterating over values of $x$ in $[0,7]$ incrementally until you hit the first prime number. Let this value of $x$ be denoted as $k$.</p> <p><strong>The above step requires $8$ guesses in the worst case</strong></p> <p>You have to use new values for $x$ equal to $k + o$ where $o$ is an offset that starts at $1$ and changes based on what's prime and what's not.</p> <p>If $k + 1$ is prime, you can stop. $n = 2$.</p> <p>Else, try $k + 2$ and if that doesn't work, $k + 4$, then $k + 6$.</p> <p>If for instance, $k + 4$ works, you would then try $k + 4 + 2$. If that's prime too, then continue with these combinations in the same manner:</p> <p>$$4,2,4,2,4,6$$ $$4,2,4,6,2,6$$ $$4,2,4,6,6,2$$ $$4,2,6,4,6,8$$</p> <p>Using the above method, you derive the value of the prime $n + k$. 
$k$ is known, so you can immediately deduce the value of $n$.</p> <p>In fact, we can optimize that above table of differences to remove the unneeded trials and combine a few of them:</p> <p>$$ \begin{matrix} 1 &amp; 2,1 &amp; 2,2 \\ 2,4,2,4,2 &amp; 2,4,2,4,6 &amp; 2,4,6,2 \\ 2,4,6,6 &amp; 2,6,4,2,4 &amp; 2,6,4,2,6 \\ 2,6,4,6 &amp; 4,2,4,2 &amp; 4,2,4,6,2 \\ 4,2,4,6,6 &amp; 4,2,6 &amp; 4,6,2 \\ 4,6,6 &amp; 4,6,8 &amp; 6,2,6,4,2,4 \\ 6,2,6,4,2,6 &amp; 6,4,2,4 &amp; 6,4,2,6 \\ 6,4,6 &amp; 6,6 &amp; 6,8 \\ \end{matrix} $$</p> <p>The magic numbers to add to $k$ when guessing would be:</p> <p>$$ \begin{matrix} 1 &amp; 2,3 &amp; 2,4 \\ 2,6,8,12,14 &amp; 2,6,8,12,18 &amp; 2,6,12,14 \\ 2,6,12,18 &amp; 2,8,12,14,18 &amp; 2,8,12,14,20 \\ 2,8,12,18 &amp; 4,6,10,12 &amp; 4,6,10,16,18 \\ 4,6,10,16,22 &amp; 4,6,12 &amp; 4,10,12 \\ 4,10,16 &amp; 4,10,18 &amp; 6,8,14,18,20,24 \\ 6,8,14,18,20,26 &amp; 6,10,12,16 &amp; 6,10,12,18 \\ 6,10,16 &amp; 6,12 &amp; 6,14 \\ \end{matrix} $$</p> <p>It's imperative that the trials are done in order from left to right and top to bottom in this table, else, you will get false information. An example of one optimization is to avoid guessing when the only possible combination to continue with is 1 because we <strong>know</strong> that $n + k$ is prime.</p> <p>An example:</p> <ul> <li>You tested $k + 4$, $k + 10$, and you're testing for $k + 12$</li> <li>You're now testing for $k + 16$</li> <li>You're now testing for $k + 18$</li> </ul> <p>That last step is redundant. If $k + 16$ is composite, you can skip the $k + 18$ step because you know it will be prime.</p> <p>I hope this is clear enough.</p>
probability
<h1>Problem</h1> <p>The premise is <em>almost</em> the same as in <a href="https://math.stackexchange.com/questions/29242/probability-that-a-quadratic-polynomial-with-random-coefficients-has-real-roots?rq=1">this question</a>. I'll restate for convenience.</p> <blockquote> <p>Let <span class="math-container">$A$</span>, <span class="math-container">$B$</span>, <span class="math-container">$C$</span> be independent random variables uniformly distributed between <span class="math-container">$(-1,+1)$</span>. What is the probability that the polynomial <span class="math-container">$Ax^2+Bx+C$</span> has real roots?</p> </blockquote> <p><strong>Note:</strong> The distribution is now <span class="math-container">$-1$</span> to <span class="math-container">$+1$</span> instead of <span class="math-container">$0$</span> to <span class="math-container">$1$</span>.</p> <h1>My Attempt</h1> <h2>Preparation</h2> <p>When the coefficients are sampled from <span class="math-container">$\mathcal{U}(0,1)$</span>, the probability for the discriminant to be non-negative is <span class="math-container">$P(B^2-4AC\geq0) \approx 25.4\%$</span>. This value can be obtained theoretically as well as experimentally. The link I shared above to the older question has several good answers discussing both approaches.</p> <p>Changing the sampling interval to <span class="math-container">$(-1, +1)$</span> makes things a bit difficult from the theoretical perspective. Experimentally, it is rather simple. <a href="https://github.com/hungrybluedev/root-from-parameters/blob/master/Empirical%20Probability%20of%20obtaining%20a%20real%20root.ipynb" rel="noreferrer">This is the code</a> I wrote to simulate the experiment for <span class="math-container">$\mathcal{U}(0,1)$</span>. Changing it from <code>(0, theta)</code> to <code>(-1, +1)</code> gives me an average probability of <span class="math-container">$62.7\%$</span> with a standard deviation of <span class="math-container">$0.3\%$</span>.</p> <p>I plotted the simulated PDF and CDF. In that order, they are:</p> <p><a href="https://i.sstatic.net/70f4U.png" rel="noreferrer"><img src="https://i.sstatic.net/70f4U.png" alt="PDF" /></a><a href="https://i.sstatic.net/yAIu6.png" rel="noreferrer"><img src="https://i.sstatic.net/yAIu6.png" alt="CDF" /></a></p> <p>So I'm aiming to find a CDF that looks like the second image.</p> <h2>Theoretical Approach</h2> <p>The approach that I find easy to understand is outlined in <a href="https://math.stackexchange.com/a/2874497/203386">this answer</a>. Proceeding in a similar manner, we have</p> <p><span class="math-container">$$ f_A(a) = \begin{cases} \frac{1}{2}, &amp;-1\leq a\leq+1\\ 0, &amp;\text{ otherwise} \end{cases} $$</span></p> <p>The PDFs are similar for <span class="math-container">$B$</span> and <span class="math-container">$C$</span>.</p> <p>The CDF for <span class="math-container">$A$</span> is</p> <p><span class="math-container">$$ F_A(a) = \begin{cases} \frac{a + 1}{2}, &amp;-1\leq a\leq +1\\ 0,&amp;a&lt;-1\\ 1,&amp;a&gt;+1 \end{cases} $$</span></p> <p>Let us assume <span class="math-container">$X=AC$</span>.
I proceed to calculate the CDF for <span class="math-container">$X$</span> (for <span class="math-container">$x&gt;0$</span>) as:</p> <p><span class="math-container">$$ \begin{align} F_X(x) &amp;= P(X\leq x)\\ &amp;= P(AC\leq x)\\ &amp;= \int_{c=-1}^{+1}P(Ac\leq x)f_C(c)dc\\ &amp;= \frac{1}{2}\left(\int_{c=-1}^{+1}P(Ac\leq x)dc\right)\\ &amp;= \frac{1}{2}\left(\int_{c=-1}^{+1}P\left(A\leq \frac{x}{c}\right)dc\right)\\ \end{align} $$</span></p> <p>We take a quick detour to make some observations. First, when <span class="math-container">$0&lt;c&lt;x$</span>, we have <span class="math-container">$\frac{x}{c}&gt;1$</span>. Similarly, <span class="math-container">$-x&lt;c&lt;0$</span> implies <span class="math-container">$\frac{x}{c}&lt;-1$</span>. Also, <span class="math-container">$A$</span> is constrained to the interval <span class="math-container">$[-1, +1]$</span>. Also, we're only interested when <span class="math-container">$x\geq 0$</span> because <span class="math-container">$B^2\geq 0$</span>.</p> <p>Continuing, the calculation</p> <p><span class="math-container">$$ \begin{align} F_X(x) &amp;= \frac{1}{2}\left(\int_{c=-1}^{+1}P\left(A\leq \frac{x}{c}\right)dc\right)\\ &amp;= \frac{1}{2}\left(\int_{c=-1}^{-x}P\left(A\leq \frac{x}{c}\right)dc + \int_{c=-x}^{0}P\left(A\leq \frac{x}{c}\right)dc + \int_{c=0}^{x}P\left(A\leq \frac{x}{c}\right)dc + \int_{c=x}^{+1}P\left(A\leq \frac{x}{c}\right)dc\right)\\ &amp;= \frac{1}{2}\left(\int_{c=-1}^{-x}P\left(A\leq \frac{x}{c}\right)dc + 0 + 1 + \int_{c=x}^{+1}P\left(A\leq \frac{x}{c}\right)dc\right)\\ &amp;= \frac{1}{2}\left(\int_{c=-1}^{-x}\frac{x+c}{2c}dc + 0 + 1 + \int_{c=x}^{+1}\frac{x+c}{2c}dc\right)\\ &amp;= \frac{1}{2}\left(\frac{1}{2}(-x+x(\log(-x)-\log(-1)+1) + 0 + 1 + \frac{1}{2}(-x+x(-\log(x)-\log(1)+1)\right)\\ &amp;= \frac{1}{2}\left(2 + \frac{1}{2}(-x+x(\log(x)) -x + x(-\log(x))\right)\\ &amp;= 1 - x \end{align} $$</span></p> <p>I don't think this is correct.</p> <h1>My Specific Questions</h1> <ol> <li>What mistake am I making? Can I even obtain the CDF through integration?</li> <li>Is there an easier way? I used this approach because I was able to understand it well. There are shorter approaches possible (as is evident with the <span class="math-container">$\mathcal{U}(0,1)$</span> case) but perhaps I need to read more before I can comprehend them. Any pointers in the right direction would be helpful.</li> </ol>
<p>I would probably start by breaking into cases based on <span class="math-container">$A$</span> and <span class="math-container">$C$</span>.</p> <p>Conditioned on <span class="math-container">$A$</span> and <span class="math-container">$C$</span> having different signs, there are always real roots (because <span class="math-container">$4AC\leq 0$</span>, so that <span class="math-container">$B^2-4AC\geq0$</span>). The probability that <span class="math-container">$A$</span> and <span class="math-container">$C$</span> have different signs is <span class="math-container">$\frac{1}{2}$</span>.</p> <p>Conditioned on <span class="math-container">$A\geq0$</span> and <span class="math-container">$C\geq 0$</span>, you return to the problem solved in the link above. Why? Because <span class="math-container">$B^2$</span> has the same distribution whether you have <span class="math-container">$B$</span> uniformly distributed on <span class="math-container">$(0,1)$</span> or on <span class="math-container">$(-1,1)$</span>. At the link, they computed this probability as <span class="math-container">$\frac{5+3\log4}{36}\approx0.2544134$</span>. The conditioning event here has probability <span class="math-container">$\frac{1}{4}$</span>.</p> <p>Finally, if we condition on <span class="math-container">$A&lt;0$</span> and <span class="math-container">$C&lt;0$</span>, we actually end up with the same probability, as <span class="math-container">$4AC$</span> has the same distribution in this case as in the case where <span class="math-container">$A\geq0$</span> and <span class="math-container">$C\geq 0$</span>. So, this is an additional <span class="math-container">$\frac{5+3\log 4}{36}\approx0.2544134$</span> conditional probability, and the conditioning event has probability <span class="math-container">$\frac{1}{4}$</span>.</p> <p>So, all told, the probability should be <span class="math-container">$$ \begin{align*} P(B^2-4AC\geq0)&amp;=1\cdot\frac{1}{2}+\frac{1}{4}\cdot\frac{5+3\log4}{36}+\frac{1}{4}\cdot\frac{5+3\log 4}{36}\\ &amp;=\frac{1}{2}+\frac{5+3\log4}{72}\\ &amp;\approx0.6272... \end{align*} $$</span></p>
<p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span> Hereafter, <span class="math-container">$\ds{\bracks{P}}$</span> is an <a href="https://en.wikipedia.org/wiki/Iverson_bracket" rel="noreferrer">Iverson Bracket</a>. Namely, <span class="math-container">$\ds{\bracks{P} = \color{red}{1}}$</span> whenever <span class="math-container">$\ds{P}$</span> is <span class="math-container">$\ds{\tt true}$</span> and <span class="math-container">$\ds{\color{red}{0}}$</span> <span class="math-container">$\ds{\tt otherwise}$</span>. They are very convenient whenever we have to <em>manipulate constraints</em>.</p> <hr> <span class="math-container">\begin{align} &amp;\bbox[5px,#ffd]{\int_{-1}^{1}{1 \over 2}\int_{-1}^{1} {1 \over 2}\int_{-1}^{1}{1 \over 2}\bracks{b^{2} - 4ac &gt; 0} \dd c\,\dd a\,\dd b} \\[5mm] = &amp;\ {1 \over 4}\int_{0}^{1}\int_{-1}^{1} \int_{-1}^{1}\bracks{b^{2} - 4ac &gt; 0} \dd c\,\dd a\,\dd b \\[5mm] = &amp;\ {1 \over 4}\int_{0}^{1}\int_{-1}^{1} \int_{0}^{1}\braces{\bracks{b^{2} - 4ac &gt; 0} + \bracks{b^{2} + 4ac &gt; 0}} \dd c\,\dd a\,\dd b \\[5mm] = &amp;\ {1 \over 4}\int_{0}^{1}\int_{0}^{1} \int_{0}^{1}\left\{\bracks{b^{2} - 4ac &gt; 0} + \bracks{b^{2} + 4ac &gt; 0}\right. \\[2mm] &amp;\ \phantom{{1 \over 4}\int_{0}^{1}\int_{-1}^{1} \int_{0}^{1}} \left. + \bracks{b^{2} + 4ac &gt; 0} + \bracks{b^{2} - 4ac &gt; 0}\right\}\dd c\,\dd a\,\dd b \\[5mm] = &amp;\ {1 \over 2} + {1 \over 2}\int_{0}^{1}\int_{0}^{1}\int_{0}^{1} \bracks{b^{2} - 4ac &gt; 0}\dd c\,\dd a\,\dd b \\[5mm] = &amp;\ {1 \over 2} + {1 \over 2}\int_{0}^{1}\int_{0}^{1}{1 \over a}\int_{0}^{a} \bracks{b^{2} - 4c &gt; 0}\dd c\,\dd a\,\dd b \\[5mm] = &amp;\ {1 \over 2} + {1 \over 2}\int_{0}^{1}\int_{0}^{1}\bracks{b^{2} - 4c &gt; 0} \int_{c}^{1}{1 \over a}\,\dd a\,\dd c\,\dd b \\[5mm] = &amp;\ {1 \over 2} - {1 \over 2}\int_{0}^{1}\int_{0}^{1} \bracks{c &lt; {b^{2} \over 4}}\ln\pars{c}\,\dd c\,\dd b \\[5mm] = &amp;\ {1 \over 2} - {1 \over 2}\int_{0}^{1}\int_{0}^{b^{2}/4} \ln\pars{c}\,\dd c\,\dd b \\[5mm] = &amp;\ {1 \over 2} - {1 \over 2}\int_{0}^{1}\bracks{% -\,{1 + 2\ln\pars{2} \over 4}\,b^{2} + {1 \over 2}\,b^{2}\ln\pars{b}}\,\dd b \\[5mm] = &amp; \bbx{{\ln\pars{2} \over 12} + {41 \over 72}} \approx 0.6272 \\ &amp; \end{align}</span>
linear-algebra
<p>A natural vector space is the set of continuous functions on $\mathbb{R}$. Is there a nice basis for this vector space? Or is this one of those situations where we're guaranteed a basis by invoking the Axiom of Choice, but are left rather unsatisfied?</p>
<p>There is, in a fairly strong sense, no reasonable basis of this space. Zoom in on a neighborhood at any point and note that a finite linear combination of functions which have various kinds of nice behavior in that neighborhood also has that nice behavior in that neighborhood (differentiable, $C^k$, smooth, etc.). So any basis necessarily contains, for <em>every such neighborhood</em>, a function which does not behave nicely in that neighborhood. More generally, but roughly speaking, a basis needs to have functions which are at least as pathological as the most pathological continuous functions. </p> <p>(Hamel / algebraic) bases of most infinite-dimensional vector spaces simply are not useful. In applications, the various topologies you could put on such a thing matter a lot and the notion of a <a href="http://en.wikipedia.org/wiki/Schauder_basis">Schauder basis</a> becomes more useful.</p>
<p>Using Nate Eldredge's <a href="https://math.stackexchange.com/questions/136637/what-is-a-basis-for-the-vector-space-of-continuous-functions#comment347999_136637">comment</a> we have that $C(\mathbb R)$ is a Polish vector space.</p> <p>Consider a Solovay model, that is, ZF+DC+"All sets have the Baire property". In such a model all linear maps into separable vector spaces are continuous; this is a consequence of [1, Th. 9.10]. </p> <p>It is important to remark that the fact that a continuous function (from $\mathbb R$ to $\mathbb R$) on a compact set is uniformly continuous does not require <em>any</em> form of choice, and I believe that Dependent Choice (DC) ensures that uniform convergence on compact sets is well behaved.</p> <p>Suppose that there were a Hamel basis $B$; it would have to be of cardinality $\frak c$. So it has $2^\frak c$ many permutations, which induce $2^\frak c$ <strong>different</strong> linear automorphisms.</p> <p>However every linear automorphism is automatically continuous, so it is determined completely by its values on a countable dense set, and therefore there can only be $\frak c$ many linear automorphisms, which contradicts Cantor's theorem since $\mathfrak c\neq 2^\frak c$.</p> <p>This is essentially the same argument as I used in <a href="https://math.stackexchange.com/a/122723/622">this answer</a>. </p> <hr> <p>Bibliography:</p> <ol> <li>Kechris, A. <strong><a href="http://books.google.com/books?id=pPv9KCEkklsC" rel="noreferrer">Classical Descriptive Set Theory.</a></strong> <em>Springer-Verlag</em>, 1994.</li> </ol>
linear-algebra
<p>I am trying to write a program that will perform <a href="http://en.wikipedia.org/wiki/Optical_character_recognition" rel="noreferrer">OCR</a> on a mobile phone, and I recently encountered this article : </p> <hr> <p><img src="https://i.sstatic.net/2jenb.png" alt="article excerpt"></p> <hr> <p>Can someone explain this to me ? </p>
<p><a href="https://math.stackexchange.com/a/92176/4423">J.M. has given a very good answer</a> explaining singular values and how they're used in low rank approximations of images. However, a few pictures always go a long way in appreciating and understanding these concepts. Here is an example from one of my presentations from I don't know when, but is exactly what you need. </p> <p>Consider the following grayscale image of a hummingbird (left). The resulting image is a $648\times 600$ image of MATLAB <code>double</code>s, which takes $648\times 600\times 8=3110400$ bytes. </p> <p><img src="https://i.sstatic.net/twFL0m.png" alt="enter image description here"> <img src="https://i.sstatic.net/LzRaNm.png" alt="enter image description here"></p> <p>Now taking an SVD of the above image gives us $600$ singular values that, when plotted, look like the curve on the right. Note that the $y$-axis is in decibels (i.e., $10\log_{10}(s.v.)$). </p> <p>You can clearly see that after about the first $20-25$ singular values, it falls off and the bulk of it is so low, that any information it contains is negligible (and most likely noise). So the question is, <em>why store all this information if it is useless?</em></p> <p>Let's look at what information is actually contained in the different singular values. The figure on the left below shows the image recreated from the first 10 singular values ($l=10$ in J.M.'s answer). We see that the <em>essence</em> of the picture is basically captured in just 10 singular values out of a total of 600. Increasing this to the first 50 singular values shows that the picture is almost exactly reproduced (to the human eye). </p> <p>So if you were to save just the first 50 singular values and the associated left/right singular vectors, you'd need to store only $(648\times 50 + 50 + 50\times 600)\times 8=499600$ bytes, which is only about 16% of the original! (I'm sure you could've gotten a good representation with about 30, but I chose 50 arbitrarily for some reason back then, and we'll go with that.) </p> <p><img src="https://i.sstatic.net/bMUWbm.png" alt="enter image description here"> <img src="https://i.sstatic.net/1BJZ5m.png" alt="enter image description here"></p> <p>So what exactly do the smaller singular values contain? Looking at the next 100 singular values (figure on the left), we actually see some fine structure, especially the fine details around the feathers, etc., which are generally indistinguishable to the naked eye. It's probably very hard to see from the figure below, but you certainly can in <a href="https://i.sstatic.net/eibGG.png" rel="noreferrer">this larger image</a>. </p> <p><img src="https://i.sstatic.net/eibGGm.png" alt="enter image description here"> <img src="https://i.sstatic.net/hPktum.png" alt="enter image description here"></p> <p>The smallest 300 singular values (figure on the right) are complete junk and convey no information. These are most likely due to sensor noise from the camera's CMOS. </p>
<p>It is rather unfortunate that they used "eigenvalues" for the description here (although it's correct); it is better to look at principal component analysis from the viewpoint of singular value decomposition (SVD).</p> <p>Recall that any $m\times n$ matrix $\mathbf A$ possesses the singular value decomposition $\mathbf A=\mathbf U\mathbf \Sigma\mathbf V^\top$, where $\mathbf U$ and $\mathbf V$ are orthogonal matrices, and $\mathbf \Sigma$ is a diagonal matrix whose entries $\sigma_k$ are called <em>singular values</em>. Things are usually set up such that the singular values are arranged in nonincreasing order: $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_{\min(m,n)}$.</p> <p>By analogy with the eigendecomposition (eigenvalues and eigenvectors), the $k$-th column of $\mathbf U$, $\mathbf u_k$, corresponding to the $k$-th singular value is called the <em>left singular vector</em>, and the $k$-th column of $\mathbf V$, $\mathbf v_k$, is the <em>right singular vector</em>. With this in mind, you can treat $\mathbf A$ as a sum of outer products of vectors, with the singular values as "weights":</p> <p>$$\mathbf A=\sum_{k=1}^{\min(m,n)} \sigma_k \mathbf u_k\mathbf v_k^\top$$</p> <p>Okay, at this point, some people might be grumbling "blablabla, linear algebra nonsense". The key here is that if you treat images as matrices (gray levels, or RGB color values sometimes), and then subject those matrices to SVD, it will turn out that some of the singular values $\sigma_k$ are <em>tiny</em>. The key to "approximating" these images then, is to treat those tiny $\sigma_k$ as zero, resulting in what is called a "low-rank approximation":</p> <p>$$\mathbf A\approx\sum_{k=1}^\ell \sigma_k \mathbf u_k\mathbf v_k^\top,\qquad \ell \ll \min(m,n)$$</p> <p>The matrices corresponding to images can be big, so it helps if you don't have to keep all those singular values and singular vectors around. (The "two", "ten", and "thirty" images mean what they say: $n=256$, and you have chosen $\ell$ to be $2$, $10$, and $30$ respectively.) The criterion of when to zero out a singular value depends on the application, of course.</p> <p>I guess that was a bit long. Maybe I should have linked you to <a href="http://www.mathworks.com/moler/eigs.pdf" rel="noreferrer">Cleve Moler's book</a> instead for this (see page 21 onwards, in particular).</p> <hr> <p><strong>Edit 12/21/2011:</strong></p> <p>I've decided to write a short <em>Mathematica</em> notebook demonstrating low-rank approximations to images, using the famous <a href="http://www.cs.cmu.edu/~chuck/lennapg/" rel="noreferrer">Lenna test image</a>. The notebook can be obtained from me upon request.</p> <p>In brief, the $512\times 512$ example image can be obtained through <code>ImageData[ColorConvert[ExampleData[{"TestImage", "Lena"}], "Grayscale"]]</code>. A plot of $\log_{10} \sigma_k$ looks like this:</p> <p><img src="https://i.sstatic.net/XD4wd.png" alt="log singular values of Lenna"></p> <p>Here is a comparison of the original Lenna image with a few low-rank approximations:</p> <p><img src="https://i.sstatic.net/FAC0i.png" alt="Lenna and low-rank approximations"></p> <p>At least to my eye, taking $120$ out of $512$ singular values (only a bit more than $\frac15$ of the singular values) makes for a pretty good approximation.</p> <p>Probably the only caveat of SVD is that it is a rather slow algorithm. For images that are quite large, taking the SVD might take a long time. 
At least for grayscale pictures one only deals with a single matrix; for RGB color pictures, one must take the SVD of three matrices, one for each color component.</p>
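<p>A minimal NumPy sketch of the rank-$\ell$ truncation described above (the random array is just a stand-in for an actual image, and $\ell=50$ echoes the arbitrary cut-off used in the hummingbird example):</p> <pre><code>import numpy as np

def low_rank(image, ell):
    """Rank-ell approximation of a 2-D array via the truncated SVD."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    return U[:, :ell] @ np.diag(s[:ell]) @ Vt[:ell, :]

# Stand-in for a 648x600 grayscale image of doubles.
A = np.random.rand(648, 600)
A50 = low_rank(A, 50)
print(np.linalg.norm(A - A50) / np.linalg.norm(A))  # relative approximation error
</code></pre>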
linear-algebra
<blockquote> <p>Suppose $A$ and $B$ are similar matrices. Show that $A$ and $B$ have the same eigenvalues with the same geometric multiplicities.</p> <p><strong>Similar matrices</strong>: Suppose $A$ and $B$ are $n\times n$ matrices over $\mathbb R$ or $\mathbb C$. We say $A$ and $B$ are similar, or that $A$ is similar to $B$, if there exists a matrix $P$ such that $B = P^{-1}AP$.</p> </blockquote>
<p><span class="math-container">$B = P^{-1}AP \ \Longleftrightarrow \ PBP^{-1} = A$</span>. If <span class="math-container">$Av = \lambda v$</span>, then <span class="math-container">$PBP^{-1}v = \lambda v \ \Longrightarrow \ BP^{-1}v = \lambda P^{-1}v$</span>. So, if <span class="math-container">$v$</span> is an eigenvector of <span class="math-container">$A$</span>, with eigenvalue <span class="math-container">$\lambda$</span>, then <span class="math-container">$P^{-1}v$</span> is an eigenvector of <span class="math-container">$B$</span> with the same eigenvalue. So, every eigenvalue of <span class="math-container">$A$</span> is an eigenvalue of <span class="math-container">$B$</span> and since you can interchange the roles of <span class="math-container">$A$</span> and <span class="math-container">$B$</span> in the previous calculations, every eigenvalue of <span class="math-container">$B$</span> is an eigenvalue of <span class="math-container">$A$</span> too. Hence, <span class="math-container">$A$</span> and <span class="math-container">$B$</span> have the same eigenvalues.</p> <p>Geometrically, in fact, also <span class="math-container">$v$</span> and <span class="math-container">$P^{-1}v$</span> are the same vector, written in different coordinate systems. Geometrically, <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are matrices associated with the same endomorphism. So, they have the same eigenvalues and geometric multiplicities.</p>
<p>The matrices $A$ and $B$ describe the same linear transformation $L$ of some vector space $V$ with respect to different bases. For any $\lambda\in{\mathbb C}$ the set $E_\lambda:=\lbrace x\in V\ |\ Lx=\lambda x\rbrace$ is a well defined subspace of $V$ and therefore has a clearcut dimension ${\rm dim}(E_\lambda)\geq0$ which is independent of any basis one might chose for $V$. Of course, for most $\lambda\in{\mathbb C}$ this dimension is $0$, which means $E_\lambda=\{{\bf 0}\}$. If $\lambda$ is actually an eigenvalue of $L$ then ${\rm dim}(E_\lambda)$ is called the <em>(geometric) multiplicity</em> of this eigenvalue. </p> <p>So there is actually nothing to prove.</p>
geometry
<p>What is the simplest way to find out the area of a triangle if the coordinates of the three vertices are given in $x$-$y$ plane? </p> <p>One approach is to find the length of each side from the coordinates given and then apply <a href="https://en.wikipedia.org/wiki/Heron&#39;s_formula" rel="noreferrer"><em>Heron's formula</em></a>. Is this the best way possible? </p> <p>Is it possible to compare the area of triangles with their coordinates provided without actually calculating side lengths?</p>
<p>What you are looking for is called the <a href="http://en.wikipedia.org/wiki/Shoelace_formula" rel="nofollow noreferrer">shoelace formula</a>:</p> <p><span class="math-container">\begin{align*} \text{Area} &amp;= \frac12 \big| (x_A - x_C) (y_B - y_A) - (x_A - x_B) (y_C - y_A) \big|\\ &amp;= \frac12 \big| x_A y_B + x_B y_C + x_C y_A - x_A y_C - x_C y_B - x_B y_A \big|\\ &amp;= \frac12 \Big|\det \begin{bmatrix} x_A &amp; x_B &amp; x_C \\ y_A &amp; y_B &amp; y_C \\ 1 &amp; 1 &amp; 1 \end{bmatrix}\Big| \end{align*}</span></p> <p>The last line indicates how to generalize the formula to higher dimensions.</p> <p><strong>PS.</strong> Another generalization of the formula is obtained by noting that it follows from a discrete version of the <a href="https://en.wikipedia.org/wiki/Green%27s_theorem" rel="nofollow noreferrer">Green's theorem</a>:</p> <p><span class="math-container">$$ \text{Area} = \iint_{\text{domain}}dx\,dy = \frac12\oint_{\text{boundary}}x\,dy - y\,dx $$</span></p> <p>Thus the signed (oriented) area of a polygon with <span class="math-container">$n$</span> vertices <span class="math-container">$(x_i,y_i)$</span> is given by</p> <p><span class="math-container">$$ \text{Area} = \frac12\sum_{i=0}^{n-1} x_i y_{i+1} - x_{i+1} y_i $$</span></p> <p>where indices are added modulo <span class="math-container">$n$</span>.</p>
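<p>A short Python sketch of the shoelace formula, stated for a general polygon exactly as in the last display (the sample coordinates are arbitrary):</p> <pre><code>def polygon_area(points):
    """Signed area via the shoelace formula; take abs() for the usual area."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x_i, y_i = points[i]
        x_j, y_j = points[(i + 1) % n]
        s += x_i * y_j - x_j * y_i
    return s / 2.0

print(abs(polygon_area([(0, 0), (4, 0), (0, 3)])))  # 6.0 for this 3-4-5 right triangle
</code></pre>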
<p>You know that <strong>AB × AC</strong> is a vector perpendicular to the plane ABC such that |<strong>AB × AC</strong>| = Area of the parallelogram ABA’C. Thus the area of the triangle is equal to ½ |AB × AC|.</p> <p><a href="https://i.sstatic.net/3oDbh.png" rel="noreferrer"><img src="https://i.sstatic.net/3oDbh.png" alt="enter image description here" /></a></p> <p>From <strong>AB</strong> = <span class="math-container">$(x_2 -x_1, y_2-y_1)$</span> and <strong>AC</strong> = <span class="math-container">$(x_3-x_1, y_3-y_1)$</span>, we then deduce</p> <p>Area of <span class="math-container">$\Delta ABC$</span> = <span class="math-container">$\frac12\left|(x_2-x_1)(y_3-y_1)- (x_3-x_1)(y_2-y_1)\right|$</span></p>
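<p>The same example in a couple of lines of Python, mirroring the cross-product formula just given (sample points arbitrary):</p> <pre><code># Area = (1/2) |(x2-x1)(y3-y1) - (x3-x1)(y2-y1)|
(x1, y1), (x2, y2), (x3, y3) = (0, 0), (4, 0), (0, 3)
area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
print(area)  # 6.0
</code></pre>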
probability
<p><strong>Question:</strong> Suppose we have one hundred seats, numbered 1 through 100. We randomly select 25 of these seats. What is the expected number of selected pairs of seats that are consecutive? (To clarify: we would count two consecutive selected seats as a single pair.)</p> <p>For example, if the selected seats are all consecutive (eg 1-25), then we have 24 consecutive pairs (eg 1&amp;2, 2&amp;3, 3&amp;4, ..., 24&amp;25). The probability of this happening is $76/{_{100}C_{25}}$ (there are 76 possible starting seats for such a block). So this contributes $24\cdot 76/{_{100}C_{25}}$ to the expected number of consecutive pairs. </p> <p><strong>Motivation</strong>: I teach. Near the end of an exam, when most of the students have left, I notice that there are still many pairs of students next to each other. I want to know if the number that remain should be expected or not. </p>
<p>If you're just interested in the <em>expectation</em>, you can use the fact that expectation is additive to compute</p> <ul> <li>The expected number of consecutive integers among $\{1,2\}$, plus</li> <li>The expected number of consecutive integers among $\{2,3\}$, plus</li> <li>....</li> <li>plus the expected number of consecutive integers among $\{99,100\}$.</li> </ul> <p>Each of these 99 expectations is simply the probability that $n$ and $n+1$ are both chosen, which is $\frac{25}{100}\frac{24}{99}$.</p> <p>So the expected number of pairs is $99\frac{25}{100}\frac{24}{99} = 6$.</p>
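<p>A quick simulation (a sketch; the trial count is arbitrary) supporting the value $6$:</p> <pre><code>import random

# Estimate the expected number of adjacent pairs among 25 seats chosen from 100.
def average_adjacent_pairs(trials=200_000, seats=100, chosen=25):
    total = 0
    for _ in range(trials):
        s = set(random.sample(range(1, seats + 1), chosen))
        total += sum(1 for i in s if i + 1 in s)
    return total / trials

print(average_adjacent_pairs())  # close to 99 * (25/100) * (24/99) = 6
</code></pre>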
<p>Let me present the approach proposed by Henning in another way. </p> <p>We have $99$ possible pairs: $\{(1,2),(2,3),\ldots,(99,100)\}$. Let's define $X_i$ as</p> <p>$$ X_i = \left \{ \begin{array}{ll} 1 &amp; i\text{-th pair is chosen}\\ 0 &amp; \text{otherwise}\\ \end{array} \right . $$</p> <p>The $i$-th pair is denoted as $(i, i+1)$. That pair is chosen when the integer $i$ is chosen, with probability $25/100$, <em>and</em> the integer $i+1$ is also chosen, with probability $24/99$. Then,</p> <p>$$E[X_i] = P(X_i = 1) = \frac{25}{100}\frac{24}{99},$$</p> <p>and this holds for $i = 1,2,\ldots,99$. The total number of chosen pairs is then given as</p> <p>$$X = X_1 + X_2 + \ldots + X_{99},$$</p> <p>and using the linearity of the expectation, we get</p> <p>\begin{align} E[X] &amp;= E[X_1] + E[X_2] + \cdots +E[X_{99}]\\ &amp;= 99E[X_1]\\ &amp;= 99\frac{25}{100}\frac{24}{99} = 6 \end{align}</p>
logic
<p>I know this seems like an obvious question, but I haven't been able to find any examples of sentences in logic higher than second order, so my intuition on how it's supposed to behave is failing me. There are descriptions describing third order logic as 'properties of properties' but without example syntax, I'm not sure if I'm on the wrong track or not.</p> <p>Propositional logic sentences are simple propositions connected by logical connectives:</p> <p>$\phi$ $\land$ $\psi$</p> <p>$\phi \lor \psi$</p> <p>$\lnot \phi \rightarrow \psi$</p> <p>First order logic sentences are quantified objects with free predicates and functions:</p> <p>$\exists$x $\forall$y P(f(x)) $\rightarrow$ Q(y)</p> <p>Second order logic sentences don't just quantify the objects, but the functions and predicates:</p> <p>$\forall$Q $\exists$P $\exists$f $\exists$x $\forall$y P(f(x)) $\rightarrow$ Q(y)</p> <p>How do we go higher than second order? What are some examples of third, fourth, or fifth order logic sentences?</p>
<p>The axioms of topology, for example, can be seen as third-order axioms, simply because of the axiom that a topology is closed under arbitrary unions:</p> <p>$$\forall\mathcal U\Big(\forall U\,(U\in\mathcal U\rightarrow U\in\tau)\rightarrow\exists V\,\big(V\in\tau\land\forall x\,(x\in V\leftrightarrow\exists U\,(U\in\mathcal U\land x\in U))\big)\Big)$$</p> <p>In the language of arithmetic, a well-order of the second-order predicates (namely, $\mathcal P(\Bbb N)$), or even the existence thereof, is a third-order sentence coming from the numbers themselves.</p> <p>To some extent this is the great thing about set theory here. It allows us to take any of these higher-order sentences and make them first-order in the language of sets. Of course we can make them first-order in a two/three/four-sorted logic, which acts a bit like type theory, but you do run into issues there (for example, the characterization of $\Bbb R$ as the unique complete ordered field won't translate well into first-order logic).</p>
<p>In the context of higher-order arithmetic, there are many natural third-order statements. In arithmetic, quantifiers over natural numbers are first-order, quantifiers over sets of natural numbers are second-order, and quantification over sets of sets of natural numbers is third-order. </p> <p>Using standard coding methods, quantifying over real numbers is second-order, so quantifying over sets of real numbers is third-order. </p> <p>Some English sentences that are expressed as third-order statements in the language of arithmetic, but not as second-order statements, include:</p> <ul> <li><p>There is a nonprincipal ultrafilter on $\mathbb{N}$.</p></li> <li><p>Every infinite subset of the unit interval $[0,1]$ has a cluster point. </p></li> <li><p>There is a discontinuous function from $\mathbb{R}$ to $\mathbb{R}$. </p></li> </ul> <p>Similarly, one can obtain fourth-order statements by quantifying over arbitrary subsets of $\mathbb{R}$. </p>
combinatorics
<p>A <strong>derangement</strong> is a permutation <span class="math-container">$\sigma$</span> of <span class="math-container">$\{1,2,3,\dots,n\}$</span> such that <span class="math-container">$\sigma(i) \neq i$</span> for every <span class="math-container">$i$</span>. A common application of inclusion/exclusion in undergraduate combinatorics and probability classes is to compute the number of derangements, and in the process show that the probability a random permutation is a derangement approaches <span class="math-container">$\frac{1}{e}$</span> for large <span class="math-container">$n$</span>. </p> <p>There's also a "standard" intuition for this probability, which goes roughly as follows: Let <span class="math-container">$E_i$</span> be the event that <span class="math-container">$\sigma(i)=i$</span>. </p> <p>1) For a given <span class="math-container">$i$</span>, the probability of <span class="math-container">$E_i$</span> is exactly <span class="math-container">$\frac{1}{n}$</span>.</p> <p>2) If <span class="math-container">$n$</span> is large, then these events should be "nearly" independent (<span class="math-container">$E_i$</span> occurring means that <span class="math-container">$\sigma(i) \neq j$</span>, making it a tiny bit more likely that <span class="math-container">$\sigma(j)=j$</span>, but this shouldn't have much of an effect for large <span class="math-container">$n$</span>), so we'd expect the probability none of the <span class="math-container">$E_i$</span> occur to be roughly <span class="math-container">$\left(1-\frac{1}{n}\right)^n$</span>.</p> <p>3) For large <span class="math-container">$n$</span>, <span class="math-container">$\left(1-\frac{1}{n} \right)^n \approx \frac{1}{e}$</span>. </p> <p>Now the last approximation alone already has an error proportional to <span class="math-container">$\frac{1}{n}$</span>. What's surprising then is that, after <a href="http://aleph.math.louisville.edu/teaching/2009FA-681/notes-090901.pdf" rel="noreferrer">working through the inclusion/exclusion</a>, you find that the probability is not just approximately <span class="math-container">$\frac{1}{e}$</span>, but incredibly close -- the error is less than <span class="math-container">$\frac{1}{(n+1)!}$</span>. </p> <p><strong>Is there some alternative intuitive explanation for the <span class="math-container">$\frac{1}{e}$</span> asymptotic probability that gives a sense of why the convergence is so fast?</strong></p>
<p>In fact a much stronger $e$-related statement is true: let $X_i$ denote the number of $i$-cycles in a random permutation on $n$ elements. Then for fixed $k$, as $n \to \infty$ the random variables $X_1, X_2, ... X_k$ are asymptotically independently <a href="http://en.wikipedia.org/wiki/Poisson_distribution">Poisson</a> with rates $1, \frac{1}{2}, ... \frac{1}{k}$. This observation about derangements is a special case applied to $X_1$. See <a href="http://qchu.wordpress.com/2012/11/09/short-cycles-in-random-permutations/">this blog post</a> for details, which proves this fact as a corollary of the exponential formula. The convergence rate is presumably also controlled by the exponential formula but I haven't worked out the details. </p>
<blockquote> <p><strong>Note:</strong> The convergence is so fast because the <em>derangement number <span class="math-container">$D_n$</span></em> and the <em>Taylor series expansion</em> of <span class="math-container">$e^x$</span> are closely related.</p> </blockquote> <blockquote> <p>The <em>intuition</em> behind it is (for me) trying to develop a better feeling for the mechanism of the <em>inclusion/exclusion</em> principle, which <em>encodes</em> this relationship.</p> </blockquote> <p>According to the OP's referenced paper we know the derangement number <span class="math-container">$D_n$</span> is</p> <p><span class="math-container">\begin{align*} D_n=n!\sum_{j=0}^n(-1)^j\frac{1}{j!} \end{align*}</span> Let's compare it with the <em>Taylor series expansion</em> of <span class="math-container">$e^x$</span> at <span class="math-container">$x=-1$</span>: <span class="math-container">\begin{align*} \frac{1}{e}=\sum_{j=0}^\infty(-1)^j\frac{1}{j!} \end{align*}</span></p> <blockquote> <p>Observe that <span class="math-container">$\frac{D_n}{n!}$</span> is the <span class="math-container">$n$</span>-th Taylor polynomial of <span class="math-container">$\frac{1}{e}$</span>.</p> </blockquote> <p>We also note that the series for <span class="math-container">$e^{-1}$</span> satisfies the requirements of the <em><a href="http://en.wikipedia.org/wiki/Alternating_series_test" rel="nofollow noreferrer">alternating series test</a></em>:</p> <ul> <li><p>The terms alternate in sign.</p> </li> <li><p>They are non-increasing in absolute value.</p> </li> <li><p>They approach <span class="math-container">$0$</span>.</p> </li> </ul> <blockquote> <p>Therefore, applying the alternating series error bound, if we stop at the <span class="math-container">$n$</span>-th term we obtain an absolute error <em>less than the <span class="math-container">$(n+1)$</span>-st term</em>. So, the error is</p> </blockquote> <blockquote> <p><span class="math-container">\begin{align*} \left|\frac{D_n}{n!}-\frac{1}{e}\right|&lt;\frac{1}{(n+1)!} \end{align*}</span></p> </blockquote> <p><em>Note:</em> This and the nice fact that <span class="math-container">$D_n$</span> is the <em>nearest</em> integer to <span class="math-container">$\frac{n!}{e}$</span> can be found e.g. in <em><a href="https://people.math.binghamton.edu/zaslav/Oldcourses/386.F11/stirling-derange.pdf" rel="nofollow noreferrer">Stirling’s Approximation and Derangement Numbers</a></em> by <em>T. Zaslavsky</em>.</p>
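<p>A small sketch comparing $\frac{D_n}{n!}$ with $\frac{1}{e}$ and with the alternating-series bound $\frac{1}{(n+1)!}$ (the sample values of $n$ are arbitrary):</p> <pre><code>from math import exp, factorial

# D_n / n! is the n-th partial sum of the alternating series for 1/e.
def derangement_ratio(n):
    return sum((-1) ** j / factorial(j) for j in range(n + 1))

for n in (3, 5, 8, 12):
    error = abs(derangement_ratio(n) - exp(-1))
    print(n, error, 1 / factorial(n + 1))  # the error stays below 1/(n+1)!
</code></pre>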
logic
<p>I'm $\DeclareMathOperator{\par}{\unicode{8523}}$ trying to wrap my mind around the $\par$ ("par") operator of linear logic. </p> <p>The other connectives have simple resource interpretations ($A\otimes B$ is "you have both $A$ and $B$", $A\&amp;B$ is "you can have either $A$ or $B$" etc.).</p> <p>But despite the inference rules being simple, I've seen only mentions that $A\par B$ is "difficult to describe" in a resource interpretation, apart from the special case of $A\multimap B ~\equiv ~ A^\bot \par B$ ("you can convert $A$ to $B$").</p> <p>Any way I can understand it better?</p>
<p>⅋ also had me baffled for a long time. The intuition I've arrived at is this: You have both an A and a B, but you <em>can't use them together</em>.</p> <p>Examples of this (considered as entities in a computer program) include:</p> <ul> <li>A⊸A (= $A⅋A^⊥$) is a function value; you can't get rid of it by connecting its input to its output.</li> <li>$A⅋A^⊥$ is a communication channel (future): you can put a value into one end and retrieve it from the other. It is forbidden, however, to do both from the same thread (risk of deadlock, leading to ⊥).</li> </ul> <p>(The computer program analogy may not make sense for everybody, of course - but this has been my approach to the topic, and my basis for intuitions.)</p> <p>You can read the axioms in an up-right-down manner. The axiom for ⅋ introduction,</p> <p>$$ \frac{ Δ_1,A ⊢ Γ_1 \quad Δ_2,B ⊢ Γ_2 }{ Δ_1,Δ_2, A⅋B \ ⊢ \ Γ_1,Γ_2 } $$</p> <p>can thus be read as "given a value of type A⅋B, you can divide your remaining resources into two, transform them into new ones, and combine the results." And A⅋B can not be split in any other way - A and B must become separate and cannot take part in the same transformations.</p>
<p>Please see <a href="http://en.wikipedia.org/wiki/Linear_logic">Linear Logic</a> in wikipedia: the symbol ⅋ ("par") is used to denote <em>multipicative disjunction</em>, whereas ⊗ is used to denote <em>multiplicative conjunction</em>. They are duals of each other. There is also some discussion of this connective.</p> <p>Variously, one can denote the "par" operation using the symbol $ \sqcup $, which is described in the paper (pdf) <a href="http://www.pitt.edu/~belnap/104LinearLogicDisplayed.pdf">"Linear Logic Displayed"</a> as "join-like."</p> <p><strong>The logical rules for ⊗ and ⅋ are as follows</strong>: </p> <p>If Γ: A: B Δ ⊢Θ, then Γ: A⊗B: Δ ⊢ Θ; conversely, if Γ ⊢ Δ: A and ⊢ B: Θ, then Γ ⊢ Δ: A⊗B: Θ.</p> <p>Dually, if Γ ⊢ Δ: A: B: Θ, then Γ ⊢ Δ: A⅋B: Θ; conversely, if A: Γ ⊢Δ and Θ: B ⊢, then Θ: A⅋B: Γ ⊢ Δ.</p> <p>Multiplication distributes over addition if one is a conjunction and one is a disjunction:</p> <ul> <li><p>A⊗(B⊕C) ≡ (A⊗B)⊕(A⊗C) (and on the other side); </p> <p>(multiplicative conjunction distributes over additive disjunction)</p></li> <li><p>A⅋(B&amp;C) ≡ (A⅋B)&amp;(A⅋C) (and on the other side); </p> <p>(multiplicative disjunction over additive conjunction)</p></li> </ul> <p>Also: </p> <ul> <li>A⊗$0$ ≡ $0$ (and on the other side); </li> <li>A⅋⊤≡⊤ (and on the other side).</li> </ul> <p>Linear implication $A\multimap B$ corresponds to the internal hom, which can be defined as (A⊗$B^⊥)^⊥$. There is a <em>de Morgan</em> dual of the tensor called ‘par’, with A⅋B=($A^⊥⊗B^⊥)^⊥$. Tensor and par are the ‘multiplicative’ connectives, which roughly speaking represent the parallel availability of resources.</p> <p>The ‘additive’ connectives &amp; and ⊕, which correspond in another way to traditional conjunction and disjunction, are modeled as usual by products and coproducts. </p> <p>For a nice exploration in linear logic: <a href="http://ncatlab.org/nlab/show/linear+logic">see this</a>:</p> <hr> <blockquote> <p>(<strong>GAME SEMANTICS):</strong></p> <p>Game semantics for linear logic was first proposed by Andreas Blass (Blass (1992).) The semantics here may or may not be the same as proposed by Blass.</p> <p>We can interpret any proposition in linear logic as a game between two players: we and they. The overall rules are perfectly symmetric between us and them, although no individual game is. At any given moment in a game, exactly one of these four situations obtains: it is our turn, it is their turn, we have won, or they have won; the last two states continue forever afterwards (and the game is over). If it is our turn, then they are winning; if it is their turn, then we are winning. So there are two ways to win: because the game is over (and a winner has been decided), or because it is forever the other players turn (either because they have no move or because every move results in its still being their turn).</p> <p>This is a little complicated, but it's important in order to be able to distinguish the four constants:</p> <p>In ⊤, it is their turn, but they have no moves; the game never ends, but we win.</p> <p>Dually, in 0, it is our turn, but we have no moves; the game never ends, but they win.</p> <p>In contrast, in 1, the game ends immediately, and we have won.</p> <p>Dually, in ⊥, the game ends immediately, and they have won.</p> <p>The binary operators show how to combine two games into a larger game:</p> <p>In A&amp;B, is is their turn, and they must choose to play either A or B. 
Once they make their choice, play continues in the chosen game, with ending and winning conditions as in that game.</p> <p>Dually, in A⊕B, is is our turn, and we must choose to play either A or B. Once we make our choice, play continues in the chosen game, with ending and winning conditions as in that game.</p> <p>In A⊗B, play continues with both games in parallel. If it is our turn in either game, then it is our turn overall; if it is their turn in both games, then it is their turn overall. If either game ends, then play continues in the other game; if both games end, then the overall game ends. If we have won both games, then we have won overall; if they have won either game, then they have won overall.</p> <p><strong>Dually, in A⅋B, play continues with both games in parallel. If it is their turn in either game, then it is their turn overall; if it is our turn in both games, then it is our turn overall. If either game ends, then play continues in the other game; if both games end, then the overall game ends. If they have won both games, then they have won overall; if we have won either game, then we have won overall.</strong></p> <p>So we can classify things as follows:</p> <p>In a conjunction, they choose what game to play; in a disjunction, we have control. Whoever has control must win at least one game to win overall.</p> <p>In an addition, one game must be played; in a multiplication, all games must be played.</p> <p>To further clarify the difference between ⊤ and 1 (the additive and multiplicative versions of truth, both of which we win); consider A⅋⊤ and A⅋1. In A⅋⊤, it is always their move (since it is their move in ⊤, hence their move in at least one game), so we win just as we win ⊤. (In fact, A⅋⊤≡⊤.) However, in A⅋1, the game 1 ends immediately, so play continues as in A. We have won 1, so we only have to end the game to win overall, but there is no guarantee that this will happen. Indeed, in 0⅋1, the game never ends and it is always our turn, so they win. (In ⊥⅋1, both games end immediately, and we win. In A⊗1, we must win both games to win overall, so this reduces to A; indeed, A⊗1≡A.)</p> <p>Negation is easy: To play A ⊥, simply swap roles and play A.</p> <p>A game is valid if we have a strategy to win (whether by putting the game in a state where we have won or by guaranteeing that it is forever their turn). The soundness and completeness of this interpretation is the theorem that A is a valid game if and only if ⊢A is a valid sequent. (Recall that all questions of validity of sequents can be reduced to the validity of single propositions.)</p> </blockquote>
game-theory
<p>Got this for an interview and didn't get it. How to solve?</p> <p>You and your opponent have a uniform random sampler from 0 to 1. We both sample from our own machines. Whoever has the higher number wins. The catch is that when you see your number, you can resample. The opponent can resample once but you can resample twice. What’s the probability I win?</p> <p>My not-confident-at-all approach: For each player you come up with a strategy that revolves around the idea of “if this number is too low, resample.” You know that for myself, I have three samples, and the EV of the third sample is 1/2. So for the second sample, if it’s below 1/2, you should resample; if above 1/2, do not resample. And you do this for the first sample with a slightly higher threshold. And then, assuming the opponent plays rationally, they will follow the same approach, but they only have two rolls.</p> <p>No matter what, we know the game can end with six outcomes: it can end with me ending on the first, second, or third sample, and them ending on the first or second sample. We just condition on each of those six cases and find the probability that my roll is bigger than their roll on that conditional uniform distribution.</p>
<p>Let's look at the single player game which is that I have a budget of <span class="math-container">$n$</span> total rolls and I want to come up with a strategy for getting the biggest roll in expectation.</p> <p>For <span class="math-container">$k\leq n$</span>, let us look at what happens from <span class="math-container">$k$</span> rolls. Now if we think of each of the rolls <span class="math-container">$R_1,R_2,\dots,R_k$</span> as being independent draws from the uniform distribution and we define a new random variable <span class="math-container">$X:=\max\{R_1,R_2,\dots,R_k\}$</span>, we see that for an arbitrary <span class="math-container">$x\in[0,1]$</span>, the probability that <span class="math-container">$X$</span> is smaller or equal to <span class="math-container">$x$</span> is <span class="math-container">$\mathbb{P}(X\leq x)=\mathbb{P}(R_1\leq x, R_2\leq x,\dots R_k\leq x)=\prod_{j=1}^k \mathbb{P}(R_j\leq x)=\prod_{j=1}^k x=x^k$</span>.</p> <p>This is clearly the cumulative distribution function of <span class="math-container">$X$</span>, so we can differentiate that to get the density of <span class="math-container">$X$</span>, which will obviously be <span class="math-container">$f_X(x)=kx^{k-1}$</span>. This discussion can also be found with a bit more detail here: <a href="https://stats.stackexchange.com/questions/18433/how-do-you-calculate-the-probability-density-function-of-the-maximum-of-a-sample">https://stats.stackexchange.com/questions/18433/how-do-you-calculate-the-probability-density-function-of-the-maximum-of-a-sample</a></p> <p>But if we have the density function, we can immediately compute the expected value of <span class="math-container">$X$</span>, which will be <span class="math-container">$\mathbb{E}[X]:=\int_0^1 xf_X(x)dx = \int_0^1 kx^k=\tfrac{k}{k+1}$</span>.</p> <p>So what does this mean? If I still have <span class="math-container">$k$</span> rolls left in my budget, I should expect that the maximum value I will encounter during those remaining <span class="math-container">$k$</span> rolls will be <span class="math-container">$\tfrac{k}{k+1}$</span>.</p> <p>With this in mind, an optimal strategy for the single player game with a total budget of <span class="math-container">$n$</span> rolls is quite straightforward.</p> <p>Do the first roll and get the value <span class="math-container">$x_1$</span>. Is <span class="math-container">$x_1&gt;\tfrac{n-1}{n}$</span>, which is the expected highest value I will see in the remaining <span class="math-container">$n-1$</span> rolls? If yes, stop. Otherwise roll again. Get <span class="math-container">$x_2$</span> on the second roll. Is <span class="math-container">$x_2&gt;\tfrac{n-2}{n-1}$</span>? Then stop. Else roll the third time. Inductively, if on roll <span class="math-container">$\ell$</span> you have that <span class="math-container">$x_\ell&gt;\tfrac{n-\ell}{n-\ell+1}$</span>, stop. Otherwise roll for the <span class="math-container">$\ell+1$</span>-th time. If you are unlucky enough to get to the <span class="math-container">$n$</span>-th roll, the value you will stop at will be arbitrary.</p> <p>But now, what is the expected outcome of the optimal strategy? Note that you would stop at roll <span class="math-container">$1$</span> with probability <span class="math-container">$S_1=\tfrac{1}{n}$</span> and you will only do that when <span class="math-container">$x_1\in(\tfrac{n-1}{n},1]$</span>. 
The average value of stopping at step <span class="math-container">$1$</span> will clearly be <span class="math-container">$A_1=\tfrac{1}{2}\big(\tfrac{n-1}{n}+1\big)=\tfrac{2n-1}{2n}$</span>.</p> <p>You stop at roll <span class="math-container">$2$</span> if you did not stop at roll <span class="math-container">$1$</span> and if <span class="math-container">$x_2\in(\tfrac{n-2}{n-1},1]$</span>. The probability that you did not stop at roll <span class="math-container">$1$</span> is <span class="math-container">$1-\tfrac{1}{n}=\tfrac{n-1}{n}$</span> and the probability that you roll <span class="math-container">$x_2&gt;\tfrac{n-2}{n-1}$</span> is <span class="math-container">$\tfrac{1}{n-1}$</span>. So the probability that you stop at roll <span class="math-container">$2$</span> is <span class="math-container">$S_2=\tfrac{n-1}{n}\tfrac{1}{n-1}=\tfrac{1}{n}$</span>. The average stopping value on roll <span class="math-container">$2$</span> is <span class="math-container">$A_2=\tfrac{2n-3}{2n-2}$</span>.</p> <p>You stop at roll <span class="math-container">$3$</span> if you did not stop at roll <span class="math-container">$1$</span>, did not stop at roll <span class="math-container">$2$</span> and if <span class="math-container">$x_3\in(\tfrac{n-3}{n-2},1]$</span>. This will happen with probability <span class="math-container">$S_3=(1-2\tfrac{1}{n})\tfrac{1}{n-2}=\tfrac{1}{n}$</span> and your average stopping value will be <span class="math-container">$A_3=\tfrac{2n-5}{2n-4}$</span>.</p> <p>You can show by induction that <span class="math-container">$S_k=\tfrac{1}{n}$</span> since you stop at roll <span class="math-container">$k$</span> if you have not stopped at any of the previous rolls (which happens with probability <span class="math-container">$1-\sum_{j=1}^{k-1} S_j=\tfrac{n-k}{n}$</span> by the induction hypothesis) and if <span class="math-container">$x_k&gt;\tfrac{n-k-1}{n-k}$</span> which has probability <span class="math-container">$\tfrac{1}{n-k}$</span>. The average outcome will be <span class="math-container">$A_k=\tfrac{2n-2k+1}{2n-2k+2}$</span>. This will hold for all <span class="math-container">$2\leq k\leq n-1$</span>. Stopping at roll <span class="math-container">$n$</span> will occur with probability <span class="math-container">$S_n=1-\sum_{j=1}^{n-1} S_j=\tfrac{1}{n}$</span> with average outcome <span class="math-container">$A_n=\tfrac{1}{2}=\tfrac{2n-2n+1}{2n-2n+2}$</span>.</p> <p>The expected value of the single player strategy is then <span class="math-container">$E_n=\sum_{k=1}^n S_kA_k= \tfrac{1}{n} \sum_{k=1}^n \tfrac{2n-2k+1}{2n-2k+2} =\tfrac{1}{n} \sum_{k=1}^n\big(1-\tfrac{1}{2k}\big)=1-\tfrac{1}{2n}\sum_{k=1}^n \tfrac{1}{k}$</span>.</p> <p><span class="math-container">$E_2=1-\tfrac{1}{4}\big(1+\tfrac{1}{2}\big)=\tfrac{5}{8}$</span>.</p> <p><span class="math-container">$E_3=1-\tfrac{1}{6}\big(1+\tfrac{1}{2}+\tfrac{1}{3}\big)=\tfrac{25}{36}$</span>.</p> <p>As expected <span class="math-container">$E_3&gt;E_2$</span>. I am unsure what the game theory perspective on this is though. If both players follow the single-player strategy, the player with <span class="math-container">$3$</span> rolls is definitely expected to win. How the other player would want to adapt his strategy depends on some factors. For instance, do both players know what roll the other player is on or whether they stopped early?</p> <p><strong>Major edit thanks to @hgmath</strong></p> <p>What I described above is not the optimal single player strategy. 
Indeed, consider the case when <span class="math-container">$n=3$</span>. If on roll one we get <span class="math-container">$x_1\in(E_2,\tfrac{2}{3})=(\tfrac{5}{8},\tfrac{2}{3})$</span>, rolling again and pursuing the above strategy would be a mistake, since our expected outcome would just be <span class="math-container">$E_2$</span> (the upper bound <span class="math-container">$\tfrac{2}{3}$</span> is put there since even with the current strategy we would not reroll if we got above <span class="math-container">$\tfrac{2}{3}$</span>).</p> <p>This suggests an inductive strategy. Namely, if we already know the optimal expected outcome of <span class="math-container">$E_{n-1}$</span> rolls, we should only reroll after the first roll if <span class="math-container">$x_1&lt;E_{n-1}$</span> instead of rerolling if <span class="math-container">$x_1&lt;\tfrac{n-1}{n}$</span>. So let us try to write down these optimal expectations <span class="math-container">$E_k$</span>.</p> <p>If <span class="math-container">$n=1$</span>, i.e., if our roll budget is <span class="math-container">$1$</span>, clearly we will stop after <span class="math-container">$x_1$</span> with probability <span class="math-container">$S_1=1$</span> and the average outcome will be <span class="math-container">$A_1=\tfrac{1}{2}$</span>, meaning that the optimal <span class="math-container">$E_1=S_1A_1=\tfrac{1}{2}$</span>.</p> <p>If our roll budget is <span class="math-container">$n=2$</span>, then after the first roll <span class="math-container">$x_1$</span>, we need to check if <span class="math-container">$x_1&gt;E_1=\tfrac{1}{2}$</span>. This will happen with probability <span class="math-container">$S_1=1-E_1=\tfrac{1}{2}$</span> and the average outcome will be <span class="math-container">$A_1=\tfrac{1+E_1}{2}=\tfrac{3}{4}$</span>. If <span class="math-container">$x_1\leq E_1$</span>, which will happen with probability <span class="math-container">$E_1=\tfrac{1}{2}$</span>, we stop at <span class="math-container">$S_2=1-S_1=\tfrac{1}{2}$</span> with average outcome <span class="math-container">$A_2=\tfrac{1}{2}$</span>. Thus the expected optimal outcome is <span class="math-container">$E_2=S_1A_1+S_2A_2=\tfrac{1}{2}\big((1-E_1)(1+E_1)+E_1\big)=\tfrac{1}{2}(1+E_1-E_1^2)=\tfrac{5}{8}$</span>. So far no difference to our previous computation.</p> <p>However, if <span class="math-container">$n=3$</span>, we should stop at <span class="math-container">$x_1$</span> if <span class="math-container">$x_1&gt;E_2$</span>, with probability <span class="math-container">$S_1=1-E_2=\tfrac{3}{8}$</span> and average outcome <span class="math-container">$A_1=\tfrac{1+E_2}{2}=\tfrac{13}{16}$</span>. If <span class="math-container">$x_1\leq E_2$</span>, then we roll the second time to get <span class="math-container">$x_2$</span>. We stop if <span class="math-container">$x_2&gt;E_1$</span>, an event with probability <span class="math-container">$S_2=(1-S_1)(1-E_1)=E_2(1-E_1)=\tfrac{5}{8}\tfrac{1}{2}=\tfrac{5}{16}$</span>, and an average outcome <span class="math-container">$A_2=\tfrac{1+E_1}{2}=\tfrac{3}{4}$</span>. 
If <span class="math-container">$x_2&lt;E_1$</span>, we roll again and we are forced to stop with <span class="math-container">$x_3$</span>, an event with probability <span class="math-container">$S_3=1-S_1-S_2=E_2-E_2(1-E_1)=E_1E_2=\tfrac{5}{16}$</span> and average outcome <span class="math-container">$A_3=\tfrac{1}{2}$</span>.</p> <p>Thus, the expected outcome of the optimal <span class="math-container">$3$</span> roll strategy is <span class="math-container">$E_3=S_1 A_1+S_2A_2+S_3A_3=\tfrac{1}{2}\big((1-E_2)(1+E_2)+E_2(1-E_1)(1+E_1)+E_1E_2\big) = \tfrac{1}{2}(1+E_2+E_2E_1-E_2^2-E_2E_1^2)=\tfrac{1}{2}\big(1+\tfrac{5}{8}+\tfrac{5}{16}-\tfrac{25}{64}-\tfrac{5}{32}\big)=\tfrac{89}{128}$</span>.</p> <p>This is bigger than what we got with the old strategy by an eight of a percent.</p> <p>If <span class="math-container">$n=4$</span>, we would get <span class="math-container">$S_1=1-E_3$</span> with <span class="math-container">$2\cdot A_1=1+E_3$</span>, <span class="math-container">$S_2=(1-S_1)(1-E_2)=E_3(1-E_2)$</span> with <span class="math-container">$2\cdot A_2=1+E_2$</span>, <span class="math-container">$S_3=(1-S_1-S_2)(1-E_1)=E_3E_2(1-E_1)$</span> with <span class="math-container">$2\cdot A_3=1+E_1$</span> and <span class="math-container">$S_4=(1-S_1-S_2-S_3)=E_3E_2E_1$</span> with <span class="math-container">$2\cdot A_4=1$</span>, meaning that <span class="math-container">$E_4=\tfrac{1}{2}(1+E_3+E_3E_2+E_3E_2E_1-E_3^2-E_3E_2^2-E_3E_2E_1^2)$</span>. After some computation, this means <span class="math-container">$E_4=\tfrac{25195}{32768}$</span>. This is an almost <span class="math-container">$4\%$</span> increase over the expected outcome of our previous strategy, which would have been <span class="math-container">$1-\tfrac{1}{8}(1+\tfrac{1}{2}+\tfrac{1}{3}+\tfrac{1}{4})=\tfrac{71}{96}$</span>.</p> <p>It is not difficult to find a recursive relation for <span class="math-container">$E_n$</span> based on induction and the obvious pattern from the computed cases, but I don't know if there is any reasonable way to get a closed form expression for <span class="math-container">$E_n$</span>.</p>
<p>First, we propose a method for computing the best response to a given policy of the opponent.</p> <p>For example, assume that the opponent plays a threshold policy with threshold <span class="math-container">$1/2$</span>. Against this policy, a single fresh sample wins with probability <span class="math-container">$3/8$</span>. Therefore, in the second round I should only resample if my current winning probability is less than <span class="math-container">$3/8$</span>; we can compute that the corresponding threshold is <span class="math-container">$7/12$</span>. Then I can compute the conditional winning probability if I resample in the first round; call this value <span class="math-container">$1/2+a$</span>. I can then similarly use this value to determine my threshold for the first round.</p> <p>With this idea, if both players can resample once, one can compute that the unique pure-strategy Nash equilibrium is <span class="math-container">$(x_1,x_2)=(\frac{\sqrt{5}-1}{2},\frac{\sqrt{5}-1}{2})$</span>, where <span class="math-container">$x_1$</span> and <span class="math-container">$x_2$</span> are the thresholds of the two players. Similarly, one can compute a pure-strategy Nash equilibrium for the original problem, but it will be quite complicated.</p>
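<p>(Editor's sketch, not from the original answer.) Reading the game as: each player draws a uniform value on <span class="math-container">$[0,1]$</span>, may discard it and redraw once, and the higher final value wins. This reading reproduces the <span class="math-container">$3/8$</span> and <span class="math-container">$7/12$</span> figures above. Under it, the best response to an opponent threshold <span class="math-container">$t$</span> has a closed form, and iterating the best-response map converges to the symmetric equilibrium threshold <span class="math-container">$\frac{\sqrt{5}-1}{2}\approx 0.618$</span>. The Python sketch below uses names of our own choosing.</p>

<pre><code>def best_response(t):
    """Best-response threshold against an opponent who keeps a first draw
    above t and otherwise keeps a single redraw (uniform draws, high value wins).
    """
    redraw_value = (t + (1 - t) ** 2) / 2  # my win probability if I discard and keep one fresh draw
    # The opponent's final-value CDF is F(x) = t*x on [0, t] and t*x + x - t on [t, 1].
    # The best-response threshold solves F(x) = redraw_value; inverting each linear
    # piece gives two candidates, and the smaller of the two is the valid one.
    return min(redraw_value / t, (redraw_value + t) / (1 + t))

print(best_response(0.5))         # 0.5833... = 7/12, matching the threshold quoted above

t = 0.5
for _ in range(60):               # iterate best responses to locate the fixed point
    t = best_response(t)
print(t, (5 ** 0.5 - 1) / 2)      # both approximately 0.618034
</code></pre>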
number-theory
<blockquote> <p>As in the title, I wish to find all positive integers $n$ such that $3^n + 5^n$ is divisible by $n^2 - 1$. </p> </blockquote> <p>I have so far shown that both expressions are divisible by $8$ for odd $n\ge 3$, so trivially a solution is $n=3$. I'm not quite sure how to proceed from here, though. I have conjectured that $n=3$ is the only solution and have tried to prove it, but have had little luck. Can anyone point me in the right direction? Thanks.</p>
<p>This is a community wiki answer to summarize the main results we've got for quick access. Feel free to edit and add more results. Theoretical achievements are here:</p> <p><strong>Result.</strong> $3\mid n$, by virtually everyone.</p> <p><strong>Result.</strong> $n\equiv 1\pmod 2$, and, thus, $n\equiv 3\pmod 6$, obtained by <a href="https://math.stackexchange.com/a/619233/17751">benh</a>. See also the message on <a href="http://chat.stackexchange.com/transcript/message/12824067#12824067">chat</a>.</p> <p><strong>Result.</strong> if a prime $p$ divides $n^2-1$, then $p\equiv 2^k\pmod{15}$ for some $k$, obtained by <a href="https://math.stackexchange.com/a/619233/17751">benh</a>.</p> <p><strong>Result.</strong> $n^2-1$ isn't divisible by $3, 5, 7, 11$. Obtained by <a href="https://math.stackexchange.com/a/615929/17751">Yiorgos S. Smyrlis</a>.</p> <p><strong>Result.</strong> $n\equiv \pm 3\pmod 8$. Obtained by <a href="https://math.stackexchange.com/questions/612346/find-all-positive-integers-n-s-t-3n-5n-is-divisible-by-n2-1#comment1290656_612346">Tim Ratigan</a>.</p> <p><strong>Result.</strong> combining the congruences, $n\equiv 3, 93 \pmod{120}$. See a proof <a href="https://math.stackexchange.com/a/619233/17751">here</a>.</p> <p><strong>Result.</strong> Jack D'Aurizio was able to show that $n\not\equiv \pm1\pmod p$ for every prime number $p&gt;5$ for which $(3\cdot 5^{-1})$ has an odd order $\pmod{p}$ or an order divisible by $4$, see <a href="https://math.stackexchange.com/a/618920/17751">here</a>. In combination with <a href="https://math.stackexchange.com/a/619233/17751">benh</a>'s result this gives that the smallest odd prime factor of $n^2-1$ is at least $19$. </p> <hr> <p>Numerical checks are given and updated in this part.</p> <p><strong>Result.</strong> <a href="https://math.stackexchange.com/questions/612346/find-all-positive-integers-n-s-t-3n-5n-is-divisible-by-n2-1#comment1304856_612346">Listing</a> was able to verify by brute force that $n=3 \lor n&gt;10^{12}$, extending the result by <a href="https://math.stackexchange.com/a/619233/17751">Tapio Rajala</a> that $n=3 \lor n&gt;10^{11}$.</p> <p>Add your own result or someone else's. Please give proper credit and don't post the proofs here; link them instead. For further discussion, e.g. disproving or strengthening any claim, use this <a href="http://chat.stackexchange.com/rooms/12070/elementary-number-theory">chatroom</a>.</p> <hr> <p>This turned out to be a long-standing open problem. Needless to say, breakthroughs in this question will be very well rewarded. I don't want this question to stop here, so I'll offer a +100 bounty very soon. Keep up the good work!</p>
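<p>(Editor's note, not part of the community wiki.) For readers who want to reproduce the small-range part of the numerical checks, here is a minimal Python sketch (not the code actually used by Tapio Rajala or Listing) that tests only the candidates allowed by the proven congruence $n\equiv 3, 93 \pmod{120}$ and uses modular exponentiation so that $3^n+5^n$ is never written out in full.</p>

<pre><code>def search(limit):
    """Return all n below limit with n = 3 or 93 (mod 120) such that
    n^2 - 1 divides 3^n + 5^n."""
    hits = []
    for start in (3, 93):
        for n in range(start, limit, 120):
            m = n * n - 1
            if (pow(3, n, m) + pow(5, n, m)) % m == 0:
                hits.append(n)
    return sorted(hits)

print(search(10 ** 6))  # prints [3]; per the checks cited above, nothing else appears below 10**12
</code></pre>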
<p>I'd better make this an answer. This was asked on MO long ago. Nobody could do it. Kevin Buzzard wrote to Andreescu and found out that the authors of the book don't know how to finish the problem. I put an answer summarizing what we had, in basic language. </p> <p>See</p> <p><a href="https://mathoverflow.net/questions/16341/on-polynomials-dividing-exponentials">https://mathoverflow.net/questions/16341/on-polynomials-dividing-exponentials</a></p>
number-theory
<p>The algorithm Mathematica uses for its <code>PrimeQ</code> function <a href="http://mathworld.wolfram.com/Rabin-MillerStrongPseudoprimeTest.html">is described on MathWorld</a>. That web page says <code>PrimeQ</code> uses "the multiple Rabin-Miller test in bases 2 and 3 combined with a Lucas pseudoprime test." It also says, </p> <blockquote> <p>"As of 1997, this procedure is known to be correct only for all $n&lt;10^{16}$, but no counterexamples are known and if any exist, they are expected to occur with extremely small probability (i.e., much less than the probability of a hardware error on a computer performing the test)."</p> </blockquote> <p>More details on the algorithm are given in <a href="http://rads.stackoverflow.com/amzn/click/0470412151">A Course in Computational Number Theory by David Bressoud</a>. That book says there are 52,593 integers less than $10^{16}$ that are strong pseudoprimes to both bases 2 and 3. It also suggests that Lucas pseudoprimes less than $10^{16}$ are rare. Number theory is a very rigorous field. Has any rigorous work been done to substantiate the claim above that, if any <code>PrimeQ</code> counterexamples exist, they are expected to occur with extremely small probability? All I can find so far is an extrapolation of the trend for $n&lt;10^{16}$, and that is a very hand-wavy argument.</p> <p>Besides that, I am aware of no efforts to extend the brute-force testing beyond $10^{16}$. With Moore's law and 15 years of work, considerable progress could have been made in that way.</p>
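<p>(Editor's sketch, not Mathematica's actual implementation.) For concreteness, the first stage of such a procedure, the strong (Rabin-Miller) probable-prime test to a fixed base, can be written in a few lines of Python; the Lucas stage is omitted here. The reported <code>PrimeQ</code> procedure amounts to running this with bases 2 and 3 and then applying a Lucas pseudoprime test.</p>

<pre><code>def is_strong_probable_prime(n, base):
    """Strong (Rabin-Miller) probable-prime test of n to the given base.

    Returns True if n is prime or a strong pseudoprime to this base.
    """
    if n % 2 == 0 or n == 1:
        return n == 2
    d, s = n - 1, 0
    while d % 2 == 0:          # write n - 1 = d * 2**s with d odd
        d //= 2
        s += 1
    x = pow(base, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

# 2047 = 23 * 89 is the smallest strong pseudoprime to base 2, but base 3 exposes it:
print(is_strong_probable_prime(2047, 2), is_strong_probable_prime(2047, 3))
</code></pre>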
<p>I and a number of others have proved that there are no BPSW-pseudoprimes below $2^{64}$. This builds on the work of Jan Feitsma around 2009. In particular this means that, barring programming errors, <code>PrimeQ</code> is correct for all values below $1.844\cdot10^{19}.$</p>
<p>Regarding the original question: it is not necessarily the case that Lucas pseudoprimes less than $10^{16}$ are rare. The point is that, if you make a list of psp's base 2 and a list of Lucas psp's, then the two lists are observed to have very little, if any, overlap.</p> <p>As for probabilities: in the Miller-Rabin test, if n is composite, then at least 3/4 of the bases are witnesses for the compositeness of n.</p> <p>For the strong Lucas test, the corresponding proportion (measuring (P,Q) pairs) is 11/15; see F. Arnault, "The Rabin-Monier Theorem for Lucas Pseudoprimes", Mathematics of Computation 66 (218) (April, 1997), pp. 869–881.</p>
geometry
<p>My son is in 2nd grade. His math teacher gave the class a quiz, and one question was this:</p> <blockquote> <p>If a triangle has 3 sides, and a rectangle has 4 sides, how many sides does a circle have?</p> </blockquote> <p>My first reaction was "0" or "undefined". But my son wrote "$\infty$" which I think is a reasonable answer. However, it was marked wrong with the comment, "the answer is 1".</p> <p>Is there an accepted correct answer in geometry?</p> <p><strong>edit:</strong> I ran into this teacher recently and mentioned this quiz problem. She said she thought my son had written "8." She didn't know that a sideways "8" means infinity.</p>
<p>The answer depends on the definition of the word "side." I think this is a terrible question (edit: <em>to put on a quiz</em>) and is the kind of thing that will make children hate math. "Side" is a term that should really be reserved for polygons. </p>
<p>My third-grade son came home a few weeks ago with similar homework questions:</p> <blockquote> <p>How many faces, edges and vertices do the following have?</p> <ul> <li>cube</li> <li>cylinder</li> <li>cone</li> <li>sphere</li> </ul> </blockquote> <p>Like most mathematicians, my first reaction was that for the latter objects the question would need a precise definition of face, edge and vertex, and isn't really sensible without such definitions.</p> <p>But after talking about the problem with numerous people, conducting a kind of social/mathematical experiment, I observed something intriguing. What I observed was that none of my non-mathematical friends and acquaintances had any problem with using an intuitive geometric concept here, and they all agreed completely that the answers should be</p> <ul> <li>cube: 6 faces, 12 edges, 8 vertices</li> <li>cylinder: 3 faces, 2 edges, 0 vertices</li> <li>cone: 2 faces, 1 edge, 1 vertex</li> <li>sphere: 1 face, 0 edges, 0 vertices</li> </ul> <p>Indeed, these were also the answers desired by my son's teacher (who is a truly outstanding teacher). Meanwhile, all of my mathematical colleagues hemmed and hawed about how we can't really answer, and what does "face" mean in this context anyway, and so on; most of them wanted ultimately to say that a sphere has infinitely many faces and infinitely many vertices and so on. For the homework, my son wrote an explanation giving the answers above, but also explaining that there was a sense in which some of the answers were infinite, depending on what was meant.</p> <p>At a party this past weekend full of mathematicians and philosophers, it was a fun game to first ask a mathematician the question, who invariably made various objections and refusals and and said it made no sense and so on, and then the non-mathematical spouse would forthrightly give a completely clear account. There were many friendly disputes about it that evening.</p> <p>So it seems, evidently, that our extensive mathematical training has interfered with our ability to grasp easily what children and non-mathematicians find to be a clear and distinct geometrical concept.</p> <p>(My actual view, however, is that it is our training that has taught us that the concepts are not so clear and distinct, as witnessed by numerous borderline and counterexample cases in the historical struggle to find the right definitions for the $V-E+F$ and other theorems.) </p>