linear-algebra
<p>When I first took linear algebra, we never learned about dual spaces. Today in lecture we discussed them and I understand what they are, but I don't really understand why we want to study them within linear algebra.</p> <p>I was wondering if anyone knew a nice intuitive motivation for the study of dual spaces and whether or not they "show up" as often as other concepts in linear algebra? Is their usefulness something that just becomes more apparent as you learn more math and see them arise in different settings?</p> <hr> <h3>Edit</h3> <p>I understand that dual spaces show up in functional analysis and multilinear algebra, but I still don't really understand the intuition/motivation behind their definition in the standard topics covered in a linear algebra course. (Hopefully, this clarifies my question)</p>
<p>Let <span class="math-container">$V$</span> be a vector space (over any field, but we can take it to be <span class="math-container">$\mathbb R$</span> if you like, and for concreteness I will take the field to be <span class="math-container">$\mathbb R$</span> from now on; everything is just as interesting in that case). Certainly one of the interesting concepts in linear algebra is that of a <em>hyperplane</em> in <span class="math-container">$V$</span>.</p> <p>For example, if <span class="math-container">$V = \mathbb R^n$</span>, then a hyperplane is just the solution set to an equation of the form <span class="math-container">$$a_1 x_1 + \cdots + a_n x_n = b,$$</span> for some <span class="math-container">$a_i$</span> not all zero and some <span class="math-container">$b$</span>. Recall that solving such equations (or simultaneous sets of such equations) is one of the basic motivations for developing linear algebra.</p> <p>Now remember that when a vector space is not given to you as <span class="math-container">$\mathbb R^n$</span>, it doesn't normally have a canonical basis, so we don't have a canonical way to write its elements down via coordinates, and so we can't describe hyperplanes by explicit equations like above. (Or better, we can, but only after choosing coordinates, and this is not canonical.)</p> <p>How can we canonically describe hyperplanes in <span class="math-container">$V$</span>?</p> <p>For this we need a conceptual interpretation of the above equation. And here linear functionals come to the rescue. 
More precisely, the map</p> <p><span class="math-container">$$\begin{align*} \ell: \mathbb{R}^n &amp;\rightarrow \mathbb{R} \\ (x_1,\ldots,x_n) &amp;\mapsto a_1 x_1 + \cdots + a_n x_n \end{align*}$$</span></p> <p>is a linear functional on <span class="math-container">$\mathbb R^n$</span>, and so the above equation for the hyperplane can be written as <span class="math-container">$$\ell(v) = b,$$</span> where <span class="math-container">$v = (x_1,\ldots,x_n).$</span></p> <p>More generally, if <span class="math-container">$V$</span> is any vector space, and <span class="math-container">$\ell: V \to \mathbb R$</span> is any non-zero linear functional (i.e. non-zero element of the dual space), then for any <span class="math-container">$b \in \mathbb R,$</span> the set</p> <p><span class="math-container">$$\{v \, | \, \ell(v) = b\}$$</span></p> <p>is a hyperplane in <span class="math-container">$V$</span>, and all hyperplanes in <span class="math-container">$V$</span> arise this way.</p> <p>So this gives a reasonable justification for introducing the elements of the dual space to <span class="math-container">$V$</span>; they generalize the notion of linear equation in several variables from the case of <span class="math-container">$\mathbb R^n$</span> to the case of an arbitrary vector space.</p> <p>Now you might ask: why do we make them a vector space themselves? Why do we want to add them to one another, or multiply them by scalars?</p> <p>There are lots of reasons for this; here is one: Remember how important it is, when you solve systems of linear equations, to add equations together, or to multiply them by scalars (here I am referring to all the steps you typically make when performing Gaussian elimination on a collection of simultaneous linear equations)? Well, under the dictionary above between linear equations and linear functionals, these processes correspond precisely to adding together linear functionals, or multiplying them by scalars. 
If you ponder this for a bit, you can hopefully convince yourself that making the set of linear functionals a vector space is a pretty natural thing to do.</p> <p>Summary: just as concrete vectors <span class="math-container">$(x_1,\ldots,x_n) \in \mathbb R^n$</span> are naturally generalized to elements of vector spaces, concrete linear expressions <span class="math-container">$a_1 x_1 + \ldots + a_n x_n$</span> in <span class="math-container">$x_1,\ldots, x_n$</span> are naturally generalized to linear functionals.</p>
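To make this dictionary concrete, here is a small sketch in plain Python (the helper names `apply_functional`, `add`, and `scale` are mine, not standard): a linear functional on $\mathbb R^n$ is just its tuple of coefficients, evaluating it is a dot product, and the row operations of Gaussian elimination are exactly addition and scalar multiplication of functionals.

```python
# Sketch: linear functionals on R^n represented as coefficient tuples.
# Helper names are hypothetical, chosen for this illustration.

def apply_functional(ell, v):
    """Evaluate the functional ell at the vector v: a_1*x_1 + ... + a_n*x_n."""
    return sum(a * x for a, x in zip(ell, v))

def add(ell1, ell2):
    """Sum of two functionals = adding the corresponding linear equations."""
    return tuple(a + b for a, b in zip(ell1, ell2))

def scale(c, ell):
    """Scalar multiple of a functional = scaling the corresponding equation."""
    return tuple(c * a for a in ell)

# The hyperplane x + 2y = 3 in R^2 is {v | apply_functional(ell, v) == 3}:
ell = (1, 2)
assert apply_functional(ell, (1, 1)) == 3   # (1, 1) lies on the hyperplane
assert apply_functional(ell, (0, 0)) == 0   # the origin does not

# A Gaussian-elimination step "row2 := row2 - 2*row1" is functional arithmetic:
row1, row2 = (1, 2), (2, 5)
new_row2 = add(row2, scale(-2, row1))
assert new_row2 == (0, 1)
```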
<p>Since there is no answer giving the following point of view, I'll allow myself to resuscitate the post.</p> <p>The dual is intuitively the space of "rulers" (or measurement-instruments) of our vector space. Its elements measure vectors. This is what makes the dual space and its relatives so important in Differential Geometry, for instance. This immediately motivates the study of the dual space. For motivations in other areas, the other answers cover them quite well.</p> <p>This also happens to explain some facts intuitively. For instance, the fact that there is no canonical isomorphism between a vector space and its dual can then be seen as a consequence of the fact that rulers need scaling, and there is no canonical way to provide one scaling for the space. However, if we were to <strong>measure the measure-instruments</strong>, how could we proceed? Is there a canonical way to do so? Well, if we want to measure our measures, why not measure them by how they act on what they are supposed to measure? We need no bases for that. This justifies intuitively why there is a natural embedding of the space into its bidual. (Note, however, that this fails to justify why it is an isomorphism in the finite-dimensional case).</p>
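A minimal sketch of this "measuring the measuring instruments" idea in plain Python (the helper names are hypothetical): a vector can be repackaged as something that eats functionals, purely by evaluation, with no basis chosen anywhere.

```python
# Sketch of the natural embedding V -> V**: a vector v gives a canonical way
# to "measure the measuring instruments", namely by evaluation.

def functional(coeffs):
    """A functional on R^n, represented concretely by its coefficients."""
    return lambda v: sum(a * x for a, x in zip(coeffs, v))

def into_bidual(v):
    """The canonical image of v in the double dual: it eats functionals."""
    return lambda ell: ell(v)

v = (3, 4)
ell = functional((1, 0))       # "measure the first coordinate"
v_hat = into_bidual(v)         # v, reinterpreted as a measure of measures
assert ell(v) == 3
assert v_hat(ell) == 3         # no basis was chosen anywhere
```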
differentiation
<p>I remember my physics teacher mentioning that other trigonometric functions exist apart from sin, cos, and tan. He mentioned a few that did not sound familiar, nothing like sec, csc, and cot. I would like to learn more about them, if they exist.</p>
<p>Here is the <a href="https://en.wikipedia.org/wiki/Trigonometric_functions" rel="nofollow noreferrer">Wikipedia page</a> on the subject, and here is an <a href="https://commons.wikimedia.org/wiki/File:Circle-trig6.svg" rel="nofollow noreferrer">image of a unit circle</a> from that page that answers your question quite well. Credit for this image goes to Wikipedia <a href="https://commons.wikimedia.org/wiki/User:Tttrung" rel="nofollow noreferrer">User:Tttrung</a>.</p> <p><img src="https://i.sstatic.net/cvOrI.png" alt="Circle with many trig functions" /></p>
<p>I will explain what the functions mean in geometry and give their derivatives. </p> <p>As usual, we denote the hypotenuse with H, the opposite side with O and the adjacent side with A. </p> <hr> <p>The secant, cosecant and cotangent:</p> <p>$$\sec(x) = \frac{1}{\cos(x)} = \frac{H}{A}$$ $$\csc(x) = \frac{1}{\sin(x)} = \frac{H}{O}$$</p> <p>$$\cot(x) = \frac{1}{\tan(x)} = \frac{\cos(x)}{\sin(x)}=\frac{\csc(x)}{\sec(x)} = \frac{A}{O}$$</p> <hr> <p>Their derivatives:</p> <p>$$\sec'(x) = \sec(x) \tan(x)$$ $$\csc'(x) = - \csc(x) \cot(x)$$ $$\cot'(x) = - \csc^2(x)$$</p> <hr> <p>Further, we have functions like the versed sine ($\mathrm{versin}(x)$. Note that LaTeX has no built-in command for this function; that says something about how common it is), coversed sine ($\mathrm{coversin}(x)$), versed cosine ($\mathrm{vercosin}(x)$) and coversed cosine ($\mathrm{covercosin}(x)$), which are respectively $1- \cos(x)$, $1-\sin(x)$, $1+\cos(x)$, $1+\sin(x)$. They have the property that they are nonnegative, and that is one reason they are used. Their halves ($\mathrm{haversin}(x)$, $\mathrm{cohaversin}(x)$, etc.) are also used. They further have the property that they are between 0 and 1, just as the absolute value of the sine or cosine. </p> <p>We further have $$\mathrm{exsec}(x) = \sec(x) - 1 = \frac{\mathrm{versin}(x)}{\cos(x)}$$</p> <p>This has the derivative $$\mathrm{exsec}'(x)=\frac{\sin(x)}{\cos^2(x)} = \frac{\tan(x)}{\cos(x)}$$</p> <hr> <p>The last function I want to look at is $$\mathrm{crd}(x)=2\sin\left(\frac{x}{2}\right)$$</p> <p>This has the derivative $$\mathrm{crd}'(x) = \cos\left(\frac{x}{2}\right)$$ The length of a chord subtending a central angle $\theta$ in a circle with radius 1 is $\mathrm{crd}(\theta)$. </p>
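A quick numerical sanity check of the identities and derivatives above, using only the standard `math` module and a central difference quotient (a spot check at one point, not a proof):

```python
import math

# Numeric spot-check of the identities and derivatives above.

def sec(x):  return 1 / math.cos(x)
def csc(x):  return 1 / math.sin(x)
def cot(x):  return 1 / math.tan(x)
def crd(x):  return 2 * math.sin(x / 2)

def deriv(f, x, h=1e-6):
    """Central difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7
assert math.isclose(deriv(sec, x), sec(x) * math.tan(x), rel_tol=1e-6)
assert math.isclose(deriv(csc, x), -csc(x) * cot(x),     rel_tol=1e-6)
assert math.isclose(deriv(cot, x), -csc(x) ** 2,         rel_tol=1e-6)
assert math.isclose(deriv(crd, x), math.cos(x / 2),      rel_tol=1e-6)

# versin(x) = 1 - cos(x), and exsec(x) = sec(x) - 1 = versin(x)/cos(x):
versin = 1 - math.cos(x)
assert math.isclose(sec(x) - 1, versin / math.cos(x))
```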
geometry
<p>My headphone cables formed this knot:</p> <p><a href="https://i.sstatic.net/Q7s4H.png"><img src="https://i.sstatic.net/Q7s4H.png" alt="enter image description here"></a></p> <p>however I don't know much about knot theory and cannot tell what it is. In my opinion it isn't a figure-eight knot and certainly not a trefoil. Since it has $6$ crossings that doesn't leave many other candidates!</p> <p>What is this knot? How could one figure it out for similarly simple knots?</p>
<p>Arthur's answer is completely correct, but for the record I thought I would give a general answer for solving problems of this type using the <a href="http://www.math.uic.edu/t3m/SnapPy/index.html">SnapPy software package</a>. The following procedure can be used to recognize almost any prime knot with a small number of crossings, and takes about 10-15 minutes for a new user.</p> <p><strong>Step 1.</strong> Download and install the SnapPy software from the <a href="http://www.math.uic.edu/t3m/SnapPy/installing.html">SnapPy installation page</a>. This is very quick and easy, and works in Mac OS X, Windows, or Linux.</p> <p><strong>Step 2.</strong> Open the software and type:</p> <pre><code>M = Manifold() </code></pre> <p>to start the link editor. (Here "manifold" refers to the <a href="https://en.wikipedia.org/wiki/Knot_complement">knot complement</a>.)</p> <p><strong>Step 3.</strong> Draw the shape of the knot. Don't worry about crossings to start with: just draw a closed polygonal curve that traces the shape of the knot. Here is the shape that I traced: <a href="https://i.sstatic.net/WDNbk.png"><img src="https://i.sstatic.net/WDNbk.png" alt="enter image description here"></a></p> <p>If you make a mistake, choose "Clear" from the Tools menu to start over.</p> <p><strong>Step 4.</strong> After you draw the shape of the knot, you can click on the crossings with your mouse to change which strand is on top. Here is my version of the OP's knot:</p> <p><a href="https://i.sstatic.net/BrzcX.png"><img src="https://i.sstatic.net/BrzcX.png" alt="enter image description here"></a></p> <p><strong>Step 5.</strong> Go to the "Tools" menu and select "Send to SnapPy". 
My SnapPy shell now looks like this:</p> <p><a href="https://i.sstatic.net/Cd6w1.png"><img src="https://i.sstatic.net/Cd6w1.png" alt="enter image description here"></a></p> <p><strong>Step 6.</strong> Type</p> <pre><code>M.identify() </code></pre> <p>The software will give you various descriptions of the manifold, one of which will identify the prime knot using Alexander-Briggs notation. In this case, the output is</p> <pre><code>[5_1(0,0), K5a2(0,0)] </code></pre> <p>and the first entry means that it's the $5_1$ knot.</p>
<p>Here is a redrawing of your knot, with some colour added to it:</p> <p><a href="https://i.sstatic.net/d1RaP.png"><img src="https://i.sstatic.net/d1RaP.png" alt="enter image description here"></a></p> <p>Now, take the red part, and flip it down, and you get this knot:</p> <p><a href="https://i.sstatic.net/QXk4R.png"><img src="https://i.sstatic.net/QXk4R.png" alt="enter image description here"></a></p> <p>which has five crossings, and, if it weren't for my lousy Paint skills, would look an awful lot like the pentafoil knot $5_1$ (the edge of a Möbius strip with $5$ half-twists, or the $(5, 2)$ torus knot), if you just smooth out the edges a bit.</p>
probability
<p>If you've ever played rock-paper-scissors, and you are reading this on math.stackexchange, you probably know that always playing $1$ of the $3$ choices at random (more precisely: uniformly at random and independently of previous choices) guarantees even chances of victory against any opponent.</p> <p>But "playing at random" is harder than it looks, particularly if you have no tools - from old-fashioned dice to tech accessing thermal noise. In fact, I've seen a little piece of code that marginally, but consistently over time, beats most humans at rock-paper-scissors simply by looking at biases in how they've been playing so far, and predicting future throws accordingly. For example, humans tend to play long sequences of identical throws with the "wrong" frequency.</p> <p><strong>I was wondering if anyone knows good ways to produce reasonably random bits without tools</strong>; say, enough to compete with even or as-even-as-possible odds against a computer trying to predict one's choices. I realize "good" and "reasonably" (and even "tools") are a bit fuzzy, but I'm <em>sure</em> folks will understand the spirit of the question... I don't want to simulate a Mersenne Twister in my head (though a pseudorandom generator with a reasonable balance of randomness and simplicity would definitely be a possibility), nor use the painful method a friend of mine suggested: pull a random hair from one's head, and check if it's white (for most people it's a biased toss, but as long as one's hair is salt-and-pepper one can trade hair for <a href="https://en.wikipedia.org/wiki/Fair_coin#Fair_results_from_a_biased_coin" rel="noreferrer">fairness in the toss</a>).</p> <p>Buried in the comments below, there's a link to a <a href="http://koaning.io/human-entropy.html" rel="noreferrer">web page allowing you to test any such scheme</a>!</p>
<ol> <li><p>I have freckles. I just encircle with thumb and forefinger a “random” portion of my arm, count the freckles, and take the parity. It takes about 3-5 seconds. I guess that someone who is fairly hairy could do the same counting hair.</p></li> <li><p>Talking of counting hair, I take a small lock of my hair (I have long hair), and then try to split it as evenly as possible in two “half locks”, each with half the hair. I repeat the same process on one of the half locks, and then on one of the quarter locks, until I’m left with a single hair. The parity of the number of halvings is my random bit. It takes me about 5-10 seconds to produce it. I’ve found this to be quicker than the more obvious method of just counting the hairs and taking their parity.</p></li> <li><p>I play a really fast-paced word association game: I think of a word, the very first word associated to it, the second, third and fourth (using my hand’s fingers to count the five words helps). My random bit is the alphabetical order of the third-to-last letters of the last two words. It’s easier to do than to say, it takes no more than $3$ seconds.</p></li> <li><p>This is somewhat general, though I guess everyone should make it specific to one’s interests in life. I let my mind wander and pick $3$ artists (musicians, writers, painters, etc.) and think of how many of the pieces of each I’ve heard/read/seen. I add the numbers and take the parity. A programmer friend of mine says for him it works really well with examples of some obscure software categories I can’t quite remember. I guess a gourmet could use types of food: how many vegetable soups, omelettes, and bottles of red wine he’s had in the last two weeks.</p></li> <li><p>I pick a number that strikes my fancy, and play the $3n+1$ game (halve it if even or triple it and add $1$ if odd, and then repeat) for $20$ steps, using the parity of the second-to-last digit. 
I've not tried this extensively enough to detect any bias, which instead affects the last digit, but I guess one could correct it with the method the OP suggests for white hair counting. This is slowish (about $20-30$ seconds), but still a bit faster than the congruential generator another poster suggested, at least for me.</p></li> <li><p>If you are good at remembering phone numbers of people, their least significant digits are a good source of randomness against people who know relatively little of your contacts. You can always scramble them a little, like reversing them and skipping every other digit.</p></li> </ol>
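As a footnote to several of these schemes: the biased-coin trick the OP mentions for white hairs (von Neumann's classic extractor) works for any biased but independent bit source. A minimal Python sketch, with a simulated 80/20 source standing in for freckle counts or hair splits:

```python
import random

# Von Neumann's extractor, the "fair results from a biased coin" trick:
# draw bits in pairs, keep 01 -> 0 and 10 -> 1, discard 00 and 11.
# Since P(01) == P(10) for independent draws, the kept bit is unbiased.

def debias(biased_bit):
    """Turn a biased-but-independent 0/1 source into a fair one."""
    while True:
        a, b = biased_bit(), biased_bit()
        if a != b:
            return a

rng = random.Random(0)                              # seeded for repeatability
biased = lambda: 1 if rng.random() < 0.8 else 0     # a simulated 80/20 source

bits = [debias(biased) for _ in range(10_000)]
print(sum(bits) / len(bits))   # close to 0.5 despite the 80/20 bias
```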
<p>This is one strategy that seems to work for me (36). I talk to myself (11). And for every sentence, I just count the letters in my head (46). Then, I can use the least significant bit, or maybe two, as "random" bits (50). It's a game I used to play as a pastime for its own sake ... (42) and if I add the numbers wrong, well, it doesn't really seem to affect the randomness (76)! The numbers in parentheses are, if you haven't guessed, the letter counts of each sentence (71).</p>
logic
<p>Today I had an argument with my math teacher at school. We were answering some simple True/False questions and one of the questions was the following:</p> <p><span class="math-container">$$x^2\ne x\implies x\ne 1$$</span></p> <p>I immediately answered true, but for some reason, everyone (including my classmates and math teacher) is disagreeing with me. According to them, when <span class="math-container">$x^2$</span> is not equal to <span class="math-container">$x$</span>, <span class="math-container">$x$</span> also can't be <span class="math-container">$0$</span> and because <span class="math-container">$0$</span> isn't excluded as a possible value of <span class="math-container">$x$</span>, the sentence is false. After hours, I am still unable to understand this ridiculously simple implication. I can't believe I'm stuck with something so simple.<br><br> <strong>Why I think the logical sentence above is true:</strong><br> My understanding of the implication symbol <span class="math-container">$\implies$</span> is the following: If the left part is true, then the right part must be also true. If the left part is false, then nothing is said about the right part. In the right part of this specific implication nothing is said about whether <span class="math-container">$x$</span> can be <span class="math-container">$0$</span>. Maybe <span class="math-container">$x$</span> can't be <span class="math-container">$-\pi i$</span> too, but as I see it, it doesn't really matter, as long as <span class="math-container">$x \ne 1$</span> holds. And it always holds when <span class="math-container">$x^2 \ne x$</span>, therefore the sentence is true.</p> <h3>TL;DR:</h3> <p><strong><span class="math-container">$x^2 \ne x \implies x \ne 1$</span>: Is this sentence true or false, and why?</strong></p> <p>Sorry for bothering such an amazing community with such a simple question, but I had to ask someone.</p>
<p>The short answer is: Yes, it is true, because the contrapositive just expresses the fact that $1^2=1$.</p> <p>But in controversial discussions of these issues, it is often (but not always) a good idea to try out non-mathematical examples:</p> <hr> <p>"If a nuclear bomb drops on the school building, you die."</p> <p>"Hey, but you die, too."</p> <p>"That doesn't help you much, though, so it is still true that you die." </p> <hr> <p>"Oh no, if the supermarket is not open, I cannot buy chocolate chip cookies."</p> <p>"You cannot buy milk and bread, either!"</p> <p>"Yes, but I prefer to concentrate on the major consequences."</p> <hr> <p>"If you sign this contract, you get a free pen."</p> <p>"Hey, you didn't tell me that you get all my money."</p> <p>"You didn't ask."</p> <hr> <p>Non-mathematical examples also explain the psychology behind your teacher's and classmates' thinking. In real life, the choice of consequences is usually a loaded message and can amount to a lie by omission. So, there is this lingering suspicion that the original statement suppresses information on 0 on purpose. </p> <p>I suggest that you learn about some nonintuitive probability results and make bets with your teacher.</p>
<p>First, some general remarks about logical implications/conditional statements. </p> <ol> <li><p>As you know, $P \rightarrow Q$ is true when $P$ is false, or when $Q$ is true. </p></li> <li><p>As mentioned in the comments, the <em>contrapositive</em> of the implication $P \rightarrow Q$, written $\lnot Q \rightarrow \lnot P$, is logically equivalent to the implication. </p></li> <li><p>It is possible to write implications with merely the "or" operator. Namely, $P \rightarrow Q$ is equivalent to $\lnot P\text{ or }Q$, or in symbols, $\lnot P\lor Q$.</p></li> </ol> <p>Now we can look at your specific case, using the above approaches. </p> <ol> <li>If $P$ is false, i.e., if $x^2 \neq x$ is false (so $x^2 = x$), then the statement is true, so we assume that $P$ is true. So, as a statement, $x^2 = x$ is false. Your teacher and classmates are rightly convinced that $x^2 = x$ is equivalent to ($x = 1$ or $x =0\;$), and we will use this here. If $P$ is true, then ($x=1\text{ or }x =0\;$) is false. In other words, ($x=1$) AND ($x=0\;$) are both false. I.e., ($x \neq 1$) and ($x \neq 0\;$) are true. I.e., if $P$, then $Q$. <br></li> <li>The contrapositive is $x = 1 \rightarrow x^2 = x$. True.</li> <li>We use the "sufficiency of or" to write our conditional as: $$\lnot(x^2 \neq x)\lor x \neq 1\;.$$ That is, $x^2 = x$ or $x \neq 1$, which is $$(x = 1\text{ or }x =0)\text{ or }x \neq 1,$$ which is $$(x = 1\text{ or }x \neq 1)\text{ or }x = 0\;,$$ which is $$(\text{TRUE})\text{ or }x = 0\;,$$ which is true. </li> </ol>
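Both arguments can be checked mechanically. The little Python script below verifies the truth table of the contrapositive and then brute-forces the original statement over a grid of rational sample values (a sanity check, not a proof):

```python
# Truth-table check: P -> Q is (not P) or Q, and it is equivalent
# to its contrapositive (not Q) -> (not P), i.e. Q or (not P).
for P in (False, True):
    for Q in (False, True):
        implication    = (not P) or Q
        contrapositive = (not (not Q)) or (not P)
        assert implication == contrapositive

# Brute-force the statement itself, x^2 != x  =>  x != 1, over rationals:
from fractions import Fraction
samples = [Fraction(n, d) for n in range(-10, 11) for d in range(1, 6)]
assert all((not (x * x != x)) or (x != 1) for x in samples)   # always true
```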
logic
<p>I am comfortable with the different sizes of infinities and Cantor's "diagonal argument" to prove that the set of all subsets of an infinite set has cardinality strictly greater than the set itself. So if we have a set $\Omega$ and $|\Omega| = \aleph_i$, then (assuming the generalized continuum hypothesis) the cardinality of $2^{\Omega}$ is $|2^{\Omega}| = \aleph_{i+1} &gt; \aleph_i$ and we have $\aleph_{i+1} = 2^{\aleph_i}$.</p> <p>I am fine with these arguments.</p> <p>What I don't understand is why should there be a smallest $\infty$? Is there a proof (or) an axiom that there exists a smallest $\infty$ and that this is what we address as "countably infinite"? or to rephrase the question "why can't I find a set whose power set gives us $\mathbb{N}$"?</p> <p>The reason why I am asking this question is in "some sense" $\aleph_i = \log_2(\aleph_{i+1})$. I do not completely understand why this process should stop when $\aleph_i = \aleph_0$.</p> <p>(Though coming to think about it I can feel that if I take an infinite set with $\log_2 \aleph_0$ elements I can still put it in one-to-one correspondence with the Natural number set. So is $\log_2 \aleph_0 = \aleph_0$ (or) am I just confused? If $n \rightarrow \infty$, then $2^n \rightarrow \infty$ faster while $\log (n) \rightarrow \infty$ slower and $\log (\log (n)) \rightarrow \infty$ even slower and $\log(\log(\log (n))) \rightarrow \infty$ even "more" slower and so on).</p> <p>Is there a clean (and relatively elementary) way to explain this to me?</p> <p>(I am totally fine if you direct me to some paper (or) webpage. I tried googling but could not find an answer to my question)</p>
<p>First, let me clear up a misunderstanding.</p> <p>Question: Does $2^\omega = \aleph_1$? More generally, does $2^{\aleph_\alpha} = \aleph_{\alpha+1}$?</p> <p>The answer of "yes" to the first question is known as the continuum hypothesis (CH), while an answer of "yes" to the second is known as the generalized continuum hypothesis (GCH).</p> <p>Answer: Both are undecidable using ZFC. That is, Gödel proved that if you assume the answer to CH is "yes", then you don't add any new contradictions to set theory. In particular, this means it's impossible to prove the answer is "no".</p> <p>Later, Cohen showed that if you assume the answer is "no", then you don't add any new contradictions to set theory. In particular, this means it's impossible to prove the answer is "yes".</p> <p>The answer for GCH is the same.</p> <p>All of this is just to say that while you are allowed to assume an answer of "yes" to GCH (which is what you did in your post), there is no way you can prove that you are correct.</p> <p>With that out of the way, let me address your actual question.</p> <p>Yes, there is a proof that $\omega$ is the smallest infinite cardinality. It all goes back to some very precise definitions. In short, when one does set theory, all one really has to work with is the "is a member of" relation $\in$. One defines an "ordinal number" to be any transitive set $X$ such that $(X,\in)$ is a well ordered set. (Here, "transitive" means "every element is a subset". It's a weird condition which basically means that $\in$ is a transitive relation). For example, if $X = \emptyset$ or $X=\{\emptyset\}$ or $X = \{\{\emptyset\},\emptyset\}$, then $X$ is an ordinal. However, if $X=\{\{\emptyset\}\}$, then $X$ is not.</p> <p>There are two important facts about ordinals. First, <strong>every</strong> well ordered set is (order) isomorphic to a unique ordinal. 
Second, for any two ordinals $\alpha$ and $\beta$, precisely one of the following holds: $\alpha\in\beta$ or $\beta\in\alpha$ or $\beta = \alpha$. In fact, it turns out that the collection of ordinals is well ordered by $\in$, modulo the detail that there is no set of ordinals.</p> <p>Now, cardinal numbers are simply special kinds of ordinal numbers. They are ordinal numbers which can't be bijected (not necessarily in an order-preserving way) with any smaller ordinal. It follows from this that the collection of all cardinal numbers is also well ordered. Hence, as long as there is one cardinal with a given property, there will be a least one. One example of such a property is "is infinite".</p> <p>Finally, let me just point out that for <strong>finite</strong> numbers (i.e. finite natural numbers), one usually cannot find a solution $m$ to $m = \log_2(n)$. Thus, at least from an inductive reasoning point of view, it's not surprising that there are infinite cardinals which can't be written as $2^k$ for some cardinal $k$.</p>
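The closing remark has a one-line finite illustration: among the naturals up to $64$, only the powers of two admit an exact solution $m$ to $2^m = n$ (checked here with the usual bit trick $n \mathbin{\&} (n-1) = 0$):

```python
# Among finite naturals, an exact solution m to 2**m == n is rare:
# only the powers of two (n & (n - 1) == 0, for n >= 1) admit one.
powers_of_two = [n for n in range(1, 65) if n & (n - 1) == 0]
print(powers_of_two)   # [1, 2, 4, 8, 16, 32, 64]
```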
<p>Suppose $A$ is an infinite set. In particular, it is not empty, so there exists an $x_1\in A$. Now $A$ is infinite, so it is not $\{x_1\}$, so there exists an $x_2\in A$ such that $x_2\neq x_1$. Now $A$ is infinite, so it is not $\{x_1,x_2\}$, so there exists an $x_3\in A$ such that $x_3\neq x_1$ and $x_3\neq x_2$. Now $A$ is infinite, so it is not $\{x_1,x_2,x_3\}$, so there exists an $x_4\in A$ such that $x_4\neq x_1$, $x_4\neq x_2$ and $x_4\neq x_3$... And so on. (Formally, making these infinitely many choices at once uses the axiom of choice, or at least dependent choice.)</p> <p>This way, you can construct a sequence $(x_n)_{n\geq1}$ of elements of $A$ such that $x_i\neq x_j$ if $i\neq j$. If you define a function $f:\mathbb N\to A$ so that $f(i)=x_i$ for all $i\in\mathbb N$, then $f$ is injective.</p> <p>By definition, then, the cardinal of $A$ is greater than or equal to that of $\mathbb N$.</p> <p>Since $\mathbb N$ is itself infinite, this shows that the smallest infinite cardinal is $\aleph_0=|\mathbb N|$.</p>
linear-algebra
<p>Is there any intuition for why rotation matrices are not commutative? I assume the final rotation is the combination of all the rotations, so why does it matter in which order the rotations are applied?</p>
<p>Here is a picture of a die:</p> <p><a href="https://i.sstatic.net/Ij8xC.jpg"><img src="https://i.sstatic.net/Ij8xC.jpg" alt="enter image description here"></a></p> <p>Now let's spin it $90^\circ$ clockwise. The die now shows</p> <p><a href="https://i.sstatic.net/YNRK3.jpg"><img src="https://i.sstatic.net/YNRK3.jpg" alt="enter image description here"></a></p> <p>After that, if we flip the left face up, the die lands at</p> <p><a href="https://i.sstatic.net/JJRKw.jpg"><img src="https://i.sstatic.net/JJRKw.jpg" alt="enter image description here"></a></p> <hr> <p>Now, let's do it the other way around: We start with the die in the same position:</p> <p><a href="https://i.sstatic.net/Ij8xC.jpg"><img src="https://i.sstatic.net/Ij8xC.jpg" alt="enter image description here"></a></p> <p>Flip the left face up:</p> <p><a href="https://i.sstatic.net/HWIwe.jpg"><img src="https://i.sstatic.net/HWIwe.jpg" alt="enter image description here"></a></p> <p>and then $90^\circ$ clockwise</p> <p><a href="https://i.sstatic.net/pnofv.jpg"><img src="https://i.sstatic.net/pnofv.jpg" alt="enter image description here"></a></p> <p>If we do it one way, we end up with $3$ on the top and $5, 6$ facing us, while if we do it the other way we end up with $2$ on the top and $1, 3$ facing us. This demonstrates that the two rotations do not commute.</p> <hr> <p>Since so many in the comments have come to the conclusion that this is not a complete answer, here are a few more thoughts:</p> <ul> <li>Note what happens to the top number of the die: In the first case we change what number is on the left face, then flip the new left face to the top. In the second case we first flip the old left face to the top, and <em>then</em> change what is on the left face. This makes two different numbers face up.</li> <li>As leftaroundabout said in a comment to the question itself, rotations not commuting is not really anything noteworthy. 
The fact that they <em>do</em> commute in two dimensions <em>is</em> notable, but asking why they do not commute in general is not very fruitful apart from a concrete demonstration.</li> </ul>
<p>Matrices commute if they <em>preserve each others' eigenspaces</em>: there is a set of eigenvectors that, taken together, describe all the eigenspaces of both matrices, in possibly varying partitions.</p> <p>This makes intuitive sense: this constraint means that a vector in one matrix's eigenspace won't leave that eigenspace when the other is applied, and so the original matrix's transformation still works fine on it. </p> <p>In two dimensions, no matter what, the eigenvectors of a rotation matrix are $[i,1]$ and $[-i,1]$. So since all such matrices have the same eigenvectors, they will commute.</p> <p>But in <em>three</em> dimensions, there's always one real eigenvalue for a real matrix such as a rotation matrix, so that eigenvalue has a real eigenvector associated with it: the axis of rotation. But this eigenvector doesn't share values with the rest of the eigenvectors for the rotation matrix (because the other two are necessarily complex)! So the axis is an eigenspace of dimension 1, so <strong>rotations with different axes can't possibly share eigenvectors</strong>, so they cannot commute.</p>
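This is easy to confirm numerically. The plain-Python sketch below (the helper names are mine) multiplies explicit rotation matrices both ways: two planar rotations commute to within rounding error, while rotations about the $x$- and $z$-axes do not.

```python
import math

# Numeric check: 2D rotations always commute; 3D rotations about
# different axes generally do not.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rot2(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s], [s, c]]

def rot_x(t):                       # 3D rotation about the x-axis
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(t):                       # 3D rotation about the z-axis
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def close(A, B):
    return all(math.isclose(a, b, abs_tol=1e-12)
               for ra, rb in zip(A, B) for a, b in zip(ra, rb))

a, b = 0.3, 1.1
assert close(matmul(rot2(a), rot2(b)), matmul(rot2(b), rot2(a)))          # commute
assert not close(matmul(rot_x(a), rot_z(b)), matmul(rot_z(b), rot_x(a)))  # don't
```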
geometry
<blockquote> <p>To which degree must I rotate a parabola for it to be no longer the graph of a function?</p> </blockquote> <p>I have no problem with narrowing the question down by only concerning the standard parabola: <span class="math-container">$$f(x)=x^2.$$</span></p> <p>I am looking for a specific angle measure. One such measure must exist as the reflection of <span class="math-container">$f$</span> over the line <span class="math-container">$y=x$</span> is certainly no longer well-defined. I realize that preferentially I should ask the question on this site with a bit of work put into it but, alas, I have no intuition for where to start. I suppose I know immediately that it must be less than <span class="math-container">$45^\circ$</span> as such a rotation will cross the y-axis at <span class="math-container">$(0,0)$</span> and <span class="math-container">$(0,\sqrt{2})$</span>.</p> <p>Any insight on how to proceed?</p>
<p>Rotating the parabola even by the smallest angle will cause it to no longer be well defined.</p> <p>Intuitively, you can prove this for yourself by considering the fact that the derivative of a parabola is unbounded. This means that the parabola becomes arbitrarily &quot;steep&quot; for large (or small) values of <span class="math-container">$x$</span>, i.e. its angle being closer and closer to <span class="math-container">$90^\circ$</span>, and rotating it by even a little will tip part of it past <span class="math-container">$90^\circ$</span>.</p> <hr /> <p>For a formal proof, first, we need to explain exactly what a rotation of a parabola is. In general, a rotation in <span class="math-container">$\mathbb R^2$</span> is multiplication with a rotation matrix, which has, for a rotation by <span class="math-container">$\phi$</span>, the form <span class="math-container">$$\begin{bmatrix}\cos\phi&amp;-\sin\phi\\\sin\phi&amp;\cos\phi\end{bmatrix}$$</span></p> <p>In other words, if we start with a parabola <span class="math-container">$P= \{(x,y)|x\in\mathbb R\land y=x^2\}$</span>, then the parabola, rotated by an angle of <span class="math-container">$\phi$</span>, is</p> <p><span class="math-container">$$\begin{align}P_\phi &amp;= \left.\left\{\begin{bmatrix}\cos\phi&amp;-\sin\phi\\\sin\phi&amp;\cos\phi\end{bmatrix}\cdot\begin{bmatrix}x\\y\end{bmatrix}\right| x\in\mathbb R, y=x^2\right\}\\ &amp;=\{(x\cos\phi - y\sin\phi, x\sin\phi + y\cos\phi)|x\in\mathbb R, y=x^2\}\\ &amp;= \{(x\cos\phi-x^2\sin\phi, x\sin\phi + x^2\cos\phi)| x\in\mathbb R\}\end{align}.$$</span></p> <hr /> <p>The question now is which values of <span class="math-container">$\phi$</span> construct a well defined parabola <span class="math-container">$P_\phi$</span>, where by &quot;well defined&quot;, we mean &quot;it is a graph of a function&quot;, i.e., for each <span class="math-container">$\overline x\in\mathbb R$</span>, there exists exactly one value <span class="math-container">$\overline y$</span> such that <span class="math-container">$(\overline x,\overline y)\in P_\phi$</span>.</p> <p>Clearly, if <span class="math-container">$\phi = 0$</span>, we have <span class="math-container">$P_0=\{(x, x^2)|x\in\mathbb R\}$</span> which is well defined, because for every <span class="math-container">$\overline x$</span>, the value <span class="math-container">$\overline y=\overline x^2$</span> is the unique value required for <span class="math-container">$(\overline x,\overline y)$</span> to be in <span class="math-container">$P_0$</span>. Also, if <span class="math-container">$\phi=\pi$</span>, then <span class="math-container">$P_\pi = \{(-x, -x^2)|x\in\mathbb R\}$</span> is also well defined because if <span class="math-container">$(\overline x,\overline y)\in P_\pi$</span>, then <span class="math-container">$\overline y=-\overline x^2$</span>.</p> <hr /> <p>Now, observe what happens if <span class="math-container">$\phi\notin\{0,\pi\}$</span>. For now, let's assume that <span class="math-container">$\phi\in(0,\frac\pi2)$</span>. In that case, <span class="math-container">$\sin\phi\neq 0$</span>, which means that the equation <span class="math-container">$$x\cos\phi-x^2\sin\phi=0$$</span> has <strong>two</strong> solutions. One solution is <span class="math-container">$x=0$</span>, the other is <span class="math-container">$x=\frac{\cos\phi}{\sin\phi} = \cot\phi$</span>.</p> <p>This means that, if we take <span class="math-container">$\overline x=0$</span>, there are two values of <span class="math-container">$x$</span> that can create a point <span class="math-container">$(\overline x, \overline y)$</span>, and we have two possible values of <span class="math-container">$\overline y$</span> as well. 
One is <span class="math-container">$\overline y_1 = 0$</span>, the other is <span class="math-container">$$\overline y_2 = x\sin\phi + x^2\cos\phi = \frac{\cos\phi}{\sin\phi} \sin\phi + \left(\frac{\cos\phi}{\sin\phi}\right)^2\cos\phi =\cos\phi + \frac{\cos^3\phi}{\sin^2\phi}$$</span></p> <p>and, because <span class="math-container">$\phi\in(0,\frac\pi2)$</span>, we know that <span class="math-container">$\overline y_2&gt;0$</span>, which means <span class="math-container">$\overline y_2\neq \overline y_1$</span>, and therefore, <span class="math-container">$P_\phi$</span> is not a graph of a function.</p> <hr /> <p>Note that the cases where <span class="math-container">$\phi$</span> lies in one of the other three quadrants can be handled similarly to the one above, or you can use symmetry to reduce all three of them to the case already solved above.</p>
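<p>As a sanity check on the computation above, here is a short Python sketch (my own illustration, not part of the original answer): rotate the two parabola points with parameters <span class="math-container">$x=0$</span> and <span class="math-container">$x=\cot\phi$</span> and confirm they land on the same vertical line at different heights.</p>

```python
import math

def rotate(point, phi):
    """Rotate a point about the origin by angle phi (radians)."""
    x, y = point
    return (x * math.cos(phi) - y * math.sin(phi),
            x * math.sin(phi) + y * math.cos(phi))

phi = 0.1  # any angle in (0, pi/2) works

# The two parameter values identified in the proof: x = 0 and x = cot(phi).
c = 1 / math.tan(phi)
p1 = rotate((0.0, 0.0), phi)
p2 = rotate((c, c * c), phi)

# Both rotated points share the first coordinate (0) but have different
# second coordinates, so the rotated parabola fails the vertical line test.
print(p1)
print(p2)  # x-coordinate ~0, y-coordinate strictly positive
```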
<p>You write (emphasis added):</p> <blockquote> <p>I am looking for a specific angle measure. <strong>One</strong> such measure must exist as the reflection of <span class="math-container">$f$</span> over the line <span class="math-container">$y=x$</span> is certainly no longer well-defined.</p> </blockquote> <p>It sounds like you are looking for <span class="math-container">$\theta_\text{min}$</span>, where a rotation through an angle <span class="math-container">$\theta \geq \theta_\text{min}$</span> produces a graph that is no longer the graph of a function, but a rotation through <span class="math-container">$\theta &lt; \theta_\text{min}$</span> still produces the graph of a function.</p> <p>But, in this case, the equality needs to be on the other side: a rotation through <span class="math-container">$\theta &gt; \theta_\text{min}$</span> produces a graph that is no longer the graph of a function, but a rotation through <span class="math-container">$\theta \leq \theta_\text{min}$</span> still produces the graph of a function.</p> <p>In fact, as we will show, <span class="math-container">$\theta_\text{min} = 0$</span>: <em>any</em> rotation (except for a half-revolution) produces a graph that is no longer the graph of a function.</p> <p>You then write:</p> <blockquote> <p>I suppose I know immediately that it must be less than <span class="math-container">$45^\circ$</span> as such a rotation will cross the y-axis at <span class="math-container">$(0,0)$</span> and <span class="math-container">$(0,1)$</span>.</p> </blockquote> <p>Actually, it crosses the <span class="math-container">$y$</span>-axis at <span class="math-container">$(0,0)$</span> and <span class="math-container">$(0,\sqrt{2})$</span>. But the important point is that it crosses a vertical line at multiple points.</p> <p>So now we want to rotate the parabola through some other angle <span class="math-container">$\theta$</span> and check whether it crosses a vertical line at multiple points. 
<a href="https://math.stackexchange.com/a/4492567/843797">5xum’s answer</a> shows how to do this, and that is great. But for all the talk about it being “elementary”, it requires transformation matrices and trigonometry. A <em>truly</em> elementary solution would require no knowledge beyond lines, parabolas and basic algebra: such a solution is presented below.</p> <p>Instead of rotating the parabola, then checking if it crosses a vertical line at multiple points, we can rotate the axes, then check if the parabola crosses a line that is “vertical” relative to the rotated axes at multiple points. This is the idea behind <a href="https://math.stackexchange.com/a/4492951/843797">Taladris’s answer</a>.</p> <p>As stated in that answer, it always does (again, excluding half-revolutions). But how can we prove this?</p> <p>This “rotated vertical” line could (relative to the original axes) be anything other than a vertical line. That is, it could be a line with gradient <span class="math-container">$m$</span>, for any value of <span class="math-container">$m$</span>. Now, for any such value, we can find a corresponding line that crosses the parabola at multiple points. To see this, note that for <span class="math-container">$m = 0$</span>, we can simply use any line above the origin, and for any other value of <span class="math-container">$m$</span>, we can use the line through the origin, which crosses the parabola there and at a second point given by:</p> <p><span class="math-container">\begin{align} mx &amp;= x^2\\ x &amp;= m\\ y &amp;= m^2. \end{align}</span></p>
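<p>To make the claim concrete, here is a small Python check (mine, not part of either answer): for each slope <span class="math-container">$m$</span>, the line chosen above really does meet <span class="math-container">$y=x^2$</span> in two distinct points.</p>

```python
def intersections(m):
    """Two points where the chosen line of gradient m meets y = x^2:
    the horizontal line y = 1 when m = 0, and y = m*x otherwise."""
    if m == 0:
        return [(-1.0, 1.0), (1.0, 1.0)]
    # mx = x^2  =>  x = 0 or x = m
    return [(0.0, 0.0), (float(m), float(m) ** 2)]

def on_parabola(p):
    x, y = p
    return abs(y - x * x) < 1e-9

def on_line(p, m):
    x, y = p
    return abs(y - 1.0) < 1e-9 if m == 0 else abs(y - m * x) < 1e-9

for m in [0, 1, -2, 0.5, 100]:
    pts = intersections(m)
    assert len(set(pts)) == 2                                  # two distinct points
    assert all(on_parabola(p) and on_line(p, m) for p in pts)  # on both curves
```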
logic
<p>I am new to discrete mathematics, and I am trying to simplify this statement. I'm using a chart of logical equivalences, but I can't seem to find anything that really helps reduce this. </p> <p><a href="https://i.sstatic.net/Hvpyg.png" rel="noreferrer"><img src="https://i.sstatic.net/Hvpyg.png" alt=""></a></p> <p>Which of these would help me to solve this problem? I realize I can convert $p \to q$ into $\lnot p \lor q$, but I'm stuck after that step. Any push in the right direction would be awesome.</p>
<p>Is it a tautology? Yes</p> <p>Why? Just draw the truth table and see that the truth value of the main implication is always 'true'. Here, I say '0' is 'false' and '1' is 'true': \begin{array}{|c|c|c|c|c|} \hline p &amp; q &amp; p\to q &amp; p\land(p\to q) &amp; (p\land(p\to q))\to q \\ \hline 0 &amp; 0&amp; 1&amp;0&amp;1\\ \hline 0 &amp; 1&amp; 1&amp;0&amp;1\\ \hline 1 &amp; 0&amp; 0&amp;0&amp;1\\ \hline 1 &amp; 1&amp; 1&amp; 1&amp; 1\\ \hline \end{array}</p> <p>Edit: you can also say \begin{align}p \land (p \to q) &amp;\equiv p \land (\lnot p \lor q)\\ &amp;\equiv (p \land \lnot p) \lor (p \land q )\\ &amp;\equiv \text {false} \lor (p \land q)\\&amp;\equiv p \land q, \end{align}</p> <p>which implies $q $.</p> <p>Edit 2: this inference method is called <em>modus ponens</em>, and it is the simplest one.</p> <p>For instance, say $p = $ it rains, $q = $ I get wet. </p> <p>So, if we know that the implication <em>if it rains, then I get wet</em> is true (that is, $p\to q $) and <em>it rains</em> ($p $), what can we infer? It is obvious that <em>I get wet</em> ($q $).</p> <p>Note that even on rows where $p \land (p\to q) $ is false, the main implication $(p\land (p\to q)) \to q$ is still true.</p>
<p>One approach is just to start writing a truth table - after all, there are only four cases. And this can be reduced: If $q$ is true, the statement is true (this is two cases). If $p$ is false, the statement is true because</p> <p>$$p \wedge (p \to q)$$</p> <p>is false. The only case that's left is when $p$ is true and $q$ is false, and I'll leave it to you to verify that the proposition is true in this case too.</p>
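<p>Both answers amount to checking all four rows, and that check is easy to mechanize. A quick Python sketch (mine, not from the answers):</p>

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

for p, q in product([False, True], repeat=2):
    # (p and (p -> q)) -> q  must come out true on every row ...
    assert implies(p and implies(p, q), q)
    # ... and, as the first answer derives, p and (p -> q) simplifies to p and q
    assert (p and implies(p, q)) == (p and q)

print("tautology confirmed on all four rows")
```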
probability
<p>A mathematician and a computer are playing a game: First, the mathematician chooses an integer from the range $2,...,1000$. Then, the computer chooses an integer <em>uniformly at random</em> from the same range. If the numbers chosen share a prime factor, the larger number wins. If they do not, the smaller number wins. (If the two numbers are the same, the game is a draw.)</p> <p>Which number should the mathematician choose in order to maximize his chances of winning?</p>
<p>For fixed range:</p> <pre><code>range = 16; a = Table[Table[FactorInteger[y][[n, 1]], {n, 1, PrimeNu[y]}], {y, 1, range}]; b = Table[Sort@DeleteDuplicates@ Flatten@Table[ Table[Position[a, a[[y, m]]][[n, 1]], {n, 1, Length@Position[a, a[[y, m]]]}], {m, 1, PrimeNu[y]}], {y, 1, range}]; c = Table[Complement[Range[range], b[[n]]], {n, 1, range}]; d = Table[Range[n, range], {n, 1, range}]; e = Table[Range[1, n], {n, 1, range}]; w = Table[DeleteCases[DeleteCases[Join[Intersection[c[[n]], e[[n]]], Intersection[b[[n]], d[[n]]]], 1], n], {n, 1, range}]; l = Table[DeleteCases[DeleteCases[Complement[Range[range], w[[n]]], 1], n], {n, 1, range}]; results = Table[Length@l[[n]], {n, 1, range}]; cf = Grid[{{Join[{"n"}, Rest@(r = Range[range])] // ColumnForm, Join[{"win against n"}, Rest@w] // ColumnForm, Join[{"lose against n"}, Rest@l] // ColumnForm, Join[{"probability win for n"}, (p = Drop[Table[ results[[n]]/Total@Drop[results, 1] // N,{n, 1, range}], 1])] // ColumnForm}}] Flatten[Position[p, Max@p] + 1] </code></pre> <p>isn't great code, but fun to play with for small ranges, gives</p> <p><img src="https://i.sstatic.net/6oqyM.png" alt="enter image description here"> <img src="https://i.sstatic.net/hxDGU.png" alt="enter image description here"></p> <p>and perhaps more illuminating</p> <pre><code>rr = 20; Grid[{{Join[{"range"}, Rest@(r = Range[rr])] // ColumnForm, Join[{"best n"}, (t = Rest@Table[ a = Table[Table[FactorInteger[y][[n, 1]], {n, 1, PrimeNu[y]}], {y, 1, range}]; b = Table[Sort@DeleteDuplicates@Flatten@Table[Table[ Position[a, a[[y, m]]][[n, 1]], {n, 1,Length@Position[a, a[[y, m]]]}], {m, 1,PrimeNu[y]}], {y, 1, range}]; c = Table[Complement[Range[range], b[[n]]], {n, 1, range}]; d = Table[Range[n, range], {n, 1, range}]; e = Table[Range[1, n], {n, 1, range}]; w = Table[DeleteCases[DeleteCases[Join[Intersection[c[[n]], e[[n]]], Intersection[b[[n]], d[[n]]]], 1], n], {n, 1, range}]; l = Table[DeleteCases[DeleteCases[Complement[Range[range], w[[n]]], 1], n], {n,1, 
range}]; results = Table[Length@l[[n]], {n, 1, range}]; p = Drop[Table[results[[n]]/Total@Drop[results, 1] // N, {n, 1, range}], 1]; {Flatten[Position[p, Max@p] + 1], Max@p}, {range, 1, rr}]/.Indeterminate-&gt; draw); Table[t[[n, 1]], {n, 1, rr - 1}]] // ColumnForm, Join[{"probability for win"}, Table[t[[n, 2]], {n, 1, rr - 1}]] // ColumnForm}}] </code></pre> <p>compares ranges:</p> <p><img src="https://i.sstatic.net/P2aA8.png" alt="enter image description here"></p> <p>Plotting mean "best $n$" against $\sqrt{\text{range}}$ gives</p> <p><img src="https://i.sstatic.net/LMkAz.png" alt="enter image description here"></p> <p>For range=$1000,$ "best $n$" are $29$ and $31$, which can be seen as maxima in this plot:</p> <p><img src="https://i.sstatic.net/06gXJ.png" alt="enter image description here"></p> <h1>Update</h1> <p>In light of DanielV's comment that a "primes vs winchance" graph would probably be enlightening, I did a little bit of digging, and it turns out that it is. Looking at the "winchance" (just a weighting for $n$) of the primes in the range only, it is possible to give a fairly accurate prediction using</p> <pre><code>range = 1000; a = Table[Table[FactorInteger[y][[n, 1]], {n, 1, PrimeNu[y]}], {y, 1, range}]; b = Table[Sort@DeleteDuplicates@Flatten@Table[ Table[Position[a, a[[y, m]]][[n, 1]], {n, 1, Length@Position[a, a[[y, m]]]}], {m, 1, PrimeNu[y]}], {y, 1, range}]; c = Table[Complement[Range[range], b[[n]]], {n, 1, range}]; d = Table[Range[n, range], {n, 1, range}]; e = Table[Range[1, n], {n, 1, range}]; w = Table[ DeleteCases[ DeleteCases[ Join[Intersection[c[[n]], e[[n]]], Intersection[b[[n]], d[[n]]]], 1], n], {n, 1, range}]; l = Table[ DeleteCases[DeleteCases[Complement[Range[range], w[[n]]], 1], n], {n, 1, range}]; results = Table[Length@l[[n]], {n, 1, range}]; p = Drop[Table[ results[[n]]/Total@Drop[results, 1] // N, {n, 1, range}], 1]; {Flatten[Position[p, Max@p] + 1], Max@p}; qq = Prime[Range[PrimePi[2], PrimePi[range]]] - 1; 
Show[ListLinePlot[Table[p[[t]] range, {t, qq}], DataRange -&gt; {1, Length@qq}], ListLinePlot[ Table[2 - 2/Prime[x] - 2/range (-E + Prime[x]), {x, 1, Length@qq + 0}], PlotStyle -&gt; Red], PlotRange -&gt; All] </code></pre> <p><img src="https://i.sstatic.net/BwAmp.png" alt="enter image description here"></p> <p>The plot above (there are $2$ plots here) show the values of "winchance" for primes against a plot of $$2+\frac{2 (e-p_n)}{\text{range}}-\frac{2}{p_n}$$</p> <p>where $p_n$ is the $n$th prime, and "winchance" is the number of possible wins for $n$ divided by total number of possible wins in range ie $$\dfrac{\text{range}}{2}\left(\text{range}-1\right)$$ eg $499500$ for range $1000$.</p> <p><img src="https://i.sstatic.net/zFegO.png" alt="enter image description here"></p> <pre><code>Show[p // ListLinePlot, ListPlot[N[ Transpose@{Prime[Range[PrimePi[2] PrimePi[range]]], Table[(2 + (2*(E - Prime[x]))/range - 2/Prime[x])/range, {x, 1, Length@qq}]}], PlotStyle -&gt; {Thick, Red, PointSize[Medium]}, DataRange -&gt; {1, range}]] </code></pre> <h1>Added</h1> <p>Bit of fun with game simulation:</p> <pre><code>games = 100; range = 30; table = Prime[Range[PrimePi[range]]]; choice = Nearest[table, Round[Sqrt[range]]][[1]]; y = RandomChoice[Range[2, range], games]; z = Table[ Table[FactorInteger[y[[m]]][[n, 1]], {n, 1, PrimeNu[y[[m]]]}], {m, 1, games}]; Count[Table[If[Count[z, choice] == 0 &amp;&amp; y[[m]] &lt; choice \[Or] Count[z, choice] &gt; 0 &amp;&amp; y[[m]] &lt; choice, "lose", "win"], {m, 1, games}], "win"] </code></pre> <p>&amp; simulated wins against computer over variety of ranges </p> <p><img src="https://i.sstatic.net/82BQG.png" alt="enter image description here"></p> <p>with</p> <pre><code>Clear[range] highestRange = 1000; ListLinePlot[Table[games = 100; table = Prime[Range[PrimePi[range]]]; choice = Nearest[table, Round[Sqrt[range]]][[1]]; y = RandomChoice[Range[2, range], games]; z = Table[Table[FactorInteger[y[[m]]][[n, 1]], {n, 1, PrimeNu[y[[m]]]}], 
{m, 1, games}]; Count[Table[ If[Count[z, choice] == 0 &amp;&amp; y[[m]] &lt; choice \[Or] Count[z, choice] &gt; 0 &amp;&amp; y[[m]] &lt; choice, "lose", "win"], {m, 1, games}], "win"], {range,2, highestRange}], Filling -&gt; Axis, PlotRange-&gt; All] </code></pre> <h1>Added 2</h1> <p>Plot of mean "best $n$" up to range$=1000$ with tentative conjectured error bound of $\pm\dfrac{\sqrt{\text{range}}}{\log(\text{range})}$ for range$&gt;30$.</p> <p><img src="https://i.sstatic.net/gjQ7E.png" alt="enter image description here"></p> <p>I could well be wrong here though. - In fact, on reflection, I think I am (<a href="https://math.stackexchange.com/questions/865820/very-tight-prime-bounds">related</a>).</p>
<p>First consider choosing a prime $p$ in the range $[2,N]$. You lose only if the computer chooses a multiple of $p$ or a number smaller than $p$, which occurs with probability $$ \frac{(\lfloor{N/p}\rfloor-1)+(p-2)}{N-1}=\frac{\lfloor{p+N/p}\rfloor-3}{N-1}. $$ The term inside the floor function has derivative $$ 1-\frac{N}{p^2}, $$ so it increases for $p\le \sqrt{N}$ and decreases thereafter. The floor function does not change this behavior. So the best prime to choose is always one of the two closest primes to $\sqrt{N}$ (the one on its left and one its right, unless $N$ is the square of a prime). Your chance of losing with this strategy will be $\sim 2/\sqrt{N}$.</p> <p>On the other hand, consider choosing a composite $q$ whose prime factors are $$p_1 \le p_2 \le \ldots \le p_k.$$ Then the computer certainly wins if it chooses a prime number less than $q$ (other than any of the $p$'s); there are about $q / \log q$ of these by the prime number theorem. It also wins if it chooses a multiple of $p_1$ larger than $q$; there are about $(N-q)/p_1$ of these. Since $p_1 \le \sqrt{q}$ (because $q$ is composite), the computer's chance of winning here is at least about $$ \frac{q}{N\log q}+\frac{N-q}{N\sqrt{q}}. $$ The first term increases with $q$, and the second term decreases. The second term is larger than $(1/3)/\sqrt{N}$ until $q \ge (19-\sqrt{37})N/18 \approx 0.72 N$, at which point the first is already $0.72 / \log{N}$, which is itself larger than $(5/3)/\sqrt{N}$ as long as $N &gt; 124$. So the sum of these terms will always be larger than $2/\sqrt{N}$ for $N &gt; 124$ or so, meaning that the computer has a better chance of winning than if you'd chosen the best prime.</p> <p>This rough calculation shows that choosing the prime closest to $\sqrt{N}$ is the best strategy for sufficiently large $N$, where "sufficiently large" means larger than about $100$. 
(Other answers have listed the exceptions, the largest of which appears to be $N=30$, consistent with this calculation.)</p>
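<p>The whole game is also easy to brute-force. The following Python sketch is my own (the helper names are mine); it encodes exactly the rules from the question and, for the small range <span class="math-container">$2,\dots,10$</span>, finds the maximizers by exhaustive search.</p>

```python
from math import gcd

def wins(m, n):
    """Number of computer picks in 2..n that the mathematician's choice m beats."""
    total = 0
    for c in range(2, n + 1):
        if c == m:
            continue          # identical picks are a draw
        if gcd(m, c) > 1:
            total += m > c    # shared prime factor: larger number wins
        else:
            total += m < c    # coprime: smaller number wins
    return total

def best_choices(n):
    """All choices in 2..n with the maximal number of winning opponents."""
    scores = {m: wins(m, n) for m in range(2, n + 1)}
    top = max(scores.values())
    return sorted(m for m, s in scores.items() if s == top)

print(best_choices(10))    # [3, 10] -- checkable by hand for this tiny range
print(best_choices(1000))  # the case asked about
```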
differentiation
<p><strong>Here's how I see it</strong> (please read the following if you can, because I address a lot of arguments people have already made):</p> <p>Let's take instantaneous speed, for example. If it's truly instantaneous, then there is no change in $x$ (time), since there's no time <em>interval</em>. </p> <p>Thus, in $\frac{f(x+h) - f(x)}{h}$, $h$ should actually be zero (not arbitrarily close to zero, since that would still be an interval) and therefore instantaneous speed is undefined.</p> <p>If "instantaneous" is just a figure of speech for "very very very small", then I have two problems with it:</p> <p>Firstly, well it's not instantaneous at all in the sense of "at a single moment".</p> <p>Secondly, how is "very very very small" conceptually different from "small"? What's really the difference between considering $1$ second and $10^{-200}$ of a second?</p> <p>I've heard some people talk about "infinitely small" quantities. This doesn't make any sense to me. In this case, what's the process by which a number goes from "not infinitely small" to "ok, now you're infinitely small"? Where's the dividing line in degree of smallness beyond which a number is infinitely small?</p> <p>I understand $\lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$ as the limit of an infinite sequence of ratios, I have no problem with that.</p> <p>But I thought the point of a limit and infinity in general, is that you never <em>get</em> there. For example, when people say "the sum of an infinite geometric series", they really mean "the limit", since you can't possibly add infinitely many terms in the arithmetic sense of the word.</p> <p>So again in this case, since you never get to the limit, $h$ is always some interval, and therefore the rate is not "instantaneous". Same problem with integrals actually; how do you add up infinitely many terms? Saying you can add up an infinity of terms implies that infinity is a fixed number.</p>
<p>In math, there's intuition and there's rigor. Saying <span class="math-container">$$f'(x)=\lim_{h\to0}\frac{f(x+h)-f(x)}h$$</span> is a rigorous statement. It's very formal. Saying &quot;the derivative is the instantaneous rate of change&quot; is intuitive. It has no formal meaning whatsoever. Many people find it helpful for informing their gut feelings about derivatives.</p> <p>(<strong>Edit</strong> I should not understate the importance of gut feelings. You'll need to trust your gut if you ever want to prove hard things.)</p> <p>That being said, there's no reason why you should find it helpful. If it's too fluffy to be useful for you, that's fine. But you'll need some intuition on what derivatives are supposed to be describing. I like to think of it as &quot;if I squinted my eyes so hard that <span class="math-container">$f$</span> became linear near some point, then <span class="math-container">$f$</span> would look like <span class="math-container">$f'$</span> near that point.&quot; Find something that works for you.</p>
<p>The idea behind $$\lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$$ is the slope of the graph $y=f(x)$.</p> <p>Now, for a moment, forget about instantaneous velocity and think about average velocity, which is $$\frac{x_f-x_i}{t_f-t_i}.$$ If you look at this carefully, it is exactly a slope, computed over a given interval of time.</p> <p>So you might ask what the difference is between instantaneous velocity and average velocity, since both of them seem to involve intervals.</p> <p>If the graph were linear, it would be easy to read off the instantaneous velocity, since it would equal the average velocity everywhere. But on a curved graph that's not the case: the particle (or object) changes its velocity at every moment of time, so no single interval gives the right answer. To remove this element of doubt, we take the limit as the interval shrinks toward the instant in question, and we name the resulting value the instantaneous velocity.</p>
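<p>To see the limit at work numerically, here is a short Python illustration (my own, not from the answers): for $f(x) = x^2$, the average rate over $[3, 3+h]$ equals $6+h$ exactly, so it approaches the instantaneous value $6$, while every positive $h$ still describes a genuine interval.</p>

```python
def f(x):
    return x * x

def average_rate(x, h):
    """Average rate of change of f over [x, x + h] -- a slope over an interval."""
    return (f(x + h) - f(x)) / h

for h in [1.0, 0.1, 0.001, 1e-6]:
    print(h, average_rate(3.0, h))  # tends to 6, the derivative f'(3)
```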
logic
<p>The way I used to understand it was that circular reasoning occurs when a proof contains its thesis within its assumptions. Then, all such a proof "proves" is that this particular statement entails itself, which is trivial since any statement entails itself.</p> <p>But I witnessed a conversation that made me think I'm not getting this at all.</p> <p>In short, Bob accused Alice of circular reasoning. But Alice responded in a way that perplexed me:</p> <blockquote> <p><em>Of course</em> my proof contains its thesis within its assumptions. Each and every proof must be based on axioms, which are assumptions that are not to be proved. Thus each set of axioms implicitly contains all theses that can be proven from this set of axioms. As we know, each theorem in mathematics and logic is little more than a tautology: so is mine.</p> </blockquote> <p>Now I'm not sure what to think. On the one hand, Alice's reasoning seems correct; I, at least, can't find any error in it. On the other hand, it entails that every valid proof must be circular, which is absurd.</p> <p>What is a circular proof? And what is wrong with the reasoning above?</p>
<blockquote> <p>Of course my proof contains its thesis within its assumptions. Each and every proof must be based on axioms, which are assumptions that are not to be proved. </p> </blockquote> <p>Hold it right there, Alice. These <em>specific</em> axioms are to be accepted without proof, but nothing else is. For anything that is true that is <em>not</em> one of these axioms, the role of proof must be to demonstrate that such a truth <em>can</em> be derived from these axioms and <em>how</em> it would be so derived.</p> <blockquote> <p>Thus each set of axioms implicitly contains all theses that can be proven from this set of axioms.</p> </blockquote> <p>Implicitly, yes. But the role of a proof is to make the implicit explicit. I can claim that Fermat's last theorem is true. That is a true statement. But merely <em>claiming</em> it is not the same as a proof. I can claim the axioms of mathematics imply Fermat's last theorem, and that would be true. But that's still not a proof. To prove it, I must <em>demonstrate</em> <strong><em>how</em></strong> the axioms imply it. And in doing so I cannot base any step of my demonstration upon the knowledge that I know the theorem to be true.</p> <blockquote> <p>As we know, each theorem in mathematics and logic is little more than a tautology:</p> </blockquote> <p>That's not actually what a tautology is. But I'll assume you mean a true statement.</p> <blockquote> <p>so is mine.</p> </blockquote> <p>No one cares if your statement is true. We care if you can demonstrate <em>how</em> it is true. You did not do that.</p>
<p>All reasoning (whether formal or informal, mathematical, scientific, every-day-life, etc.) needs to satisfy two basic criteria in order to be considered good (sound) reasoning:</p> <ol> <li><p>The steps in the argument need to be logical (valid: the conclusion follows from the premises)</p></li> <li><p>The assumptions (premises) need to be acceptable (true, or at least agreed upon by the parties involved in the debate within which the argument is offered)</p></li> </ol> <p>Now, what Alice is pointing out is that in the domain of deductive reasoning (which includes mathematical reasoning), the information contained in the conclusion is already contained in the premises; in a way, the conclusion 'merely' pulls this out. Alice thus seems to be saying: "all mathematical reasoning is circular, so why attack my argument for being circular?"</p> <p>However, this is not a good defense against the charge of circular reasoning. First of all, there is a big difference between 'pulling out', say, some complicated theorem of arithmetic from the Peano Axioms on the one hand, and simply taking that very theorem as an unproven assumption on the other: </p> <p>In the former scenario, contrary to Alice's claim, we really do not say that circular reasoning is taking place: as long as the assumptions of the argument are nothing more than the agreed-upon Peano Axioms, and as long as each inference leading up to the theorem is logically valid, then such an argument satisfies the two aforementioned criteria, and is therefore perfectly acceptable.</p> <p>In the latter case, however, circular reasoning <em>is</em> taking place: if all we agreed upon were the Peano axioms, but the argument uses the conclusion (which is not part of those axioms) as an assumption, then that argument violates the second criterion. It can be said to 'beg the question', as it 'begs' the answer to the very question (is the theorem true?) we had in the first place.</p>
differentiation
<p>Say we have a function $f(x)$ that is infinitely differentiable at some point.</p> <p>Is it possible to find $f^{(n)}(x)$ without having to find first $f^{(n-1)}(x)$? If so, does it take less effort than computing preceding derivatives (i.e. $f'(x), f''(x), \cdots, f^{(n-1)}(x)$)?</p> <p>I often find it very tedious to find multiple derivatives so I was wondering if someone knows the answer to this question.</p>
<p>It really depends on the function. Some functions admit compact general expressions for arbitrary order derivatives:</p> <p>$$\begin{align*} \frac{\mathrm d^k}{\mathrm dx^k}x^n&amp;=k!\binom{n}{k}x^{n-k}\\ \frac{\mathrm d^k}{\mathrm dx^k}\sin\,x&amp;=\sin\left(x+\frac{k\pi}{2}\right)\\ \frac{\mathrm d^k}{\mathrm dx^k}\cos\,x&amp;=\cos\left(x+\frac{k\pi}{2}\right)\\ \end{align*}$$</p> <p>and as already mentioned, if your function satisfies a nice differential equation, you can use that differential equation to derive a general expression for your derivatives.</p> <p>(As a bonus, the formula for the derivative of the power function can be generalized to complex $k$, barring exceptional values of $n$ and $k$; this is the realm of the <em>fractional calculus</em>. The formulae for sine and cosine are no longer as simple in the general complex case, though.)</p> <p>In general, however, there isn't always a nice expression. <a href="http://functions.wolfram.com/ElementaryFunctions/Tan/20/02/">This page</a> for instance displays a number of representations for the derivative of the tangent function. None of them look particularly nice. Sometimes, functions won't even allow for an explicit expression for derivatives, as in the case of <a href="http://functions.wolfram.com/ElementaryFunctions/ProductLog/20/02/">the Lambert function</a>. (Note that the last formula in that page requires an auxiliary recursive definition for the polynomials that turn up in the differentiation.)</p> <p>Relatedly: formulae like the <a href="http://en.wikipedia.org/wiki/Leibniz_rule_%28generalized_product_rule%29">Leibniz rule</a> and the <a href="http://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno%27s_formula">Faà di Bruno formula</a> are helpful when determining general expressions for derivatives of more complicated functions. There are also a number of formulae listed <a href="http://functions.wolfram.com/GeneralIdentities/9/">here</a>.</p>
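<p>These closed forms are easy to spot-check. A small Python sketch (mine, not from the answer) compares $k$ repeated derivatives of $x^n$, computed on coefficient lists, with the formula $k!\binom{n}{k}x^{n-k}$:</p>

```python
from math import comb, factorial

def diff_poly(coeffs):
    """One derivative of a polynomial given as [c0, c1, c2, ...] (c_i * x^i)."""
    return [i * c for i, c in enumerate(coeffs)][1:]

def kth_derivative(coeffs, k):
    for _ in range(k):
        coeffs = diff_poly(coeffs)
    return coeffs

# d^3/dx^3 of x^5 by repeated differentiation ...
n, k = 5, 3
result = kth_derivative([0] * n + [1], k)

# ... versus the closed form k! * C(n, k) * x^(n-k) = 60 x^2
assert result == [0, 0, factorial(k) * comb(n, k)]
print(result)  # [0, 0, 60]
```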
<p>Yes, for the broad-class of functions admitting series expansions such that the coefficients satisfy a constant-coefficient difference equation (recurrence). In this case the coefficients (hence derivatives) can be computed quickly in polynomial time by converting to system form and repeatedly squaring the shift-matrix. This can be comprehended by examining the special case for Fibonacci numbers - see this <a href="https://math.stackexchange.com/a/11482/242">prior question.</a> Once you understand this 2-dimensional case it should be clear how it generalizes to higher-order constant coefficient recurrences.</p> <p>Look up <a href="http://www-math.mit.edu/~rstan/pubs/pubfiles/45.pdf" rel="nofollow noreferrer">D-finite series</a> and holonomic functions to learn more.</p>
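<p>The repeated-squaring idea above, sketched in Python for the simplest case, the Fibonacci recurrence (my own illustration, not part of the original answer): the shift matrix $\begin{bmatrix}1&amp;1\\1&amp;0\end{bmatrix}$ raised to the $n$th power yields $F(n)$ in $O(\log n)$ matrix multiplications, with no need to compute the coefficients one at a time.</p>

```python
def mat_mul(a, b):
    """Product of two 2x2 matrices."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def mat_pow(m, n):
    """Raise a 2x2 matrix to the nth power by repeated squaring."""
    result = [[1, 0], [0, 1]]  # identity
    while n:
        if n & 1:
            result = mat_mul(result, m)
        m = mat_mul(m, m)
        n >>= 1
    return result

def fib(n):
    # [[F(n+1), F(n)], [F(n), F(n-1)]] = [[1, 1], [1, 0]]^n
    return mat_pow([[1, 1], [1, 0]], n)[0][1]

print(fib(10), fib(20))  # 55 6765
```

The same scheme works for any constant-coefficient recurrence once it is written in companion-matrix (system) form.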
matrices
<p>If $A,B$ are $2 \times 2$ matrices of real or complex numbers, then</p> <p>$$AB = \left[ \begin{array}{cc} a_{11} &amp; a_{12} \\ a_{21} &amp; a_{22} \end{array} \right]\cdot \left[ \begin{array}{cc} b_{11} &amp; b_{12} \\ b_{21} &amp; b_{22} \end{array} \right] = \left[ \begin{array}{cc} a_{11}b_{11}+a_{12}b_{21} &amp; a_{11}b_{12}+a_{12}b_{22} \\ a_{21}b_{11}+a_{22}b_{21} &amp; a_{21}b_{12}+a_{22}b_{22} \end{array} \right] $$</p> <p>What if the entries $a_{ij}, b_{ij}$ are themselves $2 \times 2$ matrices? Does matrix multiplication hold in some sort of "block" form ?</p> <p>$$AB = \left[ \begin{array}{c|c} A_{11} &amp; A_{12} \\\hline A_{21} &amp; A_{22} \end{array} \right]\cdot \left[ \begin{array}{c|c} B_{11} &amp; B_{12} \\\hline B_{21} &amp; B_{22} \end{array} \right] = \left[ \begin{array}{c|c} A_{11}B_{11}+A_{12}B_{21} &amp; A_{11}B_{12}+A_{12}B_{22} \\\hline A_{21}B_{11}+A_{22}B_{21} &amp; A_{21}B_{12}+A_{22}B_{22} \end{array} \right] $$ This identity would be very useful in my research.</p>
<p>It depends on how you partition it, not all partitions work. For example, if you partition these two matrices </p> <p>$$\begin{bmatrix} a &amp; b &amp; c \\ d &amp; e &amp; f \\ g &amp; h &amp; i \end{bmatrix}, \begin{bmatrix} a' &amp; b' &amp; c' \\ d' &amp; e' &amp; f' \\ g' &amp; h' &amp; i' \end{bmatrix} $$ </p> <p>in this way </p> <p>$$ \left[\begin{array}{c|cc}a&amp;b&amp;c\\ d&amp;e&amp;f\\ \hline g&amp;h&amp;i \end{array}\right], \left[\begin{array}{c|cc}a'&amp;b'&amp;c'\\ d'&amp;e'&amp;f'\\ \hline g'&amp;h'&amp;i' \end{array}\right] $$</p> <p>and then multiply them, it won't work. But this would</p> <p>$$\left[\begin{array}{c|cc}a&amp;b&amp;c\\ \hline d&amp;e&amp;f\\ g&amp;h&amp;i \end{array}\right] ,\left[\begin{array}{c|cc}a'&amp;b'&amp;c'\\ \hline d'&amp;e'&amp;f'\\ g'&amp;h'&amp;i' \end{array}\right] $$</p> <p>What's the difference? Well, in the first case, not all of the submatrix products are defined: for instance, $\begin{bmatrix} a \\ d \\ \end{bmatrix}$ cannot be multiplied with $\begin{bmatrix} a' \\ d' \\ \end{bmatrix}$</p> <p>So, what is the general rule? 
(Taken entirely from the Wiki page on <a href="https://en.wikipedia.org/wiki/Block_matrix" rel="noreferrer">Block matrix</a>)</p> <p>Given an $(m \times p)$ matrix $\mathbf{A}$ with $q$ row partitions and $s$ column partitions $$\begin{bmatrix} \mathbf{A}_{11} &amp; \mathbf{A}_{12} &amp; \cdots &amp;\mathbf{A}_{1s}\\ \mathbf{A}_{21} &amp; \mathbf{A}_{22} &amp; \cdots &amp;\mathbf{A}_{2s}\\ \vdots &amp; \vdots &amp; \ddots &amp;\vdots \\ \mathbf{A}_{q1} &amp; \mathbf{A}_{q2} &amp; \cdots &amp;\mathbf{A}_{qs}\end{bmatrix}$$</p> <p>and a $(p \times n)$ matrix $\mathbf{B}$ with $s$ row partitions and $r$ column partitions</p> <p>$$\begin{bmatrix} \mathbf{B}_{11} &amp; \mathbf{B}_{12} &amp; \cdots &amp;\mathbf{B}_{1r}\\ \mathbf{B}_{21} &amp; \mathbf{B}_{22} &amp; \cdots &amp;\mathbf{B}_{2r}\\ \vdots &amp; \vdots &amp; \ddots &amp;\vdots \\ \mathbf{B}_{s1} &amp; \mathbf{B}_{s2} &amp; \cdots &amp;\mathbf{B}_{sr}\end{bmatrix}$$</p> <p>that are compatible with the partitions of $\mathbf{A}$, the matrix product</p> <p>$ \mathbf{C}=\mathbf{A}\mathbf{B} $</p> <p>can be formed blockwise, yielding $\mathbf{C}$ as an $(m\times n)$ matrix with $q$ row partitions and $r$ column partitions.</p>
<p>A very late answer to a popular question is always suspect, but I came here looking for what a compatible block partitioning is and did not find it:</p> <p>For <span class="math-container">$\mathbf{AB}$</span> to work by blocks, the important part is that the partition along the columns of <span class="math-container">$\mathbf A$</span> must match the partition along the rows of <span class="math-container">$\mathbf{B}$</span>. This is analogous to how, when doing <span class="math-container">$\mathbf{AB}$</span> without blocks—which is of course just a partitioning into <span class="math-container">$1\times 1$</span> blocks—the number of columns in <span class="math-container">$\mathbf A$</span> must match the number of rows in <span class="math-container">$\mathbf B$</span>.</p> <p>Example: Let <span class="math-container">$\mathbf{M}_{mn}$</span> denote any matrix of <span class="math-container">$m$</span> rows and <span class="math-container">$n$</span> columns, irrespective of contents. We know that <span class="math-container">$\mathbf{M}_{mn}\mathbf{M}_{nq}$</span> works and yields a matrix <span class="math-container">$\mathbf{M}_{mq}$</span>. Split <span class="math-container">$\mathbf A$</span> by columns into a block of size <span class="math-container">$a$</span> and a block of size <span class="math-container">$b$</span>, and do the same with <span class="math-container">$\mathbf B$</span> by rows. Then split <span class="math-container">$\mathbf A$</span> however you wish along its rows, and likewise <span class="math-container">$\mathbf B$</span> along its columns. 
Now we have <span class="math-container">$$ A = \begin{bmatrix} \mathbf{M}_{ra} &amp; \mathbf{M}_{rb} \\ \mathbf{M}_{sa} &amp; \mathbf{M}_{sb} \end{bmatrix}, B = \begin{bmatrix} \mathbf{M}_{at} &amp; \mathbf{M}_{au} \\ \mathbf{M}_{bt} &amp; \mathbf{M}_{bu} \end{bmatrix}, $$</span></p> <p>and <span class="math-container">$$ AB = \begin{bmatrix} \mathbf{M}_{ra}\mathbf{M}_{at} + \mathbf{M}_{rb}\mathbf{M}_{bt} &amp; \mathbf{M}_{ra}\mathbf{M}_{au} + \mathbf{M}_{rb}\mathbf{M}_{bu} \\ \mathbf{M}_{sa}\mathbf{M}_{at} + \mathbf{M}_{sb}\mathbf{M}_{bt} &amp; \mathbf{M}_{sa}\mathbf{M}_{au} + \mathbf{M}_{sb}\mathbf{M}_{bu} \end{bmatrix} = \begin{bmatrix} \mathbf{M}_{rt} &amp; \mathbf{M}_{ru} \\ \mathbf{M}_{st} &amp; \mathbf{M}_{su} \end{bmatrix}. $$</span></p> <p>All multiplications conform, all sums work out, and the resulting matrix is the size you'd expect. There is nothing special about splitting in two so long as you match any column split of <span class="math-container">$\mathbf A$</span> with a row split in <span class="math-container">$\mathbf B$</span> (try removing a block row from <span class="math-container">$\mathbf A$</span> or further splitting a block column of <span class="math-container">$\mathbf B$</span>).</p> <p>The nonworking example from the accepted answer is nonworking because the columns of <span class="math-container">$\mathbf A$</span> are split into <span class="math-container">$(1, 2)$</span> while the rows of <span class="math-container">$\mathbf B$</span> are split into <span class="math-container">$(2, 1)$</span>.</p>
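As a sanity check, the compatible partitioning described above can be verified numerically. This is a minimal sketch using NumPy (my addition, not part of either answer): the columns of $\mathbf A$ and the rows of $\mathbf B$ are both split as $(1, 2)$, while the row split of $\mathbf A$ and the column split of $\mathbf B$ are chosen arbitrarily.

```python
import numpy as np

# Two arbitrary 3x3 matrices (values are irrelevant to the partitioning rule).
rng = np.random.default_rng(0)
A = rng.integers(0, 10, (3, 3))
B = rng.integers(0, 10, (3, 3))

# Compatible partition: split A's COLUMNS and B's ROWS at the same index (1, 2).
# The row split of A and the column split of B can be anything; here also (1, 2).
A11, A12 = A[:1, :1], A[:1, 1:]
A21, A22 = A[1:, :1], A[1:, 1:]
B11, B12 = B[:1, :1], B[:1, 1:]
B21, B22 = B[1:, :1], B[1:, 1:]

# Blockwise product, exactly as in the 2x2 block formula above.
C = np.block([
    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
])
assert np.array_equal(C, A @ B)  # blockwise product equals the ordinary product
```

Swapping the column split of `A` to `(2, 1)` while keeping the row split of `B` at `(1, 2)` makes the block products fail to conform, which is the nonworking case from the accepted answer.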
probability
<p>In a family with two children, what are the chances, if one of the children is a girl, that both children are girls?</p> <p>I just dipped into a book, <em>The Drunkard's Walk - How Randomness Rules Our Lives</em>, by Leonard Mlodinow, Vintage Books, 2008. On p.107 Mlodinow says the chances are 1 in 3. </p> <p>It seems obvious to me that the chances are 1 in 2. Am I correct? Is this not exactly analogous to having a bowl with an infinite number of marbles, half black and half red? Without looking I draw out a black marble. The probability of the second marble I draw being black is 1/2.</p>
<p>In a family with 2 children there are four possibilities:</p> <p>1) the first child is a boy and the second child is a boy (bb)</p> <p>2) the first child is a boy and the second child is a girl (bg)</p> <p>3) the first child is a girl and the second child is a boy (gb)</p> <p>4) the first child is a girl and the second child is a girl (gg)</p> <p>Since we are given that at least one child is a girl there are three possibilities: bg, gb, or gg. Out of those three possibilities the only one with two girls is gg. Hence the probability is $\frac{1}{3}$.</p>
<p>I think this question confuses a lot of people because there's a lack of intuitive context -- I'll try to supply that.</p> <p>Suppose there is a birthday party to which all of the girls (and none of the boys) in a small town are invited. If you run into a mother who has dropped off a kid at this birthday party and who has two children, the chance that she has two girls is $1/3$. Why? $3/4$ of the mothers with two children will have a daughter at the birthday party, the ones with two girls ($1/4$ of the total mothers with two children) and the ones with one girl and one boy ($1/2$ of the total mothers with two children). Out of these $3/4$ of the mothers, $1/3$ have two girls.</p> <p>On the other hand, if the birthday party is only for fifth-grade girls, you get a different answer. Assuming there are no siblings who are both in the fifth grade, the answer in this case is $1/2$. The child in fifth grade is a girl, but the other child has probability $1/2$ of being a girl. Situations of this kind arise in real life much more commonly than situations of the other kind, so the answer of $1/3$ is quite nonintuitive.</p>
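The two conditionings described above can be contrasted in a short Monte Carlo sketch (Python; the variable names are mine). Conditioning on "at least one girl" gives roughly $1/3$, while conditioning on "a specific child is a girl" gives roughly $1/2$.

```python
import random

random.seed(1)
N = 100_000
# Each family: two independent children, each 'g' or 'b' with probability 1/2.
families = [(random.choice('gb'), random.choice('gb')) for _ in range(N)]

# Condition "at least one child is a girl" -> P(both girls) is about 1/3.
at_least_one = [f for f in families if 'g' in f]
p_both = sum(f == ('g', 'g') for f in at_least_one) / len(at_least_one)

# Condition "a SPECIFIC child (say the first) is a girl" -> about 1/2,
# like the fifth-grader case in the answer above.
first_girl = [f for f in families if f[0] == 'g']
p_both_given_first = sum(f == ('g', 'g') for f in first_girl) / len(first_girl)

print(round(p_both, 2), round(p_both_given_first, 2))
```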
linear-algebra
<p>I'm reading my linear algebra textbook and there are two sentences that make me confused.</p> <p>(1) Symmetric matrix $A$ can be factored into $A=Q\lambda Q^{T}$ where $Q$ is orthogonal matrix : Diagonalizable<br> ($Q$ has eigenvectors of $A$ in its columns, and $\lambda$ is diagonal matrix which has eigenvalues of $A$) </p> <p>(2) Any symmetric matrix has a complete set of orthonormal eigenvectors<br> whether its eigenvalues are distinct or not. </p> <p>It's a contradiction, right?<br> Diagonalizable means the matrix has n distinct eigenvectors (for $n$ by $n$ matrix).<br> If symmetric matrix can be factored into $A=Q\lambda Q^{T}$, it means that<br> symmetric matrix has n distinct eigenvalues.<br> Then why the phrase "whether its eigenvalues are distinct or not" is added in (2)?</p> <p>After reading eigenvalue and eigenvector part of textbook, I conclude that every symmetric matrix is diagonalizable. Is that true?</p>
<p>Diagonalizable doesn't mean that the matrix has distinct eigenvalues. Think about the identity matrix: it is diagonalizable (it is already diagonal), yet all of its eigenvalues are equal. But the converse is true: every matrix with distinct eigenvalues can be diagonalized. </p>
<p>It is definitely NOT true that a diagonalizable matrix has all distinct eigenvalues: take the identity matrix. Distinct eigenvalues are sufficient for diagonalizability, but not necessary. There is no contradiction here.</p>
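To see this concretely, here is a small NumPy sketch (the example matrix is my own, not from the answers): a symmetric matrix with a repeated eigenvalue still factors as $A = Q\lambda Q^{T}$ with $Q$ orthogonal.

```python
import numpy as np

# A symmetric matrix whose eigenvalues are 1, 3, 3: the eigenvalue 3 is
# repeated, yet a full orthonormal eigenvector basis still exists.
A = np.array([[2., 1., 0.],
              [1., 2., 0.],
              [0., 0., 3.]])

lam, Q = np.linalg.eigh(A)                 # spectral decomposition for symmetric A

assert np.allclose(Q @ np.diag(lam) @ Q.T, A)   # A = Q diag(lam) Q^T
assert np.allclose(Q.T @ Q, np.eye(3))          # Q is orthogonal
print(lam)  # eigenvalue 3 occurs twice, but A is still diagonalizable
```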
logic
<p>Suppose we want to define a first-order language to do set theory (so we can formalize mathematics). One such construction can be found <a href="http://books.google.com.bn/books?id=u927rHHmylAC&amp;lpg=PP1&amp;pg=PA5#v=onepage&amp;q&amp;f=false" rel="noreferrer">here</a>. What makes me uneasy about this definition is that words such as "set", "countable", "function", and "number" are used in somewhat non-trivial manners. For instance, behind the word "countable" rests an immense amount of mathematical knowledge: one needs the notion of a bijection, which requires functions and sets. One also needs the set of natural numbers (or something with equal cardinality), in order to say that countable sets have a bijection with the set of natural numbers.</p> <p>Also, in set theory one uses the relation of belonging "<span class="math-container">$\in$</span>". But relation seems to require the notion an ordered pair, which requires sets, whose properties are described using belonging...</p> <p>I found the following in Kevin Klement's, <a href="https://people.umass.edu/klement/514/ln.pdf" rel="noreferrer">lecture notes on mathematical logic</a> (pages 2-3).</p> <p>"You have to use logic to study logic. There’s no getting away from it. However, I’m not going to bother stating all the logical rules that are valid in the metalanguage, since I’d need to do that in the metametalanguage, and that would just get me started on an infinite regress. The rule of thumb is: if it’s OK in the object language, it’s OK in the metalanguage too."</p> <p>So it seems that, if one proves a fact about the object language, then one can also use it in the metalanguage. 
In the case of set theory, one may not start out knowing what sets really are, but after one proves some fact about them (e.g., that there are uncountable sets) then one implicitly "adds" this fact also to the metalanguage.</p> <p>This seems like cheating: one is using the object language to conduct proofs regarding the metalanguage, when it should strictly be the other way round.</p> <p>To give an example of avoiding circularity, consider the definition of the integers. We can define a binary relation <span class="math-container">$R\subseteq(\mathbf{N}\times\mathbf{N})\times(\mathbf{N}\times\mathbf{N})$</span>, where for any <span class="math-container">$a,b,c,d\in\mathbf{N}$</span>, <span class="math-container">$((a,b),(c,d))\in R$</span> iff <span class="math-container">$a+d=b+c$</span>, and then defining <span class="math-container">$\mathbf{Z}:= \{[(a,b)]:a,b\in\mathbf{N}\}$</span>, where <span class="math-container">$[a,b]=\{x\in \mathbf{N}\times\mathbf{N}: xR(a,b)\}$</span>, as in <a href="https://math.stackexchange.com/questions/156264/building-the-integers-from-scratch-and-multiplying-negative-numbers">this</a> question or <a href="https://en.wikipedia.org/wiki/Integer#Construction" rel="noreferrer">here</a> on Wikipedia. In this definition if set theory and natural numbers are assumed, then there is no circularity because one did not depend on the notion of "subtraction" in defining the integers.</p> <p>So my question is:</p> <blockquote> <p><strong>Question</strong> Is the definition of first-order logic circular? If not, please explain why. If the definitions <em>are</em> circular, is there an alternative definition which avoids the circularity?</p> </blockquote> <p>Some thoughts:</p> <ul> <li><p>Perhaps there is the distinction between what sets are (anything that obeys the axioms) and how sets are expressed (using a formal language). 
In other words, the notion of a <em>set</em> may not be circular, but to talk of sets using a formal language requires the notion of a set in a metalanguage.</p></li> <li><p>In foundational mathematics there also seems to be the idea of first <em>defining</em> something, and then coming back with better machinery to <em>analyse</em> that thing. For instance, one can define the natural numbers using the Peano axioms, then later come back to say that all structures satisfying the axioms are isomorphic. (I don't know any algebra, but that seems right.)</p></li> <li><p>Maybe sets, functions, etc., are too basic? Is it possible to avoid these terms when defining a formal language?</p></li> </ul>
<p>I think an important answer is still not present so I am going to type it. This is somewhat standard knowledge in the field of foundations but is not always adequately described in lower level texts.</p> <p>When we formalize the syntax of formal systems, we often talk about the <em>set</em> of formulas. But this is just a way of speaking; there is no ontological commitment to "sets" as in ZFC. What is really going on is an "inductive definition". To understand this you have to temporarily forget about ZFC and just think about strings that are written on paper. </p> <p>The inductive definition of a "propositional formula" might say that the set of formulas is the smallest class of strings such that:</p> <ul> <li><p>Every variable letter is a formula (presumably we have already defined a set of variable letters). </p></li> <li><p>If $A$ is a formula, so is $\lnot (A)$. Note: this is a string with 3 more symbols than $A$. </p></li> <li><p>If $A$ and $B$ are formulas, so is $(A \land B)$. Note this adds 3 more symbols to the ones in $A$ and $B$. </p></li> </ul> <p>This definition <em>can</em> certainly be read as a definition in ZFC. But it can also be read in a different way. The definition can be used to generate a completely effective procedure that a human can carry out to tell whether an arbitrary string is a formula (a proof along these lines, which constructs a parsing procedure and proves its validity, is in Enderton's logic textbook). </p> <p>In this way, we can understand inductive definitions in a completely effective way without any recourse to set theory. When someone says "Let $A$ be a formula" they mean to consider the situation in which I have in front of me a string written on a piece of paper, which my parsing algorithm says is a correct formula. I can perform that algorithm without any knowledge of "sets" or ZFC.</p> <p>Another important example is "formal proofs". 
Again, I can treat these simply as strings to be manipulated, and I have a parsing algorithm that can tell whether a given string is a formal proof. The various syntactic metatheorems of first-order logic are also effective. For example the deduction theorem gives a direct algorithm to convert one sort of proof into another sort of proof. The algorithmic nature of these metatheorems is not always emphasized in lower-level texts - but for example it is very important in contexts like automated theorem proving. </p> <p>So if you examine a logic textbook, you will see that all the syntactic aspects of basic first order logic are given by inductive definitions, and the algorithms given to manipulate them are completely effective. Authors usually do not dwell on this, both because it is completely standard and because they do not want to overwhelm the reader at first. So the convention is to write definitions "as if" they are definitions in set theory, and allow the readers who know what's going on to read the definitions as formal inductive definitions instead. When read as inductive definitions, these definitions would make sense even to the fringe of mathematicians who don't think that any infinite sets exist but who are willing to study algorithms that manipulate individual finite strings. </p> <p>Here are two more examples of the syntactic algorithms implicit in certain theorems: </p> <ul> <li><p>Gödel's incompleteness theorem actually gives an effective algorithm that can convert any PA-proof of Con(PA) into a PA-proof of $0=1$. So, under the assumption there is no proof of the latter kind, there is no proof of the former kind. </p></li> <li><p>The method of forcing in ZFC actually gives an effective algorithm that can turn any proof of $0=1$ from the assumptions of ZFC and the continuum hypothesis into a proof of $0=1$ from ZFC alone. Again, this gives a relative consistency result. 
</p></li> </ul> <p>Results like the previous two bullets are often called "finitary relative consistency proofs". Here "finitary" should be read to mean "providing an effective algorithm to manipulate strings of symbols". </p> <p>This viewpoint helps explain where weak theories of arithmetic such as PRA enter into the study of foundations. Suppose we want to ask "what axioms are required to prove that the algorithms we have constructed will do what they are supposed to do?". It turns out that very weak theories of arithmetic are able to prove that these symbolic manipulations work correctly. PRA is a particular theory of arithmetic that is on one hand very weak (from the point of view of stronger theories like PA or ZFC) but at the same time is able to prove that (formalized versions of) the syntactic algorithms work correctly, and which is often used for this purpose. </p>
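To make the "completely effective procedure" concrete, here is one possible parsing algorithm for the propositional formulas given by the inductive definition above. This is a naive Python sketch, assuming variable letters are single lowercase letters (an assumption not made in the answer); it decides formula-hood by recursion on strictly shorter strings, so it always terminates.

```python
# A string is a formula iff it is a variable letter, or has the form ¬(A),
# or the form (A∧B), where A and B are themselves formulas.
def is_formula(s: str) -> bool:
    if len(s) == 1:
        return s.isalpha() and s.islower()        # base case: a variable letter
    if s.startswith('¬(') and s.endswith(')'):
        if is_formula(s[2:-1]):                   # ¬(A)
            return True
    if s.startswith('(') and s.endswith(')'):
        body = s[1:-1]
        # Try every position as the top-level conjunction of (A∧B).
        for i, ch in enumerate(body):
            if ch == '∧' and is_formula(body[:i]) and is_formula(body[i+1:]):
                return True
    return False

assert is_formula('p')
assert is_formula('¬(p)')
assert is_formula('(p∧¬(q))')
assert not is_formula('p∧q')   # missing outer parentheses
assert not is_formula('¬p')    # negation requires parentheses
```

Note that the algorithm makes no reference to "sets" in the ZFC sense: it just manipulates a finite string on the page, which is exactly the point of the answer.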
<p>It's only circular if you think we need a formalization of logic in order to reason mathematically at all. However, mathematicians reasoned mathematically for many centuries <em>before</em> formal logic was invented, so this assumption is obviously not true.</p> <p>It's an empirical fact that mathematical reasoning existed independently of formal logic back then. I think it is reasonably self-evident, then, that it <em>still</em> exists without needing formal logic to prop it up. Formal logic is a <em>mathematical model</em> of the kind of reasoning mathematicians accept -- but the model is not the thing itself.</p> <p>A small bit of circularity does creep in, because many modern mathematicians look to their knowledge of formal logic when they need to decide whether to accept an argument or not. But that's not enough to make the whole thing circular; there are enough non-equivalent formal logics (and possible foundations of mathematics) to choose between that the choice of which one to use to analyze arguments is still largely informed by which arguments one <em>intuitively</em> wants to accept in the first place, not the other way around.</p>
probability
<p>Hypothetically, if I have a 0.00048% chance of dying when I blink, and I blink once a second, what chance do I have of dying in a single day? </p> <p>I tried $1-0.0000048^{86400}$ but no calculator I could find would support this. How would I work this out manually?</p>
<p>When $n$ is large, $p$ is small and $np&lt;10$, the Poisson approximation is very good. In that case, the answer is approximately $$P = 1 - e^{-\lambda} = 1 - 0.6605 = 0.3395,$$ where $\lambda = np = 0.41472$.</p>
<p>As @Saketh and @dxiv indicate, you want to take a large power: $(1 - p)^{86400}$, where $p$ is tiny. Calculators don't do well at this. But if you use the rule that $$ a^b = \exp(b \log a) $$ then you can compute $$ b \log a \approx 86400 \log .9999952 \approx -0.41472099533 $$ and compute $e$ to that power to get approximately $0.6605...$, and hence your probability of dying is 1 minus that, or about 34%. </p> <p>The key step is in using the logarithm to compute the exponent, for your calculator's built-in log function (perhaps called "ln") is very accurate near 1, and exponentiation is pretty accurate for numbers like $e$ (a little less than $3$) with exponents between $0$ and about $5$. </p>
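Both computations fit in a few lines. This sketch compares the exact value, via the logarithm trick above, with the Poisson approximation from the other answer; the use of `math.log1p`/`math.expm1` for numerical stability near $1$ is my addition.

```python
import math

p = 0.00048 / 100   # 0.00048 % chance of dying per blink
n = 86_400          # one blink per second for a day

# Exact: 1 - (1-p)^n, computed as -expm1(n*log1p(-p)) to avoid rounding loss.
exact = -math.expm1(n * math.log1p(-p))

# Poisson approximation: 1 - e^{-np}.
poisson = -math.expm1(-n * p)

print(exact, poisson)  # both are about 0.34
```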
probability
<p><strong>I would like to generate a random axis or unit vector in 3D</strong>. In 2D it would be easy: I could just pick an angle between 0 and 2*Pi and use the unit vector pointing in that direction. </p> <p>But in 3D <strong>I don't know how I can pick a random point on the surface of a sphere</strong>.</p> <p>If I pick two angles the distribution won't be uniform on the surface of the sphere. There would be more points at the poles and fewer points at the equator.</p> <p>If I pick a random point in the (-1,-1,-1):(1,1,1) cube and normalise it, then there would be more chance that a point gets chosen along the diagonals than from the center of the sides. So that's not good either.</p> <p><strong>But then what's the good solution?</strong> </p>
<p>You need to use an <a href="http://en.wikipedia.org/wiki/Equal-area_projection#Equal-area">equal-area projection</a> of the sphere onto a rectangle. Such projections are widely used in cartography to draw maps of the earth that represent areas accurately.</p> <p>One of the simplest such projections is the axial projection of a sphere onto the lateral surface of a cylinder, as illustrated in the following figure:</p> <p><img src="https://i.sstatic.net/GDe76.png" alt="Cylindrical Projection"></p> <p>This projection is area-preserving, and was used by Archimedes to compute the surface area of a sphere.</p> <p>The result is that you can pick a random point on the surface of a unit sphere using the following algorithm:</p> <ol> <li><p>Choose a random value of $\theta$ between $0$ and $2\pi$.</p></li> <li><p>Choose a random value of $z$ between $-1$ and $1$.</p></li> <li><p>Compute the resulting point: $$ (x,y,z) \;=\; \left(\sqrt{1-z^2}\cos \theta,\; \sqrt{1-z^2}\sin \theta,\; z\right) $$</p></li> </ol>
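The three steps above translate directly into code; a minimal Python sketch (the function name is mine):

```python
import math
import random

random.seed(0)

def random_unit_vector():
    """Archimedes' cylindrical projection: theta uniform on [0, 2*pi) and
    z uniform on [-1, 1] give a uniform point on the unit sphere."""
    theta = random.uniform(0, 2 * math.pi)   # step 1
    z = random.uniform(-1, 1)                # step 2
    r = math.sqrt(1 - z * z)                 # step 3
    return (r * math.cos(theta), r * math.sin(theta), z)

v = random_unit_vector()
assert abs(sum(c * c for c in v) - 1) < 1e-12  # the point lies on the unit sphere
```

Because the projection is area-preserving, the spherical cap $z > 1/2$, which has a quarter of the sphere's area, receives a quarter of the samples.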
<p>Another commonly used convenient method of generating a uniform random point on the sphere in $\mathbb{R}^3$ is this: Generate a standard multivariate normal random vector $(X_1, X_2, X_3)$, and then normalize it to have length 1. That is, $X_1, X_2, X_3$ are three independent standard normal random numbers. There are many well-known ways to generate normal random numbers; one of the simplest is the <a href="http://en.wikipedia.org/wiki/Box-Muller">Box-Muller algorithm</a> which produces two at a time.</p> <p>This works because the standard multivariate normal distribution is invariant under rotation (i.e. orthogonal transformations).</p> <p>This has the nice property of generalizing immediately to any number of dimensions without requiring any more thought.</p>
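A minimal sketch of this method; using Python's standard-library `random.gauss` (which handles normal generation internally) rather than hand-coding Box-Muller is my choice, not the answer's.

```python
import math
import random

random.seed(0)

def random_unit_vector(dim=3):
    """Normalize a standard normal vector. Uniformity on the sphere follows
    from the rotation invariance of the multivariate normal distribution,
    and the same code works in any dimension."""
    v = [random.gauss(0, 1) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

v = random_unit_vector()
assert abs(sum(x * x for x in v) - 1) < 1e-12  # unit length
```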
probability
<blockquote> <p>You have a standard deck of cards and randomly take one card away without looking at it and set it aside. What is the probability that you draw the Jack of Hearts from the pile now containing 51 cards? </p> </blockquote> <hr> <p>I'm confused by this question because if the card you removed from the pile was the Jack of Hearts then the probability would be zero so I'm not sure how to calculate it.</p> <hr> <p><strong>Edit:</strong> I asked this question about a year ago because I was struggling to get an intuitive understanding of an important concept in probability, and the comments and answers were really helpful for me (especially the one about "no new information being added so the probability doesn't change"). </p>
<h3>The Hard Way</h3> <p>The probability that the first card is not the Jack of Hearts is $\frac {51}{52}$, so the probability that the first card is not the Jack of Hearts <em>and</em> the second card <em>is</em> the Jack of Hearts is $\frac {51}{52}\times \frac 1{51}$.<br> The probability that the first card <em>is</em> the Jack of Hearts is $\frac 1{52}$, so the probability that the first card is the Jack of Hearts and the second card is the Jack of Hearts is $\frac 1{52}\times 0$.</p> <p>So the total probability that the second card is the Jack of Hearts is:<br> the probability that the second card is the Jack of Hearts and the first card is not, plus<br> the probability that the second card is the Jack of Hearts and the first card already was $$= \frac {51}{52}\times \frac 1{51} + \frac 1{52}\times 0 = \frac 1{52} + 0 = \frac 1{52}$$</p> <p>That was the hard way.</p> <h3>The Easy Way</h3> <p>The probability that any specific card is any specific value is $\frac 1{52}$. It doesn't matter if it is the first card, the last card, or the 13th card. So the probability that the second card is the Jack of Hearts is $\frac 1{52}$. Picking the first card and not looking at it, going directly to the second card, putting the second card in an envelope and setting the rest of the cards on fire won't make any difference; all that matters is that the second card has a one in $52$ chance of being the Jack of Hearts.</p> <p>Anything else just wouldn't make any sense.</p> <hr> <p>The thing is, throwing in red herrings like "what about the first card?" doesn't change things, and <em>if</em> you actually do try to take everything into account, the result, albeit complicated, will come out to be the same.</p>
<blockquote> <p>I'm confused by this question because if the card you removed from the pile was the Jack of Hearts then the probability would be zero so I'm not sure how to calculate it.</p> </blockquote> <p>Well, to go the long way, you need to use the Law of Total Probability. &nbsp; Letting $X$ be the card you took away, and $Y$ the card you subsequently draw from the remaining deck.</p> <p>$$\mathsf P(Y=\mathrm J\heartsuit) = \mathsf P(Y=\mathrm J\heartsuit\mid X\neq \mathrm J\heartsuit)~\mathsf P(X\neq \mathrm J\heartsuit)+0\cdot\mathsf P(X=\mathrm J\heartsuit)$$</p> <p>Well, now, $\mathsf P(X\neq\mathrm J\heartsuit) = 51/52$ is the probability that the card taken is one from the 51 not-jack-of-hearts.</p> <p>Also $\mathsf P(Y=\mathrm J\heartsuit\mid X\neq \mathrm J\heartsuit)$ is the probability that the subsequent selection is the Jack of Hearts when given that that card <em>is among the 51</em> remaining cards.</p> <hr> <p>Alternatively, the short way is to consider : When given a shuffled deck of 52 standard cards, what is the probability that the <em>second from the top</em> is the Jack of Hearts?</p>
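If the Law of Total Probability argument still feels abstract, a short simulation makes the point. In this sketch (mine, not the answerer's) card `0` is an arbitrary stand-in for the Jack of Hearts.

```python
import random

random.seed(0)
N = 200_000
deck = list(range(52))   # card 0 stands in for the Jack of Hearts
hits = 0
for _ in range(N):
    random.shuffle(deck)
    # deck[0] is set aside unseen; deck[1] is the card drawn from the
    # remaining 51-card pile.
    if deck[1] == 0:
        hits += 1

print(hits / N)  # close to 1/52, i.e. about 0.0192
```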
differentiation
<blockquote> <p>Let $f:\Bbb R\to\Bbb R$ be a convex function. Then $f$ is differentiable at all but countably many points.</p> </blockquote> <p>It is clear that a convex function can be non-differentiable at countably many points, for example $f(x)=\int\lfloor x\rfloor\,dx$.</p> <p>I just made this theorem up, but my heuristic for why it should be true is that the only possible non-differentiable singularities I can imagine in a convex function are corners, and these involve a jump discontinuity in the derivative, so since the derivative is increasing (where it is defined), you get an inequality like $f'(y)-f'(x)\ge \sum_{t\in(x,y)}\omega_t(f')$, where $\omega_t(f')$ is the oscillation at $t$ (limit from right minus limit from left) and the sum is over all real numbers between $x$ and $y$. Since the sum is convergent (assuming that $x\le y$ are points such that $f$ is differentiable at $x$ and $y$ so that this makes sense), there can only be countably many values in the sum which are non-zero, and at all other points the oscillation is zero and so the derivative exists. Thus there are only countably many non-differentiable points in the interval $(x,y)$, so as long as there is a suitable sequence $(x_n)\to-\infty$, $(y_n)\to\infty$ of differentiable points, the total number of non-differentiable points is a countable union of countable sets, which is countable.</p> <p>Furthermore, I would conjecture that the set of non-differentiable points has empty interior-of-closure, i.e. you can't make a function that is non-differentiable at the rational numbers, but as the above discussion shows there are still a lot of holes in the proof (and I'm making a lot of unjustified assumptions regarding the derivative already being somewhat well-defined). Does anyone know how to approach such a statement?</p>
<p>The set of points of nondifferentiability <a href="https://math.stackexchange.com/a/67857/">can be dense</a>.</p> <p>But you correctly conjectured that it is at most countable. First, convexity implies that for $s&lt;u\le v&lt;t$ we have $$ \frac{f(u)-f(s)}{u-s}\leq\frac{f(t)-f(v)}{t-v} \tag{1}$$ <strong>Sketch of the proof of (1)</strong>. First, use the 3-point convexity definition to show this for $u=v$. When $u&lt;v$, proceed as $$ \frac{f(u)-f(s)}{u-s}\leq\frac{f(t)-f(u)}{t-u}\leq\frac{f(t)-f(v)}{t-v}\qquad \Box$$</p> <p>From (1) it follows that $f$ has <a href="http://en.wikipedia.org/wiki/Left_and_right_derivative" rel="noreferrer">one-sided derivatives</a> ${f}_-'$ and $f_+'$ at every point, and they satisfy $$ {f}_-'(x)\le f_+'(x)\le {f}_-'(y)\le f_+'(y), \quad x&lt;y \tag{2} $$</p> <p>For every point where ${f}_-'(x)&lt; f_+'(x)$, pick a rational number $q$ such that ${f}_-'(x)&lt; q&lt; f_+'(x)$. Inequality (2) implies all these rationals are distinct. Therefore, the set of points of nondifferentiability is at most countable.</p>
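A numerical illustration of inequality (2): the example function $f(x) = |x| + |x-1|$ (convex, with corners exactly at $x=0$ and $x=1$) and the finite-difference approximation are my own additions, not part of the proof above.

```python
def f(x):
    # Convex, nondifferentiable exactly at x = 0 and x = 1.
    return abs(x) + abs(x - 1)

def one_sided(f, x, sign, h=1e-7):
    """Approximate f'_+(x) (sign=+1) or f'_-(x) (sign=-1) by a difference quotient."""
    return (f(x + sign * h) - f(x)) / (sign * h)

# At each corner, the one-sided derivatives disagree: f'_-(x) < f'_+(x),
# leaving a rational number q strictly between them, as in the proof.
for x in (0.0, 1.0):
    assert one_sided(f, x, -1) < one_sided(f, x, +1)

# Monotonicity across points, as in (2): f'_+(0) <= f'_-(1).
assert one_sided(f, 0.0, +1) <= one_sided(f, 1.0, -1) + 1e-9
```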
<p>I'm going to post an alternative answer for those who wish to avoid the Axiom of Choice (although that argument is much simpler than the one below).</p> <p>It's fairly trivial to demonstrate that whenever $f$ is a convex map (on an open set), then $f$ is both left-differentiable and right-differentiable at every point of its domain. It's also quite typically easy to show that both $f_l'$ and $f_r'$ are both increasing functions. Furthermore, it's also typically easy to show that $f_l'(x)\leq f_r'(x)$ for every $x$ in the domain. However, another property that holds (which is much less frequently invoked however quite important here) is that whenever $x&lt; y$, then $f_r'(x)\leq f_l'(y)$. Each of these can be readily proved from the Three Chord lemma.</p> <p>The idea behind this proof is to use the fact that increasing functions have at most countably many points of discontinuity (which both $f_l'$ and $f_r'$ are). And to use this fact that we are about to prove: if either $f_l'$ or $f_r'$ are continuous at some fixed point, then both of these functions are going to agree at that point.</p> <p>From the inequalities above, we can conclude for any $x$ that $$\lim_{y\rightarrow x^-}f_r'(y)\leq f_l'(x)\leq f_r'(x) \quad\text{and}\quad f_l'(x)\leq f_r'(x)\leq\lim_{y\rightarrow x^+}f_l'(y).$$</p> <p>The assertion of equality in the presence of continuity is quickly deduced from these inequalities.</p> <p>Now let $D$ be the set of points where $f$ is not differentiable. By the lemma we just proved, this set is contained in the set of points where either $f_l'$ or $f_r'$ is discontinuous, which is of course countable.</p>
logic
<p>I am trying very hard to understand Gödel's Incompleteness Theorem. I am really interested in what it says about axiomatic languages, but I have some questions:</p> <p>Gödel's theorem is proved based on Arithmetic and its four operators: is all mathematics derived from these four operators (×, +, -, ÷) ?</p> <p>Operations such as log(x) and sin(x) are indeed atomic operations, like those of arithmetic, but aren't there infinitely many such operators that have inverses (that is, + and - are "inverse" operations, × and ÷ are inverse operations)?</p> <p>To me it seems as though making a statement about the limitations of provability given 4 arbitrary operators is absurd, but that probably highlights a gap in my understanding, given that he proved this in 1931 and it's unlikely that I have found a counter-argument.</p> <p>As a follow-up remark, why the obsession with arithmetic operators? They probably seem "fundamental" to us as humans, but to me they all seem to be derived from four possible graphical arrangements of numbers (if we consider four sides to a digit), and fundamentally derived from addition.</p> <p>[][] o [][] addition and, its inverse, subtraction</p> <p>[][]<br> [][] multiplication (iterative addition) and, its inverse, division</p> <p>There must be operators that are consistent on the natural numbers that we certainly aren't aware of, no?</p> <p>Please excuse my ignorance, I am hoping I haven't offended any real mathematicians with this posting.</p> <p><hr> edit: I think I am understanding this a lot more, and I think my main difficulty in understanding this was that:</p> <blockquote> <p>There are statements that are true that are unprovable. </p> </blockquote> <p>Seemed like an impossible statement. It does, however, make sense to me at the moment in the context of an axiomatic language with a limited number of axioms. 
Ultimately, suggesting that there are statements that are true and expressible in the language, but are unprovable in the language (because of the limited set of axioms), is what I believe to be the point of the proof -- is this correct?</p>
<p>There's a fair amount of historical context to make some of Gödel's choices in his original proof clear. For a good overview of the proof, Nagel and Newman's <strong>Goedel's Proof</strong> is pretty good (though not without its detractors). I also highly recommend the late Torkel Franzen's <a href="http://rads.stackoverflow.com/amzn/click/1568812388">Godel's Theorem: An Incomplete Guide to its Use and Abuse</a>. I may myself be guilty of some of those abuses below; I'm trying to shy away from a lot of the technical details, and this usually invites imprecision and even abuse in this particular field. I hope those who know better than me will keep me honest via comments and appropriate ear-pulling.</p> <p><strong>Some history.</strong> Sometimes I think of the 19th century as the <em>Menagerie century</em>. A "menagerie" was like a zoo of weird animals. During a lot of the 19th century, people were trying to clean up some of the logical problems that abounded in the foundations of mathematics. Calculus plainly <em>worked</em>, but a lot of the notions of infinitesimals were simply incompatible with some 'facts' about real numbers (eventually solved by Weierstrass's notion of limits using $\epsilon$s and $\delta$s). A lot of assumptions people had been making implicitly (or explicitly) were shown to be false through the construction of explicit counterexamples (the "weird animals" that lead me to call it the <em>menagerie century</em>; many mathematicians were like explorers bringing back weird animals nobody had seen before and which challenged people's notions of what was and was not the case): the Dirichlet function to show that you can have functions that are discontinuous everywhere; functions that are continuous everywhere but nowhere differentiable; the failure of Dirichlet's principle for some functions; Peano's curve that filled-up a square; etc. Then, the antinomies (paradoxes and contradictions) in the early set theory. 
Even some work which today we find completely without problem caused a lot of debate: Hilbert's solution of the problem of a finite basis for invariants in any number of variables was originally derided by Gordan as "theology, not mathematics" (the solution was not constructive and involved the use of an argument by contradiction). Many found a lot of these developments deeply troubling. A foundational crisis arose (see the link in Qiaochu's answer).</p> <p>Hilbert, one of the leading mathematicians of the late 19th century, proposed a way to settle the differences between the two main camps. His proposal was essentially to try to use methods that both camps found unassailably valid to show that mathematics was <em>consistent</em>: that it was not possible to prove both a proposition and its negation. In fact, his proposal was to use methods that both camps found unassailable to prove that the methods that <em>one</em> camp found troublesome would not introduce any problems. This was the essence of the <strong>Hilbert Programme.</strong></p> <p>In order to be able to accomplish this, however, one needed to have some way to study proofs and mathematics itself. There arose the notion of "formal proof", the idea of an axiomatization of basic mathematics, etc. There were several competing axiomatic systems for the basics of mathematics: Zermelo attempted to axiomatize set theory (later expanded to Zermelo-Fraenkel set theory); Peano had proposed a collection of axioms for basic arithmetic; and famously Russell and Whitehead had, in their massive book <strong>Principia Mathematica</strong>, attempted to establish an axiomatic and deductive system for all of mathematics (as I recall, it takes hundreds of pages to finally get to $1+1=2$). Some early successes were achieved, with people showing that some parts of such theories were in fact consistent (more on this later).
Then came Gödel's work.</p> <p><strong>Consistency and Completeness.</strong> We say a formal theory is <em>consistent</em> if you cannot prove both $P$ and $\neg P$ in the theory for some sentence $P$. In fact, because from $P$ and $\neg P$ you can prove anything using classical logic, it is equivalent that a theory is consistent if and only if there is at least one sentence $Q$ such that there is no proof of $Q$ in the theory. By contrast, a theory is said to be <em>complete</em> if given any sentence $P$, either the theory has a proof of $P$ or a proof of $\neg P$. (Note that an inconsistent theory is necessarily complete). Hilbert proposed to find a consistent and complete axiomatization of arithmetic, together with a proof (using only the basic mathematics that both camps agreed on) that it was both complete and consistent, and that it would remain so even if some of the tools that his camp used (which the other found unpalatable and doubtful) were used with it.</p> <p><strong>Why arithmetic?</strong> Arithmetic was a particularly good field to focus on in the early efforts. First, it was the basis of the "other camp". Kronecker famously said "God gave us the natural numbers, the rest is the work of man." It was hoped that an axiomatization of the natural numbers and their basic operations (addition, multiplication) and relations (order) would be both relatively easy, and also have a hope of being both consistent and complete. That is, it was a good testing ground, because it contained a lot of interesting and nontrivial mathematics, and yet seemed to be reasonably simple.</p> <p><strong>Gödel</strong>. Gödel focused on arithmetic for this reason. As it happens, multiplication is key to the argument (there is something special about multiplication; some theories of the natural numbers that include only addition can be shown to be consistent using only the kinds of tools that Hilbert allowed).
To answer one of your questions along the way: Gödel even defined new operations and relations on natural numbers that had little to do with addition and multiplication, so that yes, there are operations other than those (no fetishism about them at all). But in fact, Gödel did not restrict himself <em>solely</em> to arithmetic. His proof is, on its face, about the entire system of mathematics set forth in Russell and Whitehead's <strong>Principia</strong>, though as Gödel notes it can easily be adapted to other systems so long as they satisfy certain criteria (that's why Gödel's original paper has a title that explicitly refers to the <em>Principia</em> "and related systems"). </p> <p>What Gödel showed was that <em>any</em> theory, subject to some technical restrictions (for example, you must have a way of recognizing whether a given sentence is or is not an axiom), that is "strong enough" that you can use it to define a certain portion of arithmetic will necessarily be either incomplete or inconsistent (that is, either you can prove <em>everything</em> in that theory, or else there is at least one sentence $P$ such that neither $P$ nor $\neg P$ can be proven). It's not a limitation based on four operations, or an obsession with those operations: quite the opposite. What it says is that if you want your theory to include <em>at least</em> some arithmetic, then your theory is going to be so complicated that <em>either</em> it is inconsistent, or else there are propositions that can neither be proven nor disproven <em>using only the methods that both camps found valid.</em> </p> <p>That is: what it shows is a limitation of those particular (logically unassailable) methods. If we use other methods, we are able to establish consistency of arithmetic, for example, but if you had your doubts about the consistency of arithmetic in the first place, chances are you will find those methods just as doubtful.
</p> <p>Now, about your coda, and the statement "statements that are true but unprovable"; this is not very apt. You will find a lot of criticism of this paraphrase in Franzen's book, with good reason. It's best to think that you have statements that are neither provable nor disprovable. In fact, one of the things we know is that if you have such a statement $P$, in a theory $M$, then you can find a <strong>model</strong> (an interpretation of the axioms of $M$ that makes the axioms true) in which $P$ is true, and a different model in which $P$ is false. So in a sense, $P$ is neither "true" nor "false", because whether it is true or false will depend on the <em>model</em> you are using. For example, the Gödel sentence $G$ that proves the First Incompleteness Theorem (that if arithmetic is consistent then it is incomplete, since there can be no proof of the sentence $G$ and no proof of $\neg G$) is often said to be "true but unprovable" because $G$ can be interpreted as saying "There is no proof of $G$ in this theory." But in fact, you can find a model of arithmetic in which $G$ is <em>false</em>, so why do we say $G$ is "true"? Well, the point is that $G$ is true in what is called "the standard model." There is a particular interpretation of arithmetic (of what "natural number" means, of what $0$ means, of what "successor of $n$" means, of what $+$ means, etc) which we usually have in mind; <em>in that model</em>, $G$ is true but not provable. But we know that there are different models (where 'natural number' may mean something completely different, or perhaps $+$ means something different) where we can <em>show</em> that $G$ is false <em>under that interpretation</em> of the axioms.
I would stay away from "true" and "false", and stick with "provable" and "unprovable" when discussing this; it tends to prevent problems.</p> <p><strong>First Summary.</strong> So: there were historical reasons why Gödel focused on arithmetic; the limitation is not of arithmetic itself, but rather of the formal methods in question: if your theory is sufficiently "strong" that it can represent part of arithmetic (plus it satisfies the few technical restrictions), then either your theory is inconsistent, or else the finitistic proof methods at issue cannot suffice to settle all questions (there are sentences $P$ which can neither be proven nor disproven).</p> <p><strong>Can something be rescued?</strong> Well, ideally of course we would have liked a theory that was complete and consistent, and that we could <em>show</em> is complete and consistent using only the kinds of logical methods which we, and pretty much everyone else, finds beyond doubt. But perhaps we can at least show that the other methods don't introduce any problems? That is, that the theory is consistent, even if it is not complete? That at least would be somewhat of a victory.</p> <p>Unfortunately, Gödel also proved that this is not the case. He showed that if your theory is sufficiently complex that it can represent the part of arithmetic at issue (and it satisfies the technical conditions alluded to earlier), and the theory <em>is</em> consistent, then in fact one of the things that it cannot settle is whether the theory is consistent! That is, one can write down a sentence $C$ which makes sense in the theory, and which essentially "means" "This theory is consistent" (much like the Gödel sentence $G$ essentially "means" that "there is no proof of $G$ in this theory"), and one can prove that if the theory is consistent then the theory has no proof of $C$ and no proof of $\neg C$. </p> <p>Again, this is a limitation of those finitistic methods that everyone finds logically unassailable.
In fact, there are proofs of the consistency of arithmetic using transfinite induction, but as I alluded to above, if you harbored doubts about arithmetic in the first place, you are simply not going to be very comfortable with transfinite induction either! Imagine that you are not sure that $X$ is being truthful, and $X$ suggests that you ask $Y$ about $X$'s truthfulness; you don't know $Y$, but $X$ assures you that $Y$ is <em>very</em> trustworthy. Well, that's not going to help you, right?</p> <p><strong>Key take-away:</strong> Because the theorem applies to <em>any</em> theory that is sufficiently complex (and satisfies the technical restrictions), we are not even in a position of enlarging our set of axioms to escape these problems. So long as we restrict ourselves to enlargement methods that we find logically unassailable, the technical restrictions will still be satisfied, so that the new theory, stronger and larger though it will be because it has more axioms, will <em>still</em> be incomplete (though it will possibly be <em>other</em> sentences that are now incomplete or unprovable; remember that by adding axioms, we are also potentially expanding the kinds of things about which we can talk). So the theorems are not about shortcomings of <em>particular</em> axiomatic systems, but rather about those finitistic methods within a very large class of systems. </p> <p><strong>What about those 'technical restrictions'?</strong> They are important. Suppose that arithmetic were consistent. That means that there is at least one model for it. We could pick a model $M$, and then say "Let's make a theory whose axioms are exactly those sentences that are true when interpreted in $M$." This is a <em>complete</em> and <em>consistent</em> axiomatic system for arithmetic. Complete, because each sentence is either true in $M$ (and hence an axiom, hence provable in this theory) or else false in $M$, in which case its negation is true (and hence an axiom, and hence provable).
And consistent, because it has a model, $M$, and a theory is consistent if and only if it has a model. The problem with this axiomatic theory is that if I give you a sentence, you're going to have a hard time deciding if it is or it is not an axiom! We didn't really achieve anything by taking this axiomatic system. The "technical restrictions" are both in the form of making the system actually usable, and also certain technical issues that arise from the mechanics of the proof. But the restrictions are mild enough that pretty much everyone agrees that most reasonable theories will likely satisfy them.</p> <p><strong>Second summary.</strong> So: if you have a formal axiomatic system which satisfies certain technical (but mild) restrictions, if the theory is large enough that you can represent (a certain part of) arithmetic in it, then the theory is either inconsistent, or else the finitistic methods that everyone agrees are unassailable are insufficient to prove or disprove every sentence in the theory; worse, one of the things that the finitistic methods cannot prove or disprove is <em>whether</em> the theory is in fact consistent or not. </p> <p>Hope that helps. I really recommend Franzen's book in any case. It will lead you away from potential misinterpretations of what the theorem says (I am likely guilty of a few myself above, which will no doubt be addressed in comments by those who know better than I do). </p>
<p>The incompleteness theorem is much more general than "arithmetic and its four operators." What it says is that any effectively generated formal system which is sufficiently powerful is either inconsistent or incomplete. This requires that the system be capable of talking about arithmetic, but it can also talk about much more than arithmetic.</p> <blockquote> <p>To me it seems as though making a statement about the limitations of provability given 4 arbitrary operators is absurd</p> </blockquote> <p>Then you should study harder! Mathematics is not beholden to your intuition. When your intuition clashes with the mathematics, you should assume that your intuition is wrong. (I am not trying to be harsh here, but this is an attitude you need to digest before you can really learn any mathematics.)</p> <p>You also shouldn't think of arithmetic as just being about the arithmetic operations: it's also about the <em>quantifiers</em>.</p> <blockquote> <p>As a follow-up remark, why the obsession with arithmetic operators?</p> </blockquote> <p>Who claims to have an obsession with arithmetic operators? Perhaps what you are missing here is historical context. The historical context is, roughly, this: back in Gödel's day there was a program, initiated by <a href="http://en.wikipedia.org/wiki/David_Hilbert">Hilbert</a>, which sought to give a complete set of axioms for all of mathematics. That is, Hilbert wanted to write down a set of (consistent) axioms from which all mathematical truths were deducible. (To really understand why Hilbert wanted to do this you should read about the <a href="http://en.wikipedia.org/wiki/Foundations_of_mathematics#Foundational_crisis">foundational crisis in mathematics</a>). 
This was very grand and ambitious and many people had great hopes for Hilbert's program until Gödel destroyed those hopes with the Incompleteness theorem, which shows that Hilbert's program could not possibly succeed: if the axioms are powerful enough to talk about arithmetic, then they are either inconsistent (you can prove false statements) or incomplete (you can't prove some true statements).</p>
probability
<p>Assume that we are playing a game of Russian roulette (6 chambers) and that there is no shuffling after the shot is fired.</p> <p>I was wondering if you have an advantage in going first?</p> <p>If so, how big of an advantage?</p> <p>I was just debating this with friends, and I wouldn't know what probability to use to prove it. I'm thinking binomial distribution or something like that.</p> <p>If <span class="math-container">$n=2$</span>, then there's no advantage. Just <span class="math-container">$50/50$</span> if the person survives or dies.</p> <p>If <span class="math-container">$n=3$</span>, then maybe the other guy has an advantage. The person who goes second should have an advantage.</p> <p>Or maybe I'm wrong.</p>
<p>For a $2$ Player Game, it's obvious that player one will play, and $\frac16$ chance of losing. Player $2$, has a $\frac16$ chance of winning on turn one, so there is a $\frac56$ chance he will have to take his turn. (I've intentionally left fractions without reducing them as it's clearer where the numbers came from)</p> <p>Player 1 - $\frac66$ (Chance Turn $1$ happening) $\times \ \frac16$ (chance of dying) = $\frac16$</p> <p>Player 2 - $\frac56$ (Chance Turn $2$ happening) $\times \ \frac15$ (chance of dying) = $\frac16$</p> <p>Player 1 - $\frac46$ (Chance Turn $3$ happening) $\times \ \frac14$ (chance of dying) = $\frac16$</p> <p>Player 2 - $\frac36$ (Chance Turn $4$ happening) $\times \ \frac13$ (chance of dying) = $\frac16$</p> <p>Player 1 - $\frac26$ (Chance Turn $5$ happening) $\times \ \frac12$ (chance of dying) = $\frac16$</p> <p>Player 2 - $\frac16$ (Chance Turn $6$ happening) $\times \ \frac11$ (chance of dying) = $\frac16$</p> <p>So the two player game is fair without shuffling. Similarly, the $3$ and $6$ player versions are fair.</p> <p>It's the $4$ and $5$ player versions where you want to go last, in hopes that the bullets will run out before your second turn.</p> <p>For a for $4$ player game, it's:<br> P1 - $\frac26$,<br> P2 - $\frac26$,<br> P3 - $\frac16$,<br> P4 - $\frac16$<br></p> <p>Now, the idea in a $2$ player game is that it is best to be player $2$, because in the event you end up on turn six, you KNOW you have a chambered round, and can use it to shoot player $1$ (or your captor), thus winning, changing your total odds of losing to P1 - $\frac36$, P2 - $\frac26$, Captor - $\frac16$</p>
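The turn-by-turn products worked out above can be checked exactly for any number of players. A minimal sketch in Python (the function name is mine): without re-spinning, the bullet is equally likely to sit in any of the six chambers, and the trigger on turn $t$ (counting from zero) is pulled by player $t \bmod n$.

```python
from fractions import Fraction

def death_probs(n_players, chambers=6):
    """Chance that each player takes the bullet when the cylinder is
    spun once and the trigger is then pulled in fixed turn order."""
    probs = [Fraction(0)] * n_players
    for turn in range(chambers):
        # The bullet is in chamber `turn` with probability 1/chambers,
        # and that trigger pull belongs to player turn % n_players.
        probs[turn % n_players] += Fraction(1, chambers)
    return probs

print(death_probs(2))  # [Fraction(1, 2), Fraction(1, 2)] -- fair
print(death_probs(4))  # [Fraction(1, 3), Fraction(1, 3), Fraction(1, 6), Fraction(1, 6)]
```

As the answer says, the $2$-, $3$- and $6$-player games come out fair, while with $4$ or $5$ players the early seats are worse off.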
<p>Your best bet is actually to go <em>last</em>, because this will either make no difference or decrease the probability of shooting yourself. Why? If there are $n$ people and $n$ divides $6$, then the probability of any one person taking the bullet is equal, since the probability distribution is assumed to be uniform over the number of times the trigger is pulled. If $n \le 6$ does not divide $6$, i.e. if $n=4$ or $5$, then the turns wrap around and begin with the first person again, so the first one or two people have to shoot again; naturally, this doubles their probability of being shot. And finally, if $n&gt;6$ then the bullets will have run out before they come to you if you go last.</p>
number-theory
<p>Books on Number Theory for anyone who loves Mathematics?</p> <p>(Beginner to Advanced &amp; just for someone who has a basic grasp of math)</p>
<p><a href="http://rads.stackoverflow.com/amzn/click/038797329X">A Classical Introduction to Modern Number Theory</a> by Ireland and Rosen hands down!</p>
<p>I would still stick with Hardy and Wright, even if it is quite old.</p>
logic
<p>I can rather easily imagine that some mathematician/logician had the idea to symbolize "it <strong>E</strong> xists" by $\exists$ - a reversed E - and after that some other (imitative) mathematician/logician had the idea to symbolize "for <strong>A</strong> ll" by $\forall$ - a reversed A. Or vice versa. (Maybe it was one and the same person.)</p> <p>What is hard (for me) to imagine is, how the one who invented $\forall$ could fail to consider the notations $\vee$ and $\wedge$ such that today $(\forall x \in X) P(x)$ must be spelled out $\bigwedge_{x\in X} P(x)$ instead of $\bigvee_{x\in X}P(x)$? (Or vice versa.)</p> <p>Since I know that this is not a real question, let me ask it like this: Where can I find more about this observation?</p>
<p>See <a href="http://jeff560.tripod.com/set.html">Earliest Uses of Symbols of Set Theory and Logic</a> for this and much more.</p>
<p>The four types of propositions used in the classical Greek syllogisms were called A, E, I, O. Statements of type A were "All p are q", and statements of type I were "Some p are q" (traditionally, these vowels come from the Latin <em>affirmo</em>, "I affirm"). So of course, centuries later, mathematicians (who had a classical education) had letters ready to hand for "for <strong>A</strong>ll" and "there <strong>E</strong>xists", and later turned them upside down to avoid confusion with letters used for other things. </p> <p>By the way: E and O were the negative forms, "No p are q" and "Some p are not q" = "Not all p are q" (their vowels come from <em>nego</em>, "I deny").</p>
combinatorics
<p>I had seen this problem a long time back and wasn't able to solve it. For some reason I was reminded of it and thought it might be interesting to the visitors here.</p> <p>Apparently, this problem is from a mathematics magazine of some university in the United States (sorry, no idea about either).</p> <p>So the problem is:</p> <p>Suppose $S \subset \mathbb{Z}$ (set of integers) such that </p> <p>1) $|S| = 15$</p> <p>2) $\forall ~s \in S, \exists ~a,b \in S$ such that $s = a+b$</p> <p>Show that for every such $S$, there is a non-empty subset $T$ of $S$ such that the sum of elements of $T$ is zero and $|T| \leq 7$.</p> <p><strong>Update</strong> (Sep 13)</p> <p>Here is an approach which seems promising and others might be able to take it ahead perhaps.</p> <p>If you look at the set as a vector $s$, then there is a matrix $A$ with the main diagonal being all $1$, each row containing exactly one $1$ and one $-1$ (or a single $2$) in the non-diagonal position such that $As = 0$.</p> <p>The problem becomes equivalent to proving that for any such matrix $A$ the row space of $A$ contains a vector with all zeroes except for a $1$ and $-1$ or a vector with all zeroes except $\leq 7$ ones.</p> <p>This implies that the numbers in the set $S$ themselves don't matter and we can perhaps replace them with elements from a different field (like say reals, or complex numbers).</p>
<p>A weaker statement, where we allow elements in $T$ to be repeated, can be proved as below: </p> <p>Since we can look at the set $\{-s | s\in S \}$, we may assume there are at most $7$ positive numbers in $S$. Let each positive number be a vertex; from each vertex $s$ we draw an arrow to a vertex $a$ such that $s=a+b$. Since for $s>0$ at least one of $a,b$ must be positive, there is at least one arrow out of every vertex. So there must be a cycle $s_1,\cdots,s_n=s_1$ with $n\leq 8$. We can let $T$ consist of the differences $s_i-s_{i+1}$, $1\leq i\leq n-1$: each of these lies in $S$, and their sum telescopes to $s_1-s_n=0$. </p>
<p>I started to write a short note devoted to the problem. Its current version is <a href="https://mega.nz/#!ohZilaxQ!gOnBanPN5f8r1niFya57VU3IbQb15x0m-wbM8iwp3pg" rel="nofollow noreferrer">here</a>. I introduced the following general framework for it.</p> <p>Let <span class="math-container">$S$</span> be a subset of an abelian group. The set <span class="math-container">$S$</span> is called <em>decomposable</em>, provided each of its elements <span class="math-container">$a$</span> is decomposable, that is, there exist <span class="math-container">$b,c\in S$</span> with <span class="math-container">$a=b+c$</span>. Clearly, <span class="math-container">$S$</span> is decomposable iff <span class="math-container">$S\subset S+S$</span>. Let <span class="math-container">$z(S)$</span> be the smallest size of a non-empty subset <span class="math-container">$T$</span> of <span class="math-container">$S$</span> such that <span class="math-container">$\sum T=0$</span>, if such a subset exists, and <span class="math-container">$z(S)=\infty$</span> otherwise.
Given a natural number <span class="math-container">$n$</span> put <span class="math-container">$$z(n)=\sup\{z(S): S\subset S+S\subset\mathbb R, |S|=n\},$$</span> that is, <span class="math-container">$z(n)$</span> is the smallest number <span class="math-container">$m$</span> such that any decomposable set of <span class="math-container">$n$</span> real numbers has a non-empty subset <span class="math-container">$T$</span> of size at most <span class="math-container">$m$</span> such that <span class="math-container">$\sum T=0$</span>.</p> <p>In the above terms, your question asks to show that <span class="math-container">$z(S)\le 7$</span> for any decomposable set <span class="math-container">$S$</span> consisting of <span class="math-container">$15$</span> <em>integer</em> numbers, and Gjergji Zaimi’s <a href="https://mathoverflow.net/questions/16857/existence-of-a-zero-sum-subset">question</a> asks whether <span class="math-container">$z(n)$</span> is finite (in other words, whether <span class="math-container">$z(n)\le n$</span>) for each natural <span class="math-container">$n$</span>.</p> <p>I think that the proposed approaches were not successful because they don't fully exploit the structure of decomposable sets. This concerns even such promising approaches as the <a href="https://math.stackexchange.com/a/2447/71850">search for cycles</a> by curious and the <a href="https://mathoverflow.net/a/16871/43954">summation</a> by Hsien-Chih Chang 張顯之. In particular, these approaches don't ensure that the zero-sum sequences found consist of distinct elements. We shall study decomposable sets in the note.
</p> <p>Nevertheless, our main result is the following.</p> <p><strong>Proposition 3.</strong> For any <span class="math-container">$n\ge 2$</span>, <span class="math-container">$z(n)\ge \left\lfloor\tfrac n2\right\rfloor$</span>.</p> <p><em>Proof.</em> Given a natural number <span class="math-container">$k$</span> put <span class="math-container">$A=\{1,2,4,\dots, 2^{k-1}\}$</span> and <span class="math-container">$S=A\cup (A-(2^k-1))$</span>. Then <span class="math-container">$|S|=2k$</span>. The set <span class="math-container">$S$</span> is decomposable, because, clearly, each number but <span class="math-container">$1$</span> of the set <span class="math-container">$A$</span> is decomposable, <span class="math-container">$1=2^{k-1}+(2^{k-1}-(2^k-1))$</span>, <span class="math-container">$2^l-(2^k-1)=2^{l-1}+(2^{l-1}-(2^k-1))$</span> for each <span class="math-container">$l=1,\dots, k-1$</span>, and <span class="math-container">$1-(2^k-1)=(2^{k-1}-(2^k-1))+(2^{k-1}-(2^k-1))$</span>.</p> <p>Let <span class="math-container">$T$</span> be a subset of <span class="math-container">$S$</span> with <span class="math-container">$|T|\le k-1$</span> and <span class="math-container">$\sum T=0$</span>. Then clearly <span class="math-container">$k\ge 3$</span> and <span class="math-container">$T$</span> contains at most <span class="math-container">$k-2$</span> positive elements. Since all of them are distinct elements of <span class="math-container">$A$</span>, their sum is at most <span class="math-container">$2^k-2$</span>. On the other hand, the biggest negative element of the set <span class="math-container">$A-(2^k-1)$</span> is <span class="math-container">$-2^{k-1}+1=-(2^k-2)/2$</span>. Thus if <span class="math-container">$T$</span> contains at least two negative elements then <span class="math-container">$\sum T&lt;0$</span>.
If <span class="math-container">$T$</span> contains exactly one negative element then <span class="math-container">$\sum T=0$</span> implies that we have a representation of <span class="math-container">$2^k-1$</span> as a sum of at most <span class="math-container">$k-1$</span> powers of <span class="math-container">$2$</span>, with at most one power used twice. This representation collapses to a sum of at most <span class="math-container">$k-1$</span> distinct powers of <span class="math-container">$2$</span>. If the representation contains a power <span class="math-container">$2^l$</span> with <span class="math-container">$l\ge k$</span> then it is bigger than <span class="math-container">$2^k-1$</span>. Otherwise the sum is at most <span class="math-container">$2^1+2^2+\dots +2^{k-1}=2^k-2&lt;2^k-1$</span>. Thus <span class="math-container">$z(S)\ge k$</span>.</p> <p>A set <span class="math-container">$\{-1,0,1\}$</span> witnesses that <span class="math-container">$z(3)\ge 1$</span>. To construct a decomposable set of size <span class="math-container">$2k+1$</span> for <span class="math-container">$k\ge 2$</span> put <span class="math-container">$S^+=S\cup \{2(-2^k+2)\}$</span>. Since the set <span class="math-container">$S$</span> is decomposable and <span class="math-container">$-2^k+2\in S$</span>, the set <span class="math-container">$S^+$</span> is decomposable too. Similarly to the above we can show that <span class="math-container">$z(S^+)\ge k$</span>. The new case is when <span class="math-container">$T$</span> contains exactly one negative element <span class="math-container">$2(-2^k+2)$</span>. Then <span class="math-container">$\sum T=0$</span> implies that we have a representation of <span class="math-container">$2(2^k-2)$</span> as a sum of at most <span class="math-container">$k-1$</span> powers of <span class="math-container">$2$</span> not bigger than <span class="math-container">$2^{k-1}$</span>, with at most one power used twice.
This sum is at most <span class="math-container">$2^2+2^3+\dots +2^{k-1}+2^{k-1}=2^k-4+2^{k-1}&lt;2(2^k-2).$</span> <span class="math-container">$\square$</span></p> <p>We conjecture that the lower bound in Proposition 3 is tight. This conjecture is confirmed for small <span class="math-container">$n$</span> by the problem from your question, and in the note we proved it for <span class="math-container">$n\le 5$</span>. I’m going to finish my draft proofs there, showing that the conjecture also holds for <span class="math-container">$n=6$</span> and <span class="math-container">$7$</span>. Of course, this is not a big deal, but Tao Te Ching teaches that <a href="https://en.wikipedia.org/wiki/A_journey_of_a_thousand_miles_begins_with_a_single_step" rel="nofollow noreferrer">a journey of a thousand miles begins with a single step</a>, so let's start. I hope that we’ll be able to continue the journey. </p> <p><strong>Update.</strong> Taras Banakh proved that any finite decomposable subset <span class="math-container">$S$</span> of an Abelian group contains two non-empty subsets <span class="math-container">$A$</span> and <span class="math-container">$B$</span> of <span class="math-container">$S$</span> such that <span class="math-container">$\sum A+\sum B=0$</span>. On the other hand, I found that a counterpart of Proposition 3 does not hold for Abelian groups. Namely, given a natural number <span class="math-container">$n$</span> let <span class="math-container">$S=\{1,2,4,\dots, 2^{n-1}\}$</span> be a subset of a group <span class="math-container">$\Bbb Z_{2^n-1}$</span>. Since <span class="math-container">$2^i+2^i=2^{i+1}$</span> for each <span class="math-container">$0\le i\le n-2$</span> and <span class="math-container">$2^{n-1}+2^{n-1}\equiv 1\pmod {2^n-1}$</span>, the set <span class="math-container">$S$</span> is decomposable.
On the other hand, for any proper non-empty subset <span class="math-container">$T$</span> of <span class="math-container">$S$</span> we have <span class="math-container">$\sum T\equiv t\pmod {2^n-1}$</span> for some <span class="math-container">$0&lt;t&lt;2^n-1$</span>, so <span class="math-container">$\sum T\not\equiv 0\pmod {2^n-1}$</span>. These results are in our paper, which I am preparing for submission to arXiv; I’ll provide a link to it here soon.</p>
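The construction in the proof of Proposition 3 is small enough to verify by brute force. Here is a sketch in Python (the function names are mine; I read $A$ as the set of powers of $2$ up to $2^{k-1}$, which is what the decompositions in the proof use), shown for $k=4$, so $|S|=8$:

```python
from itertools import combinations

def is_decomposable(S):
    """Each element of S must be a sum of two (not necessarily distinct) elements of S."""
    return all(any(s - a in S for a in S) for s in S)

def min_zero_sum_size(S):
    """Smallest size of a non-empty subset of S with sum 0, or None if there is none."""
    for r in range(1, len(S) + 1):
        if any(sum(T) == 0 for T in combinations(S, r)):
            return r
    return None

k = 4
A = {2**i for i in range(k)}             # {1, 2, 4, 8}
S = A | {a - (2**k - 1) for a in A}      # A together with A - (2^k - 1); |S| = 2k
assert is_decomposable(S)
print(min_zero_sum_size(S))  # 4, matching the lower bound z(2k) >= k
```

(For $k=4$ the witness of size $k$ is $\{-7,1,2,4\}$, and the exhaustive search confirms no smaller non-empty subset sums to zero.)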
linear-algebra
<blockquote> <p>If $A$ and $B$ are two matrices of the same order $n$, then $$ \operatorname{rank} A + \operatorname{rank}B \leq \operatorname{rank} AB + n. $$</p> </blockquote> <p>I don't know how to start proving this inequality. I would be very pleased if someone helps me. Thanks!</p> <p><strong>Edit I.</strong> The rank of $A$ is the same as that of the equivalent matrix $A' =\begin{pmatrix}I_r &amp; 0 \\ 0 &amp; 0\end{pmatrix}$. Analogously for $B$, the ranks of $A$ and $B$ are $r,s\leq n$. Hence, since $\operatorname{rank}AB = \min\{r,s\}$, then $r+s\leq \min\{r,s\} + n$. (This is not correct, since $\operatorname{rank} AB \leq \min\{r,s\}$.)</p> <p><strong>Edit II.</strong> A discussion on the rank of a product of $H_f(A)$ and $H_c(B)$ would correct this, but I don't know how to formalize that $\operatorname{rank}H_f(A) +\operatorname{rank}H_c(B) - n \leq \operatorname{rank}[H_f(A)H_c(B)]$.</p>
<p>Subtract <span class="math-container">$2n$</span> from both sides and then multiply by <span class="math-container">$-1$</span>; this gives the equivalent inequality <span class="math-container">$\def\rk{\operatorname{rank}}(n-\rk A)+(n-\rk B)\geq n-\rk(AB)$</span>. In other words (by the rank-nullity theorem) we must show that the sum of the dimensions of the kernels of <span class="math-container">$A$</span> and of <span class="math-container">$B$</span> is at least the dimension of the kernel of <span class="math-container">$AB$</span>. Intuitively, vectors that get killed by <span class="math-container">$AB$</span> either get killed by <span class="math-container">$B$</span> or (later) by <span class="math-container">$A$</span>, and the dimension of <span class="math-container">$\ker(AB)$</span> therefore cannot exceed <span class="math-container">$\dim\ker A+\dim\ker B$</span>. Here is how to make that precise.</p> <p>One has <span class="math-container">$\ker B\subseteq \ker(AB)$</span>, so if one restricts (the linear map with matrix) <span class="math-container">$B$</span> to <span class="math-container">$\ker(AB)$</span>, giving a map <span class="math-container">$\tilde B:\ker(AB)\to K^n$</span>, one sees that <span class="math-container">$\ker(\tilde B)=\ker(B)$</span> (both inclusions are obvious). Since the image of <span class="math-container">$\tilde B$</span> is contained in <span class="math-container">$\ker A$</span> by the definition of <span class="math-container">$\ker(AB)$</span>, one has <span class="math-container">$\rk\tilde B\leq\dim\ker A$</span>. Now rank-nullity applied to <span class="math-container">$\tilde B$</span> gives <span class="math-container">$$\dim\ker(AB)=\dim\ker\tilde B+\rk\tilde B\leq\dim\ker B+\dim\ker A,$$</span> as desired.</p>
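This inequality is often called Sylvester's rank inequality, and it is easy to sanity-check numerically. A small sketch with NumPy (a check, not a proof; the low-rank factored $A$ makes the inequality non-trivial):

```python
import numpy as np

rng = np.random.default_rng(0)
rank = np.linalg.matrix_rank

for _ in range(500):
    n = int(rng.integers(1, 6))
    r = int(rng.integers(1, n + 1))
    # Build A from thin integer factors so it is often rank-deficient.
    A = rng.integers(-3, 4, (n, r)) @ rng.integers(-3, 4, (r, n))
    B = rng.integers(-3, 4, (n, n))
    assert rank(A) + rank(B) <= rank(A @ B) + n
print("rank(A) + rank(B) <= rank(AB) + n held in every trial")
```

Equality is attained, e.g. for $A=\operatorname{diag}(1,0)$ and $B=\operatorname{diag}(0,1)$ with $n=2$, where $AB=0$.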
<p>As left or right multiplications by invertible matrices (and in particular, elementary matrices) don't change the rank of a matrix, we may assume WLOG that $A=\pmatrix{I_r&amp;0\\ 0&amp;0}$. Therefore, \begin{align*} \operatorname{rank}(A)+\operatorname{rank}(B) &amp;=\operatorname{rank}(A)+\operatorname{rank}(AB+(I-A)B)\\ &amp;\le\operatorname{rank}(A)+\operatorname{rank}(AB)+\operatorname{rank}((I-A)B)\\ &amp;\le r+\operatorname{rank}(AB)+(n-r)\\ &amp;=\operatorname{rank}(AB)+n. \end{align*} The last but one line is due to the fact that the first $r$ rows of $(I-A)B$ are zero.</p>
number-theory
<p>The Fibonacci sequence has always fascinated me because of its beauty. It was in high school that I was able to understand how the ratio between 2 consecutive terms of a purely integer sequence came to be a beautiful irrational number.</p> <p>So I wondered yesterday what would happen if, instead of 2 terms, we kept 3 terms. So I wrote a Python program to calculate the ratio. At the 10000th term it came to be close to 1.839...</p> <p>After some research on OEIS and Wikipedia, I found that the series is popular and is known as the tribonacci sequence. But what surprised me the most was the exact ratio given on <a href="https://en.wikipedia.org/wiki/Generalizations_of_Fibonacci_numbers#Tribonacci_numbers" rel="nofollow noreferrer">this link</a>.</p> <blockquote> <p>The <strong>tribonacci constant</strong> <span class="math-container">$$\frac{1+\sqrt[3]{19+3\sqrt{33}} + \sqrt[3]{19-3\sqrt{33}}}{3} = \frac{1+4\cosh\left(\frac{1}{3}\cosh^{-1}\frac{19}{8}\right)}{3} \approx 1.83928675$$</span> (sequence <a href="https://oeis.org/A058265" rel="nofollow noreferrer">A058265</a> in the <a href="https://en.wikipedia.org/wiki/On-Line_Encyclopedia_of_Integer_Sequences" rel="nofollow noreferrer">OEIS</a>)</p> </blockquote> <p>I wonder how a sequence with nothing but natural numbers leads us to non-Euclidean geometry. I wonder if someone could tell me how these two are related.</p> <p>Note: I don't actually want the exact solution, which would be extremely difficult to understand for a high schooler like me; I just want to know if there is a way to connect number theory and non-Euclidean geometry.</p>
<p>Similar to De Moivre's formula:</p> <p><span class="math-container">$$\cos nx \pm i\sin nx = (\cos x\pm i\sin x)^n$$</span></p> <p>there is the <em>hyperbolic</em> <a href="https://en.wikipedia.org/wiki/De_Moivre%27s_formula#Hyperbolic_trigonometry" rel="noreferrer">De Moivre formula</a>:</p> <p><span class="math-container">$$\cosh nx \pm \sinh nx = (\cosh x\pm\sinh x)^n$$</span></p> <p>which means this: if you can represent a real number <span class="math-container">$a$</span> as <span class="math-container">$a=\cosh x\pm\sinh x$</span>, then <span class="math-container">$\sqrt[n]{a}=\cosh (x/n)\pm\sinh (x/n)$</span>. In other words, hyperbolic trigonometric functions can help us <em>exponentiate</em> and <em>take roots</em>. (Note the &quot;ordinary&quot; trigonometric functions can do the same - for roots of complex numbers.)</p> <p>In this case, let's take <span class="math-container">$x=\pm\cosh^{-1}\left(2+\frac{3}{8}\right)$</span> so that <span class="math-container">$\cosh x=2\frac{3}{8}=\frac{19}{8}$</span>. This (from well-known identity <span class="math-container">$\cosh^2x-\sinh^2x=1$</span>) gives <span class="math-container">$\sinh x=\pm\frac{3\sqrt{33}}{8}$</span>. Now, take <span class="math-container">$a=\frac{1}{8}(19\pm 3\sqrt{33})=\cosh x\pm \sinh x$</span>. All that is left is to apply the hyperbolic De Moivre's formula with <span class="math-container">$n=3$</span> to take the cube root and prove that the formula from the Wikipedia article you have cited is correct.</p>
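All three expressions can be compared numerically (an illustration of the identities above, not a proof); note that $19-3\sqrt{33}\approx 1.77>0$, so taking the real cube root is unproblematic:

```python
# Compare the radical form, the cosh form, and the limit of ratios of
# consecutive tribonacci numbers; all three should give the same constant.
import math

radical = (1 + (19 + 3 * math.sqrt(33)) ** (1 / 3)
             + (19 - 3 * math.sqrt(33)) ** (1 / 3)) / 3
cosh_form = (1 + 4 * math.cosh(math.acosh(19 / 8) / 3)) / 3

a, b, c = 0, 0, 1            # tribonacci: each term is the sum of the last three
for _ in range(100):
    a, b, c = b, c, a + b + c
ratio = c / b                # ratio of consecutive terms
```

The constant is also the real root of $x^3 = x^2 + x + 1$, which the last assertion below verifies.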
<blockquote> <p>I wonder how a sequence with nothing but natural numbers leads us to trigonometry. I wonder if someone could tell me how these two are related.</p> </blockquote> <p>Briefly, the ratio between consecutive terms of the Tribonacci sequence</p> <p>approaches the real root of <span class="math-container">$x^3-x^2-x=1$</span></p> <p>(like that of the Fibonacci sequence approaches the positive root of <span class="math-container">$x^2-x=1$</span>),</p> <p>and the solution of that <a href="https://en.wikipedia.org/wiki/Cubic_equation#Trigonometric_and_hyperbolic_solutions" rel="noreferrer">cubic equation</a> can be expressed in terms of hyperbolic functions.</p>
linear-algebra
<p>Where does the definition of the determinant come from, and is the definition in terms of permutations the first and basic one? What is the deep reason for giving such a definition in terms of permutations?</p> <p><span class="math-container">$$ \text{det}(A)=\sum_{p}\sigma(p)a_{1p_1}a_{2p_2}...a_{np_n}. $$</span></p> <p>I have found this one useful:</p> <p><a href="http://www-igm.univ-mlv.fr/~al/Classiques/Muir/History_5/VOLUME5_TEXT.PDF" rel="nofollow noreferrer">Thomas Muir, <em>Contributions to the History of Determinants 1900-1920</em></a>.</p>
<p>This is only one of many possible definitions of the determinant.</p> <p>A more "immediately meaningful" definition could be, for example, to define the determinant as the unique function on $\mathbb R^{n\times n}$ such that</p> <ul> <li>The identity matrix has determinant $1$.</li> <li>Every singular matrix has determinant $0$.</li> <li>The determinant is linear in each column of the matrix separately.</li> </ul> <p>(Or the same thing with rows instead of columns).</p> <p>While this seems to connect to high-level properties of the determinant in a cleaner way, it is only half a definition because it requires you to prove that a function with these properties <em>exists</em> in the first place and <em>is unique</em>.</p> <p>It is technically cleaner to choose the permutation-based definition because it is obvious that it defines <em>something</em>, and then afterwards prove that the thing it defines has all of the high-level properties we're <em>really</em> after.</p> <p>The permutation-based definition is also very easy to generalize to settings where the matrix entries are not real numbers (e.g. matrices over a general commutative ring) -- in contrast, the characterization above does not generalize easily without a close study of whether our existence and uniqueness proofs will still work with a new scalar ring.</p>
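For tiny matrices the permutation definition can be implemented verbatim, which also lets one check the characterizing properties above on examples (a didactic sketch; it does $n!$ work, so it is only usable for very small $n$):

```python
# The Leibniz/permutation definition of the determinant, implemented
# directly as a sum over all n! permutations.
import math
from itertools import permutations

def perm_sign(p):
    # sign = (-1)^(number of inversions)
    inversions = sum(p[i] > p[j]
                     for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inversions % 2 else 1

def det_leibniz(a):
    n = len(a)
    return sum(perm_sign(p) * math.prod(a[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# two of the characterizing properties, on small examples:
identity_ok = det_leibniz([[1, 0], [0, 1]]) == 1   # identity has determinant 1
singular_ok = det_leibniz([[1, 2], [1, 2]]) == 0   # repeated row: singular, det 0
```

Because the implementation only uses ring operations (sums and products), it works unchanged for integer matrices, a point the answer makes about generalizing to commutative rings.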
<p>The amazing fact is that it seems matrices were developed to study determinants. I'm not sure, but I think the "formula" definition of the determinant you have there is known as the Leibnitz formula. I am going to quote some lines from the following source: <a href="http://www.jstor.org/stable/2686426?seq=1#page_scan_tab_contents">Tucker, 1993</a>:</p> <blockquote> <p>Matrices and linear algebra did not grow out of the study of coefficients of systems of linear equations, as one might guess. Arrays of coefficients led mathematicians to develop determinants, not matrices. Leibnitz, co-inventor of calculus, used determinants in 1693 about one hundred and fifty years before the study of matrices in their own right. Cramer presented his determinant-based formula for solving systems of linear equations in 1750. The first implicit use of matrices occurred in Lagrange's work on bilinear forms in the late 18th century.</p> </blockquote> <p>--</p> <blockquote> <p>In 1848, J. J. Sylvester introduced the term "matrix," the Latin word for womb, as a name for an array of numbers. He used womb, because he viewed a matrix as a generator of determinants. That is, every subset of k rows and k columns in a matrix generated a determinant (associated with the submatrix formed by those rows and columns).</p> </blockquote> <p>You would probably have to dig (historical texts, articles) to find out exactly why Leibnitz devised the definition; most probably he had some hunch/intuition that it could lead to some breakthroughs in understanding the underlying connection between coefficients and the solution of a system of equations...</p>
game-theory
<p>You're given a list of $22$ points in $[0,1]$ (not necessarily distinct), and you're asked to select, at every iteration, $2$ points to be substituted by their midpoint. After $20$ iterations, you should end up with $2$ points. Is there a selection strategy that leads to $2$ points that are at most $10^{-3}$ apart, independently of the distribution of the initial list? For $n$ starting points, what would be the optimal distance between the final $2$ points that one could achieve?</p>
<p>Consider the $n$-tuples ${\bf a}_n:=(0,0,\ldots,0,1)\in[0,1]^n$. I claim that for these ${\bf a}_n$ the optimal final distance $d_n$ is given by $$d_n={1\over 2^{n-2}}\ .$$ <em>Proof.</em> This is certainly true for $n=2$. Assume that it is true for an $n\geq2$, and consider ${\bf a}_{n+1}$. At the first step of the process we can either average a $0$ with $1$, or average two zeros. Using the first option we arrive at ${\bf a}_n'=(0,0,\ldots,0,{1\over2})\in[0,1]^n$, and using the second option we arrive at ${\bf a}_n$. It follows that $$d_{n+1}=\min\bigl\{{1\over2}d_n,d_n\bigr\}={1\over2}d_n\ .$$ Therefore the best "universal" constant $\delta_n$ is $\geq{1\over 2^{n-2}}$. It is easily checked that $\delta_3={1\over2}$ (here the optimal strategy is to first average the extreme $x_k$). The following figure shows the optimal end result for the quadruple $(0,x,y,1)$. We can learn two things from this figure: When $n=4$ then we can always obtain a final difference $d\leq0.25={1\over 2^2}$, which implies $\delta_4={1\over4}$, and, more important: Things get <em>very complicated</em> with increasing $n$.</p> <p><img src="https://i.sstatic.net/1FWT4.jpg" alt="enter image description here"></p> <p>For $n=5$ one cannot draw such a figure. Instead one can do the following: Denote by $g(x,y,z)$ the optimal end result for the quintuple $(0,x,y,z,1)$. The following figure shows the $121$ graphs of the functions $$g\left({j\over10},{k\over10},z\right)\qquad(0\leq z\leq 1)$$ for $0\leq j\leq 10$, $0\leq k\leq 10$. The figure supports the conjecture $\delta_5={1\over 8}$. Exact computation gives $$g(0,0,0)={1\over8},\qquad g\left(0,0,{3\over4}\right)={1\over8}\ .$$ <img src="https://i.sstatic.net/ONQyJ.jpg" alt="enter image description here"></p>
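For small $n$ the optimal final gap can be found by exhaustive search over all pairing strategies, in exact rational arithmetic; this confirms $d_3={1\over2}$, $d_4={1\over4}$, $d_5={1\over8}$ for the tuples $(0,\ldots,0,1)$ (a brute-force sketch, feasible only for small $n$ since the state space explodes):

```python
# Exhaustive search for the optimal final gap: repeatedly pick two points,
# replace them by their midpoint, and minimize the final |difference|.
from fractions import Fraction
from functools import lru_cache
from itertools import combinations

@lru_cache(maxsize=None)
def best_gap(points):
    if len(points) == 2:
        return abs(points[0] - points[1])
    best = None
    for i, j in combinations(range(len(points)), 2):
        rest = [p for k, p in enumerate(points) if k != i and k != j]
        nxt = tuple(sorted(rest + [(points[i] + points[j]) / 2]))
        g = best_gap(nxt)
        if best is None or g < best:
            best = g
    return best

def tup(*xs):
    # sorted tuple of exact rationals, so memoized states coincide
    return tuple(sorted(Fraction(x) for x in xs))
```

Using `Fraction` keeps every midpoint exact, so the results are provably the dyadic rationals claimed in the answer rather than floating-point approximations.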
<p>This is not a complete answer but just an example in which the distance between two final points is always <em>greater</em> than $\delta_n=\frac{1}{2^{n-2}}$.</p> <p>In fact, let $n$ be an even number. Consider an n-tuple $(0,\ldots,0,1-\varepsilon,1)$ with $\varepsilon=\frac{1}{2^{n/2-2}}$. Note that the sum of all numbers after $i$th iteration is at least $(2-\varepsilon)/2^i$; therefore, two final points cannot be closer than $(2-\varepsilon)/2^{n-2}=(2+o(1))\delta_n$ if in some iteration we take the midpoint of two nonzero points.</p> <p>Therefore, what we have to do on every move is <em>either</em> to cancel one of the zeros <em>or</em> to divide one of nonzero numbers by two. Then, we end up with two points $\frac{1}{2^p}$ and $\frac{1-\varepsilon}{2^q}$ with $p+q\leqslant n-2$. For sufficiently large $n$, the minimum of $\left|\frac{1}{2^p}-\frac{1-\varepsilon}{2^q}\right|$ is attained when $p=q=n/2-1$, and this minimum equals $\frac{\varepsilon}{2^{n/2-1}}=\frac{1}{2^{n-3}}=2\delta_n$.</p> <p>I hope this technique can lead to a much better lower bound being generalized to deal with the case of many nonzero points.</p> <p>UPD1. Now I noted that this answer exploits the same idea as Erik's comment on Christian Blatter's answer.</p> <p>UPD2. As TonyK pointed out in the comment, the tuple $(0,0,0,0,5/7,1)$ has a lower bound of $1/14$ for distance between final points.</p>
probability
<p>If we have a sequence of random variables $X_1,X_2,\ldots$ that converges in distribution to $X$, i.e. $X_n \rightarrow_d X$, is $$ \lim_{n \to \infty} E(X_n) = E(X) $$ correct?</p> <p>I know that convergence in distribution implies $E(g(X_n)) \to E(g(X))$ when $g$ is a bounded continuous function. Can we apply this property here?</p>
<p>With your assumptions the best you can get is via Fatou's Lemma: $$\mathbb{E}[|X|]\leq \liminf_{n\to\infty}\mathbb{E}[|X_n|]$$ (where you used the continuous mapping theorem to get that $|X_n|\Rightarrow |X|$).</p> <p>For a "positive" answer to your question: you need the sequence $(X_n)$ to be uniformly integrable: $$\lim_{\alpha\to\infty} \sup_n \int_{|X_n|&gt;\alpha}|X_n|d\mathbb{P}= \lim_{\alpha\to\infty} \sup_n \mathbb{E} [|X_n|1_{|X_n|&gt;\alpha}]=0.$$ Then, one gets that $X$ is integrable and $\lim_{n\to\infty}\mathbb{E}[X_n]=\mathbb{E}[X]$.</p> <p>As a remark, to get uniform integrability of $(X_n)_n$ it suffices to have for example: $$\sup_n \mathbb{E}[|X_n|^{1+\varepsilon}]&lt;\infty,\quad \text{for some }\varepsilon&gt;0.$$</p>
<p>Try $\mathrm P(X_n=2^n)=1/n$, $\mathrm P(X_n=0)=1-1/n$.</p>
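Made concrete (exact computation, no simulation needed): $P(X_n\neq 0)=1/n\to 0$, so $X_n$ converges in distribution to the constant $0$, yet $E(X_n)=2^n/n\to\infty$:

```python
# X_n takes the value 2^n with probability 1/n and 0 otherwise:
# it converges to 0 in distribution while its mean blows up.
def prob_nonzero(n):
    return 1 / n           # P(X_n != 0), which tends to 0

def mean(n):
    return 2 ** n / n      # E[X_n] = 2^n * (1/n), which tends to infinity

ns = [10, 100, 1000]
probs = [prob_nonzero(n) for n in ns]
means = [mean(n) for n in ns]
```

The vanishing probabilities show the distributional limit is $0$ (whose mean is $0$), while the means diverge, so the sequence cannot be uniformly integrable.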
linear-algebra
<p>Multiplication of matrices &mdash; taking the dot product of the $i$th row of the first matrix and the $j$th column of the second to yield the $ij$th entry of the product &mdash; is not a very intuitive operation: if you were to ask someone how to multiply two matrices, he probably would not think of that method. Of course, it turns out to be very useful: matrix multiplication is precisely the operation that represents composition of transformations. But it's not intuitive. <strong>So my question is where it came from. Who thought of multiplying matrices in that way, and why?</strong> (Was it perhaps multiplication of a matrix and a vector first? If so, who thought of multiplying <em>them</em> in that way, and why?) My question stands no matter whether matrix multiplication was done this way only after it was used as a representation of composition of transformations, or whether, on the contrary, matrix multiplication came first. (Again, I'm not asking about the <em>utility</em> of multiplying matrices as we do: this is clear to me. I'm asking a question about history.)</p>
<p>Matrix multiplication is a symbolic way of substituting one linear change of variables into another one. If $x' = ax + by$ and $y' = cx+dy$, and $x'' = a'x' + b'y'$ and $y'' = c'x' + d'y'$ then we can plug the first pair of formulas into the second to express $x''$ and $y''$ in terms of $x$ and $y$: $$ x'' = a'x' + b'y' = a'(ax + by) + b'(cx+dy) = (a'a + b'c)x + (a'b + b'd)y $$ and $$ y'' = c'x' + d'y' = c'(ax+by) + d'(cx+dy) = (c'a+d'c)x + (c'b+d'd)y. $$ It can be tedious to keep writing the variables, so we use arrays to track the coefficients, with the formulas for $x'$ and $x''$ on the first row and for $y'$ and $y''$ on the second row. The above two linear substitutions coincide with the matrix product $$ \left( \begin{array}{cc} a'&amp;b'\\c'&amp;d' \end{array} \right) \left( \begin{array}{cc} a&amp;b\\c&amp;d \end{array} \right) = \left( \begin{array}{cc} a'a+b'c&amp;a'b+b'd\\c'a+d'c&amp;c'b+d'd \end{array} \right). $$ So matrix multiplication is just a <em>bookkeeping</em> device for systems of linear substitutions plugged into one another (order matters). The formulas are not intuitive, but it's nothing other than the simple idea of combining two linear changes of variables in succession.</p> <p>Matrix multiplication was first defined explicitly in print by Cayley in 1858, in order to reflect the effect of composition of linear transformations. See paragraph 3 at <a href="https://web.archive.org/web/20120910034016/http://darkwing.uoregon.edu/~vitulli/441.sp04/LinAlgHistory.html">http://darkwing.uoregon.edu/~vitulli/441.sp04/LinAlgHistory.html</a>. However, the idea of tracking what happens to coefficients when one linear change of variables is substituted into another (which we view as matrix multiplication) goes back further. 
For instance, the work of number theorists in the early 19th century on binary quadratic forms $ax^2 + bxy + cy^2$ was full of linear changes of variables plugged into each other (especially linear changes of variable that we would recognize as coming from ${\rm SL}_2({\mathbf Z})$). For more on the background, see the paper by Thomas Hawkins on matrix theory in the 1974 ICM. Google "ICM 1974 Thomas Hawkins" and you'll find his paper among the top 3 hits.</p>
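The bookkeeping can be checked on a concrete pair of $2\times 2$ substitutions: plugging one change of variables into the other gives, entry by entry, exactly the matrix product (a plain-Python sketch with arbitrarily chosen coefficients):

```python
# Substituting one 2x2 linear change of variables into another agrees,
# entry by entry, with the matrix product.
def mat_product(m2, m1):
    # entry (i, j) of m2 @ m1
    return [[sum(m2[i][k] * m1[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def substitute(m, v):
    # apply the change of variables m to the pair v = (x, y)
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

m1 = [[1, 2], [3, 4]]     # (x', y') in terms of (x, y)
m2 = [[0, 1], [5, -1]]    # (x'', y'') in terms of (x', y')
prod = mat_product(m2, m1)
v = (7, -2)
direct = substitute(prod, v)                 # one combined substitution
stepwise = substitute(m2, substitute(m1, v)) # two successive substitutions
```

That `direct` and `stepwise` agree for every `v` is precisely the statement that matrix multiplication records composition of linear substitutions, with order mattering.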
<p>Here is an answer directly reflecting the historical perspective from the paper <em>Memoir on the theory of matrices</em> By Authur Cayley, 1857. This paper is available <a href="https://ia600701.us.archive.org/20/items/philtrans05474612/05474612.pdf" rel="nofollow noreferrer">here</a>.</p> <p>This paper is credited with &quot;containing the first abstract definition of a matrix&quot; and &quot;a matrix algebra defining addition, multiplication, scalar multiplication and inverses&quot; (<a href="http://www-history.mcs.st-and.ac.uk/history/HistTopics/Matrices_and_determinants.html" rel="nofollow noreferrer">source</a>).</p> <p>In this paper a nonstandard notation is used. I will do my best to place it in a more &quot;modern&quot; (but still nonstandard) notation. The bulk of the contents of this post will come from pages 20-21.</p> <p>To introduce notation, <span class="math-container">$$ (X,Y,Z)= \left( \begin{array}{ccc} a &amp; b &amp; c \\ a' &amp; b' &amp; c' \\ a'' &amp; b'' &amp; c'' \end{array} \right)(x,y,z)$$</span></p> <p>will represent the set of linear functions <span class="math-container">$(ax + by + cz, a'x + b'y + c'z, a''x + b''y + c''z)$</span> which are then called <span class="math-container">$(X,Y,Z)$</span>.</p> <p>Cayley defines addition and scalar multiplication and then moves to matrix multiplication or &quot;composition&quot;. He specifically wants to deal with the issue of:</p> <p><span class="math-container">$$(X,Y,Z)= \left( \begin{array}{ccc} a &amp; b &amp; c \\ a' &amp; b' &amp; c' \\ a'' &amp; b'' &amp; c'' \end{array} \right)(x, y, z) \quad \text{where} \quad (x, y, z)= \left( \begin{array}{ccc} \alpha &amp; \beta &amp; \gamma \\ \alpha' &amp; \beta' &amp; \gamma' \\ \alpha'' &amp; \beta'' &amp; \gamma'' \\ \end{array} \right)(\xi,\eta,\zeta)$$</span></p> <p>He now wants to represent <span class="math-container">$(X,Y,Z)$</span> in terms of <span class="math-container">$(\xi,\eta,\zeta)$</span>. 
He does this by creating another matrix that satisfies the equation:</p> <p><span class="math-container">$$(X,Y,Z)= \left( \begin{array}{ccc} A &amp; B &amp; C \\ A' &amp; B' &amp; C' \\ A'' &amp; B'' &amp; C'' \\ \end{array} \right)(\xi,\eta,\zeta)$$</span></p> <p>He continues to write that the value we obtain is:</p> <p><span class="math-container">$$\begin{align}\left( \begin{array}{ccc} A &amp; B &amp; C \\ A' &amp; B' &amp; C' \\ A'' &amp; B'' &amp; C'' \\ \end{array} \right) &amp;= \left( \begin{array}{ccc} a &amp; b &amp; c \\ a' &amp; b' &amp; c' \\ a'' &amp; b'' &amp; c'' \end{array} \right)\left( \begin{array}{ccc} \alpha &amp; \beta &amp; \gamma \\ \alpha' &amp; \beta' &amp; \gamma' \\ \alpha'' &amp; \beta'' &amp; \gamma'' \\ \end{array} \right)\\[.25cm] &amp;= \left( \begin{array}{ccc} a\alpha+b\alpha' + c\alpha'' &amp; a\beta+b\beta' + c\beta'' &amp; a\gamma+b\gamma' + c\gamma'' \\ a'\alpha+b'\alpha' + c'\alpha'' &amp; a'\beta+b'\beta' + c'\beta'' &amp; a'\gamma+b'\gamma' + c'\gamma'' \\ a''\alpha+b''\alpha' + c''\alpha'' &amp; a''\beta+b''\beta' + c''\beta'' &amp; a''\gamma+b''\gamma' + c''\gamma''\end{array} \right)\end{align}$$</span></p> <p>This is the standard definition of matrix multiplication. I must believe that matrix multiplication was defined to deal with this specific problem. The paper continues to mention several properties of matrix multiplication such as non-commutativity, composition with unity and zero and exponentiation.</p> <p>Here is the written rule of composition:</p> <blockquote> <p>Any line of the compound matrix is obtained by combining the corresponding line of the first component matrix successively with the several columns of the second matrix (p. 21)</p> </blockquote>
number-theory
<p>I was looking at a list of primes. I noticed that $ \frac{AM (p_1, p_2, \ldots, p_n)}{p_n}$ seemed to converge.</p> <p>This led me to try $ \frac{GM (p_1, p_2, \ldots, p_n)}{p_n}$ which also seemed to converge.</p> <p>I did a quick Excel graph and regression and found the former seemed to converge to $\frac{1}{2}$ and latter to $\frac{1}{e}$. As with anything related to primes, no easy reasoning seemed to point to those results (however, for all natural numbers it was trivial to show that the former asymptotically tended to $\frac{1}{2}$).</p> <p>Are these observations correct and are there any proofs towards:</p> <p>$$ { \lim_{n\to\infty} \left( \frac{AM (p_1, p_2, \ldots, p_n)}{p_n} \right) = \frac{1}{2} \tag1 } $$</p> <p>$$ { \lim_{n\to\infty} \left( \frac{GM (p_1, p_2, \ldots, p_n)}{p_n} \right) = \frac{1}{e} \tag2 } $$</p> <p>Also, does the limit $$ { \lim_{n\to\infty} \left( \frac{HM (p_1, p_2, \ldots, p_n)}{p_n} \right) \tag3 } $$ exist?</p>
<p>Your conjecture for GM was proved in 2011 in the short paper <em><a href="http://nntdm.net/papers/nntdm-17/NNTDM-17-2-01-03.pdf" rel="noreferrer">On a limit involving the product of prime numbers</a></em> by József Sándor and Antoine Verroken.</p> <blockquote> <p><strong>Abstract</strong>. Let $p_k$ denote the $k$th prime number. The aim of this note is to prove that the limit of the sequence $(p_n / \sqrt[n]{p_1 \cdots p_n})$ is $e$.</p> </blockquote> <p>The authors obtain the result based on the prime number theorem, i.e., $$p_n \approx n \log n \quad \textrm{as} \ n \to \infty$$ as well as an inequality with Chebyshev's function $$\theta(x) = \sum_{p \le x}\log p$$ where $p$ are primes less than $x$.</p>
<p>We can use the standard asymptotic for the $n$th prime (a consequence of the prime number theorem), $$p_n \approx n \log n,$$ and plug it into your expressions. For the arithmetic one, this becomes $$\lim_{n \to \infty} \frac {\sum_{i=1}^n p_i}{np_n}=\lim_{n \to \infty} \frac {\sum_{i=1}^n i\log(i)}{np_n}=\lim_{n \to \infty} \frac {\sum_{i=1}^n \log(i^i)}{np_n}\\=\lim_{n \to \infty} \frac {\log\prod_{i=1}^n i^i}{np_n}=\lim_{n \to \infty}\frac {\log(H(n))}{n^2\log(n)}$$ where $H(n)$ is the <a href="http://mathworld.wolfram.com/Hyperfactorial.html" rel="noreferrer">hyperfactorial function</a>. We can use the expansion given on the MathWorld page to get $$\log H(n)\approx \log A -\frac {n^2}4+\left(\frac {n(n+1)}2+\frac 1{12}\right)\log (n)$$ and the limit is duly $\frac 12$.</p> <p>I didn't find a nice expression for the product of the primes.</p>
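Both limits can also be checked numerically; convergence is slow, so this is evidence rather than proof. The sieve size below uses Rosser's bound $p_n < n(\log n + \log\log n)$ to guarantee enough primes:

```python
# AM(p_1..p_n)/p_n should drift toward 1/2 and GM(p_1..p_n)/p_n toward 1/e
# as n grows; compare n = 100 against n = 10000.
import math

def first_primes(n):
    limit = int(n * (math.log(n) + math.log(math.log(n)))) + 10
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i in range(limit + 1) if sieve[i]][:n]

def mean_ratios(n):
    ps = first_primes(n)
    am = sum(ps) / n
    gm = math.exp(sum(math.log(p) for p in ps) / n)  # GM via log-sum
    return am / ps[-1], gm / ps[-1]

am_small, gm_small = mean_ratios(100)
am_big, gm_big = mean_ratios(10000)
```

At $n=10000$ the ratios are still a few percent away from $1/2$ and $1/e$, which matches how slowly $p_n/(n\log n)$ itself converges.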
linear-algebra
<p>I have a very simple question that can be stated without any proof. Are all eigenvectors, of any matrix, always orthogonal? I am trying to understand principal components and it is crucial for me to see the basis of eigenvectors.</p>
<p>In general, for any matrix, the eigenvectors are NOT always orthogonal. But for a special type of matrix, a symmetric matrix, the eigenvalues are always real and eigenvectors corresponding to distinct eigenvalues are always orthogonal. If an eigenvalue is repeated, an orthogonal basis for its eigenspace can still be chosen using <a href="https://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process" rel="noreferrer">Gram-Schmidt</a>.</p> <p>For any matrix <span class="math-container">$M$</span> with <span class="math-container">$n$</span> rows and <span class="math-container">$m$</span> columns, the product of <span class="math-container">$M$</span> with its transpose, either <span class="math-container">$M M'$</span> or <span class="math-container">$M'M$</span>, is a symmetric matrix, so for such a matrix the eigenvectors are always orthogonal.</p> <p>In the application of PCA, a dataset of <span class="math-container">$n$</span> samples with <span class="math-container">$m$</span> features is usually represented as an <span class="math-container">$n\times m$</span> matrix <span class="math-container">$D$</span>. The variances and covariances among those <span class="math-container">$m$</span> features can be represented by the <span class="math-container">$m\times m$</span> matrix <span class="math-container">$D'D$</span>, which is symmetric (the numbers on the diagonal represent the variance of each single feature, and the number in row <span class="math-container">$i$</span>, column <span class="math-container">$j$</span> represents the covariance between features <span class="math-container">$i$</span> and <span class="math-container">$j$</span>). The PCA is applied to this symmetric matrix, so the eigenvectors are guaranteed to be orthogonal.</p>
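A quick numerical illustration (assuming NumPy is available): the eigenvectors of a symmetric matrix such as $D'D$ come out orthonormal, while the eigenvectors of a generic nonsymmetric matrix need not be orthogonal:

```python
# Eigenvectors of the symmetric matrix D'D are orthogonal; eigenvectors of
# a nonsymmetric matrix generally are not.
import numpy as np

rng = np.random.default_rng(1)
d = rng.standard_normal((50, 4))        # toy "data": 50 samples, 4 features
cov = d.T @ d                           # symmetric 4 x 4, as in PCA
_, v = np.linalg.eigh(cov)              # eigh is the right call for symmetric input
sym_orthonormal = np.allclose(v.T @ v, np.eye(4))   # columns form an orthonormal set

a = np.array([[1.0, 1.0], [0.0, 2.0]])  # nonsymmetric counterexample
_, u = np.linalg.eig(a)
overlap = abs(u[:, 0] @ u[:, 1])        # clearly nonzero: not orthogonal
```

For the $2\times 2$ counterexample the unit eigenvectors are $(1,0)$ and $(1,1)/\sqrt 2$, so their overlap is about $0.707$.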
<p>Fix two linearly independent vectors $u$ and $v$ in $\mathbb{R}^2$, and define $Tu=u$ and $Tv=2v$. Then extend $T$ linearly to a map from $\mathbb{R}^2$ to itself. The eigenvectors of $T$ are $u$ and $v$ (or any nonzero multiple of them). Of course, $u$ need not be perpendicular to $v$.</p>
linear-algebra
<p>Wikipedia introduces the vector product for two vectors $\vec a$ and $\vec b$ as $$ \vec a \times\vec b=(\| \vec a\| \|\vec b\|\sin\Theta)\vec n $$ It then mentions that $\vec n$ is the vector normal to the plane made by $\vec a$ and $\vec b$, implying that $\vec a$ and $\vec b$ are 3D vectors. Wikipedia mentions something about a 7D cross product, but I'm not going to pretend I understand that.</p> <p>My idea, which remains unconfirmed with any source, is that a cross product can be thought of a vector which is orthogonal to all vectors which you are crossing. If, and that's a big IF, this is right over all dimensions, we know that for a set of $n-1$ $n$-dimensional vectors, there exists a vector which is orthogonal to all of them. The magnitude would have something to do with the area/volume/hypervolume/etc. made by the vectors we are crossing.</p> <p>Am I right to guess that this multidimensional aspect of cross vectors exists or is that last part utter rubbish?</p>
<p>Yes, you are correct. You can generalize the cross product to $n$ dimensions by saying it is an operation which takes in $n-1$ vectors and produces a vector that is perpendicular to each one. This can be easily defined using the exterior algebra and the Hodge star operator <a href="http://en.wikipedia.org/wiki/Hodge_dual">http://en.wikipedia.org/wiki/Hodge_dual</a>: the cross product of $v_1,\ldots,v_{n-1}$ is then just $*(v_1 \wedge v_2 \wedge \cdots \wedge v_{n-1})$.</p> <p>Then the magnitude of the cross product of $n-1$ vectors is the volume of the higher-dimensional parallelepiped that they determine. Specifying the magnitude and being orthogonal to each of the vectors narrows the possibility to two choices; an orientation picks out one of these.</p>
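Concretely, component $i$ of this $(n-1)$-fold product is the signed $i$th cofactor of the matrix whose rows are the input vectors; here is a NumPy sketch (the sign convention may differ from other sources by an overall orientation):

```python
# Cross product of n-1 vectors in R^n via cofactor expansion along a
# formal first row of unit vectors.
import numpy as np

def cross_nd(*vectors):
    m = np.asarray(vectors, dtype=float)
    k, n = m.shape
    assert k == n - 1, "need exactly n-1 vectors in R^n"
    # component i: (-1)^i times the determinant with column i deleted
    return np.array([(-1) ** i * np.linalg.det(np.delete(m, i, axis=1))
                     for i in range(n)])

u = cross_nd([1, 0, 0, 2], [0, 1, 0, 3], [0, 0, 1, 4])  # a vector in R^4
```

As a sanity check, the squared length of the result equals the Gram determinant $\det(v_i\cdot v_j)$, i.e. the squared volume of the parallelepiped spanned by the inputs (here $30$).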
<p>The answer to this problem is sadly not very well-known, since it depends on what you really want as "cross product".</p> <p>1st solution. An r-ary operation in any dimension with certain axioms. Suppose an r-ary operation on a certain d-dimensional space V. Then, an r-fold d-dimensional "cross product" multilinear operation exists: </p> <p><span class="math-container">$$ (C_1\times C_2\times \ldots\times C_r): V^{dr}=\underbrace{V^d\times \cdots \times V^d}_{r}\longrightarrow V^d$$</span></p> <p>such that <span class="math-container">$$\forall i=1,2,...,r$$</span> we have</p> <p><span class="math-container">$$ (C_1\times C_2\times \ldots\times C_r)\cdot C_i=0$$</span></p> <p><span class="math-container">$$ (C_1\times C_2\times \ldots\times C_r)\cdot (C_1\times C_2\times \ldots\times C_r)=\det (C_i\cdot C_j)$$</span></p> <p>Eckmann (1943) and Whitehead (1963) solved this problem in the continuous case over real euclidean spaces, while Brown and Gray (1967) solved the multilinear case. Moreover, the solution that I am going to provide is valid in any field with characteristic different from 2 and with <span class="math-container">$1\leq r\leq d$</span>. The theorem (due to Eckmann, Whitehead, and Brown-Gray) says that the "generalized cross product" (including the 3d case) exists when:</p> <p>A) <span class="math-container">$d$</span> is even, <span class="math-container">$r = 1$</span>. A cross product exists in every even dimension with one single factor. This can be thought of as some kind of "Wick rotation", if you are aware of that concept, in every even dimension! This cross product with a single factor is a bit non-trivial but easy to understand. </p> <p>B) <span class="math-container">$d$</span> is arbitrary, <span class="math-container">$r = d − 1$</span>. A cross product exists in arbitrary dimension d with (d-1) factors. In other words, an arbitrary (d-1)-fold cross product exists in any dimension. 
Just take the determinant of those (d-1) vectors with the versors <span class="math-container">$(e_1,...,e_r)$</span>!</p> <p>C) <span class="math-container">$d = 3, 7, r = 2$</span>. A 2-fold cross product exists in dimensions 3 and 7. Therefore, the "bilinear" cross product can only exist with two factors in 3D and 7D. The 3D cross product is well known; the 7D cross product can be found (in both coordinate and coordinate-free versions) on Wikipedia. </p> <p>D) <span class="math-container">$d = 8, r = 3$</span>. A 3-fold cross product exists in eight dimensions. That is, there is a non-trivial 3-fold cross product in 8D, i.e., you can build a non-trivial cross product with 3 vectors in 8 dimensions. I haven't seen a coordinate expression for this, but I believe someone has worked one out (I could write a post about it, though, in my blog, in the near future).</p> <p>This happens in euclidean signature; I suppose there are some variants in pseudo-euclidean metrics (and perhaps some non-trivial subcases; I have heard about a non-trivial 3-fold cross product in 4D but I cannot find a reference). Moreover, you can find a similar conclusion in the book Clifford algebras and Spinors by P. Lounesto. Geometric algebra is very useful when handling this vector stuff, since vectors are just a particular grade of a polyvector/cliffor/blade... </p> <p>2nd solution. The cross product can be seen as the dual of the exterior product via <span class="math-container">$ia\times b=a\wedge b$</span> or <span class="math-container">$\star (a\wedge b)=a\times b$</span>. Therefore, the wedge product (exterior product, a bivector) is much more fundamental, since it can be defined in ANY spacetime dimension. You can of course identify bivectors with antisymmetric matrices too, but that is only a realization of the bivector. Indeed, a bivector defines rotations in a given plane, and this is much more useful than thinking in terms of a vector. 
Bivectors are the generators of rotations in N-dimensional spaces (even if you consider multivector or polyvector fields). Thus, the second solution is to consider the exterior product as the true generalization (with two factors!) of the cross product in any spacetime dimension. </p> <p>3rd solution. Use k-forms (k-vectors) and give up the 2-ary condition, assuming a metric can be defined. If you keep wanting a VECTOR, or 1-blade, then use the Hodge star operator: <span class="math-container">$$V=\star(V_1\wedge\cdots \wedge V_{N-1})$$</span> This produces a 1-form (1-vector) from an (N-1)-form or (N-1)-vector. Indeed, if you have a non-factorizable, nonsimple k-vector on an N-dimensional space, the star operator, with a single term, produces an (N-k)-form or (N-k)-vector in general, as said in the references of other users.</p>
logic
<p>I strive to find a statement $S(n)$ with $n \in \mathbb N$ that can be proven to be not generally true despite the fact that no one knows a counterexample, i.e. it holds true for all $n$ ever tested so far. Any help?</p>
<p>Statement: "There are no primes greater than $2^{60,000,000}$". No counterexample is known. A counterexample must exist, since the set of primes is infinite (Euclid).</p>
<p>According the Wikipedia article on <a href="http://en.wikipedia.org/wiki/Skewes%27_number">Skewes' number</a>, there is no explicit value $x$ known (yet) for which $\pi(x)\gt\text{li}(x)$. (There are, however, candidate values, and there are <em>ranges</em> within which counterexamples are known to lie, so this may not be what the OP is after.)</p> <p>Another example along the same lines is the <a href="http://en.wikipedia.org/wiki/Mertens_conjecture">Mertens conjecture</a>.</p> <p>A somewhat silly example would be the statement "$(100!)!+n+2$ is composite." It's clear that $S(n)$ is <em>true</em> for all "small" values of $n\in\mathbb{N}$, and it's clear that it's <em>false</em> in general, but I'd be willing to bet a small sum of money that no counterexample will be found in the next $100!$ years....</p> <p>(Note: I edited in a "$+2$" to make sure that my silly $S(n)$ is clearly true for $n=0$ and $1$ as well as other "small" values of $n$.)</p>
probability
<p>My friend gave me this puzzle:</p> <blockquote> <p>What is the probability that a point chosen at random from the interior of an equilateral triangle is closer to the center than any of its edges? </p> </blockquote> <hr> <p>I tried to draw the picture and I drew a smaller (concentric) equilateral triangle with half the side length. Since area is proportional to the square of side length, this would mean that the smaller triangle had $1/4$ the area of the bigger one. My friend tells me this is wrong. He says I am allowed to use calculus but I don't understand how geometry would need calculus. Thanks for help.</p>
<p>You are right to think of the probabilities as areas, but the set of points closer to the center is not a triangle. It's actually a weird shape with three curved edges, and the curves are parabolas. </p> <hr> <p>The set of points equidistant from a line $D$ and a fixed point $F$ is a parabola. The point $F$ is called the focus of the parabola, and the line $D$ is called the directrix. You can read more about that <a href="https://en.wikipedia.org/wiki/Conic_section#Eccentricity.2C_focus_and_directrix">here</a>.</p> <p>In your problem, if we think of the center of the triangle $T$ as the focus, then we can extend each of the three edges to give three lines that correspond to the directrices of three parabolas. </p> <p>Any point inside the area enclosed by the three parabolas will be closer to the center of $T$ than to any of the edges of $T$. The answer to your question is therefore the area enclosed by the three parabolas, divided by the area of the triangle. </p> <hr> <p>Let's call $F$ the center of $T$. Let $A$, $B$, $C$, $D$, $G$, and $H$ be points as labeled in this diagram:</p> <p><a href="https://i.sstatic.net/MTg9U.png"><img src="https://i.sstatic.net/MTg9U.png" alt="Voronoi diagram for a triangle and its center"></a></p> <p>The probability you're looking for is the same as the probability that a point chosen at random from $\triangle CFD$ is closer to $F$ than to edge $CD$. The green parabola is the set of points that are the same distance to $F$ as to edge $CD$.</p> <p>Without loss of generality, we may assume that point $C$ is the origin $(0,0)$ and that the triangle has side length $1$. Let $f(x)$ be the equation describing the parabola in green. </p> <hr> <p>By similarity, we see that $$\overline{CG}=\overline{GH}=\overline{HD}=1/3$$</p> <p>An equilateral triangle with side length $1$ has area $\sqrt{3}/4$, so that means $\triangle CFD$ has area $\sqrt{3}/12$. 
The sum of the areas of $\triangle CAG$ and $\triangle DBH$ must be four ninths of that, or $\sqrt{3}/27$.</p> <p>$$P\left(\text{point is closer to center}\right) = \displaystyle\frac{\frac{\sqrt{3}}{12} - \frac{\sqrt{3}}{27} - \displaystyle\int_{1/3}^{2/3} f(x) \,\mathrm{d}x}{\sqrt{3}/12}$$</p> <p>We know three points that the parabola $f(x)$ passes through. This lets us create a system of equations with three variables (the coefficients of $f(x)$) and three equations. This gives</p> <p>$$f(x) = \sqrt{3}x^2 - \sqrt{3}x + \frac{\sqrt{3}}{3}$$</p> <p>The <a href="http://goo.gl/kSMPmv">integral of this function from $1/3$ to $2/3$</a> is $$\int_{1/3}^{2/3} \left(\sqrt{3}x^2 - \sqrt{3}x + \frac{\sqrt{3}}{3}\right) \,\mathrm{d}x = \frac{5}{54\sqrt{3}}$$ </p> <hr> <p>This <a href="http://goo.gl/xEFB0s">gives our final answer</a> of $$P\left(\text{point is closer to center}\right) = \boxed{\frac{5}{27}}$$</p>
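As a sanity check of the value $5/27 \approx 0.185$ (my addition, not part of the argument above), here is a short Monte Carlo sketch in Python: it samples uniform points in an equilateral triangle and compares each point's distance to the center against its distance to the nearest edge.

```python
import math
import random

random.seed(0)

# Equilateral triangle with side 1 and vertices (0,0), (1,0), (1/2, sqrt(3)/2);
# its center (centroid) sits at (1/2, sqrt(3)/6).
CX, CY = 0.5, math.sqrt(3) / 6

def dist_to_nearest_edge(x, y):
    # Distances to the three edges, valid for points inside the triangle.
    d_bottom = y
    d_left = (math.sqrt(3) * x - y) / 2
    d_right = (math.sqrt(3) * (1 - x) - y) / 2
    return min(d_bottom, d_left, d_right)

trials = 200_000
count = 0
for _ in range(trials):
    # Uniform sample in the triangle via the standard square-root trick.
    r1, r2 = random.random(), random.random()
    s = math.sqrt(r1)
    x = s * (1 - r2) * 1.0 + s * r2 * 0.5   # vertex (0,0) contributes nothing
    y = s * r2 * (math.sqrt(3) / 2)
    if math.hypot(x - CX, y - CY) < dist_to_nearest_edge(x, y):
        count += 1

estimate = count / trials  # should land near 5/27 ≈ 0.185
```

With 200,000 samples the estimate agrees with $5/27$ to about three decimal places.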
<p>In response to Benjamin Dickman's request for a solution without calculus, referring to dtldarek's nice diagram in Zubin Mukerjee's answer (with all areas relative to that of the triangle $FCD$):</p> <p>The points $A$ and $B$ are one third along the bisectors from $F$, so the triangle $FAB$ has area $\frac19$. The vertex $V$ of the parabola is half-way between $F$ and the side $CD$, so the triangle $VAB$ has width $\frac13$ of $FCD$ and height $\frac16$ of $FCD$ and thus area $\frac1{18}$. By <a href="https://en.wikipedia.org/wiki/The_Quadrature_of_the_Parabola">Archimedes' quadrature of the parabola</a> (long predating the advent of calculus), the area between $AB$ and the parabola is $\frac43$ of the area of $VAB$. Thus the total area in $FCD$ closer to $F$ than to $CD$ is</p> <p>$$ \frac19+\frac43\cdot\frac1{18}=\frac5{27}\;. $$</p> <p>P.S.: Like Dominic108's solution, this is readily generalized to a regular $n$-gon. Let $\phi=\frac\pi n$. Then the condition $FB=BH$, expressed in terms of the height $h$ of triangle $FAB$ relative to that of $FCD$, is</p> <p>$$ \frac h{\cos\phi}=1-h\;,\\ h=\frac{\cos\phi}{1+\cos\phi}\;. $$</p> <p>This is also the width of $FAB$ relative to that of $FCD$. The height of the arc of the parabola between $A$ and $B$ is $\frac12-h$. Thus, the proportion of the area of triangle $FCD$ that's closer to $F$ than to $CD$ is</p> <p>$$ h^2+\frac43h\left(\frac12-h\right)=\frac23h-\frac13h^2=\frac{2\cos\phi(1+\cos\phi)-\cos^2\phi}{3(1+\cos\phi)^2}=\frac13-\frac1{12\cos^4\frac\phi2}\;. $$</p> <p>This doesn't seem to take rational values except for $n=3$ and for $n\to\infty$, where the limit is $\frac13-\frac1{12}=\frac14$, the value for the circle.</p>
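A quick numerical check of this closed form (my addition, not part of the original answer): evaluating $\frac13-\frac1{12\cos^4(\phi/2)}$ with $\phi=\pi/n$ reproduces $5/27$ at $n=3$ and approaches the circle's value $\frac14$ as $n$ grows.

```python
import math

def closer_to_center_fraction(n):
    # Probability for a regular n-gon: 1/3 - 1/(12 * cos^4(phi/2)), phi = pi/n.
    phi = math.pi / n
    return 1 / 3 - 1 / (12 * math.cos(phi / 2) ** 4)

p3 = closer_to_center_fraction(3)         # equilateral triangle
p_big = closer_to_center_fraction(10**6)  # nearly a circle
```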
matrices
<p>Suppose that we have two different discrete signal vectors of $N^\text{th}$ dimension, namely $\mathbf{x}[i]$ and $\mathbf{y}[i]$, each one having a total of $M$ samples/vectors.</p> <p>$\mathbf{x}[m] = [x_{m,1} \,\,\,\,\, x_{m,2} \,\,\,\,\, x_{m,3} \,\,\,\,\, ... \,\,\,\,\, x_{m,N}]^\text{T}; \,\,\,\,\,\,\, 1 \leq m \leq M$<br> $\mathbf{y}[m] = [y_{m,1} \,\,\,\,\, y_{m,2} \,\,\,\,\, y_{m,3} \,\,\,\,\, ... \,\,\,\,\, y_{m,N}]^\text{T}; \,\,\,\,\,\,\,\,\, 1 \leq m \leq M$</p> <p>And I build up a covariance matrix between these signals.</p> <p>$\{C\}_{ij} = E\left\{(\mathbf{x}[i] - \bar{\mathbf{x}}[i])^\text{T}(\mathbf{y}[j] - \bar{\mathbf{y}}[j])\right\}; \,\,\,\,\,\,\,\,\,\,\,\, 1 \leq i,j \leq M $</p> <p>where $E\{\}$ is the "expected value" operator.</p> <p>What is the proof that, for all arbitrary values of the $\mathbf{x}$ and $\mathbf{y}$ vector sets, the covariance matrix $C$ is always positive semi-definite ($C \succeq0$), i.e., not negative definite; all of its eigenvalues are non-negative?</p>
<p>A symmetric matrix $C$ of size $n\times n$ is semi-definite if and only if $u^tCu\geqslant0$ for every $n\times1$ (column) vector $u$, where $u^t$ is the $1\times n$ transposed (line) vector. If $C$ is a covariance matrix in the sense that $C=\mathrm E(XX^t)$ for some $n\times 1$ random vector $X$, then the linearity of the expectation yields that $u^tCu=\mathrm E(Z_u^2)$, where $Z_u=u^tX$ is a real valued random variable, in particular $u^tCu\geqslant0$ for every $u$. </p> <p>If $C=\mathrm E(XY^t)$ for two centered random vectors $X$ and $Y$, then $u^tCu=\mathrm E(Z_uT_u)$ where $Z_u=u^tX$ and $T_u=u^tY$ are two real valued centered random variables. Thus, there is no reason to expect that $u^tCu\geqslant0$ for every $u$ (and, indeed, $Y=-X$ provides a counterexample).</p>
<p>The covariance matrix <strong>C</strong> is calculated by the formula <span class="math-container">$$ \mathbf{C} \triangleq E\{(\mathbf{x}-\bar{\mathbf{x}})(\mathbf{x}-\bar{\mathbf{x}})^T\}. $$</span> We are going to use <a href="https://en.wikipedia.org/wiki/Definite_matrix" rel="nofollow noreferrer">the definition of a positive semi-definite matrix</a>, which says:</p> <blockquote> <p>A real square matrix <span class="math-container">$\mathbf{A}$</span> is positive semi-definite if and only if<br /> <span class="math-container">$\mathbf{b}^T\mathbf{A}\mathbf{b}\succeq0$</span><br /> is true for an arbitrary real column vector <span class="math-container">$\mathbf{b}$</span> of appropriate size.</p> </blockquote> <p>For an arbitrary real vector <strong>u</strong>, we can write <span class="math-container">$$ \begin{array}{rcl} \mathbf{u}^T\mathbf{C}\mathbf{u} &amp; = &amp; \mathbf{u}^TE\{(\mathbf{x}-\bar{\mathbf{x}})(\mathbf{x}-\bar{\mathbf{x}})^T\}\mathbf{u} \\ &amp; = &amp; E\{\mathbf{u}^T(\mathbf{x}-\bar{\mathbf{x}})(\mathbf{x}-\bar{\mathbf{x}})^T\mathbf{u}\} \\ &amp; = &amp; E\{s^2\} \\ &amp; = &amp; \sigma_s^2. \\ \end{array} $$</span> where <span class="math-container">$\sigma_s^2$</span> is the variance of the zero-mean scalar random variable <span class="math-container">$s$</span>, that is, <span class="math-container">$$ s = \mathbf{u}^T(\mathbf{x}-\bar{\mathbf{x}}) = (\mathbf{x}-\bar{\mathbf{x}})^T\mathbf{u}. $$</span> The square of any real number is greater than or equal to zero: <span class="math-container">$$ \sigma_s^2 \ge 0 $$</span> Thus, <span class="math-container">$$ \mathbf{u}^T\mathbf{C}\mathbf{u} = \sigma_s^2 \ge 0, $$</span> which implies that the covariance matrix of any real random vector is always positive semi-definite.</p>
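As a numerical illustration of this answer (my addition, using NumPy): a sample covariance matrix built from real data always comes out positive semi-definite, which we can confirm by checking its eigenvalues and a random quadratic form.

```python
import numpy as np

rng = np.random.default_rng(42)

# 5 correlated random variables, 1000 samples each.
X = rng.standard_normal((5, 1000))
X[1] += 0.5 * X[0]          # introduce some correlation

C = np.cov(X)               # sample covariance matrix, shape (5, 5)

# Every eigenvalue of a covariance matrix is >= 0 (up to rounding).
eigvals = np.linalg.eigvalsh(C)

# Equivalently: u^T C u >= 0 for any real vector u.
u = rng.standard_normal(5)
quad = u @ C @ u
```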
combinatorics
<p>In my discrete mathematics class our notes say that between set <span class="math-container">$A$</span> (having <span class="math-container">$6$</span> elements) and set <span class="math-container">$B$</span> (having <span class="math-container">$8$</span> elements), there are <span class="math-container">$8^6$</span> distinct functions that can be formed, in other words: <span class="math-container">$|B|^{|A|}$</span> distinct functions. But no explanation is offered and I can't seem to figure out why this is true. Can anyone elaborate?</p>
<p>A function on a set involves running the function on <em>every element</em> of the set A, each one producing some result in the set B. So, for the first run, every element of A gets mapped to an element in B. The question becomes, how many different mappings, all using <em>every element</em> of the set A, can we come up with? Take this example, mapping a 2 element set A, to a 3 element set B. There are 9 different ways, all beginning with both 1 <em>and</em> 2, that result in some different combination of mappings over to B.</p> <p><img src="https://i.sstatic.net/zYKzS.png" alt="enter image description here"></p> <p>The number of functions from A to B is |B|^|A|, or $3^2$ = 9.</p>
<p>Let set $A$ have $a$ elements and set $B$ have $b$ elements. Each element in $A$ has $b$ choices to be mapped to. Each such choice gives you a unique function. Since each element has $b$ choices, the total number of functions from $A$ to $B$ is $$\underbrace{b \times b \times b \times \cdots b}_{a \text{ times}} = b^a$$</p>
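The counting argument can be verified by brute force; the sketch below (my addition) enumerates every function from a 2-element set to a 3-element set as a tuple of choices, one per element of $A$.

```python
from itertools import product

A = [1, 2]            # |A| = 2
B = ['a', 'b', 'c']   # |B| = 3

# A function A -> B is one choice from B for each element of A,
# i.e. a tuple of length |A| with entries from B.
functions = list(product(B, repeat=len(A)))

num_functions = len(functions)  # |B| ** |A| = 3**2 = 9
```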
linear-algebra
<p>I would like to have some examples of infinite dimensional vector spaces that help me to break my habit of thinking of <span class="math-container">$\mathbb{R}^n$</span> when thinking about vector spaces.</p>
<ol> <li>$\Bbb R[x]$, the polynomials in one variable.</li> <li>All the continuous functions from $\Bbb R$ to itself.</li> <li>All the differentiable functions from $\Bbb R$ to itself. Generally we can talk about other families of functions which are closed under addition and scalar multiplication.</li> <li>All the infinite sequences over $\Bbb R$.</li> </ol> <p>And many many others.</p>
<ol> <li>The space of continuous functions of compact support on a locally compact space, say $\mathbb{R}$.</li> <li>The space of compactly supported smooth functions on $\mathbb{R}^{n}$.</li> <li>The space of square summable complex sequences, commonly known as $l_{2}$. This is the prototype of all separable Hilbert spaces.</li> <li>The space of all bounded sequences.</li> <li>The set of all linear operators on an infinite dimensional vector space.</li> <li>The space $L^{p}(X)$ where $(X, \mu)$ is a measure space.</li> <li>The set of all Schwartz functions.</li> </ol> <p>These spaces have considerable more structure than just a vector space, in particular they can all be given some norm (in third case an inner product too). They all fall under the umbrella of function spaces.</p>
linear-algebra
<p>What is the "standard basis" for fields of complex numbers?</p> <p>For example, what is the standard basis for $\Bbb C^2$ (two-tuples of the form: $(a + bi, c + di)$)? I know the standard for $\Bbb R^2$ is $((1, 0), (0, 1))$. Is the standard basis exactly the same for complex numbers?</p> <p><strong>P.S.</strong> - I realize this question is very simplistic, but I couldn't find an authoritative answer online.</p>
<p>Just to be clear, by definition, a vector space always comes along with a field of scalars $F$. It's common just to talk about a "vector space" and a "basis"; but if there is possible doubt about the field of scalars, it's better to talk about a "vector space over $F$" and a "basis over $F$" (or an "$F$-vector space" and an "$F$-basis").</p> <p>Your example, $\mathbb{C}^2$, is a 2-dimensional vector space over $\mathbb{C}$, and the simplest choice of a $\mathbb{C}$-basis is $\{ (1,0), (0,1) \}$.</p> <p>However, $\mathbb{C}^2$ is also a vector space over $\mathbb{R}$. When we view $\mathbb{C}^2$ as an $\mathbb{R}$-vector space, it has dimension 4, and the simplest choice of an $\mathbb{R}$-basis is $\{(1,0), (i,0), (0,1), (0,i)\}$.</p> <p>Here's another interesting example, though I'm pretty sure it's not what you were asking about:</p> <p>We can view $\mathbb{C}^2$ as a vector space over $\mathbb{Q}$. (You can work through the definition of a vector space to prove this is true.) As a $\mathbb{Q}$-vector space, $\mathbb{C}^2$ is infinite-dimensional, and you can't write down any nice basis. (The existence of the $\mathbb{Q}$-basis depends on the axiom of choice.)</p>
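To make the dimension count concrete (my addition): a vector in $\mathbb{C}^2$ has two complex coordinates over the $\mathbb{C}$-basis, but four real coordinates over the $\mathbb{R}$-basis $\{(1,0),(i,0),(0,1),(0,i)\}$.

```python
# A vector in C^2, written with Python complex numbers.
v = (2 + 3j, 5 - 1j)

# Coordinates over the C-basis {(1,0), (0,1)}: just the two entries.
c_coords = [v[0], v[1]]

# Coordinates over the R-basis {(1,0), (i,0), (0,1), (0,i)}: four real numbers.
r_coords = [v[0].real, v[0].imag, v[1].real, v[1].imag]

# Reconstruct v from the R-basis expansion to check.
basis = [(1, 0), (1j, 0), (0, 1), (0, 1j)]
recon = (sum(c * b[0] for c, b in zip(r_coords, basis)),
         sum(c * b[1] for c, b in zip(r_coords, basis)))
```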
<p>The "most standard" basis is also $\left\lbrace(1,0),\, (0,1)\right\rbrace$. You just take complex combinations of these vectors. Simple :)</p>
linear-algebra
<p>Is it possible to multiply A[m,n,k] by B[p,q,r]? Does the regular matrix product have a generalized form?</p> <p>I would appreciate it if you could help me find some tutorials online or the mathematical 'word' that means an N-dimensional matrix product.</p> <p>Upd. I'm writing a program that can perform matrix calculations. I created a class called matrix and made it independent of the storage using object-oriented features of C++. But when I started to write this program I thought that there was some general operation to multiply all kinds of arrays (matrices). And my plan was to implement this multiplication (and other operators) and get a generalized class of objects. Since this site is not concerned with programming I didn't post too many technical details earlier. Now I'm not quite sure if that one general procedure exists. Thanks for all the comments.</p>
<p>The general procedure is called <a href="http://en.wikipedia.org/wiki/Tensor_contraction">tensor contraction</a>. Concretely it's given by summing over various indices. For example, just as ordinary matrix multiplication $C = AB$ is given by</p> <p>$$c_{ij} = \sum_k a_{ik} b_{kj}$$</p> <p>we can contract by summing across any index. For example, we can write</p> <p>$$c_{ijlm} = \sum_k a_{ijk} b_{klm}$$</p> <p>which gives a $4$-tensor ("$4$-dimensional matrix") rather than a $3$-tensor. One can also contract twice, for example</p> <p>$$c_{il} = \sum_{j,k} a_{ijk} b_{kjl}$$</p> <p>which gives a $2$-tensor.</p> <p>The abstract details shouldn't matter terribly unless you explicitly want to implement <a href="http://en.wikipedia.org/wiki/Covariance_and_contravariance_of_vectors#Use_in_tensor_analysis">mixed variance</a>, which as far as I know nobody who writes algorithms for manipulating matrices does. </p>
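In code, these contractions are exactly what NumPy's `einsum` computes; the index strings below mirror the summation formulas above (this example is my addition, not part of the answer).

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((2, 3, 4))   # a_{ijk}
b = rng.standard_normal((4, 5, 6))   # b_{klm}

# c_{ijlm} = sum_k a_{ijk} b_{klm}  -> a 4-tensor of shape (2, 3, 5, 6)
c4 = np.einsum('ijk,klm->ijlm', a, b)

# Contracting twice needs compatible index ranges, so use a second factor b_{kjl}.
b2 = rng.standard_normal((4, 3, 5))
# c_{il} = sum_{j,k} a_{ijk} b_{kjl}  -> a 2-tensor of shape (2, 5)
c2 = np.einsum('ijk,kjl->il', a, b2)

# Spot-check one entry of c4 against the explicit sum over k.
manual = sum(a[1, 2, k] * b[k, 3, 4] for k in range(4))
```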
<p>Sorry to revive the thread, but what I found might answer the original question and help others who might stumble into this in the future. This came up for me when I wanted to avoid using for-loops and instead do one big multiplication on 3D matrices. </p> <p>So first, let's look at how matrix multiplication works. Say you have <code>A[m,n]</code> and <code>B[n,p]</code>. One requirement is that the number of columns of A must match the number of rows of B. Then, all you do is iterate over rows of A (i) and columns of B (j) and the common dimension of both (k) (matlab/octave example):</p> <pre><code>m=2;n=3;p=4;A=randn(m,n);B=randn(n,p);
C=zeros(m,p);
for i = 1:m
  for j = 1:p
    for k = 1:n
      C(i,j) = C(i,j) + A(i,k)*B(k,j);
    end
  end
end
C-A*B %to check the code, should output zeros
</code></pre> <p>So the common dimension <code>n</code> got "contracted" I believe (Qiaochu Yuan's answer made so much sense once I started coding it).</p> <p>Now, assuming you want something similar to happen in the 3D case, ie one common dimension to contract, what would you do? Assume you have <code>A[l,m,n]</code> and <code>B[n,p,q]</code>. The requirement of the common dimension is still there - the last one of A must equal the first one of B. Then theoretically (this is just one way to do it and it just makes sense to me, no other foundation for this), n just cancels in <code>LxMxNxNxPxQ</code> and what you get is <code>LxMxPxQ</code>. The result is not even the same kind of creature, it is not 3-dimensional, instead it grew to 4D (just like Qiaochu Yuan pointed out btw). But oh well, how would you compute it? 
Well, just append 2 more for loops to iterate over the new dimensions:</p> <pre><code>l=5;m=2;n=3;p=4;q=6;A=randn(l,m,n);B=randn(n,p,q);
C=zeros(l,m,p,q);
for h = 1:l
  for i = 1:m
    for j = 1:p
      for g = 1:q
        for k = 1:n
          C(h,i,j,g) = C(h,i,j,g) + A(h,i,k)*B(k,j,g);
        end
      end
    end
  end
end
</code></pre> <p>At the heart of it, it is still a row-by-column kind of operation (hence only one dimension "contracts"), just over more data. </p> <p>Now, my real problem was actually <code>A[m,n]</code> and <code>B[n,p,q]</code>, where the creatures came from different dimensions (2D times 3D), but it seems doable nonetheless (after all, matrix times vector is 2D times 1D). So for me the result is <code>C[m,p,q]</code>:</p> <pre><code>m=2;n=3;p=4;q=5;A=randn(m,n);B=randn(n,p,q);
C=zeros(m,p,q);Ct=C;
for i = 1:m
  for j = 1:p
    for g = 1:q
      for k = 1:n
        C(i,j,g) = C(i,j,g) + A(i,k)*B(k,j,g);
      end
    end
  end
end
</code></pre> <p>which checks out against using the full for-loops:</p> <pre><code>for j = 1:p
  for g = 1:q
    Ct(:,j,g) = A*B(:,j,g); %"true", but still uses for-loops
  end
end
C-Ct
</code></pre> <p>but doesn't achieve my initial goal of just calling some built-in matlab function to do the work for me. Still, it was fun to play with this.</p>
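For what it's worth (my addition, not from the answer above), the 2D-times-3D case in the last MATLAB snippet can be done without explicit loops in NumPy, again via a single contracted index:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p, q = 2, 3, 4, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p, q))

# C[i,j,g] = sum_k A[i,k] * B[k,j,g] -- the same triple loop, vectorized.
C = np.einsum('ik,kjg->ijg', A, B)

# Cross-check against the "true" loop over the last axis, as in the MATLAB code.
Ct = np.stack([A @ B[:, :, g] for g in range(q)], axis=2)
```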
linear-algebra
<p>Suppose <span class="math-container">$A \in M_{2n}(\mathbb{R})$</span> and <span class="math-container">$$J=\begin{pmatrix} 0 &amp; E_n\\ -E_n&amp;0 \end{pmatrix}$$</span> where <span class="math-container">$E_n$</span> represents the identity matrix.</p> <p>If <span class="math-container">$A$</span> satisfies <span class="math-container">$$AJA^T=J,$$</span></p> <p>how can one show that <span class="math-container">$$\det(A)=1~?$$</span></p> <p>My approach:</p> <p>I have tried to separate <span class="math-container">$A$</span> into four submatrices:<span class="math-container">$$A=\begin{pmatrix}A_1&amp;A_2 \\A_3&amp;A_4 \end{pmatrix}$$</span> and I must add the assumption that <span class="math-container">$A_1$</span> is invertible. By elementary transformations:<span class="math-container">$$\begin{pmatrix}A_1&amp;A_2 \\ A_3&amp;A_4\end{pmatrix}\rightarrow \begin{pmatrix}A_1&amp;A_2 \\ 0&amp;A_4-A_3A_1^{-1}A_2\end{pmatrix}$$</span></p> <p>we have: <span class="math-container">$$\det(A)=\det(A_1)\det(A_4-A_3A_1^{-1}A_2).$$</span> From<span class="math-container">$$\begin{pmatrix}A_1&amp;A_2 \\ A_3&amp;A_4\end{pmatrix}\begin{pmatrix}0&amp;E_n \\ -E_n&amp;0\end{pmatrix}\begin{pmatrix}A_1&amp;A_2 \\ A_3&amp;A_4\end{pmatrix}^T=\begin{pmatrix}0&amp;E_n \\ -E_n&amp;0\end{pmatrix}.$$</span> we get two equalities:<span class="math-container">$$A_1A_2^T=A_2A_1^T$$</span> and <span class="math-container">$$A_1A_4^T-A_2A_3^T=E_n.$$</span></p> <p>Then <span class="math-container">$$\det(A)=\det(A_1(A_4-A_3A_1^{-1}A_2)^T)=\det(A_1A_4^T-A_1A_2^T(A_1^T)^{-1}A_3^T)=\det(A_1A_4^T-A_2A_1^T(A_1^T)^{-1}A_3^T)=\det(E_n)=1,$$</span></p> <p>but I have no idea how to deal with this problem when <span class="math-container">$A_1$</span> is not invertible.</p>
<p>First, taking the determinant of the condition $$ \det AJA^T = \det J \implies \det A^TA = 1 $$ using that $\det J \neq 0$. This immediately implies $$ \det A = \pm 1$$ if $A$ is real valued. The quickest way, if you know it, to show that the determinant is positive is via the <a href="http://en.wikipedia.org/wiki/Pfaffian#Identities">Pfaffian</a> of the expression $A J A^T = J$. </p>
<p>Let me first restate your question in a somewhat more abstract way. Let $V$ be a finite dimensional real vector space. A symplectic form is a 2-form $\omega\in \Lambda^2(V^\vee)$ which is non-degenerate in the sense that $\omega(x,y)=0$ for all $y\in V$ implies that $x=0$. $V$ together with such a specified nondegenerate 2-form $\omega$ is called a symplectic space. It can be shown that $V$ must be of even dimension, say, $2n$.</p> <p>A linear operator $T:V\to V$ is said to be a symplectic transformation if $\omega(x,y)=\omega(Tx,Ty)$ for all $x,y\in V$. This is the same as saying $T^*\omega=\omega$. What you want to show is that $T$ is orientation preserving. Now I claim that $\omega^n\neq 0$. This can be shown by choosing a basis $\{a_i,b_j|i,j=1,\ldots,n\}$ such that $\omega(a_i,b_j)=\delta_{ij}$ and $\omega(a_i,a_j)=\omega(b_i,b_j)=0$, for all $i,j=1,\ldots,n $. Then $\omega=\sum_ia_i^\vee\wedge b_i^\vee$, where $\{a_i^\vee,b_j^\vee\}$ is the dual basis. We can compute $\omega^n=n!a_1^\vee\wedge b_1^\vee\wedge\dots\wedge a_n^\vee \wedge b_n^\vee$, which is clearly nonzero.</p> <p>Now let me digress to say a word about determinants. Let $W$ be an n-dimensional vector space and $f:W\to W$ be linear. Then we have induced maps $f_*:\Lambda^n(W)\to \Lambda^n(W)$. Since $\Lambda^n(W)$ is 1-dimensional, $f_*$ is multiplication by a number. This is just the determinant of $f$. And the dual map $f^*:\Lambda^n(W^\vee)\to \Lambda^n(W^\vee)$ is also multiplication by the determinant of $f$. </p> <p>Since $T^*(\omega^n)=\omega^n$, we can see from the above argument that $\det(T)=1$. The key point here is that the symplectic form $\omega$ gives a canonical orientation of the space, via the top form $\omega^n$.</p>
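A numerical spot-check of $\det(T)=1$ (my addition; the construction $e^{JS}$ with $S$ symmetric is a standard way to produce symplectic matrices, and the series exponential below is a hand-rolled sketch, adequate only for small $\|JS\|$):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 3

# J in block form [[0, I], [-I, 0]].
I = np.eye(n)
J = np.block([[np.zeros((n, n)), I], [-I, np.zeros((n, n))]])

# If S is symmetric, M = exp(J S) is symplectic: it satisfies M J M^T = J.
R = rng.standard_normal((2 * n, 2 * n))
S = 0.1 * (R + R.T)
X = J @ S

# Truncated power series for the matrix exponential (fine for small ||X||).
M = np.eye(2 * n)
term = np.eye(2 * n)
for k in range(1, 40):
    term = term @ X / k
    M = M + term

symplectic_defect = np.abs(M @ J @ M.T - J).max()
det_M = np.linalg.det(M)
```

Note that $\operatorname{tr}(JS)=0$ when $S$ is symmetric and $J$ antisymmetric, so $\det e^{JS}=e^{\operatorname{tr}(JS)}=1$ exactly, matching the abstract argument.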
game-theory
<p>Alice chose a positive integer $n$ and Bob tries to guess it.</p> <p>In every turn, Bob will guess an integer $x$ $(x&gt;0)$:</p> <ol> <li><p>If $x$ equals $n$, then Alice tells Bob that he found it, and the game ends.</p></li> <li><p>If not, then Alice tells Bob whether $x+n$ is prime or not.</p></li> </ol> <p>How to play this game optimally (that is to say, to find the number in the fewest turns)?</p> <p>You can assume $0&lt; n\le100$ (or any reasonable bound).</p> <p>PS: This game is invented by me.</p>
<p>It's not possible to solve the game in all cases with fewer than 7 guesses, because after 6 guesses you have at least 94 unguessed numbers and 6 bits of information, and $2^6&lt;94.$</p> <p>But actually 7 is not achievable, since at most 26 primes can appear in any given block of 101. Thus the first guess, in the worst case, will have at least 74 remaining numbers, and so at least 8 guesses are required since $2^6&lt;68$.</p> <p>The goal at each step is to find numbers that split the remaining numbers into two sets of roughly equal size. Here are the beginnings of a strategy for 9. Ask about 3. If the number is 3 you're done; if n + 3 is prime it should not be hard to guess the number in 8 more tries.</p> <p>Otherwise 74 numbers remain: 1, 5, 6, 7, 9, 11, 12, 13, 15, 17, 18, 19, 21, 22, 23, 24, 25, 27, 29, 30, 31, 32, 33, 35, 36, 37, 39, 41, 42, 43, 45, 46, 47, 48, 49, 51, 52, 53, 54, 55, 57, 59, 60, 61, 62, 63, 65, 66, 67, 69, 71, 72, 73, 74, 75, 77, 78, 79, 81, 82, 83, 84, 85, 87, 88, 89, 90, 91, 92, 93, 95, 96, 97, 99. Now ask about 10. If n+10 is prime it should be easy to find the number in 7 more tries.</p> <p>Otherwise 50 numbers remain: 5, 6, 11, 12, 15, 17, 18, 22, 23, 24, 25, 29, 30, 32, 35, 36, 39, 41, 42, 45, 46, 47, 48, 52, 53, 54, 55, 59, 60, 62, 65, 66, 67, 71, 72, 74, 75, 77, 78, 81, 82, 83, 84, 85, 88, 89, 90, 92, 95, 96. Now ask about 12. If n is 12 you're done; if n+12 is prime you have 6 tries to find 16 numbers.</p> <p>Now 33 numbers remain: 6, 15, 18, 22, 23, 24, 30, 32, 36, 39, 42, 45, 46, 48, 52, 53, 54, 60, 62, 65, 66, 72, 74, 75, 78, 81, 82, 83, 84, 88, 90, 92, 96. Now ask about 1. If n+1 is prime there are 15 possibilities and you have 5 tries.</p> <p>Now you have just 18: 15, 23, 24, 32, 39, 45, 48, 53, 54, 62, 65, 74, 75, 81, 83, 84, 90, 92. 
This is a bit tricky, but guess 1139 (or 1148) to split the remainder into two sets of 9.</p> <p>Either way you go at least one of the sets has 5 members (4 would be possible only if you picked one of the 18).</p>
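The claim about guessing $1139$ can be checked directly (my addition): among the 18 remaining candidates, exactly 9 values of $n$ make $n + 1139$ prime, so the answer splits the set evenly.

```python
def is_prime(m):
    # Simple trial division; fine for numbers this small.
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

remaining = [15, 23, 24, 32, 39, 45, 48, 53, 54, 62, 65, 74, 75, 81, 83, 84, 90, 92]

prime_side = [n for n in remaining if is_prime(n + 1139)]
composite_side = [n for n in remaining if not is_prime(n + 1139)]
```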
<p>I have a pretty cool but probably impractical solution.</p> <p>Consider this table of primes from $1$ to $110$:</p> <p>$$ \begin{matrix} 2 &amp; 3 &amp; 5 &amp; 7 &amp; 11 \\ 13 &amp; 17 &amp; 19 &amp; 23 &amp; 29 \\ 31 &amp; 37 &amp; 41 &amp; 43 &amp; 47 \\ 53 &amp; 59 &amp; 61 &amp; 67 &amp; 71 \\ 73 &amp; 79 &amp; 83 &amp; 89 &amp; 97 \\ 101 &amp; 103 &amp; 107 &amp; 109 \\ \end{matrix} $$</p> <p>Notice the differences between all the primes. That table would look like this:</p> <p>$$ \begin{matrix} 2 &amp; 1 &amp; 2 &amp; 2 &amp; 4 \\ 2 &amp; 4 &amp; 2 &amp; 4 &amp; 6 \\ 2 &amp; 6 &amp; 4 &amp; 2 &amp; 4 \\ 6 &amp; 6 &amp; 2 &amp; 6 &amp; 4 \\ 2 &amp; 6 &amp; 4 &amp; 6 &amp; 8 \\ 4 &amp; 2 &amp; 4 &amp; 2 \\ \end{matrix} $$</p> <p>Let's look at the combinations of 6 differences in order. They're ordered from left to right and up to down by value as if the sequences represented digits of a 6-digit number:</p> <p>$$1,2,2,4,2,4 \ \ \ \ \ \ \ \ \ \ 2,1,2,2,4,2 \ \ \ \ \ \ \ \ \ \ 2,2,4,2,4,2$$ $$2,4,2,4,2,4 \ \ \ \ \ \ \ \ \ \ 2,4,2,4,6,2 \ \ \ \ \ \ \ \ \ \ 2,4,6,2,6,4$$ $$2,4,6,6,2,6 \ \ \ \ \ \ \ \ \ \ 2,6,4,2,4,6 \ \ \ \ \ \ \ \ \ \ 2,6,4,2,6,4$$ $$2,6,4,6,8,4 \ \ \ \ \ \ \ \ \ \ 4,2,4,2,4,6 \ \ \ \ \ \ \ \ \ \ 4,2,4,6,2,6$$ $$4,2,4,6,6,2 \ \ \ \ \ \ \ \ \ \ 4,2,6,4,6,8 \ \ \ \ \ \ \ \ \ \ 4,6,2,6,4,2$$ $$4,6,6,2,6,4 \ \ \ \ \ \ \ \ \ \ 4,6,8,4,2,4 \ \ \ \ \ \ \ \ \ \ 6,2,6,4,2,4$$ $$6,2,6,4,2,6 \ \ \ \ \ \ \ \ \ \ 6,4,2,4,6,6 \ \ \ \ \ \ \ \ \ \ 6,4,2,6,4,6$$ $$6,4,6,8,4,2 \ \ \ \ \ \ \ \ \ \ 6,6,2,6,4,2 \ \ \ \ \ \ \ \ \ \ 6,8,4,2,4,2$$</p> <p>Notice that no two sequences are equal in the above list</p> <p>Start out by iterating over values of $x$ in $[0,7]$ incrementally until you hit the first prime number. 
Let this value of $x$ be denoted as $k$.</p> <p><strong>The above step requires $8$ guesses in the worst case</strong></p> <p>You have to use new values for $x$ equal to $k + o$ where $o$ is an offset that starts at $1$ and changes based on what's prime and what's not.</p> <p>If $k + 1$ is prime, you can stop. $n = 2$.</p> <p>Else, try $k + 2$ and if that doesn't work, $k + 4$, then $k + 6$.</p> <p>If for instance, $k + 4$ works, you would then try $k + 4 + 2$. If that's prime too, then continue with these combinations in the same manner:</p> <p>$$4,2,4,2,4,6$$ $$4,2,4,6,2,6$$ $$4,2,4,6,6,2$$ $$4,2,6,4,6,8$$</p> <p>Using the above method, you derive the value of the prime $n + k$. $k$ is known, so you can immediately deduce the value of $n$.</p> <p>In fact, we can optimize that above table of differences to remove the unneeded trials and combine a few of them:</p> <p>$$ \begin{matrix} 1 &amp; 2,1 &amp; 2,2 \\ 2,4,2,4,2 &amp; 2,4,2,4,6 &amp; 2,4,6,2 \\ 2,4,6,6 &amp; 2,6,4,2,4 &amp; 2,6,4,2,6 \\ 2,6,4,6 &amp; 4,2,4,2 &amp; 4,2,4,6,2 \\ 4,2,4,6,6 &amp; 4,2,6 &amp; 4,6,2 \\ 4,6,6 &amp; 4,6,8 &amp; 6,2,6,4,2,4 \\ 6,2,6,4,2,6 &amp; 6,4,2,4 &amp; 6,4,2,6 \\ 6,4,6 &amp; 6,6 &amp; 6,8 \\ \end{matrix} $$</p> <p>The magic numbers to add to $k$ when guessing would be:</p> <p>$$ \begin{matrix} 1 &amp; 2,3 &amp; 2,4 \\ 2,6,8,12,14 &amp; 2,6,8,12,18 &amp; 2,6,12,14 \\ 2,6,12,18 &amp; 2,8,12,14,18 &amp; 2,8,12,14,20 \\ 2,8,12,18 &amp; 4,6,10,12 &amp; 4,6,10,16,18 \\ 4,6,10,16,22 &amp; 4,6,12 &amp; 4,10,12 \\ 4,10,16 &amp; 4,10,18 &amp; 6,8,14,18,20,24 \\ 6,8,14,18,20,26 &amp; 6,10,12,16 &amp; 6,10,12,18 \\ 6,10,16 &amp; 6,12 &amp; 6,14 \\ \end{matrix} $$</p> <p>It's imperative that the trials are done in order from left to right and top to bottom in this table, else, you will get false information. 
An example of one optimization is to avoid guessing when the only possible combination to continue with is 1 because we <strong>know</strong> that $n + k$ is prime.</p> <p>An example:</p> <ul> <li>You tested $k + 4$, $k + 10$, and you're testing for $k + 12$</li> <li>You're now testing for $k + 16$</li> <li>You're now testing for $k + 18$</li> </ul> <p>That last step is redundant. If $k + 16$ is composite, you can skip the $k + 18$ step because you know it will be prime.</p> <p>I hope this is clear enough.</p>
probability
<blockquote> <p>You are playing a game in which you have $100$ jellybeans, $10$ of them are poisonous (You eat one, you die). Now you have to pick $10$ at random to eat.<br> <strong>Question</strong>: What is the probability of dying?</p> </blockquote> <hr> <p>How I tried to solve it:</p> <p>Each jellybean has a $\frac{1}{10}$ chance of being poisonous. Since you need to take $10$ of them, I multiplied it by $10$, which gave $1$ (guaranteed death).</p> <p><em>How other people tried to solve it</em>:</p> <p>Each jellybean is picked out separately. The first jellybean has a $\frac{10}{100}$ chance of being poisonous, the second -- $\frac{10}{99}$, the third -- $\frac{10}{98}$, and so on, which gives a sum of roughly $\sim 1.04$ (more than guaranteed death!)</p> <p>Both these results make no sense, since there are obviously multiple possibilities where you survive, given that there are $90$ non-poisonous jellybeans to pick from.</p> <p>Can someone explain this to me?</p>
<p>Both probablyme and carmichael561 have given a good approach to the problem, but I thought I'd point out why the solutions given by you and your classmates (?) are erroneous.</p> <p>The problem common to both approaches is that they neglect the probability that you die from earlier jelly beans. You take the first jelly bean, and you have a $1/10$ probability of dying; that is all right. But although your classmates are almost right (and you are not) that the second jelly bean has a $10/99$ chance of killing you, that is only true <em>if the first jelly bean didn't already kill you</em>.</p> <p>In other words, the probability that you are killed by the first jelly bean is $1/10$, and the probability that you survive to eat a second jelly bean <em>and</em> it kills you is $9/10 \times 10/99 = 1/11$. Each succeeding jelly bean does, in truth, have a higher probability of killing you <em>if you survive to eat it</em>, but the decreasing probability that you do in fact survive conspires to make its <em>overall</em> effect smaller, so the numbers do not add up to anywhere near $1$, complete certainty.</p> <p>It is possible to continue along in a similar vein: You can add $1/10+1/11 = 21/110$ to obtain the probability that the first two jelly beans kill you; the remainder, $89/110$, is the probability that you survive to eat the third jelly bean, which kills you with probability $10/98$. Your probability of surviving the first two jelly beans only to be killed by the third is then $89/110 \times 10/98 = 89/1078$. You would then have to add up $1/10+1/11+89/1078$ to find the probability that the first three jelly beans kill you, etc.</p> <p>I think you can see that the solutions provided by the other answerers are much more straightforward; this way of approaching the problem by considering its inverse is a common tactic when there are multiple ways to satisfy the conditions of the problem, but only one way to <em>violate</em> them.</p>
<p>So you live if you do not choose a deadly jellybean :)<br> And we die if we select at least one deadly bean, so I think it goes as follows $$P(\text{Die}) = 1-P(\text{Live}) = 1-\frac{\binom{10}{0}\binom{90}{10}}{\binom{100}{10}}=0.6695237889.$$</p> <p>In this case, we literally have good beans and bad beans, and we select without replacement. Then the number of bad beans selected follows a hypergeometric distribution. So for completeness, there are $\binom{10}{0}$ ways to choose a zero bad beans, $\binom{90}{10}$ ways to choose 10 good beans, and finally $\binom{100}{10}$ ways to choose 10 beans from the total.</p> <p>Note: $\binom nk = \frac{n!}{k!(n-k)!}$, the <a href="https://en.wikipedia.org/wiki/Binomial_coefficient">binomial coefficient</a>.</p>
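The closed form above is easy to evaluate with Python's standard library (my addition); `math.comb` gives the binomial coefficients directly.

```python
from math import comb

# P(live) = C(10,0) * C(90,10) / C(100,10); P(die) = 1 - P(live).
p_live = comb(10, 0) * comb(90, 10) / comb(100, 10)
p_die = 1 - p_live
```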
linear-algebra
<p>This question aims to create an &quot;<a href="http://meta.math.stackexchange.com/q/1756/18880">abstract duplicate</a>&quot; of numerous questions that ask about determinants of specific matrices (I may have missed a few):</p> <ul> <li><a href="https://math.stackexchange.com/q/153457/18880">Characteristic polynomial of a matrix of $1$&#39;s</a></li> <li><a href="https://math.stackexchange.com/q/55165/18880">Eigenvalues of the rank one matrix $uv^T$</a></li> <li><a href="https://math.stackexchange.com/q/577937/18880">Calculating $\det(A+I)$ for matrix $A$ defined by products</a></li> <li><a href="https://math.stackexchange.com/q/84206/18880">How to calculate the determinant of all-ones matrix minus the identity?</a></li> <li><a href="https://math.stackexchange.com/q/86644/18880">Determinant of a specially structured matrix ($a$&#39;s on the diagonal, all other entries equal to $b$)</a></li> <li><a href="https://math.stackexchange.com/q/629892/18880">Determinant of a special $n\times n$ matrix</a></li> <li><a href="https://math.stackexchange.com/q/689111/18880">Find the eigenvalues of a matrix with ones in the diagonal, and all the other elements equal</a></li> <li><a href="https://math.stackexchange.com/q/897469/18880">Determinant of a matrix with $t$ in all off-diagonal entries.</a></li> <li><a href="https://math.stackexchange.com/q/227096/18880">Characteristic polynomial - using rank?</a></li> <li><a href="https://math.stackexchange.com/q/3955338/18880">Caclulate $X_A(x) $ and $m_A(x) $ of a matrix $A\in \mathbb{C}^{n\times n}:a_{ij}=i\cdot j$</a></li> <li><a href="https://math.stackexchange.com/q/219731/18880">Determinant of rank-one perturbations of (invertible) matrices</a></li> </ul> <p>The general question of this type is</p> <blockquote> <p>Let <span class="math-container">$A$</span> be a square matrix of rank<span class="math-container">$~1$</span>, let <span class="math-container">$I$</span> the identity matrix of the same size, and <span 
class="math-container">$\lambda$</span> a scalar. What is the determinant of <span class="math-container">$A+\lambda I$</span>?</p> </blockquote> <p>A clearly very closely related question is</p> <blockquote> <p>What is the characteristic polynomial of a matrix <span class="math-container">$A$</span> of rank<span class="math-container">$~1$</span>?</p> </blockquote>
<p>The formulation in terms of the characteristic polynomial leads immediately to an easy answer. For once one uses knowledge about the eigenvalues to find the characteristic polynomial instead of the other way around. Since $A$ has rank$~1$, the kernel of the associated linear operator has dimension $n-1$ (where $n$ is the size of the matrix), so there is (unless $n=1$) an eigenvalue$~0$ with geometric multiplicity$~n-1$. The algebraic multiplicity of $0$ as eigenvalue is then at least $n-1$, so $X^{n-1}$ divides the characteristic polynomial$~\chi_A$, and $\chi_A=X^n-cX^{n-1}$ for some constant$~c$. In fact $c$ is the trace $\def\tr{\operatorname{tr}}\tr(A)$ of$~A$, since this holds for the coefficient of $X^{n-1}$ of <em>any</em> square matrix of size$~n$. So the answer to the second question is</p> <blockquote> <p>The characteristic polynomial of an $n\times n$ matrix $A$ of rank$~1$ is $X^n-cX^{n-1}=X^{n-1}(X-c)$, where $c=\tr(A)$.</p> </blockquote> <p>The nonzero vectors in the $1$-dimensional image of$~A$ are eigenvectors for the eigenvalue$~c$, in other words $A-cI$ is zero on the image of$~A$, which implies that $X(X-c)$ is an annihilating polynomial for$~A$. Therefore</p> <blockquote> <p>The minimal polynomial of an $n\times n$ matrix $A$ of rank$~1$ with $n&gt;1$ is $X(X-c)$, where $c=\tr(A)$. 
In particular a rank$~1$ square matrix $A$ of size $n&gt;1$ is diagonalisable if and only if $\tr(A)\neq0$.</p> </blockquote> <p>See also <a href="https://math.stackexchange.com/q/52395/18880">this question</a>.</p> <p>For the first question we get from this (replacing $A$ by $-A$, which is also of rank$~1$)</p> <blockquote> <p>For a matrix $A$ of rank$~1$ one has $\det(A+\lambda I)=\lambda^{n-1}(\lambda+c)$, where $c=\tr(A)$.</p> </blockquote> <p>In particular, for an $n\times n$ matrix with diagonal entries all equal to$~a$ and off-diagonal entries all equal to$~b$ (which is the most popular special case of a linear combination of a scalar and a rank-one matrix) one finds (using for $A$ the all-$b$ matrix, and $\lambda=a-b$) as determinant $(a-b)^{n-1}(a+(n-1)b)$.</p>
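Both formulas are easy to sanity-check numerically. A quick NumPy sketch, using an arbitrary random rank-one matrix and arbitrary values of $\lambda$, $a$, $b$ (all choices here are illustrative, not part of the derivation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
u = rng.standard_normal((n, 1))
v = rng.standard_normal((1, n))
A = u @ v                          # a generic rank-1 matrix
c = np.trace(A)
lam = 2.7

# det(A + lambda*I) = lambda^(n-1) * (lambda + tr(A))
lhs = np.linalg.det(A + lam * np.eye(n))
rhs = lam ** (n - 1) * (lam + c)
print(np.isclose(lhs, rhs))        # True

# The popular special case: a on the diagonal, b everywhere else.
a, b = 4.0, 1.5
M = b * np.ones((n, n)) + (a - b) * np.eye(n)
print(np.isclose(np.linalg.det(M), (a - b) ** (n - 1) * (a + (n - 1) * b)))  # True
```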
<p>Here’s an answer without using eigenvalues: the rank of <span class="math-container">$A$</span> is <span class="math-container">$1$</span>, so its image is spanned by some nonzero vector <span class="math-container">$v$</span>. Since <span class="math-container">$Av$</span> lies in the image, let <span class="math-container">$\mu$</span> be such that <span class="math-container">$$Av=\mu v.$$</span></p> <p>We can extend this vector <span class="math-container">$v$</span> to a basis of <span class="math-container">$\mathbb{C}^n$</span>. With respect to this basis, the matrix of <span class="math-container">$A$</span> has all rows except the first equal to <span class="math-container">$0$</span>. Since determinant and trace are basis-independent, it follows by expanding along the first column of <span class="math-container">$A$</span> with respect to this basis that <span class="math-container">$$\det(A-\lambda I)= (-1)^n(\lambda -\mu)\lambda^{n-1}.$$</span> Using this same basis we also see that <span class="math-container">$\text{Tr}(A) =\mu$</span>, so the characteristic polynomial of <span class="math-container">$A$</span> turns out to be</p> <p><span class="math-container">$$(-1)^n(\lambda -\text{Tr}(A))\lambda^{n-1}.$$</span></p>
matrices
<blockquote> <p>Alice and Bob play the following game with an $n \times n$ matrix, where $n$ is odd. Alice fills in one of the entries of the matrix with a real number, then Bob, then Alice and so forth until the entire matrix is filled. At the end, the determinant of the matrix is taken. If it is nonzero, Alice wins; if it is zero, Bob wins. Determine who wins playing perfect strategy each time. </p> </blockquote> <p>When $n$ is even it's easy to see why Bob wins every time, and for $n$ equal to $3$ I have brute-forced it: Bob wins. But for $n = 5$ and above I can't see who will win under perfect strategy. Any clever approaches to solving this problem?</p>
<p>I tried to approach it from the Leibniz formula for determinants</p> <p>$$\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n A_{i,\sigma_i}.$$</p> <p>There are $n!$ terms in this sum. Alice will have $\frac{n^2+1}{2}$ moves whereas Bob has $\frac{n^2-1}{2}$ moves. There are $n^2$ variables (matrix entries). Each of them taken alone appears in $(n-1)!$ terms of this summation. Whenever Bob sets any matrix entry to zero in his first move, $(n-1)!$ terms of the sum go to zero. For instance, consider a $5 \times 5$ matrix, so there are 120 terms. In his first move, whenever Bob makes any matrix entry zero, he zeros out 24 of these terms. In his second move, he has to pick the matrix entry with the least presence in the 24 terms already zeroed out. There can be multiple such matrix entries; in fact, it can be seen that there is surely another matrix entry appearing in 24 non-zero terms of the above sum. Since $n$ is odd in this case, the last move will always be Alice's. Because of that, one doesn't have to worry about these terms summing to zero. What Bob has to do if he is to win is the following:</p> <ul> <li><p>He has to make sure he touches (in effect, zeros) each of these 120 terms at least once. In the $n=5$ case, he has 12 chances. In these 12 chances he has to make sure that he zeros out all 120 terms. In one sense, it means that he has to average at least 10 terms per chance. I looked at the $n=3$ case: Bob has 4 chances there and 6 terms, and he can zero out all of them in 3 moves. </p></li> <li><p>He has to make sure that Alice doesn't get hold of all the matrix entries in any single term of the 120, because then it will be non-zero, and since the last move is hers, Bob won't be able to zero it out, so she will win. </p></li> </ul> <p>As per the above explanation, in the $5 \times 5$ case, he just has to average killing 10 terms per move, which seems quite easy to do. 
I feel this method is fairly easy to generalize, and many really clever people in here can do it. </p> <p>EDIT----------------</p> <p>In response to @Ross Milikan, I tried to look at solving the $5 \times 5$ case; this is the approach. Consider a $5 \times 5$ matrix with its entries filled in with the English alphabet row-wise, so that the matrix of interest is </p> <p>\begin{align} \begin{bmatrix} a &amp; b &amp; c &amp; d&amp; e \\ f&amp; g &amp; h &amp;i&amp; j \\k&amp; l&amp; m&amp; n&amp; o \\ p&amp; q&amp; r&amp; s&amp; t\\ u&amp; v&amp; w&amp; x&amp; y \end{bmatrix} \end{align}</p> <p>Without loss of generality (WLOG), let Alice pick up $a$ (making any entry zero is advantageous for her). Let's say Bob picks up $b$ (again WLOG, picking up any entry is the same). This helps Bob zero out 24 terms of the total 120. Alice has to pick up an entry in this first row itself, otherwise she will be at a disadvantage (since then Bob gets to pick 3 entries in total from the first row and gets 72 terms zeroed out). So concerning the first row, Alice picks 3 of them and Bob picks 2 of them (say $b$ and $d$), hence he zeros out 48 terms of the total 120. Now note that the next move is Bob's. Let us swap the second column and the first column. This doesn't change the determinant other than its sign. Look at the modified matrix</p> <p>\begin{align} \begin{bmatrix} 0 &amp; \otimes &amp; \otimes &amp; 0 &amp; \otimes \\ g &amp; f &amp; h &amp;i&amp; j \\l&amp; k&amp; m&amp; n&amp; o \\ q&amp; p&amp; r&amp; s&amp; t\\ v&amp; u&amp; w&amp; x&amp; y \end{bmatrix} \end{align}</p> <p>where $0$ is put in entries Bob has modified and $\otimes$ in entries modified by Alice. Now in the first column, let's say Bob gets hold of $g$ and $q$, and Alice gets hold of $l$ and $v$. Again Alice has to do this; any other move will put her at a disadvantage. 
Bob has made 4 moves already; the next move is his, and now the matrix will look like </p> <p>\begin{align} \begin{bmatrix} 0 &amp; \otimes &amp; \otimes &amp; 0 &amp; \otimes \\ 0 &amp; f &amp; h &amp;i&amp; j \\ \otimes &amp; k &amp; m&amp; n&amp; o \\ 0 &amp; p&amp; r&amp; s&amp; t\\ \otimes &amp; u&amp; w&amp; x&amp; y \end{bmatrix} \end{align}</p> <p>Now we are left with the lower $4 \times 4$ matrix, Bob is left with 8 chances, and the first move is his. Compare this with the $4 \times 4$ case; intuitively, it looks like Bob should win. </p>
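The counting claims above are easy to verify directly with `itertools.permutations`. A short Python sketch for $n=5$ (indices are 0-based, so the entry $b$ sits at row 0, column 1):

```python
from itertools import permutations
from math import factorial

n = 5
perms = list(permutations(range(n)))
print(len(perms))                # 120 terms in the Leibniz sum for n = 5

# Terms of the Leibniz sum that use the entry in row 0, column 1 ("b")
# are exactly the permutations sending row 0 to column 1:
terms_using_b = [s for s in perms if s[0] == 1]
print(len(terms_using_b))        # 24 == (n-1)!
```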
<p>The problem states that <span class="math-container">$n$</span> is odd, i.e., <span class="math-container">$n \geq 3$</span>.</p> <p>With Alice and Bob filling the determinant slots in turns, the objective is for Alice to obtain a non-zero determinant value and for Bob to obtain a zero determinant value.</p> <p>So, Alice strives to get <em>all rows and columns to be linearly independent</em> of each other whereas Bob strives to get <em>at least two rows or columns linearly dependent</em>. Note that these are equivalent statements of the objectives: obtain a non-zero determinant (Alice) or a zero determinant (Bob), respectively.</p> <p>Intuitively, it feels like the game is stacked against Alice because her criterion is more restrictive than Bob's. But let's get a mathematical proof while simultaneously outlining Bob's winning strategy.</p> <p>Since Alice starts the game, Bob gets the even numbered moves. So, Bob chooses <span class="math-container">$r = [r_0, r_1, \dots, r_{n-1}]$</span>, a row vector of scalars, and <span class="math-container">$c, c^T = [c_0, c_1, \dots, c_{n-1}]$</span>, a column vector of scalars, that he will use to create a linear dependency relationship between vectors such that <span class="math-container">$u \ne v, w \ne x$</span> and</p> <p><span class="math-container">$$r_u \times R_u + r_v \times R_v = \mathbf{0}$$</span> <span class="math-container">$$c_w \times C_w + c_x \times C_x = \mathbf{0}^T$$</span></p> <p>where <span class="math-container">$\mathbf{0}$</span> is the row vector with all columns set to zero.</p> <p>He doesn't fill <span class="math-container">$r, c$</span> immediately, but only when Alice makes the first move in a given column or row, which is not necessarily the first move of the game (update: see <em>Notes</em> section. Bob decides the value for <span class="math-container">$r$</span> or <span class="math-container">$c$</span> in the last move in a pair of rows or columns). 
[We will shortly prove that Alice will be making the first move in any given column or row]. When Alice makes her move, Bob calculates the value of <span class="math-container">$r_v$</span> or <span class="math-container">$c_x$</span> based on the value that Alice has previously filled. Once the vector cell for <span class="math-container">$r, c$</span> is filled, he doesn't change it for the remainder of the game.</p> <p>With this strategy in place, he strives to always play his moves (the even numbered moves in the game) ensuring that the linear dependence relation for the pair of rows (or columns) is maintained. The example below shows one of Bob's moves for rows <span class="math-container">$r_k, r_{k+1}$</span>. Note that these could be any pair of rows, not necessarily consecutive ones. Also, this doesn't have to be a pair of rows, it also works with pairs of columns.</p> <p><a href="https://i.sstatic.net/XmjYE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XmjYE.png" alt="enter image description here" /></a></p> <p>Bob never makes a move that fills the first cell in a row or column as part of his strategy. He always follows. Therefore, Alice is forced to make that move.</p> <p>It would be impossible for Alice to beat this strategy because even if she chooses to fill a different row or column, she would be the first one to initiate the move for that row or column and Bob will follow this strategy there with the even numbered moves. 
It is easy to see that Alice always makes the odd numbered move in any row or column and Bob follows the even numbered move with his strategy of filling in the cell so that the linear dependence condition is met.</p> <p>So, even though Alice gets the last move, Bob will make a winning move (in an earlier even numbered move) causing two rows or two columns to be linearly dependent, making the determinant evaluate to zero regardless of what Alice does.</p> <hr /> <p><strong>Game play for specific problem</strong></p> <p><em>(posed by @Misha Lavrov in comments)</em></p> <p>The moves are numbered sequentially. Odd numbered moves are by Alice and even numbered moves are by Bob. The 'x' and 'o' are just indicators and can be any real number filled by Alice or Bob respectively. The problem posed is for <span class="math-container">$n=5$</span> where Alice and Bob have made the following <span class="math-container">$4$</span> moves and it is Alice's turn.</p> <p><a href="https://i.sstatic.net/OprlI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OprlI.png" alt="enter image description here" /></a></p> <p>Note that if Alice makes her move in any of the yellow or green cells, Bob continues with the pairing strategy described above.</p> <p>The game gets interesting if Alice fills any of the blue cells. There are three possibilities:</p> <p><em>Alice's move (type 1):</em></p> <p><a href="https://i.sstatic.net/yuCVv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yuCVv.png" alt="enter image description here" /></a></p> <p>Alice fills one of the first two cells in row 5. Bob can continue the game in columns 1 and 2, and it does not matter what he or Alice fills in any of the cells in the column, because Bob will need to have the last move in just one pair of rows or columns in order to win.</p> <p>What happens if Alice chooses her next move outside of the first two columns? 
That takes us to Alice's moves - type 2 and 3.</p> <p><em>Alice's move (type 2):</em></p> <p><a href="https://i.sstatic.net/N7WUF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N7WUF.png" alt="enter image description here" /></a></p> <p>Alice can choose to fill any cell in columns <span class="math-container">$3,4$</span>. This becomes the first cell in a pair of columns filled by Alice and Bob falls back to his <strong>following</strong> strategy.</p> <p><em>Alice's move (type 3):</em></p> <p><a href="https://i.sstatic.net/t16Yz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t16Yz.png" alt="enter image description here" /></a></p> <p>This is also similar to type 2 in that Bob uses the <strong>following</strong> strategy.</p> <p>So, regardless of where Alice fills, Bob can use the following strategy to force the last move in a pair of columns or rows to be his and he can ensure that he fills the last cell in that pair with a value that ensures the pair (of rows or columns) is linearly dependent.</p> <p>While the above example shows adjacent columns, Bob's strategy works for any pair of rows or columns. He ensures he always has the last move in any pair of rows or columns and hence he is guaranteed a win when he finishes any pair of rows or columns. The rest of the moves are redundant.</p> <p>This is guaranteed when <span class="math-container">$n$</span> is odd since he always makes the second move in the game.</p> <hr /> <p><strong>Short proof</strong></p> <p>In at least one pair of rows or columns Bob always makes the last move since he plays the second move in the game and each player alternates. 
Alice cannot avoid playing in any pair of rows or columns completely, because Bob can fill a row or column in that pair with zeros and will win by default in <span class="math-container">$2n$</span> moves in that case.</p> <p><em><strong>Notes:</strong></em></p> <ul> <li>I originally mentioned that Bob chooses <span class="math-container">$r_k$</span> and <span class="math-container">$c_x$</span> after the first move by Alice in row <span class="math-container">$k$</span> or column <span class="math-container">$x$</span>. In fact, he doesn't have to make the decision until he fills the last cell in the row or column.</li> </ul>
probability
<p>In Season 5 Episode 16 of Agents of Shield, one of the characters decides to prove she can't die by pouring three glasses of water and one of poison; she then randomly drinks three of the four cups. I was wondering how to compute the probability of her drinking the one with poison.</p> <p>I thought to label the four cups $\alpha, \beta, \gamma, \delta$ with events </p> <ul> <li>$A = \{\alpha \text{ is water}\}, \ a = \{\alpha \text{ is poison}\}$</li> <li>$B = \{\beta \text{ is water}\},\ b = \{\beta \text{ is poison}\}$</li> <li>$C = \{\gamma \text{ is water}\},\ c = \{\gamma \text{ is poison}\}$</li> <li>$D = \{\delta \text{ is water}\},\ d = \{\delta \text{ is poison}\}$</li> </ul> <p>If she were to drink in order, then I would calculate $P(a) = {1}/{4}$. Next $$P(b|A) = \frac{P(A|b)P(b)}{P(A)}$$ Next $P(c|A \cap B)$, which I'm not completely sure how to calculate.</p> <p>My doubt is that I shouldn't order the cups because that assumes $\delta$ is the poisoned cup. I am also unsure how I would calculate the conditional probabilities (I know about Bayes' theorem; I mean more which numbers to put in for this particular case). Thank you for your help.</p>
<p>The probability of not being poisoned is exactly the same as the following problem:</p> <p>You choose one cup and drink from the other three. What is the probability of choosing the poisoned cup (and not being poisoned)? That probability is 1/4.</p> <p>Therefore, the probability of being poisoned is 3/4.</p>
<p>NicNic8 has provided a nice intuitive answer to the question. </p> <p>Here are three alternative methods. In the first, we solve the problem directly by considering which cups are selected if she is poisoned. In the second, we solve the problem indirectly by considering the order in which the cups are selected if she is not poisoned. In the third, we add the probabilities that she was poisoned with the first cup, second cup, or third cup. </p> <p><strong>Method 1:</strong> We use the <a href="https://en.wikipedia.org/wiki/Hypergeometric_distribution" rel="noreferrer">hypergeometric distribution</a>. </p> <p>There are $\binom{4}{3}$ ways to select three of the four cups. Of these, the person selecting the cups is poisoned if she selects the poisoned cup and two of the three cups of water, which can be done in $\binom{1}{1}\binom{3}{2}$ ways. Hence, the probability that she is poisoned is $$\Pr(\text{poisoned}) = \frac{\binom{1}{1}\binom{3}{2}}{\binom{4}{3}} = \frac{1 \cdot 3}{4} = \frac{3}{4}$$ </p> <p><strong>Method 2:</strong> We subtract the probability that she is not poisoned from $1$. </p> <p>The probability that the first cup she selects is not poisoned is $3/4$ since three of the four cups do not contain poison. If the first cup she selects is not poisoned, the probability that the second cup she selects is not poisoned is $2/3$ since two of the three remaining cups do not contain poison. If both of the first two cups she selects are not poisoned, the probability that the third cup she selects is also not poisoned is $1/2$ since one of the two remaining cups is not poisoned. 
Hence, the probability that she is not poisoned if she drinks three of the four cups is $$\Pr(\text{not poisoned}) = \frac{3}{4} \cdot \frac{2}{3} \cdot \frac{1}{2} = \frac{1}{4}$$ Hence, the probability that she is poisoned is $$\Pr(\text{poisoned}) = 1 - \Pr(\text{not poisoned}) = 1 - \frac{1}{4} = \frac{3}{4}$$</p> <p><em>Addendum</em>: We can relate this method to the first method by using the hypergeometric distribution.</p> <p>She is not poisoned if she selects all three cups which do not contain poison when selecting three of the four cups. Hence, the probability that she is not poisoned is $$\Pr(\text{not poisoned}) = \frac{\dbinom{3}{3}}{\dbinom{4}{3}} = \frac{1}{4}$$ so the probability she is poisoned is $$\Pr(\text{poisoned}) = 1 - \frac{\dbinom{3}{3}}{\dbinom{4}{3}} = 1 - \frac{1}{4} = \frac{3}{4}$$</p> <p><strong>Method 3:</strong> We calculate the probability that the person is poisoned by adding the probabilities that she is poisoned with the first cup, the second cup, and the third cup.</p> <p>Let $P_k$ denote the event that she is poisoned with the $k$th cup.</p> <p>Since there are four cups, of which just one contains poison, the probability that she is poisoned with her first cup is $$\Pr(P_1) = \frac{1}{4}$$</p> <p>To be poisoned with the second cup, she must not have been poisoned with the first cup and then be poisoned with the second cup. The probability that she is not poisoned with the first cup is $\Pr(P_1^C) = 1 - 1/4 = 3/4$. If she is not poisoned with the first cup, there are three cups remaining of which one is poisoned, so the probability that she is poisoned with the second cup if she is not poisoned with the first is $\Pr(P_2 \mid P_1^C) = 1/3$. 
Hence, the probability that she is poisoned with the second cup is $$\Pr(P_2) = \Pr(P_2 \mid P_1^C)\Pr(P_1^C) = \frac{1}{3} \cdot \frac{3}{4} = \frac{1}{4}$$</p> <p>To be poisoned with the third cup, she must not have been poisoned with the first two cups and then be poisoned with the third cup. The probability that she is not poisoned with the first cup is $\Pr(P_1^C) = 3/4$. The probability that she is not poisoned with the second cup given that she was not poisoned with the first is $\Pr(P_2^C \mid P_1^C) = 1 - \Pr(P_2 \mid P_1^C) = 1 - 1/3 = 2/3$. If neither of the first two cups she drank was poisoned, two cups are left, one of which is poisoned, so the probability that the third cup she drinks is poisoned given that the first two were not is $\Pr(P_3 \mid P_1^C \cap P_2^C) = 1/2$. Hence, the probability that she is poisoned with the third cup is $$\Pr(P_3) = \Pr(P_3 \mid P_1^C \cap P_2^C)\Pr(P_2^C \mid P_1^C)\Pr(P_1^C) = \frac{1}{2} \cdot \frac{2}{3} \cdot \frac{3}{4} = \frac{1}{4}$$ Hence, the probability that she is poisoned is $$\Pr(\text{poisoned}) = \Pr(P_1) + \Pr(P_2) + \Pr(P_3) = \frac{1}{4} + \frac{1}{4} + \frac{1}{4} = \frac{3}{4}$$ </p>
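All three methods can be reproduced with exact rational arithmetic; a short Python sketch:

```python
from math import comb
from fractions import Fraction

# Method 1: hypergeometric count
m1 = Fraction(comb(1, 1) * comb(3, 2), comb(4, 3))

# Method 2: complement of surviving all three draws
m2 = 1 - Fraction(3, 4) * Fraction(2, 3) * Fraction(1, 2)

# Method 3: poisoned on the first, second, or third cup
m3 = (Fraction(1, 4)
      + Fraction(3, 4) * Fraction(1, 3)
      + Fraction(3, 4) * Fraction(2, 3) * Fraction(1, 2))

print(m1, m2, m3)   # 3/4 3/4 3/4
```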
matrices
<p>How can we prove that the inverse of an upper (lower) triangular matrix is upper (lower) triangular?</p>
<p>Another method is as follows. An invertible upper triangular matrix has the form $A=D(I+N)$ where $D$ is diagonal (with the same diagonal entries as $A$) and $N$ is upper triangular with zero diagonal. Then $N^n=0$ where $A$ is $n$ by $n$. Both $D$ and $I+N$ have upper triangular inverses: $D^{-1}$ is diagonal, and $(I+N)^{-1}=I-N+N^2-\cdots +(-1)^{n-1}N^{n-1}$. So $A^{-1}=(I+N)^{-1}D^{-1}$ is upper triangular.</p>
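A quick NumPy sketch of this factorization (the size and entries below are arbitrary choices for illustration):

```python
import numpy as np

# An invertible upper triangular matrix: strictly upper triangular random
# part plus a fixed nonzero diagonal.
n = 4
rng = np.random.default_rng(1)
A = np.triu(rng.standard_normal((n, n)), k=1) + np.diag([1.0, 2.0, 3.0, 4.0])

D = np.diag(np.diag(A))                    # diagonal part of A
N = np.linalg.solve(D, A) - np.eye(n)      # A = D(I + N), N strictly upper, N^n = 0

# Finite geometric series: (I + N)^{-1} = I - N + N^2 - ... + (-N)^{n-1}
inv_IN = sum((-1.0) ** k * np.linalg.matrix_power(N, k) for k in range(n))
A_inv = inv_IN @ np.linalg.inv(D)          # A^{-1} = (I + N)^{-1} D^{-1}

print(np.allclose(A_inv @ A, np.eye(n)))   # True
print(np.allclose(A_inv, np.triu(A_inv)))  # True: the inverse is upper triangular
```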
<p>Personally, I prefer arguments which are more geometric to arguments rooted in matrix algebra. With that in mind, here is a proof.</p> <p>First, two observations on the geometric meaning of an upper triangular invertible linear map.</p> <ol> <li><p>Define <span class="math-container">$S_k = {\rm span} (e_1, \ldots, e_k)$</span>, where <span class="math-container">$e_i$</span> the standard basis vectors. Clearly, the linear map <span class="math-container">$T$</span> is upper triangular if and only if <span class="math-container">$T S_k \subset S_k$</span>.</p> </li> <li><p>If <span class="math-container">$T$</span> is in addition invertible, we must have the stronger relation <span class="math-container">$T S_k = S_k$</span>.</p> <p>Indeed, if <span class="math-container">$T S_k$</span> was a strict subset of <span class="math-container">$S_k$</span>, then <span class="math-container">$Te_1, \ldots, Te_k$</span> are <span class="math-container">$k$</span> vectors in a space of dimension strictly less than <span class="math-container">$k$</span>, so they must be dependent: <span class="math-container">$\sum_i \alpha_i Te_i=0$</span> for some <span class="math-container">$\alpha_i$</span> not all zero. This implies that <span class="math-container">$T$</span> sends the <b>nonzero</b> vector <span class="math-container">$\sum_i \alpha_i e_i$</span> to zero, so <span class="math-container">$T$</span> is not invertible.</p> </li> </ol> <p>With these two observations in place, the proof proceeds as follows. Take any <span class="math-container">$s \in S_k$</span>. Since <span class="math-container">$TS_k=S_k$</span> there exists some <span class="math-container">$s' \in S_k$</span> with <span class="math-container">$Ts'=s$</span> or <span class="math-container">$T^{-1}s = s'$</span>. In other words, <span class="math-container">$T^{-1} s$</span> lies in <span class="math-container">$S_k$</span>, so <span class="math-container">$T^{-1}$</span> is upper triangular.</p>
logic
<p>I am trying to understand <a href="http://en.wikipedia.org/wiki/Russell%27s_paradox">Russell's paradox</a></p> <p>How can a set contain itself? Can you show an example of a set which is not the set of all sets and which contains itself?</p>
<p>In modern set theory (read: ZFC) there is no such set. The axiom of foundation ensures that such sets do not exist, which means that the class defined by Russell in the paradox is in fact the collection of all sets.</p> <p>It is possible, however, to construct a model of all the axioms except the axiom of foundation, and generate sets of the form $x=\{x\}$. Alternatively there are stronger axioms, such as the Antifoundation axiom, which also imply that there are sets like $x=\{x\}$. Namely, sets for which $x\in x$.</p> <p>For ordinary mathematics one can assume the foundation is based on ZFC or not (because there is a model of ZFC within a model of ZFC minus Foundation), so there is no way to point to a particular set for which this is true.</p> <p><strong>Also interesting:</strong></p> <ol> <li><a href="https://math.stackexchange.com/questions/200255/is-the-statement-a-in-a-true-or-false/">Is the statement $A \in A$ true or false?</a></li> <li><a href="https://math.stackexchange.com/questions/213639/where-is-axiom-of-regularity-actually-used/">Where is axiom of regularity actually used?</a></li> </ol>
<p>$x = \{ x \}$ ... </p> <p>... but actually <a href="http://en.wikipedia.org/wiki/Axiom_of_regularity">one</a> of the axioms of <a href="http://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory">ZFC</a> (the "usual" axioms of set theory) has the immediate consequence that no set has itself as a member.</p>
linear-algebra
<p>there is a similar thread here <a href="https://math.stackexchange.com/questions/311580/coordinate-free-proof-of-operatornametrab-operatornametrba">Coordinate-free proof of $\operatorname{Tr}(AB)=\operatorname{Tr}(BA)$?</a>, but I'm only looking for a simple linear algebra proof. </p>
<p>Observe that if $A$ and $B$ are $n\times n$ matrices, $A=(a_{ij})$, and $B=(b_{ij})$, then $$(AB)_{ii} = \sum_{k=1}^n a_{ik}b_{ki},$$ so $$ \operatorname{Tr}(AB) = \sum_{j=1}^n\sum_{k=1}^n a_{jk}b_{kj}. $$ Conclude by calculating the entry $(BA)_{ii}$ in the same way and comparing both traces.</p>
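The index computation can be checked directly; a short NumPy sketch with arbitrary random matrices:

```python
import numpy as np

n = 4
rng = np.random.default_rng(2)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# The double sum Tr(AB) = sum_{j,k} a_{jk} b_{kj} is symmetric in A and B.
tr_AB = sum(A[j, k] * B[k, j] for j in range(n) for k in range(n))
tr_BA = sum(B[j, k] * A[k, j] for j in range(n) for k in range(n))

print(np.isclose(tr_AB, np.trace(A @ B)))  # True
print(np.isclose(tr_AB, tr_BA))            # True
```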
<blockquote> <p>For any couple $(A,B)$ of $n\times n$ matrices with complex entries, the following identity holds: $$ \operatorname{Tr}(AB) = \operatorname{Tr}(BA).$$</p> </blockquote> <p><strong>Proof</strong>. Assuming $A$ is an invertible matrix, $AB$ and $BA$ share the same characteristic polynomial, since they are conjugate matrices due to $BA = A^{-1}(AB)A$. In particular they have the same trace. Equivalently, they share the same eigenvalues (counted according to their algebraic multiplicity), hence they share the sum of such eigenvalues. On the other hand, if $A$ is a singular matrix then $A_\varepsilon\stackrel{\text{def}}{=} A+\varepsilon I$ is an invertible matrix for any $\varepsilon\neq 0$ small enough. It follows that $\operatorname{Tr}(A_\varepsilon B) = \operatorname{Tr}(B A_\varepsilon)$, and since $\operatorname{Tr}$ is a continuous operator, by considering the limits of both sides as $\varepsilon\to 0$ we get $\operatorname{Tr}(AB)=\operatorname{Tr}(BA)$ just as well.</p>
linear-algebra
<p>I am taking a proof-based introductory course to Linear Algebra as an undergrad student of Mathematics and Computer Science. The author of my textbook (Friedberg's <em>Linear Algebra</em>, 4th Edition) says in the introduction to Chapter 4:</p> <blockquote> <p>The determinant, which has played a prominent role in the theory of linear algebra, is a special scalar-valued function defined on the set of square matrices. <strong>Although it still has a place in the study of linear algebra and its applications, its role is less central than in former times.</strong> </p> </blockquote> <p>He even sets up the chapter in such a way that you can skip going into detail and move on:</p> <blockquote> <p>For the reader who prefers to treat determinants lightly, Section 4.4 contains the essential properties that are needed in later chapters.</p> </blockquote> <p>Could anyone offer a didactic and simple explanation that refutes or asserts the author's statement?</p>
<p>Friedberg is not wrong, at least from a historical standpoint, as I am going to try to show.</p> <p>Determinants were discovered &quot;as such&quot; in the second half of the 18th century by Cramer, who used them in his celebrated rule for the solution of a linear system (in terms of quotients of determinants). Their spread was rather rapid among mathematicians of the next two generations; they discovered properties of determinants that now, with our vision, we mostly express in terms of matrices.</p> <p>Cauchy gave two important results about determinants, as explained in the very nice article by Hawkins referenced below:</p> <ul> <li><p>around 1815, Cauchy discovered the multiplication rule (rows times columns) of two determinants. This is typical of a result that has been completely revamped: nowadays, this rule is for the multiplication of matrices, and determinants' multiplication is restated as the homomorphism rule <span class="math-container">$\det(A \times B)= \det(A)\det(B)$</span>.</p> </li> <li><p>around 1825, he discovered eigenvalues &quot;associated with a symmetric <em>determinant</em>&quot; and established the important result that these eigenvalues are real; this discovery has its roots in astronomy, in connection with Sturm, explaining the name &quot;secular values&quot; he attached to them: see for example <a href="http://www2.cs.cas.cz/harrachov/slides/Golub.pdf" rel="noreferrer">this</a>.</p> </li> </ul> <p>Matrices made a shy appearance in the mid-19th century (in England); &quot;matrix&quot; is a term coined by Sylvester (<a href="http://mathworld.wolfram.com/Matrix.html" rel="noreferrer">see here</a>). I strongly advise taking a look at his elegant style in his <a href="https://archive.org/stream/collectedmathema04sylvuoft#page/n7/mode/2up" rel="noreferrer">Collected Papers</a>.</p> <p>He and his friend Cayley can rightly be named the founding fathers of linear algebra, with determinants as a permanent reference. 
Here is a major quote from Sylvester:</p> <p><em>&quot;I have in previous papers defined a &quot;Matrix&quot; as a rectangular array of terms, out of which different systems of determinants may be engendered as from the womb of a common parent&quot;.</em></p> <p>A lot of important polynomials are either generated by or advantageously expressed as determinants:</p> <ul> <li><p>the characteristic polynomial (of a matrix) is expressed as the famous <span class="math-container">$\det(A-\lambda I)$</span>,</p> </li> <li><p>in particular, the theory of orthogonal polynomials, mainly developed at the end of the 19th century, can be expressed in great part with determinants,</p> </li> <li><p>the &quot;resultant&quot; of two polynomials, invented by Sylvester (giving a condition for these polynomials to have a common root), etc.</p> </li> </ul> <p>Let us repeat it: for a mid-19th century mathematician, a <em>square</em> array of numbers necessarily has a <strong>value</strong> (its determinant): it cannot have any other meaning. If it is a <em>rectangular</em> array, the numbers attached to it are the determinants of the submatrices that can be &quot;extracted&quot; from the array.</p> <p>The identification of &quot;Linear Algebra&quot; as an integral (and new) part of Mathematics is mainly due to the German School (say from 1870 till the 1930s). I don't cite the names; there are too many of them. One example among many others of this German domination: the German-English hybrid word &quot;eigenvalue&quot;. The word &quot;kernel&quot; could just as well have remained the German word &quot;Kern&quot;, which appears around 1900 (see <a href="http://jeff560.tripod.com/mathword.html" rel="noreferrer">this site</a>).</p> <p>The triumph of Linear Algebra is rather recent (mid-20th century), &quot;triumph&quot; meaning that Linear Algebra has now found a very central place. Where do determinants stand in all that? 
Maybe the biggest blade in this Swiss Army knife, but no more; another invariant (this term would deserve a long paragraph by itself), the <strong>trace</strong>, would be another blade, and not the smallest.</p> <p>In the 19th century, geometry was still at the heart of mathematical education; therefore, the connection between geometry and determinants was essential in the development of linear algebra. Some cornerstones:</p> <ul> <li>the development of projective geometry, <em>in its analytical form,</em> in the 1850s. This development led in particular to placing homographies at the heart of projective geometry, with their associated matrix expression. Besides, conic curves, described by a quadratic form, can likewise be written in an entirely matrix expression <span class="math-container">$X^TMX=0$</span> where <span class="math-container">$M$</span> is a symmetric <span class="math-container">$3 \times 3$</span> matrix. This convergence toward a single new &quot;algebra&quot; took time to be recognized.</li> </ul> <p>A side remark: this kind of reflection was pivotal in the Bourbaki team's decision to avoid all figures and adopt the extreme view of reducing geometry to linear algebra (see the <a href="https://hsm.stackexchange.com/q/2578/3730">&quot;Down with Euclid&quot;</a> of J. Dieudonné in the sixties).</p> <p>Different examples of the emergence of new trends:</p> <p>a) the concept of <strong>rank</strong>: for example, a double straight line is a conic section whose matrix has rank 1. The &quot;rank&quot; of a matrix used to be defined in an indirect way as the &quot;size of the largest nonzero determinant that can be extracted from the matrix&quot;. Nowadays, the rank is defined in a more straightforward way as the dimension of the range space... 
at the cost of a little more abstraction.</p> <p>b) the concept of <strong>linear transformations</strong> and <strong>duality</strong> arising from geometry: <span class="math-container">$X=(x,y,t)^T\rightarrow U=MX=(u,v,w)$</span> between points <span class="math-container">$(x,y)$</span> and straight lines with equations <span class="math-container">$ux+vy+w=0$</span>. More precisely, the tangential description, i.e., the constraint on the coefficients <span class="math-container">$U^T=(u,v,w)$</span> of the tangent lines to the conical curve has been recognized as associated with <span class="math-container">$M^{-1}$</span> (assuming <span class="math-container">$\det(M) \neq 0$</span>!), due to relationship</p> <p><span class="math-container">$$X^TMX=X^TMM^{-1}MX=(MX)^T(M^{-1})(MX)=U^TM^{-1}U=0$$</span> <span class="math-container">$$=\begin{pmatrix}u&amp;v&amp;w\end{pmatrix}\begin{pmatrix}A &amp; B &amp; D \\ B &amp; C &amp; E \\ D &amp; E &amp; F \end{pmatrix}\begin{pmatrix}u \\ v \\ w \end{pmatrix}=0$$</span></p> <p>whereas, in 19th century, it was usual to write the previous quadratic form as :</p> <p><span class="math-container">$$\det \begin{pmatrix}M^{-1}&amp;U\\U^T&amp;0\end{pmatrix}=\begin{vmatrix}a&amp;b&amp;d&amp;u\\b&amp;c&amp;e&amp;v\\d&amp;e&amp;f&amp;w\\u&amp;v&amp;w&amp;0\end{vmatrix}=0$$</span></p> <p>as the determinant of a matrix obtained by &quot;bordering&quot; <span class="math-container">$M^{-1}$</span> precisely by <span class="math-container">$U$</span></p> <p>(see the excellent lecture notes (<a href="http://www.maths.gla.ac.uk/wws/cabripages/conics/conics0.html" rel="noreferrer">http://www.maths.gla.ac.uk/wws/cabripages/conics/conics0.html</a>)). 
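</p> <p>This point/line duality is easy to check by direct computation. Below is a small stdlib-only sketch (my illustration, not part of the historical record) verifying that X^T M X = U^T M^(-1) U when U = M X; a symmetric 2x2 matrix M is used for brevity, but the same algebra works in any dimension:</p>

```python
from fractions import Fraction as F

# Symmetric invertible M; exact rational arithmetic avoids rounding noise.
M = [[F(2), F(1)],
     [F(1), F(3)]]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[ M[1][1] / det, -M[0][1] / det],
        [-M[1][0] / det,  M[0][0] / det]]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def quad(A, v):
    # v^T A v
    w = matvec(A, v)
    return sum(v[i] * w[i] for i in range(len(v)))

X = [F(5), F(-7)]      # a "point"
U = matvec(M, X)       # coefficients of the associated "line"

# X^T M X == U^T M^{-1} U, since U^T M^{-1} U = X^T M M^{-1} M X.
assert quad(M, X) == quad(Minv, U)
print(quad(M, X))      # 127
```

<p>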
It is to be said that the idea of linear transformations, especially orthogonal transformations, arose even earlier, in the framework of the theory of numbers (quadratic representations).</p> <p>Remark: the way the former identities have been written uses matrix algebra notations and rules that were unknown in the 19th century, with the notable exception of Grassmann's &quot;Ausdehnungslehre&quot;, whose ideas were too far ahead of their time (1844) to have a real influence.</p> <p>c) the concept of <strong>eigenvector/eigenvalue</strong>, initially motivated by the determination of &quot;principal axes&quot; of conics and quadrics.</p> <ul> <li>the very idea of &quot;geometric transformation&quot; (more or less born with Klein circa 1870) associated with an array of numbers (when linear or projective). A matrix, of course, is much more than an array of numbers... But think, for example, of the persistence of the expression &quot;table of direction cosines&quot; (instead of &quot;orthogonal matrix&quot;), as can still be found in the 2002 edition of Analytical Mechanics by A.I. Lurie.</li> </ul> <p>d) The concept of the &quot;companion matrix&quot; of a polynomial <span class="math-container">$P$</span>, which could be considered a mere tool but is more fundamental than that (<a href="https://en.wikipedia.org/wiki/Companion_matrix" rel="noreferrer">https://en.wikipedia.org/wiki/Companion_matrix</a>). It can be presented and &quot;justified&quot; as a &quot;nice determinant&quot;: in fact, it has much more to say, with the natural interpretation, for example in the framework of <span class="math-container">$\mathbb{F}_p[X]$</span> (polynomials with coefficients in a finite field), as the matrix of multiplication by <span class="math-container">$X$</span> modulo <span class="math-container">$P(X)$</span> (<a href="https://glassnotes.github.io/OliviaDiMatteo_FiniteFieldsPrimer.pdf" rel="noreferrer">https://glassnotes.github.io/OliviaDiMatteo_FiniteFieldsPrimer.pdf</a>), giving rise to matrix representations of such fields. 
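</p> <p>The finite-field interpretation can be made concrete with a tiny sketch (my illustration; the choices p = 3 and P(X) = X^2 + 1 are arbitrary): the companion matrix of P over F_3 behaves exactly like a root of P, giving a 2x2 matrix model of the field with nine elements.</p>

```python
p = 3                     # work over F_3; P(X) = X^2 + 1 is irreducible mod 3
C = [[0, p - 1],          # companion matrix of X^2 + 1: the matrix of
     [1, 0]]              # "multiply by X" in F_3[X]/(P)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % p
             for j in range(2)] for i in range(2)]

I = [[1, 0], [0, 1]]
C2 = matmul(C, C)

# P(C) = C^2 + I = 0 (mod 3): C is a genuine "matrix root" of P.
assert [[(C2[i][j] + I[i][j]) % p for j in range(2)] for i in range(2)] == [[0, 0], [0, 0]]

# X has multiplicative order 4 in F_9, and so does its matrix model C.
Mpow, order = C, 1
while Mpow != I:
    Mpow, order = matmul(Mpow, C), order + 1
print(order)              # 4
```

<p>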
Another remarkable application of companion matrices : the main numerical method for obtaining the roots of a polynomial is by computing the eigenvalues of its companion matrix using a Francis &quot;QR&quot; iteration (see (<a href="https://math.stackexchange.com/q/68433">https://math.stackexchange.com/q/68433</a>)).</p> <p>References:</p> <p>I discovered recently a rather similar question with a very complete answer by Denis Serre, a specialist in the domain of matrices : <a href="https://mathoverflow.net/q/35988/88984">https://mathoverflow.net/q/35988/88984</a></p> <p>The article by Thomas Hawkins : &quot;Cauchy and the spectral theory of matrices&quot;, Historia Mathematica 2, 1975, 1-29.</p> <p>See also (<a href="http://www.mathunion.org/ICM/ICM1974.2/Main/icm1974.2.0561.0570.ocr.pdf" rel="noreferrer">http://www.mathunion.org/ICM/ICM1974.2/Main/icm1974.2.0561.0570.ocr.pdf</a>)</p> <p>An important bibliography is to be found in (<a href="http://www-groups.dcs.st-and.ac.uk/history/HistTopics/References/Matrices_and_determinants.html" rel="noreferrer">http://www-groups.dcs.st-and.ac.uk/history/HistTopics/References/Matrices_and_determinants.html</a>).</p> <p>See also a good paper by Nicholas Higham : (<a href="http://eprints.ma.man.ac.uk/954/01/cay_syl_07.pdf" rel="noreferrer">http://eprints.ma.man.ac.uk/954/01/cay_syl_07.pdf</a>)</p> <p>For conic sections and projective geometry, see a) this excellent chapter of lectures of the University of Vienna (see the other chapters as well) : (<a href="https://www-m10.ma.tum.de/foswiki/pub/Lehre/WS0809/GeometrieKalkueleWS0809/ch10.pdf" rel="noreferrer">https://www-m10.ma.tum.de/foswiki/pub/Lehre/WS0809/GeometrieKalkueleWS0809/ch10.pdf</a>). 
See as well: (maths.gla.ac.uk/wws/cabripages/conics/conics0.html).</p> <p>Don't miss the following very interesting paper about various kinds of useful determinants: <a href="https://arxiv.org/pdf/math/9902004.pdf" rel="noreferrer">https://arxiv.org/pdf/math/9902004.pdf</a></p> <p>See also <a href="https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Matrix_(mathematics).html" rel="noreferrer">this</a>.</p> <p>Further interesting details on determinants can be found in these <a href="https://math.stackexchange.com/q/194579">answers</a>.</p> <p>A fundamental work on &quot;The Theory of Determinants&quot;, in 4 volumes, has been written by Thomas Muir: <a href="http://igm.univ-mlv.fr/%7Eal/Classiques/Muir/History_5/VOLUME5_TEXT.PDF" rel="noreferrer">http://igm.univ-mlv.fr/~al/Classiques/Muir/History_5/VOLUME5_TEXT.PDF</a> (years 1906, 1911, 1922, 1923) for the last volumes, or, for all of them, <a href="https://ia800201.us.archive.org/17/items/theoryofdetermin01muiruoft/theoryofdetermin01muiruoft.pdf" rel="noreferrer">https://ia800201.us.archive.org/17/items/theoryofdetermin01muiruoft/theoryofdetermin01muiruoft.pdf</a>. It is very interesting to open it at random pages and see how pervasive the determinant mania was, especially in the second half of the 19th century. Matrices appear in some places with the double-bar convention, which lasted a very long time. Matrices are mentioned here and there, rarely to their advantage...</p> <p>Many historical details about determinants and matrices can be found <a href="https://mathshistory.st-andrews.ac.uk/HistTopics/Matrices_and_determinants/" rel="noreferrer">here</a>.</p>
<p><strong>It depends who you speak to.</strong></p> <ul> <li>In <strong>numerical mathematics</strong>, where people actually have to compute things on a computer, it is largely recognized that <strong>determinants are useless</strong>. Indeed, in order to compute determinants, either you use the Laplace recursive rule ("violence on minors"), which costs <span class="math-container">$O(n!)$</span> and is infeasible already for very small values of <span class="math-container">$n$</span>, or you go through a triangular decomposition (Gaussian elimination), which by itself already tells you everything you needed to know in the first place. Moreover, for most reasonably-sized matrices containing floating-point numbers, determinants overflow or underflow (try <span class="math-container">$\det \frac{1}{10} I_{350\times 350}$</span>, for instance). To put another nail in the coffin, computing eigenvalues by finding the roots of <span class="math-container">$\det(A-xI)$</span> is hopelessly unstable. In short: in numerical computing, whatever you want to do with determinants, there is a better way to do it without using them.</li> <li>In <strong>pure mathematics</strong>, where people are perfectly fine knowing that an explicit formula exists, all the examples are <span class="math-container">$3\times 3$</span> anyway and people make computations by hand, <strong>determinants are invaluable</strong>. If one uses Gaussian elimination instead, all those divisions complicate computations horribly: one needs to take different paths depending on whether things are zero or not, so when computing symbolically one gets lost in a myriad of cases. The great thing about determinants is that they give you an explicit polynomial formula to tell when a matrix is invertible or not: this is extremely useful in proofs, and allows for lots of elegant arguments. 
For instance, try proving this fact without determinants: given <span class="math-container">$A,B\in\mathbb{R}^{n\times n}$</span>, if <span class="math-container">$A+Bx$</span> is singular for <span class="math-container">$n+1$</span> distinct real values of <span class="math-container">$x$</span>, then it is singular for all values of <span class="math-container">$x$</span>. (With determinants it is immediate: <span class="math-container">$\det(A+Bx)$</span> is a polynomial of degree at most <span class="math-container">$n$</span> in <span class="math-container">$x$</span>, and a nonzero polynomial of that degree cannot have <span class="math-container">$n+1$</span> distinct roots.) This is the kind of thing you need in proofs, and determinants are a priceless tool. Who cares if the explicit formula has an exponential number of terms: those terms have a very nice structure, with lots of neat combinatorial interpretations. </li> </ul>
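<p>A quick stdlib-only illustration of the underflow point made in the first bullet (my sketch, not part of the original answer): the determinant of (1/10) I for a 350x350 identity is 10^(-350), mathematically nonzero, yet far below the smallest positive IEEE double; working with the log-determinant instead is the standard numerical remedy.</p>

```python
import math

n = 350
# det((1/10) * I_n) = 10**(-n): nonzero, but repeated multiplication
# underflows IEEE double precision (smallest positive double ~ 5e-324).
naive = 1.0
for _ in range(n):
    naive *= 0.1
print(naive)                    # 0.0

# Working in log space keeps the quantity representable.
logdet = n * math.log(0.1)      # log|det|, about -805.9
print(logdet)
```

<p>This is why numerical libraries expose log-determinant routines rather than encouraging raw determinants.</p>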
logic
<p>I've heard of some other paradoxes involving sets (i.e., &quot;the set of all sets that do not contain themselves&quot;) and I understand how paradoxes arise from them. But this one I do not understand.</p> <p>Why is &quot;the set of all sets&quot; a paradox? It seems like it would be fine, to me. There is nothing paradoxical about a set containing itself.</p> <p>Is it something that arises from the &quot;rules of sets&quot; that are involved in more rigorous set theory?</p>
<p>Let <span class="math-container">$|S|$</span> be the cardinality of <span class="math-container">$S$</span>. We know that <span class="math-container">$|S| &lt; |2^S|$</span>, which can be proven with <a href="https://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument#General_sets" rel="noreferrer">generalized Cantor's diagonal argument</a>.</p> <hr /> <p><strong>Theorem</strong></p> <blockquote> <p>The set of all sets does not exist.</p> </blockquote> <p><strong>Proof</strong></p> <blockquote> <p>Let <span class="math-container">$S$</span> be the set of all sets, then <span class="math-container">$|S| &lt; |2^S|$</span>. But <span class="math-container">$2^S$</span> is a subset of <span class="math-container">$S$</span>. Therefore <span class="math-container">$|2^S| \leq |S|$</span>. A contradiction. Therefore the set of all sets does <strong>not</strong> exist.</p> </blockquote>
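<p>For finite sets the diagonal argument behind this proof can even be checked exhaustively. A small sketch (illustrative only): for a set S of size 3 there are 8 subsets, and every one of the 8^3 functions f from S to its power set misses its own diagonal set D = {x in S : x not in f(x)}, so no such f is onto.</p>

```python
from itertools import product, combinations

S = [0, 1, 2]
powerset = [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

# Exhaustively check Cantor's diagonal argument: no f : S -> 2^S is onto,
# because the diagonal set D = {x : x not in f(x)} is never in the image.
for values in product(powerset, repeat=len(S)):
    f = dict(zip(S, values))
    D = frozenset(x for x in S if x not in f[x])
    assert all(f[x] != D for x in S)

print(len(powerset))   # 8 = 2**3 > 3, as |S| < |2^S| predicts
```

<p>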
<p>Just by itself the notion of a universal set is not paradoxical.</p> <p>It becomes paradoxical when you add the assumption that whenever $\varphi(x)$ is a formula, and $A$ is a preexisting set, then $\{x\in A\mid \varphi(x)\}$ is a set as well.</p> <p>This is known as bounded comprehension, or separation. The full notion of comprehension was shown to be inconsistent by Russell's paradox. But this version is not so strikingly paradoxical. It is part of many of the modern axiomatizations of set theory, which have yet to be shown inconsistent.</p> <p>We can show that, assuming separation holds, the proof of the Russell paradox really translates to the following thing: if $A$ is a set, then there is a subset of $A$ which is not an element of $A$.</p> <p>In the presence of a universal set this leads to an outright contradiction, because this subset should be an element of the set of all sets, but it cannot be.</p> <p>But we may choose to restrict the formulas which can be used in this schema of axioms. Namely, we can say "not every formula should define a subset!", and that's okay. Quine defined a set theory called New Foundations, in which we limit these formulas in a way which allows a universal set to exist. This makes it consistent to have the set of all sets, if we agree to restrict other parts of our set theory.</p> <p>The problem is that the restrictions given by Quine are much harder to work with naively and intuitively. So we prefer to keep the full bounded comprehension schema, in which case the set of all sets cannot exist, for the reasons above.</p> <p>While we are at it, perhaps it should be mentioned that the Cantor paradox, the fact that the power set of a universal set must be strictly larger, also fails in Quine's New Foundations for the same reasons. 
The proof of Cantor's theorem that the power set is strictly larger simply does not go through without using "forbidden" formulas in the process.</p> <p>Not to mention that the Cantor paradox fails if we do not assume the power set axiom, namely it might be that not all sets have a power set. So if the universal set does not have a power set, there is no problem in terms of cardinality.</p> <p>But again, we are taught from the start that these properties should hold for sets, and therefore they seem very natural to us. So the notion of a universal set is paradoxical for us, for that very reason. We are educated with a bias against universal sets. If you were taught that not all sets should have a power set, or that not all sub-collections of a set which are defined by a formula are sets themselves, then neither solution would be problematic. And maybe even you'd find it strange to think of a set theory without a universal set!</p>
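<p>The fact used above (for every set A, the subset {x in A : x not in x} is never an element of A) can be watched in action on ordinary, well-founded finite sets. In the sketch below (my illustration) no Python frozenset can contain itself, so the Russell subset is all of A, and it is never a member of A:</p>

```python
def russell_subset(A):
    # {x in A : x not in x} -- the subset built in Russell's argument.
    return frozenset(x for x in A if x not in x)

examples = [
    frozenset(),
    frozenset({frozenset(), frozenset({frozenset()})}),
    frozenset({frozenset({1, 2}), frozenset({3})}),
]

for A in examples:
    R = russell_subset(A)
    assert R <= A        # R is a subset of A ...
    assert R not in A    # ... but never one of A's elements

print("every example set has a subset that is not an element of it")
```

<p>A universal set would have to contain its own Russell subset as an element, which is exactly what cannot happen here.</p>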
logic
<p>Consider A $\Rightarrow$ B, A $\models$ B, and A $\vdash$ B.</p> <p>What are some examples contrasting their proper use? For example, give A and B such that A $\models$ B is true but A $\Rightarrow$ B is false. I'd appreciate pointers to any tutorial-level discussion that contrasts these operators.</p> <p>Edit: What I took away from this discussion and the others linked is that logicians make a distinction between $\vdash$ and $\models$, but non-logicians tend to use $\Rightarrow$ for both relations plus a few others. Points go to Trevor for being the first to explain the relevance of <em>completeness</em> and <em>soundness</em>.</p>
<p>First let's compare $A \implies B$ with $A \vdash B$. The former is a statement in the object language and the latter is a statement in the meta-language, so it would make more sense to compare $\vdash A \implies B$ with $A \vdash B$. The rule of <em>modus ponens</em> allows us to conclude $A \vdash B$ from $\vdash A \implies B$, and the deduction theorem allows us to conclude $\vdash A \implies B$ from $A \vdash B$. Probably there are exotic logics where <em>modus ponens</em> fails or the deduction theorem fails, but I'm not sure that's what you're looking for. I think the short answer is that if you put a turnstile ($\vdash$) in front of $A \implies B$ to make it a statement in the meta-language (asserting that the implication is <em>provable</em>) then you get something equivalent to $A \vdash B$.</p> <p>Next let's compare $A \vdash B$ with $A \models B$. These are both statements in the meta-language. The former asserts the existence of a <em>proof</em> of $B$ from $A$ (syntactic consequence) whereas the latter asserts that $B$ <em>holds</em> in every <em>model</em> of $A$ (semantic consequence). Whether these are equivalent depends on what class of models we allow in our logical system, and what deduction rules we allow. If the logical system is <em>sound</em> then we can conclude $A \models B$ from $A \vdash B$, and if the logical system is <em>complete</em> then we can conclude $A \vdash B$ from $A \models B$. These are desirable properties for logical systems to have, but there are logical systems that are not sound or complete. For example, if you remove some essential rule of inference from a system it will cease to be complete, and if you add some invalid rule of inference to the system it will cease to be sound.</p>
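<p>For propositional logic the semantic side of this can be checked mechanically. A tiny sketch (my addition, using the material conditional): in every valuation where both A and A -> B are true, B is true as well, i.e. A, A -> B semantically entails B, which is the soundness of modus ponens.</p>

```python
from itertools import product

def implies(a, b):
    # truth table of the material conditional A -> B
    return (not a) or b

# A, A -> B  |=  B : every valuation satisfying both premises satisfies B.
for A, B in product([False, True], repeat=2):
    if A and implies(A, B):
        assert B

print("modus ponens is sound for the material conditional")
```

<p>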
<p>@Trevor's answer makes the crucial distinctions which need to be made: there's no disagreement at all about that. Symbolically, I'd put things just a bit differently. Consider first <em>these</em> three:</p> <p>$$\to,\quad \vdash,\quad \vDash$$</p> <ol> <li>'$\to$' (or '$\supset$') is a symbol belonging to various formal languages (e.g. the language of propositional logic or the language of the first-order predicate calculus) to express [usually, but not always] the truth-functional conditional. $A \to B$ is a single conditional proposition in the object language under consideration.</li> <li>'$\vdash$' is an expression added as useful shorthand to logician's English (or Spanish or whatever) -- it belongs to the metalanguage in which we talk about consequence relations between formal sentences. Unpacked, $A, A \to B \vdash B$ says in augmented English that in some relevant deductive system, there is a proof from the premisses $A$ and $A \to B$ to the conclusion $B$. (If we are being really pernickety we would write '$A$', '$A \to B$' $\vdash$ '$B$' but it is always understood that $\vdash$ comes with invisible quotes.)</li> <li>'$\vDash$' is another expression added to logician's English (or Spanish or whatever) -- it again belongs to the metalanguage in which we talk about consequence relations between formal sentences. And e.g. $A, A \to B \vDash B$ says that in the relevant semantics, there is no valuation which makes the premisses $A$ and $A \to B$ true and the conclusion $B$ false.</li> </ol> <p>As for '$\Rightarrow$', this -- like the informal use of 'implies' -- seems to be used (especially by non-logicians), in different contexts for any of these three. It is also used, differently again, for the relation of so-called strict implication, or as punctuation in a sequent. So I'm afraid you do just have to be careful to let context disambiguate. The use of the two kinds of turnstile is absolutely standardised. The use of the double arrow isn't.</p>
probability
<p>Need some help.</p> <p>I'm a first-year teacher, moving into a probability unit.</p> <p>I was looking through my given materials, and I think I've found two errors. It's been a long time since I've done probability, and I don't want to ask staff for fear of sounding stupid.</p> <p>I created a Word document that I took a screenshot of, in order to hopefully get some confirmation.</p> <p>Anything inside the orange boxes is from my course materials; anything outside is work I did myself in Word. The red box on the right is just highlighting where I think the error occurs.</p> <p>Any help would be greatly appreciated.</p> <p><a href="https://i.sstatic.net/zbo9u.png" rel="noreferrer"><img src="https://i.sstatic.net/zbo9u.png" alt="enter image description here"></a></p> <p>Edit: I noticed a small error on my part. The absolute last thing I wrote should say 3/16, since that is how the book defines P(A and B), not 11/16 as I suggested.</p>
<p>Not only are you right, but the second question is quite weird: it asks for $P(A\cap B)$ when, in the end, the final result is supposedly $P(A\cup B)$. Moreover, $A$ <strong>contains</strong> $B$, which is never pointed out; the term "overlapping" is very misleading in this case. If I were you, I would throw this material away, or at least be very suspicious about every statement it makes.</p>
<p>I'm pretty sure it's an error in the course materials. The events "the number is odd" and "the number is prime" are dependent (an even number cannot be prime unless it is $2$), so the rule $P(A)*P(B)=P(A\cap B)$ is not valid here (it is only valid for independent events). So: trust your instincts, you are correct.</p>
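<p>The dependence is easy to quantify on a toy sample space (an illustration, not the book's exact spinner): on the integers 1 through 10, P(odd) = 1/2 and P(prime) = 2/5, but P(odd and prime) = 3/10, not 1/2 * 2/5 = 1/5.</p>

```python
from fractions import Fraction as F

space = range(1, 11)                 # a hypothetical sample space: 1..10
odd = {n for n in space if n % 2 == 1}
prime = {n for n in space if n > 1 and all(n % d for d in range(2, n))}

def P(event):
    return F(len(event), len(space))

# The product rule fails because the events are dependent.
assert P(odd) == F(1, 2) and P(prime) == F(2, 5)
assert P(odd & prime) == F(3, 10)
assert P(odd) * P(prime) != P(odd & prime)
print(P(odd & prime))    # 3/10
```

<p>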
logic
<p>I am doing some homework exercises and stumbled upon this question. I don't know where to start. </p> <blockquote> <p>Prove that the union of countably many countable sets is countable.</p> </blockquote> <p>Just reading it confuses me. </p> <p>Any hints or help is greatly appreciated! Cheers!</p>
<p>Let's start with a quick review of "countable". A set is countable if we can set up a 1-1 correspondence between the set and the natural numbers. As an example, let's take <span class="math-container">$\mathbb{Z}$</span>, which consists of all the integers. Is <span class="math-container">$\mathbb Z$</span> countable?</p> <p>It may seem uncountable if you pick a naive correspondence, say <span class="math-container">$1 \mapsto 1$</span>, <span class="math-container">$2 \mapsto 2 ...$</span>, which leaves all of the negative numbers unmapped. But if we organize the integers like this:</p> <p><span class="math-container">$$0$$</span> <span class="math-container">$$1, -1$$</span> <span class="math-container">$$2, -2$$</span> <span class="math-container">$$3, -3$$</span> <span class="math-container">$$...$$</span></p> <p>We quickly see that there is a map that works. Map 1 to 0, 2 to 1, 3 to -1, 4 to 2, 5 to -2, etc. So given an element <span class="math-container">$x$</span> in <span class="math-container">$\mathbb Z$</span>, we either have that <span class="math-container">$1 \mapsto x$</span> if <span class="math-container">$x=0$</span>, <span class="math-container">$2x \mapsto x$</span> if <span class="math-container">$x &gt; 0$</span>, or <span class="math-container">$2|x|+1 \mapsto x$</span> if <span class="math-container">$x &lt; 0$</span>. So the integers are countable.</p> <p>We proved this by finding a map between the integers and the natural numbers. So to show that the union of countably many sets is countable, we need to find a similar mapping. First, let's unpack "the union of countably many countable sets is countable":</p> <ol> <li><p>"countable sets" pretty simple. 
If <span class="math-container">$S$</span> is in our set of sets, there's a 1-1 correspondence between elements of <span class="math-container">$S$</span> and <span class="math-container">$\mathbb N$</span>.</p></li> <li><p>"countably many countable sets" we have a 1-1 correspondence between <span class="math-container">$\mathbb N$</span> and the sets themselves. In other words, we can write the sets as <span class="math-container">$S_1$</span>, <span class="math-container">$S_2$</span>, <span class="math-container">$S_3$</span>... Let's call the set of sets <span class="math-container">$\{S_n\}, n \in \mathbb N$</span>.</p></li> <li><p>"union of countably many countable sets is countable". There is a 1-1 mapping between the elements in <span class="math-container">$\mathbb N$</span> and the elements in <span class="math-container">$S_1 \cup S_2 \cup S_3 ...$</span></p></li> </ol> <p>So how do we prove this? We need to find a correspondence, of course. Fortunately, there's a simple way to do this. Let <span class="math-container">$s_{nm}$</span> be the <span class="math-container">$mth$</span> element of <span class="math-container">$S_n$</span>. We can do this because <span class="math-container">$S_n$</span> is by definition of the problem countable. We can write the elements of ALL the sets like this:</p> <p><span class="math-container">$$s_{11}, s_{12}, s_{13} ...$$</span> <span class="math-container">$$s_{21}, s_{22}, s_{23} ...$$</span> <span class="math-container">$$s_{31}, s_{32}, s_{33} ...$$</span> <span class="math-container">$$...$$</span></p> <p>Now let <span class="math-container">$1 \mapsto s_{11}$</span>, <span class="math-container">$2 \mapsto s_{12}$</span>, <span class="math-container">$3 \mapsto s_{21}$</span>, <span class="math-container">$4 \mapsto s_{13}$</span>, etc. You might notice that if we cross out every element that we've mapped, we're crossing them out in diagonal lines. 
With <span class="math-container">$1$</span> we cross out the first diagonal, <span class="math-container">$2-3$</span> we cross out the second diagonal, <span class="math-container">$4-6$</span> the third diagonal, <span class="math-container">$7-10$</span> the fourth diagonal, etc. The <span class="math-container">$n$</span>th diagonal requires us to map <span class="math-container">$n$</span> elements to cross it out. Since we never &quot;run out&quot; of elements in <span class="math-container">$\mathbb N$</span>, eventually, given any diagonal, we'll create a map to every element in it. Since obviously every element in <span class="math-container">$S_1 \cup S_2 \cup S_3 ...$</span> is in one of the diagonals, we've created a map from <span class="math-container">$\mathbb N$</span> onto the union <span class="math-container">$S_1 \cup S_2 \cup S_3 ...$</span> (if some element appears more than once, simply skip the repeats to get a genuine 1-1 correspondence).</p> <p>Let's extend this one step further. What if we made <span class="math-container">$s_{11} = 1/1$</span>, <span class="math-container">$s_{12} = 1/2$</span>, <span class="math-container">$s_{21} = 2/1$</span>, etc.? Then <span class="math-container">$S_1 \cup S_2 \cup S_3 ... = \mathbb Q^+$</span>! This is how you prove that the rationals are countable. Well, the positive rationals anyway. Can you extend these proofs to show that all the rationals are countable?</p>
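<p>The diagonal walk described in this answer can be written out mechanically. In the sketch below (my addition), the pair (n, m) stands for the element s_nm, and the pairs are produced in exactly the order 1 -> s_11, 2 -> s_12, 3 -> s_21, 4 -> s_13, ...:</p>

```python
# Enumerate index pairs (n, m) diagonal by diagonal, as in the proof.
def diagonal_walk(limit):
    out, d = [], 1
    while len(out) < limit:
        for n in range(1, d + 1):          # d-th diagonal: n + m = d + 1
            out.append((n, d + 1 - n))
        d += 1
    return out[:limit]

print(diagonal_walk(10))
# -> [(1, 1), (1, 2), (2, 1), (1, 3), (2, 2), (3, 1), (1, 4), (2, 3), (3, 2), (4, 1)]

# Any fixed pair (n, m) is reached: it sits on diagonal n + m - 1, so it
# appears among the first (n + m - 1)(n + m)/2 values.
assert (3, 2) in diagonal_walk(20)
```

<p>Reading (n, m) as the fraction n/m turns this same walk into the classical enumeration of the positive rationals.</p>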
<p>@Hovercouch's answer is correct, but the presentation hides a really rather important point that you ought probably to know about. Here it is:</p> <blockquote> <p>The argument depends on accepting (a weak version of) the Axiom of Choice!</p> </blockquote> <p>Why so?</p> <p>You are only given that each $S_i$ is <em>countable</em>. You aren't given up front a way of <em>counting</em> any particular $S_i$, so you need to choose a surjective function $f_i\colon \mathbb{N} \to S_i$ to do the counting (in @Hovercouch's notation, $f_m(n) = s_{mn}$). And, crucially, you need to choose such an $f_i$ countably many times (a choice for each $i$). </p> <p>That's an infinite sequence of choices to make: and it's a version of the highly non-trivial Axiom of Choice that says, yep, it's legitimate to pretend we can do that. </p>
geometry
<p>Today we learned about Pythagoras' theorem. Sadly, I can't understand the logic behind it.</p> <p><em>$A^{2} + B^{2} = C^{2}$</em></p> <p><img src="https://i.sstatic.net/RTMeE.png" alt="enter image description here"></p> <p>$C^{2} = (5 \text{ cm})^2 + (7 \text{ cm})^2$ <br> $C^{2} = 25 \text{ cm}^2 + 49 \text{ cm}^2$ <br> $C^{2} = 74 \text{ cm}^2$<br><br> ${x} = +\sqrt{74} \text{ cm}$<br></p> <p>Why does the area of a square with a side of $5$&nbsp;cm + the area of a square with a side of $7$&nbsp;cm always equal to the missing side's length squared?</p> <p>I asked my teacher but she's clueless and said Pythagoras' theorem had nothing to do with squares.</p> <p>However, I know it does because this formula has to somehow make sense. Otherwise, it wouldn't exist.</p>
<p>Ponder this image and you will see why your intuition about what is going on is correct:</p> <p><img src="https://i.sstatic.net/POhH1.png" alt="enter image description here"></p>
<p>It is not really important that the figures you put on the edges of your right triangle are squares: they can be any shape, since the area of a figure scales with the square of its side length. So you can put pentagons, stars, or a camel's shape... In the following picture the area of the large pentagon is the sum of the areas of the two smaller ones:</p> <p><img src="https://i.sstatic.net/ZTtcc.png" alt="I&#39;m am not able to rescale Camels with geogebra... so here are pentagons"></p> <p>But you can also put a triangle similar to the original one, and you can put it <em>inside</em> instead of <em>outside</em>. So here comes the proof of Pythagoras' theorem.</p> <p><img src="https://i.sstatic.net/6LOE5.png" alt="enter image description here"></p> <p>Let $ABC$ be your triangle with a right angle at $C$. Let $H$ be the projection of $C$ onto $AB$. You can easily notice that $ABC$, $ACH$ and $CBH$ are similar, and they are the triangles constructed on the edges of $ABC$. Clearly the area of $ABC$ is the sum of the areas of $ACH$ and $CBH$, so the theorem is proven.</p> <p>In my opinion this is the proof which gives the essence of the theorem.</p>
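<p>The similar-triangle proof can be checked numerically on a concrete right triangle (a sketch; legs 3 and 4 are an arbitrary choice): the altitude foot H splits ABC into two triangles that tile it, and each has area (leg/hypotenuse)^2 times the area of ABC, which is exactly the content of the proof above.</p>

```python
import math

a, b = 3.0, 4.0                      # legs; right angle at C = (0, 0)
A, B, C = (a, 0.0), (0.0, b), (0.0, 0.0)
c = math.hypot(a, b)                 # hypotenuse AB

# Foot H of the altitude from C onto AB (projection of the origin).
t = a * a / (a * a + b * b)          # parameter of H along A -> B
H = (a * (1 - t), b * t)

def area(P, Q, R):
    # shoelace formula for the area of triangle PQR
    return abs((Q[0] - P[0]) * (R[1] - P[1]) - (R[0] - P[0]) * (Q[1] - P[1])) / 2

# The two small triangles tile the big one ...
assert math.isclose(area(A, C, H) + area(C, B, H), area(A, B, C))
# ... and each is similar to ABC with ratio leg / hypotenuse, so the
# areas scale by (a/c)**2 and (b/c)**2, forcing a**2 + b**2 = c**2.
assert math.isclose(area(A, C, H), (a / c) ** 2 * area(A, B, C))
assert math.isclose(area(C, B, H), (b / c) ** 2 * area(A, B, C))
print(a * a + b * b, c * c)          # 25.0 25.0
```

<p>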
differentiation
<p>How would one find: $$\frac{\mathrm d}{\mathrm dx}{}^xx?$$ where ${}^ba$ is defined by $${}^ba\stackrel{\mathrm{def}}{=}\underbrace{ a^{a^{\cdot^{\cdot^{\cdot^a}}}}}_{\text{$b$ times}}$$</p> <hr> <p><strong>Work so far</strong></p> <p>The interval that I am working in is $(0, \infty)$; it doesn't make much sense to consider negative numbers. Although there is no canonical extension of tetration to the reals, I am going to assume that one exists. My theory is that this shouldn't change the algebra involved (correct me if I am wrong).</p> <p>Some visual analysis of the curve shows that it diverges to $+\infty$ extremely rapidly. This means that the derivative will have similar properties as well.</p> <p>Let $f(x, y):={}^yx$ so we can rewrite our tetration as $f(x, x)$. Now, by the chain rule, the derivative along the diagonal is $D\;g(x, x)=\partial_xg(x, y)+\partial_yg(x, y)$ evaluated at $y=x$. This should allow us to differentiate $f$.</p> <p>$$D\;f(x,x)=\frac{\partial}{\partial x}{}^yx+\frac{\partial}{\partial y}{}^yx$$</p> <p>Let's focus on the first partial derivative $\partial_x{}^yx$. This is just the case of differentiating a finite power tower, as $y$ is treated as constant. 
</p> <p>First, let's look at some examples to derive a general formula for $D\;\;{}^nx$: $$ \begin{array}{c|c} n &amp; D\;\;{}^nx\\ \hline 0 &amp; 0\\ 1 &amp; 1\\ 2 &amp; {}^2x(\log x + 1)\\ 3 &amp; {}^3x\times {}^2x\times x^{-1}(x\log x(\log x + 1)+1) \end{array} $$</p> <p>It is easy to see that some pattern is emerging; however, because of its recursive nature I could not form a formula to describe it.</p> <blockquote> <p><strong>Edit</strong></p> <p>$$\dfrac{d}{dx}\left(e^{{}^nx \log(x)}\right)={}^{n+1}x\dfrac{d}{dx}\left({}^nx \log(x)\right)={}^{n+1}x\left(({}^nx)' \log(x)+\frac{{}^nx}{x}\right)$$ The recursive formula for the partial derivative was pointed out in the comments; however, an explicit formula would be more useful for this purpose.</p> </blockquote> <p>The second partial derivative is interesting and relies on properties of tetration. </p> <p>I was hoping for it to be similar to exponentiation, such that $$D_y \;x^y=D_y\;e^{y\log x}=e^{y \log x}\log x\;D_y\;y=x^y\log x$$ However, I am not sure of an '$e$ for tetration', but I hope it would be something like this: $$D_y \;{}^yx=D_y\;{}^{y\;\text{slog} x}t={}^{y\;\text{slog} x}t\;\text{slog} x\;D_y\;y={}^yx\;\text{slog}\; x$$ where $\text{slog}$ denotes the super logarithm (slogorithm), an inverse of tetration.</p> <blockquote> <p><strong>Edit</strong></p> <p>This may as well be a possible identity which can easily be applied to the above: $$\text{slog}\;\left({}^yx\right)=\text{slog}^y\;(x)$$</p> </blockquote> <p>I am unsure about using slogorithms and tetration in this way and I feel I might just be abusing notation.</p> <hr> <p><strong>Work on Tetration</strong></p> <p>I will update this section with more rigorous definitions and properties of tetration. 
I cannot prove all of them now.</p> <p>For $x\in \Bbb R$ and $n \in \Bbb N$, $${}^nx:=\underbrace{ x^{x^{\cdot^{\cdot^{\cdot^x}}}}}_{\text{$n$ times}}\tag{1}$$</p> <p><strike>For $x\in\Bbb R$ and $a,b\in\Bbb N$?,</strike> $${{}^b({}^ax)={}^{a^b}x\tag{$\not2$}}$$ Through simple algebra you can find that the above is not the case.</p> <hr> <h2>Update:</h2> <p>This is just differentiating the pentation function.</p>
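The table of derivatives above can be cross-checked numerically (my own sketch, not part of the question), using the convention ${}^0x = 1$ and a central finite difference:

```python
import math

def tower(x, n):
    # ^n x: power tower of height n, with ^0 x = 1
    r = 1.0
    for _ in range(n):
        r = x ** r
    return r

def num_deriv(f, x, h=1e-6):
    # central finite difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.5
# n = 2: D ^2x = ^2x (log x + 1)
exact2 = tower(x, 2) * (math.log(x) + 1)
approx2 = num_deriv(lambda t: tower(t, 2), x)
# n = 3: D ^3x = ^3x * ^2x * x^{-1} * (x log x (log x + 1) + 1)
exact3 = tower(x, 3) * tower(x, 2) / x * (x * math.log(x) * (math.log(x) + 1) + 1)
approx3 = num_deriv(lambda t: tower(t, 3), x)
```

Both pairs agree to several decimal places, which supports the table entries for $n = 2, 3$.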
<p>It seems to me that before much more progress can be made in the <em>calculus</em> of <span class="math-container">${}^xy$</span>, more fundamental questions have to be answered, such as: how to define <span class="math-container">${}^xy$</span> for <em>rational</em> <span class="math-container">$x$</span>? It's clear how the OP's definition works if <span class="math-container">$x$</span> is a non-negative integer; but how do we define <span class="math-container">${}^xy$</span> if, say, <span class="math-container">$x = 7/2$</span>? What then is &quot;one-half&quot; of an occurrence of <span class="math-container">$x$</span> in the exponential &quot;tower&quot; which is supposed to be <span class="math-container">${}^xy$</span>?</p> <p>I am reminded here of the way <span class="math-container">$x^y$</span> is extended from integers through the reals, by starting with a careful, consistent and believable definition of <span class="math-container">$(p / q)^{(r / s)}$</span> for integral <span class="math-container">$p, q, r, s$</span>; once we have that, a simple, consistent and believable continuity argument allows us to accept a definition of <span class="math-container">$x^y$</span> for real <span class="math-container">$x, y &gt; 0$</span>. We know what <span class="math-container">$(p / q)^r = (p^r / q^r)$</span> <em>means</em>; we know what it means for a positive real <span class="math-container">$z$</span> to satisfy <span class="math-container">$z^s = (p / q)^r$</span>, so we can get a handle on <span class="math-container">$(p / q)^{(r / s)}$</span> from which, by continuity, we can generalize to <span class="math-container">$x^y$</span>. I think an analogous method is needed here, but I don't know what it is. 
But I think my question of the preceding paragraph might be worth considering early on in this game.</p> <p>Of course, perhaps there is a (reasonably) simple, consistent and believable argument to construct <span class="math-container">${}^xy$</span> using <span class="math-container">$\exp()$</span>, <span class="math-container">$\log()$</span>, etc., or some sort of differential or similar equation <span class="math-container">${}^xy$</span> must satisfy, or perhaps one could learn something from the <span class="math-container">$\Gamma$</span> function and factorials here which would bypass, at least temporarily, the need to address how <span class="math-container">${}^{(p / q)}(r / s)$</span> is supposed to work, but sooner or later the question will have to be faced, I'll warrant.</p> <p>This is an interesting, though speculative, arena and I am glad to have participated. But until I can answer my own questions to my better satisfaction, I will refrain from further remarks, except to bid those who are ready to climb such unknown heights, &quot;<em><strong>Excelsior!</strong></em>&quot;</p> <p>Hope this helps, at least with the spirit of the adventure if not with the direction. Happy New Year,</p> <p>and as always,</p> <p><em><strong>Fiat Lux!!!</strong></em></p>
<p>Your "$e$ for tetration" cannot exist. Indeed let $T(x,y)={}^yx$ be the extended tetration and let $t$ be the analogue to $e$ for tetration, in the sense that $$\partial_yT(x,y)=\partial_yT(t,y\operatorname{slog}x) = T(t,y\operatorname{slog}x)\operatorname{slog}x$$ Here $\operatorname{slog}x$ is the (unique, if $T$ is strictly increasing in its second argument) positive real number such that $x=T(x,1)=T(t,\operatorname{slog}x)$. Then $\operatorname{slog}t=1$ and thus by the above we have $$\partial_yT(t,y)=T(t,y)$$ which implies $T(t,y)=T(t,0)e^y=e^y$, which is obviously a contradiction.</p>
geometry
<p>Wait before you dismiss this as a crank question :)</p> <p>A friend of mine teaches school kids, and the book she uses states something to the following effect: </p> <blockquote> <p>If you divide the circumference of <em>any</em> circle by its diameter, you get the <em>same</em> number, and this number is an irrational number which starts off as $3.14159... .$</p> </blockquote> <p>One of the smarter kids in class now has the following doubt: </p> <blockquote> <p>Why is this number equal to $3.14159....$? Why is it not some <em>other</em> irrational number?</p> </blockquote> <p>My friend is in a fix as to how to answer this in a sensible manner. Could you help us with this?</p> <p>I have the following idea about how to answer this: Show that the ratio must be greater than $3$. Now show that it must be less than $3.5$. Then show that it must be greater than $3.1$. And so on ... . </p> <p>The trouble with this is that I don't know of any easy way of doing this, which would also be accessible to a school student.</p> <p><strong><em>Could you point me to some approximation argument of this form?</em></strong></p>
<p>If the kids are not too old you could try the following visual approach, which is very straightforward. Build a few models of a circle out of paperboard, then take a wire and wrap it tightly around one. Mark the point where the wire meets itself again, then unroll it and measure its length. You will get something like 3.14...</p> <p><img src="https://i.sstatic.net/FX2gW.gif" alt="Pi unrolled"></p> <p>Now let them measure the diameter and circumference of different circles themselves and plot the results in a graph. Tadaa: they will see that it's proportional, and this is something they (hopefully) already know.</p> <p>Or use the approximation via Archimedes' algorithm. Still, it's not really great, as they will have to handle big numbers, and the result is rather disappointing: it doesn't reveal the irrationality of pi and just gives them a more accurate value of $\pi$.</p>
<p>You can try doing what Archimedes did: using polygons inside and outside the circle.</p> <p><a href="http://itech.fgcu.edu/faculty/clindsey/mhf4404/archimedes/archimedes.html" rel="nofollow">Here is a webpage which seems to have a good explanation.</a></p> <p>Another method one can try is to use the fact that the area of the circle is $\displaystyle \pi r^2$. Take a square, inscribe a circle. Now randomly select points in the square (perhaps by having many students throw pebbles etc. at the figure, or using rain, or a computer, etc.). Compute the ratio of the points which are within the circle to the total. This ratio should be approximately the ratio of the areas $ = \displaystyle \frac{\pi}{4}$. Of course, this is susceptible to experimental errors :-)</p> <p>Or maybe just have them compute the approximate area of a circle using graph paper, or the approximate perimeter using a piece of string.</p> <p>Not sure how convincing the physical experiments might be, though.</p>
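Archimedes' polygon method can be sketched in a few lines (my own implementation, not from the linked page; the doubling steps are the harmonic and geometric means of the circumscribed and inscribed semi-perimeters):

```python
import math

# Semi-perimeters for a circle of radius 1: a = circumscribed regular
# n-gon, b = inscribed one. Start with hexagons (n = 6).
a, b = 2 * math.sqrt(3), 3.0
for _ in range(4):              # double the number of sides: 6 -> 96
    a = 2 * a * b / (a + b)     # harmonic mean gives the new circumscribed value
    b = math.sqrt(a * b)        # geometric mean gives the new inscribed value
# Archimedes stopped at the 96-gon, which traps pi: b < pi < a
print(b, a)
```

This yields roughly $3.1410 &lt; \pi &lt; 3.1427$, which is where Archimedes' famous bounds $3\tfrac{10}{71} &lt; \pi &lt; 3\tfrac{1}{7}$ come from.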
game-theory
<p>I usually don't care too much about the practical relevance of nice mathematics :-) But this time, as I am looking to find some areas where I can apply maths and possibly collaborate with non-mathematicians who are experts in other fields, I wonder whether game theoretical methods really have practical relevance. Are there institutions (policy makers, administrative entities, enterprises, other organisations) who are using game theory for practical analysis and for strategic decisions?</p> <p>If so, I would be very happy to get some hints/references to application case studies or the like, if they exist.</p> <p>Thank you!</p>
<p>The following is an opinion regarding your first question ("has game theory any practical relevance?") by Ariel Rubinstein, a famous game theorist. I do not agree with him, but he has an interesting argument :</p> <p>From <a href="http://arielrubinstein.tau.ac.il/articles/FRANKFURTER_ALLGEMEINE_eng.pdf" rel="noreferrer">an article in the Frankfurter Allgemeine</a> (hope this is not infringing on anyone's copyright...)</p> <blockquote> <p>"I have devoted most of my life to economic theory and game theory. I believe that I would like to do some good for humankind and, in particular, for the people in Israel, the country where I was born and where I make my home. I would like to make an impact and redress injustices. Ostensibly, all this should motivate me to utilize my professional knowledge in order to bring some relief to the world. But, the thing is, that is not how I feel. [...]</p> <p>The heart of game theory is not empirical science. It does not study how people actually behave in strategic situations. It is doubtful whether it is even possible to generalize about the way people will behave in a situation like the Hide and Seek Game. After all, people are diverse. [...]</p> <p>Game theory is written in a mathematical language. [...] Personally, the nearly magical connection between the symbols and the words in game theory is what captivated me. But there are also disadvantages: The formal language greatly limits the audience that really understands it; the abstraction blurs factors that natural thought takes into account and the formality creates an illusion that the theory is scientific.</p> <p>Game theory fascinates me. It addresses the roots of human thought in strategic situations. However, the use of concepts from natural language, together with the use of ostensibly “scientific” tools, tempt people to turn to game theory for answers to questions such as: How should a system of justice be built? Should a state maintain a system of nuclear deterrence? 
Which coalition should be formed in a parliamentary regime? Nearly every book on game theory begins with the sentence: “Game theory is relevant to …” and is followed by an endless list of fields, such as nuclear strategy, financial markets, the world of butterflies and flowers, and intimate situations between men and women. Articles citing game theory as a source for resolving the world’s problems are frequently published in the daily press. But after nearly forty years of engaging in this field, I have yet to find even a single application of game theory in my daily life. [...]</p> <p>In my view, game theory is a collection of fables and proverbs. Implementing a model from game theory is just as likely as implementing a fable. A good fable enables us to see a situation in life from a new angle and perhaps influence our action or judgment one day. But it would be absurd to say that “The Emperor’s New Clothes” predicts the path of Berlusconi [...]</p> <p>The search for the practical meaning of game theory derives from the perception that academic teaching and research directly benefit society. This is not my worldview. Research universities, particularly in the fields of the humanities and social sciences, are part of a cultural fabric. Culture is gauged by how interesting and challenging it is, and not by the benefit it brings. I believe that game theory is part of the culture that ponders the way we think. This is an ideal that can be achieved in many ways – literature, art, brain research and yes, game theory too. If someone also finds a practical use for game theory, that would be great. 
But in my view, universities are supposed to be “God’s little acre,” where society fosters what is interesting, intriguing, aesthetic and intellectually challenging, and not necessarily what is directly beneficial.</p> </blockquote> <p>I think Rubinstein develops these ideas in yet another paper or interview but I could not find it (in this other paper, he notably argued that if game theory has ever been useful to anyone, it is to the wealthy and to the powerful and that, as a consequence, it did not help foster anything like "social justice"). Does anyone have a clue about that?</p>
<p>One of the best examples is the application of Game Theory to the theory of auctions. Prominent game theorists were on both sides of the electromagnetic spectrum auctions, and one got a knighthood out of it. <a href="http://www.bristol.ac.uk/efm/people/kenneth-g-binmore/" rel="nofollow">http://www.bristol.ac.uk/efm/people/kenneth-g-binmore/</a></p> <p>The questions of how to design the auction, and how best to bid in the auction, is classic applied game theory, with millions and perhaps billions of dollars at stake. <a href="http://www.cramton.umd.edu/papers2000-2004/01hte-spectrum-auctions.pdf" rel="nofollow">http://www.cramton.umd.edu/papers2000-2004/01hte-spectrum-auctions.pdf</a></p>
combinatorics
<p>I checked several thousand natural numbers and observed that $\lfloor n!/e\rfloor$ seems to always be an even number. Is it indeed true for all $n\in\mathbb N$? How can we prove it?</p> <p>Are there any positive irrational numbers $a\ne e$ such that $\lfloor n!/a\rfloor$ is even for all $n\in\mathbb N$?</p>
<p>Note that:</p> <p>$$e^{-1}=\sum_{k=0}^\infty \frac{(-1)^k}{k!}$$</p> <p>Then:</p> <p>$$\frac{n!}e=n!e^{-1} = \left(\sum_{k=0}^{n} (-1)^k\frac{n!}{k!}\right) + \sum_{k=n+1}^{\infty} (-1)^{k}\frac{n!}{k!}$$</p> <p>Show that if $a_n=\sum_{k=n+1}^{\infty} (-1)^{k}\frac{n!}{k!}$ then $0&lt;|a_{n}|&lt;1$ and $a_n&gt;0$ if and only if $n$ is odd.</p> <p>So when $n$ is odd, the value is: $$\left\lfloor\frac{n!}{e}\right\rfloor=\sum_{k=0}^{n} (-1)^k\frac{n!}{k!}\tag{1}$$ When $n$ is even it is one less: $$\left\lfloor\frac{n!}{e}\right\rfloor=-1+\sum_{k=0}^{n} (-1)^k\frac{n!}{k!}\tag{2}$$</p> <p>Now, almost all of these terms are even. The last term $n!/n!=1$ is odd. When $n$ is odd, the second-to-last term $n!/(n-1)!$ is also odd. But all other terms are even.</p> <p>So for $n$ odd, there are two odd terms in the sum, $k=n,n-1$.</p> <p>For $n$ even, there are two odd terms in the sum, $-1$ and $k=n.$</p> <hr> <p>The trick, then, is to show that $a_n$ has these properties: $$\begin{align} &amp;0&lt;|a_n|&lt;1\\ &amp;a_n&gt;0\iff n\text{ is odd} \end{align}$$</p> <p>To show these, we note that $\frac{n!}{k!}$ is strictly decreasing for $k&gt;n$ and $(-1)^k\frac{n!}{k!}$ is alternating. In general, any alternating series with strictly decreasing terms converges to a value strictly between $0$ and its first term, which in this case is $\frac{(-1)^{n+1}}{n+1}.$</p>
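The bracketing argument can be checked with exact rational arithmetic (a sketch of mine, not from the answer): consecutive partial sums of the alternating series trap $n!/e$, so once two of them share a floor, that floor is $\lfloor n!/e\rfloor$.

```python
from fractions import Fraction
from math import factorial, floor

def floor_n_fact_over_e(n):
    # n!/e lies strictly between consecutive partial sums of
    # n! * sum_k (-1)^k / k! once k > n (alternating, strictly
    # decreasing tail), so a repeated floor is floor(n!/e) itself.
    s = Fraction(0)
    prev_floor = None
    k = 0
    while True:
        s += Fraction((-1) ** k * factorial(n), factorial(k))
        f = floor(s)
        if k > n and prev_floor == f:
            return f
        prev_floor = f
        k += 1

# floor(n!/e) comes out even for every n >= 1 we try
assert all(floor_n_fact_over_e(n) % 2 == 0 for n in range(1, 13))
```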
<p>The number of <a href="https://en.wikipedia.org/wiki/Derangement">derangements</a> of $[n]=\{1,\ldots,n\}$ is </p> <p>$$d_n=n!\sum_{k=0}^n\frac{(-1)^k}{k!}\;,$$</p> <p>so</p> <p>$$\frac{n!}e-d_n=n!\sum_{k&gt;n}\frac{(-1)^k}{k!}\;,$$</p> <p>which is less than $\frac1{n+1}$ in absolute value. Thus for $n\ge 1$, $d_n$ is the integer nearest $\frac{n!}e$, and</p> <p>$$d_n=\begin{cases} \left\lfloor\frac{n!}e\right\rfloor,&amp;\text{if }n\text{ is odd}\\\\ \left\lceil\frac{n!}e\right\rceil,&amp;\text{if }n\text{ is even}\;. \end{cases}$$</p> <p>The recurrence $d_n=nd_{n-1}+(-1)^n$ is also well-known. We have $d_0=1$, so an easy induction shows that $d_n$ is odd when $n$ is even, and even when $n$ is odd. Thus, for odd $n$ we have $\left\lfloor\frac{n!}e\right\rfloor=d_n$ is even, and for even $n$ we have $\left\lfloor\frac{n!}e\right\rfloor=\left\lceil\frac{n!}e\right\rceil-1$ is again even.</p>
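Both ingredients of this answer, the recurrence and the parity pattern, are easy to confirm by machine; here is a small sketch of mine:

```python
from math import factorial

# d_n via the recurrence d_n = n*d_{n-1} + (-1)^n, cross-checked against
# the inclusion-exclusion sum; d_n is odd exactly when n is even.
d = 1  # d_0
for n in range(1, 16):
    d = n * d + (-1) ** n
    s = sum((-1) ** k * factorial(n) // factorial(k) for k in range(n + 1))
    assert d == s                          # recurrence matches the sum
    assert (d % 2 == 1) == (n % 2 == 0)    # parity alternates as claimed
```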
differentiation
<p>How can I prove that a function that is its own derivative exists? And how can I prove that this function is of the form $a(b^x)$?</p>
<p>There are two ways you could show it. The harder route would be to prove the existence and uniqueness theorem for ordinary differential equations, thus showing there exist solutions to $y'=y$. </p> <p>The more direct way would be to just construct the function $e^x$ and show that it's its own derivative. You would start by defining $$\ln(x) = \int_1^x \frac{1}{t}\, dt$$ and prove that it's a strictly increasing function on $(0,\infty)$ with range $(-\infty, \infty)$. It follows that $\ln(x)$ has an inverse, which we should dub $e^x$. As for finding the derivative of this new and mysterious function: $$y=e^x$$ $$\ln(y)=x$$ Taking the $x$ derivative of both sides, $$\frac{y'}{y} = 1$$ $$\implies y'=y$$ And to show that every function which is its own derivative is a constant multiple of $e^x$, suppose that $f'=f$. Then, noting that $e^x$ is nowhere zero, $$\frac{d}{dx} \frac{f(x)}{e^x} = \frac{f'(x)e^x-f(x)e^x}{(e^x)^2} = \frac{f(x)e^x-f(x)e^x}{(e^x)^2} = 0$$ Therefore, $$\frac{f(x)}{e^x}$$ is constant since it has a connected domain, and so $f(x) = ce^x$ for some $c$. </p>
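The construction in this answer can be imitated numerically (a sketch under my own choices of quadrature and root-finding, not part of the answer): define $\ln$ by integrating $1/t$, then invert it by bisection; the inverse behaves like $e^x$.

```python
import math

def ln_int(x, n=4000):
    # ln(x) := integral from 1 to x of dt/t, composite Simpson's rule
    h = (x - 1.0) / (2 * n)
    s = 1.0 + 1.0 / x            # endpoint contributions
    for i in range(1, 2 * n):
        s += (4.0 if i % 2 else 2.0) / (1.0 + i * h)
    return s * h / 3.0

def exp_inv(y, lo=1e-6, hi=100.0):
    # invert ln_int by bisection: the x in (lo, hi) with ln_int(x) = y
    for _ in range(60):
        mid = (lo + hi) / 2
        if ln_int(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For instance, `exp_inv(1.0)` comes out close to $e \approx 2.71828$.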
<p>$f(x) = 0$ is trivially its own derivative, and is of the form $a(b^x)$ for $a=0$ and any positive $b$. That's all we need to solve the problem posed.</p>
probability
<p>In <a href="https://math.stackexchange.com/a/656426/25554">this math.se post</a> I described in some detail a certain paradox, which I will summarize:</p> <blockquote> <p>$A$ writes two distinct numbers on slips of paper. $B$ selects one of the slips at random (equiprobably), examines its number, and then, without having seen the other number, predicts whether the number on her slip is the larger or smaller of the two. $B$ can obviously achieve success with probability $\frac12$ by flipping a coin, and it seems impossible that she could do better. However, there is a strategy $B$ can follow that is guaranteed to produce a correct prediction with probability strictly greater than $\frac12$.</p> </blockquote> <p>The strategy, in short, is:</p> <ul> <li>Prior to selecting the slip, $B$ should select some probability distribution $D$ on $\Bbb R$ that is everywhere positive. A normal distribution will suffice.</li> <li>$B$ should generate a random number $y\in \Bbb R$ distributed according to $D$. </li> <li>Let $x$ be the number on the slip selected by $B$. If $x&gt;y$, then $B$ predicts that $x$ is the larger of the two numbers; if $x&lt;y$ she predicts that $x$ is the smaller of the two numbers. ($y=x$ occurs with probability $0$ and can be disregarded.)</li> </ul> <p>I omit the analysis that shows that this method predicts correctly with probability <em>strictly</em> greater than $\frac12$; the details are in the other post.</p> <p>I ended the other post with “I have heard this paradox attributed to Feller, but I'm afraid I don't have a reference.”</p> <p>I would like a reference.</p>
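Since the question concerns the paradoxical claim itself, a Monte-Carlo sanity check may be welcome (my own sketch; the fixed pair $3, 7$ and the $N(0,10)$ threshold distribution are arbitrary choices, not from the post):

```python
import random

def play(trials=100_000, seed=0):
    rng = random.Random(seed)
    a, b = 3.0, 7.0                      # A's two distinct numbers
    wins = 0
    for _ in range(trials):
        x = a if rng.random() < 0.5 else b   # B's equiprobable pick
        y = rng.gauss(0.0, 10.0)             # threshold from a full-support distribution
        predicts_larger = x > y              # B's strategy: compare slip to threshold
        if predicts_larger == (x == max(a, b)):
            wins += 1
    return wins / trials

p = play()
print(p)   # noticeably above 1/2: about 0.5 + P(3 < y < 7)/2
```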
<p>Thanks to a helpful comment, since deleted, by user <a href="https://math.stackexchange.com/users/128037/stefanos">Stefanos</a>, I was led to this (one-page) paper of <a href="https://en.wikipedia.org/wiki/Thomas_M._Cover">Thomas M. Cover</a>, “<a href="http://www-isl.stanford.edu/~cover/papers/paper73.pdf">Pick the largest number</a>”, <em>Open Problems in Communication and Computation</em>, Springer-Verlag, 1987, p. 152.</p> <p>Stefanos pointed out that there is an extensive discussion of related paradoxes in the Wikipedia article on the ‘<a href="https://en.wikipedia.org/wiki/Two_envelopes_problem">Two envelopes problem</a>’. Note that the paradox I described above does not appear until late in the article, in the section "<a href="https://en.wikipedia.org/wiki/Two_envelopes_problem#Randomized_solutions">randomized solutions</a>".</p> <p>Note also that the main subject of that article involves a paradox that arises from incorrect reasoning, whereas the variation I described above is astonishing but sound.</p> <p>I would still be interested to learn if this paradox predates 1987; I will award the "accepted answer" check and its 15 points to whoever posts the earliest appearance.</p>
<p><a href="https://johncarlosbaez.wordpress.com/2015/07/20/the-game-of-googol/" rel="noreferrer">In his blog post</a>, mathematical physicist John Baez considered this problem described by Thomas M. Cover as the special case $n = 2$ of what was called "the game of googol" by Martin Gardner in his column in the Feb. 1960 issue of Scientific American.</p> <p>That is, John Baez accepted the suggestion (by one of his blog readers) that this is a variant of the famous secretary problem, whose <a href="https://en.wikipedia.org/wiki/Secretary_problem#The_game_of_googol" rel="noreferrer">wiki entry actually includes this "game of googol".</a></p> <p>If you are willing to take this point of view (Thomas M. Cover sort of did, in his closing remark), then the quest becomes the history of (this variant of) the secretary problem.</p> <p>John Baez's blog post is a good expository piece with some good discussion in the comments there. However, he didn't really do the literature search that pertains to Thomas M. Cover's approach going backwards in time.</p> <p>Similarly, there's no citation in that <a href="https://www.quantamagazine.org/information-from-randomness-puzzle-20150707" rel="noreferrer">Quanta magazine article</a> that got John Baez's notice. FWIW, the analysis there is thorough enough while being easily accessible to the general public.</p> <p>My personal stance at this point is that nobody before Thomas M. Cover looked at the secretary problem in this particular way. He passed away in 2012 so one can only learn how he encountered this problem by studying his notes or asking his collaborators, friends, etc.</p> <p>I have to admit that I didn't go through <a href="https://scholar.google.com.tw/scholar?hl=en&amp;as_sdt=0,5&amp;sciodt=0,5&amp;cites=10356219223560197333&amp;scipsc=" rel="noreferrer">all the publications (27 so far) that cited that short paper</a> by Thomas M. 
Cover to see if they found anything preceding that.</p> <p>For the record, if one wants to see Martin Gardner's original text, I cannot find an open-access copy of the Feb. 1960 Scientific American nor the earliest reprint of the column in his 1966 book.${}^1$ The best thing I've got is the <a href="https://books.google.com.tw/books?id=sUuBCzazfYUC&amp;pg=PA17&amp;lpg=PA17&amp;dq=martin+gardner+googol&amp;source=bl&amp;ots=4Mr5gjyJgn&amp;sig=awPckIsv5Oi5Wm6sTuMydqFIicg&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwixsIv40OjXAhUFHpQKHbewC1oQ6AEIQTAE#v=onepage&amp;q=martin%20gardner%20googol&amp;f=false" rel="noreferrer">1994 google book</a>. See the 2nd paragraph on p.18 for the case $n=2$. Martin Gardner certainly didn't know what he was missing.</p> <hr> <p><strong>Footnote 1:</strong>$\quad$ <em>"New Mathematical Diversions from Scientific American". Simon and Schuster, 1966, Chapter 3, Problem 3</em>. This is a relatively long chapter in the book, as it essentially deals with the secretary problem.</p>
matrices
<p>What is the difference between a matrix and a tensor? Or, what makes a tensor, a tensor? I know that a matrix is a table of values, right? But, a tensor? </p>
<p>To see the difference between rank-2 tensors and matrices, it is probably best to look at a concrete example. Actually, this is something which confused me very much back in my linear algebra course (where we didn't learn about tensors, only about matrices).</p> <p>As you may know, you can specify a linear transformation <span class="math-container">$a$</span> between vectors by a matrix. Let's call that matrix <span class="math-container">$A$</span>. Now if you do a basis transformation, this can also be written as a linear transformation, so that if the vector in the old basis is <span class="math-container">$v$</span>, the vector in the new basis is <span class="math-container">$T^{-1}v$</span> (where <span class="math-container">$v$</span> is a column vector). Now you can ask what matrix describes the transformation <span class="math-container">$a$</span> in the new basis. Well, it's the matrix <span class="math-container">$T^{-1}AT$</span>.</p> <p>Well, so far, so good. What I memorized back then is that under basis change a matrix transforms as <span class="math-container">$T^{-1}AT$</span>.</p> <p>But then, we learned about quadratic forms. Those are calculated using a matrix <span class="math-container">$A$</span> as <span class="math-container">$u^TAv$</span>. Still, no problem, until we learned about how to do basis changes. Now, suddenly the matrix did <em>not</em> transform as <span class="math-container">$T^{-1}AT$</span>, but rather as <span class="math-container">$T^TAT$</span>. Which confused me like hell: how could one and the same object transform differently when used in different contexts?</p> <p>Well, the solution is: because we are actually talking about different objects! In the first case, we are talking about a tensor that takes vectors to vectors. 
In the second case, we are talking about a tensor that takes two vectors into a scalar, or equivalently, which takes a vector to a <em>covector</em>.</p> <p>Now both tensors have <span class="math-container">$n^2$</span> components, and therefore it is possible to write those components in a <span class="math-container">$n\times n$</span> matrix. And since all operations are either linear or bilinear, the normal matrix-matrix and matrix-vector products together with transposition can be used to write the operations of the tensor. Only when looking at basis transformations, you see that both are, indeed, <em>not</em> the same, and the course did us (well, at least me) a disservice by not telling us that we are really looking at two different objects, and not just at two different uses of the same object, the matrix.</p> <p>Indeed, speaking of a rank-2 tensor is not really accurate. The rank of a tensor has to be given by <em>two</em> numbers. The vector to vector mapping is given by a rank-(1,1) tensor, while the quadratic form is given by a rank-(0,2) tensor. There's also the type (2,0) which also corresponds to a matrix, but which maps two covectors to a number, and which again transforms differently.</p> <p>The bottom line of this is:</p> <ul> <li>The components of a rank-2 tensor can be written in a matrix.</li> <li>The tensor is not that matrix, because different types of tensors can correspond to the same matrix.</li> <li>The differences between those tensor types are uncovered by the basis transformations (hence the physicist's definition: &quot;A tensor is what transforms like a tensor&quot;).</li> </ul> <p>Of course, another difference between matrices and tensors is that matrices are by definition two-index objects, while tensors can have any rank.      </p>
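The two transformation laws in this answer can be watched side by side in a few lines of exact arithmetic (my own sketch, with an arbitrary component matrix $A$ and basis change $T$):

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[F(1), F(2)], [F(3), F(4)]]   # the same array of components for both tensors
T = [[F(1), F(1)], [F(0), F(1)]]   # change-of-basis matrix

# (1,1)-tensor (linear map): components transform as T^{-1} A T
linear_map_new = matmul(matmul(inv2(T), A), T)
# (0,2)-tensor (bilinear form): components transform as T^T A T
bilinear_new = matmul(matmul(transpose(T), A), T)
assert linear_map_new != bilinear_new   # same matrix, two different tensors

# sanity check: the bilinear form's value u^T A v is basis-independent
u, v = [F(1), F(2)], [F(3), F(5)]
up, vp = matvec(inv2(T), u), matvec(inv2(T), v)   # coordinates in the new basis
old = sum(u[i] * matvec(A, v)[i] for i in range(2))
new = sum(up[i] * matvec(bilinear_new, vp)[i] for i in range(2))
assert old == new
```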
<p>Indeed there are some &quot;confusions&quot; some people make when talking about tensors. This happens mainly in Physics, where tensors are usually described as &quot;objects with components which transform in the right way&quot;. To really understand this matter, let's first remember that those objects belong to the realm of linear algebra. Even though they are used a lot in many branches of mathematics, the area of mathematics devoted to the systematic study of those objects is really linear algebra.</p> <p>So let's start with two vector spaces <span class="math-container">$V,W$</span> over some field of scalars <span class="math-container">$\Bbb F$</span>. Now, let <span class="math-container">$T : V \to W$</span> be a linear transformation. I'll assume that you know that we can associate a matrix with <span class="math-container">$T$</span>. Now, you might say: so linear transformations and matrices are all the same! And if you say that, you'll be wrong. The point is: one <em>can</em> associate a matrix with <span class="math-container">$T$</span> only when one fixes some basis of <span class="math-container">$V$</span> and some basis of <span class="math-container">$W$</span>. In that case we will get <span class="math-container">$T$</span> represented on those bases, but if we don't introduce those, <span class="math-container">$T$</span> will be <span class="math-container">$T$</span> and matrices will be matrices (rectangular arrays of numbers, or whatever definition you like).</p> <p>Now, the construction of tensors is much more elaborate than just saying: &quot;take a set of numbers, label by components, let them transform in the correct way, you get a tensor&quot;. In truth, this &quot;definition&quot; is a consequence of the actual definition. 
Indeed the actual definition of a tensor is meant to introduce what we call &quot;Universal Property&quot;.</p> <p>The point is that if we have a collection of <span class="math-container">$p$</span> vector spaces <span class="math-container">$V_i$</span> and another vector space <span class="math-container">$W$</span> we can form functions of several variables <span class="math-container">$f: V_1\times \cdots \times V_p \to W$</span>. A function like this will be called multilinear if it's linear in each argument with the others held fixed. Now, since we know how to study linear transformations we ask ourselves: is there a construction of a vector space <span class="math-container">$S$</span> and one <em>universal</em> multilinear map <span class="math-container">$T : V_1 \times \cdots \times V_p \to S$</span> such that <span class="math-container">$f = g \circ T$</span> for some <span class="math-container">$g : S \to W$</span> linear and such that this holds for all <span class="math-container">$f$</span>? If that's always possible we'll reduce the study of multilinear maps to the study of linear maps.</p> <p>The happy part of the story is that this is always possible, the construction is well defined and <span class="math-container">$S$</span> is denoted <span class="math-container">$V_1 \otimes \cdots \otimes V_p$</span> and is called the <em>tensor product</em> of the vector spaces and the map <span class="math-container">$T$</span> is the tensor product of the vectors. An element <span class="math-container">$t \in S$</span> is called a tensor. Now it's possible to prove that if <span class="math-container">$V_i$</span> has dimension <span class="math-container">$n_i$</span> then the following relation holds:</p> <p><span class="math-container">$$\dim(V_1\otimes \cdots \otimes V_p)=\prod_{i=1}^p n_i$$</span></p> <p>This means that <span class="math-container">$S$</span> has a basis with <span class="math-container">$\prod_{i=1}^p n_i$</span> elements. 
In that case, as we know from basic linear algebra, we can associate with every <span class="math-container">$t \in S$</span> its components in some basis. Now, those components are what people usually call &quot;the tensor&quot;. Indeed, when you see in Physics people saying: &quot;consider the tensor <span class="math-container">$T^{\alpha \beta}$</span>&quot; what they are really saying is &quot;consider the tensor <span class="math-container">$T$</span> whose components in some basis understood by context are <span class="math-container">$T^{\alpha \beta}$</span>&quot;.</p> <p>So if we consider two vector spaces <span class="math-container">$V_1$</span> and <span class="math-container">$V_2$</span> with dimensions respectively <span class="math-container">$n$</span> and <span class="math-container">$m$</span>, by the result I've stated <span class="math-container">$\dim(V_1 \otimes V_2)=nm$</span>, so for every tensor <span class="math-container">$t \in V_1 \otimes V_2$</span> one can associate a set of <span class="math-container">$nm$</span> scalars (the components of <span class="math-container">$t$</span>), and we are obviously allowed to plug those values into a matrix <span class="math-container">$M(t)$</span> and so there's a correspondence of tensors of rank <span class="math-container">$2$</span> with matrices.</p> <p>However, exactly as in the linear transformation case, this correspondence is only possible when we have selected bases on the vector spaces we are dealing with. Finally, with every tensor it is possible to associate also a multilinear map. 
So tensors can be understood in a fully abstract and algebraic way as elements of the tensor product of vector spaces; they can also be understood as multilinear maps (which is better for intuition), and we can associate matrices with them.</p> <p>So after all this hassle with linear algebra, the short answer to your question is: matrices are matrices, tensors of rank 2 are tensors of rank 2; however, there's a correspondence between them whenever you fix a basis on the space of tensors.</p> <p>My suggestion is that you read Kostrikin's &quot;Linear Algebra and Geometry&quot;, chapter <span class="math-container">$4$</span>, on multilinear algebra. This book is hard, but it's good to really get the ideas. Also, you can read about tensors (constructed in terms of multilinear maps) in good books on multivariable analysis like &quot;Calculus on Manifolds&quot; by Michael Spivak or &quot;Analysis on Manifolds&quot; by James Munkres.</p>
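The rank-2 correspondence above is easy to illustrate numerically. A minimal sketch (assuming NumPy; the vectors and numbers are arbitrary choices of mine): a simple tensor $v \otimes w$ has component matrix $M_{ij} = v_i w_j$, and the same matrix encodes a bilinear map.

```python
import numpy as np

# Basis-dependent correspondence between rank-2 tensors and matrices:
# the simple tensor v (x) w in V1 (x) V2 has components M[i, j] = v[i] * w[j].
v = np.array([1.0, 2.0, 3.0])   # a vector in V1, n = 3
w = np.array([4.0, 5.0])        # a vector in V2, m = 2
M = np.outer(v, w)              # the n x m component matrix of v (x) w

# dim(V1 (x) V2) = n * m: the component matrix has exactly n*m entries.
assert M.size == v.size * w.size

# The same components define a bilinear (multilinear, p = 2) map
# (a, b) -> sum_ij M[i, j] * a[i] * b[j], linear in each argument.
a = np.array([1.0, 0.0, -1.0])
b = np.array([2.0, 1.0])
value = a @ M @ b
assert np.isclose((2 * a) @ M @ b, 2 * value)   # linear in the first slot
assert np.isclose(a @ M @ (3 * b), 3 * value)   # linear in the second slot
```

Changing the bases of $V_1$ or $V_2$ changes the matrix $M$ but not the underlying tensor, which is exactly the basis-dependence discussed above.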
differentiation
<p>I am interested in the derivative of a function defined on a subset <span class="math-container">$S$</span> of <span class="math-container">$[0, 1]$</span>. The subset in question is dense in <span class="math-container">$[0, 1]$</span> but has Lebesgue measure zero. My actual question can be found at the bottom of this post.</p> <p>There have been a few questions on the subject, but none leading to anything interesting as far as I am concerned. See <a href="https://math.stackexchange.com/questions/1784875/about-the-derivative-of-a-function-defined-on-rational-numbers">here</a>, <a href="http://oeis.org/wiki/Arithmetic_derivative" rel="nofollow noreferrer">here</a> (concept of arithmetic derivative) and also <a href="https://en.wikipedia.org/wiki/Minkowski%27s_question-mark_function" rel="nofollow noreferrer">here</a> (Minkowski's question mark function, related to the material discussed here.) </p> <p>Generally these discussions lead to some kind of nonsense math. Here it is the opposite. I have a framework that does work as far as applications and computations are concerned, but I have a hard time putting it into some sound mathematical framework. It would have to be some kind of non-standard calculus.</p> <p>Perhaps the simplest example (though I have plenty of other similar cases) is as follows. Let <span class="math-container">$Z$</span> be a random variable defined as follows: <span class="math-container">$$Z = \sum_{k=1}^\infty \frac{X_k}{2^k}$$</span></p> <p>where the <span class="math-container">$X_k$</span>'s are independently and identically distributed with a Bernoulli<span class="math-container">$(p)$</span> distribution. Thus <span class="math-container">$P(X_k = 1) = p$</span> and <span class="math-container">$P(X_k = 0) = 1-p$</span>. Here <span class="math-container">$0 &lt; p &lt; 1$</span>. 
In short, the <span class="math-container">$X_k$</span>'s are the binary digits of the random number <span class="math-container">$Z$</span>.</p> <p>There are two cases.</p> <p><strong>Case <span class="math-container">$p=\frac{1}{2}$</span></strong></p> <p>In this case, <span class="math-container">$Z$</span> has a uniform distribution on <span class="math-container">$S$</span>, where <span class="math-container">$S$</span> is the set of <a href="https://en.wikipedia.org/wiki/Normal_number" rel="nofollow noreferrer">normal numbers</a> in <span class="math-container">$[0, 1]$</span>. It is known that <span class="math-container">$S$</span> has Lebesgue measure <span class="math-container">$1$</span>, and that <span class="math-container">$S$</span> is dense in <span class="math-container">$[0, 1]$</span>. Yet it is full of holes (no rational number is a normal number, due to the periodicity of its digits; thus the <span class="math-container">$X_k$</span>'s are not independent for rational numbers.)</p> <p>This is the simplest case. One might wonder if the density <span class="math-container">$f_Z$</span> (the derivative of the distribution <span class="math-container">$F_Z$</span>) exists. Yet <span class="math-container">$f_Z(z) = 1$</span> if <span class="math-container">$z \in S$</span> works perfectly well for all purposes. It can easily be extended to <span class="math-container">$f_Z(z) = 1$</span> if <span class="math-container">$z \in [0, 1]$</span>. Let us denote the extended function as <span class="math-container">$\tilde{f}_Z$</span>. You can compute all the moments using the extended <span class="math-container">$\tilde{f}_Z$</span> and get the right answer. 
If <span class="math-container">$s$</span> is a positive real number, then <span class="math-container">$$E(Z^s) = \int_0^1 z^s \tilde{f}_Z(z) dz = \frac{1}{s+1}.$$</span></p> <p>You could argue that <span class="math-container">$\tilde{f}_Z$</span> (and thus <span class="math-container">$f_Z$</span>) can be obtained by inverting the above functional equation, using some kind of Laplace transform. So we can bypass the concept of derivative entirely, it seems. </p> <p><strong>Case <span class="math-container">$p\neq \frac{1}{2}$</span></strong></p> <p>Now we are dealing with a hard nut to crack, and a wildly chaotic system: <span class="math-container">$Z$</span>'s support domain is a set <span class="math-container">$S'$</span> that is a subset of the non-normal numbers in <span class="math-container">$[0, 1]$</span>. This set <span class="math-container">$S'$</span> now has Lebesgue measure zero, yet it is dense in <span class="math-container">$[0, 1]$</span>. For the distribution, this is not a problem: even discrete random variables have a distribution <span class="math-container">$F_Z$</span> defined for all real numbers: <span class="math-container">$F_Z(z) = P(Z \leq z)$</span>. </p> <p>The issue is with the density <span class="math-container">$f_Z = dF_Z/dz$</span>. It seems it should either be zero everywhere or not exist. My guess is that you might be able to define a new, workable concept of density. 
In the neighborhood of every point <span class="math-container">$z \in S'$</span>, it looks like <span class="math-container">$g(z,h) = (F_Z(z+h) - F_Z(z))/h$</span> oscillates infinitely many times with no limit as <span class="math-container">$h\rightarrow 0$</span>, yet these oscillations are bounded most of the time, perhaps leading to the fact that averaging <span class="math-container">$g(z, h)$</span> around <span class="math-container">$h = 0$</span>, using smaller and smaller values of <span class="math-container">$h$</span>, could provide a sound definition for the density <span class="math-container">$f_Z$</span>.</p> <p>Again, despite the chaotic nature of the system (see how the <em>would-be</em> density could potentially look in the picture below) all the following quantities exist and can be computed exactly and then confirmed by empirical evidence, even though the integrals below may not make sense:</p> <p><span class="math-container">$$E(Z) = \int_{0}^{1} z f_Z(z) dz = p \\ E(Z^2) = \int_{0}^{1} z^2 f_Z(z) dz =\frac{p}{3}(1+2p)\\ E(Z^3) = \int_{0}^{1} z^3 f_Z(z) dz =\frac{p}{7}(1+4p+2p^2)\\ E(Z^4) = \int_{0}^{1} z^4 f_Z(z) dz =\frac{p}{105}(7+46p + 44p^2+8p^3) $$</span> Indeed, a general formula for <span class="math-container">$E(Z^s) = \int_0^1 z^s f_Z(z)dz$</span> is available for <span class="math-container">$s \geq 0$</span>, defined by the following functional equation (see <a href="https://stats.stackexchange.com/questions/439528/parameter-estimation-when-the-likelihood-function-does-not-exist">here</a>): <span class="math-container">$$E(Z^s) = \frac{p}{2^s-1+p}\cdot E((1+Z)^s) .$$</span></p> <p>In other words, we would have, under some appropriate calculus theory with a sound definition of integral and derivative: <span class="math-container">$$\int_{S'}z^s f_Z(z)dz = \frac{p}{2^s-1+p}\cdot\int_{S'}(1+z)^s f_Z(z) dz .$$</span></p> <p>Here is how the density <span class="math-container">$f_Z$</span>, if properly defined, could look 
for <span class="math-container">$p=0.75$</span> (see <a href="https://stats.stackexchange.com/questions/439528/parameter-estimation-when-the-likelihood-function-does-not-exist">here</a> and <a href="https://www.datasciencecentral.com/profiles/blogs/a-strange-family-of-statistical-distributions" rel="nofollow noreferrer">here</a>):</p> <p><a href="https://i.sstatic.net/HT0ZF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HT0ZF.png" alt="enter image description here"></a></p> <p>Below is the empirical percentile distribution, for this particular <span class="math-container">$Z$</span>: </p> <p><a href="https://i.sstatic.net/itiHK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/itiHK.png" alt="enter image description here"></a></p> <p><strong>Other related problems</strong></p> <p>If instead we consider the model <span class="math-container">$Z = X_1 + X_1 X_2 + X_1 X_2 X_3 + \cdots$</span> with <span class="math-container">$X$</span> Bernoulli<span class="math-container">$(p)$</span>, then <span class="math-container">$Z$</span> has a geometric distribution of parameter <span class="math-container">$1-p$</span>, see section 2.2 in <a href="https://www.datasciencecentral.com/profiles/blogs/chaos-attractors-in-machine-learning-systems" rel="nofollow noreferrer">this article</a>. This system is also equivalent to the binary numeration system discussed so far: see section 5 in the same article. It results in <span class="math-container">$Z$</span> having a standard, well-known discrete distribution. 
But if this time <span class="math-container">$P(X=-0.5) = 0.5 = P(X=0.5)$</span> then <span class="math-container">$Z$</span> is uniform on a subset <span class="math-container">$S$</span> of <span class="math-container">$[-1, 1]$</span>, with <span class="math-container">$S$</span> also full of holes.</p> <p>Here is another interesting model:</p> <p><span class="math-container">$$Z=\sqrt{X_1+\sqrt{X_2+\sqrt{X_3+\cdots}}}$$</span></p> <p>The distribution for <span class="math-container">$X$</span> is as follows: <span class="math-container">$$P(X=0) = \frac{1}{2}, P(X=1) = \frac{1}{1 + \sqrt{5}}, P(X=2) = \frac{3 - \sqrt{5}}{4} \mbox{ } (\star)$$</span> </p> <p>This corresponds to a different numeration system, and the choice for <span class="math-container">$X$</span>'s distribution is not arbitrary, see <a href="https://www.datasciencecentral.com/profiles/blogs/math-fun-infinite-nested-radicals-of-random-variables" rel="nofollow noreferrer">here</a>: in short, it makes the system smoother and possibly easier to solve. To the contrary <span class="math-container">$P(X=0) = P(X=1)=$</span> <span class="math-container">$P(X=2)= \frac{1}{3}$</span> yields a far more chaotic system, and the case <span class="math-container">$P(X=0) =$</span> <span class="math-container">$P(X=1) = \frac{1}{2}$</span> is so wild that the support domain of <span class="math-container">$Z$</span> has huge gaps, very visible to the naked eye. </p> <p>Normal numbers in the nested square root system are very different from normal numbers in the binary numeration system. The successive digits have a very specific auto-correlation structure, and the digits <span class="math-container">$0, 1, 2$</span> are not evenly distributed for normal numbers in that system. It is clear that if we assume that the <span class="math-container">$X_k$</span>'s are i.i.d, then <span class="math-container">$Z$</span> is not a normal number in that system. 
Yet we get a very good approximation for <span class="math-container">$F_Z$</span>, much smoother (at least visually) than in the binary numeration system investigated earlier, with <span class="math-container">$p\neq \frac{1}{2}$</span>. In particular, <span class="math-container">$F_Z$</span> is very well approximated by a log function, see chart below.</p> <p><a href="https://i.sstatic.net/PMOnB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PMOnB.png" alt="enter image description here"></a></p> <p>Here the blue line is the empirical distribution for <span class="math-container">$Z$</span>, the red line is the log approximation. And below is a spectacular chart, featuring the approximation error <span class="math-container">$\epsilon(z) = F_Z(z) -\log_2(z)$</span>. It's a fractal! (source: see section 2.2 in <a href="https://www.datasciencecentral.com/profiles/blogs/math-fun-infinite-nested-radicals-of-random-variables" rel="nofollow noreferrer">this article</a>). In short, it is no more differentiable than a Brownian motion, and technically, the derivative <span class="math-container">$f_Z$</span> does not exist. Yet all moments of <span class="math-container">$Z$</span> can be computed exactly from the functional equation attached to that system (<span class="math-container">$F_{Z^2}=F_{X+Z}$</span>) and confirmed empirically. Even though the distribution looks smooth to the naked eye, we are dealing here with a very chaotic system in disguise. Again we need non-standard calculus to handle the density, whose support is a dense set of Lebesgue measure zero in <span class="math-container">$[1, 2]$</span>. </p> <p>Since fractals are nowhere differentiable, <span class="math-container">$f_Z$</span> does not exist. 
Yet one could imagine a "density" that would look like <span class="math-container">$f_Z(z)=\frac{1}{z\log 2}$</span> for <span class="math-container">$z\in [1,2]$</span>.</p> <p><a href="https://i.sstatic.net/L9TM1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L9TM1.png" alt="enter image description here"></a></p> <p><strong>My question</strong></p> <p>Is there an existing theory to handle this type of density-like-substance? Following some advice, I also posted this question on MathOverflow, <a href="https://mathoverflow.net/questions/347799/derivatives-in-unusual-support-domains">here</a>. </p>
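For integer exponents, the functional equation given earlier pins the moments down exactly: expanding $E((1+Z)^s)$ with the binomial theorem and solving for $E(Z^s)$ gives $(2^s-1)\,E(Z^s) = p\sum_{j=0}^{s-1}\binom{s}{j}E(Z^j)$. A minimal sketch computing the moments exactly this way (the rearrangement and the choice $p = 3/4$ are mine):

```python
from fractions import Fraction
from math import comb

def moments(p, smax):
    """Exact moments E(Z^s) for s = 0..smax, derived from the functional
    equation E(Z^s) = p/(2^s - 1 + p) * E((1+Z)^s) by binomial expansion:
    (2^s - 1) E(Z^s) = p * sum_{j<s} C(s, j) E(Z^j)."""
    E = [Fraction(1)]                 # E(Z^0) = 1
    for s in range(1, smax + 1):
        E.append(p * sum(comb(s, j) * E[j] for j in range(s)) / (2**s - 1))
    return E

p = Fraction(3, 4)                    # an arbitrary choice of parameter
E = moments(p, 4)

# Matches the closed forms quoted above, exactly (rational arithmetic):
assert E[1] == p
assert E[2] == p * (1 + 2*p) / 3
assert E[3] == p * (1 + 4*p + 2*p**2) / 7
assert E[4] == p * (7 + 46*p + 44*p**2 + 8*p**3) / 105
```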
<p>It really looks to me that you're after the concept of a <a href="https://en.wikipedia.org/wiki/Distribution_(mathematics)" rel="noreferrer">generalized function</a>, which allows you to manipulate things that are decidedly <em>not</em> functions in a way that really looks like they are functions. The basic premise of a generalized function is that you know how they're supposed to integrate against nice functions, but they clearly are not functions themselves - which is exactly the path you seem to be going down. You can also think of them as what happens if you keep differentiating a continuous function - much in the same way that "a measure" is a reasonably good answer to the question of what happens if you differentiate an increasing function.</p> <p>More formally, to define a generalized function, we start by defining some functions which are <em>very</em> well behaved, known as test functions. A function <span class="math-container">$f:\mathbb R\rightarrow\mathbb R$</span> is a test function if and only if it is smooth and compactly supported. The set of test functions is known as <span class="math-container">$T$</span>. We also put a topology on <span class="math-container">$T$</span> by the rule that <span class="math-container">$f_1,f_2,\ldots$</span> converges to <span class="math-container">$f$</span> if and only if, for each <span class="math-container">$n$</span>, the <span class="math-container">$n^{th}$</span> derivatives of this sequence converge uniformly to the <span class="math-container">$n^{th}$</span> derivative of <span class="math-container">$f$</span>.</p> <p>A generalized function is then just a continuous map from <span class="math-container">$T$</span> to <span class="math-container">$\mathbb R$</span>. 
The idea is that we would associate a continuous function <span class="math-container">$g:\mathbb R\rightarrow\mathbb R$</span> to the map <span class="math-container">$$f\mapsto \int f(x)g(x)\,dx.$$</span> More generally, we can associate to each measure <span class="math-container">$\mu$</span> the generalized function <span class="math-container">$$f\mapsto \int f(x)\,d\mu.$$</span> The really neat thing about this definition is that we can use tricks to start to reason about distributions the same way we would reason about functions; let me focus on your second example as it's a nice example of how this reasoning works out without too much difficulty.</p> <p>So, first, we would like to define the derivative of a generalized function <span class="math-container">$g$</span>. You can think about this either as a convenient extension of the map <span class="math-container">$g\mapsto g'$</span> from the subset of generalized functions which are actually differentiable to the entire set, or as, noting that differentiation is linear, taking the transpose of the map that takes <span class="math-container">$g\mapsto g'$</span> in the set of test functions taken as an inner product space - or you can just think about integration by parts as justification. 
In any case, you should come across the following formula (where we now abuse the integral sign <span class="math-container">$\int g' \cdot f$</span> to formally mean <span class="math-container">$g'(f)$</span> - avoiding this notation would be more confusing than abusing it!): <span class="math-container">$$\int g'\cdot f = -\int g\cdot f'.$$</span> This is, of course, actually true if <span class="math-container">$g$</span> and <span class="math-container">$f$</span> are differentiable and their product is compactly supported - being a consequence of integration by parts - and one can (and should) use it to differentiate distributions in general.</p> <p>You can also define compositions of distributions with nice functions in a similar manner; let's stick with linear functions for simplicity. In particular, if <span class="math-container">$g(x)$</span> is a distribution, let's define <span class="math-container">$g(2x)$</span> in the same way as before (abusing notation <em>even more</em> now - just remember that <span class="math-container">$g(x)$</span> and <span class="math-container">$g(2x)$</span> are not truly functions, but symbols in their own right): <span class="math-container">$$\int g(2x)\cdot f(x)\,dx = \frac{1}2\int g(x)\cdot f\left(\frac{x}2\right)\,dx$$</span> which is essentially the result of a <span class="math-container">$u$</span>-substitution. Similarly, we could define <span class="math-container">$$\int g(2x-1)\cdot f(x)\,dx = \frac{1}2\int g(x)\cdot f\left(\frac{x+1}2\right)\,dx.$$</span></p>
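These pairing rules are easy to prototype numerically. A toy sketch (my own illustrative code, not from the answer): a distribution is represented as a plain functional on test functions, with the Dirac delta as the running example and the test function's derivative taken by a central difference.

```python
import numpy as np

# Toy generalized functions: a distribution is just a functional on test
# functions, and the pairing <g, f> is written g(f).

def delta(f, x0=0.0):
    """Dirac delta at x0: <delta_x0, f> = f(x0)."""
    return f(x0)

def derivative(g, h=1e-5):
    """Distributional derivative via <g', f> = -<g, f'> (integration by
    parts); f' is approximated by a central difference, for illustration."""
    def g_prime(f):
        fprime = lambda x: (f(x + h) - f(x - h)) / (2 * h)
        return -g(fprime)
    return g_prime

def rescale(g, a, b):
    """The distribution x -> g(a*x - b): <g(ax-b), f> = (1/a) <g, f((x+b)/a)>,
    mirroring the u-substitution formulas above."""
    def g_scaled(f):
        return (1.0 / a) * g(lambda x: f((x + b) / a))
    return g_scaled

f = lambda x: np.exp(-x**2)      # stand-in for a smooth test function
d_delta = derivative(delta)
assert abs(d_delta(f)) < 1e-6    # <delta', f> = -f'(0) = 0 (f is even)
# delta(2x - 1) sits at x = 1/2 with weight 1/2:
assert np.isclose(rescale(delta, 2, 1)(f), 0.5 * f(0.5))
```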
In particular, our theory lets us, without reservation, differentiate any increasing function - and thus fills in what your values <span class="math-container">$f_Z$</span> have to be.</p> <p>So, let's let <span class="math-container">$G_Z$</span> be the cumulative distribution given by randomly selecting the bits of a number in <span class="math-container">$[0,1]$</span> from a Bernoulli distribution with parameter <span class="math-container">$p$</span>. While we could notice some properties of <span class="math-container">$G_Z$</span> directly and infer things from differentiation, let's instead look at some more interesting reasoning directly on <span class="math-container">$g_Z$</span> enabled by this.</p> <p>So, as a first step, let's let <span class="math-container">$Z_n$</span> be a discrete random variable given as a base two value <span class="math-container">$0.b_1b_2b_3\ldots b_n$</span> of bits chosen from the given Bernoulli distribution. Then, <span class="math-container">$g_{Z_n}$</span> will represent our probability mass function and we will take it to satisfy: <span class="math-container">$$\int g_{Z_n}(x) \cdot f(x) = \mathbb E[f(Z_n)].$$</span> This is, of course, just summing up some of the values of <span class="math-container">$f$</span>. However, it is helpful, because to produce the distribution of <span class="math-container">$Z_{n+1}$</span>, all we need to do is randomly pick the first bit, then add half of a value chosen as in <span class="math-container">$Z_n$</span>. This works out to the following equation: <span class="math-container">$$g_{Z_{n+1}}(x) = 2(1-p)g_{Z_n}(2x) + 2p\cdot g_{Z_n}(2x-1).$$</span> which can be explicitly verified in terms of the expectations, if desired. 
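The recursion for $g_{Z_n}$ can be verified concretely by iterating it on probability mass functions over the dyadic grid $k/2^n$; a short sketch (my construction, assuming NumPy), which also recovers $E[Z_n] = p(1 - 2^{-n}) \to p$:

```python
import numpy as np

def pmf_Z(n, p):
    """pmf of Z_n = 0.b1 b2 ... bn (binary), bits i.i.d. Bernoulli(p),
    on the grid k / 2**n. Each step applies the recursion
    g_{Z_{n+1}}(x) = 2(1-p) g_{Z_n}(2x) + 2p g_{Z_n}(2x - 1)
    in pmf form: the new leading bit picks the lower or upper half."""
    pmf = np.array([1.0])                       # Z_0 is identically 0
    for _ in range(n):
        pmf = np.concatenate([(1 - p) * pmf, p * pmf])
    return pmf

p, n = 0.75, 16
pmf = pmf_Z(n, p)
grid = np.arange(2**n) / 2.0**n

assert np.isclose(pmf.sum(), 1.0)               # still a probability mass
mean = (grid * pmf).sum()
assert np.isclose(mean, p * (1 - 2.0**-n))      # E[Z_n] = p(1 - 2^-n) -> p
```

Plotting `pmf` (suitably rescaled) reproduces the kind of spiky density picture shown in the question.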
However, the final distribution you want ought to be (and <em>is</em>, in pretty much any topology one might put on the generalized functions) the limit of <span class="math-container">$g_{Z_n}$</span> as <span class="math-container">$n$</span> goes to <span class="math-container">$\infty$</span> - and the operations here are continuous, so we will get <span class="math-container">$$g_{Z}(x) = 2(1-p)\cdot g_Z(2x) + 2p\cdot g_Z(2x-1)$$</span> which is exactly the sort of algebraic equation that would lead to the sort of structures you are seeing - and gives you the iterative formula that leads to the sort of density plots you've made. Of course, as a measure, this is supported on the set of numbers <span class="math-container">$x$</span> in <span class="math-container">$[0,1]$</span> where the asymptotic density of the set of indices where the corresponding bit of <span class="math-container">$x$</span> in binary is <span class="math-container">$1$</span> equals <span class="math-container">$p$</span> - so can't be converted to a function (even via tools like the <a href="https://en.wikipedia.org/wiki/Radon%E2%80%93Nikodym_theorem" rel="noreferrer">Radon-Nikodym derivative</a> which converts from measures to functions where possible) - but, nonetheless, these generalized functions can still provide some of the framework you seem to be after.</p> <hr> <p>As an aside, you can work with measures this way too; if you let <span class="math-container">$\mu$</span> be the probability measure associated to the process we've been discussing, you can write <span class="math-container">$$\mu(S) = (1-p)\,\mu\left(2S\right) + p\,\mu\left(2S-1\right)$$</span> which has the same structure. 
If you're happy to work with measures and don't want to try differentiating the measure any further, then they're a good answer to your search as well.</p> <hr> <p>I might note that these absolutely cannot see the sort of holes that you are talking about - but <em>every</em> finite measure on <span class="math-container">$\mathbb R$</span> is supported on a totally disconnected set (because only countably many points can have positive finite mass, and any countable set of points not including such a point has no measure). Generally, with probability, the "blurring" effect of using smooth test functions is actually desirable, because the measure of an interval is <em>not</em> the sum of the measures of its points. This is also why one rarely ever works with <em>functions</em> when dealing with measures, but instead works with <span class="math-container">$L^p$</span> spaces (whose members are <em>not</em> functions, but rather almost-everywhere equivalence classes of functions).</p> <p>To say this more generally: a function is defined by the fact that it can be evaluated at any point. This is not really something you desire when talking about probability distributions, because the quantity is too spread out to register evaluation. A measure, more usefully, is defined by the fact that it can integrate against continuous functions (e.g. as in the <a href="https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani_representation_theorem" rel="noreferrer">Riesz representation theorem</a>) and then a distribution is a generalization which can integrate against <em>smooth</em> functions. The latter two objects tend to be more useful in these situations.</p>
<p><a href="https://en.wikipedia.org/wiki/Probability_theory#Measure-theoretic_probability_theory" rel="nofollow noreferrer">Measure-theoretic probability</a> provides a formalism in which all quantities in your question are rigorously defined. In my opinion this is the natural framework to use (instead of introducing an ad-hoc approach) since you have already invoked measure theory in your question!</p> <p>Even better, there is substantial (and beautiful) literature studying the exact same fractal systems you are interested in - read to the bottom of this answer (skimming the pedantic bits you probably already know) to find out more.</p> <p><strong>TL;DR</strong> The answer to your question</p> <blockquote> <p>Is there an existing theory to handle this type of density-like-substance? </p> </blockquote> <p>is a resounding <strong>Yes!</strong></p> <hr> <p><strong>Some background</strong></p> <p>In measure-theoretic probability, one starts with a sample space <span class="math-container">$(\Omega,\mathcal F)$</span> which is an arbitrary set equipped with a <span class="math-container">$\sigma$</span>-algebra. In this case, it is natural to take <span class="math-container">$\Omega=\{0,1\}^{\mathbb N}$</span> and use the product <span class="math-container">$\sigma$</span>-algebra. Equip <span class="math-container">$(\Omega,\mathcal F)$</span> with the measure <span class="math-container">$\mathbb P_p=\textrm{Ber}_p^{\mathbb N}$</span> which is the product of <span class="math-container">$p$</span>-Bernoulli measures as you have described.</p> <p>In this formalism, a random variable is nothing other than a <a href="https://en.wikipedia.org/wiki/Measurable_function" rel="nofollow noreferrer">measurable function</a> <span class="math-container">$f\colon \Omega\to\mathbb R$</span> where <span class="math-container">$\mathbb R$</span> is equipped with the Borel <span class="math-container">$\sigma$</span>-algebra. 
In particular, we have a random variable given by <span class="math-container">$$ Z(\omega_1,\omega_2,\ldots)=\sum_{n=1}^{\infty}\frac{\omega_n}{2^n}. $$</span> It is a measurable function since it is a limit of measurable functions (namely, each partial sum is straightforwardly seen to be measurable).</p> <p>Since <span class="math-container">$\mathbb P_p(0\leq Z\leq 1)=1$</span>, the random variable <span class="math-container">$Z$</span> is seen to be integrable, so that the mean <span class="math-container">$\mathbb EZ$</span> is well-defined. By the same token, <span class="math-container">$\mathbb E[Z^k]$</span> is well-defined for all integers <span class="math-container">$k\geq 0$</span>.</p> <p>This is not just abstract nonsense: it leads to an operational calculus of the kind you seek. For instance, we have the well-known tail integral for moments of a random variable given by <span class="math-container">$$ \mathbb E[Z^k]=\int_0^{\infty}kt^{k-1}\mathbb P_p(Z&gt;t)\ dt,\qquad k\in\mathbb N, $$</span> which is a special case of <a href="https://en.wikipedia.org/wiki/Fubini%27s_theorem#Tonelli&#39;s_theorem_for_non-negative_measurable_function" rel="nofollow noreferrer">Tonelli's theorem</a>.</p> <p>The distribution of the random variable <span class="math-container">$Z$</span>, let's call it <span class="math-container">$\mu_p$</span>, is defined to be the pushforward of the measure <span class="math-container">$\mathbb P_p$</span> under the mapping <span class="math-container">$Z\colon\Omega\to\mathbb R$</span>. In symbols, <span class="math-container">$$\mu_p(A)=\mathbb P_p\bigl(Z^{-1}(A)\bigr),$$</span>for all Borel subsets <span class="math-container">$A$</span> of <span class="math-container">$\mathbb R$</span>. It is a probability measure on <span class="math-container">$\mathbb R$</span>, with support contained in <span class="math-container">$[0,1]$</span>. As you have pointed out, the density does not exist in general. 
In fact, by the <a href="https://en.wikipedia.org/wiki/Radon%E2%80%93Nikodym_theorem" rel="nofollow noreferrer">Radon-Nikodym theorem</a> it exists if and only if the measure <span class="math-container">$\mu_p$</span> is absolutely continuous with respect to Lebesgue measure, in which case the density is precisely the Radon-Nikodym derivative of <span class="math-container">$\mu_p$</span> with respect to the Lebesgue measure. The power of measure-theoretic probability is that all techniques and formulas that one is accustomed to working with carry over to the language of probability measures and random variables, ceasing to rely upon the existence of a density. (Of course, complications arise and a little adjustment is required - but this forms the core of the modern probabilist's arsenal of tools.) Thus one has the Lebesgue integral <span class="math-container">$$ \mathbb E[Z^k]=\int_0^{\infty}t^{k}\ d\mu_p(t),\qquad k\in\mathbb N, $$</span> regardless of whether <span class="math-container">$\mu_p$</span> possesses a density.</p> <hr> <p><strong>Modern research</strong></p> <p>With all this being said, it is still a very interesting question (related to open lines of current research) to understand when the density exists, and what are its fractal properties. I refer you to the <a href="http://u.math.biu.ac.il/~solomyb/RESEARCH/sixty.pdf" rel="nofollow noreferrer">survey paper</a> by <a href="http://u.math.biu.ac.il/~solomyb/RESEARCH/notes.html" rel="nofollow noreferrer">Peres, Schlag, and Solomyak</a> entitled "Sixty Years of Bernoulli Convolutions", where a closely related generalization of your <span class="math-container">$p=\tfrac12$</span> case is considered, namely to the random variable <span class="math-container">$$ Z_\lambda(\omega_1,\omega_2,\ldots)=\sum_{n=1}^{\infty}\omega_n\lambda^n,\qquad \lambda\in[0,1]. 
$$</span> The distribution <span class="math-container">$\nu_\lambda$</span> of the random variable <span class="math-container">$Z_\lambda$</span> has fascinating properties when <span class="math-container">$\lambda\in(\tfrac12,1)$</span>, and the study of these properties leads to beautiful connections with "harmonic analysis, the theory of algebraic numbers, dynamical systems, and Hausdorff dimension estimation" (direct quote from the abstract).</p> <p>The fractal nature of these measures is a prominent motif throughout this line of inquiry, and appears at the forefront in the article <a href="https://www.ams.org/journals/tran/1998-350-10/S0002-9947-98-02292-2/S0002-9947-98-02292-2.pdf" rel="nofollow noreferrer">Self-similar measures and intersections of Cantor sets</a> written by a subset of the authors of the survey paper, which goes beyond the <span class="math-container">$p=\tfrac12$</span> case to study the measure <span class="math-container">$\nu_{\lambda}^{p}:=(Z_\lambda)_*(\mathbb P_p)$</span>, of which the special case <span class="math-container">$\lambda=\tfrac12$</span> is the distribution of your random variable <span class="math-container">$Z$</span>.</p> <p>The aforementioned survey article is featured prominently in the proceedings of an international conference on fractal geometry, collected in the (unfortunately non-freely available) book <a href="https://www.springer.com/gp/book/9783764362157" rel="nofollow noreferrer">Fractal Geometry and Stochastics II</a>, which serves as a jumping off point to the literature on dynamical systems leading to fractal geometries closely related to your list of "other related problems" as well. 
Note that many of the individual articles collected in this book (and its bibliography) <em>are</em> freely available at either the authors' webpages or on preprint servers.</p> <hr> <p><strong>Broader picture</strong></p> <p>To make contact with the answer posted by Milo here, let me point out that the theory of measures may be regarded as a special case of the theory of generalized functions - i.e., every measure is a generalized function with respect to the class of bounded continuous test functions (with the Lebesgue integral supplying the pairing between measure and test function). </p> <p>My perspective is that measure theory is (broadly speaking) a more developed field than that of generalized functions, for the same reason that continuous functions are better understood than measures. What I mean is that one can group the objects of study in real analysis by how "well-behaved" the objects are. After <a href="https://en.wikipedia.org/wiki/Linear_function" rel="nofollow noreferrer">linear functions</a>, the best behaved functions are <a href="https://en.wikipedia.org/wiki/Polynomial" rel="nofollow noreferrer">polynomials</a>, after which comes (in order of increasing generality) <a href="https://en.wikipedia.org/wiki/Analytic_function" rel="nofollow noreferrer">analytic functions</a>, then <a href="https://en.wikipedia.org/wiki/Smoothness" rel="nofollow noreferrer">smooth functions</a>, then <a href="https://en.wikipedia.org/wiki/Continuous_function" rel="nofollow noreferrer">continuous functions</a>, then measurable functions, <em>then</em> measures, and finally <a href="https://en.wikipedia.org/wiki/Generalized_function" rel="nofollow noreferrer">generalized functions</a>. 
(Every function <span class="math-container">$f$</span> that appears before measures in this list can be canonically associated with the measure <span class="math-container">$f\cdot \textrm{Lebesgue}$</span>; and measures can be regarded as generalized functions as already explained - so everything is on the same footing, just with varying degrees of <a href="https://math.stackexchange.com/questions/1406707/what-does-the-term-regularity-mean">regularity</a>.)</p> <p>Fortunately the problems you are interested in (and all the different flavors of dynamical systems, including ergodic theory) live at (or below) the "measure theory" rung on the ladder of abstractions.</p>
linear-algebra
<p>So vector spaces, linear transformations, inner products etc. all have their own axioms that they have to satisfy in order to be considered what they are. </p> <p>But how did we come to decide to include those axioms and not include others? </p> <p>For example, why does this rule hold in inner product spaces $c\langle u,v\rangle=\langle cu,v\rangle$, when my intuition says that it should be $\langle cu,cv\rangle$?</p> <p>And how did we decide that scalar multiplication and additivity were sufficient criteria for something to be a linear map? </p>
<p>Linear algebra is one of the first "abstractions" that you encounter in mathematics that is not very well motivated by experience. (Well... there are numerals, which are a pretty tricky abstraction as well, but most of us don't recall learning those.)</p> <p>It helps to have the backstory. </p> <p>Mathematicians studied geometry and simple transformations of the plane like "rotation" and "translation" (moving everything to the right by 3 inches, or up by 7 inches, or northeast by 3.2 inches, etc.) as far back as Euclid, and at some point, they noticed that you could do things like "do one translation after another", and the result was the same as if you'd done some different translation. And even if you did the first two translations in a different order, the resulting translation was still the same. So pretty soon they said "Hey, these translations are behaving a little like numbers do when we add them together: the order we add them in doesn't matter, and there's even something that behaves the way zero does: the "don't move at all" transformation, when composed with any other translation, gives that other translation." 
</p> <p>So you have two different sets of things: ordinary numbers, and "translations of the plane", and for both, there's a way of combining ("+" for numbers, "composition of transformations" for translations), and each of these combining rules has an identity element ("0" for addition, "don't move at all" for translation), and for both operations ("+" and "compose"), the order of operations doesn't matter, and you start to realize something: if I proved something about numbers using only the notion of addition, and the fact that there's an identity, and that addition is commutative, I could just replace a bunch of words and I'd have a proof about the set of all translations of the plane!</p> <p>And the next thing you know, you're starting to realize that other things have these kinds of shared properties as well, so you say "I'm going to give a name to sets of things like that: I'll call them 'groups'." (Later, you realize that the commutativity of addition is kind of special, and you really want to talk about other operations as well, so you enlarge your notion of "group" and instead call these things "Abelian groups," after Abel, the guy who did a lot of the early work on them.)</p> <p>The same thing happened with linear algebra. There are some sets of things that have certain properties, and someone noticed that they ALL had the same properties, and said "let's name that kind of collection". It wasn't a pretty development -- the early history of vectors was complicated by people wanting to have a way to multiply vectors in analogy with multiplying real numbers or complex numbers, and it took a long time for folks to realize that having a "multiplication" was nice, but not essential, and that even for collections that didn't have multiplication, there were still a ton of important results. 
</p> <p>In a way, though, the most interesting thing was not the sets themselves -- the "vector spaces", but rather, the class of transformations that preserve the properties of a vector space. These are called "linear transformations", and they are a generalization of the transformations you learn about in Euclid. </p> <p>Why are these so important? One reason for their historical importance is that for a function from $n$-space to $k$-space, the derivative, evaluated at some point of $n$-space, is a linear transformation. In short: something we cared a lot about -- derivatives -- turns out to be very closely tied to linear transformations. </p> <p>For a function $f: R \to R$, the derivative $f'(a)$ is usually regarded as "just a number". But consider for a moment $$ f(x) = \sqrt{x}\\ f'(x) = \frac{1}{2 \sqrt{x}}\\ f(100) = 10 \\ f'(100) = \frac{1}{20} $$ Suppose you wanted to compute the square root of a number that's a little way from 100, say, 102. We could say "we moved 2 units in the domain; how far do we have to move away from 10 (i.e., in the codomain)?" The answer is that the square root of $102$ is (very close to) the square root of 100, displaced by $2 \cdot \frac{1}{20}$, i.e., to $10.1$. (In fact, $10.1^2 = 102.01$, which is pretty accurate!)</p> <p>So we can regard "multiplication by $1/20$" as the derivative of square-root at $a = 100$, and this gives a linear transformation from "displacements near 100, in the domain" to "displacements near 10, in the codomain."</p> <p>The importance of derivatives made it really worthwhile to understand the properties of such transformations, and therefore to also understand their domains...and pretty soon, other situations that arose in other parts of math turned out to "look like those". For instance, the set of all polynomials of degree no more than $n$ turns out to be a vector space: you can add polynomials, you can multiply them by a constant, etc. 
And the space of all convergent sequences of real numbers turns out to be a vector space. And the set of all periodic functions of period 1 turns out to be a vector space. And pretty soon, so many things seemed to be sharing the same properties that someone gave 'sets that had those particular properties" a name: vector spaces. </p> <p>Nowadays, seeing each new thing that's introduced through the lens of linear algebra can be a great aid...so we introduce the general notion first, and many students are baffled. My own preference, in teaching linear algebra, is to look at three or four examples, like "period-1 periodic functions" and "convergent sequences" and "polynomials of degree no more than $n$", and have the students notice that there are some similarities, and only <em>then</em> define "vector space". But that's a matter of taste. </p>
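The square-root linearization earlier in this answer is easy to check numerically. Here is a quick sketch in Python (not part of the original discussion), using the same numbers as the worked example:

```python
import math

# Near a = 100, the derivative of sqrt is f'(100) = 1/20, so moving h units
# in the domain moves roughly h/20 units in the codomain.
a, h = 100.0, 2.0
fprime = 1 / (2 * math.sqrt(a))   # derivative of sqrt at a = 100
approx = math.sqrt(a) + fprime * h  # 10 + 2 * (1/20) = 10.1
exact = math.sqrt(a + h)            # sqrt(102) = 10.0995...
print(approx, exact, abs(exact - approx))
```

The linear approximation agrees with the true value to about three decimal places, which is exactly the point: near $a=100$, the square-root function behaves like the linear map "multiply displacements by $1/20$".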
<p>For the particular case of inner products being bilinear:</p> <p>Inner products are intended to generalize the usual <em>dot product</em> from plane and space vector algebra. Therefore it would not make sense to require it to satisfy a property that the dot product doesn't have.</p> <p>For example, in the plane we have $2\bigl[ (1,1)\cdot (1,2) \bigr] = 6$ whereas $$[2(1,1)]\cdot[2(1,2)]=(2,2)\cdot(2,4) = 12$$ so your proposed rule doesn't hold for the dot product.</p>
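The computation above can be spelled out in a few lines of Python (a sketch; `dot` is just the plane dot product):

```python
# Scaling ONE argument of the dot product scales the result by c,
# while scaling BOTH arguments scales it by c^2 -- matching the
# computation in the answer above.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v, c = (1, 1), (1, 2), 2
lhs = c * dot(u, v)                           # 2 * 3 = 6
one_scaled = dot(tuple(c * a for a in u), v)  # <cu, v> = 6
both_scaled = dot(tuple(c * a for a in u),
                  tuple(c * b for b in v))    # <cu, cv> = 12
print(lhs, one_scaled, both_scaled)
```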
geometry
<p><img src="https://i.sstatic.net/d6azj.jpg" alt="enter image description here" /></p> <p>I discovered this elegant theorem in my facebook feed. Does anyone have any idea how to prove it?</p> <p>Formulations of this theorem can be found in the answers and the comments. You are welcome to join in the discussion.</p> <p>Edit: Current progress: The theorem is proven. There is a bound on the curvature to be satisfied before the theorem can hold.</p> <p>Some unresolved issues and some food for thought:</p> <p>(1) Formalise the definition (construction) of the parallel curve in the case of concave curves. Thereafter, consider whether this theorem is true under this definition, where the closed smooth curve is formed by convex and concave curves linked alternately.</p> <p>(2) Reaffirm the upper bound of <span class="math-container">$r$</span> proposed, i.e. <span class="math-container">$r=\rho$</span>, where <span class="math-container">$\rho$</span> is the radius of curvature, to avoid self-intersection.</p> <p>(3) What are the minimal conditions on the curves for this theorem to be true? This is similar to the first part of question (1).</p> <p>(4) Can this proof be generalised further? For example, what if this theorem is extended into higher dimensions? (Is there any analogy in higher dimensions?)</p> <p>Also, I would like to bring your attention towards some of the newly posted answers.</p> <p>Meanwhile, any alternative approach or proof is encouraged. Please do not hesitate in providing any insights or comments in further progressing this question.</p>
<p><img src="https://i.sstatic.net/gi52Y.jpg" alt="Example for irregular pentagon"></p> <p>Consider the irregular pentagon above. I have drawn it as well as its "extended version," with lines connecting the two. Notice that the perimeter of the extended version is the same as the original, save for several arcs — arcs which fit together, when translated, to form a perfect circle! Thus, the difference in length between the two is the circumference of that circle, $2\pi r$.</p> <p>Any convex shape can be approximated as well as one wishes by convex polygons. Thus, by taking limits (you have to be careful here but it works out), it works for any convex shape.</p> <p>I'll get back to you on the concave case, but it should still work. EDIT: I'm not sure it does… EDIT EDIT: Ah, the problem with my supposed counterexample — the triomino — is that the concave vertex was more than $r$ away from any point on its extended version. If you round out the triomino at that vertex, it works again. TL;DR, it works for concave shapes provided there are no "creases."</p>
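The vertex-arc argument can be checked numerically. The arc inserted at each vertex of a convex polygon subtends exactly the exterior angle there, and the exterior angles of any convex polygon sum to $2\pi$; so the offset perimeter grows by exactly $2\pi r$. A sketch in Python (the pentagon's coordinates are made up for illustration):

```python
import math

# Sum the exterior (turning) angles of a convex polygon traversed
# counterclockwise; the arcs at the offset vertices have total length
# r * (sum of exterior angles) = 2*pi*r.
def perimeter_increase(vertices, r):
    n = len(vertices)
    total_turn = 0.0
    for i in range(n):
        ax, ay = vertices[i]
        bx, by = vertices[(i + 1) % n]
        cx, cy = vertices[(i + 2) % n]
        a1 = math.atan2(by - ay, bx - ax)       # heading of edge a -> b
        a2 = math.atan2(cy - by, cx - bx)       # heading of edge b -> c
        total_turn += (a2 - a1) % (2 * math.pi) # exterior angle at vertex b
    return r * total_turn                       # total arc length added

# an irregular convex pentagon, listed counterclockwise
pentagon = [(0, 0), (4, -1), (6, 2), (3, 5), (-1, 3)]
print(perimeter_increase(pentagon, r=1.0))  # 2*pi, up to float error
```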
<p>Let $\beta: I \rightarrow \mathbb{R^2}, I \subset \mathbb{R}$ be a <em>positively oriented</em> plane curve <strong><em>parametrised by arc length</em></strong>. </p> <p>Now note that $\alpha$ is constructed by moving each point on $\beta$ a distance of $r$ along its <em>unit</em> normal vector. We can articulate this more precisely:</p> <blockquote class="spoiler"> <p> Since $\beta$ is param. by arc length, by definition, its (<em>unit</em>) tangent vector $\beta' =t_\beta$ has a norm of $1$ - that is, $t_\beta . t_\beta = 1$. Differentiate this inner product using the product rule to see that $2t_\beta . t'_\beta = 0$. One deduces that $t'_\beta \perp t_\beta$ and since $\beta$ is positively oriented, it is apparent that $\beta''=t'_\beta$ is exactly the outward normal vector of $\beta$ - we must normalise this to obtain the <em>unit</em> normal. Do so using the Serret-Frenet relation in the plane (i.e. 'torsion', $\tau$ vanishes) $\beta''=\kappa n_\beta$ (where $\kappa$ is the signed plane curvature at any given point), and so $n_\beta=\frac{\beta''}{\kappa}$.</p> </blockquote> <p>So, </p> <p>$$ \alpha = \beta + r\frac{\beta''}{\kappa} = \beta + r\,n_\beta \ \ \ (*) $$</p> <p>The arc length of some space curve $\gamma:(a,b)\rightarrow \mathbb{R^n}$ parametrised by arc length is given by,</p> <p>$$ \int^b_a ||\gamma'(s)||\,ds $$</p> <p>(where $s$ is the parametrisation variable)</p> <p>Let $l_\alpha$ and $l_\beta$ denote the respective lengths of $\alpha$ and $\beta$. 
We wish to show that $l_\alpha - l_\beta=2\pi r$</p> <p>Computing the relevant integral using ($*$) (I needn't bother explicitly writing the bounds since these aren't important to us here) and writing $\beta:= \beta(s)$ (which in turn, induces $\alpha=\alpha(s))$,</p> <p>$$ l_\alpha - l_\beta= \int ||\alpha'||\,ds - \int ||\beta'||\,ds = \int \left(||\beta' + r n'_\beta|| - ||\beta'||\right)\,ds\ \ \ \ (**) $$</p> <p>Recall that $\beta'=t_\beta$. We must determine the nature of $n'_\beta$ in order to proceed:</p> <p>Define the scalar function $\theta(s)$ as the inclination of $t_\beta(s)$ to the horizontal. Then we may write $t_\beta(s)=(\cos\theta(s),\sin\theta(s))$, and so $n_\beta=(-\sin\theta(s),\cos\theta(s))$ by application of the rotation matrix through $\pi/2$. This gives us,</p> <p>$$ n'_\beta(s)=-\theta'(s)t_\beta(s) $$</p> <p>We can stop sweating now since we can see that $n'_\beta$ is parallel to $\beta'$ - and that makes everything pretty neat. Plugging all of this into ($**$) and recalling that $||t_\beta||=1$,</p> <p>$$ l_\alpha - l_\beta= \int \left(||t_\beta -r\theta't_\beta|| - ||t_\beta||\right)\,ds = \int \left(|1-r\theta'|\,||t_\beta|| - ||t_\beta||\right)\,ds = \int \left|-r\theta'\right|\,ds$$</p> <p>And finally,</p> <p>$$ l_\alpha-l_\beta = \int \left|-r\theta'\right|\,ds= r \int \left|\theta'\right|\,ds=2\pi r $$</p> <p><strong>Q.E.D</strong></p> <p><strong><em>Fun fact</em></strong>: $\theta'(s)$ is exactly $\kappa(s)$, the signed plane curvature (from Serret-Frenet relation $t'=\kappa n$ for space curves given torsion $\tau$ vanishes) and so the final integral is often seen as $\int \kappa(s)ds$ which is a quantity called the <em>total curvature</em>!</p> <p><strong>Caveat</strong> One must assume differentiability on the inner curve - note that the result fails for polygons or curves with vertices. 
Furthermore, the special case of concave curves (where the initial result $l_\alpha=l_\beta+2\pi r$ does not always hold) is discussed in the comments below - this is not too difficult to deal with given a restriction on $r$ (though we don't have to apply this restriction if self-intersections <em>are permitted</em>; if they are indeed permitted, the proof will still work!) and modified computation of the total curvature.</p>
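As a numerical sanity check of the result (not part of the proof), one can offset a smooth convex curve outward along its unit normal and compare perimeters; here an ellipse, discretized finely. The ellipse's semi-axes and the value of $r$ are arbitrary choices:

```python
import math

# Offset an ellipse outward by r along its unit normal and compare the
# polygonal perimeters of the two curves; the difference should be 2*pi*r.
def offset_perimeters(a=3.0, b=2.0, r=0.5, n=20000):
    inner = outer = 0.0
    prev_p = prev_q = None
    for i in range(n + 1):
        t = 2 * math.pi * i / n
        x, y = a * math.cos(t), b * math.sin(t)     # point on the ellipse
        dx, dy = -a * math.sin(t), b * math.cos(t)  # tangent vector
        norm = math.hypot(dx, dy)
        nx, ny = dy / norm, -dx / norm              # outward unit normal
        p, q = (x, y), (x + r * nx, y + r * ny)
        if prev_p is not None:
            inner += math.hypot(p[0] - prev_p[0], p[1] - prev_p[1])
            outer += math.hypot(q[0] - prev_q[0], q[1] - prev_q[1])
        prev_p, prev_q = p, q
    return inner, outer

inner, outer = offset_perimeters()
print(outer - inner, 2 * math.pi * 0.5)  # nearly equal
```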
logic
<p>I expect that nearly everyone here at stackexchange is by now familiar with <a href="http://www.google.com/search?q=Cheryls+birthday" rel="nofollow noreferrer">Cheryl's birthday problem</a>, which spawned many variant problems, including a <a href="http://web.archive.org/web/20150509214648/https://plus.google.com/+TimothyGowers0/posts/Ak3Fnw8dvBk" rel="nofollow noreferrer">transfinite version</a> due to Timothy Gowers.</p> <p>In response, I have made my own transfinite epistemic logic puzzle, <a href="http://jdh.hamkins.org/transfinite-epistemic-logic-puzzle-challenge/" rel="nofollow noreferrer">Cheryl's rational gift</a>, which appears below. Can you solve it?</p> <p><img src="https://i.sstatic.net/Lq7C0.png" alt="" /></p> <em> <p><strong><em>Cheryl</em>  </strong> Welcome, Albert and Bernard, to my birthday party, and I thank you for your gifts. To return the favor, as you entered my party, I privately made known to each of you a rational number of the form <span class="math-container">$$n-\frac{1}{2^k}-\frac{1}{2^{k+r}},$$</span> where <span class="math-container">$n$</span> and <span class="math-container">$k$</span> are positive integers and <span class="math-container">$r$</span> is a non-negative integer; please consider it my gift to each of you. Your numbers are different from each other, and you have received no other information about these numbers or anyone's knowledge about them beyond what I am now telling you. Let me ask, who of you has the larger number?</p> <p><strong><em>Albert</em>   </strong> I don't know.</p> <p><strong><em>Bernard</em>   </strong> Neither do I.</p> <p><strong><em>Albert</em>   </strong> Indeed, I still do not know.</p> <p><em><strong>Bernard</strong></em>    And still neither do I.</p> <p><em><strong>Cheryl</strong></em>    Well, it is no use to continue that way! 
I can tell you that no matter how long you continue that back-and-forth, you shall not come to know who has the larger number.</p> <p><em><strong>Albert</strong></em>    What interesting new information! But alas, I still do not know whose number is larger.</p> <p><em><strong>Bernard</strong></em>    And still also I do not know.</p> <p><em><strong>Albert</strong></em>    I continue not to know.</p> <p><em><strong>Bernard</strong></em>    I regret that I also do not know.</p> <p><em><strong>Cheryl</strong></em>    Let me say once again that no matter how long you continue truthfully to tell each other in succession that you do not yet know, you will not know who has the larger number.</p> <p><em><strong>Albert</strong></em>    Well, thank you very much for saving us from that tiresome trouble! But unfortunately, I still do not know who has the larger number.</p> <p><em><strong>Bernard</strong></em>    And also I remain in ignorance. However shall we come to know?</p> <p><em><strong>Cheryl</strong></em>    Well, in fact, no matter how long we three continue from now in the pattern we have followed so far---namely, the pattern in which you two state back-and-forth that still you do not yet know whose number is larger and then I tell you yet again that no further amount of that back-and-forth will enable you to know---then still after as much repetition of that pattern as we can stand, you will not know whose number is larger! Furthermore, I could make that same statement a second time, even after now that I have said it to you once, and it would still be true. And a third and fourth as well! Indeed, I could make that same pronouncement a hundred times altogether in succession (counting my first time as amongst the one hundred), and it would be true every time. 
And furthermore, even after my having said it altogether one hundred times in succession, you would still not know who has the larger number!</p> <p><em><strong>Albert</strong></em>    Such powerful new information! But I am very sorry to say that still I do not know whose number is larger.</p> <p><em><strong>Bernard</strong></em>    And also I do not know.</p> <p><em><strong>Albert</strong></em>    But wait! It suddenly comes upon me after Bernard's last remark, that finally I know who has the larger number!</p> <p><em><strong>Bernard</strong> </em>   Really? In that case, then I also know, and what is more, I know both of our numbers!</p> <p><em><strong>Albert</strong></em>    Well, now I also know them!</p> </em> <hr /> <p><strong>Question.</strong> What numbers did Cheryl give to Albert and Bernard?</p> <p>There are many remarks and solution proposals posted already as comments on <a href="http://jdh.hamkins.org/transfinite-epistemic-logic-puzzle-challenge/" rel="nofollow noreferrer">my blog</a>, but the solutions proposed there do not all agree with one another.</p> <p>I shall post a solution there in a few days time, but I thought people here at Math.SE might enjoy the puzzle. I apologize if some find this question to be an inappropriate use of this site.</p> <p>You may also want to see my earlier <a href="http://jdh.hamkins.org/now-i-know" rel="nofollow noreferrer">transfinite epistemic logic puzzles</a>, with solutions there.</p>
<p>We can map $n - 2^{-k} - 2^{-(k+r)}$ onto the ordinal $\omega^2(n-1) + \omega(k-1) + r$; this preserves order. So Cheryl effectively says "I have given you both distinct ordinals $a$ and $b$, both less than $\omega^3$. Which is bigger?"</p> <p>Albert says "$a \geq 1$", Bernard says "$b \geq 2$", Albert says "$a \geq 3$", Bernard says "$b \geq 4$".</p> <p>Cheryl says "Actually $a$ and $b$ are both $\geq \omega$".</p> <p>Albert says "$a \geq \omega + 1$", Bernard says "$b \geq \omega + 2$", Albert says "$a \geq \omega + 3$", Bernard says "$b \geq \omega + 4$".</p> <p>Eventually Cheryl says "Actually both are $\geq \omega^2 100 + 1$".</p> <p>Albert says "$a \geq \omega^2 100 + 2$".</p> <p>Bernard says "$b \geq \omega^2 100 + 3$".</p> <p>Albert says "Aha! In that case I know the answer, which tells you that $a &lt; \omega^2 100 + 4$".</p> <p>From that, Bernard can work out Albert's number. How does he know whether Albert's number is $\omega^2 100 + 2$ or $\omega^2 100 + 3$? It can only be that $\omega^2 100 + 3$ is Bernard's number, so he knows Albert's must be $\omega^2 100 + 2$.</p> <p>Translating back into the language of the original question, this means that Albert's number is $101 - 2^{-1}- 2^{-3} = 100.375$ and Bernard's is $101 - 2^{-1}- 2^{-4} = 100.4375$.</p>
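The final translation back to Cheryl's rationals can be written out explicitly (a sketch; `gift` is a made-up name encoding the formula from the puzzle, with the ordinal $\omega^2(n-1)+\omega(k-1)+r$ corresponding to the triple $(n,k,r)$):

```python
from fractions import Fraction

def gift(n, k, r):
    # Cheryl's number for the ordinal w^2*(n-1) + w*(k-1) + r
    return Fraction(n) - Fraction(1, 2**k) - Fraction(1, 2**(k + r))

albert = gift(101, 1, 2)   # ordinal w^2*100 + 2
bernard = gift(101, 1, 3)  # ordinal w^2*100 + 3
print(albert, float(albert))    # 803/8 100.375
print(bernard, float(bernard))  # 1607/16 100.4375
```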
<p>Every time Albert and Bernard go back and forth they are eliminating a new ‘r’, and every time she tells them that no matter how many times they go back and forth they won’t get it, they are eliminating a new ‘k.’</p> <p>When Cheryl makes her big rant, she is effectively telling them that for all r and k when n = 1, they won’t find it, and furthermore that the first 100 times she makes that statement they won’t find it.</p> <p>She also says that after saying it 100 times, neither will know which number is larger. This means that neither Albert nor Bernard has the number 100. (something a couple other solutions missed I think.) Albert says he does not know, ruling out him having 100 + 1/4, and then Bernard says he still does not know, ruling out Bernard having 100 + 3/8 and 100 + 1/4.</p> <p>When Albert says that he suddenly knows, he can either have 100 + 3/8 or 100 + 7/16. The only way for Bernard to know both of their numbers is if he has 100 + 7/16, which gives us the answer:</p> <p>Albert: 100 + 3/8, Bernard: 100 + 7/16</p>
number-theory
<blockquote> <p>Let $x$ be a real number and let $n$ be a positive integer. It is known that both $x^n$ and $(x+1)^n$ are rational. Prove that $x$ is rational. </p> </blockquote> <p>What I have tried:</p> <p>Denote $x^n=r$ and $(x+1)^n=s$ with $r$, $s$ rationals. For each $k=0,1,\ldots, n−2$ expand $x^k\cdot(x+1)^n$ and replace $x^n$ by $r$. One thus obtains a linear system with $n−1$ equations with variables $x$, $x^2$, $x^3,\ldots x^{n−1}$. The matrix associated to this system has rational entries, and therefore if the solution is unique it must be rational (via Cramer's rule). This approach works fine if $n$ is small. The difficulty is to figure out what exactly happens in general. </p>
<p>Here is a proof which does not require Galois theory.</p> <p>Write $$f(z)=z^n-x^n\quad\hbox{and}\quad g(z)=(z+1)^n-(x+1)^n\ .$$ It is clear that these polynomials have rational coefficients and that $x$ is a root of each; therefore each is a multiple of the minimal polynomial of $x$, and every algebraic conjugate of $x$ is a root of both $f$ and $g$. However, if $f(z)=g(z)=0$ then we have $$|z|=|x|\quad\hbox{and}\quad |z+1|=|x+1|\ ;$$ this can be written as $$\def\c{\overline} z\c z=x\c x\ ,\quad z\c z+z+\c z+1=x\c x+x+\c x+1$$ which implies that $${\rm Re}(z)={\rm Re}(x)\ ,\quad {\rm Im}(z)=\pm{\rm Im}(x)=0\tag{$*$}$$ and so $z=x$. In other words, $f$ and $g$ have no common root except for $x$; so $x$ has no conjugates except for itself, and $x$ must be rational.</p> <p>As an alternative, the last part of the argument can be seen visually: the roots of $f$ lie on the circle with centre $0$ and radius $|x|$; the roots of $g$ lie on the circle with centre $-1$ and radius $|x+1|$; and from a diagram, these circles intersect only at $x$. Thus, again, $f$ and $g$ have no common root except for $x$.</p> <p><strong>Observe</strong> that the deduction in $(*)$ relies on the fact that $x$ is real: as mentioned in Micah's comment on the original question, the result need not be true if $x$ is not real.</p> <p><strong>Comment</strong>. A virtually identical argument proves the following: if $n$ is a positive integer, if $a$ is a non-zero rational and if $x^n$ and $(x+a)^n$ are both rational, then $x$ is rational.</p>
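The key step of the argument above — that $f$ and $g$ share no root other than $x$ — can be illustrated numerically (a sketch; the values of $x$ and $n$ are arbitrary):

```python
import cmath

# The roots of f(z) = z^n - x^n are x * zeta^j for the n-th roots of unity
# zeta^j; they all lie on the circle |z| = |x|.  Among them, only z = x also
# lies on the circle |z + 1| = |x + 1|, i.e. is a root of g.
x, n = 1.7, 7
roots_f = [x * cmath.exp(2j * cmath.pi * j / n) for j in range(n)]
common = [z for z in roots_f if abs(abs(z + 1) - abs(x + 1)) < 1e-9]
print(common)  # only the root z = x survives
```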
<p>Let $x$ be a real number such that $x^n$ and $(x+1)^n$ are rational. Without loss of generality, I assume that $x \neq 0$ and $n&gt;1$.</p> <p>Let $F = \mathbb Q(x, \zeta)$ be the subfield of $\mathbb C$ generated by $x$ and a primitive $n$-th root of unity $\zeta$. </p> <p>It is not difficult to see that $F/\mathbb Q$ is Galois; indeed, $F$ is the splitting field of the polynomial $$X^n - x^n \in \mathbb Q[X].$$ The conjugates of $x$ in $F$ are all of the form $\zeta^a x$ for some powers $\zeta^a$ of $\zeta$. Indeed, if $x'$ is another root of $X^n - x^n,$ in $F$, then</p> <p>$$(x/x')^n = x^n/x^n = 1$$</p> <p>so that $x/x'$ is an $n$-th root of unity, which is necessarily of the form $\zeta^a$ for some $a \in \mathbb Z/n\mathbb Z$.</p> <p>Assume now that $x$ is <strong>not</strong> rational. Then, since $F$ is Galois, there exists an automorphism $\sigma$ of $F$ such that $x^\sigma \neq x$. Choose any such automorphism. By the above remark, we can write $x^\sigma = \zeta^a x$, for some $a \not \equiv 0 \pmod n$. </p> <p>Since also $(1+x)^\sigma = 1+ x^\sigma \neq 1 + x$, and since $(1+x)^n$ is also rational, the same argument applied to $1+x$ shows that $(1+x)^\sigma = \zeta^b (1+x)$ for some $b \not \equiv 0 \pmod n$. And since $\sigma$ is a field automorphism it follows that</p> <p>$$\zeta^b (1+x) = (1+x)^\sigma = 1 + x^\sigma = 1 + \zeta^a x$$</p> <p>and therefore</p> <p>$$x(\zeta^b - \zeta^a) = 1 - \zeta^b.$$</p> <p>But $\zeta^b \neq 1$ because $b \not \equiv 0\pmod n$, so we may divide by $1-\zeta^b$ to get</p> <p>$$x^{-1} = \frac{\zeta^b - \zeta^a}{1-\zeta^b}.$$</p> <p>Now, since $x$ is real, this complex number is invariant under complex conjugation, hence</p> <p>$$x^{-1} = \frac{\zeta^b - \zeta^a}{1-\zeta^b} = \frac{\zeta^{-b} - \zeta^{-a}}{1-\zeta^{-b}} = \frac{1 - \zeta^{b-a}}{\zeta^b-1} = \zeta^{-a}\frac{\zeta^a - \zeta^b}{\zeta^b-1} = \zeta^{-a} x^{-1}.$$</p> <p>But this implies that $\zeta^a = 1$, which contradicts that $x^\sigma \neq x$. 
So we are done. $\qquad \blacksquare$</p> <p>The following stronger statement actually follows from the proof:</p> <blockquote> <p><em>For each $n$, there are finitely many non-rational complex numbers $x$ such that $x^n$ and $(x+1)^n$ are rational. These complex numbers belong to the cyclotomic field $\mathbb Q(\zeta_n)$, and none of them are real.</em> </p> </blockquote> <p>Indeed, there are finitely many choices for $a$ and $b$.</p> <p>In <a href="https://math.stackexchange.com/a/1087226/12507">this related answer,</a> Tenuous Puffin proves that there exist only $26$ real numbers having this property, allowing for any value of $n$.</p>
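The finiteness remark can be made concrete: for fixed $n$, one simply enumerates the candidates $x^{-1}=(\zeta^b-\zeta^a)/(1-\zeta^b)$ and checks that none is real and nonzero, matching the contradiction derived in the proof. A sketch (for $n=5$):

```python
import cmath

# For a fixed n, any non-rational real solution x would force
#   1/x = (zeta^b - zeta^a) / (1 - zeta^b)   for some a, b != 0 (mod n).
# Enumerating all pairs shows no candidate is real and nonzero.
n = 5
zeta = cmath.exp(2j * cmath.pi / n)
real_nonzero = []
for a in range(1, n):
    for b in range(1, n):
        w = (zeta**b - zeta**a) / (1 - zeta**b)
        if abs(w.imag) < 1e-9 and abs(w) > 1e-9:
            real_nonzero.append((a, b))
print(real_nonzero)  # [] -- so for n = 5 every real solution is rational
```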
linear-algebra
<p>I plan to self-study linear algebra this summer. I am sorta already familiar with vectors, vector spaces and subspaces and I am really interested in everything about matrices (diagonalization, ...), linear maps and their matrix representation and eigenvectors and eigenvalues. I am looking for a book that handles all of the aforementioned topics in detail. I also want to build a solid basis of the mathematical way of thinking to get ready for an exciting abstract algebra course next semester, so my main aim is to work on proofs for somewhat hard problems. I got Lang's "Intro. to Linear Algebra" and it is too easy, superficial.</p> <p>Can you advise me a good book for all of the above? Please take into consideration that it is for self-study, so it's gotta work on its own. Thanks.</p>
<p>When I learned linear algebra for the first time, I read through Friedberg, Insel, and Spence. It is slightly more modern than Hoffman/Kunze, is fully rigorous, and has a bunch of useful exercises to work through.</p>
<p>A great book freely available online is <a href="https://sites.google.com/a/brown.edu/sergei-treil-homepage/linear-algebra-done-wrong" rel="nofollow noreferrer">Linear Algebra Done Wrong</a> by Sergei Treil. It covers all the topics you listed and culminates in a discussion of spectral theory, which can be considered a generalized treatment of diagonalization.</p> <p>Don't be put off by the book's title. It's a play on the popular Linear Algebra Done Right, by Sheldon Axler. Axler's book is also very good, and you might want to check it out.</p> <p>The classic proof-based linear algebra text is the one by Hoffman and Kunze. I find the two books I listed above easier to read, but you might also consider it. In any case, it is a good reference.</p> <p>I hope this helps. Please comment if you have any questions.</p>
geometry
<p>I was just wondering why we have 90° degrees for a perpendicular angle. Why not 100° or any other number?</p> <p>What is the significance of 90° for the perpendicular or 360° for a circle?</p> <p>I didn't ever think about this during my school time.</p> <p>Can someone please explain it mathematically? Is it due to some historical reason?</p>
<p>360 is an incredibly <a href="http://en.wikipedia.org/wiki/Abundant_number">abundant</a> number, which means that there are many factors. So it makes it easy to divide the circle into $2, 3, 4, 5, 6, 8, 9, 10, 12,\ldots$ parts. By contrast, 400 gradians cannot even be divided into 3 equal whole-number parts. While this may not necessarily be why 360 was chosen in the first place, it could be one of the reasons we've stuck with the convention.</p> <p>By the way, when working in radians, we just "live with" the fact that most common angles are fractions involving $\pi$. There's a small group of people who prefer to use a constant called $\tau$, which is just $2\pi$. Then angles seem naturally to be divisions of the circle: The angle that divides a circle into $n$ equal parts is $\tau/n$ (radians).</p> <p>Hope this helps!</p>
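The abundance claim is easy to verify directly (a quick sketch comparing $360$ with $400$):

```python
# 360 = 2^3 * 3^2 * 5 has 24 divisors, so many "nice" fractions of a full
# turn are whole numbers of degrees; 400 (gradians) has only 15 divisors
# and is not even divisible by 3.
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

print(len(divisors(360)), len(divisors(400)))   # 24 15
print([d for d in (2, 3, 4, 5, 6, 8, 9, 10, 12) if 360 % d == 0])
print(400 % 3)  # 1 -- a third of 400 gradians is not a whole number
```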
<p>I have heard that the ancient Babylonians used a base-$60$ numeral system with sub-base $10$.</p> <p>Certainly such a system was used by Ptolemy in the second century AD. See Gerald Toomer's translation of Ptolemy's <em>Almagest</em>. In particular Ptolemy divided the circle into $360$ degrees. See <a href="http://en.wikipedia.org/wiki/Ptolemy%27s_table_of_chords">http://en.wikipedia.org/wiki/Ptolemy%27s_table_of_chords</a>, <a href="http://en.wikipedia.org/wiki/Almagest">http://en.wikipedia.org/wiki/Almagest</a>, and <a href="http://hypertextbook.com/eworld/chords.shtml">http://hypertextbook.com/eworld/chords.shtml</a> .</p>
number-theory
<p>Good evening! I am very new to this site. I would like to share the following material from Prof. Gandhi's note book and my observations. Of course it is a little long with more questions. But, with good faith in this site, I am posting it in hopes of good solutions/answers.</p> <p>If we exclude the primes $2$, $5$ and $11$, every prime can be written as $x + y + z$, where $x$, $y$ and $z$ are some positive numbers. Interestingly, $x \times y \times z = c^3$, where $c$ is again some positive number. Let us see the magic for primes $3,7,13,31,43,73$ $$ \begin{align} 3 = 1 + 1 + 1 &amp;\Longrightarrow 1 \times 1 \times 1 = 1^3\\ 7 = 1 + 2 + 4 &amp;\Longrightarrow 1 \times 2 \times 4 = 2^3\\ 13 = 1 + 3 + 9 &amp;\Longrightarrow 1 \times 3 \times 9 = 3^3\\ 31 = 1 + 5 + 25 &amp;\Longrightarrow 1 \times 5 \times 25 = 5^3\\ 43 = 1 + 6 + 36 &amp;\Longrightarrow 1 \times 6 \times 36 = 6^3\\ 73 = 1 + 8 + 64 &amp;\Longrightarrow 1 \times 8 \times 64 = 8^3\\ \end{align} $$ Can you justify the above pattern? How to generalize the above statement either mathematically or by computer?</p> <p>I observed that it is true for primes less than $9500$. Can you provide a computational algorithm to verify this?</p> <p>Also, we conjecture that, except for $1, 2, 3, 5, 6, 7, 11, 13, 14, 15, 17, 22, 23$, every positive number can be written as a sum of four positive numbers whose product is a fourth power. Now, can we generalize this? 
Also, I want to know: are there numbers expressible as a sum of $n$ integers whose product is an $n$-th power?</p> <p>Thank you so much.</p> <p><strong>edit</strong></p> <p>Concerning this cubic property:</p> <p>Notice that this can be extended to hold for almost all squarefree positive integers $&gt; 2$, not just the primes.</p> <p>For instance: we know for the prime $7$ that $7=1+2+4$, so we also get $7A = 1A + 2A + 4A$, and $1A \times 2A \times 4A$ is simply equal to $8A^3$.</p> <p>In fact this can be extended to all odd positive integers $&gt;11$ if $25,121$ have a solution.</p> <p>Hence I am interested in this and I placed a bounty.</p> <p>I edited the question because it is too much for a comment and certainly not an answer.</p> <p>Btw I'm curious about this Gandhi person, though info about that does not get the bounty, naturally.</p> <p>I would like to recall David Speyer's comment: every prime that is $1 \bmod 3$ is of the form $a^2+ab+b^2$, so that covers half the primes immediately.</p> <p>So that might be a line of attack.</p>
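A brute-force check of the claimed property (every prime other than $2$, $5$ and $11$ is a sum of three positive integers whose product is a perfect cube) is easy to run; here is a sketch, independent of the verification up to $9500$ mentioned above, testing only primes below $200$ to keep it fast:

```python
# For each prime p, search over triples x <= y <= z with x + y + z = p
# and test whether x*y*z is a perfect cube.
def is_cube(m):
    r = round(m ** (1 / 3))
    return any((r + d) ** 3 == m for d in (-1, 0, 1))

def has_rep(p):
    for x in range(1, p - 1):
        for y in range(x, (p - x) // 2 + 1):  # guarantees z = p-x-y >= y
            if is_cube(x * y * (p - x - y)):
                return True
    return False

primes = [p for p in range(2, 200)
          if all(p % d for d in range(2, int(p ** 0.5) + 1))]
failures = [p for p in primes if p not in (2, 5, 11) and not has_rep(p)]
print(failures)  # [] -- no exceptions below 200
```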
<p>To add pessimism about finding a prime $p$ that cannot be written as a sum of $3$ co-divisors of a cube, I've drawn some images:</p> <p><strong>Number of ways to write an integer $p$ as the sum $x+y+z$, where $x\cdot y\cdot z=c^3$.</strong></p> <p>($\color{red}{\bf{red}}$ dots $-$ composite numbers, $\bf{black}$ dots $-$ prime numbers)</p> <hr> <p>$$p\le 500$$ <img src="https://i.sstatic.net/zuTWW.png" alt="p=x+y+z, xyz=c^3, p&lt;500"></p> <hr> <p>$$p\le 5\;000$$ <img src="https://i.sstatic.net/vx6KG.png" alt="p=x+y+z, xyz=c^3, p&lt;5 000"></p> <hr> <p>$$p\le 50\;000$$ <img src="https://i.sstatic.net/KT00u.png" alt="p=x+y+z, xyz=c^3, p&lt;50 000"></p> <hr> <p>$$p\le 500\;000$$ <img src="https://i.sstatic.net/4i5ws.png" alt="p=x+y+z, xyz=c^3, p&lt;500 000"></p> <hr> <p>$$p\le 5\;000\;000$$ <img src="https://i.sstatic.net/TUR4N.png" alt="p=x+y+z, xyz=c^3, p&lt;5 000 000"></p> <hr> <p>Control points: <br> $p=486$ (composite): $1$ way: $486 = 162+162+162$; <br> $p=2048$ (composite): $2$ ways: $2048 = 128+720+1200 = 224+256+1568$; <br> $p=6656$ (composite): $3$ ways: $\small {6656 = 416+2340+3900 = 512+1536+4608 = 728+832+5096}$; <br> $p=7559$ (prime): $4$ ways: $\scriptsize{7559 = 9+50+7500 = 114+225+7220 = 135+1024+6400 = 722+2809+4028}$; <br> $p=26624$ (composite): $5$ ways; <br> $p=58757$ (prime): $10$ ways; <br> $p=80429$ (prime): $13$ ways; <br> $p=111611$ (prime): $15$ ways; <br> $...$</p> <hr> <p>These images were made in the style of the ones <a href="http://en.wikipedia.org/wiki/Goldbach%27s_conjecture#Heuristic_justification" rel="noreferrer">here</a>.<br> (Since seeing the images for Goldbach's conjecture, I have lost any hope of finding a counterexample to it.)</p> <hr> <p>Method of search:</p> <p>build an array $a[3N]$ storing the number of ways;</p> <p>$a[p]$ is the number of ways to write $p$ as such a sum; (initially, each $a[j]=0$);</p> <p>for $c = 1 ... 
N$<br> $\quad$ create the list of divisors of $c^3$;<br> $\quad$ (it is enough to know the prime decomposition of $c$)<br> $\quad$ for each pair $x,y$ of divisors of $c^3$, find $z=\dfrac{c^3}{xy}$;<br> $\quad$ if $z\in\mathbb{Z}$ and $x\le y\le z$, then increase $a[x+y+z]$.</p> <p>(if $c&gt;N$, then the sum of the co-divisors of $c^3$ is greater than $3N$).</p>
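<p>The search loop above translates directly into Python; the function names are mine, and the bound $c \le N$ uses the AM-GM fact that the co-divisors of $c^3$ sum to at least $3c$:</p>

```python
def divisors_of_cube(c):
    """All divisors of c^3, built from the prime factorization of c."""
    factors = {}
    n = c
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    divs = [1]
    for p, e in factors.items():
        divs = [q * p**k for q in divs for k in range(3 * e + 1)]
    return sorted(divs)

def count_ways(limit):
    """a[p] = number of ways p = x + y + z with x <= y <= z and xyz a cube."""
    a = [0] * (limit + 1)
    for c in range(1, limit // 3 + 1):   # x = y = z = c gives the minimal sum 3c
        cube = c**3
        divs = divisors_of_cube(c)
        for i, x in enumerate(divs):
            if 3 * x > limit:            # sum >= 3x once y, z >= x
                break
            for y in divs[i:]:
                if x + 2 * y > limit:    # sum >= x + 2y once z >= y
                    break
                if cube % (x * y) == 0:
                    z = cube // (x * y)
                    if z >= y and x + y + z <= limit:
                        a[x + y + z] += 1
    return a
```

<p>Running <code>count_ways(2100)</code> reproduces the control points above: one way for $486$ and two ways for $2048$.</p>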
<p>Two comments on positivity in David Speyer's method.</p> <p>First, we can take $x,y &gt; 0$ in $p = x^2 + x y + y^2,$ which we can do when $p \equiv 1 \pmod 3.$ Given $p = u^2 + u v + v^2,$ with $u&gt;0$ but $v &lt; 0.$ If $u+v &gt; 0,$ we can use $p = (u+v)^2 + (u+v)(-v) + (-v)^2$ to get everything positive, as both $u+v, -v &gt; 0.$ If $u+v &lt;0,$ switch to $u^2 + u(-u-v)+ (-u-v)^2.$ </p> <p>Next, we get another good one with an indefinite form $$\color{blue}{x^2 + 4 x y + 2 y^2.}$$</p> <p><strong>Lemma</strong>: every (positive) prime $p \equiv \pm 1 \pmod 8 $ can be written as $p = u^2 - 2 v^2$ with $u &gt; 2v.$</p> <p><strong>Proof</strong>. Take the representation $p = u^2 - 2 v^2$ such that $v$ has the smallest possible positive value, with $u&gt;0.$ Note that $u &gt; v \sqrt 2,$ also $u$ is odd. We get a slightly different representation with $(u,-v).$ Next, we apply the generator of the automorphism group of $x^2 - 2 y^2$ to get another representation, $$ (3u-4v, 2u-3v). $$</p> <p>Now, if $2u - 3v \leq 0, $ by minimality of $|v|$ we also get $2u - 3 v \leq -v,$ so $2u \leq 2v$ and $u \leq v.$ This contradicts $u &gt; v \sqrt 2.$</p> <p>Therefore, $2u - 3 v &gt; 0,$ and minimality again says $2u-3v &gt; v.$ This gives us $2u&gt;4v$ and $u &gt; 2v.$</p> <p>So we have $u &gt; 2 v.$ Define $$ x = u - 2 v, \; \; y = v. $$ Then $$ p = x^2 + 4 x y + 2 y^2 = u^2 - 2 v^2, $$ with both $x,y &gt; 0.$</p> <p>EDIT, Thursday, August 21. The remaining primes are $5,11 \pmod 24.$ These are all represented by $2x^2 + 3 y^2,$ but the misfortune is that this does not give any cubes; furthermore, it appears that no $SL_2 \mathbb Z$-equivalent form has a product of the three coefficients a cube. So, the best i have come up with is $$ 14 x^2 + 36 xy + 147 y^2. $$ The good part is the cubes. 
There are two bad parts: it represents only primes $5,11 \pmod {24},$ but the additional constraint is that the prime not be a quadratic residue $\bmod {17}.$ Furthermore, with a positivity constraint, as in the original problem, it represents only about half of those. So, all told, it gives only about one fourth of the remaining primes, such as $$ 197, 677, 1907, 1979, 2213 , 2237, 3083, 3803, 4091, 6011, 7349, 8429 , 10139 , 10781, 11213, \ldots $$</p> <p>So, not bad but not great, about 13/16 of the primes so far. </p> <p>EDIT, Friday, August 22.</p> <p>It appears that all primes $p &gt; 0$ with $(p|5) = 1$ and $(p|11) = 1$ are represented by $$ x^2 + 25 xy + 5 y^2 $$ with $x,y &gt; 0.$ I think I have the proof; it starts with an elementary argument that the same can be accomplished for $x^2 + 23 xy - 19 y^2,$ then some fiddling with improper automorphs similar to the proof for $x^2 + 4 x y + 2 y^2$ but with more steps. </p> <p>EEEDDDIIIITTTTTTT, Saturday, August 23:</p> <p>THEOREM: given a prime $p &gt; 30$ with $(p|5)=1$ and $(p|11)=1,$ we can write $ p = x^2 + 25 x y + 5 y^2 $ with positive integers $x,y.$</p> <p>Take prime $p &gt; 0$ and integers $s,t &gt; 0$ with $$ p = s^2 - 605 t^2. $$ Note that $s &gt; t \sqrt {605} \approx 24.5967 t.$ Then we can write $$ p = x^2 + 23 x y - 19 y^2, $$ with $x = s - 23 t, \; \; y = 2 t,$ so that $x,y &gt;0.$ </p> <p>Now, find the smallest positive integer $v$ such that there is another positive integer $u$ with $$ p = u^2 + 23 u v - 19 v^2. $$</p> <p>Lemma: given $ p = x^2 + 23 x y - 19 y^2 $ with $y &lt; 0,$ then $y \leq -v.$</p> <p>Proof: if $x &lt; 0,$ we get a representation in positive integers $(-x,-y),$ so $-y \geq v.$ If $x &gt; 0,$ note that $$ x &gt; \frac{\sqrt {605} + 23}{2} |y| \approx 23.798 |y|. 
$$ Therefore the new representation $(x + 23 y, -y)$ is again in positives and $-y \geq v.$</p> <p>Corollary: if $ p = x^2 + 23 x y - 19 y^2 $ with $x &lt;0, y &gt;0,$ then $y \geq v.$</p> <p>Back to the minimum $(u,v),$ both positive. Note $$ u &gt; \frac{\sqrt {605} - 23}{2} v \approx 0.798 v. $$ </p> <p>There is another representation, $$(4u-3v,5u-4v)$$ We know $5u-4v \neq 0$ as $p$ is not a square. If $5u-4v &lt; 0,$ then $5u-4v \leq -v,$ thus $5u &lt; 3 v.$ But this gives $u &lt; 0.6 v,$ while in fact, $u &gt; 0.798 v. $</p> <p>Therefore, $5u-4v &gt; 0,$ whereupon $5u - 4 v \geq v.$ Thus $5 u \geq 5 v$ and $u \geq v.$ Finally, as $u,v$ are relatively prime, they are not equal unless both are actually $1,$ giving the prime $5.$ Thus, for $p &gt; 5,$ we get $u &gt; v.$</p> <p>Finally, finally, finally, we can write $$ \color{blue}{ p = x^2 + 25 x y + 5 y^2} $$ with $$ x = u - v, y = v $$ both positive. </p>
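<p>The representation claims above are easy to spot-check by brute force. In this sketch (function names are mine), <code>represent</code> searches for positive $x,y$ with $a x^2 + b x y + c y^2 = p$, and <code>is_qr</code> is Euler's criterion; note that $x^2 \cdot 4xy \cdot 2y^2 = (2xy)^3$ and $x^2 \cdot 25xy \cdot 5y^2 = (5xy)^3$, which is what ties these forms to the cube problem:</p>

```python
def represent(p, a, b, c):
    """Search for x, y > 0 with a*x^2 + b*x*y + c*y^2 == p.
    Assumes a, b, c > 0, so x and y are bounded and the search terminates."""
    y = 1
    while c * y * y < p:
        x = 1
        while a * x * x + b * x * y + c * y * y <= p:
            if a * x * x + b * x * y + c * y * y == p:
                return x, y
            x += 1
        y += 1
    return None

def is_qr(a, q):
    """Euler's criterion: is a a nonzero quadratic residue mod the odd prime q?"""
    return pow(a % q, (q - 1) // 2, q) == 1

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, flag in enumerate(sieve) if flag]
```

<p>With this, every prime $p \equiv \pm 1 \pmod 8$ up to $2000$ is found to be $x^2+4xy+2y^2$ with $x,y&gt;0$, and every prime $p&gt;30$ with $(p|5)=(p|11)=1$ up to $2000$ is found to be $x^2+25xy+5y^2$, matching the lemma and the theorem.</p>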
probability
<p>Just for fun, I am trying to find a good method to generate a random number between 1 and 10 (uniformly) with an unbiased six-sided die.</p> <p>I found a way, but it may require a lot of steps before getting the number, so I was wondering if there are more efficient methods.</p> <p><strong>My method:</strong></p> <ol> <li>Throw the die and call the result $n$. If $1\leq n\leq 3$ your number will be between $1$ and $5$ and if $4\leq n\leq 6$ your number will be between $6$ and $10$. Hence, we have reduced the problem to generating a random number between $1$ and $5$.</li> <li>Now, to get a number between $1$ and $5$, throw the die five times. If the $i$th throw got the largest result, take your number to be $i$. If there is no largest result, start again until there is.</li> </ol> <p>The problem is that although the probability that there will eventually be a largest result is $1$, it might take a while before getting it.</p> <p>Is there a more efficient way that requires only some <em>fixed</em> number of steps? <strong>Edit:</strong> Or if not possible, a method with a smaller expected number of rolls?</p>
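<p>To quantify the cost of the method in the question: step 2 succeeds only when the maximum of the five throws is unique, and that probability can be computed exactly by enumerating all $6^5$ outcomes (a quick sketch, not part of the original post):</p>

```python
from fractions import Fraction
from itertools import product

# all 6^5 = 7776 equally likely results of five die throws
outcomes = list(product(range(1, 7), repeat=5))
hits = sum(1 for o in outcomes if o.count(max(o)) == 1)
p_unique = Fraction(hits, len(outcomes))   # chance a round of five throws decides

# step 2 costs 5 throws per attempt and 1/p_unique attempts on average;
# step 1 adds one more throw
expected_rolls = 1 + 5 / p_unique
```

<p>This gives $p = 4895/7776 \approx 0.63$, so the whole method needs about $8.9$ rolls on average.</p>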
<p>You may throw the die many ($N$) times, take the sum of the outcomes and consider the residue class $\pmod{10}$. The distribution on $[1,10]$ gets closer and closer to a uniform distribution as $N$ increases.</p> <hr> <p>You may throw the die once to decide if the final outcome will be even or odd, then throw it again until it gives an outcome different from six, that fixes the residue class $\pmod{5}$. In such a way you generate a uniform distribution over $[1,10]$ with $\frac{11}{5}=\color{red}{2.2}$ tosses, in average. </p> <hr> <p>If you are allowed to throw the die in a wedge, you may label the edges of the die with the numbers in $[1,10]$ and mark two opposite edges as "reroll". In such a way you save exactly one toss, and need just $\color{red}{1.2}$ tosses, in average.</p> <hr> <p>Obviously, if you are allowed to throw the die in decagonal glass you don't even need the die, but in such a case the lateral thinking spree ends with just $\color{red}{1}$ toss. Not much different from buying a D10, as Travis suggested.</p> <hr> <p>At last, just for fun: look at the die, without throwing it. Then look at your clock, the last digit of the seconds. Add one. $\color{red}{0}$ tosses.</p>
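<p>The parity-then-residue scheme in the second suggestion can be verified with exact rational arithmetic; combining the two results by the Chinese remainder theorem into a number from $1$ to $10$ is my own bookkeeping, not part of the answer:</p>

```python
from fractions import Fraction

def two_stage_distribution():
    """Exact outcome probabilities: toss 1 fixes the parity, and the first
    non-six toss (each value 1..5 with probability 1/5) fixes the class mod 5."""
    dist = {n: Fraction(0) for n in range(1, 11)}
    for first in range(1, 7):        # parity toss: each face has probability 1/6
        for r in range(1, 6):        # surviving non-six toss: each value 1/5
            # unique n in 1..10 with the right parity and the right class mod 5
            n = next(m for m in range(1, 11)
                     if m % 2 == first % 2 and m % 5 == r % 5)
            dist[n] += Fraction(1, 6) * Fraction(1, 5)
    return dist
```

<p>Every outcome gets probability exactly $1/10$. The geometric wait for a non-six costs $6/5$ tosses on average, which is where the quoted $1 + 6/5 = 11/5 = 2.2$ comes from.</p>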
<p>Write out the base-$6$ decimals of $\frac{0}{10}$ through $\frac{10}{10}$.</p> <p>$$\begin{array}{cc} \frac{0}{10} &amp; = 0.00000000\dots\\ \frac{1}{10} &amp; = 0.03333333\dots\\ \frac{2}{10} &amp; = 0.11111111\dots\\ \frac{3}{10} &amp; = 0.14444444\dots\\ \frac{4}{10} &amp; = 0.22222222\dots\\ \frac{5}{10} &amp; = 0.30000000\dots\\ \frac{6}{10} &amp; = 0.33333333\dots\\ \frac{7}{10} &amp; = 0.41111111\dots\\ \frac{8}{10} &amp; = 0.44444444\dots\\ \frac{9}{10} &amp; = 0.52222222\dots\\ \frac{10}{10} &amp; = 1.00000000\dots\\ \end{array}$$</p> <p>Treat rolls of a $6$ as a $0$. As you roll your $6$-sided die, you are generating digits of a base-$6$ decimal number, uniformly distributed between $0$ and $1$. There are $10$ gaps in between the fractions for $\frac{x}{10}$, corresponding to the $10$ uniformly random outcomes you are looking for. You know which outcome you are generating as soon as you know which gap the random number will be in. </p> <p>This is kind of annoying to do. Here's an equivalent algorithm:</p> <blockquote> <p>Roll a die $$\begin{array}{c|c} 1 &amp; A(0,1)\\ 2 &amp; B(1,2,3)\\ 3 &amp; A(4,3)\\ 4 &amp; A(5,6)\\ 5 &amp; B(6,7,8)\\ 6 &amp; A(9,8)\\ \end{array}$$ $A(x,y)$: Roll a die $$\begin{array}{c|c} 1,2,3 &amp; x\\ 4 &amp; A(x,y)\\ 5,6 &amp; y\\ \end{array}$$ $B(x,y,z)$: Roll a die $$\begin{array}{c|c} 1 &amp; x\\ 2 &amp; C(x,y)\\ 3,4 &amp; y\\ 5 &amp; C(z,y)\\ 6 &amp; z\\ \end{array}$$ $C(x,y)$: Roll a die $$\begin{array}{c|c} 1 &amp; x\\ 2 &amp; C(x,y)\\ 3,4,5,6 &amp; y\\ \end{array}$$</p> </blockquote> <p>One sees that:</p> <ul> <li>$A(x,y)$ returns $x$ with probability $\frac35$ and $y$ with probability $\frac25$.</li> <li>$B(x,y,z)$ returns $x$ with probability $\frac15$, $y$ with probability $\frac35$, and $z$ with probability $\frac15$.</li> <li>$C(x,y)$ returns $x$ with probability $\frac15$ and $y$ with probability $\frac45$.</li> </ul> <p>Overall, it produces the $10$ outcomes each with $\frac1{10}$ probability. 
</p> <p>Procedures $A$ and $C$ are expected to require $\frac65$ rolls. Procedure $B$ is expected to require $\frac23 \cdot 1 + \frac13 (1 + E(C)) = \frac75$ rolls. So the main procedure is expected to require $\frac23 (1 + E(A)) + \frac13(1 + E(B)) = \frac{34}{15} = 2.2\overline{6}$ rolls.</p>
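<p>The bulleted probabilities can be double-checked with exact arithmetic. Here the self-loops (procedure $A$ rolling a $4$, procedure $C$ rolling a $2$) are already solved in closed form, e.g. $p = \frac12 + \frac16 p$ gives $p = \frac35$ for $A$; the helper <code>mix</code> is my own scaffolding:</p>

```python
from fractions import Fraction as F

def mix(*branches):
    """Combine (probability, distribution) pairs into one distribution."""
    out = {}
    for p, dist in branches:
        for k, q in dist.items():
            out[k] = out.get(k, F(0)) + p * q
    return out

# solving p = 3/6 + (1/6) p for A's self-loop gives p = 3/5, and
# p = 1/6 + (1/6) p for C's self-loop gives p = 1/5
def A(x, y):
    return {x: F(3, 5), y: F(2, 5)}

def C(x, y):
    return {x: F(1, 5), y: F(4, 5)}

def B(x, y, z):
    return mix((F(1, 6), {x: F(1)}), (F(1, 6), C(x, y)),
               (F(2, 6), {y: F(1)}), (F(1, 6), C(z, y)),
               (F(1, 6), {z: F(1)}))

main = mix((F(1, 6), A(0, 1)), (F(1, 6), B(1, 2, 3)), (F(1, 6), A(4, 3)),
           (F(1, 6), A(5, 6)), (F(1, 6), B(6, 7, 8)), (F(1, 6), A(9, 8)))
```

<p>The result is exactly $1/10$ for each of the ten outcomes, confirming the bullet list.</p>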
combinatorics
<p>Why is $\sum\limits_{k=1}^{n} k^m$ a polynomial with degree $m+1$ in $n$?</p> <p>I know this is well-known. But how to prove it rigorously? Even mathematical induction does not seem so straightforward.</p> <p>Thanks.</p>
<p>Let $V$ be the space of all polynomials $f : \mathbb{N}_{\ge 0} \to F$ (where $F$ is a field of characteristic zero). Define the <em>forward difference operator</em> $\Delta f(n) = f(n+1) - f(n)$. It is not hard to see that the forward difference of a polynomial of degree $d$ is a polynomial of degree $d-1$, hence defines a linear operator $V_d \to V_{d-1}$ where $V_d$ is the space of polynomials of degree at most $d$. Note that $\dim V_d = d+1$. </p> <p>We want to think of $\Delta$ as a discrete analogue of the derivative, so it is natural to define the corresponding discrete analogue of the integral $(\int f)(n) = \sum_{k=0}^{n-1} f(k)$. But of course we need to prove that this actually sends polynomials to polynomials. Since $(\int \Delta f)(n) = f(n) - f(0)$ (the "fundamental theorem of discrete calculus"), it suffices to show that the forward difference is surjective as a linear operator $V_d \to V_{d-1}$.</p> <p>But by the "fundamental theorem," the image of the integral is precisely the subspace of $V_d$ of polynomials such that $f(0) = 0$, so the forward difference and integral define an isomorphism between $V_{d-1}$ and this subspace. </p> <p>More explicitly, you can observe that $\Delta$ is upper triangular in the standard basis, work by induction, or use the <strong>Newton basis</strong> $1, n, {n \choose 2}, {n \choose 3}, ...$ for the space of polynomials. In this basis we have $\Delta {n \choose k} = {n \choose k-1}$, and now the result is <em>really</em> obvious.</p> <p>The method of finite differences provides a fairly clean way to derive a formula for $\sum n^m$ for fixed $m$. In fact, for any polynomial $f(n)$ we have the "discrete Taylor formula"</p> <p>$$f(n) = \sum_{k \ge 0} \Delta^k f(0) {n \choose k}$$</p> <p>and it's easy to compute the numbers $\Delta^k f(0)$ using a finite difference table and then to replace ${n \choose k}$ by ${n \choose k+1}$. 
I wrote a blog post that explains this, but it's getting harder to find; I also explained it in my <a href="https://math.berkeley.edu/~qchu/TopicsInGF.pdf" rel="noreferrer">notes on generating functions</a>.</p>
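<p>As a concrete illustration of the method, here is a short sketch (the function name is mine) that evaluates $\sum_{k=0}^{n} k^m$ from a finite difference table; summing $\binom{k}{j}$ over $k=0,\dots,n$ gives $\binom{n+1}{j+1}$ by the hockey-stick identity, the discrete analogue of integration:</p>

```python
from math import comb

def power_sum(n, m):
    """sum_{k=0}^{n} k^m via the discrete Taylor formula:
    write k^m = sum_j (Delta^j f)(0) * C(k, j), then use the hockey-stick
    identity sum_{k=0}^{n} C(k, j) = C(n+1, j+1)."""
    row = [k**m for k in range(m + 1)]   # degree m needs differences up to order m
    total = 0
    j = 0
    while row:
        total += row[0] * comb(n + 1, j + 1)         # row[0] == (Delta^j f)(0)
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        j += 1
    return total
```

<p>For example, <code>power_sum(10, 2)</code> returns $385 = \frac{10 \cdot 11 \cdot 21}{6}$, and the formula visibly has degree $m+1$ in $n$, since the highest binomial used is $\binom{n+1}{m+1}$.</p>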
<p>You can set up a recursive formula for $\sum_{k=0}^n k^m $ by noting that</p> <p>$$(n+1)^{m+1} = \sum_{k=0}^n (k+1)^{m+1}- \sum_{k=0}^n k^{m+1}$$ $$ = { m+1 \choose 1} \sum_{k=0}^n k^m + { m+1 \choose 2} \sum_{k=0}^n k^{m-1} + \cdots $$</p> <p>by expanding the first summation on the RHS by the binomial theorem. Then shift all the other summations except $\sum_{k=0}^n k^m $ to the LHS.</p> <p>So we get</p> <p>$${ m+1 \choose 1} \sum_{k=0}^n k^m = (n+1)^{m+1} - { m+1 \choose 2} \sum_{k=0}^n k^{m-1} - { m+1 \choose 3} \sum_{k=0}^n k^{m-2} - \cdots $$</p>
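<p>The recursion above can be run directly; solving the last display for $\sum_{k=0}^n k^m$ gives the following sketch (naming mine), where the division by $\binom{m+1}{1} = m+1$ is always exact:</p>

```python
from math import comb

def power_sum_rec(n, m):
    """sum_{k=0}^{n} k^m via the telescoping identity
    (n+1)^{m+1} = sum_{j=1}^{m+1} C(m+1, j) * sum_{k=0}^{n} k^{m+1-j}."""
    if m == 0:
        return n + 1                  # with the convention 0^0 = 1
    rhs = (n + 1) ** (m + 1)
    for j in range(2, m + 2):
        rhs -= comb(m + 1, j) * power_sum_rec(n, m + 1 - j)
    return rhs // (m + 1)             # C(m+1, 1) = m + 1
```

<p>Induction on $m$ then shows the result is a polynomial of degree $m+1$ in $n$: the leading term $(n+1)^{m+1}$ has degree $m+1$, and all the subtracted sums have lower degree.</p>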
number-theory
<p>The number $$\sqrt{308642}$$ has a crazy decimal representation : $$555.5555777777773333333511111102222222719999970133335210666544640008\cdots $$</p> <blockquote> <p>Is there any mathematical reason for so many repetitions of the digits ?</p> </blockquote> <p>A long block containing only a single digit would be easier to understand. This could mean that there are extremely good rational approximations. But here we have many long one-digit-blocks , some consecutive, some interrupted by a few digits. I did not calculate the probability of such a "digit-repitition-show", but I think it is extremely small.</p> <p>Does anyone have an explanation ?</p>
<p>The architect's answer, while explaining the absolutely crucial fact that <span class="math-container">$$\sqrt{308642}\approx 5000/9=555.555\ldots,$$</span> didn't quite make it clear why we get <strong>several</strong> runs of repeating decimals. I try to shed additional light on that using a different tool.</p> <p>I want to emphasize the role of <a href="https://en.wikipedia.org/wiki/Binomial_series" rel="noreferrer">the binomial series</a>. In particular the Taylor expansion <span class="math-container">$$ \sqrt{1+x}=1+\frac x2-\frac{x^2}8+\frac{x^3}{16}-\frac{5x^4}{128}+\frac{7x^5}{256}-\frac{21x^6}{1024}+\cdots $$</span> If we plug in <span class="math-container">$x=2/(5000)^2=8\cdot10^{-8}$</span>, we get <span class="math-container">$$ M:=\sqrt{1+8\cdot10^{-8}}=1+4\cdot10^{-8}-8\cdot10^{-16}+32\cdot10^{-24}-160\cdot10^{-32}+\cdots. $$</span> Therefore <span class="math-container">$$ \begin{aligned} \sqrt{308642}&amp;=\frac{5000}9M=\frac{5000}9+\frac{20000}9\cdot10^{-8}-\frac{40000}9\cdot10^{-16}+\frac{160000}9\cdot10^{-24}+\cdots\\ &amp;=\frac{5}9\cdot10^3+\frac29\cdot10^{-4}-\frac49\cdot10^{-12}+\frac{16}9\cdot10^{-20}+\cdots. \end{aligned} $$</span> This explains the runs and their starting points, as well as the origin and location of those extra digits not part of any run. For example, the run of <span class="math-container">$5+2=7$</span>s begins when the first two terms of the above series are "active". When the third term joins in, we need to subtract a <span class="math-container">$4$</span>, and a run of <span class="math-container">$3$</span>s ensues, et cetera.</p>
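<p>A quick numerical confirmation with Python's <code>decimal</code> module (just a sanity check, comparing the square root against the partial sum from the last display):</p>

```python
from decimal import Decimal, getcontext

getcontext().prec = 50                 # 50 significant digits
root = Decimal(308642).sqrt()

# partial sum (5/9)*10^3 + (2/9)*10^-4 - (4/9)*10^-12 + (16/9)*10^-20
ten = Decimal(10)
approx = (Decimal(5) / 9 * ten ** 3 + Decimal(2) / 9 * ten ** -4
          - Decimal(4) / 9 * ten ** -12 + Decimal(16) / 9 * ten ** -20)
```

<p>The digits agree with the expansion quoted in the question, and the truncation error is of the order of the next omitted term, about $10^{-27}$.</p>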
<p>Runs of a repeated digit in a decimal representation can be converted to runs of zeros by multiplying by $9$. (Try it out.)</p> <p>So if we multiply: $9 \sqrt{308642} = \sqrt{308642 \times 81} = \sqrt{25 000 002}$. Since this number is almost $5000^2$, it has a lot of zeros in its decimal expansion.</p>
probability
<p>I've studied mathematics and statistics at undergraduate level and am pretty happy with the main concepts. However, I've come across measure theory several times, and I know <a href="https://math.stackexchange.com/a/118227/77151">it is a basis for probability theory</a>, and, unsurprisingly, looking at a basic introduction such as this <a href="https://www.ee.washington.edu/techsite/papers/documents/UWEETR-2006-0008.pdf" rel="noreferrer">Measure Theory Tutorial (pdf)</a>, I see there are concepts such as events, sample spaces, and ways of getting from them to real numbers, that seem familiar.</p> <p>So measure theory seems like an area of pure mathematics that I probably <em>ought</em> to study (as discussed very well <a href="https://math.stackexchange.com/questions/502132/what-is-the-motivation-of-measure-theory-when-there-is-probability-theory">here</a>) but I have a lot of other areas I'd like to look at. For example, I'm studying and using calculus and Taylor series at a more advanced level and I've never studied real analysis properly -- and I can tell! In the future I'd like to study the theory of differential equations and integral transforms, and to do that I think I will need to study complex analysis. But I don't have the same kind of "I don't know what I'm doing" feeling when I do probability and statistics as when I look at calculus, series, or integral transforms, so those seem a lot more urgent to me from a foundational perspective. </p> <p>So my real question is, <strong>are there some applications relating to probability and statistics that I can't tackle without measure theory</strong>, or for that matter applications in other areas? Or is it more, I'm glad those measure theory guys have got the foundations worked out, I can trust they did a good job and get on with using what's built on top?</p>
<p>First, there are things that are much easier given the abstract formulation of measure theory. For example, let <span class="math-container">$X,Y$</span> be independent random variables and let <span class="math-container">$f:\mathbb{R}\to\mathbb{R}$</span> be a continuous function. Are <span class="math-container">$f\circ X$</span> and <span class="math-container">$f\circ Y$</span> independent random variables? The answer is utterly trivial in the measure theoretic formulation of probability, but very hard to express in terms of cumulative distribution functions. Similarly, convergence in distribution is really hard to work with in terms of cumulative distribution functions but easily expressed with measure theory.</p> <p>Then there are things that one can consume without much understanding, but that require measure theory to actually understand and be comfortable with. It may be easy to get a good intuition for sequences of coin flips, but what about continuous time stochastic processes? How irregular can sample paths be?</p> <p>Then there are powerful methods that actually require measure theory. One can get a lot from a little measure theory. The <a href="http://en.wikipedia.org/wiki/Borel%E2%80%93Cantelli_lemma" rel="noreferrer">Borel-Cantelli lemmas</a> or the <a href="http://en.wikipedia.org/wiki/Kolmogorov%27s_zero%E2%80%93one_law" rel="noreferrer">Kolmogorov 0-1-law</a> are not hard to prove but hard to even state without measure theory. Yet, they are immensely useful. Some results in probability theory require very deep measure theory. The two-volume book <em>Probability With a View Towards Statistics</em> by Hoffman-Jorgensen contains a lot of very advanced measure theory. </p> <p>All that being said, there are a lot of statisticians who live happily avoiding any measure theory. There are however no real analysts who can really do without measure theory.</p>
<p>The usual answer is that measure theory not only provides the right language for rigorous statements, but also makes possible progress that could not be achieved without it. The only place I found a different point of view is a remarkable book by Edwin Jaynes, <a href="http://omega.albany.edu:8008/JaynesBook.html">Probability Theory: The Logic of Science</a>, which is a real pleasure to read. Here is an extract from Appendix B.3: Willy Feller on measure theory:</p> <blockquote> <p>In contrast to our policy, many expositions of probability theory begin at the outset to try to assign probabilities on infinite sets, both countable or uncountable. Those who use measure theory are, in effect, supposing the passage to an infinite set already accomplished before introducing probabilities. For example, Feller advocates this policy and uses it throughout his second volume (Feller, 1966).</p> <p>In discussing this issue, Feller (1966) notes that specialists in various applications sometimes ‘deny the need for measure theory because they are unacquainted with problems of other types and with situations where vague reasoning did lead to wrong results’. If Feller knew of any case where such a thing has happened, this would surely have been the place to cite it – yet he does not. Therefore we remain, just as he says, unacquainted with instances where wrong results could be attributed to failure to use measure theory.</p> <p>But, as noted particularly in Chapter 15, there are many documentable cases where careless use of infinite sets has led to absurdities. We know of no case where our ‘cautious approach’ policy leads to inconsistency or error; or fails to yield a result that is reasonable.</p> <p>We do not use the notation of measure theory because it presupposes the passage to an infinite limit already carried out at the beginning of a derivation – in defiance of the advice of Gauss, quoted at the start of Chapter 15. 
But in our calculations we often pass to an infinite limit at the end of a derivation; then we are in effect using ‘Lebesgue measure’ directly in its original meaning. We think that failure to use current measure theory notation is not ‘vague reasoning’; quite the opposite. It is a matter of doing things in the proper order.</p> </blockquote> <p>You should finish reading the whole text in the reference I gave above. </p>
logic
<p>Specifically, the <a href="http://en.wikipedia.org/wiki/Paris%E2%80%93Harrington_theorem">Paris-Harrington theorem</a>.</p> <p>In what sense is it true? True in Peano arithmetic but not provable in Peano arithmetic, or true in some other sense?</p>
<p>Peano Arithmetic is a particular <em>proof system</em> for reasoning about the natural numbers. As such it does not make sense to speak about something being "true in PA" -- there is only "provable in PA", "disprovable in PA", and "independent of PA".</p> <p>When we speak of "truth" it must be with respect to some particular <em>model</em>. In the case of arithmetic statements, the model we always speak about unless something else is explicitly specified is <em>the actual (Platonic) natural numbers</em>. Virtually all mathematicians expect these numbers to "exist" (in whichever philosophical sense you prefer mathematical objects to exist in) independently of any formal system for reasoning about them, and the great majority expect all statements about them to have objective (but not necessarily knowable) truth values.</p> <p>We're very sure that everything that is "provable in PA" is also "true about the natural numbers", but the converse does not hold: There exist sentences that are "true about the actual natural numbers" but not "provable in PA". This was famously proved by Gödel -- actually he gave a (formalizable, with a few additional technical assumptions) proof that a particular sentence was neither "provable in PA" nor "disprovable in PA", and a convincing (but not strictly formalizable) argument that this sentence is true about the actual natural numbers.</p> <p>Paris-Harrington shows that another particular sentence is of this kind: not provable in PA, yet true about the actual natural numbers.</p>
<p>The Paris-Harrington theorem is actually the theorem that states that the strengthened [finite] Ramsey theorem is unprovable in first-order Peano Arithmetic.</p> <p>However, since the Ramsey theorem is provable in second-order arithmetic, and therefore true in the standard model of Peano arithmetic, we can say that the theorem is <em>true</em> in a very deep and concrete sense.</p> <p>Generally speaking, when we say a statement about the integer is true we mean that it is true in <em>the</em> standard model, which is the model of second-order Peano arithmetic, and it is all the numbers which can be generated by finitely many iteration of the successor function from zero. (Non-standard models of first-order PA exist, and there are non-standard integers which cannot be generated by finite iterations of the successor from zero.)</p> <p>You may also want to read <a href="https://math.stackexchange.com/questions/69353/true-vs-provable">this thread</a> about the difference between truth and provability.</p>
probability
<p>I found this exercise in a practice exam:</p> <blockquote> <p>Any student has a 90% chance of entering a University. Two students are applying. Assuming each student’s results are independent, what is the probability that at least one of them will be successful in entering the National University?</p> <p>A. $0.50$</p> <p>B. $0.65$</p> <p>C. $0.88$</p> <p>D. $0.90$</p> <p>E. $0.96$</p> </blockquote> <p>I think the answer is something different than the answers above, namely $0.99$.</p> <p>$0.01 = (0.1 \times 0.1)$ is the chance of neither, so $1 - 0.01$ must be $0.99$ right? But it's not part of the possible answers.</p> <p>Other way: $(0.9 \times 0.9) + (0.9 \times 0.1) + (0.1 \times 0.9) = 0.99$</p> <p>Am I missing something here?</p>
<p>As the other answerers have noted, $0.99 = 1 - 0.1^2$ is indeed the correct answer.</p> <p>As to what went wrong with the exercise, I suspect the problem statement has a typo, and the correct admission rate should be 80% instead of 90%. That would make option E ($0.96$ $=$ $1 - 0.2^2$) the correct one.</p> <p>Why do I suspect that? It's because, as you note, for two independent trials with the same success rate, \begin{aligned} {\rm Pr}[\text{one succeeds}] &amp;= 1 - {\rm Pr}[\text{both fail}] \\ &amp;= 1 - (\text{failure rate})^2 \\ &amp;= 1 - (1 - \text{success rate})^2, \end{aligned} which implies that, conversely, $$\text{success rate} = 1 - \sqrt{1 - {\rm Pr}[\text{one succeeds}]}.$$</p> <p>Applying this "reverse" formula to the given options, we can work out what the admission rate would have to be for each option to be correct:</p> <p>\begin{aligned} {\rm A}:\ {\rm Pr}[\text{one succeeds}] &amp;= 0.5 &amp;\implies&amp; \text{success rate} = 1 - \sqrt{0.5}\phantom0 \approx 0.292893 \\ {\rm B}:\ {\rm Pr}[\text{one succeeds}] &amp;= 0.65 &amp;\implies&amp; \text{success rate} = 1 - \sqrt{0.35} \approx 0.408392 \\ {\rm C}:\ {\rm Pr}[\text{one succeeds}] &amp;= 0.88 &amp;\implies&amp; \text{success rate} = 1 - \sqrt{0.12} \approx 0.653589 \\ {\rm D}:\ {\rm Pr}[\text{one succeeds}] &amp;= 0.9 &amp;\implies&amp; \text{success rate} = 1 - \sqrt{0.1}\phantom0 \approx 0.683772 \\ {\rm E}:\ {\rm Pr}[\text{one succeeds}] &amp;= 0.96 &amp;\implies&amp; \text{success rate} = 1 - \sqrt{0.04} = 0.8 \end{aligned}</p> <p>Out of these options, E is the only one where the probability of both students failing to be admitted works out to a nice square ($0.04 = 0.2^2$). For any of the options A to D to be (exactly) correct, the admission rate would have to be a very awkward irrational number.</p> <p>Of course, it's also possible that the error is in the options themselves, and that the author of the exercise meant to include 0.99 as one of the options. 
The actual explanation might even be a combination of both possibilities &mdash; perhaps whoever wrote the exercise started with an admission rate of 80%, came up with a suitable answer set, and then later decided to change the admission rate to 90% but forgot to update the answers.</p>
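<p>The "nice square" observation can be checked mechanically with exact fractions; <code>is_rational_square</code> is just an illustrative helper name:</p>

```python
from fractions import Fraction
from math import isqrt

def is_rational_square(q):
    """True if the nonnegative fraction q is the square of a rational
    (a reduced fraction is a square iff numerator and denominator both are)."""
    return (isqrt(q.numerator) ** 2 == q.numerator
            and isqrt(q.denominator) ** 2 == q.denominator)

options = {"A": Fraction(50, 100), "B": Fraction(65, 100), "C": Fraction(88, 100),
           "D": Fraction(90, 100), "E": Fraction(96, 100)}

# Pr[at least one succeeds] = 1 - (failure rate)^2, so 1 - Pr must be
# the square of the (rational) per-student failure rate
nice = [name for name, pr in options.items() if is_rational_square(1 - pr)]
```

<p>Only option E survives: $1 - 0.96 = 1/25 = (1/5)^2$, i.e. a failure rate of $0.2$ and an admission rate of $0.8$.</p>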
<p>Sometimes you have to challenge the options. Your concepts and answer are absolutely correct!</p>
logic
<p>Here's the picture I have in my head of Model Theory:</p> <ul> <li>a <em>theory</em> is an axiomatic system, so it allows proving some statements that apply to <em>all</em> models consistent with the theory</li> <li>a <em>model</em> is a particular -- consistent! -- function that assigns every statement to its truth value, it is to be thought of as a &quot;concrete&quot; object, the kind of thing we actually usually think about. It's only when it comes to <em>models</em> that we have the law of the excluded middle.</li> </ul> <p>My understanding of Gödel's first incompleteness theorem is that <strong>no theory that satisfies some finiteness condition can uniquely pin down a model</strong>.</p> <p>So I am not really surprised by it. The idea of theories being incomplete -- of not completely pinning down a particular model -- is quite normal. The fact that no theory is complete seems analogous to how no Turing machine can compute every function.</p> <p>But then I read <a href="https://math.stackexchange.com/questions/1052299/what-is-a-simple-example-of-an-unprovable-statement">this thread</a> and there were two claims there in the answers which <em>made no sense to me</em>:</p> <ol> <li><strong>Self-referential statements as examples of unprovable statements</strong> -- Like &quot;<a href="https://math.stackexchange.com/a/1057014/78451">there is no number whose ASCII representation proves this statement</a>&quot;.</li> </ol> <p>A statement like this <em>cannot be constructed in propositional logic</em>. 
I'm guessing this has to do with the concept of a &quot;language&quot;, but why would anyone use a language that permits self-reference?</p> <p>Wouldn't that completely defeat the purpose of using classical logic as the system for syntactic implications?</p> <p>If we permit this as a valid sentence, wouldn't we also have to permit the liar paradox (and then the system would be inconsistent)?</p> <ol start="2"> <li><strong>Unprovable statements being &quot;intuitively true/false&quot;</strong> -- According to <a href="https://math.stackexchange.com/a/1054255/78451">this answer</a>, if we found that the Goldbach conjecture was unprovable, then in particular that means we cannot produce a counter-example, so we'd &quot;intuitively&quot; know that the conjecture is true.</li> </ol> <p>How is this only <em>intuitive</em>? If there exist <span class="math-container">$\sf PA$</span>-compatible models <span class="math-container">$M_1$</span>, <span class="math-container">$M_2$</span> where Goldbach is true in <span class="math-container">$M_1$</span> but not <span class="math-container">$M_2$</span>, then <span class="math-container">$\exists n, p, q$</span> such that <span class="math-container">$n= p+q$</span> in <span class="math-container">$M_1$</span> but not in <span class="math-container">$M_2$</span>. But whether <span class="math-container">$n=p+q$</span> holds is decidable in <span class="math-container">$\sf PA$</span>, so either &quot;<span class="math-container">$\sf{PA}+\sf{Goldbach}$</span>&quot; or &quot;<span class="math-container">$\sf{PA}+\lnot\sf{Goldbach}$</span>&quot; must be inconsistent, and Goldbach cannot be unprovable. Right?</p>
Do we know something about the consistency of each extension or do we not?</p> <p>Further adding to my confusion, the answer claims that the irrationality of <span class="math-container">$e+\pi$</span> is <em>not</em> such a statement, that it can truly be unprovable. I don't see how this can be -- surely the same argument applies; if <span class="math-container">$e+\pi$</span>'s rationality is unprovable, there does not exist <span class="math-container">$p/q$</span> that it equals, thus it is irrational. Right?</p>
<p>This answer only addresses the second part of your question, but you asked many questions so hopefully it's okay.</p> <p>First, there is in the comments a statement: &quot;If Goldbach is unprovable in PA then it is necessarily true in all models.&quot; This is incorrect. If Goldbach were true in all models of PA then PA would prove Goldbach by Gödel's <em>Completeness</em> Theorem (less popular, still important).</p> <p>What is true is:</p> <p><strong>Lemma 1:</strong> Any <span class="math-container">$\Sigma_1$</span> statement true in <span class="math-container">$\mathbb{N}$</span> (the &quot;standard model&quot; of PA) is provable from PA.</p> <p>These notes (see Lemma 3) have some explanation: <a href="http://journalpsyche.org/files/0xaa23.pdf" rel="noreferrer">http://journalpsyche.org/files/0xaa23.pdf</a></p> <p>So the correct statement is:</p> <p><strong>Corollary 2:</strong> If PA does not decide Goldbach's conjecture then it is true in <span class="math-container">$\mathbb{N}$</span>.</p> <p><strong>Proof:</strong> The negation of Goldbach's conjecture is <span class="math-container">$\Sigma_1$</span>. So if PA does not prove the negation, then the negation of Goldbach is not true in <span class="math-container">$\mathbb{N}$</span> by Lemma 1.</p> <p>Remember that <span class="math-container">$\mathbb{N}$</span> is a model, so any statement is either true or false in it (in our logic). But PA is an incomplete theory (assuming it's consistent), so we don't get the same dichotomy for things it can prove.</p> <p>Now, it could be the case that PA does prove Goldbach (so it's true in all models of PA, including <span class="math-container">$\mathbb{N}$</span>). But if we are in the situation of Corollary 2 (PA does not prove Goldbach or its negation) then Goldbach is true in <span class="math-container">$\mathbb{N}$</span> but false in some other model of PA. (This would be good enough for the number theorists, I imagine.) 
This is also where the problem in your reasoning is. It is NOT true that if Goldbach fails in some model <span class="math-container">$M$</span> of PA then there is a <em>standard</em> <span class="math-container">$n$</span> in <span class="math-container">$\mathbb{N}$</span> that is not the sum of two primes. Rather the witness to the failure of Goldbach is just some element that <span class="math-container">$M$</span> believes is a natural number. In some random model, this element need not be in the successor chain of <span class="math-container">$0$</span>.</p> <p>On the other hand, the rationality of <span class="math-container">$\pi+e$</span> is not known to be expressible by a <span class="math-container">$\Sigma_1$</span> statement. So we can't use Lemma 1 in the same way.</p> <p><strong>Edited later:</strong> I don't have much to say about the question on self-referential statements beyond what others have said. But I'll just say that one should be careful to distinguish propositional logic and predicate logic. This also goes for your &quot;general picture of Model Theory&quot;. Part of the interesting thing with the incompleteness theorems is that they permit self-reference without being so obvious about it. In PA there is enough expressive power to code statements and formal proofs, and so the self-referential statements about proofs and so forth are fully rigorous and uncontroversial.</p>
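<p>The $\Sigma_1$ point can be made concrete: each individual instance of Goldbach's conjecture is decided by a finite search, so a counterexample, if one existed in $\mathbb{N}$, could be verified mechanically. A minimal Python sketch of my own (the helper names are made up, not from any library):</p>

```python
def is_prime(n):
    """Trial division; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_witness(n):
    """For an even n >= 4, return primes (p, q) with p + q = n, or None.

    The search is finite, so each instance is decidable; this is what
    makes the *negation* of the conjecture a Sigma_1 statement."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# No counterexamples among the small even numbers, of course:
assert all(goldbach_witness(n) is not None for n in range(4, 1001, 2))
```

<p>What no such script can do is search the nonstandard elements of an arbitrary model of PA, which is exactly the gap the answer above points out.</p>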
<p>Let me try to get at the heart of your misunderstanding as concisely as possible:</p> <p><strong>1. We are not deliberately choosing to use a language that permits self-reference, we are forced to do so.</strong></p> <p>The only choice we made is that of a logic that is sufficiently strong to include integer arithmetic. What Gödel then proves is that access to the integers automatically allows us to construct somewhat self-referential statements. If we want integers, then we have to accept self-referentiality. The same is true in the theory of computability. Turing machines are not chosen because they can emulate themselves, they are chosen because they allow all operations we expect a general computer to do, which just happens to include emulating Turing machines.</p> <p><strong>2. We are self-referential with respect to the theory, not the model.</strong></p> <p>The kind of sentences that Gödel's procedure allows us to construct are of the form &quot;X cannot be inferred from Y&quot;, as the integers are only used to build a copy of logical reasoning. If we pick the set of axioms of a given theory as Y, then we can construct sentences like &quot;X is not provable in the theory&quot;, which is what leads to the incompleteness theorem if X is the sentence itself. There is no way to access a specific model of the theory and thus no way of constructing sentences like &quot;X is false&quot;, which would be needed for the liar paradox.</p>
number-theory
<p>We have,</p> <p>$$\sum_{k=1}^n k^3 = \Big(\sum_{k=1}^n k\Big)^2$$</p> <p>$$2\sum_{k=1}^n k^5 = -\Big(\sum_{k=1}^n k\Big)^2+3\Big(\sum_{k=1}^n k^2\Big)^2$$</p> <p>$$2\sum_{k=1}^n k^7 = \Big(\sum_{k=1}^n k\Big)^2-3\Big(\sum_{k=1}^n k^2\Big)^2+4\Big(\sum_{k=1}^n k^3\Big)^2$$</p> <p>and so on (apparently). </p> <p>Is it true that the sum of consecutive $m$th powers, for odd $m&gt;1$, can be expressed as <strong>sums of squares of sums*</strong> in a manner similar to the above? What is the general formula? </p> <p>*(Edited re Lord Soth's and anon's comment.)</p>
<p>This is a partial answer, it just establishes the existence.</p> <p>We have $$s_m(n) = \sum_{k=1}^n k^m = \frac{1}{m+1} \left(\operatorname{B}_{m+1}(n+1)-\operatorname{B}_{m+1}(1)\right)$$ where $\operatorname{B}_m(x)$ denotes the monic <a href="http://mathworld.wolfram.com/BernoulliPolynomial.html">Bernoulli polynomial</a> of degree $m$, which has the following useful properties: $$\begin{align} \int_x^{x+1}\operatorname{B}_m(t)\,\mathrm{d}t &amp;= x^m \quad\text{(from which everything else follows)} \\\operatorname{B}_{m+1}'(x) &amp;= (m+1)\operatorname{B}_m(x) \\\operatorname{B}_m\left(x+\frac{1}{2}\right) &amp;\begin{cases} \text{is even in $x$} &amp; \text{for even $m$} \\ \text{is odd in $x$} &amp; \text{for odd $m$} \end{cases} \\\operatorname{B}_m(0) = \operatorname{B}_m(1) &amp;= 0 \quad\text{for odd $m\geq3$} \end{align}$$</p> <p>Therefore, $$\begin{align} s_m(n) &amp;\text{has degree $m+1$ in $n$} \\s_m(0) &amp;= 0 \\s_m'(0) &amp;= \operatorname{B}_m(1) = 0\quad\text{for odd $m\geq3$} \\&amp;\quad\text{(This makes $n=0$ a double zero of $s_m(n)$ for odd $m\geq3$)} \\s_m\left(x-\frac{1}{2}\right) &amp;\begin{cases} \text{is even in $x$} &amp; \text{for odd $m$} \\ \text{is odd in $x$} &amp; \text{for even $m\geq2$} \end{cases} \end{align}$$</p> <p>Consider the vector space $V_m$ of univariate polynomials $\in\mathbb{Q}[x]$ with degree not exceeding $2m+2$, that are even in $x$ and have a double zero at $x=\frac{1}{2}$. Thus $V_m$ has dimension $m$ and is clearly spanned by $$\left\{s_j^2\left(x-\frac{1}{2}\right)\mid j=1,\ldots,m\right\}$$ For $m&gt;0$, we find that $s_{2m+1}(x-\frac{1}{2})$ has all the properties required for membership in $V_m$. Substituting $x-\frac{1}{2}=n$, we conclude that there exists a representation $$s_{2m+1}(n) = \sum_{j=1}^m a_{m,j}\,s_j^2(n) \quad\text{for $m&gt;0$ with $a_{m,j}\in\mathbb{Q}$}$$ of $s_{2m+1}(n)$ as a linear combination of squares of sums.</p>
<p><strong>Not an answer, but only some results I found.</strong></p> <p>Denote $s_m=\sum_{k=1}^nk^m.$ Suppose that $$s_{2m+1}=\sum_{i=1}^m a_is_i^2,a_i\in\mathbb Q.$$</p> <p>Using the method of undetermined coefficients, we can get a list of $\{m,\{a_i\}\}$: $$\begin{array}{l} \{1,\{1\}\} \\ \left\{2,\left\{-\frac{1}{2},\frac{3}{2}\right\}\right\} \\ \left\{3,\left\{\frac{1}{2},-\frac{3}{2},2\right\}\right\} \\ \left\{4,\left\{-\frac{11}{12},\frac{11}{4},-\frac{10}{3},\frac{5}{2}\right\}\right\} \\ \left\{5,\left\{\frac{61}{24},-\frac{61}{8},\frac{28}{3},-\frac{25}{4},3\right\}\right\} \\ \left\{6,\left\{-\frac{1447}{144},\frac{1447}{48},-\frac{332}{9},\frac{595}{24},-\frac{21}{2},\frac{7}{2}\right\}\right\} \\ \left\{7,\left\{\frac{5771}{108},-\frac{5771}{36},\frac{5296}{27},-\frac{2375}{18},56,-\frac{49}{3},4\right\}\right\} \\ \left\{8,\left\{-\frac{53017}{144},\frac{53017}{48},-\frac{60817}{45},\frac{21817}{24},-\frac{3861}{10},\frac{1127}{10},-24,\frac{9}{2}\right\}\right\} \\ \left\{9,\left\{\frac{2755645}{864},-\frac{2755645}{288},\frac{632213}{54},-\frac{1133965}{144},\frac{13379}{4},-\frac{11725}{12},208,-\frac{135}{4},5\right\}\right\}\end{array}$$</p> <p>Through some observation, we can find that $a_i a_{i+1}&lt;0,a_m=\dfrac{m+1}2,a_2=-3a_1,a_{m-1}=-\dfrac{1}{24} m^2 (m+1).$</p>
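<p>The undetermined-coefficients computation can be reproduced with exact rational arithmetic. Here is a standard-library Python sketch of my own (the function names are made up): it recovers the $a_i$ by sampling the identity at $n=1,\dots,m$ and solving the resulting linear system over $\mathbb{Q}$. The solution of the sampled system is the true coefficient vector whenever the sample matrix is nonsingular, which it is for the small $m$ in the table.</p>

```python
from fractions import Fraction

def s(m, n):
    """Power sum s_m(n) = 1^m + 2^m + ... + n^m."""
    return sum(k ** m for k in range(1, n + 1))

def coeffs(m):
    """Rationals a_1..a_m with s_{2m+1} = a_1 s_1^2 + ... + a_m s_m^2,
    found by sampling n = 1..m and Gaussian elimination over Q."""
    # Augmented matrix: one row per sample point n.
    A = [[Fraction(s(i, n) ** 2) for i in range(1, m + 1)]
         + [Fraction(s(2 * m + 1, n))]
         for n in range(1, m + 1)]
    for col in range(m):
        piv = next(r for r in range(col, m) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        inv = 1 / A[col][col]
        A[col] = [x * inv for x in A[col]]
        for r in range(m):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [A[r][m] for r in range(m)]

# Reproduces the second and third rows of the table above:
assert coeffs(2) == [Fraction(-1, 2), Fraction(3, 2)]
assert coeffs(3) == [Fraction(1, 2), Fraction(-3, 2), Fraction(2)]
```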
game-theory
<p>Here's the description of a dice game which has puzzled me for quite some time (the game comes from a book which offered a quite unsatisfactory solution — but then, its focus was on programming, so this is probably excusable).</p> <p>The game goes as follows:</p> <p>Two players play against each other, starting with score 0 each. The winner is the first player to reach a score of 100 or more. The players play in turn. The score added in each round is determined as follows: The player throws a die. If the die does not show a 1, he has the option to stop and have the points added to his score, or to continue throwing until either he stops or gets a 1. As soon as he gets a 1, his turn ends and no points are added to his score; any points he has accumulated in this round are lost. Afterward it is the second player's turn.</p> <p>The question is now what is the best strategy for that game. The book suggested trying out which of the following two strategies gives the better result:</p> <ul> <li>Throw 5 times (if possible), then stop.</li> <li>If the accumulated points in this round add up to 20 or more, stop, otherwise continue.</li> </ul> <p>The rationale is that you want the next throw to increase the expected score. Of course it doesn't need testing to see that the second strategy is better: If you've accumulated e.g. 10 points, it doesn't matter whether you accumulated them with 5 times throwing a 2, or with 2 times throwing a 5.</p> <p>However it is also easy to see that this second strategy isn't the best one either: After all, the ultimate goal is not to maximize the increase per round, but to maximize the probability of winning; both are related, but not the same. For example, imagine you have been very unlucky and are still at a very low score, but your opponent already has 99 points. It's your turn, and you've already accumulated some points (but those points don't get you above 100) and have to decide whether to stop, or to continue. 
If you stop, you secure the points, but your opponent has a 5/6 chance to win in the next move. Let's say that if you stop, the optimal strategy in the next move will be to try to get 100 points in one run, and that the probability to reach that is $p$. Then if you stop, since your opponent then has his chance to win first, your total probability to win is just $1/6(p + (1-p)/6 (p + (1-p)/6 (p + ...))) = p/(p+5)$. On the other hand, if you continue to 100 points right now, you have the chance $p$ to win this round <em>before</em> the other has a chance to try, but a lower probability $p'$ to win in later rounds, giving a probability $p + (1-p)\,p'/(5+p')$. It is obvious that even if we had $p'=0$ (i.e. if you don't succeed now, you'll lose), you'd still have the probability $p&gt;p/(p+5)$ to win by continuing, so you should continue no matter how slim your chances, and even if your accumulated points this round are above 20, because if you stop, your chances will be worse for sure. Since at <em>some</em> time, the optimal strategy <em>will</em> have a step where you try to go beyond 100 (because that's where you win), by induction you can say that if your opponent has already 99 points, your best strategy is, unconditionally, to try to get 100 points in one run.</p> <p>Of course this "brute force rule" is for that specific situation (it also applies if the opponent has 98 points, for obvious reasons). If you'd play that brute-force rule from the beginning, you'd lose even against someone who just throws once each round. Indeed, if both are about equal, and far enough from the final 100 points, intuitively I think the 20 points rule is quite good. 
Also, intuitively I think if you are far advanced against your opponent, you should play even more safely and stop earlier.</p> <p>As the current game situation is described by the three numbers <em>your score</em> ($Y$), <em>your opponent's score</em> ($O$) and the points already collected in this round ($P$), and your decision is to either continue ($C$) or to stop ($S$), a strategy is completely given by a function $$s:\{(Y, O, P)\}\to \{C,S\}$$ where the following rules are obvious:</p> <ul> <li>If $Y+P\ge 100$ then $s(Y,O,P)=S$ (if you have already collected 100 points, the only reasonable move is to stop).</li> <li>$s(Y, O, 0)=C$ (it doesn't make sense to stop before you have thrown at least once).</li> </ul> <p>Also, I just above derived the following rule:</p> <ul> <li>$s(Y,98,P)=s(Y,99,P)=C$ unless the first rule kicks in.</li> </ul> <p>I <em>believe</em> the following rule should also hold (but have no idea how to prove it):</p> <ul> <li>If $s(Y,O,P)=S$ then also $s(Y,O,P+1)=S$</li> </ul> <p>If that belief is true, then the description of a strategy can be simplified to a function $g(Y,O)$ which gives the smallest $P$ at which you should stop.</p> <p>However, that's all I've figured out. What I'd really like to know is: What is the optimal strategy for this game?</p>
<p>When you are both far from the goal, you should maximize the expected points per turn. The expected change in your score from another throw is $\frac 16 (2+3+4+5+6) - \frac 16 P = \frac{20-P}{6}$, which says you should stop above $20$ and do whatever you want at $20$. You are right that this gets perturbed as you get close to the end.</p>
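<p>The two candidate strategies from the question can also be compared exactly. Below is a sketch of my own (the names are made up): the hold-at-20 rule is evaluated by a small recursion over this turn's accumulated points, while the throw-5-times rule banks only when all five throws avoid a 1, and conditional on that the five throws average 4 points each.</p>

```python
from fractions import Fraction
from functools import lru_cache

def expected_turn(hold_at):
    """Exact expected points banked in one turn under the rule
    'stop as soon as this turn's points reach hold_at'."""
    @lru_cache(maxsize=None)
    def f(p):
        # Still rolling with p points this turn; rolling a 1 banks 0.
        total = Fraction(0)
        for d in range(2, 7):
            q = p + d
            total += q if q >= hold_at else f(q)
        return total / 6
    return f(0)

# Strategy 1: throw 5 times, banking only if no 1 shows up.
five_throws = Fraction(5, 6) ** 5 * 20   # about 8.04

hold_20 = expected_turn(20)              # about 8.14
assert hold_20 > five_throws
```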
<p>There is no simplified description of the Nash equilibrium of this game.</p> <p>You can compute the best strategy starting from positions where both players are about to win and going backwards from there. Let $p(Y,O,P)$ be the probability that you win if you are at the situation $(Y,O,P)$ and if you make the best choices. The difficulty is that to compute the strategy and probability to win at some situation $(Y,O,P)$, you make your choice depending on the probability $p(O,Y,0)$. So you have a (piecewise affine and contracting) decreasing function $F_{(Y,O,P)}$ such that $p(Y,O,P) = F_{(Y,O,P)}(p(O,Y,0))$, and in particular, you need to find the fixpoint of the composition $F_{(Y,O,0)} \circ F_{(O,Y,0)}$ in order to find the real $p(O,Y,0)$, and deduce everything from there.</p> <p>After computing this for a 100-point game and some inspecting, there is no function $g(Y,O)$ such that the strategy simplifies to "stop if you accumulated $g(Y,O)$ points or more". For example, at $Y=61, O=62$, you should stop when you have exactly $20$ or $21$ points, and continue otherwise.</p> <p>If you let $g(Y,O)$ be the smallest number of points $P$ such that you should stop at $(Y,O,P)$, then $g$ does not look very nice at all. It is not monotonic and does strange things, except in the region where you should just keep playing until you lose or win in $1$ move.</p>
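<p>The backward computation described above can be sketched as a small value iteration; the contraction noted in the answer is what makes the naive fixpoint iteration converge. A Python sketch of my own, with the goal reduced from 100 to 20 points so it runs quickly (<code>pig_win_prob</code> is a made-up name):</p>

```python
def pig_win_prob(goal=20, sweeps=200):
    """Approximate p(Y, O, P): the probability that the player to move
    wins with best play, where Y is his score, O the opponent's score,
    and P the points accumulated this turn (rules as in the question)."""
    states = [(Y, O, P)
              for Y in range(goal)
              for O in range(goal)
              for P in range(goal - Y)]
    p = {st: 0.5 for st in states}

    def prob(Y, O, P):
        # Banking P points now reaches the goal: certain win.
        return 1.0 if Y + P >= goal else p[(Y, O, P)]

    for _ in range(sweeps):
        for (Y, O, P) in states:
            # Stop: bank P, opponent moves next (never sensible at P = 0).
            stop = 1 - prob(O, Y + P, 0) if P > 0 else 0.0
            # Roll: a 1 loses P and passes the turn; 2..6 add to P.
            roll = ((1 - prob(O, Y, 0))
                    + sum(prob(Y, O, P + d) for d in range(2, 7))) / 6
            p[(Y, O, P)] = max(stop, roll)
    return p

p = pig_win_prob()
assert p[(0, 0, 0)] > 0.5             # the player to move has an edge
assert p[(10, 0, 0)] > p[(0, 10, 0)]  # being ahead beats being behind
```

<p>Comparing <code>stop</code> and <code>roll</code> at each state after convergence recovers the (irregular) optimal strategy for the reduced game.</p>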
logic
<p>I'm interested in this question, but I'm not going to list my knowledge/demands but rather gear it to a more general purpose; so the first thing concerns the prerequisites, i.e. </p> <blockquote> <p>How much theoretical knowledge (mathematical logic, programming and other) should one have prior to engaging with automated theorem proving (ATP)? Are there any fields of mathematical logic that aren't necessary prerequisites but still provide a deeper insight into ATP?</p> </blockquote> <p>After the prerequisites are done, one just needs to dive in:</p> <blockquote> <p>How does one start with ATP? Are there any books, lecture notes, which explain the crucial concepts? After one is done with the general idea of ATP, how does one proceed to <em>do</em> it?</p> </blockquote> <p>However, one might be concerned (at least that's what my main concern is) about the many different theorem-provers; how does one choose, and is there a chance that if one chooses the wrong one, they are going to be stuck with obsolete knowledge (even in terms of pure mathematics)? In other words:</p> <blockquote> <p>How concerned should one be with "aging" of the theorem-provers? Are there any language-agnostic approaches?</p> </blockquote>
<p>Besides @dtldarek's suggestions, I would like to draw your attention to </p> <p>Mizar: a project aiming to formalize all of mathematics. It has been going on since the 70's, so it is not likely to disappear any time soon. To learn and participate in the project you just need to study some basic (standard) logic/proof theory and to look at the axioms of Tarski-Grothendieck set theory (set theory with universes). <a href="http://en.wikipedia.org/wiki/Mizar_system" rel="noreferrer">http://en.wikipedia.org/wiki/Mizar_system</a></p> <p>The Japanese mirror site: <a href="http://markun.cs.shinshu-u.ac.jp/mirror/mizar/" rel="noreferrer">http://markun.cs.shinshu-u.ac.jp/mirror/mizar/</a></p> <p>If you manage to formalize a new proof in Mizar (even of a well-known theorem), your result may be published in their peer-reviewed journal.</p> <p>However, if you really are interested in ATP, that is, in systems which discover a proof by themselves (or with very little human help), then my experience and suggestion point to Theorema, a project developed in Austria. In order to use Theorema you need to use the commercial software Mathematica by Wolfram Research.</p> <p><a href="http://www.wolfram.com/products/" rel="noreferrer">http://www.wolfram.com/products/</a></p> <p>Mathematica is one of the (2 or 3) most powerful computer algebra systems (CAS) available today. I would recommend you to become familiar with a CAS as soon as possible, basically for the same reasons that I would recommend a would-be journalist or writer to become proficient in using a word-processor. Fortunately, student or home editions of Mathematica are not <em>too</em> expensive ($100-300 range). 
Please note that these versions are exactly as powerful as the full commercial version.</p> <p>Theorema is a (free) add-on to Mathematica.</p> <p>The technology behind Theorema is very advanced (for example you can create new mathematical notation, the proofs are generated and explained in plain English, etc.), but it seems (to me, at least) that the system is not widely used outside its own development team. Nevertheless, studying and using it is fascinating and well worth it.</p> <p><a href="http://www.risc.jku.at/research/theorema/description/" rel="noreferrer">http://www.risc.jku.at/research/theorema/description/</a></p> <p>Theorema can be requested from this page:</p> <p><a href="http://www.risc.jku.at/research/theorema/software/" rel="noreferrer">http://www.risc.jku.at/research/theorema/software/</a></p>
<ol> <li><p>I never developed an ATP, just did some related stuff, so an answer from someone who did will be infinitely better. Still, I think I might help just a bit.</p></li> <li><p>It greatly depends on what you want to do with it (the theorem prover). </p></li> <li><p>To develop something entirely new that really works, you would need a whole team of experienced people for a few years (compare <a href="http://coq.inria.fr/who-did-what-in-coq" rel="noreferrer">who did what in Coq</a>). That kind of software is very hard to write and requires a lot of programming skill. Still, it's not a lost cause yet: to play with developing such a tool may be a valid exercise, even if it is a hard one.</p></li> <li><p>I can't help you with any books (Google seems to spit out many related things, though), because I learned it by trial and error. On the other hand, I can say that learning to use an existing one (if you don't know some yet) might be a good idea. For that purpose I recommend <a href="http://en.wikipedia.org/wiki/Coq" rel="noreferrer">Coq</a> -- it is not exactly what you want (a proof assistant instead of a theorem prover), but it has a nice, large community and (from my perspective) a lot of people use/know it; I would say that it is kind of a standard.</p></li> <li><p>I can't help you with the aging of theorem provers -- I'm not old enough :-) However, I can say how I deal with the aging of programming languages (and theorem provers are much like specialized programming-language interpreters): every so often there is a new feature you would want to have, so if any of the available tools support it, go ahead; if not -- develop your own (or expand an existing one, or convince someone to develop it for you).</p></li> </ol> <p>Good luck with your endeavor ;-)</p>
logic
<p>Often, we find different proofs for certain theorems that, on the surface, seem to be very different but actually use the same fundamental ideas. For example, the topological proof of the infinitude of primes is quite different from the standard one, but the arguments are essentially the same, just using different vocabulary.</p> <p>So, my question is the following:</p> <blockquote> <p>Is there a rigorous notion of "proof isomorphism"?</p> </blockquote>
<p>The question of proof equivalence is quite an old one! In fact, David Hilbert considered adding it (or a similar one) to his celebrated <a href="http://en.wikipedia.org/wiki/Hilbert%27s_problems" rel="nofollow noreferrer">list of open problems</a>, but finally decided to leave it out, so it is sometimes referred to as <a href="http://en.wikipedia.org/wiki/Hilbert%27s_twenty-fourth_problem" rel="nofollow noreferrer">Hilbert's 24th problem</a>.</p> <p>There is a rather well-established field investigating proof equivalence, though definitely no clear answer to either your or Hilbert's question (and it is likely that this is quite out of reach). However, here are some various notions of proof equivalence of increasing strength.</p> <ol> <li><p>Equality w.r.t. <strong>variable re-naming</strong> (also called <span class="math-container">$\alpha$</span>-renaming). This is much too fine: clearly there are proofs that are &quot;morally&quot; the same, but that differ by more than variable re-namings.</p> </li> <li><p>Equality w.r.t. <strong>definition unfolding</strong>. This doesn't solve all of the above problems, but it's clear that if one proof involves a compact definition and the other does not, they should be seen to be the same.</p> </li> <li><p>Equality w.r.t. <a href="http://en.wikipedia.org/wiki/Cut-elimination_theorem" rel="nofollow noreferrer"><strong>cut elimination</strong></a>. This one is much more interesting! For two proofs <span class="math-container">$\Delta$</span>, <span class="math-container">$\Phi$</span>, we say that <span class="math-container">$\Delta\simeq_{\mathrm{cut}}\Phi$</span> if the two proofs are the same (modulo variable re-naming) <em>after having eliminated all cuts</em>. Now a lot of rather different proofs become quite similar, e.g. 
proofs in calculus involving abstract &quot;open/closed&quot; terminology can be reduced to simple proofs involving <span class="math-container">$\epsilon$</span>-<span class="math-container">$\delta$</span> notation. This <em>still</em> isn't satisfactory since e.g., some hypotheses can be used in different orders, or some useless steps still remain. It's also not clear that this is not <em>too</em> coarse, since sometimes the use of crucial lemmas makes a proof <em>much simpler</em>, and cut-elimination makes all intermediate lemmas &quot;disappear&quot;.</p> </li> <li><p>Equality w.r.t. cut elimination with <strong>commutative cuts</strong>. See, for example <a href="http://www.lama.univ-savoie.fr/%7Etypes09/slides/types09-slides-45.pdf" rel="nofollow noreferrer">these</a> really nice slides by Clement Houtmann. This might get closer to the &quot;right&quot; notion, though as you can see things start to get a bit subjective at this point. What does it mean to &quot;use the same idea&quot;?</p> </li> </ol> <p>As Bruno mentioned, there is a deep connection between proofs of propositions and certain programs in particular programing languages, so one may re-formulate the question as</p> <blockquote> <p>When are two programs the same?</p> </blockquote> <p>with very fruitful results. 
The conclusion should be that this is a very active area of research in proof theory, with connections to computer science and <a href="http://en.wikipedia.org/wiki/Categorical_logic" rel="nofollow noreferrer">categorical logic</a>.</p> <hr /> <p><strong>Addendum</strong></p> <p>Looking back at this answer a couple of years later, I feel like I should add a couple of more recent research directions around this idea of proof equivalence.</p> <ol> <li><p><a href="https://en.wikipedia.org/wiki/Focused_proof" rel="nofollow noreferrer">Focusing</a> is a proof system that refines the classical sequent presentation of derivations, trying to make irrelevant choices disappear by distinguishing between the &quot;invertible&quot; rules, which can be applied at any time without worry, and the &quot;positive&quot; rules in which choices do matter. A pretty neat paper explaining the philosophy of this approach is <a href="https://www.sciencedirect.com/science/article/pii/S0168007208000080" rel="nofollow noreferrer">Zeilberger, On the unity of duality</a>.</p> </li> <li><p>Similarly, <a href="https://en.wikipedia.org/wiki/Proof_net" rel="nofollow noreferrer">proof nets</a> try to represent families of proofs in ways that quotient out irrelevant proof-search distinctions. I know less about this, but it is certainly related to focusing.</p> </li> </ol>
<p>This <em>proof irrelevance</em> is one of the problems of classical foundations.</p> <p>In <a href="http://en.wikipedia.org/wiki/Type_theory" rel="nofollow">Type Theory</a>, however, we represent mathematical statements as types, which enables us to treat proofs as mathematical objects. This is because of a well-known isomorphism between types and propositions, a.k.a. the <strong>Curry-Howard Correspondence</strong>, which roughly says that to find a proof of a statement <em>A</em> is to find an inhabitant of this type: $$a:A$$ which, from the point of view of logic, can be read '<em>a</em> is a proof of the proposition <em>A</em>'. </p> <p>In this sense, to prove a proposition is to construct an inhabitant of a type, which means that every mathematical proof can be seen as an algorithm. This is related to the "constructive" (intuitionistic) conception of logic, where (i) to prove a statement of the form "A and B" is to find a proof of A and a proof of B, (ii) to prove that A implies B is to find a function that converts a proof of A into a proof of B, (iii) every proof that something exists carries with it enough information to exhibit such an object, and so on. Hence equality of elements of a type (proofs) is treated intensionally.</p> <p>Now <a href="http://homotopytypetheory.org/" rel="nofollow">Homotopy Type Theory</a> thinks of types as "homotopical spaces", interpreting, as stated in the comments, the relation of identity $a=b$ between elements (proofs) of the same type (proposition) $a,b: A$ as homotopical equivalence, understood as a path from the point $a$ to the point $b$ in the space $A$. The HoTT book is available for free on the project website.</p>
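<p>The correspondence can be made concrete in a proof assistant. A small Lean 4 sketch of my own (not from the answer), in which writing a program of a given type literally proves the corresponding proposition:</p>

```lean
-- A proof of A → (B → A) is just the constant-function term:
theorem const_intro {A B : Prop} : A → (B → A) :=
  fun a _ => a

-- (i) To prove "A and B", supply a pair of proofs:
theorem conj_intro {A B : Prop} (ha : A) (hb : B) : A ∧ B :=
  ⟨ha, hb⟩

-- (ii) To prove "A implies B", give a function converting proofs:
theorem apply_impl {A B : Prop} (f : A → B) (ha : A) : B :=
  f ha
```

<p>Each <code>theorem</code> here is accepted by the type checker precisely because the term on the right inhabits the stated type.</p>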
logic
<p>Suppose we have a line of people that starts with person #1 and goes for a (finite or infinite) number of people behind him/her, and this property holds for every person in the line: </p> <blockquote> <p><em>If everyone in front of you is bald, then you are bald</em>.</p> </blockquote> <p>Without further assumptions, does this mean that the first person is necessarily bald? Does it say <em>anything</em> about the first person at all? </p> <p>In my opinion, it means: </p> <blockquote> <p><em>If there exists anyone in front of you and they're all bald, then you're bald.</em> </p> </blockquote> <p>Generally, for a statement that consists of a subject and a predicate, if the subject doesn't exist, then does the statement have a truth value? </p> <p>I think there's a convention in math that if the subject doesn't exist, then the statement is true.</p> <p>I don't have a problem with this convention (in the same way that I don't have a problem with the meaning of '<em>or</em>' in math). My question is whether it's a clear logical implication of the facts, or whether we have to <strong>define the truth value</strong> for these subject-less statements.</p> <hr> <p><strong>Addendum:</strong> </p> <p>You can read up on this matter <a href="https://en.wikipedia.org/wiki/Syllogism#Existential_import" rel="noreferrer">here</a> (too).</p>
<p>You can see what's going on by reformulating the assumption in its equivalent contrapositive form: </p> <blockquote> <p><em>If I'm not bald, then there is someone in front of me who is not bald.</em></p> </blockquote> <p>Now the first person in line finds himself thinking, "There is <em>no one</em> in front of me. So it's not true that there is someone in front of me who is not bald. So it's not true that I'm not bald. So I must be bald!"</p>
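<p>The contrapositive equivalence this argument leans on can be checked exhaustively over the four truth-value assignments; a tiny Python sketch of my own:</p>

```python
from itertools import product

def implies(p, q):
    """Material implication: false only when p is true and q is false."""
    return (not p) or q

# (P -> Q) agrees with its contrapositive (not Q -> not P) everywhere:
for P, Q in product([False, True], repeat=2):
    assert implies(P, Q) == implies(not Q, not P)

# The vacuous case: a false antecedent makes the implication true.
assert implies(False, False) and implies(False, True)
```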
<p>Mathematical logic <em>defines</em> a statement about <a href="https://en.wikipedia.org/wiki/Universal_quantification#The_empty_set" rel="nofollow noreferrer">all elements of an empty set</a> to be true. This is called <a href="https://en.wikipedia.org/wiki/Vacuous_truth" rel="nofollow noreferrer">vacuous truth</a>. It may be somewhat confusing since it doesn't agree with common everyday usage, where making a statement tends to suggest that there is some object for which the statement actually holds (like the person in front of you in your example).</p> <p>But it is <em>exactly</em> the right thing to do in a formal setup, for several reasons. One reason is that logical statements don't <em>suggest</em> anything: you must not assume any meaning in excess of what's stated explicitly. Another reason is that it makes several conversions possible without special cases. For example,</p> <p>$$\forall x\in(A\cup B):P(x)\;\Leftrightarrow \forall x\in A:P(x)\;\wedge\;\forall x\in B:P(x)$$</p> <p>holds even if $A$ (or $B$) happens to be the empty set. Another example is the conversion between universal and existential quantification <a href="https://math.stackexchange.com/users/86747/barry-cipra">Barry Cipra</a> <a href="https://math.stackexchange.com/a/1669020/35416">used</a>:</p> <p>$$\forall x\in A:\neg P(x)\;\Leftrightarrow \neg\exists x\in A:P(x)$$</p> <p>If you are into programming, then the following pseudocode snippet may also help explain this:</p> <pre><code>bool universal(set, property) {
    for (element in set)
        if (not property(element))
            return false
    return true
}
</code></pre> <p>As you can see, the universally quantified statement is <em>only</em> false if there <em>exists</em> an element of the set for which it does not hold. 
Conversely, you could define</p> <pre><code>bool existential(set, property) {
    for (element in set)
        if (property(element))
            return true
    return false
}
</code></pre> <p>This is also similar to other empty-set definitions like</p> <p>$$\sum_{x\in\emptyset}f(x)=0\qquad\prod_{x\in\emptyset}f(x)=1$$</p> <blockquote> <p>If everyone in front of you is bald, then you are bald.</p> </blockquote> <p>Applying the above to the statement from your question: from</p> <p>$$\bigl(\forall y\in\text{People in front of }x: \operatorname{bald}(y) \bigr)\implies\operatorname{bald}(x)$$</p> <p>one can derive</p> <p>$$\emptyset=\text{People in front of }x\implies\operatorname{bald}(x)$$</p> <p>so <strong>yes, the first person must be bald</strong> because there is no one in front of him.</p> <p>Some formalisms prefer to write the “People in front of” as a pair of predicates instead of a set. In such a setup, you'd see fewer sets and more implications:</p> <p>$$\Bigl(\forall y: \bigl(\operatorname{person}(y)\wedge(y\operatorname{infrontof}x)\bigr)\implies\operatorname{bald}(y) \Bigr)\implies\operatorname{bald}(x)$$</p> <p>If there is no $y$ satisfying both predicates, then the left hand side of the first implication is always false, rendering the implication as a whole always true, thus allowing us to conclude the baldness of the first person. The fact that an implication with a false antecedent is always true is another form of vacuous truth.</p> <p>Note to self: <a href="https://math.stackexchange.com/questions/556117/good-math-bed-time-stories-for-children#comment1182925_556133">this comment</a> indicates that <em>Alice in Wonderland</em> was dealing with vacuous truth at some point. I should re-read that book and quote any interesting examples when I find the time.</p>
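<p>Python's built-ins happen to implement exactly these conventions, which gives a compact way to check them (my own addition to the answer):</p>

```python
import math

people_in_front_of_first = []  # nobody stands in front of person #1

# A universal statement over the empty set is vacuously true...
assert all(bald for bald in people_in_front_of_first)
# ...while the matching existential statement is false:
assert not any(bald for bald in people_in_front_of_first)

# The empty-sum and empty-product conventions from the answer:
assert sum(x for x in []) == 0
assert math.prod([]) == 1
```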
linear-algebra
<p>Let $A=\begin{bmatrix}a &amp; b\\ c &amp; d\end{bmatrix}$.</p> <p>How could we show that $ad-bc$ is the area of a parallelogram with vertices $(0, 0),\ (a, b),\ (c, d),\ (a+c, b+d)$?</p> <p>Are the areas of the following parallelograms the same? </p> <p>$(1)$ parallelogram with vertices $(0, 0),\ (a, b),\ (c, d),\ (a+c, b+d)$.</p> <p>$(2)$ parallelogram with vertices $(0, 0),\ (a, c),\ (b, d),\ (a+b, c+d)$.</p> <p>$(3)$ parallelogram with vertices $(0, 0),\ (a, b),\ (c, d),\ (a+d, b+c)$.</p> <p>$(4)$ parallelogram with vertices $(0, 0),\ (a, b),\ (c, d),\ (a+d, b+c)$.</p> <p>Thank you very much.</p>
<p>Spend a little time with this figure due to <a href="http://en.wikipedia.org/wiki/Solomon_W._Golomb" rel="noreferrer">Solomon W. Golomb</a> and enlightenment is not far off:</p> <p><img src="https://i.sstatic.net/gCaz3.png" alt="enter image description here" /></p> <p>(Appeared in <em>Mathematics Magazine</em>, March 1985.)</p>
<p><img src="https://i.sstatic.net/PFTa4.png" alt="enter image description here"></p> <p>I know I'm extremely late with my answer, but there's a pretty straightforward geometrical approach to explaining it. I'm surprised no one has mentioned it. It does have a shortcoming though - it does not explain why the area flips sign, because there's no such thing as negative area in geometry, just like you can't have a negative amount of apples (unless you are an economics major).</p> <p>It's basically:</p> <pre><code> Parallelogram = Rectangle - Extra Stuff. </code></pre> <p>If you simplify $(c+a)(b+d)-2ad-cd-ab$ you will get $ad-bc$.</p> <p>It is also interesting to note that if you swap the vectors' places then you get a negative area (the opposite of what $ad-bc$ would produce), which is basically:</p> <pre><code> -Parallelogram = Rectangle - (2*Rectangle - Extra Stuff) </code></pre> <p>Or more concretely:</p> <p>$(c+a)(b+d) - [2*(c+a)(b+d) - (2ad+cd+ab)]$</p> <p>Also it's $bc-ad$, when simplified.</p> <p>The sad thing is that there's no good geometrical reason why the sign flips; you will have to turn to linear algebra to understand that.</p> <p>As others have noted, the determinant is the scale factor of a linear transformation, so a negative scale factor indicates a reflection.</p>
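As a numerical sanity check of the claim (a sketch I am adding, not part of either answer): the shoelace formula applied to the parallelogram with vertices taken in the cyclic order $(0,0),\ (a,b),\ (a+c,b+d),\ (c,d)$ should agree with $|ad-bc|$.

```python
def shoelace_area(vertices):
    """Shoelace formula: half the absolute value of the signed sum."""
    s = 0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

def det_area(a, b, c, d):
    """Unsigned area predicted by the determinant."""
    return abs(a * d - b * c)

# Example: vectors (3, 1) and (1, 2); determinant 3*2 - 1*1 = 5.
a, b, c, d = 3, 1, 1, 2
quad = [(0, 0), (a, b), (a + c, b + d), (c, d)]  # cyclic order matters
assert shoelace_area(quad) == det_area(a, b, c, d) == 5
```

Note that the vertices must be listed in cyclic order around the parallelogram; listing them in the order given in the question would trace a self-intersecting quadrilateral.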
logic
<p>I'm reading through Hindley and Seldin's book about the lambda calculus and combinatory logic. In the book, the authors express that, though combinatory logic can be expressed as an equational theory in first order logic, the lambda calculus cannot. I sort of understand why this is true, but I can't seem to figure out the problem with the following method. Aren't the terms of first order logic arbitrary (constructed from constants, functions, and variables, etc.), so why can't we define composition and lambda abstraction as functions on variables, and then consider an equational theory over these terms? Which systems of logic can the lambda calculus be formalized as an equational theory over?</p> <p>In particular, we consider a first order language with a single binary predicate, equality. We take a set $V$ of terms over the $\lambda$ calculus, which we will view as constants in the enveloping first order theory and, for each $x \in V$, we will add a function $f_x$, corresponding to $\lambda$ abstraction. We also add a binary function $c$ corresponding to composition. We add in the standard equality axioms, in addition to the $\beta$ and $\alpha$ conversion rules, which are a sequence of infinitely many axioms produced over the terms formed from the $\lambda$ terms by composition and $\lambda$ abstraction. I doubt this is finitely axiomatizable, but it's still a first order theory.</p>
<p>Untyped <span class="math-container">$\lambda$</span>-calculus and untyped combinatory terms both have a form of the <span class="math-container">$Y$</span> combinator, so we introduce types to avoid infinite reductions. As (untyped) systems, they are equivalent <em>only</em> as equational theories. To elaborate:</p> <p>Take <span class="math-container">$(\Lambda , \rightarrow_{\beta\eta})$</span> to be the set of <span class="math-container">$\lambda$</span>-terms equipped with the <span class="math-container">$\beta\eta$</span>-relation (abstraction and application only, I am considering the simplest case) and <span class="math-container">$(\mathcal{C}, \rightarrow_R)$</span> be the set of combinatory terms equipped with reduction. We want a <em>translation</em> <span class="math-container">$T$</span>: a bijection between the two sets that respects reduction. The problem is that reduction in combinatory terms happens &quot;too fast&quot; when compared with <span class="math-container">$\beta\eta$</span>, so such a translation does not exist. When viewed as <em>equational theories</em>, i.e. <span class="math-container">$(\Lambda , =_{\beta\eta})$</span> and <span class="math-container">$(\mathcal{C}, =_R)$</span>, we can find a translation that respects equalities.</p> <p>An interesting fact is that <span class="math-container">$=_{\beta\eta} \approx =_{ext}$</span>, where <span class="math-container">$=_{ext}$</span> is a form of extensional equality on <span class="math-container">$\lambda$</span>-terms.</p> <p>To answer your question, when you view these systems as equational theories, they are both expressible (in FOL) since one of them is and this translation exists. But using just reduction, <span class="math-container">$\lambda$</span>-calculus has the notion of bound variables, while combinatory logic does not. Bound variables require more expressibility in the system, more than FOL. 
You need to have <em>access</em> to FOL expressions in order to differentiate them from free variables, which you cannot do from inside the system.</p> <p>Concepts related to, and explanatory of, my abuse of terminology when I say <em>&quot;you need to have access to expressions of FOL&quot;</em> are the <span class="math-container">$\lambda$</span>-cube and higher-order logics from formal systems, and &quot;linguistic levels&quot; from linguistics. Informally it is the level of precision the system in question has. Take for example the alphabet <span class="math-container">$\{a,b\}$</span> and consider all words from this alphabet. This language requires the minimum amount of precision to be expressed, since it is the largest language I can create from this alphabet. Now consider only words of the form <span class="math-container">$a^nb^n$</span>. This language requires a higher level of precision for a system to be able to express it, because you need to be able to &quot;look inside&quot; a word to tell if it belongs to the language or not. You can read more about this topic at <a href="https://devopedia.org/chomsky-hierarchy" rel="nofollow noreferrer">https://devopedia.org/chomsky-hierarchy</a></p>
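The two languages contrasted above can be illustrated with toy membership tests (this is my own illustration; the function names are mine). The first check is memoryless, symbol by symbol; the second must compare counts, i.e. "look inside" the word:

```python
# Membership in the full language over {a, b}: a memoryless,
# symbol-by-symbol check suffices (a regular language).
def in_full_language(w):
    return all(ch in "ab" for ch in w)

# Membership in { a^n b^n }: the check must compare the number of a's
# and b's, which no memoryless scan can do (a non-regular language).
def in_anbn(w):
    n = len(w) // 2
    return len(w) % 2 == 0 and w == "a" * n + "b" * n

assert in_full_language("abab") and in_full_language("aabb")
assert in_anbn("aabb") and not in_anbn("abab")
```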
<p>The problem with your approach is that in the semantics of lambda calculus, if you have that the interpretation of two variables is equal in a model, then their lambda abstractions will need to be too. Your situation may have <span class="math-container">$[[x]]=[[y]]$</span> but <span class="math-container">$[[f_x]]$</span> and <span class="math-container">$[[f_y]]$</span> distinct. It therefore doesn't have the same semantics as lambda calculus, so doesn't axiomatise it successfully (you can see this, as your axiomatisation is universal, so will be closed under substructure, whereas the semantics of <span class="math-container">$\lambda$</span>-calculus is not closed under substructure).</p> <p>There are two other approaches that work: you could have a predicate <span class="math-container">$V$</span> representing 'being a variable' and axioms saying <span class="math-container">$V(A)$</span> for each <span class="math-container">$A$</span> which <span class="math-container">$\beta$</span>-reduces to a variable, and axioms saying <span class="math-container">$\neg V(A)$</span> for all other <span class="math-container">$A$</span>. You can then introduce your partial lambda abstraction function as a ternary predicate, i.e. <span class="math-container">$\lambda (A,B,C)$</span> represents <span class="math-container">$\lambda A.B$</span> is defined and <span class="math-container">$=C$</span>. You can then axiomatise only being able to abstract over a variable: <span class="math-container">$\forall A,B (\exists C \lambda (A,B,C))\iff V(A)$</span> and then proceed to write the rest of the theory using these predicates.</p> <p>This works, but since whether a term reduces to a variable is undecidable, these axioms are not recursively enumerable, which is a pain for logicians. 
The final and best approach is to note that Scott-Meyer models of <span class="math-container">$\lambda$</span>-calculus are essentially described in a first order way.</p> <p>So you can have an axiomatisation using 3 constants (s,k,e) and a binary function <span class="math-container">$\cdot$</span> with the axioms:<br /> <span class="math-container">$\forall a,b \, \, \, \, \, \, \, \, k \cdot a \cdot b$</span> <span class="math-container">$= a$</span><br /> <span class="math-container">$\forall a,b,c \, \, \, \, \, \, s \cdot a \cdot b\cdot c$</span> <span class="math-container">$= a\cdot c \cdot (b\cdot c)$</span><br /> <span class="math-container">$e \cdot e = e$</span><br /> <span class="math-container">$\forall a,b \, \, \, \, \, \, \, \, e \cdot a \cdot b$</span> <span class="math-container">$= a\cdot b$</span><br /> <span class="math-container">$\forall a,b \, \, \, \, \, \, \, \, \forall c \, \, a \cdot c$</span> <span class="math-container">$= b\cdot c \to e \cdot a = e \cdot b$</span></p> <p>This is a theory (in a different language) which has the same (Set) models as <span class="math-container">$\lambda$</span>-calculus. While this is now first order, it is no longer an <em>algebraic</em> axiomatisation.</p>
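The first two axioms are just the reduction rules of the $s$ and $k$ combinators, so they can be exercised with a tiny term rewriter. A minimal sketch of my own (terms are nested pairs, and only head reduction is performed):

```python
def app(f, *args):
    """Left-associated application: app('K', a, b) means (K a) b."""
    for x in args:
        f = (f, x)
    return f

def unwind(t):
    """Split a term into its head atom and its list of arguments."""
    args = []
    while isinstance(t, tuple):
        t, x = t
        args.append(x)
    return t, list(reversed(args))

def normalize(t, fuel=1000):
    """Repeatedly apply  K a b -> a  and  S a b c -> a c (b c)."""
    while fuel > 0:
        fuel -= 1
        head, args = unwind(t)
        if head == "K" and len(args) >= 2:
            t = app(args[0], *args[2:])
        elif head == "S" and len(args) >= 3:
            a, b, c = args[:3]
            t = app(app(a, c), (b, c), *args[3:])
        else:
            return t
    raise RuntimeError("no head normal form within fuel limit")

# The first two axioms:  k.a.b = a  and  s.a.b.c = a.c.(b.c)
assert normalize(app("K", "a", "b")) == "a"
assert normalize(app("S", "a", "b", "c")) == app(app("a", "c"), ("b", "c"))
# S K K behaves as the identity combinator:
assert normalize(app("S", "K", "K", "v")) == "v"
```

The remaining axioms (about $e$) capture weak extensionality, which is exactly the part that a pure rewriting sketch like this does not model.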
logic
<p>In plain language, what's the difference between two things that are 'equivalent', 'equal', 'identical', and isomorphic?</p> <p>If the answer depends on the area of mathematics, then please take the question in the context of logical systems and statements.</p>
<p>Convention may vary, but the following is, I guess, how most mathematicians would use these notions. Identical and equal are very often used synonymously. However, sometimes identical is meant to say that the two things are not just equal, but actually are syntactically equal. For instance, take $x=2$. The claim that $x^2=4$ is saying that $x^2$ and $4$ are equal. The claim that $x^2=x^2$ is saying that $x^2$ is equal to $x^2$, but we also say that the left hand side and the right hand side are identical. </p> <p>Equivalence is a strictly weaker notion than equality. It can be formalized in many different ways. For instance, as an equivalence relation. The identity relation is always an equivalence relation, but not the other way around. A typical way to obtain an equivalence is to suppress some properties of the objects you study, and only look at particular aspects of them. A classical example is modular arithmetic. We say that $10$ and $20$ are equivalent modulo $5$, basically saying that while $10$ and $20$ are not equal, if the only thing we care about is their divisibility by $5$, then they are the same. </p> <p>Isomorphism is a specific term from category theory. Two objects are isomorphic if there exists an invertible morphism between them. Informally, two isomorphic objects are identical for the purposes of answering any question about them in their category. </p>
<p>They have different <a href="http://qchu.wordpress.com/2013/05/28/the-type-system-of-mathematics/">types</a>.</p> <p>"Equal" and "identical" take as input two elements of a <em>set</em> and return a truth value. They both mean the same thing, which is what you think they'd mean. For example, we can consider the set $\{ 1, 2, 3, \dots \}$ of natural numbers, and then $1 = 1$ is true, $1 = 2$ is false, and so forth.</p> <p>"Equivalent" takes as input two elements of a <em>set together with an <a href="http://en.wikipedia.org/wiki/Equivalence_relation">equivalence relation</a></em> and returns a truth value corresponding to the equivalence relation. For example, we can consider the set $\{ 1, 2, 3, \dots \}$ of natural numbers together with the equivalence relation "has the same remainder upon division by $2$," and then $1 \equiv 3$ is true, $1 \equiv 4$ is false, and so forth. The crucial point here is that an equivalence relation is extra structure on a set. It doesn't make sense to ask whether $1$ is equivalent to $3$ without specifying what equivalence relation you're talking about. </p> <p>"Isomorphic" takes as input two objects in a <em><a href="http://en.wikipedia.org/wiki/Category_%28mathematics%29">category</a></em> and returns a truth value corresponding to whether an <a href="http://en.wikipedia.org/wiki/Isomorphism">isomorphism</a> between the two objects exists. For example, we can consider the category of sets and functions, and then the set $\{ 1, 2 \}$ and the set $\{ 3, 4 \}$ are isomorphic because the map $1 \to 3, 2 \to 4$ is an isomorphism between them. The crucial point here is, again, a category structure is extra structure on a set (of objects). It doesn't make sense to ask whether two objects are isomorphic without specifying what category structure you're talking about.</p> <p>Here is a terrible place where this distinction matters. 
In <a href="http://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory">ZF set theory</a>, in addition to being able to ask whether two sets are isomorphic (which means that they are in bijection with each other), it is also a meaningful question to ask whether two sets are <em>equal</em>. The former involves the structure of the category of sets while the latter involves the "set" of sets (not really a set, but that isn't the problem here). For example, $\{ 1, 2 \}$ and $\{ 3, 4 \}$ are particular sets in ZFC which are not the same set (because they don't contain the same elements; that's what it means for two sets in ZFC to be <em>equal</em>) even though they are in bijection with each other. This distinction can trip up the unwary if they aren't careful.</p> <p>(My personal conviction is that you should never be allowed to ask the question of whether two bare sets are equal in the first place. It is basically never the question you actually want to ask.) </p>
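The three notions can be mimicked directly in code (a toy sketch of my own; the names are mine): equality is a property of two elements, equivalence needs an extra relation, and isomorphism of sets needs a bijection.

```python
# Equality: a property of two elements of a set.
assert 1 == 1 and not (1 == 3)

# Equivalence: needs extra structure, an equivalence relation.
def equivalent_mod2(a, b):
    """'Has the same remainder upon division by 2.'"""
    return a % 2 == b % 2

assert equivalent_mod2(1, 3) and not equivalent_mod2(1, 4)

# Isomorphism in the category of sets: a bijection exists.
A, B = {1, 2}, {3, 4}
f = {1: 3, 2: 4}                      # the map 1 -> 3, 2 -> 4
is_bijection = (set(f) == A and set(f.values()) == B
                and len(set(f.values())) == len(f))
assert is_bijection                   # A and B are isomorphic (in bijection)
assert A != B                         # ...but they are not equal as sets
```

The last two lines are exactly the ZF distinction above: $\{1,2\}$ and $\{3,4\}$ are in bijection, yet they are not the same set.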
differentiation
<blockquote> <p>If $f(x)=\frac{1}{x^2+x+1}$, find $f^{(36)} (0)$. </p> </blockquote> <p>So far I have tried letting $a=x^2+x+1$ and then finding the first several derivatives to see if some terms would disappear because the third derivative of $a$ is $0$, but the derivatives keep getting longer and longer. Am I on the right track? Thanks!</p>
<p>We can write:</p> <p>$$1+ x + x^2 = \frac{1-x^3}{1-x}$$</p> <p>Therefore:</p> <p>$$f(x) = \frac{1-x}{1-x^3} $$</p> <p>We can then expand this in powers of $x$:</p> <p>$$f(x) = (1-x)\sum_{k=0}^{\infty}x^{3 k}$$</p> <p>which is valid for $\left|x\right|&lt;1$. The coefficient of $x^{36}$ is thus equal to $1$, so the 36th derivative at $x = 0$ is $36!$ .</p>
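The expansion can be cross-checked with the Taylor-coefficient recurrence (a sketch I am adding, not part of the answer): writing $f(x)=\sum_n c_n x^n$, the identity $(1+x+x^2)f(x)=1$ forces $c_0=1$, $c_1=-1$, and $c_n=-c_{n-1}-c_{n-2}$ for $n\ge 2$.

```python
from math import factorial

# Taylor coefficients of f(x) = 1/(1 + x + x^2) from (1+x+x^2) f(x) = 1:
# c_0 = 1, c_1 = -1, and c_n = -c_{n-1} - c_{n-2} for n >= 2.
c = [1, -1]
for n in range(2, 37):
    c.append(-c[-1] - c[-2])

# The coefficients repeat with period 3: 1, -1, 0, 1, -1, 0, ...
assert c[:6] == [1, -1, 0, 1, -1, 0]
assert c[36] == 1                     # coefficient of x^36, since 36 = 3*12

# f^(36)(0) = 36! * c_36 = 36!
assert factorial(36) * c[36] == factorial(36)
```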
<p>Let $\omega$ be a complex cube root of $1$. Then Partial Fraction representation of $f(x)$ is given by</p> <p>$f(x) = \dfrac{1}{x^2+x+1} = \dfrac{1}{(x-\omega)(x-\omega^2)} = \dfrac{1}{\omega - \omega^2}\Big(\dfrac{1}{x-\omega} - \dfrac{1}{x - \omega^2}\Big)$.</p> <p>Find successive derivatives to show that</p> <p>$f^{(36)}(x) = \dfrac{1}{\omega - \omega^2}(36! (x-\omega)^{-37} - 36! (x - \omega^2)^{-37})$.</p> <p>Let $x = 0$ and use $\omega^3 = 1$.</p>
probability
<p>I have trouble understanding the massive importance that is afforded to Bayes' theorem in undergraduate courses in probability and popular science.</p> <p>From the purely mathematical point of view, I think it would be uncontroversial to say that Bayes' theorem does not amount to a particularly sophisticated result. Indeed, the relation <span class="math-container">$$P(A|B)=\frac{P(A\cap B)}{P(B)}=\frac{P(B\cap A)P(A)}{P(B)P(A)}=\frac{P(B|A)P(A)}{P(B)}$$</span> is a one line proof that follows from expanding both sides directly from the definition of conditional probability. Thus, I expect that what people find interesting about Bayes' theorem has to do with its practical applications or implications. However, even in those cases I find the typical examples being used as a justification of this to be a bit artificial.</p> <hr /> <p>To illustrate this, the classical application of Bayes' theorem usually goes something like this: Suppose that</p> <ol> <li>1% of women have breast cancer;</li> <li>80% of mammograms are positive when breast cancer is present; and</li> <li>10% of mammograms are positive when breast cancer is not present.</li> </ol> <p>If a woman has a positive mammogram, then what is the probability that she has breast cancer?</p> <p>I understand that Bayes' theorem allows to compute the desired probability with the given information, and that this probability is counterintuitively low. However, I can't help but feel that the premise of this question is wholly artificial. The only reason why we need to use Bayes' theorem here is that the full information with which the other probabilities (i.e., 1% have cancer, 80% true positive, etc.) have been computed is not provided to us. 
If we have access to the sample data with which these probabilities were computed, then we can directly find <span class="math-container">$$P(\text{cancer}|\text{positive test})=\frac{\text{number of women with cancer and positive test}}{\text{number of women with positive test}}.$$</span> In mathematical terms, if you know how to compute <span class="math-container">$P(B|A)$</span>, <span class="math-container">$P(A)$</span>, and <span class="math-container">$P(B)$</span>, then this means that you know how to compute <span class="math-container">$P(A\cap B)$</span> and <span class="math-container">$P(B)$</span>, in which case you already have your answer.</p> <hr /> <p>From the above arguments, it seems to me that Bayes' theorem is essentially only useful for the following reasons:</p> <ol> <li>In an adversarial context, i.e., someone who has access to the data only tells you about <span class="math-container">$P(B|A)$</span> when <span class="math-container">$P(A|B)$</span> is actually the quantity that is relevant to your interests, hoping that you will get confused and will not notice.</li> <li>An opportunity to dispel the confusion between <span class="math-container">$P(A|B)$</span> and <span class="math-container">$P(B|A)$</span> with concrete examples, and to explain that these are very different when the ratio between <span class="math-container">$P(A)$</span> and <span class="math-container">$P(B)$</span> deviates significantly from one.</li> </ol> <p>Am I missing something big about the usefulness of Bayes' theorem? In light of point 2., especially, I don't understand why Bayes' theorem stands out so much compared to, say, the Borel-Kolmogorov paradox, or the &quot;paradox&quot; that <span class="math-container">$P[X=x]=0$</span> when <span class="math-container">$X$</span> is a continuous random variable, etc.</p>
<p>You are mistaken in thinking that what you perceive as &quot;the massive importance that is afforded to Bayes' theorem in undergraduate courses in probability and popular science&quot; is really &quot;the massive importance that is afforded to Bayes' theorem in undergraduate courses in probability and popular science.&quot; But it's probably not your fault: This usually doesn't get explained very well.</p> <p>What is the probability of a Caucasian American having brown eyes? <b>What does that question mean?</b> By one interpretation, commonly called the frequentist interpretation of probability, it asks merely for the proportion persons having brown eyes among Caucasian Americans.</p> <p>What is the probability that there was life on Mars two billion years ago? <b>What does that question mean?</b> It has no answer according to the frequentist interpretation. &quot;The probability of life on Mars two billion years ago is <span class="math-container">$0.54$</span>&quot; is taken to be meaningless because one cannot say it happened in <span class="math-container">$54\%$</span> of all instances. But the Bayesian, as opposed to frequentist, interpretation of probability works with this sort of thing.</p> <p>The Bayesian interpretation applied to statistical inference is immune to various pathologies afflicting that field.</p> <p>Possibly you have seen that some people attach massive importance to the Bayesian interpretation of probability and mistakenly thought it was merely massive importance attached to Bayes's theorem. People who do consider Bayesianism important seldom explain this very clearly, primarily because that sort of exposition is not what they care about.</p>
<p>While I agree with Michael Hardy's answer, there is a sense in which Bayes' theorem is more important than any random identity in basic probability. Write Bayes' Theorem as</p> <p><span class="math-container">$$\text{P(Hypothesis|Data)}=\frac{\text{P(Data|Hypothesis)P(Hypothesis)}}{\text{P(Data)}}$$</span></p> <p>The left hand side is what we usually want to know: given what we've observed, what should our beliefs about the world be? But the main thing that probability theory gives us is in the numerator on the right side: the frequency with which any <em>given</em> hypothesis will generate particular kinds of data. Probabilistic models in some sense answer the wrong question, and Bayes' theorem tells us how to combine this with our prior knowledge to generate the answer to the right question.</p> <p>Frequentist methods that try not to use the prior have to reason about the quantity on the left by indirect means or else claim the left side is meaningless in many applications. They work, but frequently confuse even professional scientists. E.g. the common misconceptions about <span class="math-container">$p$</span>-values come from people assuming that they are a left-side quantity when they are a right-side quantity.</p>
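Plugging the mammogram numbers from the question into the theorem gives the counterintuitively low posterior the question alludes to (a quick sketch):

```python
p_cancer = 0.01             # P(A): 1% of women have breast cancer
p_pos_given_cancer = 0.80   # P(B|A): true-positive rate
p_pos_given_healthy = 0.10  # false-positive rate

# P(B) by the law of total probability:
p_pos = (p_pos_given_cancer * p_cancer
         + p_pos_given_healthy * (1 - p_cancer))

# Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos

print(round(p_cancer_given_pos, 3))   # roughly 0.075, i.e. about 7.5%
```

So even after a positive test, the probability of cancer is only about $7.5\%$, far below the $80\%$ figure that people tend to confuse it with.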
logic
<p>Puzzle: there are $n$ computers, most of which are good; the others may be bad ("most" in the strict sense: there are strictly more good computers than bad ones). You may ask any computer $A$ about the good/bad status of another computer $B$. If $A$ is good it will correctly indicate $B$'s status, but otherwise it may answer whatever. </p> <p>Your goal is to locate a good computer using the minimum number of questions in the worst case. In other words, devise an algorithm that requires no more than $N$ questions regardless of the outcome and is guaranteed to pinpoint a good computer, and make $N$ as small as possible. </p> <p>The original puzzle asks for the optimal $N$ when $n=100$. </p> <p>Warning: spoilers below. Stop here if you wish to think about this fun puzzle. </p> <p>. . . . .</p> <p>I, and everyone else I know who solved this, can do the $n=100$ case with $97$ questions in the worst case. I'm pretty sure this is optimal but I do really miserably on lower bounds. The simplest case where I can't match the bounds is $n=7$ (at most $3$ bad computers): this is doable with $5$ questions and I can rule out $3$ but I can't rule out $4$. </p> <p>More generally, if the number of bad computers is at most $k$ (so $n=2k+1$ or $n=2k+2$), I can show that at least $k+1$ questions are needed while $2k-1$ questions suffice. Can anyone narrow that gap?</p> <p>EDIT: starting a bounty, looking for improvements to the lower bound (or the upper bound, though I'd be surprised if the latter is possible) for general $n$. A transparent argument for why 7 computers require 5 questions is also good, but a computer-assisted case-by-case enumeration is not.</p>
<p>This is a very fun problem, which I have encountered in the literature as the <strong>knights and spies</strong> problem. Good computers are called <em>knights</em>, because they always tell the truth, while bad computers are called <em>spies</em>, because they say whatever they want. (Traditionally, a liar would be called a <em>knave</em>.) I will use this terminology because I feel it is more consistent with other liar/truth-teller puzzles.</p> <p>Let me briefly discuss an alternative objective: figure out <em>everyone's</em> identity, not just a single knight's. I encourage you to think about this problem before reading further. This is an interesting problem, in part because it's exciting how many different bounds I have seen people come up with:</p> <ul> <li>$n^2$ or $n(n-1)$ by asking everyone about everyone</li> <li>$\Theta(n\sqrt{n})$ using a standard "square-root" trick or $\Theta(n\log{n})$ by being smarter about the recursion</li> <li>$5n + O(1)$, $3n + O(1)$, and $2n + O(1)$ by increasingly efficient methods related to pairing up people</li> <li>$3n/2 + O(1)$ optimally!</li> </ul> <p>The 2009 paper <a href="http://arxiv.org/abs/0903.2869">"Knights, spies, games and ballot sequences"</a> by Mark Wildon proves that the final bound is optimal, computing the exact value of the $O(1)$ for every $n$. Mark Wildon has a <a href="http://www.ma.rhul.ac.uk/~uvah099/knights.html">webpage with additional information about the problem</a> which notes that this solution was previously published by Pavel M. Blecher in the 1983 paper <a href="http://www.math.iupui.edu/~bleher/Papers/1983_On_a_logical_problem.pdf">"On a logical problem"</a>. The strategy used in these papers, which Wildon calls the "Spider Interrogation Strategy" is fantastic.</p> <p>Your problem of identifying a single knight is more challenging! I'll go straight to the punch by saying that the optimal bound is $n-1 - H(n-1)$ where $H(k)$ is the number of 1 bits in the binary representation of $k$. 
I haven't carefully read the other answer to this question, but as far as I can tell, this bound can easily be achieved by a trivial modification of that strategy.</p> <p>The upper bound isn't too bad, but the lower bound is much trickier. When my friends and I thought about (and solved!) this problem, we used a reduction to the following problem:</p> <blockquote> <p>There are two players, the Questioner and God, and a collection of (nonnegative) numbers. The Questioner's aim is to achieve a violation of the (strict) triangle inequality, that is, have a single number be greater than or equal to the sum of the rest. God's aim is to not have this happen for as long as possible. Each move consists of the Questioner asking about two numbers, to which God replies "sum" or "difference"; the two numbers $a$ and $b$ are then replaced by either $a+b$ or $\lvert a-b\rvert$ accordingly. Since the number of numbers decreases by one each turn, eventually there will be two numbers and the (strict) triangle inequality is necessarily violated. If God manages to not lose before then, we say that "God wins".</p> </blockquote> <p>Here are some fun facts:</p> <ul> <li><p>Consider a starting position of $n$ ones. Then God wins if and only if $n$ is 1 more than a power of 2.</p></li> <li><p>Suppose God has a winning position. Then his answer to any question the Questioner can ask is <em>forced</em>; that is, it is always the case that one of the two answers is losing. We call this "Uniqueness". The way we think about it is that God somehow has to be very careful in answering every single question, and indeed, if you look at optimal play in "endgame" positions, it's hard to predict what the correct answer for God is.</p></li> <li><p>Suppose the starting position is $n=2^k+1$ ones. By the above two points, this is a winning position and God must carefully answer every question so as not to lose. 
Nonetheless, the answer to the first $(n-3)/2$ questions asked will necessarily be "sum".</p></li> </ul> <p>So there's this phenomenon that in a certain class of positions, God answers blindly for the first half of the time, and then has to be very careful afterwards. It's very weird.</p> <p>We finally classified winning positions. It turns out that the first bullet point above is true because $\binom{2n}{n}/2$ is odd only when $n$ is a power of 2.</p> <p>Some more quick observations:</p> <ul> <li><p>With regards to the reduction: this sum/difference game is just what you get if the spies are knaves. In that case, the knights and spies are just two groups of people which support themselves but accuse the others.</p></li> <li><p>The lower bound $n-1-H(n-1)$ holds even if you're trying to find a knight or a spy.</p></li> </ul> <p>Let me finish with some references to the literature. This result has been published many times! The term used in the literature for this game is "the majority game".</p> <blockquote> <p>You have a group of $n$ items, each with one of $k$ labels. One of the labels is shared by a majority of the items. You are allowed to ask if two items have the same label, and the goal is to minimize the number of questions necessary to identify a majority element.</p> </blockquote> <p>The best studied case is when $k=2$. The problem is often presented as played by a questioner and an answerer. This is not exactly the same game, because in principle spies could tell the truth. However, the lower bound reduction above is to the case when the spies are knaves, and then this is the same game.</p> <p>Here's a summary of the relevant literature:</p> <ul> <li><p>Michael J. Fischer and Steven L. Salzberg.<br> Finding a majority among $n$ votes.<br> J. Algorithms 3 (1982) 375--379.</p> <p>They consider the problem when $k$ is unknown, and a majority may or may not exist. They prove that the optimal bound is then $\lceil 3n/2 - 2\rceil$ questions. 
You may recognize this as the bound from Mark Wildon's paper. Now, I am a little confused because it doesn't seem to me that the problems map onto each other exactly, but I find it hard to imagine that it's an accident.</p> <p>Actually, the algorithm they present seems strikingly similar to Wildon's "spider interrogation strategy". Because Fischer and Salzberg are in a slightly different model (in particular one that is symmetric w.r.t. asking $x$ about $y$ and $y$ about $x$), you have to change some of the details. I don't think they exactly map onto each other (in terms of the questions asked), but they're similar.</p></li> <li><p>Saks, Michael E. and Werman, Michael.<br> On computing majority by comparisons.<br> Combinatorica 11 (1991), no. 4, 383--387.</p> <p>They show that if $n$ is odd, the optimal bound is $n-H(n)$, where $H(n)$ is the number of 1-bits in $n$.</p> <p>Their analysis proceeds by first reformulating the game as between a selector and an assigner. A position in the game is a multiset of integers, and the rules are as in the sum-difference game, e.g. "the game ends when the largest number in the multiset is at least half the total." They then have a slick proof of the upper bound! (In their notation it's a lower bound.) I mention this because even though the upper bound isn't hard, some analyses get mired in minor details.</p> <p>They then define a 2-adic valuation invariant which my friends and I also discovered. Their proof uses generating functions which ours did not, but I believe the actual invariant is the same as ours. Finally, they apply the invariant result to the position consisting of $2h+1$ ones by computing the valuation of $\binom{2h}{h}$.</p></li> <li><p>Alonso, Laurent; Reingold, Edward M.; and Schott, René<br> Determining the majority.<br> Inform. Process. Lett. 47 (1993), no. 5, 253--255.</p> <p>They present a short proof of the previous paper's results. The upper bound uses the same neat trick. 
They then prove the lower bound with essentially the same 2-adic invariant technique, except with a different exposition. Admittedly, it's cleaner and doesn't use generating functions.</p></li> <li><p>Wiener, Gábor<br> Search for a majority element.<br> International Conference (of the Forum for Interdisciplinary Mathematics) on Combinatorics, Information Theory and Statistics (Portland, ME, 1997). J. Statist. Plann. Inference 100 (2002), no. 2, 313--318.</p> <p>He proves some related results, including the following:</p> <p>If asking about $x$ and $y$ is an optimal question, then there exists an optimal question that doesn't ask about $x$. It follows that the optimal number of questions is monotonic, in the sense that adding another 1 into the position does not decrease the optimal number of questions.</p></li> </ul>
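The parity fact quoted earlier in this answer, that $\binom{2n}{n}/2$ is odd exactly when $n$ is a power of 2, is easy to check by machine (by Kummer's theorem the 2-adic valuation of $\binom{2n}{n}$ equals the number of 1-bits of $n$). A quick sketch of my own:

```python
from math import comb

def is_power_of_two(n):
    return n > 0 and n & (n - 1) == 0

for n in range(1, 200):
    # C(2n, n) is even for every n >= 1, so the division is exact.
    half_central = comb(2 * n, n) // 2
    assert (half_central % 2 == 1) == is_power_of_two(n)
```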
<p>Here's a more detailed explanation of the method I sketched in a comment. This method lowers the upper bound in some cases.</p> <p>We'll perform an operation which (1) reduces the number of computers under consideration (by about half), and (2) preserves the fact that more than half of the computers under consideration are good. Repeating this operation eventually reduces the number of computers to 1, at which point the more-than-half condition tells us that the computer is good, and we're done.</p> <p>The operation is this:</p> <ol> <li>Pair up all the computers arbitrarily. (Maybe one computer is left out; see step 3.)</li> <li>In each pair, ask one computer about the other. If the answer is "bad", discard both computers; if the answer is "good", discard just the testing computer and keep the tested one.</li> <li>If there was one computer left out of the pairing in step 1, then either keep or discard it so that you keep in total an odd number of computers.</li> </ol> <p>Proof that after this operation, more than half of the computers are good: Let $G$ be the number of good computers to start with, and $B$ the number of bad computers. Write $(GG)$ for the number of pairs produced in step 1 with both computers good; write $(GB)$ for the number of pairs with the testing computer good and the tested computer bad; write $(BG)$ and $(BB)$ similarly. Let $G_2$ and $B_2$ denote respectively the number of good and bad computers that are kept after step 2, and let $G_3$ and $B_3$ denote the corresponding number for after step 3. We know that $G &gt; B$ and want to show that $G_3 &gt; B_3$.</p> <p>In step 2, every $(GG)$ pair answers "good", so you keep the second good computer; this shows $G_2 \ge (GG)$. On the other hand, the only way for a bad computer to survive step 2 is if a bad computer vouches for it; this shows $B_2 \le (BB)$. Now consider three cases.</p> <p>Case 1: There were an even number of computers to start with. 
Then $G = 2(GG) + (GB) + (BG)$ and $B = 2(BB) + (GB) + (BG)$. By hypothesis, $G &gt; B$, so these identities imply $(GG) &gt; (BB)$, which gives $G_2 &gt; B_2$. Since there is no unpaired computer, $G_3 = G_2$ and $B_3 = B_2$, so we're done.</p> <p>Case 2: There were an odd number of computers to start with, and the unpaired computer was good. Then $G = 2(GG) + (GB) + (BG) + 1$ and $B = 2(BB) + (GB) + (BG)$, which since $G &gt; B$ implies $(GG) \ge (BB)$, so $G_2 \ge B_2$. If $G_2 &gt; B_2$ then $G_3 &gt; B_3$ whether or not we keep the unpaired good computer. If $G_2 = B_2$ then the total number of computers kept after step 2 is even, so in step 3 we keep the unpaired good computer, obtaining $G_3 = G_2+1 &gt; B_2 = B_3$.</p> <p>Case 3: There were an odd number of computers to start with, and the unpaired computer was bad. Then $G = 2(GG) + (GB) + (BG)$ and $B = 2(BB) + (GB) + (BG) + 1$, which since $G &gt; B$ implies $(GG) &gt; (BB)$. As in case 1 this yields $G_2 &gt; B_2$. If $G_2 = B_2 + 1$ then the total number of computers kept after step 2 is odd, so in step 3 we discard the unpaired bad computer, obtaining $G_3 = G_2 &gt; B_2 = B_3$. If $G_2 &gt; B_2 + 1$ then $G_3 &gt; B_3$ whether or not we keep the unpaired bad computer.</p> <p>So in all cases, at the end of step 3 we have kept more good computers than bad computers, as desired.</p> <hr /> <p>As Alon said in comments, in the worst case (when all tests say "good") this method uses $Q(n) = n - h(n)$ questions, where $h(n)$ is the number of ones in the binary representation of $n$. (This can be easily proved by (strong) induction.) 
Some specific values:</p> <ul> <li>$Q(100) = 97$, just as in the question.</li> <li>$Q(2^m) = 2^m-1$, which is slightly worse than the estimate in the question, which in this case is $2^m-3$ (for $m\ge 2$, I presume).</li> <li>In particular, $Q(2) = 1$, which is suboptimal, since the hypothesis that more than half the computers are good here tells us that all of them are good, so there is no need to ask questions. But I guess adding this special case to the algorithm saves us at most one question.</li> <li>$Q(2^m-1) = 2^m - m$, which is asymptotically better than the estimate in the question (it's $n-\log n$ instead of $n-c$).</li> <li>In particular, $Q(7) = 4$, slightly improving the estimate in the question.</li> </ul> <hr /> <p>Here's another way to look at this puzzle. Consider the following game. There is a (directed) graph with $2n$ vertices, labelled $x_1,\dotsc,x_n$ and $\neg x_1,\dotsc,\neg x_n$. At the beginning of the game, the graph has no edges. Every round, player A chooses two numbers $i$ and $j$; player B draws either the directed edge $x_i\to x_j$ or the directed edge $x_i\to\neg x_j$. At the end of the round, player A may claim victory by choosing some number $k$ and drawing $x_k\to\neg x_k$; player A then wins if, interpreting the graph as specifying a boolean formula (where each directed edge represents an implication, which is a 2-clause), it is not possible to satisfy that boolean formula with more than half of the variables set to "true". 
Player A's goal is to win as quickly as possible; player B's goal is to delay player A's victory as long as possible.</p> <p>In short, the players jointly construct an instance of MAX 2-SAT, with player A trying to make it unsatisfiable and player B trying to keep it satisfiable.</p> <p>(See, when player A chooses $k$, adds $x_k\to\neg x_k$, and shows that the resulting MAX 2-SAT instance is unsatisfiable, that amounts to a proof (by contraposition) that, if more than half the computers are good, the answers so far prove that computer $k$ is good.)</p> <p>I have no insights from this way of thinking about the puzzle, except that it makes me suspect the problem is hard.</p>
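<p>Returning to the pairing method above: both the majority invariant and the worst-case question count are easy to check empirically. Below is a small Python sketch (my own illustration, not from any of the cited papers). Good computers answer truthfully, bad ones arbitrarily; the survivor is always good whenever a strict majority is good, and in the all-good worst case the question count is exactly $n - h(n)$.</p>

```python
import random

def find_good_computer(is_good, rng):
    """One full run of the pairing procedure described above.

    is_good[i] is True when computer i is good.  Good computers answer
    truthfully; bad computers answer arbitrarily (here: at random).
    Returns (is the survivor good?, number of questions asked).
    """
    pool = list(range(len(is_good)))
    questions = 0
    while len(pool) > 1:
        rng.shuffle(pool)                  # step 1: pair up arbitrarily
        kept = []
        for i in range(0, len(pool) - 1, 2):
            tester, tested = pool[i], pool[i + 1]
            questions += 1
            says_good = is_good[tested] if is_good[tester] else rng.random() < 0.5
            if says_good:                  # step 2: keep only the vouched-for computer
                kept.append(tested)
        if len(pool) % 2 == 1 and len(kept) % 2 == 0:
            kept.append(pool[-1])          # step 3: keep an odd number in total
        pool = kept
    return is_good[pool[0]], questions

# All-good worst case: every answer is "good"; question count is n - h(n)
for n in [1, 2, 3, 5, 7, 100, 128]:
    survivor_good, q = find_good_computer([True] * n, random.Random(0))
    assert survivor_good and q == n - bin(n).count("1")

# Random mixes with a strict majority of good computers: survivor is always good
for trial in range(300):
    rng = random.Random(trial)
    n = rng.randrange(3, 40)
    goods = rng.randrange(n // 2 + 1, n + 1)
    computers = [True] * goods + [False] * (n - goods)
    rng.shuffle(computers)
    assert find_good_computer(computers, rng)[0]
```

In particular this reproduces $Q(100) = 97$ and $Q(2^m) = 2^m - 1$ from the list above.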
linear-algebra
<p>I happened to stumble upon the following matrix: $$ A = \begin{bmatrix} a &amp; 1 \\ 0 &amp; a \end{bmatrix} $$</p> <p>And after trying a bunch of different examples, I noticed the following remarkable pattern. If $P$ is a polynomial, then: $$ P(A)=\begin{bmatrix} P(a) &amp; P'(a) \\ 0 &amp; P(a) \end{bmatrix}$$</p> <p>Where $P'(a)$ is the derivative evaluated at $a$.</p> <p>Furthermore, I tried extending this to other matrix functions, for example the matrix exponential, and Wolfram Alpha tells me: $$ \exp(A)=\begin{bmatrix} e^a &amp; e^a \\ 0 &amp; e^a \end{bmatrix}$$ and this does in fact follow the pattern since the derivative of $e^x$ is itself!</p> <p>Furthermore, I decided to look at the function $P(x)=\frac{1}{x}$. If we interpret the reciprocal of a matrix to be its inverse, then we get: $$ P(A)=\begin{bmatrix} \frac{1}{a} &amp; -\frac{1}{a^2} \\ 0 &amp; \frac{1}{a} \end{bmatrix}$$ And since $P'(a)=-\frac{1}{a^2}$, the pattern still holds!</p> <p>After trying a couple more examples, it seems that this pattern holds whenever $P$ is any rational function.</p> <p>I have two questions:</p> <ol> <li><p>Why is this happening?</p></li> <li><p>Are there any other known matrix functions (which can also be applied to real numbers) for which this property holds?</p></li> </ol>
<p>If $$ A = \begin{bmatrix} a &amp; 1 \\ 0 &amp; a \end{bmatrix} $$ then by induction you can prove that $$ A^n = \begin{bmatrix} a^n &amp; n a^{n-1} \\ 0 &amp; a^n \end{bmatrix} \tag 1 $$ for $n \ge 1 $. If $f$ can be developed into a power series $$ f(z) = \sum_{n=0}^\infty c_n z^n $$ then $$ f'(z) = \sum_{n=1}^\infty n c_n z^{n-1} $$ and it follows that $$ f(A) = \sum_{n=0}^\infty c_n A^n = c_0 I + \sum_{n=1}^\infty c_n \begin{bmatrix} a^n &amp; n a^{n-1} \\ 0 &amp; a^n \end{bmatrix} = \begin{bmatrix} f(a) &amp; f'(a) \\ 0 &amp; f(a) \end{bmatrix} \tag 2 $$ From $(1)$ and $$ A^{-1} = \begin{bmatrix} a^{-1} &amp; -a^{-2} \\ 0 &amp; a^{-1} \end{bmatrix} $$ one gets $$ A^{-n} = \begin{bmatrix} a^{-1} &amp; -a^{-2} \\ 0 &amp; a^{-1} \end{bmatrix}^n = (-a^{-2})^{n} \begin{bmatrix} -a &amp; 1 \\ 0 &amp; -a \end{bmatrix}^n \\ = (-1)^n a^{-2n} \begin{bmatrix} (-a)^n &amp; n (-a)^{n-1} \\ 0 &amp; (-a)^n \end{bmatrix} = \begin{bmatrix} a^{-n} &amp; -n a^{-n-1} \\ 0 &amp; a^{-n} \end{bmatrix} $$ which means that $(1)$ holds for negative exponents as well. As a consequence, $(2)$ can be generalized to functions admitting a Laurent series representation: $$ f(z) = \sum_{n=-\infty}^\infty c_n z^n $$</p>
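<p>The identities $(1)$ and $(2)$ are easy to confirm numerically. Here is a small NumPy sketch (my own illustration, not part of the argument above) checking the polynomial, inverse, and exponential cases at $a = 1.7$:</p>

```python
import numpy as np

a = 1.7
A = np.array([[a, 1.0], [0.0, a]])

# Polynomial case: P(x) = 2x^3 - 3x^2 + 0.5x + 4, evaluated at A via Horner's scheme
P = np.poly1d([2.0, -3.0, 0.5, 4.0])
PA = np.zeros((2, 2))
for c in P.coeffs:
    PA = PA @ A + c * np.eye(2)
assert np.allclose(PA, [[P(a), P.deriv()(a)], [0.0, P(a)]])

# Inverse: f(x) = 1/x, so f'(a) = -1/a^2
assert np.allclose(np.linalg.inv(A), [[1 / a, -1 / a**2], [0.0, 1 / a]])

# Exponential via the truncated power series, sum of A^n / n!
E, term = np.zeros((2, 2)), np.eye(2)
for n in range(1, 40):
    E += term
    term = term @ A / n
assert np.allclose(E, np.exp(a) * np.array([[1.0, 1.0], [0.0, 1.0]]))
```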
<p>It's a general statement if <span class="math-container">$J_{k}$</span> is a Jordan block and <span class="math-container">$f$</span> a function matrix then <span class="math-container">\begin{equation} f(J)=\left(\begin{array}{ccccc} f(\lambda_{0}) &amp; \frac{f'(\lambda_{0})}{1!} &amp; \frac{f''(\lambda_{0})}{2!} &amp; \ldots &amp; \frac{f^{(n-1)}(\lambda_{0})}{(n-1)!}\\ 0 &amp; f(\lambda_{0}) &amp; \frac{f'(\lambda_{0})}{1!} &amp; &amp; \vdots\\ 0 &amp; 0 &amp; f(\lambda_{0}) &amp; \ddots &amp; \frac{f''(\lambda_{0})}{2!}\\ \vdots &amp; \vdots &amp; \vdots &amp; \ddots &amp; \frac{f'(\lambda_{0})}{1!}\\ 0 &amp; 0 &amp; 0 &amp; \ldots &amp; f(\lambda_{0}) \end{array}\right) \end{equation}</span> where <span class="math-container">\begin{equation} J=\left(\begin{array}{ccccc} \lambda_{0} &amp; 1 &amp; 0 &amp; 0\\ 0 &amp; \lambda_{0} &amp; 1&amp; 0\\ 0 &amp; 0 &amp; \ddots &amp; 1\\ 0 &amp; 0 &amp; 0 &amp; \lambda_{0} \end{array}\right) \end{equation}</span> This statement can be demonstrated in various ways (none of them short), but it's a quite known formula. I think you can find it in various books, like in Horn and Johnson's <em>Matrix Analysis</em>.</p>
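<p>As a quick numerical illustration of the general formula (my own sketch, not from the book): for $f=\exp$ one has $f^{(j)}(\lambda_0)/j! = e^{\lambda_0}/j!$, so $\exp(J)$ of a $4\times4$ Jordan block, computed from the power series, should match that upper-triangular pattern entry by entry.</p>

```python
import math
import numpy as np

lam, n = 0.8, 4
J = lam * np.eye(n) + np.diag(np.ones(n - 1), 1)   # n x n Jordan block

# exp(J) summed from its power series, truncated after 60 terms
E, term = np.zeros((n, n)), np.eye(n)
for k in range(60):
    E += term
    term = term @ J / (k + 1)

# Claimed form: entry (i, j) is f^{(j-i)}(lam) / (j-i)!  with f = exp
expected = np.array([[math.exp(lam) / math.factorial(j - i) if j >= i else 0.0
                      for j in range(n)] for i in range(n)])
assert np.allclose(E, expected)
```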
geometry
<p>I'm an 8th grader. After browsing aops.com, a math contest website, I've seen a lot of problems solved by Cauchy Schwarz. I'm only in geometry (have not started learning trigonometry yet). So can anyone explain Cauchy Schwarz in layman's terms, as if you are explaining it to someone who has just started geo in 8th grade?</p>
<p>In geometry terms that you can understand, the Cauchy-Schwarz inequality says that:</p> <blockquote> <p>Among all the parallelograms with sides <em>a</em> and <em>b</em>, the rectangle is the one with the largest area.</p> </blockquote> <p>Usually you can use this inequality when you are looking for an upper (or lower) bound of an expression.</p> <p>I wanted to give you an example, but my geometry studies are too far back for me to remember an easy demonstration. Maybe you can ask somebody to give you a compass-and-straightedge demonstration of the equivalence between the Cauchy-Schwarz and triangle inequalities.</p>
<p>Since you said you browse AoPS, maybe you're interested in math contests. In math contests, the following forms of Cauchy-Schwarz (C-S) inequality are used:</p> <p>For all $a_i,b_i\in\Bbb R$:</p> <p>$$\left(a_1^2+a_2^2+\cdots+a_n^2\right)\left(b_1^2+b_2^2+\cdots+b_n^2\right)\ge (a_1b_1+a_2b_2+\cdots+a_nb_n)^2$$</p> <p>Equality holds if and only if either $a_1=b_1k, a_2=b_2k,\ldots, a_n=b_nk$ for some $k\in\Bbb R$ or $a_1=a_2=\cdots=a_n=0$ or $b_1=b_2=\cdots=b_n=0$.</p> <p>For all $a_i,b_i\in\Bbb R^+$:</p> <p>$$\sqrt{a_1+a_2+\cdots+a_n}\sqrt{b_1+b_2+\cdots+b_n}\ge \sqrt{a_1b_1}+\sqrt{a_2b_2}+\cdots+\sqrt{a_nb_n}$$</p> <p>Equality holds if and only if $\frac{a_1}{b_1}=\frac{a_2}{b_2}=\cdots=\frac{a_n}{b_n}$.</p> <p>The following is also called Titu's lemma, or Engel's form of C-S inequality (to prove it, multiply both sides by $b_1+b_2+\cdots+b_n$ and apply the first form of C-S inequality):</p> <p>For $b_i\in\Bbb R^+, a_i\in\Bbb R$:</p> <p>$$\frac{a_1^2}{b_1}+\frac{a_2^2}{b_2}+\cdots+\frac{a_n^2}{b_n}\ge \frac{(a_1+a_2+\cdots+a_n)^2}{b_1+b_2+\cdots+b_n}$$</p> <p>with equality if and only if $\frac{a_1}{b_1}=\frac{a_2}{b_2}=\cdots=\frac{a_n}{b_n}$.</p> <p>The more general is <a href="https://hcmop.wordpress.com/2012/04/19/holders-inequality/" rel="nofollow">Hölder's inequality</a>:</p> <p>For all $a_i,b_i,c_i\in\Bbb R$:</p> <p>$$\left(a_1^3+a_2^3+\cdots+a_n^3\right)\left(b_1^3+b_2^3+\cdots+b_n^3\right)\left(c_1^3+c_2^3+\cdots+c_n^3\right)$$</p> <p>$$\ge (a_1b_1c_1+a_2b_2c_2+\cdots+a_nb_nc_n)^3$$</p> <p>The same holds for all $a_i,b_i,c_i,d_i\in\Bbb R$, etc. I.e., any number of arrays $a_i,b_i,c_i,d_i,\ldots$. In general, all these inequalities are called Hölder's inequalities.</p>
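<p>All of these can be sanity-checked on random data with a few lines of Python (just an illustration; contest problems of course need proofs, not tests):</p>

```python
import random

random.seed(1)
for _ in range(1000):
    n = random.randrange(1, 6)
    a = [random.uniform(-10, 10) for _ in range(n)]
    b = [random.uniform(-10, 10) for _ in range(n)]

    # First form: (sum a_i^2)(sum b_i^2) >= (sum a_i b_i)^2
    assert sum(x * x for x in a) * sum(y * y for y in b) \
           >= sum(x * y for x, y in zip(a, b)) ** 2 - 1e-9

    # Titu's lemma: sum a_i^2 / b_i >= (sum a_i)^2 / sum b_i, with b_i > 0
    bp = [random.uniform(0.1, 10) for _ in range(n)]
    assert sum(x * x / y for x, y in zip(a, bp)) \
           >= sum(a) ** 2 / sum(bp) - 1e-9
```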
probability
<p>How do you compute the distribution of the minimum of two independent random variables in the general case?</p> <p>In the particular case where the two variables are uniform with different supports, how should one proceed?</p> <p>EDIT: specified that they are independent and that the uniform variables do not necessarily have the same support range.</p>
<p>$F_{X,Y}(x,y)$ be the joint cumulative distribution function. Then, for $Z= \min(X,Y)$ $$ \begin{eqnarray} 1-F_Z(z) &amp;=&amp; \mathbb{P}\left(\min(X,Y) &gt; z\right) = \mathbb{P}\left(X &gt; z, Y&gt;z\right) \\ &amp;=&amp; 1 - \mathbb{P}\left(X\leqslant z\right) - \mathbb{P}\left(Y\leqslant z\right) + \mathbb{P}\left(X\leqslant z, Y\leqslant z\right) \end{eqnarray} $$ where the inclusion exclusion principle was applied to get the last equality. Thus $$ F_Z(z) = \mathbb{P}\left(X\leqslant z\right) + \mathbb{P}\left(Y\leqslant z\right) - \mathbb{P}\left(X\leqslant z, Y\leqslant z\right) = F_X(z) + F_Y(z) - F_{X,Y}(z,z) $$ Notice that we have not used the information about the correlation of $X$ and $Y$.</p> <p>Let's consider an example. Let $F_{X,Y}(x,y) = F_X(x) F_Y(y) \left(1+ \alpha (1-F_X(x)) (1-F_Y(y))\right)$, known as <a href="http://rss.acs.unt.edu/Rdoc/library/VGAM/html/fgm.html" rel="noreferrer">Farlie-Gumbel-Morgenstern</a> <a href="http://en.wikipedia.org/wiki/Copula_%28probability_theory%29" rel="noreferrer">copula</a>, and let $F_X(x)$ and $F_Y(y)$ be cdfs of uniform random variables on the unit interval. Then, for $0&lt;z&lt;1$ $$ F_Z(z) = 2 z - z^2 \left(1 + \alpha (1-z)^2 \right) $$ leading to $$ \mathbb{E}\left(Z\right) = \int_0^1 z F_Z^\prime(z) \mathrm{d}z = \frac{1}{3} \left(1 + \frac{\alpha}{10} \right) $$</p>
<p>Let $U = \min(X,Y)$, where $X$ and $Y$ are independent.</p> <p>$Pr(U &gt; z) = Pr((X&gt;z) \cap (Y&gt;z))$.</p> <p>By independence, $Pr(U&gt;z) = Pr(X&gt;z)\,Pr(Y&gt;z)$, so</p> <p>$Pr(U&gt;z) = (1 - F_X(z))(1 - F_Y(z))$.</p> <p>$F_U(z) = 1 - (1 - F_X(z))(1 - F_Y(z))$.</p> <p>Thus,</p> <p>$F_U(z) = F_X(z) + F_Y(z) - F_X(z)F_Y(z)$.</p>
geometry
<p><a href="https://i.sstatic.net/KkcQt.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KkcQt.jpg" alt="diagram" /></a></p> <p>Find the optimal shape of a coffee cup for heat retention. Assuming</p> <ol> <li>A constant coffee flow rate out of the cup.</li> <li>All surfaces radiate heat equally, i.e. liquid surface, bottom of cup and sides of cup.</li> <li>The coffee is drunk quickly enough that the temperature differential between the coffee and the environment can be ignored/assumed constant.</li> </ol> <p>So we just need to minimise the average surface area as the liquid drains</p> <p>I have worked out the following 2 alternative equations for the average surface area over the lifetime of the liquid in the cup (see below for derivations):</p> <p><span class="math-container">$$ S_{ave} =\pi r_0^2+ \frac{\pi^2}{V}\int_{0}^{h}{{r(s)}^4ds}+\frac{2\pi^2}{V}\int_{0}^{h}{\int_{0}^{s}{r\sqrt{1+\left(\frac{dr}{dz}\right)^2}\ dz\ }{r(s)}^2ds\ } \tag{1}$$</span></p> <p><span class="math-container">$$S_{ave}=\pi r_0^2+\frac{\pi^2}{V}\int_{0}^{h}r\left(s\right)^4ds+\frac{2\pi^2}{V}\int_{0}^{h}r(s){\underbrace{\int_{s}^{h}{{r\left(z\right)}^2dz\ }}_{\text{Volume Drunk}}}\sqrt{1+\left(\frac{dr}{ds}\right)^2}ds \tag{2}$$</span></p> <p>If the volume of the cup is constant</p> <p><span class="math-container">$$ V=\pi\int_{0}^{h}{{r(z)}^2dz\ }$$</span></p> <p>Can the function, <span class="math-container">$r(z)$</span>, be found that minimises the average surface area <span class="math-container">$S_{ave}$</span>?</p> <p>If r is expressed as a parametric equation in the form <span class="math-container">$r=f(t), z=g(t)$</span> and <span class="math-container">$f,g$</span> are polynomials then a genetic search found the best function of parametric polynomials to be: <span class="math-container">$r\left(z\right)=\sqrt{\frac{3}{2}}z^\frac{1}{2}-\frac{\sqrt6}{9}z^\frac{3}{2}, f\left(t\right)=\sqrt{\frac{3}{2}}t-\sqrt{\frac{3}{2}}t^3, 
g\left(t\right)=\frac{9}{2}t^2$</span></p> <p>This parametric shape has a maximum radius of 1, height of 4.5, starting volume of <span class="math-container">$\frac{3^4}{2^5}\pi$</span> and is shown here: <a href="https://i.sstatic.net/MMCej.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MMCej.png" alt="Best Parametric Polynomials" /></a></p> <p>I can't prove that there is (or is not) a better <span class="math-container">$r(z)$</span> but...</p> <p>the average surface area of this surface turns out to be <span class="math-container">$12.723452\ r_{max}^2$</span> or <span class="math-container">$4.05\pi r_{max}^2$</span>. I suspect that the optimal surface will have the same surface area as a sphere, i.e. <span class="math-container">$4\pi r_{max}^2$</span> <span class="math-container">$(12.5664)$</span></p> <p><strong>Conjecture: The optimally shaped coffee cup has the same average surface area as a sphere of the same maximum radius.</strong> Shown to be false by <a href="https://math.stackexchange.com/a/4704272/1050333">this answer</a></p> <p><strong>Derivation of Surface Area Formula:</strong></p> <p>Surface area when surface of liquid is at level s is the sum of the areas of the top disc, bottom disc and the sides.</p> <p><span class="math-container">$S(s)=\pi r_0^2+\pi r_s^2+2\pi\int_{0}^{s}{r(z)\ dl}$</span></p> <p><span class="math-container">$S(s)=\pi r_0^2+\pi r_s^2+2\pi\int_{0}^{s}{r(z)\sqrt{1+\left(\frac{dr}{dz}\right)^2}\ dz\ }$</span></p> <p>The average surface area will be the sum of all the $S(s)$ values times the time spent at each surface area.</p> <p><span class="math-container">$S_{ave}=\frac{1}{T}\int_{t_0}^{t_h}{S(s)dt\ }$</span></p> <p>In order to have the drain rate constant we need to set the flow rate Q to be constant i.e. 
the rate of change of volume is constant and <span class="math-container">$Q=dV/dt =V/T$</span></p> <p>Time spent at a particular liquid level <span class="math-container">$dt\ =\frac{T}{V}dV$</span> and <span class="math-container">$ dV={\pi r}^2ds$</span></p> <p><span class="math-container">$dt=\frac{T\pi r^2}{V}ds$</span></p> <p><span class="math-container">$S_{ave}=\int_{s=0}^{s=h}{S(s)\frac{T\pi{r(s)}^2}{V}ds\ }$</span> <span class="math-container">$S_{ave}=\frac{\pi}{V}\int_{s=0}^{s=h}{(r_0^2+r(s)^2+2\int_{z=0}^{z=s}{r(z)\sqrt{1+(\frac{dr(z)}{dz})^2}\ dz\ })\pi{r(s)}^2ds\ }$</span></p> <p><span class="math-container">$S_{ave}=\frac{\pi}{V}r_0^2\int_{0}^{h}{\pi{r(s)}^2ds}+\frac{\pi}{V}\int_{s=0}^{s=h}{\left(r\left(s\right)^2+2\int_{z=0}^{z=s}{r(z)\sqrt{1+(\frac{dr(z)}{dz})^2}\ dz\ }\right)\pi{r(s)}^2ds\ }$</span></p> <p><span class="math-container">$S_{ave}=\pi\ r_0^2+\frac{\pi^2}{V}\int_{s=0}^{s=h}{\left(r\left(s\right)^2+2\int_{z=0}^{z=s}{r(z)\sqrt{1+\left(\frac{dr(z)}{dz}\right)^2}\ dz\ }\right){r(s)}^2ds\ }$</span></p> <p><strong>Alternative Formula Derivation:</strong></p> <p>Surface area of highlighted ribbon in the diagram is:</p> <p><span class="math-container">$S_{ribbon}=2\pi rdl$</span></p> <p>And the contribution towards the average surface area lasts for the ratio of volume of the liquid above the current level to the total volume.</p> <p><span class="math-container">$$S_{sides}=2\pi\frac{\pi}{V}{\underbrace{\int_{s}^{h}{{r\left(z\right)}^2dz\ }}_{\text{Volume Drunk}}}rdl=\frac{2\pi^2}{V}{\underbrace{\int_{s}^{h}{{r\left(z\right)}^2dz\ }}_{\text{Volume Drunk}}}r\left(s\right)\sqrt{1+\left(\frac{dr}{ds}\right)^2}ds$$</span></p> <p>Integrate the contribution of all such sections.</p> <p><span class="math-container">$$S_{sides}=\frac{2\pi^2}{V}\int_{0}^{h}{r\left(s\right){\underbrace{\int_{s}^{h}{{r\left(z\right)}^2dz\ }}_{\text{Volume Drunk}}}\sqrt{1+\left(\frac{dr}{ds}\right)^2}ds}$$</span></p> <p>Contribution of top surfaces to average surface area 
is the area of the top disc weighted by the proportion of the total volume occupied by the slice of thickness $dz$ at that level:</p> <p><span class="math-container">$$S_{tops}=\frac{1}{V}\int_{0}^{h}{\pi{r(s)}^2{\pi r(s)}^2}ds$$</span></p> <p>The contribution of the bottom surface is the constant <span class="math-container">$\pi r_0^2$</span>, so adding all three together gives:</p> <p><span class="math-container">$$S_{ave}=\pi r_0^2+\frac{\pi^2}{V}\int_{0}^{h}r\left(s\right)^4ds+\frac{2\pi^2}{V}\int_{0}^{h}r(s){\underbrace{\int_{s}^{h}{{r\left(z\right)}^2dz\ }}_{\text{Volume Drunk}}}\sqrt{1+\left(\frac{dr}{ds}\right)^2}ds$$</span></p> <hr />
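<p>A quick sanity check of formula $(2)$: for a plain cylinder of radius $R$ and height $h$ it should give $S_{ave}=2\pi R^2+\pi R h$ (the two discs, plus a wall whose exposure averages to half its total area), and a direct numerical evaluation agrees. The helper below is my own sketch, for checking only:</p>

```python
import numpy as np

def average_area(r, dr, h, n=200_001):
    """Numerically evaluate formula (2) for a profile r(s) with derivative dr(s)."""
    s, ds = np.linspace(0.0, h, n, retstep=True)
    g = r(s) ** 2
    # P[i] = integral of r^2 from 0 to s[i]; the "volume drunk" at level s
    # is pi * (P[-1] - P)
    P = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * ds)))
    V = np.pi * P[-1]
    trap = lambda y: ds * (y.sum() - 0.5 * (y[0] + y[-1]))   # trapezoid rule
    tops = np.pi ** 2 / V * trap(g ** 2)
    sides = 2 * np.pi ** 2 / V * trap((P[-1] - P) * r(s) * np.sqrt(1 + dr(s) ** 2))
    return np.pi * r(s[0]) ** 2 + tops + sides               # bottom + tops + sides

R, h = 1.0, 2.0
S = average_area(lambda s: np.full_like(s, R), lambda s: np.zeros_like(s), h)
assert abs(S - (2 * np.pi * R ** 2 + np.pi * R * h)) < 1e-6
```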
<p>This isn't really a solution - just rewriting it in terms of a fourth-order differential equation. I've also cross-posted this answer on MathOverflow.</p> <hr /> <p>Let <span class="math-container">$$V(t):=\pi\int_0^tr(s)^2ds$$</span></p> <p>We can then rewrite <span class="math-container">$r(s)$</span> as <span class="math-container">$$r(s)=\sqrt{\frac{V'(s)}{\pi}}$$</span></p> <p>If we rewrite <span class="math-container">$S_{ave}$</span> using just <span class="math-container">$V$</span>, we get <span class="math-container">$$S_{ave}=\pi\sqrt{\frac{V'\left(0\right)}{\pi}}+\frac{\pi^{2}}{V\left(h\right)}\int_{0}^{h}\left(\left(\frac{V'\left(s\right)}{\pi}\right)^{2}+2\sqrt{\frac{V'\left(s\right)}{\pi}}\int_{s}^{h}\frac{V'\left(z\right)}{\pi}dz\sqrt{1+\frac{1}{4\pi}\cdot\frac{V''\left(s\right)^{2}}{V'\left(s\right)}}\right)ds$$</span></p> <p>The inner integral simplifies to <span class="math-container">$\frac{1}{\pi}(V(h)-V(s))$</span>, so this simplifies to (getting rid of some of the <span class="math-container">$\pi$</span> terms as well) <span class="math-container">$$\sqrt{\pi V'(0)}+\frac{1}{V(h)}\int_{0}^{h}\left(V'(s)^2+2\cdot(V(h)-V(s))\sqrt{\pi V'(s)+\frac{V''(s)^2}{4}}\right)ds$$</span></p> <p>We want to minimize this value assuming a fixed <span class="math-container">$V(h)$</span>. Assume we have a fixed <span class="math-container">$V'(0)$</span> and <span class="math-container">$h$</span> as well. We then want to minimize <span class="math-container">$\frac{\sqrt{\pi V'(0)}}{h}+\frac{1}{V(h)}\mathcal{L}(s,V, V', V'')$</span>, where <span class="math-container">$\mathcal{L}(s,V, V', V'')$</span> is given by <span class="math-container">$$(V')^2+2\cdot(V_h-V)\sqrt{\pi V'+\frac{(V'')^2}{4}}$$</span></p> <p>and <span class="math-container">$V_h=V(h)$</span>. 
We can then use the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Lagrange_equation#Single_function_of_single_variable_with_higher_derivatives" rel="nofollow noreferrer">Euler-Lagrange equation</a> to get that the stationary points of the average surface area (with respect to <span class="math-container">$V(s)$</span>) would be given by <span class="math-container">$$\frac{\partial\mathcal{L}}{\partial V}-\frac{d}{ds}\left(\frac{\partial \mathcal{L}}{\partial V'}\right)+\frac{d^2}{ds^2}\left(\frac{\partial \mathcal{L}}{\partial V''}\right)=0$$</span></p> <p>This ends up being a fourth-order differential equation with a long form (I started writing it out before realizing that the last term would make it very long).</p> <hr /> <p>Edit: Using the Beltrami identity, which TheSimpliFire mentioned in a comment, and <a href="https://math.stackexchange.com/a/1773488/595055">this answer</a>, we can write $$\mathcal{L}-V'\frac{\partial\mathcal{L}}{\partial V'}+V'\frac{d}{ds}\frac{\partial \mathcal{L}}{\partial V''}-V''\frac{\partial \mathcal{L}}{\partial V''}=C$$</p> <p>Plugging in <span class="math-container">$\mathcal{L}$</span> and simplifying, we get <span class="math-container">$$-V'(s)^{2}\left(1+\frac{V''(s)\left(4\pi V'(s)+V''(s)^{2}\right)+4\pi\left(V(s)-V_{h}\right)\left(2\pi+V^{(3)}(s)\right)}{\left(4\pi V'(s)+V''(s)^{2}\right)^{\frac{3}{2}}}\right) = C$$</span>
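<p>As a numerical sanity check (my own, not part of the derivation above): plugging in the volume profile $V(s)=\pi\left(\frac{3s^2}{4}-\frac{2s^3}{9}+\frac{s^4}{54}\right)$ of the parametric shape from the question makes the left-hand side vanish identically, suggesting that shape satisfies this Beltrami form with $C=0$:</p>

```python
import numpy as np

s = np.linspace(0.05, 4.45, 1000)         # interior of (0, 4.5)
pi = np.pi

# V and its derivatives for the candidate shape from the question
V    = pi * (0.75 * s**2 - (2/9) * s**3 + s**4 / 54)
Vp   = pi * (1.5 * s - (2/3) * s**2 + (2/27) * s**3)
Vpp  = pi * (1.5 - (4/3) * s + (2/9) * s**2)
Vppp = pi * (-4/3 + (4/9) * s)
Vh   = pi * 81 / 32                        # V at s = h = 4.5

# Left-hand side of the simplified Beltrami identity
lhs = -Vp**2 * (1 + (Vpp * (4*pi*Vp + Vpp**2) + 4*pi*(V - Vh) * (2*pi + Vppp))
                / (4*pi*Vp + Vpp**2) ** 1.5)
assert np.max(np.abs(lhs)) < 1e-6          # constant C = 0 along the whole profile
```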
<p><strong>Solution:</strong></p> <p>For a coffee cup of volume <span class="math-container">$\frac{81\pi}{32}$</span> the shape of maximum heat retention is the surface of revolution of <span class="math-container">$$r(z)=\sqrt\frac{3}{2}z^\frac{1}{2}-\frac{\sqrt{6}}{9}z^{\frac{3}{2}}\tag{1}$$</span>.</p> <p>The height of this cup is <span class="math-container">$4.5$</span> and the max radius is <span class="math-container">$1$</span>. For other starting volumes, V, the radius and height scale by <span class="math-container">$(\frac{32V}{81\pi})^\frac{1}{3}.$</span></p> <p><strong>Proof:</strong></p> <p>In the question I show that the average surface area for a coffee cup is:</p> <p><span class="math-container">$$S_{ave}=\pi r_0^2+\frac{\pi^2}{V}\int_{0}^{h}r\left(s\right)^4ds+\frac{2\pi^2}{V}\int_{0}^{h}r(s){\underbrace{\int_{s}^{h}{{r\left(z\right)}^2dz\ }}_{\text{Volume Drunk}}}\sqrt{1+\left(\frac{dr}{ds}\right)^2}ds \tag{2}$$</span> and ask what function of <span class="math-container">$r$</span> minimises this. 
Recall that I give <span class="math-container">$(1)$</span> as the best shape I have found using a genetic search algorithm.</p> <p>TheSimpliFire showed in <a href="https://math.stackexchange.com/a/4703322/1050333">this answer</a> and Varun Vejalla showed in <a href="https://math.stackexchange.com/a/4702324/1050333">this answer</a> (with a scaling of $\pi$ difference) that the optimal shape must be a solution to the third-order ODE: <span class="math-container">$$\frac{4(P(h)-P)(2+P''')-P''(4P'+P''^2)}{(4P'+P''^2)^{3/2}}=1+\frac C{P'^2}\tag{3}$$</span></p> <p>where <span class="math-container">$P'=r^2(z)$</span></p> <p>Let's plug <span class="math-container">$(1)$</span> into <span class="math-container">$(3)$</span> and see if it is a solution.</p> <p>Using <span class="math-container">$P=\frac{3\ z^2}{4}-\frac{2\ z^3}{9}+\frac{z^4}{54}+K$</span> (where <span class="math-container">$K$</span> is the constant of integration).</p> <p>The numerator of the LHS evaluates to: <span class="math-container">$$\frac{81}{4}-8\left(\frac{3\ z^2}{4}-\frac{2\ z^3}{9}+\frac{z^4}{54}\right)+4\left(\frac{81}{32}-\left(\frac{3\ z^2}{4}-\frac{2\ z^3}{9}+\frac{z^4}{54}\right)\right)\left(\frac{4\ z}{9}-\frac{4}{3}\right)-4\left(\frac{3\ z}{2}\ -\frac{2\ z^2}{3}\ +\frac{2\ z^3}{27}\right)\left(\frac{3}{2}-\frac{4\ z}{3}+\frac{2\ z^2}{9}\right)-\left(\frac{3}{2}-\frac{4\ z}{3}+\frac{2\ z^2}{9}\right)^3$$</span></p> <p><span class="math-container">$$=\frac{27}{8}+\frac{9z}{2}+\frac{z^2}{2}-\frac{28z^3}{27}-\frac{2z^4}{27}+\frac{8z^5}{81}-\frac{8z^6}{729}$$</span></p> <p><span class="math-container">$$=-\frac{(-27-12z+4z^2)^3}{5832}$$</span></p> <p>The denominator is:</p> <p><span class="math-container">$$\left(4\left(\frac{3\ z}{2}\ -\frac{2\ z^2}{3}\ +\frac{2\ z^3}{27}\right)+\left(\frac{3}{2}-\frac{4\ z}{3}+\frac{2\ z^2}{9}\right)^2\right)^\frac{3}{2}$$</span></p> <p><span class="math-container">$$=\frac{\left(\left(-27\ -\ 12\ z\ +\ 4\ 
z^2\right)^6\right)^\frac{1}{2}}{5832}$$</span></p> <p>Which is always equal to either the numerator or its negative</p> <p><span class="math-container">$$\frac{-(-27-12z+4z^2)^3}{\sqrt{(27 + 12 z - 4 z^2)^6}} = \pm1\tag{4}$$</span> But <span class="math-container">$P'=r^2 \implies r=\pm \sqrt {P'}$</span></p> <p>So we can rewrite the ODE as <span class="math-container">$$\frac{4(P(h)-P)(2+P''')-P''(4P'+P''^2)}{\pm(4P'+P''^2)^{3/2}}=1+\frac C{P'^2}\tag{3a}$$</span></p> <p>squaring both sides of (3a) and (4), we see that <span class="math-container">$(1)$</span> satisfies the ODE with <span class="math-container">$C=0$</span></p> <p>Q.E.D.</p> <p>Well, almost. The function is at a stationary point. Any small change in the coefficients increases the surface area so it’s not a maximum, so it must be a minimum. Now I just need to see if it is also a global minimum.</p>
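<p>A numerical cross-check of the above (my own sketch): evaluating formula $(2)$ from the question for this $r(z)$ reproduces both the volume $\frac{81\pi}{32}$ and the average area $\approx 4.05\pi \approx 12.723452$. Writing $g=r^2$ and using $r\sqrt{1+r'^2}=\sqrt{g+g'^2/4}$ sidesteps the singularity of $r'$ at $z=0$:</p>

```python
import numpy as np

h = 4.5
z, dz = np.linspace(0.0, h, 400_001, retstep=True)

g  = 1.5 * z - (2/3) * z**2 + (2/27) * z**3   # g = r(z)^2 = P'
gp = 1.5 - (4/3) * z + (2/9) * z**2           # g'
P  = 0.75 * z**2 - (2/9) * z**3 + z**4 / 54   # P(z) = integral of g from 0 to z

trap = lambda y: dz * (y.sum() - 0.5 * (y[0] + y[-1]))   # trapezoid rule

V = np.pi * P[-1]
assert abs(V - 81 * np.pi / 32) < 1e-9        # the claimed starting volume

# r * sqrt(1 + r'^2) = sqrt(g + g'^2 / 4), finite even though r'(0) blows up
side = np.sqrt(g + gp**2 / 4)
S_ave = (np.pi**2 / V) * trap(g**2) + (2 * np.pi**2 / V) * trap((P[-1] - P) * side)
# the bottom disc has r(0) = 0, so it contributes nothing
assert abs(S_ave - 12.723452) < 1e-3
```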
number-theory
<p>Prime numbers are numbers with no factors other than one and themselves.</p> <p>Factors of a number are always less than or equal to that number; so, the larger the number is, the larger the pool of "possible factors" that number might have.</p> <p>So the larger the number, the less likely it seems to be prime.</p> <p>Surely there must be a number where, simply, every number above it has some other factors. A "critical point" where every number larger than it simply will always have some factors other than one and itself.</p> <p>Has there been any research as to finding this critical point, or has it been proven not to exist? That for any $n$ there is always guaranteed to be a number higher than $n$ that has no factors other than one and itself?</p>
<p>Below are two noteworthy variations on Euclid's classic proof that there are infinitely many primes. The first is a simplification and the second is a generalization to rings with few units (= invertibles).</p> <p><strong>Theorem</strong> <span class="math-container">$\rm\, \ N (N+1) \,$</span> has a larger set of prime factors than does <span class="math-container">$\,\rm N &gt; 0$</span>.</p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> <span class="math-container">$\rm N+1 &gt; 1\,$</span> so it has a prime factor <span class="math-container">$\rm P$</span> (e.g. its least factor <span class="math-container">$&gt; 1)$</span>. <span class="math-container">$\,\rm N$</span> is coprime to <span class="math-container">$\rm N+1\,$</span> so <span class="math-container">$\rm P$</span> can't divide <span class="math-container">$\rm N$</span> (else <span class="math-container">$\rm\, P$</span> divides <span class="math-container">$\rm N+1\,$</span> and <span class="math-container">$\rm N$</span> so also their difference <span class="math-container">$\rm N+1 - N = 1).\,$</span> So the prime factors of <span class="math-container">$\rm\, N(N+1)$</span> include all those of <span class="math-container">$\rm N$</span> <em>and</em> at least one prime <span class="math-container">$\rm P$</span> not dividing <span class="math-container">$\rm N$</span>.</p> <p><strong>Corollary</strong> <span class="math-container">$\ $</span> There are infinitely many primes.</p> <p><strong>Proof</strong> <span class="math-container">$\, $</span> Iterating <span class="math-container">$\rm\, N\to N (N+1)\, $</span> yields integers with an unbounded number of prime factors.</p> <p>Below, generalizing Euclid's classic argument, is a simple proof that an infinite ring has infinitely many maximal (so prime) ideals if it has fewer units than elements (i.e. smaller cardinality). 
The key idea is that Euclid's <em>construction</em> of a new prime generalizes from elements to ideals, i.e. given some maximal ideals <span class="math-container">$\rm P_1,\ldots,P_k$</span> then a simple pigeonhole argument employing <span class="math-container">$\rm CRT$</span> implies that <span class="math-container">$\rm 1 + P_1\cdots P_k$</span> contains a nonunit, which lies in some maximal ideal <span class="math-container">$\rm P$</span> which, by construction, is comaximal (so distinct) from the prior max ideals <span class="math-container">$\rm P_i.\,$</span> Below is the full proof, excerpted from some of my old <a href="https://artofproblemsolving.com/community/c7h217448p1209616" rel="nofollow noreferrer">sci.math/AAA/AoPS posts.</a></p> <p><strong>Theorem</strong> <span class="math-container">$\ $</span> An infinite ring <span class="math-container">$\rm R$</span> has infinitely many maximal ideals if it has fewer units <span class="math-container">$\rm U = U(R)$</span> than it has elements, i.e. 
<span class="math-container">$\rm\:|U| &lt; |R|$</span>.</p> <p><strong>Proof</strong> <span class="math-container">$\rm\ \ R$</span> has a max ideal <span class="math-container">$\rm P_1,\:$</span> since the nonunit <span class="math-container">$\rm\: 0\:$</span> lies in some max ideal.<br /> Inductively, suppose <span class="math-container">$\rm P_1,\ldots,P_k$</span> are maximal ideals in <span class="math-container">$\rm R$</span>, with product <span class="math-container">$\,\rm J.$</span></p> <p><span class="math-container">$\rm Case\ 1\!: \; 1 + J \not\subset U.\:$</span> Thus <span class="math-container">$\rm 1 + J$</span> contains a nonunit <span class="math-container">$\rm p,\,$</span> lying in a max ideal <span class="math-container">$\rm P.$</span><br /> It's new: <span class="math-container">$\rm\: P \neq P_i\:$</span> since <span class="math-container">$\rm\: P + P_i = 1\:$</span> via <span class="math-container">$\rm\: p \in P,\ 1 - p \in J \subset P_i$</span></p> <p><span class="math-container">$\rm Case\ 2\!: \; 1 + J \subset U$</span> is impossible by the following <em>pigeonhole</em> argument. 
<span class="math-container">$\rm R/J = R_1 \times \cdots \times R_k,\ R_i = R/P_i\:$</span> by the Chinese Remainder Theorem.<br /> We deduce that <span class="math-container">$\rm\ |U(R/J)| \leq |U|\ $</span> because <span class="math-container">$\rm\ uv \in 1 + J \subset U \Rightarrow u \in U.$</span><br /> Thus <span class="math-container">$\rm|U(R_i)| \leq |U(R/J)| \leq |U|\:$</span> via the injection <span class="math-container">$\rm u \mapsto (1,1,\ldots,u,\ldots,1,1).$</span><br /> <span class="math-container">$\rm R_i$</span> field <span class="math-container">$\rm\: \Rightarrow\ |R| &gt; 1 + |U| \geq |R_i|,\,$</span> and <span class="math-container">$\,\rm|J| \leq |U| &lt; |R|\,$</span> via <span class="math-container">$\,\rm 1 + J \subset U.$</span><br /> Therefore <span class="math-container">$\rm|R| = |R/J|\ |J| = |R_1|\ \cdots |R_k|\ |J|\:$</span> yields the contradiction that<br /> the infinite <span class="math-container">$\rm|R|$</span> is a finite product of smaller cardinals.</p> <p>I recall the pleasure of discovering this &quot;fewunit&quot; generalization of Euclid's proof and other related theorems while reading Kaplansky's classic textbook <em>Commutative Rings</em> as an MIT undergrad. There Kaplansky presents a simpler integral domain version as exercise <span class="math-container">$8$</span> in Section <span class="math-container">$1$</span>-<span class="math-container">$1,\:$</span> namely</p> <blockquote> <p>(This exercise is offered as a modernization of Euclid's theorem on the infinitude of primes.) Prove that an infinite integral domain with a finite number of units has an infinite number of maximal ideals.</p> </blockquote> <p>I highly recommend Kap's classic textbook to anyone seeking to master commutative ring theory. In fact I highly recommend everything by Kaplansky - it is almost always very insightful and elegant. Learn from the masters! 
For more about Kaplansky see this interesting <a href="http://www.ams.org/notices/200711/tx071101477p.pdf" rel="nofollow noreferrer">NAMS paper</a> which includes quotes from many eminent mathematicians (Bass, Eisenbud, Kadison, Lam, Rotman, Swan, etc).</p> <blockquote> <p>I liked the algebraic way of looking at things. I'm additionally fascinated when the algebraic method is applied to infinite objects. <span class="math-container">$\ $</span>--Irving Kaplansky</p> </blockquote> <p><strong>Note</strong> <span class="math-container">$ $</span> Readers familiar with the Jacobson radical may note that it may be employed to describe the relationship between the units in <span class="math-container">$\rm R$</span> and <span class="math-container">$\rm R/J\:$</span> used in the above proof. Namely</p> <p><strong>Theorem</strong> <span class="math-container">$\ $</span> TFAE in ring <span class="math-container">$\rm\:R\:$</span> with units <span class="math-container">$\rm\:U,\:$</span> ideal <span class="math-container">$\rm\:J,\:$</span> and Jacobson radical <span class="math-container">$\rm\:Jac(R).$</span></p> <p><span class="math-container">$\rm(1)\quad J \subseteq Jac(R),\quad $</span> i.e. <span class="math-container">$\rm\:J\:$</span> lies in every max ideal <span class="math-container">$\rm\:M\:$</span> of <span class="math-container">$\rm\:R$</span></p> <p><span class="math-container">$\rm(2)\quad 1+J \subseteq U,\quad\ \ $</span> i.e. <span class="math-container">$\rm\: 1 + j\:$</span> is a unit for every <span class="math-container">$\rm\: j \in J$</span></p> <p><span class="math-container">$\rm(3)\quad I\neq 1\ \Rightarrow\ I+J \neq 1,\qquad\ $</span> i.e. proper ideals survive in <span class="math-container">$\rm\:R/J$</span></p> <p><span class="math-container">$\rm(4)\quad M\:$</span> max <span class="math-container">$\rm\:\Rightarrow M+J \ne 1,\quad $</span> i.e. 
max ideals survive in <span class="math-container">$\rm\:R/J$</span></p> <p><strong>Proof</strong> <span class="math-container">$\: $</span> (sketch) <span class="math-container">$\ $</span> With <span class="math-container">$\rm\:i \in I,\ j \in J,\:$</span> and max ideal <span class="math-container">$\rm\:M,$</span></p> <p><span class="math-container">$\rm(1\Rightarrow 2)\quad j \in all\ M\ \Rightarrow\ 1+j \in no\ M\ \Rightarrow\ 1+j\:$</span> unit.</p> <p><span class="math-container">$\rm(2\Rightarrow 3)\quad i+j = 1\ \Rightarrow\ 1-j = i\:$</span> unit <span class="math-container">$\rm\:\Rightarrow I = 1$</span></p> <p><span class="math-container">$\rm(3\Rightarrow 4)\ \ \ $</span> Let <span class="math-container">$\rm\:I = M\:$</span> max.</p> <p><span class="math-container">$\rm(4\Rightarrow 1)\quad M+J \ne 1 \Rightarrow\ J \subseteq M\:$</span> by <span class="math-container">$\rm\:M\:$</span> max.</p>
<p>Euclid's famous proof is as follows: Suppose there are only finitely many primes. Let $x$ be the product of all of them, and look at $x+1$. Clearly $x$ is coprime to $x+1$, so no nontrivial factor of $x$ divides $x+1$; but every prime divides $x$. By the fundamental theorem of arithmetic, $x+1$ admits a prime factorization, and by the above remark none of these prime factors can divide $x$. Since $x$ is the product of all primes, this is a contradiction.</p>
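<p>The construction in this proof is effective, so (as an illustration of my own, not part of the original answer) it can be run as a small program; <code>smallest_factor</code> is a hypothetical helper name, and trial division is used purely for demonstration:</p>

```python
# Euclid's construction, run concretely: the smallest nontrivial factor
# of (product of the listed primes) + 1 is a prime outside the list.
def smallest_factor(n):
    """Smallest factor > 1 of n (necessarily prime)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

primes = [2, 3, 5, 7, 11, 13]
x = 1
for p in primes:
    x *= p                             # x = 30030
new_prime = smallest_factor(x + 1)     # 30031 = 59 * 509
print(new_prime)                       # -> 59, a prime not in the list
assert all(new_prime % p != 0 for p in primes)
```

<p>Note that $x+1$ itself need not be prime (here $30031 = 59 \cdot 509$); the proof only guarantees that each of its prime factors is new.</p>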
geometry
<p>I am a programmer without really good knowledge of math. :/ I have to write an algorithm that changes the color of a pixel (dot) P to its opposite if it lies on the left side of a straight line in the coordinate system (the line is not vertical; by that I mean x2-x1 can't be 0). The values of the dots x1, y1 and x2, y2 are known (and they can also be negative).</p> <p>Does anyone have an idea how this could be solved?</p>
<p>To determine which side of the line from <span class="math-container">$A=(x_1,y_1)$</span> to <span class="math-container">$B=(x_2,y_2)$</span> a point <span class="math-container">$P=(x,y)$</span> falls on, you need to compute the value <span class="math-container">$$d=(x-x_1)(y_2-y_1)-(y-y_1)(x_2-x_1)$$</span> If <span class="math-container">$d&lt;0$</span> then the point lies on one side of the line, and if <span class="math-container">$d&gt;0$</span> then it lies on the other side. If <span class="math-container">$d=0$</span> then the point lies exactly on the line.</p> <p>To see whether points on the left side of the line are those with positive or negative values, compute the value of <span class="math-container">$d$</span> for a point you know is to the left of the line, such as <span class="math-container">$(x_1-1,y_1)$</span> (assuming this reference point does not itself lie on the line), and then compare the sign with the point you are interested in.</p> <h3>Addendum:</h3> <p>For completeness, an explanation as to how this works is as follows:</p> <p>The direction of the line <span class="math-container">$AB$</span> can be defined as <span class="math-container">$&lt;x_2, y_2&gt; - &lt;x_1, y_1&gt; = &lt;x_2 - x_1, y_2 - y_1&gt;$</span></p> <p>The orthogonal (perpendicular) direction to that line will be <span class="math-container">$\vec n=&lt;y_2-y_1, -(x_2 - x_1)&gt;$</span> (we flip the x's and the y's and negate one component, i.e. (y's, -x's)). 
You can verify this is orthogonal by taking the dot product with the original direction and checking that the result is 0.</p> <p>One possible vector going from the line to the point <span class="math-container">$P$</span> is <span class="math-container">$D = P-A = &lt;x-x_1,y-y_1&gt; $</span></p> <p>This vector <span class="math-container">$D$</span> is made of 2 components: a component <span class="math-container">$D^{\parallel}$</span> that is parallel to the line <span class="math-container">$AB$</span> and a component <span class="math-container">$D^{\bot}$</span> that is perpendicular to the line <span class="math-container">$AB$</span>. By definition <span class="math-container">$D^{\parallel} \cdot \vec n = 0$</span>, since a direction parallel to the line is perpendicular to the normal of that line.</p> <p>We are interested in knowing whether the point is on the side the normal points to, or the side the normal points opposite to. In other words, we want to know if <span class="math-container">$D^{\bot}$</span> is in the same direction as <span class="math-container">$\vec n$</span> or not.</p> <p>This is essentially the sign of the dot product of <span class="math-container">$D$</span> and <span class="math-container">$\vec n$</span> since <span class="math-container">$D \cdot \vec n = (D^{\bot} + D^{\parallel}) \cdot \vec n = D^{\bot} \cdot \vec n$</span></p> <p>Arithmetically: <span class="math-container">$$&lt;x-x_1, y-y_1&gt; \cdot &lt;y_2 - y_1, -(x_2 - x_1)&gt;$$</span> <span class="math-container">$$=(x-x_1)(y_2 - y_1) - (y-y_1)(x_2 - x_1)$$</span></p> <p>In short, we are computing a quantity proportional to the signed distance from the line <span class="math-container">$AB$</span> to the point <span class="math-container">$P$</span>, and then evaluating its sign.</p> <p>Diagram for clarity: <a href="https://i.sstatic.net/57yFQ.png" rel="noreferrer"><img src="https://i.sstatic.net/57yFQ.png" alt="enter image description here" /></a></p>
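<p>Since the question comes from programming, here is a direct translation of this test into code (a sketch in Python; the function names are my own):</p>

```python
def side(x1, y1, x2, y2, x, y):
    """d = (x-x1)(y2-y1) - (y-y1)(x2-x1): positive on one side of the
    line through A=(x1,y1) and B=(x2,y2), negative on the other,
    and exactly 0 on the line itself."""
    return (x - x1) * (y2 - y1) - (y - y1) * (x2 - x1)

def same_side(x1, y1, x2, y2, p, q):
    """True iff points p and q lie strictly on the same side of AB."""
    return side(x1, y1, x2, y2, *p) * side(x1, y1, x2, y2, *q) > 0

# Example: the line from (0, 0) to (4, 2), i.e. y = x/2
print(side(0, 0, 4, 2, 0, 3))   # -12: above the line
print(side(0, 0, 4, 2, 3, 0))   #   6: below the line
print(side(0, 0, 4, 2, 2, 1))   #   0: exactly on the line
```

<p>To decide which sign corresponds to "left", evaluate <code>side</code> once at a reference point known to be on the left (as described in the answer) and compare signs, e.g. with <code>same_side</code>.</p>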
<p>If the point is given by the 2D vector $\vec{P}$ and the line end points by $\vec{A}$ and $\vec{B}$, then calculate the <a href="http://en.wikipedia.org/wiki/Cross_product" rel="noreferrer">cross product</a>: $$\vec{AB}\times\vec{AP} = x_{AB}\cdot y_{AP} - x_{AP}\cdot y_{AB}$$ $$x_{AB} = x_B - x_A,\ x_{AP} = x_P - x_A\ ...$$ The sign of the above will determine which side of the line your pixel is on.</p>
game-theory
<p>I am trying to understand how to compute all Nash equilibria in a 2 player game, but I fail when there are more than 2 possible options to play. Could somebody explain to me how to solve a game with a payoff matrix like this (without a computer)? \begin{matrix} 1,1 &amp; 10,0 &amp; -10,1 \\ 0,10 &amp; 1,1 &amp; 10,1 \\ 1,-10 &amp; 1,10 &amp; 1,1 \end{matrix}</p> <p>I tried it with the support concept, but I don't get it... </p>
<p>A Nash equilibrium is a profile of strategies <span class="math-container">$(s_1,s_2)$</span> such that the strategies are best responses to each other, i.e., no player can do strictly better by deviating. This helps us to find the (pure strategy) Nash equilibria.</p> <p>To start, we find the best response for player 1 for each of the strategies player 2 can play. I will demonstrate this by underlining the best responses: <span class="math-container">\begin{matrix} &amp; A &amp;B&amp;C \\ A&amp;\underline{1},1 &amp; \underline{10},0 &amp; -10,1 \\ B&amp;0,10 &amp; 1,1 &amp; \underline{10},1 \\ C&amp;\underline{1},-10 &amp; 1,10 &amp; 1,1 \end{matrix}</span> Player 1 is the row player, player 2 is the column player. If 2 plays column A, then player 1's best response is to play either row <span class="math-container">$A$</span> or <span class="math-container">$C$</span>, which gives him 1 rather than 0 as payoff. Similarly, the best response to column <span class="math-container">$B$</span> is row <span class="math-container">$A$</span>, and to column <span class="math-container">$C$</span> it is row <span class="math-container">$B$</span>.</p> <p>Now we do the same for player 2 by underlining the best responses of the column player: <span class="math-container">\begin{matrix} &amp; A &amp;B&amp;C \\ A&amp;\underline{1},\underline{1} &amp; \underline{10},0 &amp; -10,\underline{1} \\ B&amp;0,\underline{10} &amp; 1,1 &amp; \underline{10},1 \\ C&amp;\underline{1},-10 &amp; 1,\underline{10} &amp; 1,1 \end{matrix}</span> So, if player 1 plays row <span class="math-container">$A$</span> then player 2 best responds either with column <span class="math-container">$A$</span> or column <span class="math-container">$C$</span>, giving him 1 rather than 0. 
We also find the best responses for row <span class="math-container">$B$</span> and <span class="math-container">$C$</span>.</p> <p>Now a pure strategy Nash equilibrium is a cell where both payoffs are underlined, i.e., where both strategies are best responses to each other. In the example, the unique pure strategy equilibrium is <span class="math-container">$(A,A)$</span>. (There may also be mixed strategy equilibria.) In all other cells, at least one player has an incentive to deviate (because it gives him a higher payoff).</p> <p><strong>EDIT</strong>: How to compute mixed strategy equilibria in discrete games?</p> <p>In a mixed Nash strategy equilibrium, each of the players must be indifferent between any of the pure strategies played with positive probability. If this were not the case, then there is a profitable deviation (play the pure strategy with higher payoff with higher probability).</p> <p>Consider player 2. He plays column <span class="math-container">$A$</span> with probability <span class="math-container">$p$</span>, <span class="math-container">$B$</span> with probability <span class="math-container">$q$</span>, and <span class="math-container">$C$</span> with probability <span class="math-container">$1-p-q$</span>. We need to find <span class="math-container">$p,q$</span> such that player 1 is indifferent between his pure strategies <span class="math-container">$A,B,C$</span>. 
He is indifferent between row <span class="math-container">$A$</span> (left hand side) and row <span class="math-container">$B$</span> (right hand side) if <span class="math-container">$p,q$</span> are such that <span class="math-container">$$p+10q-10(1-q-p)=q+10(1-p-q).$$</span> He is indifferent between <span class="math-container">$B$</span> and <span class="math-container">$C$</span> if <span class="math-container">$$q+10(1-p-q)=p+q+1-q-p=1.$$</span> You just have to solve the first condition for <span class="math-container">$q$</span> as a function of <span class="math-container">$p$</span>, substitute <span class="math-container">$q$</span> in the second condition and you have <span class="math-container">$p$</span>. Inserting <span class="math-container">$p$</span> again in the first gives you <span class="math-container">$q$</span>.</p> <p>Now we do the same with strategies for player 1 such that player 2 is indifferent. Player 1 plays <span class="math-container">$A$</span> with probability <span class="math-container">$x$</span>, <span class="math-container">$B$</span> with probability <span class="math-container">$y$</span> and <span class="math-container">$C$</span> with probability <span class="math-container">$1-x-y$</span>. The two conditions that follow (note that player 2's payoff to column <span class="math-container">$B$</span> is <span class="math-container">$y+10(1-x-y)$</span>) are <span class="math-container">\begin{equation} x+10y-10(1-x-y)=y+10(1-x-y) \\ y+10(1-x-y)=1 \end{equation}</span> Solve this again to find <span class="math-container">$x,y$</span>. 
In general, depending on the game and solutions <span class="math-container">$x,y,p,q$</span>, there may be infinitely many mixed Nash equilibria, or none. The more pure strategies there are, the more tedious it is to compute mixed strategy equilibria, since we solve for <span class="math-container">$N-1$</span> variables for each player (<span class="math-container">$N$</span> being the number of pure strategies of the other player).</p> <p>Moreover, to find all equilibria, if there are more than 2 actions for a player, then every possible combination of actions has to be checked. Here, a player has 3 actions, and a mixed strategy equilibrium could entail mixing over all three or just any two of them. Since such a player would <em>not</em> have to be indifferent regarding the strategy played with probability 0, the equations you have to set up are different. In summary, manually checking for all possible mixed strategy equilibria if at least one player has more than two actions can require a lot of effort.</p>
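<p>For this particular game, the two indifference conditions on player 2's mixture reduce to a pair of linear equations that can be solved exactly. Expanding the conditions above gives $21p+29q=20$ and $10p+9q=9$; here is a small sketch in Python (my own addition, using exact rational arithmetic):</p>

```python
from fractions import Fraction

# Player 2 mixes (p, q, 1-p-q) over columns A, B, C.  Player 1's
# indifference conditions from the answer expand to the linear system
#   21p + 29q = 20
#   10p +  9q =  9
# solved here by Cramer's rule with exact fractions.
a11, a12, b1 = 21, 29, 20
a21, a22, b2 = 10, 9, 9
det = a11 * a22 - a12 * a21            # -101
p = Fraction(b1 * a22 - a12 * b2, det)
q = Fraction(a11 * b2 - b1 * a21, det)
r = 1 - p - q
print(p, q, r)                         # 81/101 11/101 9/101

# Sanity check: all three rows now give player 1 the same payoff.
row_A = 1 * p + 10 * q - 10 * r
row_B = 0 * p + 1 * q + 10 * r
row_C = p + q + r
assert row_A == row_B == row_C == 1
```

<p>Because player 2's payoff matrix here is the transpose of player 1's, the symmetric computation for player 1 gives the same mixture, so $(x,y,1-x-y)=(81/101,\,11/101,\,9/101)$ as well.</p>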
<p>I think I finally understood how to get all equilibria via the support concept.</p> <p>Let $A$ be the payoff matrix for Player 1 and $B$ the one for Player 2. A point $(x,y)=(x_1,x_2,x_3,y_1,y_2,y_3)$ is a Nash equilibrium with supp $x =I$ and supp $ y =J$ (which means that $x_i&gt;0 $ for $i \in I$ and $ x_i=0 $ for $ i \not\in I$) if and only if</p> <p>$ 1) \forall i,k \in I : (Ay)_i = (Ay)_k \\ 2) \forall i \in I,\forall k \not\in I : (Ay)_k ≤ (Ay)_i \\ 3) \forall j,l \in J : (Bx)_j = (Bx)_l \\ 4) \forall j \in J,\forall l \not\in J : (Bx)_l ≤ (Bx)_j $</p> <p>So to get all Nash equilibria, I have to check, for every possible support combination, whether all of these equalities and inequalities hold. If so, the equations tell for which points or intervals there are Nash equilibria. In the example above, the equations of Nameless are the ones for the case $I=\{1,2,3\}=J$. All in all there are 49 systems of equations to check, which is a lot of work...</p> <p>Thank you for your answer Nameless, it helped me a lot to understand the main principle.</p>
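<p>The brute-force check over supports described above can be automated. Below is a rough sketch in Python (my own addition, using NumPy; all function names are invented). For simplicity it only considers supports of equal size, which suffices for nondegenerate games; this game is degenerate, so the sketch may list an equilibrium more than once:</p>

```python
from itertools import combinations
import numpy as np

A = np.array([[1, 10, -10],
              [0,  1,  10],
              [1,  1,   1]], dtype=float)    # player 1's payoffs
B = np.array([[  1,  0, 1],
              [ 10,  1, 1],
              [-10, 10, 1]], dtype=float)    # player 2's payoffs

def equalizing_mix(M, rows, cols):
    """Mixture y supported on `cols`, summing to 1, that makes (M y)_i
    equal for all i in `rows`; None if no nonnegative solution exists.
    Requires len(rows) == len(cols)."""
    k = len(cols)
    S = np.zeros((k + 1, k + 1))
    rhs = np.zeros(k + 1)
    for r, i in enumerate(rows):
        S[r, :k] = M[i, list(cols)]
        S[r, k] = -1.0                 # (M y)_i - w = 0
    S[k, :k] = 1.0                     # probabilities sum to 1
    rhs[k] = 1.0
    try:
        sol = np.linalg.solve(S, rhs)
    except np.linalg.LinAlgError:
        return None
    y = np.zeros(M.shape[1])
    y[list(cols)] = sol[:k]
    return None if y.min() < -1e-9 else (y, sol[k])

def support_enumeration(A, B, tol=1e-9):
    equilibria = []
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for I in combinations(range(m), k):
            for J in combinations(range(n), k):
                ry = equalizing_mix(A, I, J)     # conditions 1), y >= 0
                rx = equalizing_mix(B.T, J, I)   # conditions 3), x >= 0
                if ry is None or rx is None:
                    continue
                y, w1 = ry
                x, w2 = rx
                # conditions 2) and 4): nothing outside the support
                # may do strictly better
                if (A @ y).max() <= w1 + tol and (B.T @ x).max() <= w2 + tol:
                    equilibria.append((x, y))
    return equilibria

for x, y in support_enumeration(A, B):
    print(np.round(x, 4), np.round(y, 4))
```

<p>For this game the sketch recovers the pure equilibrium $(A,A)$ and the fully mixed equilibrium $(81/101,\,11/101,\,9/101)$ for both players, matching the computation in the accepted answer.</p>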
logic
<p>My question tries to address the intuition behind, and the situations in which, using the contrapositive to prove a mathematical statement is a promising approach.</p> <p>Whenever we have a mathematical statement of the form $A \implies B$, we can always try to prove the contrapositive instead, i.e. $\neg B \implies \neg A$. However, what I find interesting to think about is: when should this approach look promising? When trying to prove something, when is it a good idea to use the contrapositive? What is the intuition that $A \implies B$ might be harder to prove directly than its contrapositive?</p> <p>I am looking more for a set of <strong>guidelines</strong> or <strong>intuitions</strong> or <strong>heuristics</strong> that might suggest that trying to use the contrapositive to prove the mathematical statement is a good idea.</p> <p>Some problems have structures that make it more "obvious" to try induction or contradiction. For example, in cases where a recursion is involved or something is iterating, induction is sometimes a natural way of attacking the problem. Or when some mathematical object has property X, assuming it doesn't have property X can seem promising because the assumption might lead to a contradiction. So I was wondering if there was something analogous for proofs by contrapositive.</p> <p>I was wondering if the community had a good set of heuristics for when they thought it could be a good idea to use the contrapositive in a proof.</p> <p>Also, this question might benefit from some simple, but very insightful examples that show why the negation might be easier to prove. Also, I know that this intuition can be gained from experience, so providing good or solid examples could be a great way to contribute. 
Just don't forget to say what type of intuition you are trying to teach with your examples!</p> <p>Note that I am not expecting a foolproof, fully general algorithm, because an algorithm like that could probably be used to automate proving, which might imply something big like $P=NP$. (Obviously a proof of P vs NP would always be interesting, but I think that asking the community to settle P vs NP is not a realistic thing to ask...)</p>
<p>Contraposition is often helpful when an implication has multiple hypotheses, or when the hypothesis specifies multiple objects (perhaps infinitely many).</p> <p>As a simple (and arguably artificial) example, compare, for $x$ a real number:</p> <p>1(a). If $x^4 - x^3 + x^2 \neq 1$, then $x \neq 1$. (Not easy to see without implicit contraposition?)</p> <p>1(b). If $x = 1$, then $x^4 - x^3 + x^2 = 1$. (Immediately obvious.)</p> <p>Here's a classic from elementary real analysis, with $x$ again standing for a real number:</p> <p>2(a). If $|x| \leq \frac{1}{n}$ for every positive integer $n$, then $x = 0$.</p> <p>This is true, but awkward to prove directly because there are effectively infinitely many hypotheses (one for each positive $n$), yet <em>no finite number of these hypotheses implies the conclusion</em>.</p> <p>2(b). If $x \neq 0$, there exists a positive integer $n$ such that $\frac{1}{n} &lt; |x|$.</p> <p>The contrapositive, which follows immediately from the Archimedean property, requires only a <em>strategy</em> for showing <em>some</em> hypothesis is violated if the conclusion is false.</p> <p>3(a). If $f$ is a continuous, real-valued function on $[0, 1]$ and if $$ \int_0^1 f(x) g(x)\, dx = 0\quad\text{for every continuous $g$,} $$ then $f \equiv 0$.</p> <p>3(b). If $f$ is a continuous, real-valued function on $[0, 1]$ that is not identically zero, then $$ \int_0^1 f(x) g(x)\, dx \neq 0\quad\text{for some continuous $g$.} $$</p> <p>Here contraposition is not especially helpful because the <em>specific choice</em> $f = g$ gives either direction. That is, 3(a) has infinitely many hypotheses, but <em>one</em> of them is sufficient to deduce the conclusion.</p> <p>Contrast with the superficially similar-looking:</p> <p>4(a). If $f$ is a continuous, real-valued function on $[0, 1]$ and if $$ \int_0^1 f(x) g(x)\, dx = 0\quad\text{for every non-negative step function $g$,} $$ then $f \equiv 0$.</p> <p>4(b). 
If $f$ is a continuous, real-valued function on $[0, 1]$ that is not identically zero, then $$ \int_0^1 f(x) g(x)\, dx \neq 0\quad\text{for some non-negative step function $g$.} $$</p> <p>(Sketch of 4(b): If $f$ is not identically $0$, there is an $x_0$ in $(0, 1)$ such that $f(x_0) \neq 0$. By continuity, there exists a $\delta &gt; 0$ such that $[x_0 - \delta, x_0 + \delta] \subset (0, 1)$ and $|f(x)| &gt; |f(x_0)|/2$ for all $x$ with $|x - x_0| &lt; \delta$. Let $g$ be the characteristic function of $[x_0 - \delta, x_0 + \delta]$.)</p> <p>As in 2(a), the hypothesis of 4(a) comprises infinitely many conditions, and no finite number of them suffices. Here, the contrapositive 4(b) arises naturally, because in trying to establish 4(a) directly one is all but forced to ask how the conclusion could fail.</p>
<p>Here are some examples, hope they help. First an easy one.</p> <p><strong>Theorem</strong>. Let $n$ be an integer. If $n$ is even then $n^2$ is even.</p> <p><strong>Proof</strong> (outline). Let $n$ be even. Then $n=2k$ for some integer $k$, so $n^2=4k^2=2(2k^2)$, which is even.</p> <p>Now try the converse by the same method.</p> <p><strong>Theorem</strong>. Let $n$ be an integer. If $n^2$ is even then $n$ is even.</p> <p><strong>Proof</strong> (attempted). Suppose that $n^2$ is even, say $n^2=2k$. Then $n=\sqrt{2k}$ and so...???? This seems hopeless, $\sqrt{2k}$ does not look like an integer at all, never mind proving that it's even!</p> <p>Now try proving the converse by using its contrapositive.</p> <p><strong>Theorem</strong>. Let $n$ be an integer. If $n^2$ is even then $n$ is even.</p> <p><strong>Proof</strong>. We have to prove that if $n$ is odd, then $n^2$ is odd. So, let $n=2k+1$; then $n^2=4k^2+4k+1=2(2k^2+2k)+1$ which is odd. Done!</p> <p>I think the point here is that for the attempt at a direct proof we start with $n^2=2k$. This implicitly gives us some information about $n$, but it's rather indirect and hard to get hold of. Using the contrapositive begins with $n=2k+1$, which gives us very clear and usable information about $n$.</p> <p>Perhaps you could put a heuristic in the following form: "try both ways, just for a couple of steps, and see if either looks notably easier than the other".</p>
linear-algebra
<p>This question aims to create an &quot;<a href="http://meta.math.stackexchange.com/q/1756/18880">abstract duplicate</a>&quot; of numerous questions that ask about determinants of specific matrices (I may have missed a few):</p> <ul> <li><a href="https://math.stackexchange.com/q/153457/18880">Characteristic polynomial of a matrix of $1$&#39;s</a></li> <li><a href="https://math.stackexchange.com/q/55165/18880">Eigenvalues of the rank one matrix $uv^T$</a></li> <li><a href="https://math.stackexchange.com/q/577937/18880">Calculating $\det(A+I)$ for matrix $A$ defined by products</a></li> <li><a href="https://math.stackexchange.com/q/84206/18880">How to calculate the determinant of all-ones matrix minus the identity?</a></li> <li><a href="https://math.stackexchange.com/q/86644/18880">Determinant of a specially structured matrix ($a$&#39;s on the diagonal, all other entries equal to $b$)</a></li> <li><a href="https://math.stackexchange.com/q/629892/18880">Determinant of a special $n\times n$ matrix</a></li> <li><a href="https://math.stackexchange.com/q/689111/18880">Find the eigenvalues of a matrix with ones in the diagonal, and all the other elements equal</a></li> <li><a href="https://math.stackexchange.com/q/897469/18880">Determinant of a matrix with $t$ in all off-diagonal entries.</a></li> <li><a href="https://math.stackexchange.com/q/227096/18880">Characteristic polynomial - using rank?</a></li> <li><a href="https://math.stackexchange.com/q/3955338/18880">Caclulate $X_A(x) $ and $m_A(x) $ of a matrix $A\in \mathbb{C}^{n\times n}:a_{ij}=i\cdot j$</a></li> <li><a href="https://math.stackexchange.com/q/219731/18880">Determinant of rank-one perturbations of (invertible) matrices</a></li> </ul> <p>The general question of this type is</p> <blockquote> <p>Let <span class="math-container">$A$</span> be a square matrix of rank<span class="math-container">$~1$</span>, let <span class="math-container">$I$</span> the identity matrix of the same size, and <span 
class="math-container">$\lambda$</span> a scalar. What is the determinant of <span class="math-container">$A+\lambda I$</span>?</p> </blockquote> <p>A clearly very closely related question is</p> <blockquote> <p>What is the characteristic polynomial of a matrix <span class="math-container">$A$</span> of rank<span class="math-container">$~1$</span>?</p> </blockquote>
<p>The formulation in terms of the characteristic polynomial leads immediately to an easy answer. For once one uses knowledge about the eigenvalues to find the characteristic polynomial instead of the other way around. Since $A$ has rank$~1$, the kernel of the associated linear operator has dimension $n-1$ (where $n$ is the size of the matrix), so there is (unless $n=1$) an eigenvalue$~0$ with geometric multiplicity$~n-1$. The algebraic multiplicity of $0$ as eigenvalue is then at least $n-1$, so $X^{n-1}$ divides the characteristic polynomial$~\chi_A$, and $\chi_A=X^n-cX^{n-1}$ for some constant$~c$. In fact $c$ is the trace $\def\tr{\operatorname{tr}}\tr(A)$ of$~A$, since this holds for the coefficient of $X^{n-1}$ of <em>any</em> square matrix of size$~n$. So the answer to the second question is</p> <blockquote> <p>The characteristic polynomial of an $n\times n$ matrix $A$ of rank$~1$ is $X^n-cX^{n-1}=X^{n-1}(X-c)$, where $c=\tr(A)$.</p> </blockquote> <p>The nonzero vectors in the $1$-dimensional image of$~A$ are eigenvectors for the eigenvalue$~c$, in other words $A-cI$ is zero on the image of$~A$, which implies that $X(X-c)$ is an annihilating polynomial for$~A$. Therefore</p> <blockquote> <p>The minimal polynomial of an $n\times n$ matrix $A$ of rank$~1$ with $n&gt;1$ is $X(X-c)$, where $c=\tr(A)$. 
In particular a rank$~1$ square matrix $A$ of size $n&gt;1$ is diagonalisable if and only if $\tr(A)\neq0$.</p> </blockquote> <p>See also <a href="https://math.stackexchange.com/q/52395/18880">this question</a>.</p> <p>For the first question we get from this (replacing $A$ by $-A$, which is also of rank$~1$)</p> <blockquote> <p>For a matrix $A$ of rank$~1$ one has $\det(A+\lambda I)=\lambda^{n-1}(\lambda+c)$, where $c=\tr(A)$.</p> </blockquote> <p>In particular, for an $n\times n$ matrix with diagonal entries all equal to$~a$ and off-diagonal entries all equal to$~b$ (which is the most popular special case of a linear combination of a scalar and a rank-one matrix) one finds (using for $A$ the all-$b$ matrix, and $\lambda=a-b$) as determinant $(a-b)^{n-1}(a+(n-1)b)$.</p>
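<p>As a quick numerical sanity check of these formulas (an illustration of my own, using NumPy):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
u = rng.integers(-3, 4, size=(n, 1)).astype(float)
v = rng.integers(-3, 4, size=(1, n)).astype(float)
A = u @ v                            # a matrix of rank at most 1
c = np.trace(A)
lam = 2.5

# det(A + lambda I) = lambda^(n-1) (lambda + c), with c = tr(A)
lhs = np.linalg.det(A + lam * np.eye(n))
rhs = lam ** (n - 1) * (lam + c)
assert np.isclose(lhs, rhs)

# Special case: a on the diagonal, b off the diagonal:
# det = (a - b)^(n-1) (a + (n-1) b)
a, b = 4.0, 1.5
M = b * np.ones((n, n)) + (a - b) * np.eye(n)
assert np.isclose(np.linalg.det(M),
                  (a - b) ** (n - 1) * (a + (n - 1) * b))
print("both identities verified")
```

<p>Note that both identities also hold trivially when $A$ happens to be the zero matrix, since then $c=0$ and $\det(\lambda I)=\lambda^n$.</p>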
<p>Here’s an answer without using eigenvalues: the rank of <span class="math-container">$A$</span> is <span class="math-container">$1$</span> so its image is spanned by some nonzero vector <span class="math-container">$v$</span>. Since <span class="math-container">$Av$</span> lies in the image, there is a scalar <span class="math-container">$\mu$</span> such that <span class="math-container">$$Av=\mu v.$$</span></p> <p>We can extend this vector <span class="math-container">$v$</span> to a basis of <span class="math-container">$\mathbb{C}^n$</span>. With respect to this basis, the matrix of <span class="math-container">$A$</span> has all rows except the first one equal to <span class="math-container">$0$</span>. Since determinant and trace are basis-independent, it follows by expanding <span class="math-container">$\det(A-\lambda I)$</span> along the first column with respect to this basis that <span class="math-container">$$\det(A-\lambda I)= (-1)^n(\lambda -\mu)\lambda^{n-1}.$$</span> Using this same basis we also see that <span class="math-container">$\text{Tr}(A) =\mu$</span>, so the characteristic polynomial of <span class="math-container">$A$</span> turns out to be</p> <p><span class="math-container">$$(-1)^n(\lambda -\text{Tr}(A))\lambda^{n-1}.$$</span></p>
logic
<blockquote> <p><em><strong>Theorem 1</strong> [ZFC, classical logic]:</em> If $A,B$ are sets such that $\textbf{2}\times A\cong \textbf{2}\times B$, then $A\cong B$.</p> </blockquote> <p>That's because the axiom of choice allows for the definition of cardinality $|A|$ of any set $A$, and for $|A|\geq\aleph_0$ we have $|\textbf{2}\times A|=|A|$.</p> <blockquote> <p><em><strong>Theorem 2</strong>:</em> Theorem 1 still holds in ZF with classical logic.</p> </blockquote> <p>This is less trivial and explained in Section 5 of <a href="https://math.dartmouth.edu/~doyle/docs/three/three.pdf" rel="noreferrer">Division by Three</a> - however, though the construction does not involve any choices, it <em>does</em> involve the law of excluded middle.</p> <blockquote> <p><strong><em>Question:</em></strong> Are there <em>intuitionistic</em> set theories in which one can prove $$\textbf{2}\times A\cong \textbf{2}\times B\quad\Rightarrow\quad A\cong B\quad\text{?}$$ </p> </blockquote> <p>For example, is this statement true in elementary topoi or can it be proved in some intuitionistic type theory?</p> <blockquote> <p>In his comment below Kyle indicated that the statement is unprovable in some type theory - does somebody know the argument or a reference for that?</p> </blockquote> <p><em>Edit</em> See also the related question <a href="https://math.stackexchange.com/questions/1114752/does-a-times-a-cong-b-times-b-imply-a-cong-b">Does $A\times A\cong B\times B$ imply $A\cong B$?</a> about 'square roots'</p>
<p>There was a paper recently posted to arXiv about this question: Swan, <a href="https://arxiv.org/abs/1804.04490" rel="noreferrer"><em>On Dividing by Two in Constructive Mathematics</em></a>.</p> <p>It turns out that there are examples of toposes where you can't divide by two.</p>
<p>(The following is not really an answer, or just a very partial one, but it's definitely relevant and too long for a comment.)</p> <p>There is a theorem of Richard Friedberg ("<a href="https://eudml.org/doc/169898" rel="nofollow noreferrer">The uniqueness of finite division for recursive equivalence types</a>", <em>Math. Z.</em> <strong>75</strong> (1961), 3–7) which goes as follows (all of this is in classical logic):</p> <p>For $A$ and $B$ subsets of $\mathbb{N}$, define $A \sim B$ when there exists a partial computable function $f:\mathbb{N}\rightharpoonup\mathbb{N}$ that is one-to-one on its domain and defined at least on all of $A$ such that $f(A) = B$. (One also says that $A$ and $B$ are <em>computably equivalent</em> or <em>recursively equivalent</em>, and it is indeed an equivalence relation, not to be confused with "computably/recursively isomorphic", <a href="https://mathoverflow.net/questions/228599/partial-computably-isomorphic-sets">see here</a>.) Then [Friedberg's theorem states]: if $n$ is a positive integer then $(n\times A) \sim (n \times B)$ implies $A\sim B$ (here, $n\times A$ is the set of natural numbers coding pairs $(i,k)$ where $0\leq i&lt;n$ and $k\in A$ for some standard coding of pairs of natural numbers by natural numbers).</p> <p>To make this assertion closer to the question asked here, subsets of $\mathbb{N}$ can be considered as objects, indeed subobjects of $\mathcal{N}$, in the <a href="https://en.wikipedia.org/wiki/Effective_topos" rel="nofollow noreferrer">effective topos</a> (an elementary topos with n.n.o. 
$\mathcal{N}$ such that all functions $\mathcal{N}\to\mathcal{N}$ are computable), in fact, these subobjects are exactly those classified by maps $\mathcal{N} \to \Omega_{\neg\neg}$ where $\Omega_{\neg\neg} = \nabla 2$ is the subobject of the truth values $p\in\Omega$ such that $\neg\neg p = p$; moreover, to say that two such objects are isomorphic, or internally isomorphic, in the effective topos, is equivalent to saying that $A$ and $B$ are computably isomorphic as above. So Friedberg's result can be reinterpreted by saying that if $A$ and $B$ are such objects of the effective topos and if $n\times A$ and $n\times B$ are isomorphic then $A$ and $B$ are.</p> <p>I'm not sure how much this can be internalized (e.g., does the effective topos validate "if $A$ and $B$ are $\neg\neg$-stable sets of natural numbers and $n\times A$ is isomorphic to $n\times B$ then $A$ is isomorphic to $B$" for explicit $n$? and how about for $n$ quantified inside the topos?) or generalized (do we really need $\neg\neg$-stability?). But this may be worth looking into, and provides at least a positive kind-of-answer to the original question.</p>