logic
<p><strong>Context:</strong> I'm studying for my discrete mathematics exam and I keep running into this question, which I've failed to solve. The question is as follows.</p> <hr> <p><strong>Problem:</strong> The main form of ordinary induction over natural numbers $n$ is the following:</p> <ol> <li>$P(1)$ is true, and</li> <li>for every $n, P(n-1)\to P(n).\qquad\qquad\qquad\text{[Sometimes written as $P(n)\to P(n+1)$]}$</li> </ol> <p>If both 1 and 2 are true, then $P(n)$ is true for every $n$.</p> <p>The question asks to prove the correctness of the form above.</p> <hr> <p><strong>My work:</strong> My idea was to encode the statement as a boolean formula and check whether it is a tautology using a truth table; that would show the statement is always correct.</p> <p>I've tried many times, but I've failed.</p> <p>Here is the original question (translated from Hebrew):</p> <p>Let $P(n)$ be some claim about a natural number $n$.</p> <p>If the following two conditions hold:</p> <p>1- The claim $P(0)$ is true.</p> <p>2- For every $n&gt;0$, the truth of the claim $P(n-1)$ implies the truth of the claim $P(n)$.</p> <p>Then $P(n)$ is true for every natural number $n$.</p> <p>Prove the correctness of this theorem.</p>
<p>Consider the following definition of mathematical induction (adapted from David Gunderson's book <em>Handbook of Mathematical Induction</em>):</p> <hr> <p><strong>Principle of mathematical induction:</strong> For some fixed integer $b$, and for each integer $n\geq b$, let $S(n)$ be a statement involving $n$. If</p> <ol> <li>$S(b)$ is true, and</li> <li>for any integer $k\geq b, S(k)\to S(k+1)$,</li> </ol> <p>then for all $n\geq b$, the statement $S(n)$ is true. </p> <hr> <p>Mathematical induction's validity as a proof technique may be established as a consequence of a fundamental axiom concerning the set of positive integers (note: this is only one of many possible ways of viewing induction--see the addendum at the end of this answer). The following statement of this axiom is adapted from John Durbin's book <em>Modern Algebra</em>, wherein it is called the <em>Least Integer Principle</em>, but it is often referred to as the <em>Well-Ordering Principle</em> or WOP. The principle is as follows:</p> <p><strong>Well-Ordering Principle:</strong> Every nonempty set of positive integers contains a least element.</p> <p>In this context, where we use the WOP to establish the validity of mathematical induction, the argument is a proof by contradiction. This proof will contain several "steps" or "parts." Before giving all of the steps to the proof of mathematical induction, it may be useful to reformulate the definition of the proof technique in terms of the notation that will be used throughout the sequence of steps in the explanation (for consistency and facilitated understanding):</p> <hr> <p><strong>Reformulated principle of mathematical induction:</strong> For the fixed integer $b=1$, and for each integer $n\geq 1$, let $S(n)$ be a statement involving $n$. 
If </p> <ol> <li>$S(1)$ is true, and</li> <li>for any integer $k\geq 1, S(k)\to S(k+1)$,</li> </ol> <p>then for all $n\geq 1$, the statement $S(n)$ is true.</p> <p>Let 1 and 2 above be denoted by $(\dagger)$ and $(\dagger\dagger)$, respectively.</p> <hr> <p><strong>Steps of the proof that mathematical induction is a consequence of the WOP:</strong></p> <ol> <li><p>Start by supposing that $S(1)$ is true and that the proposition $S(k) \rightarrow S(k+1)$ is true for all positive integers $k$, i.e., where $(\dagger)$ and $(\dagger\dagger)$ hold as indicated above. </p></li> <li><p>The goal is to verify whether or not $S(n)$ is true for all $n \geq 1$ if $S(1)$ and $S(k) \rightarrow S(k+1)$ are true. The statement of mathematical induction above indicates that $S(n)$ will logically follow if $S(1)$ and $S(k) \rightarrow S(k+1)$ are true, but does $S(n)$ really follow if $(\dagger)$ and $(\dagger\dagger)$ are true? If yes, then mathematical induction is a valid proof technique. If not, then it is mere rubbish. </p></li> <li><p>We are skeptics, and we think that mathematical induction is a sham (hint: a proof by contradiction is about to take place). Our skepticism induces us to assume that there is <em>at least</em> one positive integer for which $S(n)$ is false [keep in mind that we are assuming that $(\dagger)$ and $(\dagger\dagger)$ are true, albeit we are disputing whether or not $S(n)$ really follows from their truthfulness]. Surely there is at least one positive integer for which $S(n)$ is false even though $S(1)$ and $S(k) \rightarrow S(k+1)$ are true.</p></li> <li><p>Let $\mathcal{P}$ denote the set of all positive integers for which $S(n)$ is false. Is this set empty? We think not---after all, we are assuming that there is at least one positive integer, say $\ell$, for which $S(n)$ is false; that is, the assumption is that $S(\ell)$ is false, where $\ell \in \mathcal{P}$. Are there other positive integers in $\mathcal{P}$? 
Perhaps, but we cannot say for certain at the moment. We can, however, declare with certainty that $\mathcal{P}$ has a least element by the well-ordering principle. We let the least element of $\mathcal{P}$ be $\ell$ without loss of generality. </p></li> <li><p>Since $S(1)$ is true, we know that $\ell \neq 1$, and because $\ell$ is positive and greater than $1$, we also know that $\ell -1$ must be a positive integer. Moreover, because $\ell -1$ is less than $\ell$, it should be clear that $\ell -1$ cannot be in $\mathcal{P}$ [this is because $\ell$ is the least element in $\mathcal{P}$, meaning that any lesser positive integer cannot be in $\mathcal{P}$]. Notationally, we have it that $\ell \in \mathcal{P}$ and $(\ell - 1) \notin \mathcal{P}$. What does this mean? Since $(\ell - 1) \notin \mathcal{P}$ and $\ell -1$ is a positive integer, and because $\mathcal{P}$ is the set of <em>all</em> positive integers for which $S(n)$ is false, it must be that $S(\ell-1)$ is true.</p></li> <li><p>Finally, recall that we maintained that $(\dagger\dagger)$ was true; that is, $S(k) \rightarrow S(k+1)$ is true for any integer $k \geq 1$. Since $\ell$ and $\ell -1$ are both positive integers, we may let $k = \ell -1$ and $k+1 = \ell$. Substituting these values into the implication that we assume to be true, we get that $S(\ell-1) \rightarrow S(\ell)$. Do you see the problem now (and hence the conclusion of the proof by contradiction)? </p></li> <li><p>By supposing that $(\dagger)$ and $(\dagger\dagger)$ held while also supposing that $S(n)$ was false for some positive integer $\ell$, we deduced through a series of steps that $S(\ell-1) \rightarrow S(\ell)$ [by $(\dagger\dagger)$ where $k = \ell -1$ and $k+1 = \ell$]. What is wrong with this? 
Simply consider the following three assertions that occur within the proof:</p> <ul> <li>$S(\ell-1)\to S(\ell)\qquad$ [<em>True</em>---by supposition $(\dagger\dagger)$]</li> <li>$S(\ell)\qquad$ [<em>False</em>---because of the assumption that $S(n)$ was false for $\ell\in\mathbb{Z^+}$]</li> <li>$S(\ell-1)\qquad$ [<em>True</em>---by the well-ordering principle]</li> </ul> <p>The logical issue should now be apparent. We know $S(\ell-1) \rightarrow S(\ell)$ is true by our original supposition, but this implication cannot be true if $S(\ell)$ is false. Why? Because an implication of the form $p \rightarrow q$ is only false when the hypothesis, $p$, is true and the conclusion, $q$, is false. In our own case, since $S(\ell-1)$ is true, the implication $S(\ell-1) \rightarrow S(\ell)$ is only true when $S(\ell)$ is also true, thus <em>contradicting</em> the choice of $\ell$. Consequently, $S(n)$ must be true for every positive integer $n$. </p></li> </ol> <hr> <p><strong>Addendum:</strong> It may be of interest to other students taking discrete mathematics courses that the form of induction proved above (often referred to simply as "induction") is actually <em>equivalent</em> to both <em>strong induction</em> and the WOP. This may be surprising, but there is a good paper about the <a href="https://bspace.berkeley.edu/access/content/group/2fb5bd3e-8d09-40ee-a371-6cc033d854b9/ho1.pdf">Equivalence of Three Variations on Induction</a> for readers who are interested. </p> <p>The basic idea behind the equivalence proofs is as follows:</p> <ol> <li>Strong induction implies Induction.</li> <li>Induction implies Strong Induction.</li> <li>Well-Ordering of $\mathbb{N}$ implies Induction [This is the proof outlined in this answer but with much greater detail]</li> <li>Strong Induction implies Well-Ordering of $\mathbb{N}$. 
</li> </ol> <p>Equivalence of Induction, Strong Induction, and Well-Ordering on $\mathbb{N}$ follows after having proved the four implications outlined above (the paper linked to contains details of the proof(s)). </p> <p>The answer I provided takes care of (3) above, but you can explore the other three to show equivalence if desired. </p>
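For reference, the seven steps above compress into a short argument in the same notation (my own summary, not part of the quoted sources):

```latex
% Compact form of the WOP => induction proof sketched above
\textbf{Claim.} If $(\dagger)$ and $(\dagger\dagger)$ hold, then $S(n)$ is true for all $n \geq 1$.

\textbf{Proof.} Let $\mathcal{P} = \{\, n \in \mathbb{Z}^{+} : S(n) \text{ is false} \,\}$
and suppose, for contradiction, that $\mathcal{P} \neq \varnothing$. By the Well-Ordering
Principle, $\mathcal{P}$ has a least element $\ell$. Since $S(1)$ is true by $(\dagger)$,
$\ell \neq 1$, so $\ell - 1 \in \mathbb{Z}^{+}$, and by minimality of $\ell$ we have
$\ell - 1 \notin \mathcal{P}$; that is, $S(\ell - 1)$ is true. Applying $(\dagger\dagger)$
with $k = \ell - 1$ yields $S(\ell)$, contradicting $\ell \in \mathcal{P}$. Hence
$\mathcal{P} = \varnothing$, i.e.\ $S(n)$ holds for all $n \geq 1$. $\blacksquare$
```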
<p>Your question seems somewhat unclear to me, as it stands, but I'll answer the one in the title, and if the question is updated, I'll address that too.</p> <p>Mathematical induction can be taken as its own axiom, independent from the others (though, as comments point out, it can be proven as a theorem in common systems like <a href="http://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory">ZF</a>). That is to say, it is more akin to statements like: $$x\cdot (y+1)=x\cdot y + x$$ which are often taken to be part of the <em>definition</em> of multiplication and may not be derivable from more basic statements. The point behind such definitions is to capture some intuitive idea - the above starts to capture the distributive property of multiplication, which we know from intuition to be a reasonable idea. Yet, it is possible that we could not prove the above statement without taking it as an axiom.</p> <p>Induction basically says every natural number can be "reached" from the base case. That is, if we have proven the statements:</p> <ul> <li>$P(1)$ is true</li> <li>$P(n)\rightarrow P(n+1).$</li> </ul> <p>Then, we can build a proof for every natural number. For instance, if we wanted to know that $P(5)$ was true, we could infer that, since $P(1)$ is true and $P(1)\rightarrow P(2)$, we have $P(2)$. But $P(2)\rightarrow P(3)$ and $P(3)\rightarrow P(4)$ and $P(4)\rightarrow P(5)$, thus we can deduce $P(5)$ too. Induction merely says that $P(n)$ must be true for <em>all</em> natural numbers because we can create a proof like the one above for every natural. Without induction, we can, for any natural $n$, create a proof for $P(n)$ - induction just formalizes that and says we're allowed to jump from there to $\forall n[ P(n)]$.</p>
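The $P(5)$ example above can be made mechanical: given the base case and the step instances, the finite proof of any particular $P(n)$ is just a chain of modus-ponens applications. A toy sketch (the function name is my own invention, not a formal proof checker):

```python
def finite_proof(n):
    """Unwind P(1) and the instances P(k) -> P(k+1) into the explicit
    finite chain of modus-ponens steps that derives P(n)."""
    steps = ["P(1) holds  [base case]"]
    for k in range(1, n):
        steps.append(f"P({k}) -> P({k+1}) and P({k}) hold, so P({k+1}) holds")
    return steps

# The chain for P(5), exactly as described in the answer:
for step in finite_proof(5):
    print(step)
```

Induction is precisely the license to pass from "such a chain exists for every $n$" to $\forall n[P(n)]$ in a single step.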
differentiation
<p>We know that there exist some functions <span class="math-container">$f(x)$</span> such that their derivative <span class="math-container">$f'(x)$</span> is strictly greater than the function itself. For example, the function <span class="math-container">$5^x$</span> has derivative <span class="math-container">$5^x\ln(5)$</span>, which is greater than <span class="math-container">$5^x$</span>. Exponential functions in general are known to be proportional to their derivatives. The question I have is whether it is possible for a function to grow &quot;even faster&quot; than this. To be more precise, let's take the ratio <span class="math-container">$\frac{f'(x)}{f(x)}$</span>; for exponential functions this ratio is a constant. For most elementary functions we care about, this ratio usually tends to <span class="math-container">$0$</span>. But are there functions for which this ratio grows arbitrarily large? If so, is there an upper limit for how large the ratio <span class="math-container">$\frac{f'(x)}{f(x)}$</span> can grow? I also ask a similar question for integrals.</p>
<p>Consider the differential equation $$ \frac{f'}{f} = g $$ where $g$ is the fast-growing function you want. For instance, for $g(x) = e^x$ (and say the initial condition $f(0) =1$) you get $$f(x) = e^{e^x-1} $$ The ratio $f'/f$ grows arbitrarily large.</p>
<p>You can always try functions of the form $f(x) = e^{g(x)}$, where $g(x)$ is an antiderivative of a function with large growth. For example, if $f(x) = e^{e^x}$, then $$\frac{f'(x)}{f(x)} = e^x,$$ which is, of course, exponential.</p>
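Both answers can be sanity-checked numerically. A small sketch of my own, approximating $f'/f$ with a central finite difference for $f(x)=e^{e^x}$:

```python
import math

def f(x):
    # Candidate from the answer: f(x) = e^(e^x), for which f'(x)/f(x) = e^x
    return math.exp(math.exp(x))

def ratio(x, h=1e-6):
    """Approximate f'(x)/f(x) with a central difference quotient."""
    return (f(x + h) - f(x - h)) / (2 * h) / f(x)

# The ratio tracks e^x, so it grows without bound:
for x in (0.0, 1.0, 2.0):
    print(round(ratio(x), 3), round(math.exp(x), 3))
```

The same check with $f(x)=e^{e^x-1}$ from the other answer gives the identical ratio, since the constant factor $e^{-1}$ cancels in $f'/f$.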
game-theory
<p>Consider a $9 \times 9$ matrix that consists of $9$ block matrices of $3 \times 3$. Let each $3 \times 3$ block be a game of tic-tac-toe. For each game, label the $9$ cells of the game from $1$ to $9$ in order from left to right, then top to bottom; call this the cell number. Label the $9$ games of the big matrix $1$ to $9$ in the same order; call this the game number.</p> <p>The rules are the following:</p> <p>$1$. Player $1$ starts with any game number and any cell number.</p> <p>$2$. Player $2$ can make a move in the game whose game number is the cell number where player $1$ made the last move.</p> <p>$3$. It continues like this, where player $1$ then plays in the game whose game number is the cell number where player $2$ made the last move.</p> <p>$4$. Special case: when a player is supposed to play in game $X$, but game $X$ is already won (may not be full)/lost (may not be full)/drawn (is full), then he may choose to play in any game he wants.</p> <p>$5$. Winning: whenever a player has three winning games such that the three games line up either horizontally, vertically or across the diagonals, he wins.<br> <img src="https://i.sstatic.net/GQwiN.png" alt="Tic tac toe"></p> <p>It is easy to see why we call it tic-tac-toe $\times$ tic-tac-toe.</p> <p>Now the question:</p> <blockquote> <p>We know tic-tac-toe has a non-losing strategy. Does tic-tac-toe $\times$ tic-tac-toe have a non-losing strategy? If so, what is it? In general, what is a good strategy?</p> </blockquote> <p>PS: This is a fun game. What was originally a 'good move' now sends your opponent to a 'good game position', so it is more complicated.</p>
<p>The first question, whether there is a non-losing strategy, I have an answer for: Yes. </p> <p>Since this is a finite two-person perfect-information game without chance, at least one player must have a non-losing strategy, guaranteed by <a href="http://en.wikipedia.org/wiki/Zermelo%27s_theorem_%28game_theory%29" rel="noreferrer">Zermelo's theorem</a> (of game theory).</p> <p>For Tic-Tac-Toe-related games, it can be proven that the first player has this non-losing strategy. (Whether it is a winning strategy depends on whether or not the second player has a non-losing strategy.)</p> <p>The argument goes something like this (Player 1 = $P_1$, Player 2 = $P_2$): Suppose there is a non-losing strategy $S$ for $P_2$. Then $P_1$ will start the game with a random move $X$, and for whatever $P_2$ will do, follow strategy $S$ (thus $P_1$ takes on the role of being the second player). Since $S$ is a non-losing strategy, $P_1$ will not lose, which means $S$ is a non-losing strategy for $P_1$.</p> <p>Note that, if strategy $S$ ever calls for making the move $X$ (which was the original random move), $P_1$ may simply do another random move $X_2$ and then keep following $S$ as if $X_2$ had been the original random move. This is further explained on pages 12-13 <a href="http://uu.diva-portal.org/smash/get/diva2:631339/FULLTEXT01.pdf" rel="noreferrer">here</a>.</p> <p>(EDIT: Since the first move of $P_1$ affects what move $P_2$ can do (by rule 2), the latter argument may not apply to this game. Anyone?)</p>
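Zermelo's backward-induction argument can be made concrete for ordinary tic-tac-toe, whose game tree is small enough to solve exhaustively. A minimal sketch of my own (not from the answer) confirming that the value of the game is a draw, so the first player indeed has a non-losing strategy:

```python
from functools import lru_cache

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Backward induction a la Zermelo: +1 if X can force a win,
    -1 if O can, 0 if optimal play ends in a draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if '.' not in board:
        return 0
    nxt = 'O' if player == 'X' else 'X'
    vals = [value(board[:i] + player + board[i + 1:], nxt)
            for i, cell in enumerate(board) if cell == '.']
    return max(vals) if player == 'X' else min(vals)

print(value('.' * 9, 'X'))   # 0: neither player can force a win
```

Doing the same for the $81$-cell game is far more expensive because of the move-routing rule, but the existence guarantee from Zermelo's theorem is the same.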
<p>I think it is possible to "control" the board by having many sub-games "point" to a square that has already been won in the larger game, preventing your opponent from blocking you in that square, and driving you towards marking other squares, so eventually you have 2 in a row in many sub-games, eventually forcing your opponent to let you go on a sub-game-winning spree.</p> <p>For example, taking square 3 on a number of boards will essentially give your opponent sub-game #3, but from there on, you could start taking squares 1 and 2, or 5 and 7, or 6 and 9; all of which "point" to square 3 in their respective games. Thus, in order to block you in a sub-game that already has such a "pointer", they must allow you to take a move wherever you want after their turn, forcing them to allow you to either take a square (at leisure) or continue to set yourself up for more "pointers". Opponents placing moves elsewhere tend to fall even further behind, as they cannot overtake your offensive lead, and can't block you efficiently.</p> <p>There is also a "gambit" strategy, where you keep selecting the same block in each sub-game thereby sacrificing one sub-game for the sake of getting a head-start in many others. </p> <p>EDIT: Elaborating on the strategy explanation</p>
differentiation
<p>I am wondering if the following converse (or modification) of the mean value theorem holds. Suppose $f(\cdot)$ is continuously differentiable on $[a,b]$. Then for all $c \in (a,b)$ there exist $x$ and $y$, with $x \neq y$, such that $$ f'(c)=\frac{f(y)-f(x)}{y-x} $$</p>
<p>Assume that $f''(c)\ne0$. Then one can find $x_1&lt;c&lt;x_2$ such that $$f'(c)={f(x_2)-f(x_1)\over x_2-x_1}\ .$$ <em>Proof.</em> Assume $f''(c)&gt;0$ and consider the auxiliary function $$g(t):=f(c+t)-f(c)-t f'(c)\ ,\tag{1}$$ which is defined in a full neighborhood of $t=0$. By Taylor's theorem one has $$g(t)=g(0)+t g'(0)+{t^2\over2}g''(0)+o(t^2)={t^2\over2}\bigl(f''(c)+o(1)\bigr)\qquad(t\to0)\ .$$ It follows that there is an $h&gt;0$ with $g(t)&gt;0$ for $0&lt;|t|\leq h$. Assuming $g(h)\geq g(-h)$ put $t_1:=-h$, and choose $t_2\in\ ]0,h]$ such that $g(t_2)=g(t_1)$, which is possible by the intermediate value theorem.</p> <p>Finally put $x_i:=c+t_i$ $(i=1,2)$. Then it follows from $(1)$ that $$f(x_2)-f(x_1)=g(t_2)-g(t_1)+(t_2-t_1)f'(c)=(x_2-x_1) f'(c)\ ,$$ which is equivalent to the claim.</p>
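The construction in this proof can be traced numerically. A sketch of my own for $f = \exp$ and $c = 0$ (so $f''(c) = 1 \neq 0$): take $h = 1$, set $t_1 = -h$, and find $t_2$ with $g(t_2) = g(t_1)$ by bisection, which is valid here since $g$ is strictly increasing on $(0, h]$:

```python
import math

f = math.exp          # test function; f''(0) = 1 != 0
c, h = 0.0, 1.0
fpc = 1.0             # f'(c) = e^0 = 1

def g(t):
    """Auxiliary function (1) from the proof: g(t) = f(c+t) - f(c) - t*f'(c)."""
    return f(c + t) - f(c) - t * fpc

# Here g(h) >= g(-h), so put t1 = -h and solve g(t2) = g(t1) on (0, h]
# by bisection (g is strictly increasing there: g'(t) = e^t - 1 > 0).
t1, target = -h, g(-h)
lo, hi = 1e-12, h
for _ in range(200):
    mid = (lo + hi) / 2
    if g(mid) < target:
        lo = mid
    else:
        hi = mid
t2 = (lo + hi) / 2

x1, x2 = c + t1, c + t2
slope = (f(x2) - f(x1)) / (x2 - x1)
print(round(slope, 6))   # matches f'(c) = 1, as the proof guarantees
```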
<p>It is not true. Let $f(t)=t^3$. Then $f'(0)=0$, but $\frac{f(y)-f(x)}{y-x}$ is never $0$.</p>
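The claim that the difference quotient never vanishes follows from the factorization $(y^3 - x^3)/(y - x) = x^2 + xy + y^2 = (x + y/2)^2 + \tfrac34 y^2$, which is positive whenever $(x,y) \neq (0,0)$. A quick exhaustive check of my own over a grid of integer pairs:

```python
# The difference quotient of f(t) = t^3 factors as
#   (y^3 - x^3) / (y - x) = x^2 + x*y + y^2 = (x + y/2)^2 + (3/4)*y^2,
# which is strictly positive for every (x, y) != (0, 0).
for x in range(-50, 51):
    for y in range(-50, 51):
        if x != y:
            q = x * x + x * y + y * y
            assert y**3 - x**3 == (y - x) * q   # the factorization is exact
            assert q > 0                        # the quotient is never 0
print("no counterexample found")
```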
combinatorics
<blockquote> <p>Consider a square of side equal to $1$. Prove that we can place inside the square a finite number of disjoint discs, with different radii of the form $1/k$ with $k$ a positive integer, such that the area of the remaining region is at most $0.0001$.</p> </blockquote> <p>If we consider all the discs of this form, their total area is $\sum_{k \geq 1}\displaystyle \pi \frac{1}{k^2} - \pi=\frac{\pi^3}{6}-\pi\simeq 2.02$ which is greater than the area of the square. (I subtracted $\pi$ because we cannot place a disc of radius $1$ inside the square).</p> <p>So the discs of this form can cover the square very well, but how can I prove that there is a disjoint family which leaves out a small portion of the area?</p>
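The area computation in the question checks out numerically (a quick sketch of my own):

```python
import math

# Total area of the candidate discs of radius 1/k for k >= 2
# (the k = 1 disc of radius 1 cannot be placed inside the unit square):
total = math.pi**3 / 6 - math.pi                        # closed form
partial = sum(math.pi / k**2 for k in range(2, 10**5))  # direct partial sum
print(round(total, 4))   # about 2.03, roughly twice the area of the square
```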
<p>I don't think this is possible for general $\epsilon$, and I doubt it's possible for remainder $0.0001$.</p> <p>Below are some solutions with remainder less than $0.01$. I produced them by randomized search from two different initial configurations. In the first one, I only placed the circle with curvature $2$ in the centre and tried placing the remaining circles randomly, beginning with curvature $12$; in the second one, I prepositioned pairs of circles that fit in the corners and did a deterministic search for the rest.</p> <p>The data structure I used was a list of interstices, each in turn consisting of a list of circles forming the interstice (where the lines forming the boundary of the square are treated as circles with zero curvature). I went through the circles in order of curvature and for each circle tried placing it snugly in each of the cusps where two circles touch in random order. If a circle didn't fit anywhere, I discarded it; if that decreased the remaining area below what was needed to get up to the target value (in this case $0.99$), I backtracked to the last decision.</p> <p>I also did this without using the circle with curvature $2$. For that case I did a complete search and found no configurations with remainder less than $0.01$. Thus, if there is a better solution in that case, it must involve placing the circles in a different order. 
(We can always transform any solution to one where each circle is placed snugly in a cusp formed by two other circles, so testing only such positions is not a restriction; however, circles with lower curvature might sit in the cusps of circles with higher curvature, and I wouldn't have found such solutions.)</p> <p>For the case including the circle with curvature $2$, the search wasn't complete (I don't think it can be done completely in this manner, without introducing further ideas), so I can't exclude that there are significantly better configurations (even ones with in-order placement), but I'll try to describe how I came to doubt that there's much room for improvement beyond $0.01$, and particularly that this can be done for arbitrary $\epsilon$.</p> <p>The reasons are both theoretical and numerical. Numerically, I found that this seems to be a typical combinatorial optimization problem: There are many local minima, and the best ones are quite close to each other. It's easy to get to $0.02$; it's relatively easy to get to $0.011$; it takes quite a bit more optimization to get to $0.01$; and beyond that practically all the solutions I found were within $0.0002$ or so of $0.01$. So a solution with $0.0001$ would have to be of a completely different kind from everything that I found.</p> <p>Now of course <em>a priori</em> there might be some systematic solution that's hard to find by this sort of search but can be proved to exist. That might conceivably be the case for $0.0001$, but I'm pretty sure it's not the case for general $\epsilon$. To prove that it's possible to leave a remainder less than $\epsilon$ for any $\epsilon\gt0$, one might try to argue that after some initial phase it will always be possible to fit the remaining circles into the remaining space. 
The problem is that such an argument can't work, because we're trying to fill the rational area $1$ by discarding rational multiples of $\pi$ from the total area $\pi^3/6$, so we can't do it by discarding a finite number of circles, since $\pi$ is transcendental.</p> <p>Thus we can never reach a stage where we could prove that the remaining circles will exactly fit, and hence every proof that proves we can beat an arbitrary $\epsilon$ would have to somehow show that the remaining circles can be divided into two infinite subsets, with one of them exactly fitting into the remaining gaps. Of course this, too, is possible in principle, but it seems rather unlikely; the problem strikes me as a typical messy combinatorial optimization problem with little regularity.</p> <p>A related reason not to expect a clean solution is that in an <a href="http://en.wikipedia.org/wiki/Apollonian_gasket" rel="noreferrer">Apollonian gasket</a> with integer curvatures, some integers typically occur more than once. For instance, one might try to make use of the fact that the curvatures $0$, $2$, $18$ and $32$ form a quadruple that would allow us to fill an entire half-corner with a gasket of circles of integer curvature; however, in that gasket, many curvatures, for instance $98$, occur more than once, so we'd have to make exceptions for those since we're not allowed to reuse those circles. 
Also, if you look at the gaskets produced by $0$, $2$ and the numbers from $12$ to $23$ (which are the candidates to be placed in the corners), you'll find that the fourth number increases more rapidly than the third; that is, $0$, $2$ and $18$ lead to $32$, whereas $0$, $2$ and $19$ already lead to $(\sqrt2+\sqrt{19})^2\approx33.3$; so not only can you not place all the numbers from $12$ to $23$ into the corners (since only two of them fit together and there are only four corners), but then if you start over with $24$ (which is the next number in the gasket started by $12$), you can't even continue with the same progression, since the spacing has increased. The difference would have to be compensated by the remaining space in the corners that's not part of the gaskets with the big $2$-circle, but that's too small to pick up the slack, which makes it hard to avoid dropping several of the circles in the medium range around the thirties. </p> <p>My impression from the optimization process is that we're forced to discard too much area quite early on; that is, we can't wait for some initial irregularities to settle down into some regular pattern that we can exploit. For instance, the first solution below uses all curvatures except for the following: 3 4 5 6 7 8 9 10 11 16 17 20 22 25 30 31 33 38 46 48 49 52 53 55 56 57 59 79 81 94 96 101 106 107 108 113 125 132. Already at 49 the remaining area becomes less than would be needed to fill the square. Other solutions I found differed in the details of which circles they managed to squeeze in where, but the total area always dropped below $1$ early on. Thus, it appears that it's the irregular constraints at the beginning that limit what can be achieved, and this can't be made up for by some nifty scheme extending to infinity. It might even be possible to prove by an exhaustive search that some initial set of circles can't be placed without discarding too much area. 
To be rigorous, this would have to take a lot more possibilities into account than my search did (since the circles could be placed in any order), but I don't see why allowing the bigger circles to be placed later on should make such a huge difference, since there's precious little wiggle room for their placement to begin with if we want to fit in most of the ones between $12$ and $23$.</p> <p>So here are the solutions I found with remainder less than $0.01$. The configurations shown are both filled up to an area $\gtrsim0.99$ and have a tail of tiny circles left worth about another $0.0002$. For the first one, I checked with integer arithmetic that none of the circles overlap. (In fact I placed the circles with integer arithmetic, using floating-point arithmetic to find an approximation of the position and a single iteration of Newton's method in integer arithmetic to correct it.)</p> <p>The first configuration has $10783$ circles and was found using repeated randomized search starting with only the circle of curvature $2$ placed; I think I ran something like $100$ separate trials to find this one, and something like $1$ in $50$ of them found a solution with remainder below $0.01$; each trial took a couple of seconds on a MacBook Pro.</p> <p><img src="https://i.sstatic.net/K1MNc.png" alt="randomized"></p> <p>The second configuration has $17182$ circles and was found by initially placing pairs of circles with curvatures $(12,23)$, $(13,21)$, $(14,19)$ and $(15,18)$ touching each other in the corners and tweaking their exact positions by hand; the tweaking brought a gain of something like $0.0005$, which brought the remainder down below $0.01$. 
The search for the remaining circles was carried out deterministically, in that I always tried first to place a circle into the cusps formed by the smaller circles and the boundary lines; this was to keep as much contiguous space as possible available in the cusps between the big circle and the boundary lines.</p> <p><img src="https://i.sstatic.net/5zUI0.png" alt="pre-placed"></p> <p>I also tried placing pairs of circles with curvatures $(13,21)$, $(14,19)$, $(15,18)$ and $(16,17)$ in the corners, but only got up to $0.9896$ with that.</p> <p>Here are high-resolution version of the images; they're scaled down in this column, but you can open them in a new tab/window (where you might have to click on them to toggle the browser's autoscale feature) to get the full resolution.</p> <p>Randomized search:</p> <p><img src="https://i.sstatic.net/JWiwd.png" alt="randomized hi-res"></p> <p>With pre-placed circles:</p> <p><img src="https://i.sstatic.net/aFFfN.png" alt="enter image description here"></p>
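The cusp curvatures quoted above (e.g. $0$, $2$, $18$ leading to $32$) come from Descartes' circle theorem, which is what makes "placing a circle snugly in a cusp" computable. A minimal check of my own (the function name is an invention):

```python
import math

def cusp_curvature(k1, k2, k3):
    """Descartes' circle theorem: curvature of the circle inscribed in the
    cusp of three mutually tangent circles; a straight boundary line of the
    square counts as a circle of curvature 0."""
    return k1 + k2 + k3 + 2 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)

# A side of the square (0), the big circle (2) and a corner circle (18)
# lead to 32, giving the integer quadruple 0, 2, 18, 32:
print(cusp_curvature(0, 2, 18))             # 32.0
# ...whereas 0, 2 and 19 already lead to (sqrt(2) + sqrt(19))^2:
print(round(cusp_curvature(0, 2, 19), 2))   # 33.33
```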
<p>Let's roll up our sleeves here. Let $C_k$ denote the disk of radius $1/k$. Suppose we can cover an area of $\ge 0.9999$ using a set of non-overlapping disks inside the unit square, and let $S$ denote the set of integers $k$ such that $C_k$ is used in this cover.<br> Then we require</p> <p>$$\sum_{k\in S}\frac{1}{k^2} \ge 0.9999/\pi \approx 0.318278$$</p> <p>As the OP noted, we know that $1 \not\in S$. This leaves</p> <p>$$\sum_{k\ge2}\frac{1}{k^2} \approx 0.644934$$</p> <p>which gives us $0.644934 - 0.318278 = 0.326656$ 'spare capacity' to play with.</p> <p><strong>Case 1</strong> Suppose $2 \in S$. Then the largest disk that will fit into the spaces in the corners left by $C_2$ is $C_{12}$, so we must throw $3,...,11$ out of $S$. This wastes</p> <p>$$\sum_{k=3}^{11}\frac{1}{k^2}\approx0.308032$$</p> <p>and we are close to using up our spare capacity: we would be left with $0.326656-0.308032=0.018624$ to play with. </p> <p><strong>Case 2</strong> Now suppose $2 \not\in S$. Then we can fit $C_3$ and $C_4$ into the unit square, but not $C_5$. So we waste</p> <p>$$\frac{1}{2^2} + \frac{1}{5^2} = 0.29$$</p> <p>leaving us with $0.326656-0.29=0.036656$ to play with. </p> <p>Neither of these cases fills me with confidence that this thing is doable.</p>
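The bookkeeping in this answer can be reproduced in a few lines (my own sketch; values rounded as above):

```python
import math

capacity = math.pi**2 / 6 - 1                 # sum over k >= 2 of 1/k^2 ~ 0.644934
needed = 0.9999 / math.pi                     # ~ 0.318278
spare = capacity - needed                     # ~ 0.326656 of 'spare capacity'

waste1 = sum(1 / k**2 for k in range(3, 12))  # Case 1: discard C_3, ..., C_11
waste2 = 1 / 2**2 + 1 / 5**2                  # Case 2: discard C_2 and C_5

print(round(spare - waste1, 6))               # Case 1 leaves ~ 0.018624
print(round(spare - waste2, 6))               # Case 2 leaves ~ 0.036656
```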
logic
<p>Given the set of standard axioms (I'm not asking for proof of those), do we know for sure that a proof exists for all unproven theorems? For example, I believe the Goldbach Conjecture is not proven even though we "consider" it true.</p> <p>Phrased another way, have we <em>proven</em> that if a mathematical statement is true, a proof of it exists? That, therefore, anything that is true can be proven, and anything that cannot be proven is not true? Or, is there a counterexample to that statement?</p> <p>If it hasn't been proven either way, do we have a strong idea one way or the other? Is it generally thought that some theorems can have no proof, or not?</p>
<p>Relatively recent discoveries yield a number of so-called 'natural independence' results that provide much more natural examples of independence than does Gödel's example based upon the liar paradox (or other syntactic diagonalizations). As an example of such results, I'll sketch a simple example due to Goodstein of a concrete number theoretic theorem whose proof is independent of formal number theory PA <a href="https://en.wikipedia.org/wiki/Peano_axioms#First-order_theory_of_arithmetic" rel="nofollow noreferrer">(Peano Arithmetic)</a> (following [Sim]).</p> <p>Let <span class="math-container">$\,b\ge 2\,$</span> be a positive integer. Any nonnegative integer <span class="math-container">$n$</span> can be written uniquely in base <span class="math-container">$b$</span> <span class="math-container">$$\smash{n\, =\, c_1 b^{\large n_1} +\, \cdots + c_k b^{\large n_k}} $$</span></p> <p>where <span class="math-container">$\,k \ge 0,\,$</span> and <span class="math-container">$\, 0 &lt; c_i &lt; b,\,$</span> and <span class="math-container">$\, n_1 &gt; \ldots &gt; n_k \ge 0,\,$</span> for <span class="math-container">$\,i = 1, \ldots, k.$</span></p> <p>For example the base <span class="math-container">$\,2\,$</span> representation of <span class="math-container">$\,266\,$</span> is <span class="math-container">$$266 = 2^8 + 2^3 + 2$$</span></p> <p>We may extend this by writing each of the exponents <span class="math-container">$\,n_1,\ldots,n_k\,$</span> in base <span class="math-container">$\,b\,$</span> notation, then doing the same for each of the exponents in the resulting representations, <span class="math-container">$\ldots,\,$</span> until the process stops. This yields the so-called 'hereditary base <span class="math-container">$\,b\,$</span> representation of <span class="math-container">$\,n$</span>'. 
For example the hereditary base <span class="math-container">$2$</span> representation of <span class="math-container">$\,266\,$</span> is <span class="math-container">$${266 = 2^{\large 2^{2+1}}\! + 2^{2+1} + 2} $$</span></p> <p>Let <span class="math-container">$\,B_{\,b}(n)$</span> be the nonnegative integer which results if we take the hereditary base <span class="math-container">$\,b\,$</span> representation of <span class="math-container">$\,n\,$</span> and then syntactically replace each <span class="math-container">$\,b\,$</span> by <span class="math-container">$\,b+1,\,$</span> i.e. <span class="math-container">$\,B_{\,b}\,$</span> is a base change operator that 'Bumps the Base' from <span class="math-container">$\,b\,$</span> up to <span class="math-container">$\,b+1.\,$</span> For example bumping the base from <span class="math-container">$\,2\,$</span> to <span class="math-container">$\,3\,$</span> in the prior equation yields <span class="math-container">$${B_{2}(266) = 3^{\large 3^{3+1}}\! + 3^{3+1} + 3\quad\ \ \ }$$</span></p> <p>Consider a sequence of integers obtained by repeatedly applying the operation: bump the base then subtract one from the result. For example, iteratively applying this operation to <span class="math-container">$\,266\,$</span> yields <span class="math-container">$$\begin{eqnarray} 266_0 &amp;=&amp;\ 2^{\large 2^{2+1}}\! + 2^{2+1} + 2\\ 266_1 &amp;=&amp;\ 3^{\large 3^{3+1}}\! + 3^{3+1} + 3 - 1\ =\ B_2(266_0) - 1 \\ ~ \ &amp;=&amp;\ 3^{\large 3^{3+1}}\! + 3^{3+1} + 2 \\ 266_2 &amp;=&amp;\ 4^{\large 4^{4+1}}\! + 4^{4+1} + 1\qquad\! =\ B_3(266_1) - 1 \\ 266_3 &amp;=&amp;\ 5^{\large5^{5+1}}\! + 5^{5+1}\phantom{ + 2}\qquad\ =\ B_4(266_2) - 1 \\ 266_4 &amp;=&amp;\ 6^{\large 6^{6+1}}\! + \color{#0a0}{6^{6+1}\! - 1} \\ ~ \ &amp;&amp;\ \textrm{using}\quad \color{#0a0}{6^7\ -\,\ 1}\ =\ \color{#c00}{5555555}\, \textrm{ in base } 6 \\ ~ \ &amp;=&amp;\ 6^{\large 6^{6+1}}\! 
+ \color{#c00}5\cdot 6^6 + \color{#c00}5\cdot 6^5 + \,\cdots + \color{#c00}5\cdot 6 + \color{#c00}5 \\ 266_5 &amp;=&amp;\ 7^{\large 7^{7+1}}\! + 5\cdot 7^7 + 5\cdot 7^5 +\, \cdots + 5\cdot 7 + 4 \\ &amp;\vdots &amp; \\ 266_{k+1} &amp;=&amp; \ \qquad\quad\ \cdots\qquad\quad\ = \ B_{k+2}(266_k) - 1 \\ \end{eqnarray}$$</span></p> <p>In general, if we start this procedure at the integer <span class="math-container">$\,n\,$</span> then we obtain what is known as the <em>Goodstein sequence</em> starting at <span class="math-container">$\,n.$</span></p> <p>More precisely, for each nonnegative integer <span class="math-container">$\,n\,$</span> we recursively define a sequence of nonnegative integers <span class="math-container">$\,n_0,\, n_1,\, \ldots ,\, n_k,\ldots\,$</span> by <span class="math-container">$$\begin{eqnarray} n_0\ &amp;:=&amp;\ n \\ n_{k+1}\ &amp;:=&amp;\ \begin{cases} B_{k+2}(n_k) - 1 &amp;\mbox{if }\ n_k &gt; 0 \\ \,0 &amp;\mbox{if }\ n_k = 0 \end{cases} \\ \end{eqnarray}$$</span></p> <p>If we examine the above Goodstein sequence for <span class="math-container">$\,266\,$</span> numerically we find that the sequence initially increases extremely rapidly:</p> <p><span class="math-container">$$\begin{eqnarray} 2^{\large 2^{2+1}}\!+2^{2+1}+2\ &amp;\sim&amp;\ 2^{\large 2^3} &amp;\sim&amp;\, 3\cdot 10^2 \\ 3^{\large 3^{3+1}}\!+3^{3+1}+2\ &amp;\sim&amp;\ 3^{\large 3^4} &amp;\sim&amp;\, 4\cdot 10^{38} \\ 4^{\large 4^{4+1}}\!+4^{4+1}+1\ &amp;\sim&amp;\ 4^{\large 4^5} &amp;\sim&amp;\, 3\cdot 10^{616} \\ 5^{\large 5^{5+1}}\!+5^{5+1}\ \ \phantom{+ 2} \ &amp;\sim&amp;\ 5^{\large 5^6} &amp;\sim&amp;\, 3\cdot 10^{10921} \\ 6^{\large 6^{6+1}}\!+5\cdot 6^{6}\quad\!+5\cdot 6^5\ \:+\cdots +5\cdot 6\ \ +5\ &amp;\sim&amp;\ 6^{\large 6^7} &amp;\sim&amp;\, 4\cdot 10^{217832} \\ 7^{\large 7^{7+1}}\!+5\cdot 7^{7}\quad\!+5\cdot 7^5\ \:+\cdots +5\cdot 7\ \ +4\ &amp;\sim&amp;\ 7^{\large 7^8} &amp;\sim&amp;\, 1\cdot 10^{4871822} \\ 8^{\large 8^{8+1}}\!+5\cdot 8^{8}\quad\!+5\cdot 
8^5\ \: +\cdots +5\cdot 8\ \ +3\ &amp;\sim&amp;\ 8^{\large 8^9} &amp;\sim&amp;\, 2\cdot 10^{121210686} \\ 9^{\large 9^{9+1}}\!+5\cdot 9^{9}\quad\!+5\cdot 9^5\ \: +\cdots +5\cdot 9\ \ +2\ &amp;\sim&amp;\ 9^{\large 9^{10}} &amp;\sim&amp;\, 5\cdot 10^{3327237896} \\ 10^{\large 10^{10+1}}\!\!\!+5\cdot 10^{10}\!+5\cdot 10^5\!+\cdots +5\cdot 10+1\ &amp;\sim&amp;\ 10^{\large 10^{11}}\!\!\!\! &amp;\sim&amp;\, 1\cdot 10^{100000000000} \\ \end{eqnarray}$$</span></p> <p>Nevertheless, despite numerical first impressions, one can prove that this sequence converges to <span class="math-container">$\,0.\,$</span> In other words, <span class="math-container">$\,266_k = 0\,$</span> for all sufficiently large <span class="math-container">$\,k.\,$</span> This surprising result is due to Goodstein <span class="math-container">$(1944)$</span> who actually proved the same result for <em>all</em> Goodstein sequences:</p> <p><strong>Goodstein's Theorem</strong> <span class="math-container">$\ $</span> For all <span class="math-container">$\,n\,$</span> there exists <span class="math-container">$\,k\,$</span> such that <span class="math-container">$\,n_k = 0.\,$</span> In other words, every Goodstein sequence converges to <span class="math-container">$\,0.$</span></p> <p>The secret underlying Goodstein's theorem is that hereditary expression of <span class="math-container">$\,n\,$</span> in base <span class="math-container">$\,b\,$</span> mimics an ordinal notation for all ordinals less than <a href="https://en.wikipedia.org/wiki/Epsilon_numbers_%28mathematics%29" rel="nofollow noreferrer">epsilon nought</a> <span class="math-container">$\,\varepsilon_0 = \omega^{\large \omega^{\omega^{\Large\cdot^{\cdot^\cdot}}}}\!\!\! =\, \sup \{ \omega,\, \omega^{\omega}\!,\, \omega^{\large \omega^{\omega}}\!,\, \omega^{\large \omega^{\omega^\omega}}\!,\, \dots\, \}$</span>. For such ordinals, the base bumping operation leaves the ordinal fixed, but subtraction of one decreases the ordinal. 
But these ordinals are well-ordered, which allows us to conclude that a Goodstein sequence eventually converges to zero. Goodstein actually proved his theorem for a general increasing base-bumping function <span class="math-container">$\,f:\Bbb N\to \Bbb N\,$</span> (vs. <span class="math-container">$\,f(b)=b+1\,$</span> above). He proved that convergence of all such <span class="math-container">$f$</span>-Goodstein sequences is equivalent to transfinite induction below <span class="math-container">$\,\varepsilon_0.$</span></p> <p>One of the primary measures of strength for a system of logic is the size of the largest ordinal for which transfinite induction holds. It is a classical result of Gentzen that the consistency of PA (Peano Arithmetic, or formal number theory) can be proved by transfinite induction on ordinals below <span class="math-container">$\,\varepsilon_0.\,$</span> But we know from Gödel's second incompleteness theorem that the consistency of PA cannot be proved in PA. It follows that neither can Goodstein's theorem be proved in PA. 
Thus we have an example of a very simple concrete number theoretical statement in PA whose proof is nonetheless independent of PA.</p> <p>Another way to see that Goodstein's theorem cannot be proved in PA is to note that the sequence takes too long to terminate, e.g.</p> <p><span class="math-container">$$ 4_k\,\text{ first reaches}\,\ 0\ \,\text{for }\, k\, =\, 3\cdot(2^{402653211}\!-1)\,\sim\, 10^{121210695}$$</span></p> <p>In general, if 'for all <span class="math-container">$\,n\,$</span> there exists <span class="math-container">$\,k\,$</span> such that <span class="math-container">$\,P(n,k)$</span>' is provable, then it must be witnessed by a provably computable choice function <span class="math-container">$\,F\!:\, $</span> 'for all <span class="math-container">$\,n\!:\ P(n,F(n)).\,$</span>' But the problem is that <span class="math-container">$\,F(n)\,$</span> grows too rapidly to be provably computable in PA, see [Smo] <span class="math-container">$1980$</span> for details.</p> <p>Goodstein's theorem was one of the first examples of so-called 'natural independence phenomena', which are considered by most logicians to be more natural than the metamathematical incompleteness results first discovered by Gödel. Other finite combinatorial examples were discovered around the same time, e.g. a finite form of Ramsey's theorem, and a finite form of Kruskal's tree theorem, see [KiP], [Smo] and [Gal]. [KiP] presents the Hercules vs. Hydra game, which provides an elementary example of a finite combinatorial tree theorem (a more graphical tree-theoretic form of Goodstein's sequence).</p> <p>Kruskal's tree theorem plays a fundamental role in computer science because it is one of the main tools for showing that certain orderings on trees are well-founded. These orderings play a crucial role in proving the termination of rewrite rules and the correctness of the Knuth-Bendix equational completion procedures. 
See [Gal] for a survey of results in this area.</p> <p>See the references below for further details, especially Smorynski's papers. Start with Rucker's book if you know no logic, then move on to Smorynski's papers, and then the others, which are original research papers. For more recent work, see the references cited in Gallier, especially to Friedman's school of 'Reverse Mathematics', and see [JSL].</p> <p><strong>References</strong></p> <p>[Gal] Gallier, Jean. <a href="ftp://ftp.cis.upenn.edu/pub/papers/gallier/kruskal1.pdf" rel="nofollow noreferrer">What's so special about Kruskal's theorem and the ordinal <span class="math-container">$\Gamma_0$</span>?</a><br /> A survey of some results in proof theory,<br /> Ann. Pure and Applied Logic, 53 (1991) 199-260.</p> <p>[HFR] Harrington, L.A. et.al. (editors)<br /> <a href="https://www.sciencedirect.com/bookseries/studies-in-logic-and-the-foundations-of-mathematics/vol/117" rel="nofollow noreferrer">Harvey Friedman's Research on the Foundations of Mathematics,</a> Elsevier 1985.</p> <p>[KiP] Kirby, Laurie, and Paris, Jeff. <a href="https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.107.3303" rel="nofollow noreferrer">Accessible independence results for Peano arithmetic,</a><br /> <em>Bull. London Math. Soc.,</em> 14 (1982), 285-293.</p> <p>[JSL] <a href="https://web.archive.org/web/20140910164006/http://projecteuclid.org/DPubS?service=UI&amp;version=1.0&amp;verb=Display&amp;handle=euclid.jsl/1183742622" rel="nofollow noreferrer">The Journal of Symbolic Logic,* v. 53, no. 2, 1988</a>, <a href="https://www.jstor.org/stable/i339588" rel="nofollow noreferrer">jstor</a>, <a href="https://www.cambridge.org/core/journals/journal-of-symbolic-logic/issue/D570C6A9732ADFA7511C249AF3EF01AD" rel="nofollow noreferrer">cambridge.org</a><br /> This issue contains papers from the Symposium &quot;Hilbert's Program Sixty Years Later&quot;.</p> <p>[Kol] Kolata, Gina. 
<a href="https://www.science.org/doi/abs/10.1126/science.218.4574.779" rel="nofollow noreferrer">Does Goedel's Theorem Matter to Mathematics?</a><br /> <em>Science</em> 218 11/19/1982, 779-780; reprinted in [HFR]</p> <p>[Ruc] Rucker, Rudy. <a href="https://rads.stackoverflow.com/amzn/click/com/0691121273" rel="nofollow noreferrer" rel="nofollow noreferrer">Infinity and The Mind,</a> 1995, Princeton Univ. Press.</p> <p>[Sim] Simpson, Stephen G. <a href="https://web.archive.org/web/20210507012644/https://groups.google.com/forum/message/raw?msg=sci.math/KQ4Weqk4TmE/LE_Wfsk00H4J" rel="nofollow noreferrer">Unprovable theorems and fast-growing functions,</a><br /> <em>Contemporary Math.</em> 65 1987, 359-394.</p> <p>[Smo] Smorynski, Craig. (all three articles are reprinted in [HFR])<br /> Some rapidly growing functions, <em>Math. Intell.,</em> 2 1980, 149-154.<br /> The Varieties of Arboreal Experience, <em>Math. Intell.,</em> 4 1982, 182-188.<br /> &quot;Big&quot; News from Archimedes to Friedman, <em>Notices AMS,</em> 30 1983, 251-256.</p> <p>[Spe] Spencer, Joel. <a href="https://www.maa.org/sites/default/files/pdf/upload_library/22/Ford/Spencer669-675.pdf" rel="nofollow noreferrer">Large numbers and unprovable theorems,</a><br /> <em>Amer. Math. Monthly,</em> Dec 1983, 669-675.</p>
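<p>As a concrete companion to the answer above: the base-change operator <span class="math-container">$B_b$</span> and the Goodstein recursion are short to implement. The following Python sketch (function names are mine) reproduces the worked computation for <span class="math-container">$266$</span> and the short sequence starting at <span class="math-container">$3$</span>:</p>

```python
def bump(n, b):
    """B_b(n): write n in hereditary base b, then replace every b by b + 1."""
    if n == 0:
        return 0
    total, power = 0, 0
    while n:
        n, d = divmod(n, b)
        if d:
            # exponents are themselves hereditarily rewritten, hence the recursion
            total += d * (b + 1) ** bump(power, b)
        power += 1
    return total

def goodstein(n, max_steps=50):
    """Terms n_0, n_1, ... of the Goodstein sequence, stopping at 0 (or max_steps)."""
    seq, b = [n], 2
    while seq[-1] > 0 and len(seq) < max_steps:
        seq.append(bump(seq[-1], b) - 1)
        b += 1
    return seq

# B_2(266) = 3^(3^(3+1)) + 3^(3+1) + 3, matching the worked example above
print(bump(266, 2) == 3**81 + 3**4 + 3)  # True
print(goodstein(3))  # [3, 3, 3, 2, 1, 0]
```

<p>Starting at <span class="math-container">$3$</span> the sequence already terminates after a few steps; starting at <span class="math-container">$4$</span> the terms grow astronomically before (provably) returning to <span class="math-container">$0$</span>, so don't try to print them.</p>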
<p>Gödel was able to construct a statement that says "this statement is not provable."</p> <p>The proof is something like this. First create an enumeration scheme of written documents. Then create a statement in number theory "$P(x,y,z)$", which means "if $x$ is interpreted as a computer program, and we input the value $y$, then the value $z$ is the output." (This part was quite hard, but intuitively you can see it could be done.)</p> <p>Then write a computer program that checks proofs. Creating proofs is undecidable, and it is hard to create a program to do that. But a program to check a proof can be created. Let's suppose this program becomes the literal number $n$ in our enumeration scheme. Then we can create a statement in number theory "$Q(x)$"${}={}$"$\exists y:P(n,\text{cat}(x,y),1)$". Here $\text{cat}(x,y)$ concatenates a written statement in number theory $x$ with its proof $y$. So $Q(x)$ says "$x$ is provable."</p> <p>Now construct in number theory a formula $S(x,y)$, which means take the statement enumerated by $x$, and whenever you see the symbol $x$ in it, substitute it with the literal number represented by $y$.</p> <p>Now consider the statement "$T(x)$"${}={}$"$\text{not} \ Q(S(x,x))$". Let's suppose this enumerates as the number $m$.</p> <p>Then "$T(m)$" is a statement in number theory that says "this statement is not provable."</p> <p>Now suppose "$T(m)$" is provable. Then it is true. But if it is true, then it is not provable (because that is what the statement says).</p> <p>So "$T(m)$" is clearly not provable. Hence it is true.</p> <p>I know I am missing some important technical issues. I'll answer them as best I can when they are asked. But that is the rough outline of the proof of Gödel's incompleteness theorem.</p>
logic
<p>Provided we have this truth table where "$p\implies q$" means "if $p$ then $q$":</p> <p>$$\begin{array}{|c|c|c|} \hline p&amp;q&amp;p\implies q\\ \hline T&amp;T&amp;T\\ T&amp;F&amp;F\\ F&amp;T&amp;T\\ F&amp;F&amp;T\\\hline \end{array}$$</p> <p>My understanding is that "$p\implies q$" means "when there is $p$, there is $q$". The second row in the truth table where $p$ is true and $q$ is false would then contradict "$p\implies q$" because there is no $q$ when $p$ is present.</p> <p>Why then, does the third row of the truth table not contradict "$p\implies q$"? If $q$ is true when $p$ is false, then $p$ is not a condition of $q$.</p> <p>I have not taken any logic class so please explain it in layman's terms.</p> <hr> <blockquote> <p><strong>Administrative note.</strong> You may experience being directed here even though your question was actually about line 4 of the truth table instead. In that case, see the companion question <a href="https://math.stackexchange.com/questions/48161/in-classical-logic-why-is-p-rightarrow-q-true-if-both-p-and-q-are-false">In classical logic, why is $(p\Rightarrow q)$ True if both $p$ and $q$ are False?</a> And even if your original worry was about line 3, it might be useful to skim the other question anyway; many of the answers to either question attempt to explain <em>both</em> lines.</p> </blockquote>
<p>If you don't put any money into the soda-pop machine, and it gives you a bottle of soda anyway, do you have grounds for complaint? Has it violated the principle, "if you put money in, then a soda comes out"? I wouldn't think you have grounds for complaint. If the machine gives a soda to every passerby, then it is still obeying the principle that if one puts money in, one gets a soda out. </p> <p>Similarly, the only grounds for complaint against $p\to q$ is the situation where $p$ is true, but $q$ is false. This is why the only F entry in the truth table occurs in this row. </p> <p>If you imagine putting an F on the row to which you refer, the truth table becomes the same as what you would expect for $p\iff q$, but we don't expect that "if p, then q" has the same meaning as "p if and only if q". </p>
<p>$p\Rightarrow q$ is an assertion that says something about situations where $p$ is true, namely that if we find ourselves in a world where $p$ is true, then $q$ will be true (or otherwise $p\Rightarrow q$ lied to us).</p> <p>However, if we find ourselves in a world where $p$ is <em>false</em>, then it turns out that $p\Rightarrow q$ did not actually promise us anything. Therefore it can't possibly have lied to us -- you could complain about it being <em>irrelevant</em> in that situation, but that doesn't make it <em>false</em>. It has delivered everything it promised, because it turned out that it actually promised nothing.</p> <p>As an everyday example, it is true that "If John jumps into a lake, then John will get wet". The truth of this is not affected by the fact that there are other ways to get wet. If, on investigating, we discover that John didn't jump in to the lake, but merely stood in the rain and now is wet, that doesn't mean that it is no longer true that people who jump into lakes get wet.</p> <p><strong>However</strong>, one should note that these arguments are ultimately not the reason why $\Rightarrow$ has the truth table it has. The real reason is because that truth table is the <em>definition</em> of $\Rightarrow$. Expressing $p\Rightarrow q$ as "If $p$, then $q$" is not a definition of $\Rightarrow$, but an explanation of how the words "if" and "then" are used by mathematicians, given that one already knows how $\Rightarrow$ works. The intuitive explanations are supposed to convince you (or not) that it is reasonable to use those two English words to speak about logical implication, not that logical implication ought to work that way in the first place.</p>
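<p>In programming terms, $p\Rightarrow q$ has exactly the truth table of <code>(not p) or q</code>; a tiny Python enumeration (just an illustration of the table above) reproduces all four rows:</p>

```python
# p => q is truth-functionally equivalent to (not p) or q
rows = [(p, q, (not p) or q) for p in (True, False) for q in (True, False)]
for p, q, r in rows:
    print(p, q, r)
# True True True / True False False / False True True / False False True
```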
probability
<p>Suppose I play a lottery that has 300 tickets. I can only buy one ticket per draw. Statistically speaking, shouldn't I win once every 300 draws?</p> <p>Is it more complicated than this?</p> <p><strong>Edit</strong></p> <p>This question has generated much more response than I had imagined. Thank you all, for your input and your great explanations.</p> <p>To go into further details: The lottery has 300 tickets. You can buy 1 ticket per draw. Every draw, is all new tickets, so in essence, you can hold 1 out of 300 numbers, at each draw.</p> <p><strong>Edit</strong></p> <p>Just to give some funny (not really funny) side info. I have now played 1,104 times, and still have not won anything. I suppose I am EXTREMELY unlucky.</p>
<p>If you flip a coin twice and call it in the air both times, will you necessarily call either flip correctly? Will you necessarily call either flip incorrectly? It isn't that simple.</p> <p>The very general idea here--by the <a href="http://en.wikipedia.org/wiki/Law_of_large_numbers" rel="nofollow">Law of Large Numbers</a>--is that, if you play this lottery "enough" times, then the fraction of times that you win (the <em>empirical probability</em>) will be "close to" $1/300$ (the <em>theoretical probability</em>). This is a fantastically imprecise notion (hence the quotes), as your tests should make clear. After $1104$ tests, you still haven't won anything, so while $0$ is arguably "close to" $1/300$, you may want your empirical probability to be closer, which means that you haven't played "enough" times, yet.</p>
<p>Yes it is. The probability of winning $k$ times out of $n$ lotteries is determined from a binomial distribution:</p> <p>$$P(K=k) = \binom{n}{k} p^k (1-p)^{n-k}$$</p> <p>where $n=300$, $k=1$, and $p=1/300$. The answer is about $0.368494$, or $36.8\%$. The probability of winning at least once is</p> <p>$$P(K \ge 1) = 1- P(K=0) = 1-\left( \frac{299}{300} \right )^{300} \approx 0.632735$$</p> <p>or about $63.3\%$ chance of winning something. Not bad, but not $100\%$.</p>
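<p>For anyone who wants to check these figures, here is a short Python sketch (it assumes Python 3.8+ for <code>math.comb</code>):</p>

```python
from math import comb

n, p = 300, 1 / 300

p_one = comb(n, 1) * p * (1 - p) ** (n - 1)  # exactly one win in 300 draws
p_any = 1 - (1 - p) ** n                     # at least one win in 300 draws

print(round(p_one, 6), round(p_any, 6))  # 0.368494 0.632735
```

<p>Both numbers are no accident: for large $n$ with $p = 1/n$, the no-win probability $(1-1/n)^n$ tends to $1/e$, so $p_\text{any} \to 1 - 1/e \approx 0.632$.</p>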
combinatorics
<p>In competitive Pokémon-play, two players pick a team of six Pokémon out of the 718 available. These are picked independently, that is, player $A$ is unaware of player $B$'s choice of Pokémon. Some online servers let the players see the opponent's team before the match, allowing the player to change the order of its Pokémon. (Only the first matters, as this is the one that will be sent into the match first. After that, the players may switch between the chosen six freely, as explained below.) Each Pokémon is assigned four moves out of a list of moves that may or may not be unique for that Pokémon. There are currently 609 moves in the move-pool. Each move is assigned a certain type, and may be more or less effective against Pokémon of certain types. However, a Pokémon may have more than one type. In general, move effectiveness is given by $0.5\times$, $1\times$ and $2\times$. However, there are exceptions to this rule. Ferrothorn, a dual-type Pokémon of steel and grass, will take $4\times$ damage against fire moves, since both of its types are weak against fire. Every move has a certain probability of working.</p> <p>In addition, there are moves with other effects than direct damage. For instance, a move may increase one's attack, decrease your opponent's attack, or add a status deficiency on your opponent's Pokémon, such as making it fall asleep. This will make the Pokémon unable to move with a relatively high probability. If it is able to move, the status of "asleep" is lifted. Furthermore, each Pokémon has a "Nature" which increases one stat (out of Attack, Defense, Special Attack, Special Defense, Speed), while decreasing another. While no longer necessary for my argument, one could go even deeper with things such as IV's and EV's for each Pokémon, which also affect its stats.</p> <p>A player has won when all of its opponent's Pokémon are out of play. A player may change the active Pokémon freely. 
(That is, the "battles" are 1v1, but the Pokémon may be changed freely.)</p> <p>Has there been any serious mathematical research towards competitive Pokémon play? In particular, has there been proved that there is always a best strategy? What about the number of possible "positions"? If there always is a best strategy, can one evaluate the likelihood of one team beating the other, given best play from both sides? (As is done with chess engines today, given a certain position.)</p> <p>EDIT: For the sake of simplicity, I think it is a good idea to consider two positions equal when </p> <p>1) Both positions have identical teams in terms of Pokémon. (Natures, IVs, EVs and stats are omitted.) As such, one can create a one-to-one correspondence between the set of $12$ Pokémon in position $A$ and the $12$ in position $B$ by mapping $a_A \mapsto a_B$, where $a_A$ is Pokémon $a$ in position $A$.</p> <p>2) $a_A$ and $a_B$ have the same moves for all $a\in A, B$.</p>
<p>Has there been serious <em>research</em>? Probably not. Have there been modeling efforts? Almost certainly, and they probably range anywhere from completely ham-handed to somewhat sophisticated.</p> <p>At its core, the game is finite; there are two players and a large but finite set of strategies. As such, existence of a mixed-strategy equilibrium is guaranteed. This is the result of Nash, and is actually an interesting application of the Brouwer Fixed-Point theorem.</p> <p>That said, the challenge isn't really in the math; if you could set up the game, it's pretty likely that you could solve it using some linear programming approach. The challenge is in modeling the payoff, capturing the dynamics and, to some degree (probably small), handling the few sources of intrinsic uncertainty (i.e. uncertainty generated by randomness, such as hit chance).</p> <p>Really, though, this is immaterial since the size of the action space is so large as to be basically untenable. LP solutions suffer a curse of dimensionality -- the more actions in your discrete action space, the more things the algorithm has to look at, and hence the longer it takes to solve.</p> <p>Because of this, most tools that people use are inherently Monte Carlo-based -- simulations are run over and over, with new random seeds, and the likelihood of winning is measured statistically.</p> <p>These Monte Carlo methods have their down-sides, too. Certain player actions, such as switching your Blastoise for your Pikachu, are deterministic decisions. But we've already seen that the action space is too large to prescribe determinism in many cases. Handling this in practice becomes difficult. You could treat this as a random action with some probability (even though in the real world it is not random at all), and increase your number of Monte Carlo runs, or you could apply some heuristic, such as "swap to Blastoise if the enemy type is fire and my current pokemon is under half-health." 
However, writing these heuristics relies on an assumption that your breakpoints are nearly-optimal, and it's rarely actually clear that such is the case.</p> <p>As a result, games like Pokemon are interesting because optimal solutions are difficult to find. If there were 10 pokemon and 20 abilities, it would not be so fun. The mathematical complexity, if I were to wager, is probably greater than chess, owing simply to the size of the action space and the richer dynamics of the measurable outcomes. This is one of the reasons the game and the community continue to be active: people find new ideas and new concepts to explore.</p> <p>Also, the company making the game keeps making new versions. That helps.</p> <hr> <p>A final note: one of the challenges in the mathematical modeling of the game dynamics is that the combat rules are very easy to implement programmatically, but somewhat more difficult to cleanly describe mathematically. For example, one attack might do 10 damage out front, and then 5 damage per round for 4 rounds. Other attacks might have cooldowns, and so forth. This is easy to implement in code, but more difficult to write down a happy little equation for. As such, it's a bit more challenging to do things like try to identify gradients etc. analytically, although it could be done programmatically as well. It would be an interesting application for automatic differentiation, as well.</p>
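<p>To illustrate the simulate-and-count approach concretely, here is a deliberately toy model: two fighters with made-up HP, damage, and accuracy numbers. This is not the real damage formula, just a sketch of how Monte Carlo estimation of a win rate looks in code:</p>

```python
import random

def toy_battle(rng):
    """One simulated 1v1: A hits for 30 at 90% accuracy, B hits for 40 at 60%.
    A moves first; all numbers are invented for the sake of the sketch."""
    a_hp = b_hp = 100
    while True:
        if rng.random() < 0.9:
            b_hp -= 30
        if b_hp <= 0:
            return True   # A wins
        if rng.random() < 0.6:
            a_hp -= 40
        if a_hp <= 0:
            return False  # B wins

def win_rate(trials=20_000, seed=1):
    rng = random.Random(seed)
    return sum(toy_battle(rng) for _ in range(trials)) / trials

print(win_rate())  # about 0.68: A's accuracy edge outweighs B's damage edge
```

<p>Even this toy version shows the appeal of Monte Carlo: the exact answer is a sum over negative binomial distributions, while the simulation is a dozen lines and trivially extends to richer rules.</p>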
<p>I think it's worth pointing out that even stripping away most of the complexity of the game still leaves a pretty hard problem. </p> <p>The very simplest game that can be said to bear any resemblance to Pokemon is rock-paper-scissors (RPS). (Imagine, for example, that there are only three Pokemon - let's arbitrarily call them Squirtle, Charmander, and Bulbasaur - and that Squirtle always beats Charmander, Charmander always beats Bulbasaur, and Bulbasaur always beats Squirtle.)</p> <p>Already it's unclear what "best strategy" means here. There is a unique <a href="http://en.wikipedia.org/wiki/Nash_equilibrium">Nash equilibrium</a> given by randomly playing Squirtle, Charmander, or Bulbasaur with probability exactly $\frac{1}{3}$ each, but in general just because there's a Nash equilibrium, even a unique Nash equilibrium, doesn't mean that it's the strategy people will actually gravitate to in practice. </p> <p>There is in fact a <a href="http://www.chessandpoker.com/rps_rock_paper_scissors_strategy.html">professional RPS tournament scene</a>, and in those tournaments nobody is playing the Nash equilibrium because nobody can actually generate random choices with probability $\frac{1}{3}$; instead, everyone is playing some non-Nash equilibrium strategy, and if you want to play to win (not just win $\frac{1}{3}$ of the time, which is the best you can hope for playing the Nash equilibrium) you'll instead play strategies that fare well against typical strategies you'll encounter. 
Two examples:</p> <ul> <li>Novice male players tend to open with rock, and to fall back on it when they're angry or losing, so against such players you should play paper.</li> <li>Novice RPS players tend to avoid repeating their plays too often, so if a novice player's played rock twice you should expect that they're likely to play scissors or paper next.</li> </ul> <p>There is even an <a href="http://www.rpscontest.com/">RPS programming competition scene</a> where people design algorithms to play repeated RPS games against other algorithms, and nobody's playing the Nash equilibrium in these games unless they absolutely have to. Instead the idea is to try to predict what the opposing algorithm is going to do next while trying to prevent your opponent from predicting what you'll do next. <a href="http://ofb.net/~egnor/iocaine.html">Iocaine Powder</a> is a good example of the kind of things these algorithms get up to. </p> <p>So, even the very simple-sounding question of figuring out the "best strategy" in RPS is in some sense open, and people can dedicate a lot of time both to figuring out good strategies to play against other people and good algorithms to play against other algorithms. I think it's safe to say that Pokemon is strictly more complicated than RPS, enough so even this level of analysis is probably impossible. </p> <p><strong>Edit:</strong> It's also worth pointing out that another way that Pokemon differs from a game like chess is that it is <em>imperfect information</em>: you don't know everything about your opponent's Pokemon (movesets, EV distribution, hold items, etc.) even if you happen to know what they are. That means both players should be trying to predict these hidden variables about each other's Pokemon while trying to trick the other player into making incorrect predictions. 
My understanding is that a common strategy for doing this is to use Pokemon that have very different viable movesets and try to trick your opponent into thinking you're playing one moveset when you're actually playing another. So in this respect Pokemon resembles poker more than it does chess. </p>
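<p>The "play paper against rock-heavy novices" advice above is easy to quantify. In the sketch below the opponent's mixed strategy is a hypothetical distribution for a rock-favoring novice, and the payoff is $+1$ for a win, $0$ for a tie, $-1$ for a loss:</p>

```python
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
LOSES_TO = {loser: winner for winner, loser in BEATS.items()}

def expected_payoff(move, opponent_dist):
    """Expected +1/0/-1 payoff of a pure move against a mixed opponent."""
    return opponent_dist[BEATS[move]] - opponent_dist[LOSES_TO[move]]

# hypothetical empirical distribution for a rock-favoring novice
novice = {"rock": 0.5, "paper": 0.3, "scissors": 0.2}
best = max(BEATS, key=lambda m: expected_payoff(m, novice))
print(best)  # paper

# against the Nash mix, every pure move breaks even -- there is nothing to exploit
nash = {m: 1 / 3 for m in BEATS}
print([expected_payoff(m, nash) for m in BEATS])  # [0.0, 0.0, 0.0]
```

<p>This is the whole dynamic of competitive RPS in miniature: the Nash mix guarantees you break even, and any profit has to come from predicting your opponent's deviation from it.</p>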
probability
<p>Basically, on average, how many times should one expect to roll a die before getting two consecutive sixes?</p>
<p>Instead of finding the probability distribution, and then the expectation, we can work directly with expectations. That is often a useful strategy.</p> <p>Let $a$ be the expected <em>additional</em> waiting time if we have not just tossed a $6$. At the beginning, we certainly have not just tossed a $6$, so $a$ is the required expectation. Let $b$ be the expected <em>additional</em> waiting time if we have just tossed a $6$. </p> <p>If we have not just tossed a $6$, then with probability $\frac{5}{6}$ we toss a non-$6$ (cost: $1$ toss) and our expected additional waiting time is still $a$. With probability $\frac{1}{6}$ we toss a $6$ (cost: $1$ toss) and our expected additional waiting time is $b$. Thus $$a=1+\frac{5}{6}a+\frac{1}{6}b.$$ If we have just tossed a $6$, then with probability $\frac{5}{6}$ we toss a non-$6$, and then our expected additional waiting time is $a$. (With probability $\frac{1}{6}$ the game is over.) Thus $$b=1+\frac{5}{6}a.$$ We have two linear equations in two unknowns. Solve for $a$. We get $a=42$.</p>
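<p>For completeness, the two linear equations can be solved mechanically; a sketch with Python's exact rational arithmetic:</p>

```python
from fractions import Fraction

# a = 1 + (5/6) a + (1/6) b  and  b = 1 + (5/6) a
# substituting the second equation into the first: a = 7/6 + (35/36) a
a = Fraction(7, 6) / (1 - Fraction(35, 36))
b = 1 + Fraction(5, 6) * a
print(a, b)  # 42 36
```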
<p>The formula is proved in the link given by Byron. I'll just try to give you my intuition regarding the formula.</p> <p>The chance of rolling two 6's with two dice is $1/36$. So if you think of rolling two dice at once as one trial, it would take an average of $36$ trials.</p> <p>However, in this case you are rolling 1 die at a time, and you look at two consecutive rolls. The average number of rolls to see the first 6 is $6$. Suppose we roll one more time to see if the two 6's are consecutive. Record the sequence of all rolls as</p> <p>$$???????6?$$</p> <p>where $?$ stands for any number that is not 6. The length of this sequence is random, but the average length of this sequence is $7$.</p> <p>Now, the question is how many sequences of this type we will see before we get a sequence that ends in $66$. Since we are only interested in the last number of the sequence, there is a $1/6$ chance that the sequence will end in $66$. The same argument says that the average number of sequences we will need is $6$. Multiply this by the average length of the sequence to get $7 \cdot 6 = 42$ as the answer.</p> <p>This argument generalizes to more than $2$ consecutive 6's by induction. For example, if you want $3$ consecutive 6's, you consider sequences that look like this:</p> <p>$$ ???????66? $$</p> <p>The average length of a sequence of this type is $42 + 1 = 43$. Therefore, the answer is $43 \cdot 6 = 258$.</p> <p>It's not hard to generalize this situation. The general form is $L_{n+1} = \frac{1}{p}(L_n + 1)$ where $p$ is the probability of the desired symbol appearing and $L_n$ is the average number of trials until the first occurrence of $n$ consecutive desired symbols. (By definition, $L_0 = 0$.) This recurrence can be solved pretty easily, giving the same formula as in Byron's link. (The proof is essentially the same too, but my version is informal.)</p>
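<p>The recurrence $L_{n+1} = \frac{1}{p}(L_n + 1)$ is easy to iterate directly; with exact fractions it reproduces $6$, $42$, $258$:</p>

```python
from fractions import Fraction

def expected_rolls(p, n):
    """L_n: expected rolls until n consecutive successes, each of probability p."""
    L = Fraction(0)
    for _ in range(n):
        L = (L + 1) / p
    return L

p = Fraction(1, 6)
print([int(expected_rolls(p, k)) for k in (1, 2, 3)])  # [6, 42, 258]
```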
number-theory
<p>I was looking at a list of primes. I noticed that $ \frac{AM (p_1, p_2, \ldots, p_n)}{p_n}$ seemed to converge.</p> <p>This led me to try $ \frac{GM (p_1, p_2, \ldots, p_n)}{p_n}$ which also seemed to converge.</p> <p>I did a quick Excel graph and regression and found the former seemed to converge to $\frac{1}{2}$ and latter to $\frac{1}{e}$. As with anything related to primes, no easy reasoning seemed to point to those results (however, for all natural numbers it was trivial to show that the former asymptotically tended to $\frac{1}{2}$).</p> <p>Are these observations correct and are there any proofs towards:</p> <p>$$ { \lim_{n\to\infty} \left( \frac{AM (p_1, p_2, \ldots, p_n)}{p_n} \right) = \frac{1}{2} \tag1 } $$</p> <p>$$ { \lim_{n\to\infty} \left( \frac{GM (p_1, p_2, \ldots, p_n)}{p_n} \right) = \frac{1}{e} \tag2 } $$</p> <p>Also, does the limit $$ { \lim_{n\to\infty} \left( \frac{HM (p_1, p_2, \ldots, p_n)}{p_n} \right) \tag3 } $$ exist?</p>
<p>Your conjecture for GM was proved in 2011 in the short paper <em><a href="http://nntdm.net/papers/nntdm-17/NNTDM-17-2-01-03.pdf" rel="noreferrer">On a limit involving the product of prime numbers</a></em> by József Sándor and Antoine Verroken.</p> <blockquote> <p><strong>Abstract</strong>. Let $p_k$ denote the $k$th prime number. The aim of this note is to prove that the limit of the sequence $(p_n / \sqrt[n]{p_1 \cdots p_n})$ is $e$.</p> </blockquote> <p>The authors obtain the result based on the prime number theorem, i.e., $$p_n \approx n \log n \quad \textrm{as} \ n \to \infty$$ as well as an inequality with Chebyshev's function $$\theta(x) = \sum_{p \le x}\log p$$ where $p$ ranges over the primes $\le x$.</p>
<p>We can use the simple asymptotic for the $n$th prime $$p_n \approx n \log n$$ and plug it into your expressions. For the arithmetic one, this becomes $$\lim_{n \to \infty} \frac {\sum_{i=1}^n p_i}{np_n}=\lim_{n \to \infty} \frac {\sum_{i=1}^n i\log(i)}{np_n}=\lim_{n \to \infty} \frac {\sum_{i=1}^n \log(i^i)}{np_n}\\=\lim_{n \to \infty} \frac {\log\prod_{i=1}^n i^i}{np_n}=\lim_{n \to \infty}\frac {\log(H(n))}{n^2\log(n)}$$ where $H(n)$ is the <a href="http://mathworld.wolfram.com/Hyperfactorial.html" rel="noreferrer">hyperfactorial function</a>. We can use the expansion given on the Mathworld page to get $$\log H(n)\approx \log A -\frac {n^2}4+\left(\frac {n(n+1)}2+\frac 1{12}\right)\log (n)$$ and the limit is duly $\frac 12$.</p> <p>I didn't find a nice expression for the product of the primes.</p>
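<p>Both ratios can also be checked numerically, though the convergence is quite slow (the error decays roughly like $1/\log n$, so even at $n = 10^5$ the ratios are still a few percent away from their limits). A sketch with a plain sieve over the first $10^5$ primes:</p>

```python
import math

def first_primes(n):
    # sieve of Eratosthenes; n (log n + log log n) exceeds p_n for n >= 6
    bound = int(n * (math.log(n) + math.log(math.log(n)))) + 10
    sieve = bytearray([1]) * (bound + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(bound**0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, bound + 1, i)))
    return [i for i, b in enumerate(sieve) if b][:n]

primes = first_primes(100_000)
p_n = primes[-1]                                   # p_100000 = 1299709
am_ratio = sum(primes) / (len(primes) * p_n)
gm_ratio = math.exp(sum(map(math.log, primes)) / len(primes)) / p_n
print(am_ratio, gm_ratio)  # creeping toward 1/2 and 1/e, from below
```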
linear-algebra
<p>I was wondering whether "dot product" is technically the term used when discussing whether the product of $2$ vectors is equal to $0$. And would anyone agree that "inner product" is the term used when discussing whether the integral of the product of $2$ functions is equal to $0$? Or is there no difference at all between a dot product and an inner product?</p>
<p>In my experience, <em>the dot product</em> refers to the product $\sum a_ib_i$ for two vectors $a,b\in \Bbb R^n$, while "inner product" refers to a more general class of things. (I should also note that the real dot product is extended to a complex dot product using the complex conjugate: $\sum a_i\overline{b}_i$.)</p> <p>The definition of "inner product" that I'm used to is a type of biadditive form from $V\times V\to F$ where $V$ is an $F$ vector space.</p> <p>In the context of $\Bbb R$ vector spaces, the biadditive form is usually taken to be symmetric and $\Bbb R$ linear in both coordinates, and in the context of $\Bbb C$ vector spaces, it is taken to be Hermitian-symmetric (that is, reversing the order of the product results in the complex conjugate) and $\Bbb C$ linear in the first coordinate. </p> <p>Inner products in general can be defined even on infinite dimensional vector spaces. The integral example is a good example of that.</p> <p>The real dot product is just a special case of an inner product. In fact it's even positive definite, but general inner products need not be so. The modified dot product for complex spaces also has this positive definite property, and has the Hermitian symmetry I mentioned above.</p> <p>Inner products are generalized by <a href="http://en.wikipedia.org/wiki/Inner_product#Generalizations">bilinear forms</a>. I think I've seen some authors use "inner product" to apply to these as well, but a lot of the time I know authors stick to $\Bbb R$ and $\Bbb C$ and require positive definiteness as an axiom. General bilinear forms allow for indefinite forms and even degenerate vectors (ones with "length zero"). The naive version of the dot product $\sum a_ib_i$ still works over any field $\Bbb F$. Another thing to keep in mind is that in a lot of fields the notion of "positive definite" doesn't make any sense, so that may disappear.</p>
<p>A dot product is a very specific inner product that works on $\Bbb{R}^n$ (or more generally $\Bbb{F}^n$, where $\Bbb{F}$ is a field) and refers to the inner product given by</p> <p>$$(v_1, ..., v_n) \cdot (u_1, ..., u_n) = v_1 u_1 + ... + v_n u_n$$</p> <p>More generally, an inner product is a function that takes in two vectors and gives a complex number, subject to some conditions.</p>
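<p>To make the contrast concrete, here is a small sketch of both notions side by side: the coordinate dot product on $\Bbb R^n$, and an integral inner product on functions (the interval $[0, 2\pi]$ and the midpoint-rule approximation are arbitrary choices for the illustration):</p>

```python
import math

def dot(u, v):
    # the standard dot product on R^n: sum of a_i * b_i
    return sum(a * b for a, b in zip(u, v))

def l2_inner(f, g, a=0.0, b=2 * math.pi, n=10_000):
    # an inner product on functions: the integral of f*g, via the midpoint rule
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) * g(a + (k + 0.5) * h) for k in range(n))

assert dot([1, 2], [3, 4]) == 11
assert abs(l2_inner(math.sin, math.cos)) < 1e-9            # "orthogonal" functions
assert abs(l2_inner(math.sin, math.sin) - math.pi) < 1e-6  # nonzero "length"
```

<p>Both satisfy the same axioms (symmetry, linearity, positive definiteness), which is exactly why they share the name "inner product".</p>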
differentiation
<p>I am not a native English speaker, which is why I am asking this question here. In addition, I don't think <a href="http://english.stackexchange.com">english.stackexchange.com</a> is the proper place to ask, because (I am so sorry) I don't think most people there know mathematics deeply.</p> <p>How do you pronounce the following derivatives in English?</p> <ol> <li><span class="math-container">$\frac{\textrm{d}y}{\textrm{d}x}$</span></li> <li><span class="math-container">$\frac{\textrm{d}^2y}{\textrm{d}x^2}$</span></li> <li><span class="math-container">$\frac{\partial y}{\partial x}$</span></li> <li><span class="math-container">$\frac{\partial^2 y}{\partial x^2}$</span></li> <li><span class="math-container">$\frac{\partial^3 y}{\partial x^2\partial z}$</span></li> </ol> <p>Feel free to edit the tag if it has been chosen incorrectly.</p> <p>For example, is it correct if I pronounce <span class="math-container">$\frac{\textrm{d}y}{\textrm{d}x}$</span> as &quot;dee wai over dee eks&quot;?</p>
<p>This is how I personally pronounce them:</p> <ol> <li>I pronounce it either "dee wai over dee eks" or simply "dee wai dee eks".</li> <li>I occasionally pronounce it as "dee squared wai over dee eks squared", but more often I just refer to it as "the second derivative of y with respect to x".</li> <li>"Partial of y with respect to x." Very occasionally as "del wai del eks".</li> <li>As with 2, I usually just call this "the second partial of y with respect to x".</li> <li>"del cubed y over del $x^2$ del z", but if we have $y=f(x,z)$, I'd much prefer referring to it as $f_{zxx}$.</li> </ol>
<p>3. $$ \frac{\partial y}{\partial x}\rightarrow\text{day wai, day eks}$$</p> <p>5. $$ \frac{\partial^3 y}{\partial x^2\partial z}\rightarrow\text{day cubed wai, day eks squared, day zee}$$</p> <p>6. $$\nabla F\rightarrow\text{del eff}$$</p>
linear-algebra
<p>Let $\mathbb{F}_3$ be the field with three elements. Let $n\geq 1$. How many elements do the following groups have?</p> <ol> <li>$\text{GL}_n(\mathbb{F}_3)$</li> <li>$\text{SL}_n(\mathbb{F}_3)$</li> </ol> <p>Here GL is the <a href="http://en.wikipedia.org/wiki/general_linear_group">general linear group</a>, the group of invertible <em>n</em>×<i>n</i> matrices, and SL is the <a href="http://en.wikipedia.org/wiki/Special_linear_group">special linear group</a>, the group of <em>n</em>×<i>n</i> matrices with determinant 1. </p>
<p><strong>First question:</strong> We solve the problem for &quot;the&quot; finite field <span class="math-container">$F_q$</span> with <span class="math-container">$q$</span> elements. The first row <span class="math-container">$u_1$</span> of the matrix can be anything but the <span class="math-container">$0$</span>-vector, so there are <span class="math-container">$q^n-1$</span> possibilities for the first row. For <em>any</em> one of these possibilities, the second row <span class="math-container">$u_2$</span> can be anything but a multiple of the first row, giving <span class="math-container">$q^n-q$</span> possibilities.</p> <p>For any choice <span class="math-container">$u_1, u_2$</span> of the first two rows, the third row can be anything but a linear combination of <span class="math-container">$u_1$</span> and <span class="math-container">$u_2$</span>. The number of linear combinations <span class="math-container">$a_1u_1+a_2u_2$</span> is just the number of choices for the pair <span class="math-container">$(a_1,a_2)$</span>, and there are <span class="math-container">$q^2$</span> of these. It follows that for every <span class="math-container">$u_1$</span> and <span class="math-container">$u_2$</span>, there are <span class="math-container">$q^n-q^2$</span> possibilities for the third row.</p> <p>For any allowed choice <span class="math-container">$u_1$</span>, <span class="math-container">$u_2$</span>, <span class="math-container">$u_3$</span>, the fourth row can be anything except a linear combination <span class="math-container">$a_1u_1+a_2u_2+a_3u_3$</span> of the first three rows. Thus for every allowed <span class="math-container">$u_1, u_2, u_3$</span> there are <span class="math-container">$q^3$</span> forbidden fourth rows, and therefore <span class="math-container">$q^n-q^3$</span> allowed fourth rows.</p> <p>Continue. 
The number of non-singular matrices is <span class="math-container">$$(q^n-1)(q^n-q)(q^n-q^2)\cdots (q^n-q^{n-1}).$$</span></p> <p><strong>Second question:</strong> We first deal with the case <span class="math-container">$q=3$</span> of the question. If we multiply the first row by <span class="math-container">$2$</span>, any matrix with determinant <span class="math-container">$1$</span> is mapped to a matrix with determinant <span class="math-container">$2$</span>, and any matrix with determinant <span class="math-container">$2$</span> is mapped to a matrix with determinant <span class="math-container">$1$</span>.</p> <p>Thus we have produced a <em>bijection</em> between matrices with determinant <span class="math-container">$1$</span> and matrices with determinant <span class="math-container">$2$</span>. It follows that <span class="math-container">$SL_n(F_3)$</span> has half as many elements as <span class="math-container">$GL_n(F_3)$</span>.</p> <p>The same idea works for any finite field <span class="math-container">$F_q$</span> with <span class="math-container">$q$</span> elements. Multiplying the first row of a matrix with determinant <span class="math-container">$1$</span> by the non-zero field element <span class="math-container">$a$</span> produces a matrix with determinant <span class="math-container">$a$</span>, and all matrices with determinant <span class="math-container">$a$</span> can be produced in this way. It follows that <span class="math-container">$$|SL_n(F_q)|=\frac{1}{q-1}|GL_n(F_q)|.$$</span></p>
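<p>A quick sanity check, for anyone who wants one: both counts can be verified by brute force for small $n$ and $q$. The sketch below enumerates all $2\times 2$ matrices over $F_3$ and compares against the formulas above:</p>

```python
from itertools import product

q, n = 3, 2  # brute force is only feasible for small n and q

def det2(m):
    # determinant of a 2x2 matrix given as a flat tuple (a, b, c, d), mod q
    a, b, c, d = m
    return (a * d - b * c) % q

matrices = list(product(range(q), repeat=n * n))   # all 81 matrices
gl = sum(1 for m in matrices if det2(m) != 0)
sl = sum(1 for m in matrices if det2(m) == 1)

assert gl == (q**n - 1) * (q**n - q) == 48   # the row-by-row count
assert sl == gl // (q - 1) == 24             # |SL| = |GL| / (q - 1)
```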
<p>The determinant function is a surjective homomorphism from <span class="math-container">$GL(n, F)$</span> to <span class="math-container">$F^*$</span> with kernel <span class="math-container">$SL(n, F)$</span>. Hence by the fundamental isomorphism theorem <span class="math-container">$\frac{GL(n,F)}{SL(n,F)}$</span> is isomorphic to <span class="math-container">$F^*$</span>, the multiplicative group of nonzero elements of <span class="math-container">$F$</span>. </p> <p>Thus if <span class="math-container">$F$</span> is finite with <span class="math-container">$p$</span> elements then <span class="math-container">$|GL(n,F)|=(p-1)|SL(n, F)|$</span>.</p>
probability
<p>Consider a two-sided coin. If I flip it $1000$ times and it lands heads up for each flip, what is the probability that the coin is unfair, and how do we quantify that if it is unfair? </p> <p>Furthermore, would it still be considered unfair for $50$ straight heads? $20$? $7$?</p>
<p>First of all, you must understand that there is no such thing as a perfectly fair coin, because there is nothing in the real world that conforms perfectly to some theoretical model. So a useful definition of "fair coin" is one that, for practical purposes, behaves like fair. In other words, no human flipping it for even a very long time would be able to tell the difference. That means one can assume that the probability of heads or tails on that coin is $1/2$. </p> <p>Whether your particular coin is fair (according to the above definition) or not, cannot be assigned a "probability". Instead, statistical methods must be used. </p> <p>Here, you make a so-called "null hypothesis": "the coin is fair". You then proceed to calculate the probability of the event you observed (to be precise: the event, or something at least as "strange"), assuming the null hypothesis is true. In your case, the probability of your event, 1000 heads, or something at least as strange, is $2\times1/2^{1000}$ (that is because you also count 1000 tails).</p> <p>Now, with statistics, you can never say anything for sure. You need to define what you consider your "confidence level". It's like saying in court "beyond a reasonable doubt". Let's say you are willing to assume a confidence level of 0.999. That means if something that supposedly had less than a 0.001 chance of happening actually happened, then you are going to say, "I am confident enough that my assumptions must be wrong". </p> <p>In your case, if you assume the confidence level of 0.999, and you have 1000 heads in 1000 throws, then you can say, "the assumption of the null hypothesis must be wrong, and the coin must be unfair". Same with 50 heads in 50 throws, or 20 heads in 20 throws. But not with 7, not at this confidence level. 
With 7 heads (or tails), the probability is $2 \times 1/2^{7}$, which is more than 0.001.</p> <p>But if you assume a confidence level of 95% (which is commonly done in less strict disciplines of science), then even 7 heads means "unfair". </p> <p>Notice that you can never actually "prove" the null hypothesis. You can only reject it, based on what you observe happening, and your "standard of confidence". This is in fact what most scientists do - they reject hypotheses based on evidence and the accepted standards of confidence. </p> <p>If your events do not disprove your hypothesis, that does not necessarily mean it must be true! It just means it withstood the scrutiny so far. You can also say "the results are consistent with the hypothesis being true" (scientists frequently use this phrase). If a hypothesis stands for a long time without anybody being able to produce results that disprove it, it becomes generally accepted. However, sometimes even after hundreds of years, some new results might come up which disprove it. Such was the case of General Relativity "disproving" Newton's classical theory. </p>
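<p>The arithmetic here is a one-liner; this sketch tabulates the two-sided probability $2 \times (1/2)^k$ for $k$ straight heads and compares it against the two confidence levels discussed above:</p>

```python
def p_value(k):
    # probability of k straight heads OR k straight tails from a fair coin
    return 2 * 0.5 ** k

for k in (7, 20, 50, 1000):
    print(k, p_value(k))

assert p_value(7) == 0.015625   # > 0.001: survives the 0.999 level...
assert p_value(7) < 0.05        # ...but fails at the 95% level
assert p_value(20) < 0.001      # 20 straight heads fails even at 0.999
```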
<p>If you take a coin you have modified so that it always lands in heads and you get $1000$ heads then the probability of it being unfair is $100\%$.</p> <p>If you take a coin you have crafted yourself and carefully made sure that it is a fair coin and then you get $1000$ heads then the probability of it being unfair is $0\%$.</p> <p>Next, you fill a box with coins of both types, then take a random coin.</p> <ul> <li>$NF$ : fair coins in the box.</li> <li>$NU$ : unfair coins in the box</li> <li><p>$P(U)$ : probability of having taken an unfair coin $$P(U) = \frac{NU}{NF + NU}$$</p></li> <li><p>$P(F)$ : probability of having taken a fair coin $$ P(F) = \frac{NF}{NF + NU} = 1 - P(U) $$</p></li> <li>$P(H \mid{U})$ : Probability of having 1000 heads conditioned to having take an unfair coin $$P(H\mid{U}) = 1 $$</li> <li>$P(H\mid{F})$ : Probability of having 1000 heads conditioned to having taken a fair coin $$P(H\mid{F}) = \left( \tfrac{1}{2} \right)^{1000}$$</li> <li>$P(H)$ : Probability of having 1000 heads</li> </ul> <p>\begin{align} P(H) &amp;= P(U \cap H) + P(F \cap H)\\ &amp;= P(H \mid{U})P(U) + P(H \mid{F})P(F)\\ &amp;= P(U) + P(H \mid{F})P(F) \end{align}</p> <p>By applying <a href="https://en.wikipedia.org/wiki/Bayes%27_theorem" rel="noreferrer">Bayes theorem</a> :</p> <p>$P(U \mid{H})$ : probability of the coin being unfair conditioned to getting 1000 heads $$P(U\mid{H}) = \frac{P(H \mid{U})P(U)}{P(H)} = \frac{P(U)}{P(U) + P(H\mid{F})P(F)}$$</p> <p>And that is your answer.</p> <hr> <h2>In example</h2> <p>If $P(U)=1/(6 \cdot 10^{27})$ ($1$ out of every $6 \cdot 10^{27}$ coins are unfair) and you get 1000 heads then the probability of the coin being unfair is \begin{align} \mathbf{99}.&amp;999999999999999999999999999999999999999999999999999999999999999999999\\ &amp;999999999999999999999999999999999999999999999999999999999999999999999\\ &amp;999999999999999999999999999999999999999999999999999999999999999999999\\ 
&amp;999999999999999999999999999999999999999999999999999999999999999944\% \end{align}</p> <p>Very small coins like the USA cent have a weight of $2.5g$. We can safely assume that there are no coins with a weight less than 1 gram.</p> <p>Earth has a weight of less than $6 \cdot 10^{27}$ grams. Thus we know that there are less than $6 \cdot 10^{27}$ coins. We know that there is at least one unfair coin ( I have seen coins with two heads and zero tails) thus we know that $P(U) \ge 1/(6 \cdot 10^{27})$.</p> <p>And thus we can conclude that if you get 1000 heads then the probability of the coin being unfair is at least \begin{align} \mathbf{99}.&amp;999999999999999999999999999999999999999999999999999999999999999999999\\ &amp;999999999999999999999999999999999999999999999999999999999999999999999\\ &amp;999999999999999999999999999999999999999999999999999999999999999999999\\ &amp;999999999999999999999999999999999999999999999999999999999999999944\% \end{align}</p> <p>This analysis is only valid if you take a random coin and only if coins are either $100\%$ fair or $100\%$ unfair. It is still a good indication that yes, with $1000$ heads you can be certain beyond any reasonable doubt that the coin is unfair.</p>
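<p>The Bayes computation above is easy to reproduce exactly with rational arithmetic; the prior $1/(6 \cdot 10^{27})$ below is the Earth-weight bound derived above, and the model is the same all-or-nothing one (unfair coins always land heads):</p>

```python
from fractions import Fraction

def posterior_unfair(prior_unfair, heads):
    # P(U|H) = P(U) / (P(U) + P(H|F) P(F)), using P(H|U) = 1
    p_h_f = Fraction(1, 2) ** heads
    return prior_unfair / (prior_unfair + p_h_f * (1 - prior_unfair))

prior = Fraction(1, 6 * 10**27)
post = posterior_unfair(prior, 1000)

# the long run of 9s above: the posterior is within 10^-250 of certainty
assert 1 - post < Fraction(1, 10**250)
```

<p>Using <code>Fraction</code> rather than floats matters here: $2^{-1000} \approx 10^{-301}$ underflows ordinary double-precision arithmetic relative to the prior, so a float version would simply report $1.0$.</p>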
number-theory
<p>The number $$\sqrt{308642}$$ has a crazy decimal representation: $$555.5555777777773333333511111102222222719999970133335210666544640008\cdots $$</p> <blockquote> <p>Is there any mathematical reason for so many repetitions of the digits?</p> </blockquote> <p>A long block containing only a single digit would be easier to understand. This could mean that there are extremely good rational approximations. But here we have many long one-digit blocks, some consecutive, some interrupted by a few digits. I did not calculate the probability of such a "digit-repetition-show", but I think it is extremely small.</p> <p>Does anyone have an explanation?</p>
<p>The architect's answer, while explaining the absolutely crucial fact that <span class="math-container">$$\sqrt{308642}\approx 5000/9=555.555\ldots,$$</span> didn't quite make it clear why we get <strong>several</strong> runs of repeating decimals. I try to shed additional light on that using a different tool.</p> <p>I want to emphasize the role of <a href="https://en.wikipedia.org/wiki/Binomial_series" rel="noreferrer">the binomial series</a>. In particular, the Taylor expansion <span class="math-container">$$ \sqrt{1+x}=1+\frac x2-\frac{x^2}8+\frac{x^3}{16}-\frac{5x^4}{128}+\frac{7x^5}{256}-\frac{21x^6}{1024}+\cdots $$</span> If we plug in <span class="math-container">$x=2/(5000)^2=8\cdot10^{-8}$</span>, we get <span class="math-container">$$ M:=\sqrt{1+8\cdot10^{-8}}=1+4\cdot10^{-8}-8\cdot10^{-16}+32\cdot10^{-24}-160\cdot10^{-32}+\cdots. $$</span> Therefore <span class="math-container">$$ \begin{aligned} \sqrt{308642}&amp;=\frac{5000}9M=\frac{5000}9+\frac{20000}9\cdot10^{-8}-\frac{40000}9\cdot10^{-16}+\frac{160000}9\cdot10^{-24}+\cdots\\ &amp;=\frac{5}9\cdot10^3+\frac29\cdot10^{-4}-\frac49\cdot10^{-12}+\frac{16}9\cdot10^{-20}+\cdots. \end{aligned} $$</span> This explains the runs and their starting points, as well as the origin and location of those extra digits not part of any run. For example, the run of <span class="math-container">$5+2=7$</span>s begins when the first two terms of the above series are "active". When the third term joins in, we need to subtract a <span class="math-container">$4$</span> and a run of <span class="math-container">$3$</span>s ensues, et cetera.</p>
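<p>For anyone who wants to check the runs digit by digit, Python's <code>decimal</code> module reproduces the expansion at any precision (the 70 digits below are an arbitrary choice):</p>

```python
from decimal import Decimal, getcontext

getcontext().prec = 70  # 70 significant digits, an arbitrary choice
root = Decimal(308642).sqrt()
print(root)

# the runs predicted by the series 5000/9 + (2/9)e-4 - (4/9)e-12 + (16/9)e-20 ...
assert str(root).startswith("555.5555777777773333333511111")
# and the leading approximation 5000/9 is already good to about 2e-5
assert abs(root - Decimal(5000) / 9) < Decimal("1e-4")
```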
<p>Repeated digits in a decimal representation can be converted to repeated zeros by multiplying by $9$. (Try it out.)</p> <p>So if we multiply, $9 \sqrt{308642} = \sqrt{308642 \times 81} = \sqrt{25 000 002}$; since this number is almost $5000^2$, it has a lot of zeros in its decimal expansion.</p>
matrices
<blockquote> <p>Let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be two matrices which can be multiplied. Then <span class="math-container">$$\operatorname{rank}(AB) \leq \operatorname{min}(\operatorname{rank}(A), \operatorname{rank}(B)).$$</span></p> </blockquote> <p>I proved <span class="math-container">$\operatorname{rank}(AB) \leq \operatorname{rank}(B)$</span> by interpreting <span class="math-container">$AB$</span> as a composition of linear maps, observing that <span class="math-container">$\operatorname{ker}(B) \subseteq \operatorname{ker}(AB)$</span> and using the kernel-image dimension formula. This also provides, in my opinion, a nice interpretation: if non-stable, under subsequent compositions the kernel can only get bigger, and the image can only get smaller, in a sort of <em>loss of information</em>.</p> <p>How do you manage <span class="math-container">$\operatorname{rank}(AB) \leq \operatorname{rank}(A)$</span>? Is there a nice interpretation like the previous one?</p>
<p>Yes. If you think of $A$ and $B$ as linear maps, then the composition $AB$ applies $A$ only to vectors in the image of $B$, which is a subspace of the full domain of $A$. Restricting $A$ to a smaller set of inputs can only shrink its image, so $\operatorname{im}(AB) \subseteq \operatorname{im}(A)$ and hence $\operatorname{rank}(AB) \leq \operatorname{rank}(A)$.</p>
<p>Once you have proved <span class="math-container">$\operatorname{rank}(AB) \le \operatorname{rank}(A)$</span>, you can obtain the other inequality by using transposition and the fact that it doesn't change the rank (see e.g. this <a href="https://math.stackexchange.com/questions/2315/is-the-rank-of-a-matrix-the-same-of-its-transpose-if-yes-how-can-i-prove-it">question</a>). </p> <p>Specifically, letting <span class="math-container">$C=A^T$</span> and <span class="math-container">$D=B^T$</span>, we have that <span class="math-container">$\operatorname{rank}(DC) \le \operatorname{rank}(D) \implies \operatorname{rank}(C^TD^T)\le \operatorname{rank} (D^T)$</span>, which is <span class="math-container">$\operatorname{rank}(AB) \le \operatorname{rank}(B)$</span>.</p>
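<p>Both inequalities are easy to spot-check numerically. The sketch below uses exact rational Gaussian elimination (so there are no floating-point rank pitfalls) on random small integer matrices:</p>

```python
from fractions import Fraction
import random

def rank(M):
    # row reduction over the rationals; returns the number of pivots
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

random.seed(0)
for _ in range(50):
    A = [[random.randint(-2, 2) for _ in range(3)] for _ in range(4)]
    B = [[random.randint(-2, 2) for _ in range(5)] for _ in range(3)]
    assert rank(matmul(A, B)) <= min(rank(A), rank(B))
```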
differentiation
<p>I've been trying to understand how the second order derivative "formula" works:</p> <p>$$\lim_{h\to0} \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}$$</p> <p>So, the rate of change of the rate of change for an arbitrary continuous function. It basically feels right, since it samples "the after $x+h$ and the before $x-h$" and the $h^2$ is there (due to the expected /h/h -> /h*h), but I'm having trouble finding the equation on my own.</p> <p>It is basically a derivative of a derivative, right? Prime notation writes this as $f''$ and Leibniz's as $\frac{\partial^2{y}}{\partial{x}^2}$ which dissolves into:</p> <p>$$(f')'$$ and $$\frac{\partial{}}{\partial{x}}\frac{\partial{f}}{\partial{x}}$$</p> <p>So, the first derivative shows the rate of change of a function's value relative to input. The second derivative shows the rate of change of the actual rate of change, suggesting information relating to how frequently it changes.</p> <p>The original one is rather straightforward:</p> <p>$$\frac{\Delta y}{\Delta x} = \lim_{h\to0} \frac{f(x+h) - f(x)}{x + h - x} = \lim_{h\to0} \frac{f(x+h) - f(x)}{h}$$</p> <p>And it can easily be shown that $f'(x) = nx^{n-1} + \dots$ is correct for the more forthcoming of polynomial functions. So, my logic suggests that to get the derivative of a derivative, one only needs to send the derivative function as input to finding the new derivative. I'll drop the $\lim_{h\to0}$ for simplicity:</p> <p>$$f'(x) = \frac{f(x+h) - f(x)}{h}$$</p> <p>So, the derivative of the derivative should be:</p> <p>$$f''(x) = \lim_{h\to0} \frac{f'(x+h) - f'(x)}{h}$$</p> <p>$$f''(x) = \lim_{h\to0} \frac{ \frac{ f(x+2h) - f(x+h)}{h} - \frac{ f(x+h) - f(x)}{h} }{h}$$</p> <p>$$f''(x) = \lim_{h\to0} \frac{ \frac{ f(x+2h) - f(x+h) - f(x+h) + f(x)}{h} }{h}$$</p> <p>$$f''(x) = \lim_{h\to0} \frac{ f(x+2h) - f(x+h) - f(x+h) + f(x) }{h^2}$$</p> <p>$$f''(x) = \lim_{h\to0} \frac{ f(x+2h) - 2f(x+h) + f(x) }{h^2}$$</p> <p>What am I doing wrong? 
Perhaps it is the mess of it all, but I just can't see it. Please help.</p>
<p>The only problem is that you’re looking at the wrong three points: you’re looking at $x+2h,x+h$, and $x$, and the version that you want to prove is using $x+h,x$, and $x-h$. Start with $$f\,''(x)=\lim_{h\to 0}\frac{f\,'(x)-f\,'(x-h)}h\;,$$ and you’ll be fine.</p> <p>To see that this really is equivalent to looking at $$f\,''(x)=\lim_{h\to 0}\frac{f\,'(x+h)-f\,'(x)}h\;,$$ let $k=-h$; then</p> <p>$$\begin{align*} f\,''(x)&amp;=\lim_{h\to 0}\frac{f\,'(x)-f\,'(x-h)}h\\ &amp;=\lim_{-k\to0}\frac{f\,'(x)-f\,'(x-(-k))}{-k}\\ &amp;=\lim_{k\to 0}\frac{f\,'(x-(-k))-f\,'(x)}k\\ &amp;=\lim_{k\to 0}\frac{f\,'(x+k)-f\,'(x)}k\;, \end{align*}$$</p> <p>and renaming the dummy variable back to $h$ completes the demonstration.</p>
<p>Using the Taylor series expansions of <span class="math-container">$f(x+h)$</span> and <span class="math-container">$f(x-h)$</span>,</p> <blockquote> <p><span class="math-container">$$ f(x+h) = f(x) + f'(x)h+f''(x)\frac{h^2}{2} + f'''(x)\frac{h^3}{3!}+\cdots $$</span></p> <p><span class="math-container">$$ f(x-h) = f(x) - f'(x)h+f''(x)\frac{h^2}{2} - f'''(x)\frac{h^3}{3!}+\cdots $$</span></p> </blockquote> <p>Adding the above equations gives</p> <blockquote> <p><span class="math-container">$$ \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} = f''(x) + 2\frac{f''''(x)}{4!}h^2+\cdots $$</span></p> </blockquote> <p>taking the limit of the above equation as <span class="math-container">$h$</span> goes to zero gives the desired result</p> <blockquote> <p><span class="math-container">$$ \Rightarrow f''(x) = \lim_{h\to0} \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} \,.$$</span></p> </blockquote>
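<p>The Taylor argument also predicts that the error of the symmetric quotient shrinks like $h^2$ (the leading error term is $\frac{f''''(x)}{12}h^2$), which is easy to observe numerically; $\sin$ and the point $x=1$ below are arbitrary choices for the illustration:</p>

```python
import math

def second_diff(f, x, h):
    # the symmetric quotient (f(x+h) - 2 f(x) + f(x-h)) / h^2
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

x, exact = 1.0, -math.sin(1.0)  # (sin)'' = -sin
e1 = abs(second_diff(math.sin, x, 0.1) - exact)
e2 = abs(second_diff(math.sin, x, 0.01) - exact)
print(e1, e2)

assert e2 < 1e-5
assert 80 < e1 / e2 < 120  # shrinking h by 10 shrinks the error ~100x
```

<p>(For very small $h$ the quotient eventually degrades again, because the numerator suffers catastrophic cancellation in floating point; that is a property of the arithmetic, not of the formula.)</p>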
linear-algebra
<p>Let $V$ be a vector space of infinite dimension. A Hamel basis for $V$ is an ordered set of linearly independent vectors $\{ v_i \ | \ i \in I\}$ such that any $v \in V$ can be expressed as a finite linear combination of the $v_i$'s; so $\{ v_i \ | \ i \in I\}$ spans $V$ algebraically: this is the obvious extension of the finite-dimensional notion. Moreover, by Zorn's Lemma, such a basis always exists.</p> <p>If we endow $V$ with a topology, then we say that an ordered set of linearly independent vectors $\{ v_i \ | \ i \in I\}$ is a Schauder basis if its span is dense in $V$ with respect to the chosen topology. This amounts to saying that any $v \in V$ can be expressed as an infinite linear combination of the $v_i$'s, i.e. as a series.</p> <p>As far as I understand, if a $v$ can be expressed as a finite linear combination of some set $\{ v_i \ | \ i \in I\}$, then it lies in its span; in other words, if $\{ v_i \ | \ i \in I\}$ is a Hamel basis, then it spans the whole of $V$, and so it is a Schauder basis with respect to any topology on $V$.</p> <p>However, Per Enflo has constructed a Banach space without a Schauder basis (ref. <a href="http://en.wikipedia.org/wiki/Schauder_basis#Properties">wiki</a>). So I guess I should conclude that my reasoning is wrong, but I can't see what the problem is.</p> <p>Any help appreciated, thanks in advance!</p> <hr> <p>UPDATE: (coming from the huge amount of answers and comments) Forgetting for a moment the concerns about cardinality and sticking to span-properties, it has turned out that we have two different notions of linear independence: one involving finite linear combinations (Hamel-span, Hamel-independence, in the terminology introduced by rschwieb below), and one allowing infinite linear combinations (Schauder-stuff). So the point is that the vectors in a Hamel basis are Hamel independent (by def) but need not be Schauder-independent in general. 
As far as I understand, this is the fundamental reason why a Hamel basis is not automatically a Schauder basis.</p>
<p>People keep mentioning the restriction on the size of a Schauder basis, but I think it's more important to emphasize that these bases are bases with respect to <em>different spans</em>.</p> <p>For an ordinary vector space, only finite linear combinations are defined, and you can't hope for anything more. (Let's call these Hamel combinations.) In this context, you can talk about minimal sets whose Hamel combinations generate a vector space.</p> <p>When your vector space has a good enough topology, you can define countable linear combinations (which we'll call Schauder combinations) and talk about sets whose Schauder combinations generate the vector space.</p> <p>If you take a Schauder basis, you can still use it as a Hamel basis and look at its collection of Hamel combinations, and you should see its Schauder-span will normally be strictly larger than its Hamel-span.</p> <p>This also raises the question of linear independence: when there are two types of span, you now have two types of linear independence conditions. In principle, Schauder-independence is stronger because it implies Hamel-independence of a set of basis elements.</p> <p>Finally, let me swing back around to the question of the cardinality of the basis. I don't actually think (/know) that it's absolutely necessary to have infinitely many elements in a Schauder basis. In the case where you allow finite Schauder bases, you don't actually need infinite linear combinations, and the Schauder and Hamel bases coincide. But definitely there is a difference in the infinite dimensional cases. In that sense, using the modifier "Schauder" actually becomes useful, so maybe that is why some people are convinced Schauder bases might be infinite.</p> <p>And now about the limit on Schauder bases only being countable. Certainly given any space where countable sums converge, you can take a set of whatever cardinality and still consider its Schauder span (just like you could also consider its Hamel span). 
I know that the case of a separable space is especially useful and popular, and necessitates a countable basis, so that is probably why people tend to think of Schauder bases as countable. But I had thought uncountable Schauder bases were also used for inseparable Hilbert spaces.</p>
<p>The problem is that an element of a Hamel basis might be an <em>infinite</em> linear combination of the other basis elements. Essentially, linear dependence changes definition.</p>
linear-algebra
<p>It would seem that one way of proving this would be to show the existence of non-algebraic numbers. Is there a simpler way to show this?</p>
<p>The cardinality argument mentioned by Arturo is probably the simplest. Here is an alternative: an explicit example of an infinite <span class="math-container">$\, \mathbb Q$</span>-independent set of reals. Consider the set consisting of the logs of all primes <span class="math-container">$\, p_i.\,$</span> If <span class="math-container">$ \, c_1 \log p_1 +\,\cdots\, + c_n\log p_n =\, 0,\ c_i\in\mathbb Q,\,$</span> multiplying by a common denominator we can assume that all <span class="math-container">$\ c_i \in \mathbb Z\,$</span> so, exponentiating, we obtain <span class="math-container">$\, p_1^{\large c_1}\cdots p_n^{\large c_n}\! = 1\,\Rightarrow\ c_i = 0\,$</span> for all <span class="math-container">$\,i,\,$</span> by the uniqueness of prime factorizations.</p>
<p>As Steve D. noted, a finite dimensional vector space over a countable field is necessarily countable: if $v_1,\ldots,v_n$ is a basis, then every vector in $V$ can be written uniquely as $\alpha_1 v_1+\cdots+\alpha_n v_n$ for some scalars $\alpha_1,\ldots,\alpha_n\in F$, so the cardinality of the set of all vectors is exactly $|F|^n$. If $F$ is countable, then this is countable. Since $\mathbb{R}$ is uncountable and $\mathbb{Q}$ is countable, $\mathbb{R}$ cannot be finite dimensional over $\mathbb{Q}$. (Whether it has a basis or not depends on your set theory).</p> <p>Your further question in the comments, whether a vector space over $\mathbb{Q}$ is finite dimensional <em>if and only if</em> the set of vectors is countable, has a negative answer. If the vector space is finite dimensional, then it is a countable set; but there are infinite-dimensional vector spaces over $\mathbb{Q}$ that are countable as sets. The simplest example is $\mathbb{Q}[x]$, the vector space of all polynomials with coefficients in $\mathbb{Q}$, which is a countable set, and has dimension $\aleph_0$, with basis $\{1,x,x^2,\ldots,x^n,\ldots\}$. </p> <p><strong>Added:</strong> Of course, if $V$ is a vector space over $\mathbb{Q}$, then it has <em>countable dimension</em> (finite or denumerable infinite) if and only if $V$ is countable as a set. So the counting argument in fact shows that not only is $\mathbb{R}$ infinite dimensional over $\mathbb{Q}$, but that (if you are working in an appropriate set theory) it is <em>uncountably</em>-dimensional over $\mathbb{Q}$. </p>
probability
<p>In <a href="https://math.stackexchange.com/a/656426/25554">this math.se post</a> I described in some detail a certain paradox, which I will summarize:</p> <blockquote> <p>$A$ writes two distinct numbers on slips of paper. $B$ selects one of the slips at random (equiprobably), examines its number, and then, without having seen the other number, predicts whether the number on her slip is the larger or smaller of the two. $B$ can obviously achieve success with probability $\frac12$ by flipping a coin, and it seems impossible that she could do better. However, there is a strategy $B$ can follow that is guaranteed to produce a correct prediction with probability strictly greater than $\frac12$.</p> </blockquote> <p>The strategy, in short, is:</p> <ul> <li>Prior to selecting the slip, $B$ should select some probability distribution $D$ on $\Bbb R$ that is everywhere positive. A normal distribution will suffice.</li> <li>$B$ should generate a random number $y\in \Bbb R$ distributed according to $D$. </li> <li>Let $x$ be the number on the slip selected by $B$. If $x&gt;y$, then $B$ predicts that $x$ is the larger of the two numbers; if $x&lt;y$ she predicts that $x$ is the smaller of the two numbers. ($y=x$ occurs with probability $0$ and can be disregarded.)</li> </ul> <p>I omit the analysis that shows that this method predicts correctly with probability <em>strictly</em> greater than $\frac12$; the details are in the other post.</p> <p>I ended the other post with “I have heard this paradox attributed to Feller, but I'm afraid I don't have a reference.”</p> <p>I would like a reference.</p>
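<p>(For readers who find the claim hard to believe, a quick Monte Carlo sketch confirms it. The particular pair $(3, 7)$ and the $N(0, 10^2)$ threshold distribution below are arbitrary choices for the illustration, not part of the strategy; any everywhere-positive $D$ and any distinct pair give a success rate strictly above $\frac12$.)</p>

```python
import random

def success_rate(a=3.0, b=7.0, trials=200_000, seed=1):
    # B draws a threshold y ~ N(0, 10^2) and predicts "larger" iff x > y
    random.seed(seed)
    wins = 0
    for _ in range(trials):
        x = random.choice([a, b])     # the slip B happens to pick
        y = random.gauss(0.0, 10.0)   # B's random threshold
        wins += (x > y) == (x == max(a, b))
    return wins / trials

rate = success_rate()
print(rate)  # strictly better than 1/2; about 0.57 for these choices
assert rate > 0.55
```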
<p>Thanks to a helpful comment, since deleted, by user <a href="https://math.stackexchange.com/users/128037/stefanos">Stefanos</a>, I was led to this (one-page) paper of <a href="https://en.wikipedia.org/wiki/Thomas_M._Cover">Thomas M. Cover</a> “<a href="http://www-isl.stanford.edu/~cover/papers/paper73.pdf">Pick the largest number</a>”<em>Open Problems in Communication and Computation</em> Springer-Verlag, 1987, p152.</p> <p>Stefanos pointed out that there is an extensive discussion of related paradoxes in the Wikipedia article on the ‘<a href="https://en.wikipedia.org/wiki/Two_envelopes_problem">Two envelope problem</a>’. Note that the paradox I described above does not appear until late in the article, in the section "<a href="https://en.wikipedia.org/wiki/Two_envelopes_problem#Randomized_solutions">randomized solutions</a>".</p> <p>Note also that the main subject of that article involves a paradox that arises from incorrect reasoning, whereas the variation I described above is astonishing but sound.</p> <p>I would still be interested to learn if this paradox predates 1987; I will award the "accepted answer" check and its 15 points to whoever posts the earliest appearance.</p>
<p><a href="https://johncarlosbaez.wordpress.com/2015/07/20/the-game-of-googol/" rel="noreferrer">In his blog post</a>, mathematical physicist John Baez considered this problem described by Thomas M. Cover as the special case $n = 2$ of what was called "the game of googol" by Martin Gardner in his column in the Feb. 1960 issue of Scientific American.</p> <p>That is, John Baez accepted the suggestion (by one of his blog readers) that this is a variant of the famous secretary problem, whose <a href="https://en.wikipedia.org/wiki/Secretary_problem#The_game_of_googol" rel="noreferrer">wiki entry actually includes this "game of googol".</a></p> <p>If you are willing to take this point of view (Thomas M. Cover sort of did, in his closing remark), then the quest becomes the history of (this variant of) the secretary problem.</p> <p>John Baez's blog post is a good expository piece with some good discussion in the comments there. However, he didn't really do the literature search that pertains to Thomas M. Cover's approach going backwards in time.</p> <p>Similarly, there's no citation in that <a href="https://www.quantamagazine.org/information-from-randomness-puzzle-20150707" rel="noreferrer">Quanta magazine article</a> that got John Baez's notice. FWIW, the analysis there is thorough enough while being easily accessible to the general public.</p> <p>My personal stance at this point is that nobody before Thomas M. Cover looked at the secretary problem in this particular way. He passed away in 2012, so one can only learn how he encountered this problem by studying his notes or asking his collaborators, friends, etc.</p> <p>I have to admit that I didn't go through <a href="https://scholar.google.com.tw/scholar?hl=en&amp;as_sdt=0,5&amp;sciodt=0,5&amp;cites=10356219223560197333&amp;scipsc=" rel="noreferrer">all the publications (27 so far) that cited that short paper</a> by Thomas M. 
Cover to see if they found anything preceding that.</p> <p>For the record, if one wants to see Martin Gardner's original text, I cannot find an open-access copy of the Feb. 1960 Scientific American, nor the earliest reprint of the column in his 1966 book.${}^1$ The best thing I've got is the <a href="https://books.google.com.tw/books?id=sUuBCzazfYUC&amp;pg=PA17&amp;lpg=PA17&amp;dq=martin+gardner+googol&amp;source=bl&amp;ots=4Mr5gjyJgn&amp;sig=awPckIsv5Oi5Wm6sTuMydqFIicg&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwixsIv40OjXAhUFHpQKHbewC1oQ6AEIQTAE#v=onepage&amp;q=martin%20gardner%20googol&amp;f=false" rel="noreferrer">1994 google book</a>. See the 2nd paragraph on p.18 for the case $n=2$. Martin Gardner certainly didn't know what he was missing.</p> <hr> <p><strong>Footnote 1:</strong>$\quad$ <em>"New Mathematical Diversions from Scientific American". Simon and Schuster, 1966, Chapter 3, Problem 3</em>. This is a relatively long chapter in the book, as it essentially deals with the secretary problem.</p>
geometry
<p>Since Earth is a sphere, one has only a limited visibility radius. How far is that, actually?</p> <p>This Q&amp;A was inspired by <a href="https://physics.stackexchange.com/questions/122785/could-legolas-actually-see-that-far">this</a> question, about whether or not Legolas can see the 24km distant Riders of Rohan.</p>
<p>I have up-voted the answer by M.Herzkamp, but I also think he makes it somewhat more complicated than it needs to be. The distance from the center of the earth to your eye is $r+h$, where $r$ is the radius of the earth and $h$ is the height of your eye above the ground. The distance from the center of the earth to a point on the horizon is $r$. The distance from your eye to the point on the horizon let us call $d$. The three sides of a right triangle are then the legs, $r$ and $d$, and the hypotenuse $r+h$. Applying the Pythagorean theorem, we have $$ r^2 + d^2 = (r+h)^2. $$ It follows that $$ d^2 = (r+h)^2 - r^2 $$ so $$ d=\sqrt{(r+h)^2-r^2}. $$ This admits simplification: $$ d=\sqrt{(r+h)^2-r^2} = \sqrt{(r^2+2rh+h^2) - r^2} = \sqrt{2rh + h^2}. $$ When $h$ is tiny compared to $r$, we can say $$ d \approx \sqrt{2rh\,{}}. $$</p>
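<p>(Added.) A quick numeric sanity check of both the exact and approximate formulas, assuming an Earth radius of 6371 km:</p>

```python
import math

def horizon_distance(h, r=6_371_000.0):
    """Exact line-of-sight distance (metres) from eye height h (metres) to the horizon."""
    return math.sqrt(2 * r * h + h * h)   # = sqrt((r+h)^2 - r^2)

def horizon_approx(h, r=6_371_000.0):
    """Small-h approximation sqrt(2*r*h)."""
    return math.sqrt(2 * r * h)

# For h = 1.8 m both give about 4.79 km; the approximation error is sub-millimetre.
print(horizon_distance(1.8), horizon_approx(1.8))
```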
<p>Let us suppose an observer of height $h$ stands on a perfectly spherical planet of radius $r$:</p> <p><img src="https://i.sstatic.net/pEGyV.png" alt="schema"></p> <p><strong>Edit</strong>: here is an easier way, making use of the right angle between the line of sight and the radial ray. You can just use the definition of the cosine:</p> <p>$$ \cos(\theta_T) = \frac{r}{r+h} \qquad \Rightarrow \qquad s = r\cdot\theta_T = r\cdot\cos^{-1}\!\!\left(\frac{r}{r+h}\right) $$</p> <p>which is equivalent to the solution obtained by the complicated method. <strong>/Edit</strong></p> <p>The distance $s$ to the farthest point he can then see is determined by the tangent to the semi circle through his head. If you describe the semi circle in a cartesian coordinate system by $$ y^2+x^2 = r^2, $$ the observer's head is at $y=r+h,\ x=0$.</p> <p>To obtain the slope of the tangent, we plug the tangent equation $y=mx+r+h$ into the circle equation and solve for $x$: $$ x_{1/2} = -(r+h)\frac{m}{1+m^2} \pm \sqrt{\frac{(r+h)^2m^2}{(1+m^2)^2}+\frac{r^2-(r+h)^2}{1+m^2}} $$ Those are two intersection points, and in order to have a tangent, they must be equal. That is the case if the term under the square root is zero. The resulting equation can be solved for $m$: $$ m_{\pm} = \pm \sqrt{\frac{(r+h)^2}{r^2}-1} $$ Let's take the negative solution for the tangent on the right (it does not matter), and calculate the tangent point: $$ x_T = -(r+h)\frac{m_-}{1+m^2_-} = \frac{r}{r+h}\sqrt{(r+h)^2-r^2} $$ The viewing distance angle is $\theta_T = \text{asin}(x_T/r)$. 
To get the viewing distance, we observe that $$ \frac{s}{2\pi r} = \frac{\theta_T}{\text{full angle}} = \frac{\theta_T}{2\pi}\text{, with the angle in radians} $$ $$ \Rightarrow s(h) = r\cdot\text{asin}\left(\sqrt{1-\frac{r^2}{(r+h)^2}}\right) $$ If you plot this for $h$ small compared to $r$, it resembles a square root function, and indeed, $$ \lim_{h\rightarrow0^+}\frac{s(h)}{\sqrt{h}} = \sqrt{2r} $$ which means that for small heights, the viewing distance can be described as $$ s(h) \approx \sqrt{2rh} $$ On Earth ($r\approx6371\text{km}$), a normal person ($h\approx1.8\text{m}$) can see the surface about 4.8km away. Not much further. If you climb a hill or tree ($h\approx 50\text{m}$), your range increases to 25km!</p>
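<p>(Added.) The two closed forms — $r\cos^{-1}(r/(r+h))$ from the edit and the $\text{asin}$ expression from the tangent-line derivation — can be checked against each other numerically; a small Python sketch, with Earth's radius assumed to be 6371 km:</p>

```python
import math

R = 6_371_000.0  # assumed Earth radius in metres

def s_acos(h, r=R):
    """Surface distance via the right-triangle form: s = r * acos(r/(r+h))."""
    return r * math.acos(r / (r + h))

def s_asin(h, r=R):
    """Surface distance via the tangent-point form: s = r * asin(sqrt(1 - r^2/(r+h)^2))."""
    return r * math.asin(math.sqrt(1.0 - r * r / ((r + h) ** 2)))

# Both forms agree: roughly 4.8 km for a 1.8 m observer, roughly 25 km from 50 m up.
print(s_acos(1.8) / 1000, s_asin(1.8) / 1000, s_acos(50.0) / 1000)
```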
logic
<p>I'm far from being an expert in the field of mathematical logic, but I've been reading about the academic work invested in the foundations of mathematics, both in a historical and objective sense; and I learned that it all seems to reduce to a proper -axiomatic- formulation of set theory.</p> <p>It also seems that all set theories (even if those come in ontologically different flavours, such as the ones which pursue the &quot;<a href="http://ncatlab.org/nlab/show/material+set+theory" rel="noreferrer">iterative approach</a>&quot; like ZFC, versus the &quot;<a href="http://ncatlab.org/nlab/show/structural+set+theory" rel="noreferrer">stratified approach</a>&quot; -inspired by Russell's and Whitehead's type theory first formulated in their <em>Principia</em>- such as Quine's NFU or Mendelson's ST) are built as collections of axioms expressed in a <strong>common language</strong>, which invariably involves an underlying <strong>first order predicate logic</strong> augmented with the set-membership binary relation symbol. From this follows that FOL makes up the (<em>necessary</em>) &quot;formal template&quot; in mathematics, at least from a foundational perspective.</p> <p>The justification of this very fact, is the reason behind this question. All the stuff I've read about the metalogical virtues of FOL and the properties of its &quot;extensions&quot; could be summarized as the statements below:</p> <ul> <li>FOL is complete (<a href="https://en.wikipedia.org/wiki/G%C3%B6del%27s_completeness_theorem" rel="noreferrer">Gödel, 1929</a>), compact and sound, and all its particular formalizations as deductive systems are equivalent (<a href="https://en.wikipedia.org/wiki/Lindstr%C3%B6m%27s_theorem" rel="noreferrer">Lindström, 1969</a>). That means that, given a (consistent) collection of axioms on top of a FOL deductive system, the set of all theorems which are syntactically provable, are semantically satisfied by a model of the axioms. 
The specification of the axioms absolutely entails all its consequences; and the fact that every first order deductive system is equivalent, suggests that FOL is a context-independent (i.e. objective), formal structure.</li> <li>On the other hand, the <a href="https://en.wikipedia.org/wiki/Skolem-L%C3%B6wenheim_theorem" rel="noreferrer">Löwenheim–Skolem theorem</a> implies that FOL cannot categorically characterize infinite structures, and so every first order theory satisfied by a model of a particular infinite cardinality, is also satisfied by multiple additional models of every other infinite cardinality. This non-categoricity feature is explained to be caused by the lack of expressive power of FOL.</li> <li>The categoricity results that FOL-based theories cannot achieve, can be obtained in a Second Order Logic (SOL) framework. Examples abound in ordinary mathematics, such as the <em><a href="https://en.wikipedia.org/wiki/Least-upper-bound_property" rel="noreferrer">Least Upper Bound</a></em> axiom, which allows the definition of the real number system up to isomorphism. Nevertheless, SOL fails to verify an analog to the completeness results of FOL, and so there is no general match between syntactic provability and semantic satisfiability (in other words, it doesn't admit a complete proof calculus). That means that, even if a chosen collection of axioms is able to categorically characterize an infinite mathematical structure, there is an infinite set of wff's satisfied by the unique model of the axioms which cannot be derived through deduction.</li> <li>The syntactic-semantic schism in SOL also implies that there is no such a thing as an equivalent formulation of potential deductive systems, as is the case in FOL and stated by Lindström's theorem. One of the results of this fact is that the domain over which second order variables range must be specified, otherwise being ill-defined. 
If the domain is allowed to be the full set of subsets of the domain of first order variables, the corresponding <strong>standard semantics</strong> involve the formal properties stated above (enough expressive power to establish categoricity results, and incompleteness of potential, non-equivalent deductive systems). On the other hand, through an appropriate definition of second order domains for second order variables to range over, the resultant logic exhibits <strong>nonstandard semantics</strong> (or <a href="https://en.wikipedia.org/wiki/Henkin_model#Semantics" rel="noreferrer">Henkin semantics</a>) which can be shown to be equivalent to many-sorted FOL; and as single-sorted FOL, it verifies the same metalogical properties stated at the beginning (and of course, its lack of expressive power).</li> <li>The quantification extension over variables of successive superior orders can be formalized, or even eliminate the distinction between individual (first order) variables and predicates; in each case, is obtained -for every N- an Nth Order Logic (NOL), and Higher Order Logic (HOL), respectively. Nevertheless, it can be shown (<a href="http://plato.stanford.edu/entries/logic-higher-order/" rel="noreferrer">Hintikka, 1955</a>) that any sentence in any logic over FOL with standard semantics to be equivalent (in an effective manner) to a sentence in full SOL, using many-sorting.</li> <li>All of this points to the fact that the fundamental distinction, in logical terms, <strong>lies between FOL</strong> (be it single-sorted or many-sorted) <strong>and SOL</strong> (with <strong>standard semantics</strong>). 
Or, what seems to be the case, the logical foundations of every mathematical theory must be either non-categorical or lack a complete proof calculus, with nothing in between that trade-off.</li> </ul> <p>Why, then, is FOL invariably chosen as the underlying logic on top of which the set theoretical axioms are established, in any potentially foundational formalization of mathematics?</p> <p>As I've said, I'm not an expert in this topic, and I just happen to be interested in these themes. What I wrote here is a summary of what I assume I understood of what I read (even though I'm personally inclined against the people who speak about what they don't fully understand). In this light, I'd be very pleased if any answer to this question involves a rectification of any assertion which happened to be wrong.</p> <p><strong>P.S.</strong> : this is an exact repost of the question I originally asked at <a href="https://philosophy.stackexchange.com/questions/3318/is-first-order-logic-fol-the-only-fundamental-logic">Philosophy .SE</a>, because I assumed this to be an overly philosophical matter, and so it wouldn't be well received by the mathematics community. The lack of response there (be it because I was wrong, and this actually makes up a question which can only be answered with a technical background on the subject, or because it's of little philosophical interest) is the reason why I decided to ask it here. Feel free to point out if my original criterion was actually correct, and of course, I'll take no offense if any moderator takes action because of the probable unsuitability of the question on this site.</p>
<p>For the history of first-order logic I strongly recommend &quot;The Road to Modern Logic-An Interpretation&quot;, José Ferreirós, <em>Bull. Symbolic Logic</em> v.7 n.4, 2001, 441-484. <a href="https://personal.us.es/josef/BSL0704-001.pdf" rel="nofollow noreferrer">Author's website</a>, <a href="https://www.math.ucla.edu/%7Easl/bsl/07-toc.htm" rel="nofollow noreferrer">ASL</a>, <a href="https://www.jstor.org/stable/2687794" rel="nofollow noreferrer">JSTOR</a>, <a href="https://doi.org/10.2307/2687794" rel="nofollow noreferrer">DOI: 10.2307/2687794</a>.</p> <p>Apart from first order logic and higher order logic there are several less well known logics that I can mention:</p> <ul> <li><p>Constructive logics used to formalize intuitionism and related areas of constructive mathematics. One key example is Martin-Löf's intuitionistic type theory, which is also very relevant to theoretical computer science.</p> </li> <li><p>Modal logics used to formalize modalities, primarily &quot;possibility&quot; and &quot;necessity&quot;. This is of great interest in philosophy, but somehow has not drawn much interest in ordinary mathematics.</p> </li> <li><p>Paraconsistent logics, which allow for some inconsistencies without the problem of explosion present in most classical and constructive logics. Again, although this is of great philosophical interest, it has not drawn much attention in ordinary math.</p> </li> <li><p>Linear logic, which is more obscure but can be interpreted as a logic where the number of times that an hypothesis is used actually matters, unlike classical logic.</p> </li> </ul>
<p>I have not an answer, just an additional note about this.</p> <p>In 2000, <a href="https://en.wikipedia.org/wiki/John_Alan_Robinson" rel="nofollow noreferrer">John Alan Robinson</a> (known for joining the &quot;cut&quot; logic inference rule and unification into the <a href="https://en.wikipedia.org/wiki/Resolution_%28logic%29" rel="nofollow noreferrer">Resolution</a> logic inference rule, thus giving logic programming its practical and unifying processing principle) authored an eminently readable overview of the &quot;computational logic&quot; research domain: <a href="http://www-personal.umich.edu/%7Etwod/sof/assignments/computational_logic18610001.pdf" rel="nofollow noreferrer">&quot;Computational Logic: Memories of the Past and Challenges for the Future&quot;</a>. On page 2 of that overview, he wrote the following:</p> <blockquote> <p><em>First Order Predicate Calculus: All the Logic We Have and All the Logic We Need.</em></p> <p>By logic I mean the ideas and notations comprising the classical first order predicate calculus with equality (FOL for short). FOL is all the logic we have and all the logic we need. 
(...)</p> <p>Within FOL we are completely free to postulate, by formulating suitably axiomatized first order theories, whatever more exotic constructions we may wish to contemplate in our ontology, or to limit ourselves to more parsimonious means of inference than the full classical repertoire.</p> <p>The first order theory of combinators, for example, provides the semantics of the lambda abstraction notation, which is thus available as syntactic sugar for a deeper, first-order definable, conceptual device.</p> <p>Thus FOL can be used to set up, as first order theories, the many “other logics” such as modal logic, higher order logic, temporal logic, dynamic logic, concurrency logic, epistemic logic, nonmonotonic logic, relevance logic, linear logic, fuzzy logic, intuitionistic logic, causal logic, quantum logic; and so on and so on.</p> <p>The idea that FOL is just one among many &quot;other logics&quot; is an unfortunate source of confusion and apparent complexity. The &quot;other logics&quot; are simply notations reflecting syntactically sugared definitions of notions or limitations which can be formalized within FOL. (...) All those “other logics”, including higher-order logic, are thus theories formulated, like general set theory and indeed all of mathematics, within FOL.</p> </blockquote> <p>So I wanted to know what he meant by this and whether this statement can be defended. This being Robinson, I would assume that yes. I am sure there would be domains where expressing the mathematical constraints using FOL instead of something more practical would be possible in principle, but utterly impractical in fact.</p> <p>The question as stated is better than I could ever have formulated (because I don't know half of what is being mentioned). 
Thank you!</p> <h2>Addendum (very much later)</h2> <p>In <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.476.7046" rel="nofollow noreferrer">The Emergence of First-Order Logic</a>, <a href="https://www.americanscientist.org/authors/detail/gregory-moore" rel="nofollow noreferrer">Gregory H. Moore</a> explains (a bit confusingly, these papers on history need timelines and diagrams) how mathematicians converged on what is today called (untyped) FOL from initially richer logics with Second-Order features. In particular, to formalize Set Theory.</p> <p>The impression arises that FOL is not only non-fundamental in any particular way but objectively reduced in expressiveness relative to a Second-Order Logic. It has just been studied more.</p> <p>Thus, Robinson's &quot;FOL as all of logic with anything beyond reducible to FOL&quot; sounds like a grumpy and ill-advised statement (even more so as the reductions, if they even exist, will be exponentially large or worse). Robinson may be referring to <a href="https://en.wikipedia.org/wiki/Willard_Van_Orman_Quine#Logic" rel="nofollow noreferrer">Willard Van Orman Quine's</a> works. Quine dismisses anything outside of FOL as &quot;not a logic&quot;, which to me is frankly incomprehensible. Quine's attack on Second-Order Logic is criticized by <a href="https://en.wikipedia.org/wiki/George_Boolos" rel="nofollow noreferrer">George Boolos</a> in his 1975 paper &quot;On Second-Order Logic&quot; (found in &quot;Logic, Logic and Logic&quot;), which is not available on the Internet, but searching for that paper brings up other hits of interest, like &quot;Second-Order Logic Revisited&quot; by Otávio Bueno. In any case, I don't understand half of the rather fine points made. 
The fight goes on.</p> <p>Let me quote liberally from the conclusion of Gregory Moore's fine paper:</p> <blockquote> <p>As we have seen, the logics considered from 1879 to 1923 — such as those of Frege, Peirce, Schröder, Löwenheim, Skolem, Peano, and Russell — were generally richer than first-order logic. This richness took one of two forms: the use of infinitely long expressions (by Peirce, Schröder, Hilbert, Löwenheim, and Skolem) and the use of a logic at least as rich as second-order logic (by Frege, Peirce, Schröder, Löwenheim, Peano, Russell, and Hilbert). The fact that no system of logic predominated — although the Peirce-Schröder tradition was strong until about 1920 and <em>Principia Mathematica</em> exerted a substantial influence during the 1920s and 1930s — encouraged both variety and richness in logic.</p> <p>First-order logic emerged as a distinct subsystem of logic in Hilbert's lectures (1917) and, in print, in (<a href="https://en.wikipedia.org/wiki/Principles_of_Mathematical_Logic" rel="nofollow noreferrer">Hilbert and Ackermann 1928</a>). Nevertheless, Hilbert did not at any point regard first-order logic as the proper basis for mathematics. From 1917 on, he opted for the <a href="https://plato.stanford.edu/entries/type-theory/" rel="nofollow noreferrer">theory of types</a> — at first the ramified theory with the <a href="https://en.wikipedia.org/wiki/Axiom_of_reducibility" rel="nofollow noreferrer">Axiom of Reducibility</a> and later a version of the simple theory of types (<span class="math-container">$\omega$</span> - order logic). Likewise, it is inaccurate to regard what Löwenheim did in (1915) as first-order logic. 
Not only did he consider second-order propositions, but even his first-order subsystem included infinitely long expressions.</p> <p>It was in Skolem's work on set theory (1923) that <strong>first-order logic was first proposed as all of logic</strong> and that <strong>set theory was first formulated within first-order logic</strong>. (Beginning in [1928], Herbrand treated the theory of types as merely a mathematical system with an underlying first-order logic.) Over the next four decades Skolem attempted to convince the mathematical community that both of his proposals were correct. The first claim, that first-order logic is all of logic, was taken up (perhaps independently) by Quine, who argued that second-order logic is really set theory in disguise (1941). This claim fared well for a while. After the emergence of a distinct infinitary logic in the 1950s (thanks in good part to Tarski) and after the introduction of <a href="https://plato.stanford.edu/entries/generalized-quantifiers/" rel="nofollow noreferrer">generalized quantifiers</a> (thanks to Mostowski [1957]), <strong>first-order logic is clearly not all of logic</strong>. Skolem's second claim, that set theory should be formulated in first-order logic, was much more successful, and today this is how almost all set theory is done.</p> <p>When Gödel proved the completeness of first-order logic (1929, 1930) and then the incompleteness of both second-order and co-order logic (1931), he both stimulated first-order logic and inhibited the growth of second-order logic. On the other hand, his incompleteness results encouraged the search for an appropriate infinitary logic—by Carnap (1935) and Zermelo (1935). 
The acceptance of first-order logic as one basis on which to formulate all of mathematics came about gradually during the 1930s and 1940s, aided by Bernays's and Gödel's first-order formulations of set theory.</p> <p>Yet Maltsev (1936), through the use of uncountable first-order languages, and Tarski, through the Upward Löwenheim-Skolem Theorem and the definition of truth, rejected the attempt by Skolem to restrict logic to countable first-order languages. In time, uncountable first-order languages and uncountable models became a standard part of the repertoire of first-order logic. Thus set theory entered logic through the back door, both syntactically and semantically, though it failed to enter through the front door of second-order logic.</p> </blockquote> <p>There sure is space for a few updated books in all of this.</p>
linear-algebra
<p>I've seen the statement "The matrix product of two orthogonal matrices is another orthogonal matrix. " on Wolfram's website but haven't seen any proof online as to why this is true. By orthogonal matrix, I mean an $n \times n$ matrix with orthonormal columns. I was working on a problem to show whether $Q^3$ is an orthogonal matrix (where $Q$ is orthogonal matrix), but I think understanding this general case would probably solve that.</p>
<p>If <span class="math-container">$$Q^TQ = I$$</span> <span class="math-container">$$R^TR = I,$$</span> then <span class="math-container">$$(QR)^T(QR) = (R^TQ^T)(QR) = R^T(Q^TQ)R = R^TR = I.$$</span> Of course, this can be extended to <span class="math-container">$n$</span> many matrices inductively.</p>
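<p>(Not from the original answer.) A numeric spot-check of the identity, using $2\times2$ rotation matrices as the stock example of orthogonal matrices; plain Python, no libraries:</p>

```python
import math

def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def is_orthogonal(Q, tol=1e-12):
    """Check Q^T Q = I numerically, entry by entry."""
    n = len(Q)
    P = matmul(transpose(Q), Q)
    return all(abs(P[i][j] - (1.0 if i == j else 0.0)) < tol
               for i in range(n) for j in range(n))

def rotation(t):
    """2x2 rotation by angle t: a standard orthogonal matrix."""
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

Q = rotation(0.7)
R = rotation(1.9)
print(is_orthogonal(Q), is_orthogonal(R), is_orthogonal(matmul(Q, R)))
# The cube Q^3 from the original exercise is orthogonal for the same reason.
print(is_orthogonal(matmul(Q, matmul(Q, Q))))
```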
<p>As an alternative to the other fine answers, here's a more geometric viewpoint:</p> <p>Orthogonal matrices correspond to linear transformations that preserve the length of vectors (isometries). And the composition of two isometries $F$ and $G$ is obviously also an isometry.</p> <p>(Proof: For all vectors $x$, the vector $F(x)$ has the same length as $x$ since $F$ is an isometry, and $G(F(x))$ has the same length as $F(x)$ since $G$ is an isometry; hence $G(F(x))$ has the same length as $x$.)</p>
combinatorics
<p>I saw this question today, it asks how many triangles are in this picture.</p> <p><img src="https://i.sstatic.net/c8X1c.jpg" alt="enter image description here"></p> <p>I don't know how to solve this (without counting directly), though I guess it has something to do with some recurrence.</p> <p>How can count the number of all triangles in the picture ?</p>
<p>Say that instead of four triangles along each edge we have $n$. First count the triangles that point up. This is easy to do if you count them by top vertex. Each vertex in the picture is the top of one triangle for every horizontal grid line below it. Thus, the topmost vertex, which has $n$ horizontal gridlines below it, is the top vertex of $n$ triangles; each of the two vertices in the next row down is the top vertex of $n-1$ triangles; and so on. This gives us a total of</p> <p>$$\begin{align*} \sum_{k=1}^nk(n+1-k)&amp;=\frac12n(n+1)^2-\sum_{k=1}^nk^2\\ &amp;=\frac12n(n+1)^2-\frac16n(n+1)(2n+1)\\ &amp;=\frac16n(n+1)\Big(3(n+1)-(2n+1)\Big)\\ &amp;=\frac16n(n+1)(n+2)\\ &amp;=\binom{n+2}3 \end{align*}$$</p> <p>upward-pointing triangles.</p> <p>The downward-pointing triangles can be counted by their bottom vertices, but it’s a bit messier. First, each vertex not on the left or right edge of the figure is the bottom vertex of a triangle of height $1$, and there are $$\sum_{k=1}^{n-1}k=\binom{n}2$$ of them. Each vertex that is not on the left or right edge or on the slant grid lines adjacent to those edges is the bottom vertex of a triangle of height $2$, and there are </p> <p>$$\sum_{k=1}^{n-3}k=\binom{n-2}2$$ of them. In general each vertex that is not on the left or right edge or on one of the $h-1$ slant grid lines nearest each of those edges is the bottom vertex of a triangle of height $h$, and there are </p> <p>$$\sum_{k=1}^{n+1-2h}k=\binom{n+2-2h}2$$ of them. 
</p> <p><strong>Algebra beyond this point corrected.</strong></p> <p>The total number of downward-pointing triangles is therefore</p> <p>$$\begin{align*} \sum_{h\ge 1}\binom{n+2-2h}2&amp;=\sum_{k=0}^{\lfloor n/2\rfloor-1}\binom{n-2k}2\\ &amp;=\frac12\sum_{k=0}^{\lfloor n/2\rfloor-1}(n-2k)(n-2k-1)\\ &amp;=\frac12\sum_{k=0}^{\lfloor n/2\rfloor-1}\left(n^2-4kn+4k^2-n+2k\right)\\ &amp;=\left\lfloor\frac{n}2\right\rfloor\binom{n}2+2\sum_{k=0}^{\lfloor n/2\rfloor-1}k^2-(2n-1)\sum_{k=0}^{\lfloor n/2\rfloor-1}k\\ &amp;=\left\lfloor\frac{n}2\right\rfloor\binom{n}2+\frac13\left\lfloor\frac{n}2\right\rfloor\left(\left\lfloor\frac{n}2\right\rfloor-1\right)\left(2\left\lfloor\frac{n}2\right\rfloor-1\right)\\ &amp;\qquad\qquad-\frac12(2n-1)\left\lfloor\frac{n}2\right\rfloor\left(\left\lfloor\frac{n}2\right\rfloor-1\right)\;. \end{align*}$$</p> <p>Set $\displaystyle m=\left\lfloor\frac{n}2\right\rfloor$, and this becomes</p> <p>$$\begin{align*} &amp;m\binom{n}2+\frac13m(m-1)(2m-1)-\frac12(2n-1)m(m-1)\\ &amp;\qquad\qquad=m\binom{n}2+m(m-1)\left(\frac{2m-1}3-n+\frac12\right)\;. \end{align*}$$</p> <p>This simplifies to $$\frac1{24}n(n+2)(2n-1)$$ for even $n$ and to</p> <p>$$\frac1{24}\left(n^2-1\right)(2n+3)$$ for odd $n$.</p> <p>The final figure, then is</p> <p>$$\binom{n+2}3+\begin{cases} \frac1{24}n(n+2)(2n-1),&amp;\text{if }n\text{ is even}\\\\ \frac1{24}\left(n^2-1\right)(2n+3),&amp;\text{if }n\text{ is odd}\;. \end{cases}$$</p>
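<p>(Added for verification, not part of the original answer.) The closed formula can be cross-checked in a few lines of Python against an independent size-by-size count: size-$k$ upward triangles are in bijection with their top vertices, giving the triangular number $T(n-k+1)$, and size-$k$ downward triangles give $T(n-2k+1)$:</p>

```python
def triangles(n):
    """Closed form from the answer: C(n+2,3) up-triangles plus the parity-split down count."""
    up = n * (n + 1) * (n + 2) // 6
    if n % 2 == 0:
        down = n * (n + 2) * (2 * n - 1) // 24
    else:
        down = (n * n - 1) * (2 * n + 3) // 24
    return up + down

def triangles_by_size(n):
    """Independent check: sum triangular numbers of up- and down-triangles over each size k."""
    T = lambda m: m * (m + 1) // 2 if m > 0 else 0
    return sum(T(n - k + 1) + T(n - 2 * k + 1) for k in range(1, n + 1))

print([triangles(n) for n in range(1, 7)])   # [1, 5, 13, 27, 48, 78]
```

For the picture in the question ($n=4$) this gives $27$ triangles.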
<h1>Tabulating numbers</h1> <p>Let $u(n,k)$ denote the number of upwards-pointing triangles of size $k$ included in a triangle of size $n$, where size is a short term for edge length. Let $d(n,k)$ likewise denote the number of down triangles. You can tabulate a few numbers to get a feeling for these. In the following table, row $n$ and column $k$ will contain two numbers separated by a comma, $u(n,k), d(n,k)$.</p> <p>$$ \begin{array}{c|cccccc|c} n \backslash k &amp; 1 &amp; 2 &amp; 3 &amp; 4 &amp; 5 &amp; 6 &amp; \Sigma \\\hline 1 &amp;  1, 0 &amp;&amp;&amp;&amp;&amp;&amp;  1 \\ 2 &amp;  3, 1 &amp;  1,0 &amp;&amp;&amp;&amp;&amp;  5 \\ 3 &amp;  6, 3 &amp;  3,0 &amp;  1,0 &amp;&amp;&amp;&amp; 13 \\ 4 &amp; 10, 6 &amp;  6,1 &amp;  3,0 &amp; 1,0 &amp;&amp;&amp; 27 \\ 5 &amp; 15,10 &amp; 10,3 &amp;  6,0 &amp; 3,0 &amp; 1,0 &amp;&amp; 48 \\ 6 &amp; 21,15 &amp; 15,6 &amp; 10,1 &amp; 6,0 &amp; 3,0 &amp; 1,0 &amp; 78 \end{array} $$</p> <h1>Finding a pattern</h1> <p>Now look for patterns:</p> <ul> <li>$u(n, 1) = u(n - 1, 1) + n$ as the size change added $n$ upwards-pointing triangles</li> <li>$d(n, 1) = u(n - 1, 1)$ as the downward-pointing triangles are based on triangle grid of size one smaller</li> <li>$u(n, n) = 1$ as there is always exactly one triangle of maximal size</li> <li>$d(2k, k) = 1$ as you need at least twice its edge length to contain a downward triangle.</li> <li>$u(n, k) = u(n - 1, k - 1)$ by using the small $(k-1)$-sized triangle at the top as a representant of the larger $k$-sized triangle, excluding the bottom-most (i.e. $n$<sup>th</sup>) row.</li> <li>$d(n, k) = u(n - k, k)$ as the grid continues to expand, adding one row at a time.</li> </ul> <p>Using these rules, you can extend the table above arbitrarily.</p> <p>The important fact to note is that you get the same sequence of $1,3,6,10,15,21,\ldots$ over and over again, in every column. It describes grids of triangles of same size and orientation, increasing the grid size by one in each step. 
For this reason, those numbers are also called <a href="http://oeis.org/A000217">triangular numbers</a>. Once you know where the first triangle appears in a given column, the number of triangles in subsequent rows is easy.</p> <h1>Looking up the sequence</h1> <p>Now take that sum column to <a href="https://oeis.org/">OEIS</a>, and you'll find this to be <a href="https://oeis.org/A002717">sequence A002717</a> which comes with a nice formula:</p> <p>$$\left\lfloor\frac{n(n+2)(2n+1)}8\right\rfloor$$</p> <p>There is also a comment stating that this sequence describes the</p> <blockquote> <p>Number of triangles in triangular matchstick arrangement of side $n$.</p> </blockquote> <p>Which sounds just like what you're asking.</p> <h1>References</h1> <p>If you want to know how to obtain that formula without looking it up, or how to check that formula without simply trusting an encyclopedia, then some of the references given at OEIS will likely help you out:</p> <ul> <li>J. H. Conway and R. K. Guy, <em>The Book of Numbers,</em> <a href="http://books.google.com/books?id=0--3rcO7dMYC&amp;pg=PA83">p. 83</a>.</li> <li>F. Gerrish, <em><a href="http://www.jstor.org/stable/3613774">How many triangles</a>,</em> Math. Gaz., 54 (1970), 241-246.</li> <li>J. Halsall, <em><a href="http://www.jstor.org/stable/3613117">An interesting series</a>,</em> Math. Gaz., 46 (1962), 55-56.</li> <li>M. E. Larsen, <em><a href="http://www.math.ku.dk/~mel/mel.pdf">The eternal triangle – a history of a counting problem</a>,</em> College Math. J., 20 (1989), 370-392.</li> <li>C. L. Hamberg and T. M. Green, <em><a href="http://www.jstor.org/stable/27957564">An application of triangular numbers</a>,</em> Mathematics Teacher, 60 (1967), 339-342. <em>(Referenced by Larsen)</em></li> <li>B. D. Mastrantone, <em><a href="http://www.jstor.org/stable/3612392">Comment</a>,</em> Math. Gaz., 55 (1971), 438-440.</li> <li><em><a href="http://www.jstor.org/stable/2688056">Problem 889</a>,</em> Math. 
Mag., 47 (1974), 289-292.</li> <li>L. Smiley, <em><a href="http://www.jstor.org/stable/2690473">A Quick Solution of Triangle Counting</a>,</em> Mathematics Magazine, 66, #1, Feb '93, p. 40.</li> </ul>
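<p>The recurrences above determine all the counts, and a short sketch can machine-check them against the closed formula (the function names <code>u</code>, <code>d</code>, <code>total</code> are my own; the closed form for <code>u</code> follows from the "same triangular sequence in every column" observation):</p>

```python
# Machine-check of the recurrences: up triangles of size k in a grid of
# size n sit at positions forming a triangular grid of size n-k+1, so
# u(n, k) is the triangular number T(n-k+1); d(n, k) = u(n-k, k).
# Totals are compared against floor(n(n+2)(2n+1)/8) from A002717.
def u(n, k):
    m = n - k + 1                    # positions form a size-(n-k+1) grid
    return m * (m + 1) // 2 if m > 0 else 0

def d(n, k):
    return u(n - k, k)               # vanishes automatically when n < 2k

def total(n):
    return sum(u(n, k) + d(n, k) for k in range(1, n + 1))

# Reproduce the sigma column of the table above.
assert [total(n) for n in range(1, 7)] == [1, 5, 13, 27, 48, 78]

# Compare against the closed formula for many more n.
for n in range(1, 40):
    assert total(n) == n * (n + 2) * (2 * n + 1) // 8
```

<p>The integer division already performs the floor, since $n(n+2)(2n+1)$ is taken modulo $8$ exactly as in the OEIS formula.</p>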
combinatorics
<p>Let's consider a function (or a way to obtain a formal power series):</p> <p>$$f(x)=x+(x+(x+(x+(x+(x+\dots)^6)^5)^4)^3)^2$$</p> <p>Where $\dots$ is replaced by an infinite sequence of nested brackets raised to $n$th power.</p> <p>The function is defined as the limit of:</p> <p>$$f_1(x)=x$$</p> <p>$$f_2(x)=x+x^2$$</p> <p>$$f_3(x)=x+(x+x^3)^2$$</p> <p>etc.</p> <p>For $|x|$ 'small enough' we have a finite limit $f(x)$, but I'm not really interested in it right now.</p> <p>What I'm interested in - if we consider the function to be represented by a (formal) power series, then we can expand the terms $f_n$ and study the sequence of coefficients.</p> <p>It appears to converge as well (i.e. the coefficients for first $N$ powers of $x$ stop changing after a while).</p> <p>For example, we have the correct first $50$ coefficients for $f_{10}$:</p> <p>$$(a_n)=$$</p> <p><code>0,1,1,0,2,0,1,6,0,6,6,24,15,26,48,56,240,60,303,504,780,1002,1776,3246,3601,7826,7500,18980,26874,38130,56196,99360,153636,210084,390348,486420,900428,1310118,2064612,3073008,4825138,7558008,11428162,18596886,26006031,43625940,65162736,100027728,152897710,242895050,365185374</code></p> <p>I say they are correct, because they are the same up until $f_{15}$ at least (checked with Mathematica).</p> <blockquote> <p>Is there any other way to define this integer sequence? </p> <p>What can we say about the rate of growth of this sequence, the existence of small $a_n$ for large $n$, etc.? <em>(see numerical estimations below)</em></p> <p>Does it become monotone after $a_{18}=60$? (Actually, $a_{27}=7500$ is smaller than the previous term as well) <em>(see numerical estimations below)</em></p> <p>Are $a_0,a_3,a_5,a_8$ the only zero members of the sequence? 
<em>(appears to be yes, see numerical estimations below)</em></p> </blockquote> <p>The sequence is not in OEIS (which is not surprising to me).</p> <hr> <p><strong>Edit</strong></p> <p>Following Winther's lead I computed the ratios of successive terms for $f_{70}$ until $n=35 \cdot 69=2415$:</p> <p>$$c_n=\frac{a_{n+1}}{a_n}$$</p> <p><a href="https://i.sstatic.net/4sfQg.png" rel="noreferrer"><img src="https://i.sstatic.net/4sfQg.png" alt="enter image description here"></a></p> <p>And also the differences between the successive ratios:</p> <p>$$d_n=c_n-c_{n+1}$$</p> <p><a href="https://i.sstatic.net/6xsw3.png" rel="noreferrer"><img src="https://i.sstatic.net/6xsw3.png" alt="enter image description here"></a></p> <p>We have:</p> <p>$$c_{2413}=1.428347168$$</p> <p>$$c_{2414}=1.428338038$$</p> <p>I conjecture that $c_{\infty}=\sqrt{2}$, but I'm not sure.</p> <ul> <li>After much effort, I computed </li> </ul> <blockquote> <p>$$c_{4949}=1.4132183695$$</p> </blockquote> <p>Which seems to disprove my conjecture. The nearby values seem to agree with this.</p> <p>$$c_{4948}=1.4132224343 \\ c_{4947}=1.4132265001$$</p> <p>But the most striking thing - just how much the sequence stabilizes after the first $200-300$ terms.</p> <blockquote> <p>How can we explain this behaviour? Why does the sequence start with more or less 'random' terms, but becomes monotone for large $n$?</p> </blockquote> <hr> <p><strong>UPDATE</strong></p> <p>The sequence is now in OEIS, number <a href="https://oeis.org/A276436" rel="noreferrer">A276436</a></p> <hr>
<p>Just adding some results from a numerical computation of the first $n = 4000$ $a_n$'s in case anyone is interested in seeing how the sequence grows. The Mathematica code used (probably not very efficient) is given at the end. I compute $f_n(x)$ by solving the recurrence: $g_{i+1} = (x + g_i)^{n-i}$ with $g_1 = x^n$. This way we have $f_n(x) = g_n$.</p> <p>Here you can see $\frac{\log(a_n)}{n}$,</p> <p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href="https://i.sstatic.net/MddAc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MddAc.png" alt="enter image description here"></a></p> <p>and here you can see the ratio $\frac{a_{n+1}}{a_n}$,</p> <p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href="https://i.sstatic.net/xx3GC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xx3GC.png" alt="enter image description here"></a></p> <p>and here is a plot of $f(x)$ (well, $f_{15}(x)$; however, the plot below looks the same for larger $n$). The vertical line denotes $x = \frac{1}{\sqrt{2}}$, which seems to be a vertical asymptote for $f(x)$.</p> <p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href="https://i.sstatic.net/dLFHm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dLFHm.png" alt="enter image description here"></a></p> <pre><code>(* Define the function f_n(x) *) f[n_, x_] := Module[{res, i}, res = x^n; Do[ res = (x + res)^(n - i); , {i, 1, n - 1}]; res ]; (* How many an's to compute? *) numterms = 1000; (* a_m stabilizes for m &gt; n(n-1)/2, so we only need to compute fn for n = nmax *) nmax = Ceiling[Sqrt[2 numterms]]; (* Extract the coefficients *) powerseries = Normal[Series[f[nmax, x], {x, 0, numterms}]]; an = Coefficient[powerseries, x, #] &amp; /@ Range[0, numterms]; bn = Table[{i, Log[an[[i]]]/i}, {i, 1, Length[an]}]; cn = Table[{i, an[[i + 1]]/an[[i]]}, {i, 1, Length[an] - 1}]; (* Plot it up *) ListLogLinearPlot[bn] ListLogLinearPlot[cn] </code></pre>
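<p>For readers without Mathematica, here is a pure-Python sketch of the same recurrence (the truncation degree and helper names are my choices). It reproduces the coefficients listed in the question:</p>

```python
# Pure-Python port of the recurrence g_1 = x^n, g_{i+1} = (x + g_i)^(n-i),
# f_n = g_n.  Polynomials are coefficient lists truncated at degree N.
N = 20           # keep coefficients up to x^N
n = 10           # f_10; its coefficients are stable up to degree n(n-1)/2 = 45

def mul(p, q):
    """Multiply two truncated polynomials modulo x^(N+1)."""
    r = [0] * (N + 1)
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                if b and i + j <= N:
                    r[i + j] += a * b
    return r

def power(p, k):
    r = [1] + [0] * N
    for _ in range(k):
        r = mul(r, p)
    return r

x = [0, 1] + [0] * (N - 1)
g = power(x, n)                                    # g_1 = x^n
for i in range(1, n):
    g = power([a + b for a, b in zip(x, g)], n - i)

# First coefficients, exactly as listed in the question:
assert g == [0, 1, 1, 0, 2, 0, 1, 6, 0, 6, 6, 24, 15, 26,
             48, 56, 240, 60, 303, 504, 780]
```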
<p>An interpretation of what $f$ is counting: in terms of diagrams, consider a branching process where at each level $n-1$ you either put a leaf (the factor $x$) or you branch into $n$ distinct branches at level $n$. The generating functions at each level then verify the recursion relation (I put the recursion differently than the OP):</p> <p>$$L_{n-1}(x) = x + (L_{n}(x))^n $$</p> <p>Each $L_n(x)$ has the form $x + \dots$</p> <p>The function $L_1(x)=f(x)$ then counts the number of trees. Using parentheses to indicate the branching level, we have:</p> <p>$L_1(x)=f(x)=(x) \ + \ ((x)(x)) \ + \ 2 ((x) \ \ ((x)\;(x)\;(x)) )+ ...$</p> <p>Here is a drawing of orders up to $x^7$. Each filled circle corresponds to a leaf (a factor of $x$). Regarding the counting factors: e.g. $6$ comes from $2$ choices for where to put the $3$-branching and then $3$ choices for putting the $4$-branching. <a href="https://i.sstatic.net/IcDRA.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IcDRA.jpg" alt="First 5 terms"></a></p>
linear-algebra
<p>I gave the following problem to students:</p> <blockquote> <p>Two $n\times n$ matrices $A$ and $B$ are <em>similar</em> if there exists a nonsingular matrix $P$ such that $A=P^{-1}BP$.</p> <ol> <li><p>Prove that if $A$ and $B$ are two similar $n\times n$ matrices, then they have the same determinant and the same trace.</p></li> <li><p>Give an example of two $2\times 2$ matrices $A$ and $B$ with the same determinant and the same trace, but that are not similar.</p></li> </ol> </blockquote> <p>Most of the ~20 students got the first question right. However, almost none of them found a correct example for the second question. Most of them gave examples of matrices that have the same determinant and the same trace. </p> <p>But computations show that their examples are similar matrices. They didn't bother to check that, though; they just tried <em>random</em> matrices with the same trace and the same determinant, hoping it would be a correct example.</p> <p><strong>Question</strong>: how to explain that none of the random trials gave non-similar matrices?</p> <p>Any answer based on density or measure theory is fine. In particular, you can assume any reasonable distribution on the entries of the matrix. If it matters, the course is about matrices with real coefficients, but you can assume integer coefficients, since when choosing numbers <em>at random</em>, most people will choose integers.</p>
<p>If <span class="math-container">$A$</span> is a <span class="math-container">$2\times 2$</span> matrix with determinant <span class="math-container">$d$</span> and trace <span class="math-container">$t$</span>, then the characteristic polynomial of <span class="math-container">$A$</span> is <span class="math-container">$x^2-tx+d$</span>. If this polynomial has distinct roots (over <span class="math-container">$\mathbb{C}$</span>), then <span class="math-container">$A$</span> has distinct eigenvalues and hence is diagonalizable (over <span class="math-container">$\mathbb{C}$</span>). In particular, if <span class="math-container">$d$</span> and <span class="math-container">$t$</span> are such that the characteristic polynomial has distinct roots, then any other <span class="math-container">$B$</span> with the same determinant and trace is similar to <span class="math-container">$A$</span>, since they are diagonalizable with the same eigenvalues.</p> <p>So to give a correct example in part (2), you need <span class="math-container">$x^2-tx+d$</span> to have a double root, which happens only when the discriminant <span class="math-container">$t^2-4d$</span> is <span class="math-container">$0$</span>. If you choose the matrix <span class="math-container">$A$</span> (or the values of <span class="math-container">$t$</span> and <span class="math-container">$d$</span>) &quot;at random&quot; in any reasonable way, then <span class="math-container">$t^2-4d$</span> will usually not be <span class="math-container">$0$</span>. (For instance, if you choose <span class="math-container">$A$</span>'s entries uniformly from some interval, then <span class="math-container">$t^2-4d$</span> will be nonzero with probability <span class="math-container">$1$</span>, since the vanishing set in <span class="math-container">$\mathbb{R}^n$</span> of any nonzero polynomial in <span class="math-container">$n$</span> variables has Lebesgue measure <span class="math-container">$0$</span>.) 
Assuming that students did something like pick <span class="math-container">$A$</span> &quot;at random&quot; and then built <span class="math-container">$B$</span> to have the same trace and determinant, this would explain why none of them found a correct example.</p> <p>Note that this is very much special to <span class="math-container">$2\times 2$</span> matrices. In higher dimensions, the determinant and trace do not determine the characteristic polynomial (they just give two of the coefficients), and so if you pick two matrices with the same determinant and trace they will typically have different characteristic polynomials and not be similar.</p>
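<p>This can be illustrated by simulating the students' procedure (a sketch; the uniform entry distribution and the seed are my choices):</p>

```python
# Draw random 2x2 matrices with entries uniform on [-1, 1] and count how
# often the degenerate locus t^2 - 4d = 0 (t = trace, d = determinant)
# is hit.  It never is: the locus has measure zero.
import random

random.seed(0)
degenerate = 0
for _ in range(100_000):
    a, b, c, d = (random.uniform(-1.0, 1.0) for _ in range(4))
    t, det = a + d, a * d - b * c
    if t * t - 4.0 * det == 0.0:
        degenerate += 1

assert degenerate == 0   # a random trace/determinant pair never gives a double root
```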
<p>As Eric points out, such $2\times2$ matrices are special. In fact, there are only two such pairs of matrices. The number depends on how you count, but the point is that such matrices have a <em>very</em> special form.</p> <p>Eric proved that the two matrices must have a double eigenvalue. Let the eigenvalue be $\lambda$. It is a little exercise<sup>1</sup> to show that $2\times2$ matrices with double eigenvalue $\lambda$ are similar to a matrix of the form $$ C_{\lambda,\mu} = \begin{pmatrix} \lambda&amp;\mu\\ 0&amp;\lambda \end{pmatrix}. $$ Using suitable diagonal matrices shows that $C_{\lambda,\mu}$ is similar to $C_{\lambda,1}$ if $\mu\neq0$. On the other hand, $C_{\lambda,0}$ and $C_{\lambda,1}$ are not similar; one is a scaling and the other one is not.</p> <p>Therefore, up to similarity transformations, the only possible example is $A=C_{\lambda,0}$ and $B=C_{\lambda,1}$ (or vice versa). Since scaling doesn't really change anything, <strong>the only examples</strong> (up to similarity, scaling, and swapping the two matrices) are $$ A = \begin{pmatrix} 1&amp;0\\ 0&amp;1 \end{pmatrix}, \quad B = \begin{pmatrix} 1&amp;1\\ 0&amp;1 \end{pmatrix} $$ and $$ A = \begin{pmatrix} 0&amp;0\\ 0&amp;0 \end{pmatrix}, \quad B = \begin{pmatrix} 0&amp;1\\ 0&amp;0 \end{pmatrix}. $$ If adding multiples of the identity is added to the list of symmetries (then scaling can be removed), then there is only one matrix pair up to the symmetries.</p> <p>If you are familiar with the <a href="https://en.wikipedia.org/wiki/Jordan_normal_form" rel="noreferrer">Jordan normal form</a>, it gives a different way to see it. Once the eigenvalues are fixed to be equal, the only free property (up to similarity) is whether there are one or two blocks in the normal form. 
The Jordan normal form is invariant under similarity transformations, so it gives a very quick way to solve problems like this.</p> <hr> <p><sup>1</sup> You only need to show that any matrix is similar to an upper triangular matrix. The eigenvalues (which now coincide) are on the diagonal. You can skip this exercise if you have Jordan normal forms at your disposal.</p>
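<p>The canonical pair can also be checked mechanically (plain nested lists as matrices; the helper names are mine):</p>

```python
# A = I and the Jordan block B share trace and determinant, yet they are
# not similar: P^{-1} A P = A for every invertible P when A = I, so no
# conjugation can turn A into B.
A = [[1, 0], [0, 1]]
B = [[1, 1], [0, 1]]

def trace(M):
    return M[0][0] + M[1][1]

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

assert trace(A) == trace(B) == 2
assert det(A) == det(B) == 1
assert A != B    # and every conjugate of A equals A, so A, B are not similar
```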
probability
<p>Suppose $X$ and $Y$ are iid random variables taking values in $[0,1]$, and let $\alpha &gt; 0$. What is the maximum possible value of $\mathbb{E}|X-Y|^\alpha$? </p> <p>I have already asked this question for $\alpha = 1$ <a href="https://math.stackexchange.com/questions/2149565/maximum-mean-absolute-difference-of-two-iid-random-variables">here</a>: one can show that $\mathbb{E}|X-Y| \leq 1/2$ by integrating directly, and using some clever calculations. Basically, one has the useful identity $|X-Y| = \max\{X,Y\} - \min\{X,Y\}$, which allows a direct calculation. There is an easier argument to show $\mathbb{E}|X - Y|^2 \leq 1/2$. In both cases, the maximum is attained when the distribution is Bernoulli 1/2, i.e. $\mathbb{P}(X = 0) = \mathbb{P}(X = 1) = 1/2$. I suspect that this solution achieves the maximum for all $\alpha$ (it is always 1/2), but I have no idea how to try and prove this. </p> <p><strong>Edit 1:</strong> @Shalop points out an easy proof for $\alpha &gt; 1$, using the case $\alpha = 1$. Since $|x-y|^\alpha \leq |x-y|$ when $\alpha &gt; 1$ and $x,y \in [0,1]$, </p> <p>$E|X-Y|^\alpha \leq E|X-Y| \leq 1/2$. </p> <p>So it only remains to deal with the case when $\alpha \in (0,1)$.</p>
<p>Throughout this answer, we will fix $\alpha \in (0, 1]$.</p> <p>Let $\mathcal{M}$ denote the set of all finite signed Borel measures on $[0, 1]$ and $\mathcal{P} \subset \mathcal{M}$ denote the set of all Borel probability measure on $[0, 1]$. Also, define the pairing $\langle \cdot, \cdot \rangle$ on $\mathcal{M}$ by</p> <p>$$ \forall \mu, \nu \in \mathcal{M}: \qquad \langle \mu, \nu\rangle = \int_{[0,1]^2} |x - y|^{\alpha} \, \mu(dx)\nu(dy). $$</p> <p>We also write $I(\mu) = \langle \mu, \mu\rangle$. Then we prove the following claim.</p> <blockquote> <p><strong>Proposition.</strong> If $\mu \in \mathcal{P}$ satisfies $\langle \mu, \delta_{t} \rangle = \langle \mu, \delta_{s} \rangle$ for all $s, t \in [0, 1]$, then $$I(\mu) = \max\{ I(\nu) : \nu \in \mathcal{P}\}.$$</p> </blockquote> <p>We defer the proof of the lemma to the end and first rejoice its consequence.</p> <blockquote> <ul> <li><p>When $\alpha = 1$, it is easy to see that the choice $\mu_1 = \frac{1}{2}(\delta_0 + \delta_1)$ works.</p></li> <li><p>When $\alpha \in (0, 1)$, we can focus on $\mu_{\alpha}(dx) = f_{\alpha}(x) \, dx$ where $f_{\alpha}$ is given by</p></li> </ul> <p>$$ f_{\alpha}(x) = \frac{1}{\operatorname{B}(\frac{1-\alpha}{2},\frac{1-\alpha}{2})} \cdot \frac{1}{(x(1-x))^{\frac{1+\alpha}{2}}}, $$</p> <p>Indeed, for $y \in [0, 1]$, apply the substitution $x = \cos^2(\theta/2)$ and $k = 2y-1$ to write</p> <p>$$ \langle \mu_{\alpha}, \delta_y \rangle = \int_{0}^{1} |y - x|^{\alpha} f_{\alpha}(x) \, dx = \frac{1}{\operatorname{B}(\frac{1-\alpha}{2},\frac{1-\alpha}{2})}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \left| \frac{\sin\theta - k}{\cos\theta} \right|^{\alpha} \, d\theta. $$</p> <p>Then letting $\omega(t) = \operatorname{Leb}\left( \theta \in \left(-\frac{\pi}{2}, \frac{\pi}{2}\right) \ : \ \left| \frac{\sin\theta - k}{\cos\theta} \right| &gt; t \right)$, we can check that this satisfies $\omega(t) = \pi - 2\arctan(t)$, which is independent of $k$ (and hence of $y$). 
Moreover,</p> <p>$$ \langle \mu_{\alpha}, \delta_y \rangle = \frac{1}{\operatorname{B}(\frac{1-\alpha}{2},\frac{1-\alpha}{2})} \int_{0}^{\infty} \frac{2t^{\alpha}}{1+t^2} \, dt = \frac{\pi}{\operatorname{B}(\frac{1-\alpha}{2},\frac{1-\alpha}{2})\cos(\frac{\pi\alpha}{2})} $$</p> <p>Integrating both sides over $\mu(dy)$, we know that this is also the value of $I(\mu_{\alpha})$.</p> </blockquote> <p>So it follows that</p> <p>\begin{align*} &amp;\max \{ \mathbb{E} [ |X - Y|^{\alpha}] : X, Y \text{ i.i.d. and } \mathbb{P}(X \in [0, 1]) = 1 \} \\ &amp;\hspace{1.5em} = \max_{\mu \in \mathcal{P}} I(\mu) = I(\mu_{\alpha}) = \frac{\pi}{\operatorname{B}(\frac{1-\alpha}{2},\frac{1-\alpha}{2})\cos(\frac{\pi\alpha}{2})}. \end{align*}</p> <p>Notice that this also matches the numerical value of Kevin Costello as</p> <p>$$ I(\mu_{1/2}) = \frac{\sqrt{2}\pi^{3/2}}{\Gamma\left(\frac{1}{4}\right)^2} \approx 0.59907011736779610372\cdots. $$</p> <p>The following is the graph of $\alpha \mapsto I(\mu_{\alpha})$.</p> <p>$\hspace{8em} $<a href="https://i.sstatic.net/YuTSF.png" rel="noreferrer"><img src="https://i.sstatic.net/YuTSF.png" alt="enter image description here"></a></p> <hr> <p><em>Proof of Proposition.</em> We first prove the following lemma.</p> <blockquote> <p><strong>Lemma.</strong> If $\mu \in \mathcal{M}$ satisfies $\mu([0,1]) = 0$, then we have $I(\mu) \leq 0$.</p> </blockquote> <p>Indeed, notice that there exists a constant $c &gt; 0$ for which</p> <p>$$ \forall x \in \mathbb{R}: \qquad |x|^{\alpha} = c\int_{0}^{\infty} \frac{1 - \cos (xt)}{t^{1+\alpha}} \, dt $$</p> <p>holds. Indeed, this easily follows from the integrability of the integral and the substitution $|x|t \mapsto t$. 
So by the Tonelli's theorem, for any positive $\mu, \nu \in \mathcal{M}$,</p> <p>\begin{align*} \langle \mu, \nu \rangle &amp;= c\int_{0}^{\infty} \int_{[0,1]^2} \frac{1 - \cos ((x - y)t)}{t^{1+\alpha}} \, \mu(dx)\nu(dy)dt \\ &amp;= c\int_{0}^{\infty} \frac{\hat{\mu}(0)\hat{\nu}(0) - \operatorname{Re}( \hat{\mu}(t)\overline{\hat{\nu}(t)} )}{t^{1+\alpha}} \, dt, \end{align*}</p> <p>where $\hat{\mu}(t) = \int_{[0,1]} e^{itx} \, \mu(dx)$ is the Fourier transform of $\mu$. In particular, this shows that the right-hand side is integrable. So by linearity this relation extends to all pairs of $\mu, \nu$ in $\mathcal{M}$. So, if $\mu \in \mathcal{M}$ satisfies $\mu([0,1]) = 0$ then $\hat{\mu}(0) = 0$ and thus</p> <p>$$ I(\mu) = -c\int_{0}^{\infty} \frac{|\hat{\mu}(t)|^2}{t^{1+\alpha}} \, dt \leq 0, $$</p> <p>completing the proof of Lemma. ////</p> <p>Let us return to the original proof. Let $m$ denote the common values of $\langle \mu, \delta_t\rangle$ for $t \in [0, 1]$. Then for any $\nu \in \mathcal{P}$</p> <p>$$ \langle \mu, \nu \rangle = \int \left( \int_{[0,1]} |x - y|^{\alpha} \, \mu(dx) \right) \, \nu(dy) = \int \langle \mu, \delta_y \rangle \, \nu(dy) = m. $$</p> <p>So it follows that</p> <p>$$ \forall \nu \in \mathcal{P} : \qquad I(\nu) = I(\mu) + 2\underbrace{\langle \mu, \nu - \mu \rangle}_{=m-m = 0} + \underbrace{I(\nu - \mu)}_{\leq 0} \leq I(\mu) $$</p> <p>as desired.</p>
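<p>As a numerical sanity check: for $\alpha=\tfrac12$ the maximizer $\mu_{1/2}$ is exactly the $\operatorname{Beta}(\tfrac14,\tfrac14)$ law, which the Python standard library can sample directly. A Monte Carlo sketch (sample size and seed are my choices) reproduces $I(\mu_{1/2})\approx 0.599$:</p>

```python
# Monte Carlo check: sample X, Y iid Beta(1/4, 1/4), whose density is
# proportional to (x(1-x))^(-3/4), i.e. mu_{1/2} above, and compare the
# sample mean of |X - Y|^(1/2) with sqrt(2) pi^(3/2) / Gamma(1/4)^2.
import math
import random

random.seed(1)
target = math.sqrt(2) * math.pi ** 1.5 / math.gamma(0.25) ** 2  # ~0.59907

n = 200_000
acc = 0.0
for _ in range(n):
    x = random.betavariate(0.25, 0.25)
    y = random.betavariate(0.25, 0.25)
    acc += abs(x - y) ** 0.5
estimate = acc / n

assert abs(estimate - target) < 0.01
```

<p>The tolerance is generous compared with the Monte Carlo standard error at this sample size.</p>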
<p>This isn't a full solution, but it's too long for a comment. </p> <p>For fixed $0&lt;\alpha&lt;1$ we can get an approximate solution by considering the problem discretized to distributions that only take on values of the form $\frac{k}{n}$ for some reasonably large $n$. Then the problem becomes equivalent to $$\max_x x^T A x$$ where $A$ is the $(n+1) \times (n+1)$ matrix whose $(i,j)$ entry is $\left(\frac{|i-j|}{n}\right)^{\alpha}$, and the maximum is taken over all non-negative vectors summing to $1$. </p> <p>If we further assume that there is a maximum where all entries of $x$ are non-zero, Lagrange multipliers imply that the optimal $x$ in this case is a solution to $$Ax=\lambda {\mathbb 1_{n+1}}$$ (where $\mathbb 1_{n+1}$ is the all ones vector), so we can just take $A^{-1} \mathbb{1_{n+1}}$ and rescale. </p> <p>For $n=1000$ and $\alpha=\frac{1}{2}$, this gives a maximum of approximately $0.5990$, with a vector whose first few entries are $(0.07382, 0.02756, 0.01603, 0.01143)$.</p> <hr> <p>If the optimal $x$ has a density $f(x)$ that's positive everywhere, and we want to maximize $\int_0^1 \int_0^1 f(x) f(y) |x-y|^{\alpha} \, dx \, dy$, the density "should" (by analogue to the above, which can probably be made rigorous) satisfy $$\int_{0}^1 f(y) |x-y|^{\alpha} \, dy= \textrm{ constant independent of } x,$$ but I'm not familiar enough with integral transforms to know if there's a standard way of inverting this. </p>
linear-algebra
<p>Let $k$ be a field with characteristic different from $2$, and $A$ and $B$ be $2 \times 2$ matrices with entries in $k$. Then we can prove, with a bit of art, that $A^2 - 2AB + B^2 = O$ implies $AB = BA$, hence $(A - B)^2 = O$. It came as a surprise to me when I first succeeded in proving this, for it seemed quite nontrivial to me.</p> <p>I am curious if there is a similar or more general result for polynomial equations of matrices that ensures commutativity. (Of course, we do not consider trivial cases such as the polynomial $p(X, Y) = XY - YX$ corresponding to the commutator.)</p> <p>p.s. This question is purely out of curiosity. I do not even know whether this kind of problem is worth considering, so you may regard this question as a recreational one.</p>
<p>Your question is very interesting; unfortunately this is not a complete answer, and in fact not an answer at all, or rather a negative answer.</p> <p>You might think, as a generalization of <span class="math-container">$A^2+B^2=2AB$</span>, of the following matrix equation in <span class="math-container">$\mathcal M_n(\Bbb C)$</span>: <span class="math-container">$$ (E) :\ \sum_{l=0}^k (-1)^l \binom kl A^{k-l}B^l = 0. $$</span></p> <p>This equation implies commutativity if and only if <span class="math-container">$n=1$</span> or <span class="math-container">$(n,k)=(2,2)$</span>, which is the case you studied. However, the equation (E) has a remarkable property: <strong>if <span class="math-container">$A$</span> and <span class="math-container">$B$</span> satisfy (E) then their characteristic polynomials are equal</strong>. Isn't that amazing? You can have a look at <a href="https://hal.inria.fr/hal-00780438/document" rel="nofollow noreferrer">this paper for a proof</a>.</p>
<p>I'm neither an expert on this field nor on the unrelated field of the facts I'm about to cite, so this is more of a shot in the dark. But: given a set of matrices, the problem of whether some product of them (in some order, with repetitions allowed) equals zero is undecidable, even for relatively small cases (such as two matrices of sufficient size or a low fixed number of $3\times3$ matrices).</p> <p>A solution to one side of this problem (the "is there a polynomial such that..." side) <em>looks</em> harder (though I have no idea beyond intuition whether it really is!) than the mortality problem mentioned above. If that is actually true, then it would at least suggest that $AB = BA$ does not guarantee the existence of a solution (though it might still happen).</p> <p>In any case, the fact that the mortality problem is decidable for $2 \times 2$ matrices at least shows that the complexity of such problems increases rapidly with dimension, which could explain why your result for $2$ does not easily extend to higher dimensions.</p> <p>Apologies for the vagueness of all this; I just figured it might give someone with more experience in the field a different direction to think about the problem. If someone does want to look that way, <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.31.5792&amp;rep=rep1&amp;type=pdf" rel="nofollow">this paper</a> has the mentioned results as well as references to related literature.</p>
linear-algebra
<p>I am currently trying to self-study linear algebra. I've noticed that a lot of the definitions for terms (like eigenvectors, characteristic polynomials, determinants, and so on) require a <strong>square</strong> matrix instead of just any real-valued matrix. For example, <a href="http://mathworld.wolfram.com" rel="noreferrer">Wolfram</a> has this in its <a href="http://mathworld.wolfram.com/CharacteristicPolynomial.html" rel="noreferrer">definition</a> of the characteristic polynomial:</p> <blockquote> <p>The characteristic polynomial is the polynomial left-hand side of the characteristic equation $\det(A - I\lambda) = 0$, where $A$ is a square matrix.</p> </blockquote> <p>Why must the matrix be square? What happens if the matrix is not square? And why do square matrices come up so frequently in these definitions? Sorry if this is a really simple question, but I feel like I'm missing something fundamental.</p>
<p>Remember that an $n$-by-$m$ matrix with real-number entries represents a linear map from $\mathbb{R}^m$ to $\mathbb{R}^n$ (or more generally, an $n$-by-$m$ matrix with entries from some field $k$ represents a linear map from $k^m$ to $k^n$). When $m=n$ - that is, when the matrix is square - we're talking about a map from a space to itself.</p> <p>So really your question amounts to:</p> <blockquote> <p>Why are maps from a space to <em>itself</em> - as opposed to maps from a space to <em>something else</em> - particularly interesting?</p> </blockquote> <p>Well, the point is that when I'm looking at a map from a space to itself inputs to and outputs from that map are the same "type" of thing, <em>and so I can meaningfully compare them</em>. So, for example, if $f:\mathbb{R}^4\rightarrow\mathbb{R}^4$ it makes sense to ask when $f(v)$ is parallel to $v$, since $f(v)$ and $v$ lie in the same space; but asking when $g(v)$ is parallel to $v$ for $g:\mathbb{R}^4\rightarrow\mathbb{R}^3$ doesn't make any sense, since $g(v)$ and $v$ are just different types of objects. (This example, by the way, is just saying that <em>eigenvectors/values</em> make sense when the matrix is square, but not when it's not square.)</p> <hr> <p>As another example, let's consider the determinant. The geometric meaning of the determinant is that it measures how much a linear map "expands/shrinks" a unit of (signed) volume - e.g. the map $(x,y,z)\mapsto(-2x,2y,2z)$ takes a unit of volume to $-8$ units of volume, so has determinant $-8$. What's interesting is that this applies to <em>every</em> blob of volume: it doesn't matter whether we look at how the map distorts the usual 1-1-1 cube, or some other random cube.</p> <p>But what if we try to go from $3$D to $2$D (so we're considering a $2$-by-$3$ matrix) or vice versa? Well, we can try to use the same idea: (proportionally) how much <em>area</em> does a given <em>volume</em> wind up producing? 
However, we now run into problems:</p> <ul> <li><p>If we go from $3$ to $2$, the "stretching factor" is no longer invariant. Consider the projection map $(x,y,z)\mapsto (x,y)$, and think about what happens when I stretch a bit of volume vertically ...</p></li> <li><p>If we go from $2$ to $3$, we're never going to get any volume at all - the starting dimension is just too small! So regardless of what map we're looking at, our "stretching factor" seems to be $0$.</p></li> </ul> <p>The point is, in the non-square case the "determinant" as naively construed either is ill-defined or is $0$ for stupid reasons.</p>
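<p>The volume-scaling example above can be checked in coordinates with a hand-rolled $3\times 3$ determinant (a standard cofactor expansion; the helper name is mine):</p>

```python
# The map (x, y, z) -> (-2x, 2y, 2z) has a diagonal matrix; its
# determinant -8 says it scales any blob of volume by 8 and reverses
# orientation, matching the discussion above.
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[-2, 0, 0],
     [0, 2, 0],
     [0, 0, 2]]

assert det3(A) == -8
```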
<p>Lots of good answers already as to why square matrices are so important. But just so you don't think that other matrices are not interesting: they have analogues of the inverse (e.g., the <a href="https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse" rel="noreferrer">Moore-Penrose inverse</a>), and non-square matrices have a <a href="https://en.wikipedia.org/wiki/Singular-value_decomposition" rel="noreferrer">singular-value decomposition</a>, where the singular values play a role loosely analogous to the eigenvalues of a square matrix. These topics are often left out of linear algebra courses, but they can be important in numerical methods for statistics and machine learning. But learn the square matrix results before the fancy non-square matrix results, since the former provide a context for the latter.</p>
geometry
<p>I have a sphere of radius $R_{s}$, and I would like to pick random points in its volume with uniform probability. How can I do so while preventing any sort of clustering around poles or the center of the sphere?</p> <hr> <p>Since I'm unable to answer my own question, here's another solution:</p> <p>Using the strategy suggested by <a href="http://mathworld.wolfram.com/SpherePointPicking.html">Wolfram MathWorld</a> for picking points on the surface of a sphere: let $\theta$ be a random real number uniform over the interval $[0,2\pi]$, let $\phi=\arccos(2v-1)$ where $v$ is a random real number uniform over the interval $[0,1]$, and let $r=R_s (\mathrm{rand}(0,1))^\frac13$. Converting from spherical coordinates, a random point in $(x,y,z)$ inside the sphere would therefore be: $((r\cos(\theta)\sin(\phi)),(r\sin(\theta)\sin(\phi)),(r\cos(\phi)))$.</p> <p>A quick test with a few thousand points in the unit sphere appears to show no clustering. However, I'd appreciate any feedback if someone sees a problem with this approach.</p>
<p>Let's say your sphere is centered at the origin $(0,0,0)$.</p> <p>For the distance $D$ from the origin of your random point, note that you want $P(D \le r) = \left(\frac{r}{R_s}\right)^3$. Thus if $U$ is uniformly distributed between 0 and 1, taking $D = R_s U^{1/3}$ will do the trick.</p> <p>For the direction, a useful fact is that if $X_1, X_2, X_3$ are independent normal random variables with mean 0 and variance 1, then $$\frac{1}{\sqrt{X_1^2 + X_2^2 + X_3^2}} (X_1, X_2, X_3)$$ is uniformly distributed on (the surface of) the unit sphere. You can generate normal random variables from uniform ones in various ways; the <a href="http://en.wikipedia.org/wiki/Box%E2%80%93Muller_transform">Box-Muller algorithm</a> is a nice simple approach.</p> <p>So if you choose $U$ uniformly distributed between 0 and 1, and $X_1, X_2, X_3$ iid standard normal and independent of $U$, then $$\frac{R_s U^{1/3}}{\sqrt{X_1^2 + X_2^2 + X_3^2}} (X_1, X_2, X_3)$$ would produce a uniformly distributed point inside the ball of radius $R_s$.</p>
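<p>A direct Python implementation of this recipe, for the unit ball (function name, sample size, and seed are my choices). Since $D^3 = U$ is uniform on $[0,1]$, the sample mean of $D^3$ should be close to $1/2$, and each coordinate should average to roughly $0$ by symmetry:</p>

```python
# Uniform point in the unit ball: uniform direction from normalized
# Gaussians, radius U^(1/3) for the correct radial distribution.
import math
import random

def random_in_ball():
    u = random.random()
    x, y, z = (random.gauss(0.0, 1.0) for _ in range(3))
    scale = u ** (1 / 3) / math.sqrt(x * x + y * y + z * z)
    return (scale * x, scale * y, scale * z)

random.seed(2)
n = 100_000
pts = [random_in_ball() for _ in range(n)]

# D^3 should be uniform on [0, 1], hence have mean 1/2.
mean_d3 = sum((x * x + y * y + z * z) ** 1.5 for x, y, z in pts) / n
assert abs(mean_d3 - 0.5) < 0.01

# Each coordinate should have mean about 0 by symmetry.
for i in range(3):
    assert abs(sum(p[i] for p in pts) / n) < 0.01
```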
<p>An alternative method in $3$ dimensions: </p> <p>Step 1: Take $x, y, $ and $z$ each uniform on $[-r_s, r_s]$. </p> <p>Step 2: If $x^2+y^2+z^2\leq r_s^2$, stop. If not, throw them away and return to step $1$. </p> <p>Your success probability each time is given by the volume of the sphere over the volume of the cube, which is about $0.52$. So you'll require slightly more than $2$ samples on average. </p> <p>If you're in higher dimensions, this is not a very efficient process at all, because in a large number of dimensions a random point from the cube is probably not in the sphere (so you'll have to take many points before you get a success). In that case a modified version of Nate's algorithm would be the way to go. </p>
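<p>A sketch of the rejection method in Python (names mine), together with an empirical check that the acceptance rate is near the volume ratio $\frac{4}{3}\pi r^3 / (2r)^3 = \pi/6 \approx 0.5236$:</p>

```python
# Rejection sampling from the bounding cube: keep resampling until the
# point lands inside the ball.
import math
import random

def random_in_ball_rejection(r=1.0):
    while True:
        x, y, z = (random.uniform(-r, r) for _ in range(3))
        if x * x + y * y + z * z <= r * r:
            return (x, y, z)

random.seed(3)

# Empirical acceptance rate over many independent cube samples.
trials, hits = 100_000, 0
for _ in range(trials):
    x, y, z = (random.uniform(-1.0, 1.0) for _ in range(3))
    hits += x * x + y * y + z * z <= 1.0

rate = hits / trials
assert abs(rate - math.pi / 6) < 0.01   # ~0.5236, i.e. about 2 samples per point
```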
linear-algebra
<p>I am quite confused about this. I know that a zero eigenvalue means that the null space has nonzero dimension, and that the rank of the matrix is then less than the dimension of the whole space. But is the number of distinct eigenvalues (and thus of linearly independent eigenvectors) equal to the rank of the matrix?</p>
<p>Well, if $A$ is an $n \times n$ matrix, the rank of $A$ plus the nullity of $A$ is equal to $n$; that's the rank-nullity theorem. The nullity is the dimension of the kernel of the matrix, which is all vectors $v$ of the form: $$Av = 0 = 0v.$$ The kernel of $A$ is precisely the eigenspace corresponding to eigenvalue $0$. So, to sum up, the rank is $n$ minus the dimension of the eigenspace corresponding to $0$. If $0$ is not an eigenvalue, then the kernel is trivial, and so the matrix has full rank $n$. The rank depends on no other eigenvalues.</p>
<p>My comment is 7 years late but I hope someone might find some useful information.</p> <p>First, the number of linearly independent eigenvectors of a rank <span class="math-container">$k$</span> matrix can be greater than <span class="math-container">$k$</span>. For example <span class="math-container">\begin{align} A &amp;= \left[ \begin{matrix} 1 &amp; 2 \\ 2 &amp; 4 \end{matrix} \right] \\ rk(A) &amp;= 1 \\ \end{align}</span> <span class="math-container">$A$</span> has the following eigenvalues and eigenvectors <span class="math-container">$\lambda_1 = 5, \mathbf{v}_1 = [1 \ \ 2]^\top$</span>, <span class="math-container">$\lambda_2 = 0, \mathbf{v}_2 = [-2 \ \ 1]^\top$</span>. So <span class="math-container">$A$</span> has 1 linearly independent column but 2 linearly independent eigenvectors. The column space of <span class="math-container">$A$</span> has 1 dimension. The eigenspace of <span class="math-container">$A$</span> has 2 dimensions.</p> <p>There are also cases where the number of linearly independent eigenvectors is smaller than the rank of <span class="math-container">$A$</span>. For example <span class="math-container">\begin{align} A &amp;= \left[ \begin{matrix} 0 &amp; 1 \\ -1 &amp; 0 \end{matrix} \right] \\ rk(A) &amp;= 2 \end{align}</span> <span class="math-container">$A$</span> has no real valued eigenvalues and no real valued eigenvectors. But <span class="math-container">$A$</span> has two <em>complex valued eigenvalues</em> <span class="math-container">$\lambda_1 = i,\ \lambda_2 = -i$</span> and two complex valued eigenvectors.</p> <p>Another remark is that a eigenvalue can correspond to multiple linearly independent eigenvectors. An example is the Identity matrix. 
<span class="math-container">$I_n$</span> has only 1 eigenvalue <span class="math-container">$\lambda = 1$</span> but <span class="math-container">$n$</span> linearly independent eigenvectors.</p> <p>So to answer your question, I think there is no trivial relationship between the rank and the dimension of the eigenspace.</p>
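<p>The two examples from this answer can be verified numerically (an illustrative sketch using NumPy, not part of the original post):</p>

```python
import numpy as np

# rk(A) = 1, yet A has two linearly independent eigenvectors
# (eigenvalues 5 and 0, matching the answer above).
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(int(np.linalg.matrix_rank(A)))  # 1
eigvals = np.linalg.eigvals(A)
print(sorted(abs(round(float(v), 6)) for v in eigvals.real))  # [0.0, 5.0]

# rk(B) = 2, yet B has no real eigenvalues (they are +i and -i).
B = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
print(int(np.linalg.matrix_rank(B)))  # 2
print(all(abs(v.imag) > 0 for v in np.linalg.eigvals(B)))  # True
```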
matrices
<p>I am trying to understand the similarities and differences between the minimal polynomial and characteristic polynomial of Matrices.</p> <ol> <li>When are the minimal polynomial and characteristic polynomial the same</li> <li>When are they different</li> <li>What conditions (eigenvalues/eigenvectors/...) would imply 1 or 2</li> <li>Please tell me anything else about these two polynomials that is essential in comparing them.</li> </ol>
<p>The minimal polynomial is quite literally the smallest (in the sense of divisibility) nonzero polynomial that the matrix satisfies. That is to say, if $A$ has minimal polynomial $m(t)$ then $m(A)=0$, and if $p(t)$ is a nonzero polynomial with $p(A)=0$ then $m(t)$ divides $p(t)$.</p> <p>The characteristic polynomial, on the other hand, is defined algebraically. If $A$ is an $n \times n$ matrix then its characteristic polynomial $\chi(t)$ must have degree $n$. This is not true of the minimal polynomial.</p> <p>It can be proved that if $\lambda$ is an eigenvalue of $A$ then $m(\lambda)=0$. This is reasonably clear: if $\vec v \ne 0$ is a $\lambda$-eigenvector of $A$ then $$m(\lambda) \vec v = m(A) \vec v = 0 \vec v = 0$$ and so $m(\lambda)=0$. The first equality here uses linearity and the fact that $A^n\vec v = \lambda^n \vec v$, which is an easy induction.</p> <p>It can also be proved that $\chi(A)=0$. In particular that $m(t)\, |\, \chi(t)$.</p> <p>So one example of when (1) occurs is when $A$ has $n$ distinct eigenvalues. If this is so then $m(t)$ has $n$ roots, so has degree $\ge n$; but it has degree $\le n$ because it divides $\chi(t)$. Thus they must be equal (since they're both monic, have the same roots and the same degree, and one divides the other).</p> <p>A more complete characterisation of when (1) occurs (and when (2) occurs) can be gained by considering Jordan Normal Form; but I suspect that you've only just learnt about characteristic and minimal polynomials so I don't want to go into JNF.</p> <p>Let me know if there's anything else you'd like to know; I no doubt missed some things out.</p>
<p>The minimal polynomial $m(t)$ is the smallest factor of the characteristic polynomial $f(t)$ such that if $A$ is the matrix, then we still have $m(A) = 0$. The only thing the characteristic polynomial measures is the algebraic multiplicity of an eigenvalue, whereas the minimal polynomial measures the size of the $A$-cycles that form the generalized eigenspaces (a.k.a. the size of the Jordan blocks). These facts can be summarized as follows.</p> <ul> <li>If $f(t)$ has a factor $(t - \lambda)^k$, this means that the eigenvalue $\lambda$ has $k$ linearly independent generalized eigenvectors.</li> <li>If $m(t)$ has a factor $(t - \lambda)^p$, this means that the largest $A$-cycle of generalized eigenvectors contains $p$ elements; that is, the largest Jordan block for $\lambda$ is $p \times p$. Notice that this means that $A$ is only diagonalizable if $m(t)$ has only simple roots.</li> <li>Thus $f(t) = m(t)$ if and only if each eigenvalue $\lambda$ corresponds to a single Jordan block, a.k.a each eigenvalue corresponds to a single minimal invariant subspace of generalized eigenvectors.</li> <li>$f(t)$ and $m(t)$ differ if any eigenvalue has more than one Jordan block, a.k.a. if an eigenvalue has more than one generalized eigenspace.</li> </ul>
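<p>The distinction in both answers can be checked directly by evaluating candidate polynomials at a matrix and seeing which ones vanish. Below is a small pure-Python sketch (my own illustration, not from either answer) using two classic 2&#215;2 examples: the diagonal matrix diag(2, 2), whose minimal polynomial $(t-2)$ is a proper divisor of its characteristic polynomial $(t-2)^2$, and the Jordan block [[2, 1], [0, 2]], for which the two polynomials coincide.</p>

```python
# Evaluate polynomials at matrices to compare the minimal and characteristic
# polynomials on two 2x2 examples (illustrative sketch, plain Python).

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def poly_at_matrix(coeffs, A):
    """Horner evaluation of a polynomial (leading coefficient first) at A."""
    n = len(A)
    result = [[0] * n for _ in range(n)]
    for c in coeffs:
        result = mat_mul(result, A)
        for i in range(n):
            result[i][i] += c          # result = result*A + c*I
    return result

def is_zero(M):
    return all(entry == 0 for row in M for entry in row)

A = [[2, 0], [0, 2]]   # diagonalizable: minimal poly (t-2), char poly (t-2)^2
B = [[2, 1], [0, 2]]   # one Jordan block: minimal poly = char poly = (t-2)^2

print(is_zero(poly_at_matrix([1, -2], A)))     # True:  (t-2) annihilates A
print(is_zero(poly_at_matrix([1, -2], B)))     # False: (t-2) does not kill B
print(is_zero(poly_at_matrix([1, -4, 4], B)))  # True:  Cayley-Hamilton
```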
combinatorics
<p>The goal of the <a href="http://en.wikipedia.org/wiki/Four_fours" rel="noreferrer">four fours</a> puzzle is to represent each natural number using four copies of the digit $4$ and common mathematical symbols.</p> <p>For example, $165=\left(\sqrt{4} + \sqrt{\sqrt{{\sqrt{4^{4!}}}}}\right) \div .4$.</p> <blockquote> <p>If we remove the restriction on the number of fours, let $f(N)$ be the number of fours required to be able to represent all positive integers no greater than $N$. What is the asymptotic behaviour of $f(N)$? Can it be shown that $f(N) \sim r \log N$ for some $r$?</p> </blockquote> <p>To be specific, let’s restrict the operations to the following: </p> <ul> <li>addition: $x+y$</li> <li>subtraction: $x-y$</li> <li>multiplication: $x\times y$</li> <li>division: $x\div y$</li> <li>exponentiation: $y^x$</li> <li>roots: $\sqrt[x]{y}$</li> <li>square root: $\sqrt{x}$</li> <li>factorial $n!$</li> <li>decimal point: $.4$</li> <li>recurring decimal: $. \overline 4$</li> </ul> <p>It is easy to see that $f(N)$ is $O(\log N)$. For example, with four fours, numbers up to $102$ can be represented (see <a href="https://math.stackexchange.com/questions/92230/proving-you-cant-make-2011-out-of-1-2-3-4-nice-twist-on-the-usual/93188#93188">here</a> for a tool for generating solutions), so, since $96 = 4\times4!$, we can use $6k-2$ fours in the form $(\dots((a_1\times 96+a_2)\times 96+a_3)\dots)\times96+a_k$ to represent every number up to $96^k$.</p> <p>On the other hand, we can try to count the number of distinct expressions that can be made with $k$ fours. For example, if we (arbitrarily) permit factorial only to be applied to the digit $4$, and allow no more than two successive applications of the square root operation, we get $\frac{216^k}{18}C_{k-1}$ distinct expressions where $C_k$ is the $k$th Catalan number. 
(Of course, many of these expressions won’t represent a positive integer, many different expressions will represent the same number, and the positive integers generated won’t consist of a contiguous range from $1$ to some $N$.) </p> <p>Using Stirling’s formula, for large $k$, this is approximately $\frac{864^k}{72k\sqrt{\pi k}}$. So for $f(N)$ to grow slower than $r\log N$, we’d need to remove the restrictions on the use of unary operations. (It is <a href="http://en.wikipedia.org/wiki/Four_fours#Rules" rel="noreferrer">well-known</a> that the use of logs enables <em>any</em> number to be represented with only <em>four</em> fours.)</p> <blockquote> <p>Can this approach be extended to show that $f(N)$ is $\Omega(\log N)$? Or does unrestricted use of factorial and square roots mean that $f(N)$ is actually $o(\log N)$? Is the answer different if the use of $x\%$ (percentages) is also permitted?</p> </blockquote>
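<p>The counting argument above can be made concrete for the binary operations alone. The sketch below (not part of the original question) enumerates every value expressible with $k$ fours using only $+,-,\times,\div$ and the atoms $4$, $.4=\frac25$ and $.\overline4=\frac49$, with exact rational arithmetic; since the unary operations (square root, factorial) are omitted, it undercounts what is really achievable.</p>

```python
from fractions import Fraction

# Atoms available from a single digit 4: the integer 4, .4 = 2/5, .4... = 4/9.
ATOMS = (Fraction(4), Fraction(2, 5), Fraction(4, 9))

def reachable(k, _cache={}):
    """Set of exact values expressible with exactly k fours and the four
    binary operations (no sqrt/factorial, so this undercounts)."""
    if k in _cache:
        return _cache[k]
    vals = set(ATOMS) if k == 1 else set()
    for i in range(1, k):
        for a in reachable(i):
            for b in reachable(k - i):
                vals.update((a + b, a - b, a * b))
                if b != 0:
                    vals.add(a / b)
    _cache[k] = vals
    return vals

ints = {v for v in reachable(4) if v.denominator == 1 and v > 0}
print(len(ints))            # how many positive integers four fours can reach

n = 1                       # smallest positive integer NOT reachable this way
while Fraction(n) in ints:
    n += 1
print(n)
```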
<p>I'm one of the authors of the paper referenced by David Bevan in his comment. The four-fours was one inspiration for that problem, although others have thought about it also. The specific version of the problem there looks at the minimum number of <span class="math-container">$1$</span>s needed to represent <span class="math-container">$n$</span> where one is allowed only addition and multiplication but any number of parentheses. Call this <span class="math-container">$g(n)$</span>. For example, <span class="math-container">$g(6) \le 5$</span>, since <span class="math-container">$6=(1+1)(1+1+1)$</span>, and it isn't hard to show that <span class="math-container">$g(6)=5$</span>. Even in this limited version of the problem, obtaining asymptotics is generally difficult.</p> <p>In some sense most natural questions of asymptotic growth are somewhat contained in this question, since one can write any given <span class="math-container">$k$</span> as <span class="math-container">$1+1+1...+1$</span> <span class="math-container">$k$</span> times, and <span class="math-container">$1=k/k$</span>. Thus starting with some <span class="math-container">$k$</span> other than <span class="math-container">$1$</span> (such as <span class="math-container">$k=4$</span>), the asymptotics stay bounded within a constant factor, assuming that addition and division are allowed.</p> <p>However, actually calculating this sort of thing for any set of operations is generally difficult. In the case of integer complexity one has a straightforward way of doing so, since if one calculates <span class="math-container">$g(i)$</span> for all <span class="math-container">$i &lt; n$</span>, calculating <span class="math-container">$g(n)$</span> is then doable. This doesn't apply when one has other operations generally, with division and subtraction already making an algorithm difficult. In this case, one can make such an algorithm but exactly how to do so is more subtle. 
In fact, as long as one is restricted to binary operations this is doable (proof sketch: do what you did to look at all distinct expressions).</p> <p>Adding in non-binary operations makes everything even tougher. Adding in square roots won't make things that much harder, nor will adding factorial by itself. The pair of them together makes calculating specific values much more difficult. My guess would be that even with factorial, square root and the four binary operations there are numbers which require arbitrarily large numbers of <span class="math-container">$1$</span>s, but I also suspect that this would be extremely difficult to prove. Note that this is already substantially weaker than what you are asking: whether the order of growth is <span class="math-container">$\log n$</span>. Here though square roots probably don't alter things at all; in order for it to matter one would need a lot of numbers of the form <span class="math-container">$n^{2^k}$</span> with surprisingly low complexity. This seems unlikely.</p>
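<p>The "straightforward way" described for pure integer complexity (only addition and multiplication) is a short dynamic program, since $g(n)$ depends only on values $g(i)$ with $i &lt; n$. A minimal sketch, with $g$ as in the answer:</p>

```python
def ones_complexity(limit):
    """g[n] = minimum number of 1s needed to write n using only + and *.
    Recurrence: try every split n = a + b and every factorization n = a * b."""
    INF = float('inf')
    g = [INF] * (limit + 1)
    g[1] = 1
    for n in range(2, limit + 1):
        # best split as a sum
        best = min(g[a] + g[n - a] for a in range(1, n // 2 + 1))
        # best split as a product
        d = 2
        while d * d <= n:
            if n % d == 0:
                best = min(best, g[d] + g[n // d])
            d += 1
        g[n] = best
    return g

g = ones_complexity(100)
print(g[6])   # 5, via (1+1)*(1+1+1)
print(g[23])  # 11
```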
<p>You can get <span class="math-container">$103$</span> with five <span class="math-container">$4$</span>s as <span class="math-container">$$\frac {\sqrt{\sqrt{\sqrt{4^{4!}}}}+4+\sqrt{.\overline4}}{\sqrt{.\overline4}}=103$$</span> </p> <p>For four <span class="math-container">$4$</span>s, we have <span class="math-container">$\dfrac {44}{.\overline 4}+4=103$</span>.</p> <p>In fact, <span class="math-container">$113$</span> is the first number I can't get with four <span class="math-container">$4$</span>s.</p>
game-theory
<p>Okay so <a href="https://math.stackexchange.com/questions/835/optimal-strategy-for-deal-or-no-deal">this question</a> reminded me of one my brother asked me a while back about the hit day-time novelty-worn-off-now snoozathon <a href="http://en.wikipedia.org/wiki/Deal_or_no_deal" rel="noreferrer">Deal or no deal</a>.</p> <h3>For the uninitiated:</h3> <p>In playing deal or no deal, the player is presented with one of 22 boxes (randomly selected) each containing different sums of money, he then asks in turn for each of the 21 remaining boxes to be opened, occasionally receiving an offer (from a wholly unconvincing 'banker' figure) for the mystery amount in his box.</p> <p>If he rejects all of the offers along the way, the player is allowed to work his way through several (for some unfathomable reason, emotionally charged) box openings until there remain only two unopened boxes: one of which is his own, the other not. He is then given a choice to stick or switch (take the contents of his own box or the other), something he then agonises pointlessly over for the next 10 minutes.</p> <h3>Monty hall</h3> <p>[If you have not seen the monty hall 'paradox' check out this <a href="http://en.wikipedia.org/wiki/Monty_Hall_problem" rel="noreferrer">wikipedia link</a> and prepare to be baffled, then enlightened, then disappointed that the whole thing is so trivial. After which feel free to read on.]</p> <p>There is a certain similarity, you will agree, between the situation a deal or no deal player finds himself in having rejected all offers and the dilemma of Monty's contestant in the classic problem: several 'bad choices' have been eliminated and he is left with a choice between a better and worse choice with no way of knowing between them.</p> <h3>So???</h3> <blockquote> <p><strong>Question:</strong> The solution to the monty hall problem is that it is, in fact, better to switch- does the same apply here? Does this depend upon the money in the boxes? 
Should every player opt for 'switch', cutting the 10 minutes of agonising away???</p> </blockquote>
<blockquote> <p>I would certainly plump for saying they were opened at random</p> </blockquote> <p>Then no, the Monty Hall solution doesn't apply. The whole point is that the door isn't <em>randomly</em> opened, it's always a goat.</p> <p>An easy way of seeing this is imagining there are 100 doors, with 99 goats. If, after you pick a door, the host always opens 98 doors of goats, then switching is very intuitively favorable. However, if he had just opened 98 doors at random, then most of the time (98 out of 100) he would open the door with the car behind it; and even on the rare occasions he didn't, you still wouldn't be any better off switching than staying.</p> <p>See also <a href="https://math.stackexchange.com/questions/15055/in-a-family-with-two-children-what-are-the-chances-if-one-of-the-children-is-a/15085#15085">this answer</a>, in which I try to intuitively explain probability fallacies.</p>
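<p>The difference between the two hosts is easy to verify by simulation. The sketch below (illustrative, not from the original answer) conditions the random-host case on the car not having been revealed, matching the situation described; Monty's tie-break when the player's first pick is the car is made deterministically, which does not affect the win rates.</p>

```python
import random

def simulate(trials=100_000, informed_host=True):
    """Returns (win rate if you stay, win rate if you switch), conditioned on
    the opened door not revealing the car (spoiled games are discarded)."""
    stay = switch = valid = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        if informed_host:
            # Monty always opens a goat door that isn't the player's pick.
            opened = next(d for d in range(3) if d != pick and d != car)
        else:
            # A random door (other than the pick) is opened; may reveal the car.
            opened = random.choice([d for d in range(3) if d != pick])
            if opened == car:
                continue  # game spoiled; discard
        valid += 1
        other = next(d for d in range(3) if d != pick and d != opened)
        stay += (pick == car)
        switch += (other == car)
    return stay / valid, switch / valid

random.seed(0)
print(simulate(informed_host=True))   # roughly (1/3, 2/3): switching helps
print(simulate(informed_host=False))  # roughly (1/2, 1/2): it doesn't
```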
<p>This doesn't have the key features of the Monty Hall problem:</p> <ol> <li>The contestant makes a choice</li> <li>The contestant is given information about other choices they could have made</li> <li>The contestant can change their choice based on this new information</li> </ol> <p>Without being given extra information, there is no point in changing your choice.</p>
geometry
<p>When I came across the Cauchy-Schwarz inequality the other day, I found it really weird that this was its own thing, and it had lines upon lines of <a href="https://www.khanacademy.org/math/linear-algebra/vectors_and_spaces/dot_cross_products/v/proof-of-the-cauchy-schwarz-inequality">proof</a>.</p> <p>I've always thought the geometric definition of dot multiplication: $$|{\bf a }||{\bf b }|\cos \theta$$ is equivalent to the other, algebraic definition: $$a_1\cdot b_1+a_2\cdot b_2+\cdots+a_n\cdot b_n$$ And since the inequality is directly implied by the geometric definition (the fact that $\cos(\theta)$ is $1$ only when $\bf a$ and $\bf b$ are collinear), then shouldn't the Cauchy-Schwarz inequality be the world's most obvious and almost-no-proof-needed thing?</p> <p>Can someone correct me on where my thought process went wrong?</p>
<p><em>Side note: it's actually the Cauchy-Schwarz-Bunyakovsky inequality, and don't let anyone tell you otherwise.</em></p> <p>The problem with using the geometric definition is that you have to define what an angle is. Sure, in three dimensional space, you have pretty clear ideas about what an angle is, but what do you take as $\theta$ in your equation when $i$ and $j$ are $10$ dimensional vectors? Or infinitely-dimensional vectors? What if $i$ and $j$ are polynomials?</p> <p>The Cauchy-Schwarz inequality tells you that <strong>anytime</strong> you have a vector space and an inner product defined on it, you can be sure that for any two vectors $u,v$ in your space, it is true that $\left|\langle u,v\rangle\right| \leq \|u\|\|v\|$.</p> <p>Not all vector spaces are simple $\mathbb R^n$ businesses, either. You have the vector space of all continuous functions on $[0,1]$, for example. You can define the inner product as</p> <p>$$\langle f,g\rangle=\int_0^1 f(x)g(x)dx$$</p> <p>and use Cauchy-Schwarz to prove that for any pair $f,g$, you have $$\left|\int_{0}^1f(x)g(x)dx\right| \leq \sqrt{\int_0^1 f^2(x)dx\int_0^1g^2(x)dx}$$</p> <p>which is not a trivial inequality.</p>
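<p>For polynomials on $[0,1]$ the integrals above are exact rational numbers, so the inequality can be checked with no floating point at all. A small sketch with $f(x)=x$ and $g(x)=x^2$ (my choice of example, not from the answer):</p>

```python
from fractions import Fraction

# Exact check of <f,g>^2 <= <f,f><g,g> on [0,1] for f(x) = x, g(x) = x^2,
# using the fact that the integral of x^m over [0,1] is 1/(m+1).
def mono_int(m):
    return Fraction(1, m + 1)

fg = mono_int(1 + 2)      # <f,g> = integral of x * x^2 = 1/4
ff = mono_int(1 + 1)      # <f,f> = integral of x^2     = 1/3
gg = mono_int(2 + 2)      # <g,g> = integral of x^4     = 1/5

print(fg ** 2, "<=", ff * gg)   # 1/16 <= 1/15
print(fg ** 2 <= ff * gg)       # True
```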
<ol> <li><p>The inequality is ubiquitous, so some name is needed.</p></li> <li><p>As there is no cosine in the statement of the inequality, it cannot be called "cosine inequality" or anything like that.</p></li> <li><p>The geometric interpretation with cosines only works for finite-dimensional real Euclidean space, but the inequality holds and is used more generally than that. That is Schwarz' contribution.</p></li> <li><p>Schwarz founded the field of <em>functional analysis</em> (infinite-dimensional metrized linear algebra) with his proof of the inequality. That is important enough to warrant a name. In terms of consequences per line of proof it is one of the greatest arguments of all time.</p></li> <li><p>The Schwarz proof was part of the historical realization that Euclidean geometry, with its mysterious angle measure that seems to depend on notions of arc-length from calculus, <em>is</em> the theory of a vector space equipped with a quadratic form. That is a major shift in viewpoint. </p></li> <li><p>Stating the inequality in terms of cosines assumes that the inner product restricts to the standard Euclidean one on the 2 (or fewer) dimensional subspace spanned by the two vectors, and that you have proved that the inequality holds for standard Euclidean space of 2 dimensions or less. How do you know those things are correct without a much longer argument? That argument will, probably, include somewhere a proof of the Cauchy-Schwarz inequality, maybe written for 2-dimensions but working for the whole $n$-dimensional space, so it might as well be stated as a direct proof for $n$ dimensions. Which is what Cauchy and Schwarz did.</p></li> </ol>
matrices
<blockquote> <p>Let $A,B\in M_{n}(\mathbb{Q})$. If $(A-B)^2=AB$, prove that $\det(AB-BA)=0$.</p> </blockquote> <p>I considered the function $f:\mathbb{Q}\rightarrow \mathbb{Q}$, $f(x)=\det(A^2+B^2-BA-xAB)$ and I obtained that: $$f(0)=\det(A^2+B^2-BA)=\det(2AB)=2^n\det(AB)$$ $$f(1)=\det(A^2+B^2-BA-AB)=\det((A-B)^2)=\det(AB)$$ $$f(2)=\det(A^2+B^2-BA-2AB)=\det((A-B)^2-AB)=\det(AB-AB)=0$$</p> <p>I don't have any other idea. </p>
<p>The idea is to use <span class="math-container">$(-3\pm\sqrt 5)/2$</span>. We begin with</p> <p><span class="math-container">$$\begin{align} (A+xB)(A+x'B) &amp;= A^2 + x BA + x' AB + xx' B^2 \\ &amp;= A^2 + (x' + x) AB+x(BA-AB)+xx'B^2.\end{align} $$</span></p> <p>Now we find numbers <span class="math-container">$x$</span>, <span class="math-container">$x'$</span> such that <span class="math-container">$x'+x = -3$</span>, <span class="math-container">$xx'=1$</span>. These numbers are the roots of the quadratic equation <span class="math-container">$$ \lambda^2 +3\lambda +1 = 0. $$</span> Thus, we have <span class="math-container">$x = \frac{-3+\sqrt 5}2$</span> and <span class="math-container">$x'= \frac{-3-\sqrt 5}2$</span>. </p> <p>With these, we have <span class="math-container">$$\begin{align} (A+xB)(A+x'B) &amp;= A^2 -3AB + B^2 + x(BA-AB) \\ &amp;= BA-AB + x(BA-AB) = (BA-AB)(1+x).\end{align} $$</span> We now have that <span class="math-container">$$ \det(A+xB)(A+x'B) =q \in \mathbb{Q}, $$</span> and <span class="math-container">$$ q=(1+x)^n \det(BA-AB). $$</span> Then it follows by <span class="math-container">$A, B\in M_n(\mathbb{Q})$</span> and <span class="math-container">$(1+x)^n\in\mathbb{R} \backslash \mathbb{Q}$</span> that <span class="math-container">$\det(BA-AB)=0$</span>. </p> <hr> <p><strong>An example with <span class="math-container">$AB\neq BA$</span></strong></p> <p>For the example that @Jonas Meyer requested, here it is: <span class="math-container">$$ A=\begin{pmatrix} 1 &amp; 1 \\ 0 &amp; \frac{-3+\sqrt 5}2\end{pmatrix}, \ \ B=\begin{pmatrix} \frac{3+\sqrt 5}2 &amp; 1 \\ 0 &amp; -1\end{pmatrix}. $$</span> Then <span class="math-container">$$ A-B= \begin{pmatrix} \frac{-1-\sqrt 5}2 &amp; 0 \\ 0 &amp; \frac{-1+\sqrt 5}2\end{pmatrix},$$</span> <span class="math-container">$$ (A-B)^2 = \begin{pmatrix} \frac{3+\sqrt 5}2 &amp; 0 \\ 0 &amp; \frac{3-\sqrt 5}2\end{pmatrix} = AB. 
$$</span> But, <span class="math-container">$$ BA =\begin{pmatrix} \frac{3+\sqrt 5}2 &amp; \sqrt 5 \\ 0 &amp; \frac{3-\sqrt 5}2\end{pmatrix} \neq AB. $$</span></p>
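<p>The example can be checked numerically. The sketch below (not part of the original answer) verifies $(A-B)^2=AB$, $AB\neq BA$ and $\det(AB-BA)=0$ in floating point; note it checks the identities only, not the rationality argument of the proof, since these particular matrices have irrational entries.</p>

```python
import math

s5 = math.sqrt(5)
A = [[1, 1], [0, (-3 + s5) / 2]]
B = [[(3 + s5) / 2, 1], [0, -1]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

AmB = sub(A, B)
lhs = mul(AmB, AmB)          # (A - B)^2
AB, BA = mul(A, B), mul(B, A)

# (A - B)^2 == AB, up to floating-point error:
print(all(abs(lhs[i][j] - AB[i][j]) < 1e-9 for i in range(2) for j in range(2)))

C = sub(AB, BA)              # AB - BA is nonzero...
print(any(abs(C[i][j]) > 1e-6 for i in range(2) for j in range(2)))
# ...but singular, as the theorem demands:
det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
print(abs(det) < 1e-9)
```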
<p>The statement is true without the assumption that $A,B$ have rational entries. As in i707107's answer, let $$ x=\frac{-3-\sqrt{5}}2,\,x'=\frac{-3+\sqrt{5}}2 $$ so that $x+x'=-3$, $xx'=1$. Let $X=A+xB$, $Y=A+x'B$. Then $$\begin{eqnarray*} (1+x')XY-(1+x)YX &amp;=&amp;(x'-x)(A^2+xx'B^2)+(x'(1+x')-x(1+x))AB\\ &amp;&amp;{}+(x(1+x')-x'(1+x))BA\\ &amp;=&amp;(x'-x)\left[A^2+B^2+(1+x'+x)AB-BA\right]\\ &amp;=&amp;(x'-x)\left[(A-B)^2-AB\right]\\ &amp;=&amp;0. \end{eqnarray*}$$ Thus $XY=kYX$ where $$ k=\frac{1+x}{1+x'} $$ Note that $|k|&gt;1$. Pick an eigenvector $v$ for $X$ whose eigenvalue $\lambda$ has maximal magnitude. Then $$ X(Yv)=kYXv=(k\lambda)(Yv). $$ If $\lambda=0$ then $YXv=XYv=0$. If $\lambda\neq0$ then $|k\lambda|&gt;|\lambda|$, so by assumption $k\lambda$ can't be an eigenvalue of $X$. This implies $Yv=0$, so again $XYv=0$ and $YXv=\lambda Yv=0$. In either case we have $(XY-YX)v=0$, so $XY-YX$ is singular. Finally since $$ XY-YX=(x'-x)(AB-BA), $$ we conclude $AB-BA$ is singular.</p> <hr> <p>To give a bit more insight, suppose we started with an arbitrary homogeneous degree 2 constraint on $A,B$: $$ c_2A^2+c_1AB+c_1'BA+c_0B^2=0. $$ If we replace $A,B$ by commuting variables $a,b$, the corresponding polynomial would factor over $\mathbb C$: $$ c_2a^2+(c_1+c_1')ab+c_0b^2=(\alpha a+\beta b)(\gamma a+\delta b). $$ Let $X=\alpha A+\beta B$ and $Y=\gamma A+\delta B$. If $A$ and $B$ commuted we'd have $XY=0$, but instead we get $$ XY=(\alpha\delta-c_1)[A,B] $$ where $[A,B]=AB-BA$ is the Lie bracket. Note that $[X,Y]=(\alpha\delta-\beta\gamma)[A,B]$, so $$ (\alpha\delta-\beta\gamma)XY=(\alpha\delta-c_1)[X,Y]. $$ Unless a coefficient happens to vanish, this gives $XY=kYX$ for some $k$. When $k$ is not a root of unity this is quite a restrictive constraint (eg $XY$ must be singular).</p>
probability
<p>Both seem to result in one of k different separated outcomes, and Wikipedia says these are often conflated. Despite reading the explanation of the difference on the article about <a href="http://en.wikipedia.org/wiki/Multinomial_distribution">multinomial distribution</a>, I still have trouble understanding what the difference really is. </p>
<p>The multinomial distribution is when there are <b>multiple</b> identical independent trials where each trial has $k$ possible outcomes.</p> <p>The categorical distribution is when there is <b>only one</b> such trial.</p>
<p>Think of it as the proportion:</p> <p>Bernoulli : Binomial :: Categorical : Multinomial</p> <p>Just as the Bernoulli distribution describes a single binary trial while the Binomial describes N independent such trials, the Categorical distribution describes a single trial with k possible outcomes while the Multinomial describes N independent such trials.</p> <p>Thus: Categorical(Pk) = Multinomial(Pk, N = 1).</p>
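<p>The relationship "a categorical draw is one multinomial trial" can be sketched in a few lines of standard-library Python (the helper names below are mine):</p>

```python
import random
from collections import Counter

p = [0.2, 0.3, 0.5]   # probabilities over k = 3 outcomes

def categorical(p):
    """One categorical draw: returns the index of the outcome."""
    return random.choices(range(len(p)), weights=p, k=1)[0]

def multinomial(p, n):
    """n independent categorical trials, returned as counts per outcome."""
    counts = Counter(categorical(p) for _ in range(n))
    return [counts[i] for i in range(len(p))]

random.seed(0)
print(multinomial(p, 1))    # a one-hot count vector, such as [0, 0, 1]
print(multinomial(p, 10))   # counts summing to 10
```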
differentiation
<p>Compute the taylor series of <span class="math-container">$\ln(1+x)$</span></p> <p>I've first computed derivatives (up to the 4th) of ln(1+x)<br></p> <p><span class="math-container">$f^{'}(x)$</span> = <span class="math-container">$\frac{1}{1+x}$</span> <br> <span class="math-container">$f^{''}(x) = \frac{-1}{(1+x)^2}$</span><br> <span class="math-container">$f^{'''}(x) = \frac{2}{(1+x)^3}$</span><br> <span class="math-container">$f^{''''}(x) = \frac{-6}{(1+x)^4}$</span><br> Therefore the series:<br> <span class="math-container">$\ln(1+x) = f(a) + \frac{1}{1+a}\frac{x-a}{1!} - \frac{1}{(1+a)^2}\frac{(x-a)^2}{2!} + \frac{2}{(1+a)^3}\frac{(x-a)^3}{3!} - \frac{6}{(1+a)^4}\frac{(x-a)^4}{4!} + ...$</span></p> <p>But this doesn't seem to be correct. Can anyone please explain why this doesn't work?</p> <p>The supposed correct answers are: <span class="math-container">$$\ln(1+x) = \int \left(\frac{1}{1+x}\right)dx$$</span> <span class="math-container">$$\ln(1+x) = \sum_{k=0}^{\infty} \int (-x)^k dx$$</span></p>
<p>You got the general expansion about <span class="math-container">$x=a$</span>. Here we are intended to take <span class="math-container">$a=0$</span>. That is, we are finding the Maclaurin series of <span class="math-container">$\ln(1+x)$</span>. That will simplify your expression considerably. Note also that <span class="math-container">$\frac{(n-1)!}{n!}=\frac{1}{n}$</span>.</p> <p>The approach in the suggested solution also works. We note that <span class="math-container">$$\frac{1}{1+t}=1-t+t^2-t^3+\cdots\tag{1}$$</span> if <span class="math-container">$|t|\lt 1$</span> (infinite geometric series). Then we note that <span class="math-container">$$\ln(1+x)=\int_0^x \frac{1}{1+t}\,dt.$$</span> Then we integrate the right-hand side of (1) term by term. We get <span class="math-container">$$\ln(1+x) = x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\cdots,$$</span> precisely the same thing as what one gets by putting <span class="math-container">$a=0$</span> in your expression.</p>
<p>Note that $$\frac{1}{1+x}=\sum_{n \ge 0} (-1)^nx^n$$ Integrating both sides gives you \begin{align} \ln(1+x) &amp;=\sum_{n \ge 0}\frac{(-1)^nx^{n+1}}{n+1}\\ &amp;=x-\frac{x^2}{2}+\frac{x^3}{3}-... \end{align} Alternatively, \begin{align} &amp;f^{(1)}(x)=(1+x)^{-1} &amp;\implies \ f^{(1)}(0)=1\\ &amp;f^{(2)}(x)=-(1+x)^{-2} &amp;\implies f^{(2)}(0)=-1\\ &amp;f^{(3)}(x)=2(1+x)^{-3} &amp;\implies \ f^{(3)}(0)=2\\ &amp;f^{(4)}(x)=-6(1+x)^{-4} &amp;\implies \ f^{(4)}(0)=-6\\ \end{align} We deduce that \begin{align} f^{(n)}(0)=(-1)^{n-1}(n-1)! \end{align} Hence, \begin{align} \ln(1+x) &amp;=\sum_{n \ge 1}\frac{f^{(n)}(0)}{n!}x^n\\ &amp;=\sum_{n \ge 1}\frac{(-1)^{n-1}(n-1)!}{n!}x^n\\ &amp;=\sum_{n \ge 1}\frac{(-1)^{n-1}}{n}x^n\\ &amp;=\sum_{n \ge 0}\frac{(-1)^{n}}{n+1}x^{n+1}\\ &amp;=x-\frac{x^2}{2}+\frac{x^3}{3}-... \end{align} <strong>Edit:</strong> To derive a closed form for the geometric series, let \begin{align} S&amp;=1-x+x^2-x^3+...\\ xS&amp;=x-x^2+x^3-x^4...\\ S+xS&amp;=1\\ S&amp;=\frac{1}{1+x}\\ \end{align} To prove in the other direction, use the binomial theorem or simply compute the series about $0$ manually.</p>
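<p>Both derivations give the same series, and its partial sums are easy to sanity-check against the built-in logarithm for $|x|&lt;1$ (a quick sketch, not from either answer):</p>

```python
import math

def ln1p_series(x, terms):
    """Partial sum of ln(1+x) = x - x^2/2 + x^3/3 - ... (valid for |x| < 1)."""
    return sum((-1) ** (n - 1) * x ** n / n for n in range(1, terms + 1))

x = 0.5
for terms in (2, 5, 20):
    print(terms, ln1p_series(x, terms))   # approaches ln(1.5) = 0.4054...
print(math.log(1 + x))
```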
logic
<p>Do I get this right? Gödel's incompleteness theorem applies to first order logic as it applies to second order and any higher order logic. So there is essentially <em>no way</em> of pinning down the natural numbers that we think of in everyday life?</p> <ul> <li>First order logic fails to be categorical, i.e. there are always non-standard models.</li> <li>Second order logic is categorical here, but does not allow us to prove all its true statements?</li> <li>Defining the numbers from set theory (e.g. ZFC) suffers from the same problem as all first order theories, i.e. there are non-standard models of ZFC which induce non-standard natural numbers?</li> </ul> <p>How do we even <em>know</em> what natural numbers <em>are</em>, if we have no way to pin down a definition? What are the <em>standard natural numbers</em>? Or do we accept that second-order logic gives us this definition and we just cannot prove all there is?</p> <hr> <p><strong>LATER:</strong> </p> <p>I think I can summarize my question as follows:</p> <blockquote> <p>How do the mathematicians that write <em>standard natural numbers</em> have formal consensus on what they are talking about?</p> </blockquote>
<blockquote> <p>How do the mathematicians that write standard natural numbers have formal consensus on what they are talking about?</p> </blockquote> <p>Mathematicians work in a meta-system (which is usually ZFC unless otherwise stated). ZFC has a collection of natural numbers that is automagically provided for by the axiom of infinity. One can easily define over ZFC the language of arithmetic (as in any standard logic textbook), and also that the standard natural numbers are terms of the form "$0$" or "$1+\cdots+1$" where the number of "$1$"s is a natural number. To disambiguate these two 'kinds' of natural numbers, some authors call the standard natural numbers "standard numerals".</p> <blockquote> <p>Do I get this right? Gödel's incompleteness theorem applies to first order logic as it applies to second order and any higher order logic. So there is essentially no way pinning down the natural numbers that we think of in everyday life?</p> </blockquote> <p>Yes. See <a href="http://math.stackexchange.com/a/1895288/21820">this post</a> for the generalization and proof of the incompleteness theorems that applies to every conceivable formal system, even if it is totally different from first-order or higher-order logic.</p> <blockquote> <p>First order logic fails to be categorical, i.e. there are always non-standard models.</p> </blockquote> <p>Yes, so first-order PA does not pin down the natural numbers.</p> <blockquote> <p>Second order logic is categorical here, but does not allow us to prove all its true statements?</p> </blockquote> <p>Yes; there is no (computably) effective deductive system for second-order logic, so we cannot use second-order PA as a practical formal system. In the first place the second-order induction axiom is useless if you do not add some set-existence axioms. 
In any case, any effective formal system that describes the natural numbers will be incomplete, by the incompleteness theorem.</p> <p>So although second-order PA is categorical (from the perspective of a strong enough meta-system), the categoricity does not solve the philosophical problem at all since such a meta-system is itself necessarily incomplete and hence the categoricity of second-order PA only ensures uniqueness of the natural numbers within each model of the meta-system, and cannot establish any kind of absolute categoricity.</p> <blockquote> <p>Defining the numbers from set theory (e.g. ZFC) suffers from the same problem as all first order theories, i.e. there are non-standard models of ZFC which induce non-standard natural numbers?</p> </blockquote> <p>Exactly; see previous point.</p> <blockquote> <p>How do we even know what natural numbers are, if we have no way to pin down a definition.</p> </blockquote> <p>We can only describe what we would like them to be like, and our description must be incomplete because we cannot convey any non-effective description. PA is one (incomplete) characterization. ACA is another. ZFC's axiom of infinity is a much stronger characterization. But there will never be an absolute categorical characterization.</p> <p>You might hear a common attempt at defining natural numbers as those that can be obtained from 0 by adding 1 repeatedly. This is <strong>circular</strong>, because "repeatedly" cannot be defined without essentially knowing natural numbers. We are stuck; we <strong>must already know what natural numbers are</strong> before we can talk about iteration. This is why every useful foundational system for mathematics already has something inbuilt to provide such a collection. In the case of ZFC it is the axiom of infinity.</p> <blockquote> <p>What are the standard natural numbers?</p> </blockquote> <p>Good question. 
But this is highly philosophical, so I'll answer it later.</p> <blockquote> <p>Or do we accept that second-order logic gives us this definition and we just cannot prove all there is?</p> </blockquote> <p>No, second-order PA does not actually help us define natural numbers. The second-order induction axiom asserts "For every <strong>set</strong> of natural numbers, ...", but leaves undefined what "set" means. And it cannot possibly define "set" because it is <a href="http://math.stackexchange.com/a/1334753/21820">circular</a> as usual, and it does not help that the circularity is tied up with natural numbers...</p> <hr> <p>Now for the philosophical part.</p> <p>We have seen that mathematically we cannot uniquely define the natural numbers. Worse still, there does not seem to be ontological reason for believing in the existence of a perfect physical representation of any collection that satisfies PA under a suitable interpretation.</p> <p>Even if we discard the arithmetical properties of natural numbers, there is not even a complete theory of finite strings, in the sense that <a href="https://www.impan.pl/~kz/files/AGKZ_Con.pdf" rel="noreferrer">TC (the theory of concatenation)</a> is essentially incomplete, despite having just the concatenation operation and no arithmetic operations, so we cannot pin down even the finite strings!</p> <p>So we do not even have hope of giving a description that <strong>uniquely identifies</strong> the collection of finite strings, which naturally precludes doing the same for natural numbers. This fact holds under very weak assumptions, such as those required to prove Godel's incompleteness theorems. If one rejects those... Well one reason to reject them is that there is no apparent physical model of PA...</p> <p>As far as we know in modern physics, one cannot store finite strings in any physical medium with high fidelity beyond a certain length, for which I can safely give an upper bound of $2^{10000}$ bits. 
This is not only because <strong>the observable universe is finite</strong>, but also because a physical storage device with extremely large capacity (on the order of the size of the observable universe) will degrade faster than you can use it.</p> <p>So description aside, we do not have any reason to even believe that finite strings have actual physical representation in the real world. This problem cannot be escaped by using conceptual strings, such as iterations of some particular process, because we have no basis to assume the existence of a process that can be iterated indefinitely, pretty much due to the finiteness of the observable universe, again.</p> <p>Therefore we are stuck with the <strong>physical inability</strong> to even generate all finite strings, or to generate all natural numbers in a physical representation, even if we define them using circular natural-language definitions!</p> <hr> <p>Now I am not saying that there is absolutely no real-world relevance of arithmetical facts.</p> <p>Despite the fact that PA (Peano arithmetic) is based on the assumption of an infinite collection of natural numbers, which as explained above cannot have a perfect physical representation, PA still generates theorems that seem to be <strong>true at least at human scales</strong>. My favourite example is HTTPS, whose decryption process relies crucially on the correctness of Fermat's little theorem applied to natural numbers with length on the order of thousands of bits. So there is some truth in PA at human scales.</p> <p>This may even suggest one way to <strong>escape</strong> the incompleteness theorems, because they only apply to deterministic formal systems that roughly speaking have certain unbounded closure properties (see <a href="https://pdfs.semanticscholar.org/c278/147b7a68385836a90939a175a9959cabbf0b.pdf" rel="noreferrer">this paper about self-verifying theories</a> for sharp results about the incompleteness phenomenon). 
Perhaps the real world may even be governed by some kind of system that is syntactically complete, since it has physical 'fuzziness' due to quantum mechanics or spacetime limitations, but in any case such systems will not have arithmetic in full as we know it!</p>
<p>Yes, as you phrased it in a comment:</p> <blockquote> <p>So even the academic usage of the term standard natural number trusts in our intuitive understanding from preschool?</p> </blockquote> <p>That's exactly how it is.</p> <p>We believe, based on experience, that the intuitive concept we learned in preschool has meaning, and mathematics is an endeavor to <em>explore <strong>that</strong> concept</em> with more powerful tools than we had available in preschool -- not to construct it from scratch.</p> <p>There are the Peano axioms either in their first- or second-order guise, of course. However, even if Gödel hadn't sunk the hope that they would tell us <em>all</em> truth about our intuitive natural numbers, they would still just be ink. The fundamental reason why we <em>care</em> about those axioms is that we believe they capture truth (only some of the truth, but truth nonetheless) about our intuitive conception of number.</p> <p>Indeed it is hard to imagine how one <em>could</em> construct the natural numbers from scratch. To demand that, one would need to build on something else that we <em>already</em> know -- but it can't be turtles all the way down, and somewhere the buck has to stop. We can kick it around a bit and say, for example, that our fundamental concept is not numbers, but the finite strings of symbols that make up formal proofs -- but that's not really progress, because the natural numbers are implicit even there: If we can speak of strings, then we can speak of tally marks, and there's the natural numbers already!</p>
geometry
<p>The derivative of the volume of a sphere, with respect to its radius, gives its surface area. I understand that this is because given an infinitesimal increase in radius, the change in volume will only occur at the surface. Similarly, the derivative of a circle's area with respect to its radius gives its circumference for similar reasons.</p> <p>A similar logic can be applied to other simple geometric shapes and the pattern holds. For a cylinder, it becomes more interesting. Its volume is <span class="math-container">$\pi r^2h$</span>, with the height <span class="math-container">$h$</span> and radius <span class="math-container">$r$</span> being independent.</p> <p>Incremental increases in a cylinder's height will &quot;add a circle&quot; to the bottom of the cylinder, so it makes sense to me that the derivative of volume with respect to height is the area of that circle <span class="math-container">$\pi r^2$</span>. Incremental changes to its radius will add a &quot;tube&quot; around the cylinder, whose area is <span class="math-container">$2\pi rh$</span>, the derivative of volume with respect to radius.</p> <p>I thought I had found a nice heuristic reason for why the derivatives of volumes give surface areas, but it breaks down for the cone. The volume is <span class="math-container">$\frac{\pi r^2 h}{3}$</span>. 
Assuming the angle <span class="math-container">$\theta$</span> of the cone remains constant, <span class="math-container">$h$</span> and <span class="math-container">$r$</span> are dependent (<span class="math-container">$r=h\tan\theta$</span>).</p> <p>Now, the derivative of the cone's volume with respect to height gives the area of the base of the cone; this intuitively makes sense. But the derivative with respect to radius doesn't seem to match any geometric quantity:</p> <p><span class="math-container">$V=\dfrac{\pi r^3}{3\tan\theta}~$</span> and <span class="math-container">$~\dfrac{\text{d}V}{\text{d}r}=\dfrac{\pi r^2}{\tan\theta}$</span></p> <p>Similarly, I'd hoped to find some way to derive the surface area of the cone without the base, <span class="math-container">$\pi rl$</span>, where <span class="math-container">$l$</span> is the distance from the point of the cone to the edge of the circle at its base. However, the derivative with respect to <span class="math-container">$l$</span> also doesn't seem to have any meaningful geometric significance.</p> <p>What's going on here? Is it an issue of not defining the shape rigorously enough? Do I need to parametrise the volume rigorously in some way?</p>
<p>If you continuously enlarge a solid cone by adding material to the base, then every inch of added height corresponds to an inch-thick layer added to the circular base, so the rate of change of volume is equal to the area of the base times the rate of change of the height. That is, <span class="math-container">$dV/dh=A_\text{base}=\pi r^2$</span>.</p> <p>However, if you enlarge the cone by adding material to the &quot;cap&quot;, then adding an inch of height (or radius) does <em>not</em> correspond to exactly an inch-thick layer added to the cap. The added thickness is actually <span class="math-container">$\Delta t=\Delta r \cos\theta=\Delta h \sin\theta$</span>, which you can see by drawing a couple of triangles (with a segment of length <span class="math-container">$t$</span> perpendicular to the surface of the cone and with an endpoint at the center of the base). The rate of change of volume is the area of the cap times the rate at which we add thickness to the cap: <span class="math-container">$dV=A_\text{cap}dt$</span>, so we have <span class="math-container">$A_\text{cap}=\frac{dV}{dt}=\frac1{\cos\theta} \frac{dV}{dr}=\frac{\pi rh}{\cos\theta}=\pi rl$</span>, where <span class="math-container">$l$</span> is the slant height.</p>
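For what it's worth, the chain-rule computation in this answer can be checked symbolically. Here is a quick sketch in Python (sympy), with variable names of my own choosing:

```python
# Verify the claim above: dV/dt, where the perpendicular thickness t
# satisfies dt = dr*cos(theta), equals the lateral ("cap") area pi*r*l.
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

h = r / sp.tan(theta)            # height of the cone, from r = h*tan(theta)
V = sp.pi * r**2 * h / 3         # volume of the cone
l = r / sp.sin(theta)            # slant height, since sin(theta) = r/l

dV_dr = sp.diff(V, r)            # = pi*r^2/tan(theta), as in the question
A_cap = dV_dr / sp.cos(theta)    # chain rule: dV/dt = (dV/dr)*(dr/dt)

assert sp.simplify(A_cap - sp.pi * r * l) == 0   # A_cap = pi*r*l
```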
<p>My intuition tells me that you should be able to find such a formula by choosing the right parameterization. A couple of motivating examples:</p> <h4>First example: a disc</h4> <p>Consider a disk of radius <em>r</em>. The area in terms of the radius is <span class="math-container">$\pi r^2$</span> and in terms of the diameter <span class="math-container">$\pi (\frac{d}{2})^2$</span>. If we take the derivatives of these expressions with respect to each parameter we find:</p> <ul> <li><span class="math-container">$2 \pi r$</span> for the radius, which is the circumference, but</li> <li><span class="math-container">$\frac{1}{2} \pi d$</span> for the diameter, which is not the circumference.</li> </ul> <p>The parameterization affects how &quot;fast&quot; the area grows, and varying the diameter makes the area of the disk &quot;grow too slowly&quot; for the change in area to be the circle that surrounds it.</p> <h4>Second example: an equilateral triangle</h4> <p>Consider an equilateral triangle of side <span class="math-container">$s$</span>. The area is <span class="math-container">$A(s)=\frac{\sqrt{3}}{4}s^2$</span> and the perimeter is <span class="math-container">$P(s)=3s$</span>. The derivative of the area with respect to <span class="math-container">$s$</span> is <span class="math-container">$\frac{dA}{ds} = \frac{\sqrt{3}}{2}s$</span>, which is not the perimeter.</p> <p>So let's introduce a new parameter <span class="math-container">$q$</span>: <span class="math-container">$s = kq$</span>, for <span class="math-container">$k$</span> a constant. We have <span class="math-container">$A(q)=\frac{\sqrt{3}}{4}k^2q^2$</span> and <span class="math-container">$P(q)=3kq$</span>. If we set <span class="math-container">$\frac{dA}{dq} = P(q)$</span> we find <span class="math-container">$k=2\sqrt{3}$</span>. This gives a parameter of <span class="math-container">$q = \frac{s}{2\sqrt{3}}$</span>, which happens to be the radius of the inscribed circle of our triangle!</p> <p>So if we parametrize the area of an equilateral triangle by its inradius, then the rate of change of the area is the perimeter.</p> <p>I suspect a similar approach could give you a parametrization with the same property for the cone. As a first step, you could try doing the same for a cone whose base diameter is equal to its slant length and see what you get. Who knows, maybe the parameter you'll find will have an interpretation in terms of the radius of the inscribed sphere.</p>
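A small sanity check of the triangle example, as a sketch using sympy (the symbol names are mine): solve $\frac{dA}{dq} = P(q)$ for $k$ symbolically.

```python
# Solve dA/dq = P(q) for the scale factor k in s = k*q.
import sympy as sp

q, k = sp.symbols('q k', positive=True)

A = sp.sqrt(3) / 4 * (k * q) ** 2   # area of the triangle with s = k*q
P = 3 * k * q                       # perimeter

sols = sp.solve(sp.Eq(sp.diff(A, q), P), k)

# The unique positive solution is k = 2*sqrt(3), i.e. q = s/(2*sqrt(3)),
# the inradius of the triangle.
assert len(sols) == 1
assert sp.simplify(sols[0] - 2 * sp.sqrt(3)) == 0
```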
matrices
<p>Just wanted some input to see if my proof is satisfactory or if it needs some cleaning up.</p> <p>Here is what I have.</p> <hr> <p><strong>Proof</strong></p> <blockquote> <p>Suppose $A$ is a square matrix and invertible and, for the sake of contradiction, let $0$ be an eigenvalue. Consider $(A-\lambda I)\cdot v = 0$ with $\lambda=0$ $$\Rightarrow (A- 0\cdot I)v=0$$<br> $$\Rightarrow(A-0)v=0$$<br> $$\Rightarrow Av=0$$</p> <p>We know $A$ is invertible, and for $Av = 0$ to hold we must have $v = 0$; but an eigenvector $v$ must be non-trivial, which is exactly why $\det(A-\lambda I) = 0$. Here lies our contradiction. Hence, $0$ cannot be an eigenvalue.</p> </blockquote> <p><strong>Revised Proof</strong></p> <blockquote> <p>Suppose $A$ is a square matrix and has an eigenvalue of $0$. For the sake of contradiction, let's assume $A$ is invertible. </p> <p>Consider $Av = \lambda v$; with $\lambda = 0$, this means there exists a non-zero $v$ such that $Av = 0$. This implies $Av = 0v \Rightarrow Av = 0$.</p> <p>For an invertible matrix $A$, $Av = 0$ implies $v = 0$. So, $Av = 0 = A\cdot 0$. Since $v$ cannot be $0$, this means $A$ must not have been one-to-one. Hence, our contradiction: $A$ must not be invertible.</p> </blockquote>
<p>Your proof is correct. In fact, a square matrix $A$ is invertible <strong>if and only if</strong> $0$ is not an eigenvalue of $A$. (You can replace all logical implications in your proof by logical equivalences.)</p> <p>Hope this helps!</p>
<p>This looks okay. Initially, we have that $Av=\lambda v$ for eigenvalues $\lambda$ of $A$. Since $\lambda=0$, we have that $Av=0$. Now assume that $A^{-1}$ exists. </p> <p>Now by multiplying on the left by $A^{-1}$, we get $v=0$. This is a contradiction, since $v$ cannot be the zero vector. So, $A^{-1}$ does not exist. </p>
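A quick numerical illustration of both directions, as a sketch using numpy (the matrix is an arbitrary rank-one example of my own):

```python
import numpy as np

A = np.array([[2.0, 4.0],
              [1.0, 2.0]])     # rank 1, so 0 must be an eigenvalue

eigvals = np.linalg.eigvals(A)
assert np.isclose(min(abs(eigvals)), 0.0)     # 0 is an eigenvalue...
assert np.isclose(np.linalg.det(A), 0.0)      # ...and A is singular

B = A + np.eye(2)              # shifts the eigenvalues to 1 and 5
assert not np.isclose(np.linalg.det(B), 0.0)  # no zero eigenvalue: invertible
```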
linear-algebra
<p>The dot product of vectors $\mathbf{a}$ and $\mathbf{b}$ is defined as: $$\mathbf{a} \cdot \mathbf{b} =\sum_{i=1}^{n}a_{i}b_{i}=a_{1}b_{1}+a_{2}b_{2}+\cdots +a_{n}b_{n}$$</p> <p>What about the quantity? $$\mathbf{a} \star \mathbf{b} = \prod_{i=1}^{n} (a_{i} + b_{i}) = (a_{1} +b_{1})\,(a_{2}+b_{2})\cdots \,(a_{n}+b_{n})$$</p> <p>Does it have a name?</p> <p>"Dot sum" seems largely inappropriate. Come to think of it, I find it interesting that the dot product is named as such, given that it is, after all, a "sum of products" (although I am aware that properties of $\mathbf{a} \cdot{} \mathbf{b}$, in particular distributivity, make it a meaningful name).</p> <p>$\mathbf{a} \star \mathbf{b}$ is commutative and has the following property:</p> <p>$\mathbf{a} \star (\mathbf{b} + \mathbf{c}) = \mathbf{b} \star (\mathbf{a} + \mathbf{c}) = \mathbf{c} \star (\mathbf{a} + \mathbf{b})$</p>
<p>Too long for a comment, but I'll list some properties below, in hopes some idea comes up.</p> <ul> <li>${\bf a}\star {\bf b}={\bf b}\star {\bf a}$;</li> <li>$(c{\bf a})\star (c {\bf b})=c^n ({\bf a}\star {\bf b})$;</li> <li>$({\bf a+b})\star {\bf c} = ({\bf a+c})\star {\bf b} = ({\bf b+c})\star {\bf a}$;</li> <li>${\bf a}\star {\bf a} = 2^n a_1\cdots a_n$;</li> <li>${\bf a}\star {\bf 0} = a_1\cdots a_n$;</li> <li>$(c{\bf a})\star {\bf b} = c^n ({\bf a}\star ({\bf b}/c))$;</li> <li>${\bf a}\star (-{\bf a}) = 0$;</li> <li>${\bf 1}\star {\bf 0} = 1$, where ${\bf 1} = (1,\ldots,1)$;</li> <li>$\sigma({\bf a}) \star \sigma({\bf b}) = {\bf a}\star {\bf b}$, where $\sigma \in S_n$ acts as $\sigma(a_1,\ldots,a_n) \doteq (a_{\sigma(1)},\ldots,a_{\sigma(n)})$.</li> </ul>
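For what it's worth, these identities are easy to spot-check numerically; here is a throwaway sketch in Python (the vectors are chosen arbitrarily):

```python
from math import prod

def star(a, b):
    # the operation in question: product of componentwise sums
    return prod(x + y for x, y in zip(a, b))

add = lambda u, v: [x + y for x, y in zip(u, v)]
scale = lambda t, u: [t * x for x in u]

a, b, c = [1.0, -2.0, 3.0], [4.0, 0.5, -1.0], [2.0, 2.0, 5.0]
n, k = len(a), 3.0

assert star(a, b) == star(b, a)                                 # commutative
assert star(scale(k, a), scale(k, b)) == k**n * star(a, b)      # homogeneity
assert star(add(a, b), c) == star(add(a, c), b) == star(add(b, c), a)
assert star(a, a) == 2**n * prod(a)
assert star(a, [0.0] * n) == prod(a)
assert star(a, scale(-1.0, a)) == 0.0
```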
<p>I don't know if it has a particular name, but it is essentially a peculiar type of convolution. Note that $$ \prod_{i}(a_{i} + b_{i}) = \sum_{X \subseteq [n]} \left( \prod_{i \in X} a_{i} \right) \left( \prod_{i \in X^{c}} b_{i} \right), $$ where $X^{c} = [n] \setminus X$ and $[n] = \{1, 2, \dots, n\}$. In other words, if we define $f_{a}, f_{b}$ via $$ f_{a}(X) = \prod_{i \in X}a_{i}, $$ then $$ a \star b = (f_{a} \ast f_{b})([n]) $$ where $\ast$ denotes the convolution product $$ (f \ast g)(Y) = \sum_{X \subseteq Y} f(X)g(Y \setminus X). $$ To learn more about this, I would recommend reading about multiplicative functions and Moebius inversion in number theory. I don't know if there is a general theory concerning this, but the notion of convolutions comes up in many contexts (see this <a href="https://en.wikipedia.org/wiki/Convolution" rel="noreferrer">wikipedia article</a>, and another on <a href="https://en.wikipedia.org/wiki/Dirichlet_convolution" rel="noreferrer">its role in number theory</a>).</p> <p>Edit: For what it's worth, the operation is not a vector operation in the linear-algebraic sense. That is, it is not preserved under change-of-basis. In fact, it is not even preserved under orthogonal change-of-basis (aka rotation). For example, consider $a = (3,4) \in \mathbb{R}^{2}$. Note that $a \star a = 48$. Then we apply the proper rotation $T$ defined by $T(a) = (5, 0)$. Then we see $T(a) \star T(a) = 0$.</p>
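The subset expansion in the first display is easy to confirm by brute force over all $2^n$ subsets (a small Python sketch, with arbitrary integer vectors):

```python
from math import prod

a = [2, 3, 5]
b = [7, 1, 4]
n = len(a)

lhs = prod(x + y for x, y in zip(a, b))   # prod_i (a_i + b_i) = 9*4*9

# sum over all subsets X of [n], encoded as bit masks
rhs = sum(
    prod(a[i] for i in range(n) if mask & (1 << i))
    * prod(b[i] for i in range(n) if not mask & (1 << i))
    for mask in range(2 ** n)
)

assert lhs == rhs == 324
```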
logic
<p>Popular mathematics folklore provides some simple tools enabling us compactly to describe some truly enormous numbers. For example, the number $10^{100}$ is commonly known as a <a href="http://en.wikipedia.org/wiki/Googol">googol</a>, and a <a href="http://en.wikipedia.org/wiki/Googolplex">googol plex</a> is $10^{10^{100}}$. For any number $x$, we have the common vernacular:</p> <ul> <li>$x$ <em>bang</em> is the factorial number $x!$</li> <li>$x$ <em>plex</em> is the exponential number $10^x$</li> <li>$x$ <em>stack</em> is the number obtained by iterated exponentiation (associated upwards) in a tower of height $x$, also denoted $10\uparrow\uparrow x$, $$10\uparrow\uparrow x = 10^{10^{10^{\cdot^{\cdot^{10}}}}}{\large\rbrace} x\text{ times}.$$</li> </ul> <p>Thus, a googol bang is $(10^{100})!$, and a googol stack is $10\uparrow\uparrow 10^{100}$. The vocabulary enables us to name larger numbers with ease:</p> <ul> <li>googol bang plex stack. (This is the exponential tower $10^{10^{\cdot^{\cdot^{\cdot^{10}}}}}$ of height $10^{(10^{100})!}$)</li> <li>googol stack bang stack bang</li> <li>googol bang bang stack plex stack</li> <li>and so on…</li> </ul> <p>Consider the collection of all numbers that can be named in this scheme, by a term starting with googol and having finitely many adjectival operands: bang, stack, plex, in any finite pattern, repetitions allowed. (For the purposes of this question, let us limit ourselves to these three operations and please accept the base 10 presumption of the stack and plex terminology simply as an artifact of its origin in popular mathematics.)</p> <p>My goal is to sort all such numbers nameable in this vocabulary by size.</p> <p>A few simple observations get us started. Once $x$ is large enough (about 20), then the factors of $x!$ above $10$ compensate for the few below $10$, and so we see that $10^x\lt x!$, or in other words, $x$ plex is less than $x$ bang. 
Similarly, $10\uparrow\uparrow x$ is much larger than $x!$ for large $x$, since $10^y\gt (y+1)y$ for large $y$, and so for large values we have</p> <ul> <li>$x$ plex $\lt$ $x$ bang $\lt$ $x$ stack.</li> </ul> <p>In particular, the order for names having at most one adjective is:</p> <pre><code>googol
googol plex
googol bang
googol stack
</code></pre> <p>And more generally, replacing plex with bang or bang with stack in any of our names results in a strictly (and much) larger number.</p> <p>Continuing, since $x$ stack plex $= (x+1)$ stack, it follows that</p> <ul> <li>$x$ stack plex $\lt x$ plex stack.</li> </ul> <p>Similarly, for large values,</p> <ul> <li>$x$ plex bang $\lt x$ bang plex,</li> </ul> <p>because $(10^x)!\lt (10^x)^{10^x}=10^{x10^x}\lt 10^{x!}$. Also,</p> <ul> <li>$x$ stack bang $\lt x$ plex stack $\lt x$ bang stack,</li> </ul> <p>because $(10\uparrow\uparrow x)!\lt (10\uparrow\uparrow x)^{10\uparrow\uparrow x}\lt 10\uparrow\uparrow 2x\lt 10\uparrow\uparrow 10^x\lt 10\uparrow\uparrow x!$. It also appears to be true for large values that</p> <ul> <li>$x$ bang bang $\lt x$ stack.</li> </ul> <p>Indeed, one may subsume many more iterations of plex and bang into a single stack. Note also for large values that</p> <ul> <li>$x$ bang $\lt x$ plex plex</li> </ul> <p>since $x!\lt x^x$, and this is seen to be less than $10^{10^x}$ by taking logarithms.</p> <p>The observations above enable us to form the following order of all names using at most two adjectives.</p> <pre><code>googol
googol plex
googol bang
googol plex plex
googol plex bang
googol bang plex
googol bang bang
googol stack
googol stack plex
googol stack bang
googol plex stack
googol bang stack
googol stack stack
</code></pre> <p>My request is for any or all of the following:</p> <ol> <li><p>Expand the list above to include numbers named using more than two adjectives. 
(This will not be an end-extension of the current list, since googol plex plex plex and googol bang bang bang will still appear before googol stack.) If people post partial progress, we can assemble them into a master list later.</p></li> <li><p>Provide general comparison criteria that will assist such an on-going effort.</p></li> <li><p>Provide a complete comparison algorithm that works for any two expressions having the same number of adjectives.</p></li> <li><p>Provide a complete comparison algorithm that compares any two expressions.</p></li> </ol> <p>Of course, there is in principle a computable comparison procedure, since we may program a Turing machine to actually compute the two values and compare their size. What is desired, however, is a simple, feasible algorithm. For example, it would seem that we could hope for an algorithm that would compare any two names in polynomial time of the length of the names.</p>
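As a warm-up to the comparison problem, the threshold in the first observation ($x$ plex $\lt$ $x$ bang once $x$ is around 20) can be pinned down numerically; a Python sketch using `math.lgamma`:

```python
import math

def log10_factorial(x):
    # log10(x!) via the log-gamma function: lgamma(x+1) = ln(x!)
    return math.lgamma(x + 1) / math.log(10)

x = 1
while x >= log10_factorial(x):   # i.e. while 10^x >= x!
    x += 1

# 10^24 > 24!  but  10^25 < 25!, so the threshold is x = 25
assert x == 25
```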
<p>OK, let's attempt a sorting of the names having at most three operands. I'll make several observations, and then use them to assemble the order section by section, beginning with the part below googol stack.</p> <ul> <li><p>googol bang bang bang $\lt$ googol stack. It seems clear that we shall be able to iterate bangs many times before exceeding googol stack. Since googol bang bang bang is the largest three-operand name using only plex and bang, this means that all such names will interact only with each other below googol stack.</p></li> <li><p>plex $\lt$ bang. This was established in the question.</p></li> <li><p>plex bang $\lt$ bang plex. This was established in the question, and it allows us to make many comparisons in terms involving only plex and bang, but not quite all of them.</p></li> <li><p>googol bang bang $\lt$ googol plex plex plex. This is because $g!!\lt (g^g)^{g^g}=g^{gg^g}=10^{100\cdot gg^g}$, which is less than $10^{10^{10^g}}$, since $100\cdot gg^g=10^{102+10^{102}}\lt 10^{10^g}$. Since googol bang bang is the largest two-operand name using only plex and bang and googol plex plex plex is the smallest three-operand name, this means that the two-operand names using only plex and bang will all come before all the three-operand names.</p></li> <li><p>googol plex bang bang $\lt$ googol bang plex plex. 
This is because $(10^g)!!\lt ((10^g)^{10^g})!=(10^{g10^g})!=(10^{10^{g+100}})!\lt (10^{10^{g+100}})^{10^{10^{g+100}}}=10^{10^{g+100}10^{10^{g+100}}}= 10^{10^{(g+100)+10^{g+100}}}\lt 10^{10^{g!}}$.</p></li> </ul> <p>Combining the previous observations leads to the following order of the three-operand names below googol stack:</p> <pre><code>googol
googol plex
googol bang
googol plex plex
googol plex bang
googol bang plex
googol bang bang
googol plex plex plex
googol plex plex bang
googol plex bang plex
googol plex bang bang
googol bang plex plex
googol bang plex bang
googol bang bang plex
googol bang bang bang
googol stack
</code></pre> <p>Perhaps someone can generalize the methods into a general comparison algorithm for all smallish terms using only plex and bang? This is related to the topic of the Velleman article linked to by J. M. in the comments.</p> <p>Meanwhile, let us now turn to the interaction with stack. Using the observations of the two-operand case in the question, we may continue as follows:</p> <pre><code>googol stack plex
googol stack bang
googol stack plex plex
googol stack plex bang
googol stack bang plex
googol stack bang bang
</code></pre> <p>Now we use the following fact:</p> <ul> <li>stack bang bang $\lt$ plex stack. This is established as in the question, since $(10\uparrow\uparrow x)!!\lt (10\uparrow\uparrow x)^{10\uparrow\uparrow x}!\lt$ $(10\uparrow\uparrow x)^{(10\uparrow\uparrow x)(10\uparrow\uparrow x)^{10\uparrow\uparrow x}}=$ $(10\uparrow\uparrow x)^{(10\uparrow\uparrow x)^{1+10\uparrow\uparrow x}}\lt 10\uparrow\uparrow 4x\lt 10\uparrow\uparrow 10^x$. In fact, it seems that we will be able to absorb many more iterated bangs after stack into plex stack.</li> </ul> <p>The order therefore continues with:</p> <pre><code>googol plex stack
googol plex stack plex
googol plex stack bang
</code></pre> <ul> <li>plex stack bang $\lt$ bang stack. 
To see this, observe that $(10\uparrow\uparrow 10^x)!\lt (10\uparrow\uparrow 10^x)^{10\uparrow\uparrow 10^x}\lt 10\uparrow\uparrow 2\cdot10^x$, since associating upwards is greater, and this is less than $10\uparrow\uparrow x!$. Again, we will be able to absorb many operands after plex stack into bang stack.</li> </ul> <p>The order therefore continues with:</p> <pre><code>googol bang stack
googol bang stack plex
googol bang stack bang
</code></pre> <ul> <li>bang stack bang $\lt$ plex plex stack. This is because $(10\uparrow\uparrow x!)!\lt (10\uparrow\uparrow x!)^{10\uparrow\uparrow x!}\lt 10\uparrow\uparrow 2x!\lt 10\uparrow\uparrow 10^{10^x}$.</li> </ul> <p>Thus, the order continues with:</p> <pre><code>googol plex plex stack
googol plex bang stack
googol bang plex stack
googol bang bang stack
</code></pre> <p>This last item is clearly less than googol stack stack, and so, using all the pairwise operations we already know, we continue with:</p> <pre><code>googol stack stack
googol stack stack plex
googol stack stack bang
googol stack plex stack
googol stack bang stack
googol plex stack stack
googol bang stack stack
googol stack stack stack
</code></pre> <p>Which seems to complete the list for three-operand names. If I have made any mistakes, please comment below.</p> <p>Meanwhile, this answer is just partial progress, since we have the four-operand names, which will fit into the hierarchy, and I don't think the observations above are fully sufficient for the four-operand comparisons, although many of them will now be settled by these criteria. And of course, I am nowhere near a general comparison algorithm.</p> <p>Sorry for the length of this answer. Please post comments if I've made any errors.</p>
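Incidentally, the bound googol bang bang $\lt$ googol plex plex plex used above can be sanity-checked in ordinary floating point by working with iterated base-10 logarithms and Stirling's leading term (a rough Python sketch; the variable names are mine):

```python
import math

LOG10_E = math.log10(math.e)
log_g = 100.0                               # log10(googol)

# Stirling's leading term: log10(n!) ~ n*(log10(n) - log10(e))
log_bang = 10**log_g * (log_g - LOG10_E)    # log10(googol bang) ~ 9.96e101

# One more bang, tracked one logarithm higher:
# log10(log10(m!)) ~ log10(m) + log10(log10(m) - log10(e)), m = googol bang
loglog_bang2 = log_bang + math.log10(log_bang - LOG10_E)
logloglog_bang2 = math.log10(loglog_bang2)  # roughly 102

# googol plex plex plex = 10^10^10^googol: its third iterated log10 is
# exactly a googol
logloglog_plex3 = 10.0**log_g               # 1e100

assert logloglog_bang2 < logloglog_plex3    # bang bang << plex plex plex
```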
<p>The following describes a comparison algorithm that will work for expressions where the number of terms is less than googol - 2.</p> <p>First, consider the situation with only bangs and plexes. To compare two numbers, first count the total number of bangs and plexes in each. If one has more than the other, that number is bigger. If the two numbers have the same number of bangs and plexes, compare the terms lexicographically, setting bang > plex. So googol plex bang plex plex > googol plex plex bang bang, since the first terms are equal, and the second term favors the first.</p> <p>To prove this, first note that x bang > x plex for $x \ge 25$. To show that the higher number of terms always wins, it suffices to show that googol plex$^{k+1} >$ googol bang$^k$. We will instead show that $x$ plex$^{k+1} > x$ bang $^k$ for $x \ge 100$. Set $x = 10^y$.</p> <p>$10^y$ bang $&lt; (10^y)^{10^y} = 10^{y*10^y} &lt; 10^{10^{10^y}} = 10^y$ plex plex</p> <p>$10^y$ bang bang $&lt; (10^{y*10^y})^{10^{y*10^y}} $</p> <p>$= 10^{10^{y*10^y + y + \log_{10} y}}$ </p> <p>$= 10^{10^{10^{y + \log_{10} y} (1 + \frac{y + \log_{10} y}{10^{y + \log_{10} y}})}}$ </p> <p>$&lt; 10^{10^{10^{y + \log_{10} y} (1 + \frac{y}{10^{y}})}}$</p> <p>(We use the fact that x/10^x is decreasing for large x.) 
</p> <p>$= 10^{10^{10^{y + \log_{10} y + \log_{10}(1 + \frac{y}{10^{y}})}}}$ </p> <p>$&lt; 10^{10^{10^{y + \log_{10} y + \frac{y}{10^{y}}}}}$ </p> <p>(We use the fact that ln(1+x) &lt; x, so log_10 (1+x) &lt; x)</p> <p>$&lt; 10^{10^{10^{2y}}} &lt; 10^{10^{10^{10^y}}} = 10^y$ plex plex plex</p> <p>$10^y$ bang bang bang &lt; $(10^{10^{10^{y + \log_{10} y + \frac{y}{10^y}}}})^{10^{10^{10^{y + \log_{10} y + \frac{y}{10^y}}}}} $</p> <p>$= 10^{10^{(10^{10^{y + \log_{10} y + \frac{y}{10^y}}} + 10^{y + \log_{10} y + \frac{y}{10^y}})}}$ </p> <p>$= 10^{10^{(10^{10^{y + \log_{10} y + \frac{y}{10^y}}}(1 + \frac{10^{y + \log_{10} y + \frac{y}{10^y}}}{10^{10^{y + \log_{10} y + \frac{y}{10^y}}}})}}$ </p> <p>$&lt; 10^{10^{(10^{10^{y + \log_{10} y + \frac{y}{10^y}}}(1 + \frac{10^{y }}{10^{10^{y}}})}}$ </p> <p>$= 10^{10^{10^{(10^{y + \log_{10} y + \frac{y}{10^y}} + \log_{10}(1+\frac{10^{y }}{10^{10^{y}}}))}}}$ </p> <p>$&lt; 10^{10^{10^{(10^{y + \log_{10} y + \frac{y}{10^y}} + \frac{10^{y }}{10^{10^{y}}})}}}$ </p> <p>$= 10^{10^{10^{(10^{y + \log_{10} y + \frac{y}{10^y}} (1 + \frac{10^{y }}{10^{10^{y}} * (10^{y + \log_{10} y + \frac{y}{10^y}})}))}}}$ </p> <p>$&lt; 10^{10^{10^{(10^{y + \log_{10} y + \frac{y}{10^y}} (1 + \frac{1}{10^{10^{y}} }))}}}$</p> <p>$= 10^{10^{10^{10^{y + \log_{10} y + \frac{y}{10^y} + \frac{1}{10^{10^{y}} }} }}}$ </p> <p>$&lt; 10^{10^{10^{10^{2y}}}} &lt; 10^{10^{10^{10^{10^y}}}} = 10^y$ plex plex plex plex</p> <p>We can see that the third bang added less than $\frac{1}{10^{10^y}}$ to the top exponent. Similarly, adding a fourth bang will add less than $\frac{1}{10^{10^{10^y}}}$, adding a fifth bang will add less than $\frac{1}{10^{10^{10^{10^y}}}}$, and so on. 
It's clear that all the fractions will add up to less than 1, so in general,</p> <p>$10^y$ bang$^{k} &lt; 10^{10^{10^{\cdot^{\cdot^{10^{y + \log_{10} y + 1}}}}}}{\large\rbrace} k+1\text{ 10's} &lt; 10^{10^{10^{\cdot^{\cdot^{10^{10^y}}}}}}{\large\rbrace} k+2\text{ 10's} = 10^y$ plex$^{k+1}$.</p> <p>Next, we have to show that the lexicographic order works. We will show that it works for all $x \ge 100$. Suppose our procedure failed; take two numbers with the fewest number of terms for which it fails, e.g. $x s_1 ... s_n$ and $x t_1 ... t_n$. It cannot be that $s_1$ and $t_1$ are both plex or both bang, since then $(x s_1) s_2 ... s_n$ and $(x s_1) t_2 ... t_n$ would be a failure of the procedure with one fewer term. So set $s_1 =$ bang and $t_1 =$ plex. Since our procedure tells us that $x s_1 ... s_n$ > $x t_1 ... t_n$, and our procedure fails, it must be that $x s_1 ... s_n$ &lt; $x t_1 ... t_n$. Then</p> <p>$x$ bang plex ... plex $&lt; x$ bang $s_2 ... s_n &lt; x$ plex $t_2 ... t_n &lt; x$ plex bang ... bang.</p> <p>So to show our procedure works, it suffices to show that x bang plex$^k$ > x plex bang$^k$. Set x = 10^y.</p> <p>$10^y$ bang > $(\frac{10^y}{e})^{10^y} &gt; (10^{y - \frac{1}{2}})^{10^y} = 10^{(y-\frac{1}{2})10^y}$</p> <p>$10^y$ bang plex$^k > 10^{10^{10^{\cdot^{\cdot^{10^{(y-\frac{1}{2})10^y}}}}}}{\large\rbrace} k+1\text{ 10's}$</p> <p>To determine $10^y$ plex bang$^k$, we can use our previous inequality for $10^y$ bang$^k$ and set $x = 10^y$ plex $= 10^{10^y}$, i.e. substitute $10^y$ for $y$. We get</p> <p>$10^y$ plex bang$^k &lt; 10^{10^{10^{\cdot^{\cdot^{10^{(10^y + \log_{10}(10^y) + 1}}}}}}{\large\rbrace} k+1\text{ 10's} = 10^{10^{10^{\cdot^{\cdot^{10^{10^y + y + 1}}}}}}{\large\rbrace} k+1\text{ 10's}$</p> <p>$&lt; 10^{10^{10^{\cdot^{\cdot^{10^{(y-\frac{1}{2})10^y}}}}}}{\large\rbrace} k+1\text{ 10&#39;s} &lt; 10^y$ bang plex$^k$.</p> <p>Okay, now for terms with stack. 
Given two expressions, first compare the number of times stack appears; the number in which stack appears more often is the winner. If stack appears n times for both expressions, then in each expression consider the n+1 groups of plexes and bangs separated by the n stacks. Compare the n+1 groups lexicographically, using the ordering we defined above for plexes and bangs. Whichever expression is greater denotes the larger number.</p> <p>Now, this procedure clearly does not work all the time, since a googol followed by googol-2 plexes is greater than googol stack. However, I believe that if the number of terms in the expressions is less than googol-2, then the procedure is correct.</p> <p>First, observe that $x$ plex stack > $x$ stack plex and $x$ bang stack > $x$ stack bang, since </p> <p>$x$ stack plex $&lt; x$ stack bang $&lt; (10\uparrow\uparrow x)^{10\uparrow\uparrow x} &lt; 10\uparrow\uparrow (2x) &lt; x$ plex stack $&lt; x$ bang stack.</p> <p>Thus if googol $s_1 ... s_n$ is some expression with fewer stacks than googol $t_1 ... t_m$, we can move all the plexes and bangs in $s_1 ... s_n$ to the beginning. Let $s_1 ... s_i$ and $t_1 ... t_j$ be the initial bangs and plexes before the first stack. There will be less than googol-2 bangs and plexes, and </p> <p>googol bang$^{\text{googol}-3} &lt; 10^{10^{10^{\cdot^{\cdot^{10^{100 + \log_{10} 100 + 1}}}}}}{\large\rbrace} \text{googol-2 10's} &lt; 10^{10^{10^{\cdot^{\cdot^{10^{103}}}}}}{\large\rbrace} \text{googol-2 10's}$</p> <p>$ &lt; 10 \uparrow\uparrow $googol = googol stack</p> <p>and so googol $s_1 ... s_i$ will be less than googol $t_1 ... t_{j+1}$ ($t_{j+1}$ is a stack). $s_{i+1} ... s_n$ consists of $k$ stacks, and $t_{j+2} ... t_m$ consists of at least $k$ stacks and possibly some plexes and bangs. Thus googol $s_1 ... s_n$ will be less than googol $t_1 ... t_m$.</p> <p>Now consider $x S_1$ stack $S_2$ stack ... stack $S_n$ versus $x T_1$ stack $T_2$ stack ... 
stack $T_n$, where the $S_i$ and $T_i$ are sequences of plexes and bangs. Without loss of generality, we can assume that $S_1 &gt; T_1$ in our order. (If $S_1 = T_1$, we can consider ($x S_1$ stack) $S_2$ stack ... stack $S_n$ versus ($x T_1$ stack) $T_2$ stack ... stack $T_n$, and compare $S_2$ versus $T_2$, etc., until we get to an $S_i$ and $T_i$ that are different.) $x S_1$ stack $S_2$ stack ... stack $S_n$ is, at the minimum, $x S_1$ stack ... stack, while $x T_1$ stack $T_2$ stack ... stack $T_n$, is, at the maximum, $x T_1$ stack bang$^{\text{googol}-3}$ stack ... stack. So it is enough to show</p> <p>$x$ $S_1$ stack $\gt$ $x$ $T_1$ stack bang$^{\text{googol}-3}$.</p> <p>We have seen that $x$ bang$^k &lt; 10^{10^{10^{\cdot^{\cdot^{10^x}}}}}{\large\rbrace} k+1\text{ times}$ so $x$ bang$^{\text{googol}-3} &lt; 10^{10^{10^{\cdot^{\cdot^{10^x}}}}}{\large\rbrace} \text{googol-2 times}$, and $x T_1$ stack bang$^{\text{googol}-3} &lt; ((x T_1) +$ googol) stack. Thus we must show $x S_1 &gt; (x T_1) +$ googol.</p> <p>We can assume without loss of generality that the first term of $S_1$ and the first term of $T_1$ are different. (Otherwise set $x = x s_1 ... s_{i-1}$ where $i$ is the smallest number such that $s_i$ and $t_i$ are different.) We have seen above that it is enough to consider </p> <p>$x$ plex$^{k+1}$ versus $x$ bang$^k$, and $x$ bang plex$^k$ versus $x$ plex bang$^k$.</p> <p>We have previously examined these two cases. In both cases, adding a googol to the smaller leads to the same inequality.</p> <hr> <p>What are the prospects for a general comparison algorithm, when the number of terms exceeds googol-3? The difficulty can be illustrated by considering the following two expressions:</p> <p>$x$ $S_1$ stack plex$^k$</p> <p>$x$ $T_1$ stack</p> <p>The two expressions are equal precisely when $k = x$ $T_1 - x$ $S_1$. 
So a general comparison algorithm must allow for the calculation of arbitrary expressions, which perhaps makes our endeavor pointless.</p> <p>In light of this, I believe the following general comparison algorithm is the best that can be done.</p> <p>We already have a general comparison algorithm for expressions with no appearances of stack. If stack appears in both expressions, let them be $x$ $S_1$ stack $S_2$ and $x$ $T_1$ stack $T_2$, where $S_2$ and $T_2$ have no appearances of stack. Replace $x$ $S_1$ stack with plex$^{(x S_1)}$, and $x$ $T_1$ stack with plex$^{(x T_1)}$, and do our previous comparison algorithm on the two new expressions. This clearly works because $x$ $S_1$ stack = $10^{10}$ plex$^{(x S_1 -2)}$ and $x$ $T_1$ stack = $10^{10}$ plex$^{(x T_1-2)}$.</p> <p>The remaining case is where one expression has stack and the other does not, i.e. googol $S_1$ stack $S_2$ versus googol $T$, where $S_2$ and $T$ have no appearances of stack. Let $s$ and $t$ be the number of terms in $S_2$ and $T$ respectively. Then googol $T$ is greater than googol $S_1$ stack $S_2$ iff $t \ge $ googol $S_1 + s - 2$.</p> <p>Indeed, if $t \ge $ googol $S_1 + s - 2$,</p> <p>googol $T \ge$ googol plex$^{\text{googol} S_1 + s - 2} = 10^{10^{10^{\cdot^{\cdot^{10^{100}}}}}}{\large\rbrace} $ googol $S_1$ $+s-1$ 10's $ &gt; 10^{10^{10^{\cdot^{\cdot^{10^{10} + 10 + 1}}}}}{\large\rbrace} $ googol $S_1 +s-1$ 10's > googol $S_1$ stack bang$^s$ </p> <p>$\ge$ googol $S_1$ stack $S_2$.</p> <p>If $t \le $ googol $S_1 + s - 3$,</p> <p>googol $T \le$ googol bang$^{\text{googol} S_1 + s - 3} &lt; 10^{10^{10^{\cdot^{\cdot^{10^{103}}}}}}{\large\rbrace} $ googol $S_1$ $+s-2$ 10's $ &lt; 10^{10^{10^{\cdot^{\cdot^{10^{10^{10}}}}}}}{\large\rbrace} $ googol $S_1 +s$ 10's = googol $S_1$ stack plex$^s$ </p> <p>$\le$ googol $S_1$ stack $S_2$.</p> <p>So the comparison algorithm, while not particularly clever, works. 
</p> <p>In one of the comments, someone raised the question of a polynomial algorithm (presumably as a function of the maximum number of terms). We can implement one as follows. Let n be the maximum number of terms. We use the following lemma.</p> <p>Lemma. For any two expressions googol $S$ and googol $T$, if googol $S$ > googol $T$, then googol $S$ > 2 googol $T$.</p> <p>This lemma is not too hard to verify, but for reasons of space I will not do so here.</p> <p>As before, we have a simple algorithm in O(n) time when neither expression contains stack. If exactly one of the expressions has a stack, we compute $x$ $S_1$ as above, but we stop if the calculation exceeds n. If the calculation finishes, then we can do the previous comparison in O(log n) time; if the calculation stops, then we know that $x$ $S_1$ stack $S_2$ is larger, again in O(log n) time.</p> <p>If both expressions have stack, then from our previous procedure we calculate both $x$ $S_1$ and $x$ $T_1$. Now we stop if either calculation exceeds $2m$, where $m =$ the maximum of the lengths of $S_2$ and $T_2$ (clearly $2m \le 2n$). If the calculation finishes, then we can do our previous procedure in O(n) time. If the calculation stops prematurely, then the larger of $x$ $S_1$ or $x$ $T_1$ will determine the larger original expression. Indeed, if $y = x$ $S_1$ and $z = x$ $T_1$, and $y &gt; z$, then by the Lemma $y &gt; 2z$, so since $y &gt; 2m$, we have $y &gt; z+m$. In our procedure we replace $x$ $S_1$ stack by plex$^y$ and $x$ $T_1$ stack by plex$^z$; since $y$ is more than $m$ more than $z$, plex$^y$ $S_2$ will be longer than plex$^z$ $T_2$, so the first expression will be larger. So we apply our procedure to $x$ $S_1$ and $x$ $T_1$; this will reduce the sum of the lengths by at least $m+2$, having used O(log m) operations.</p> <p>So we wind up with an algorithm that takes O(n) operations.</p> <hr> <p>We could extend the notation to have suffixes that apply $k \to 10\uparrow\uparrow\uparrow k$ (pent), $k \to 10\uparrow\uparrow\uparrow\uparrow k$ (hex), etc. 
I believe the obvious extension of the above procedure will work, e.g. for expressions with plex, bang, stack, and pent, first count the number of pents; the expression with more pents will be the larger. Otherwise, compare the n+1 groups of plexes, bangs, and stacks lexicographically by our previously defined procedure. So long as the number of terms is less than googol-2, this procedure should work.</p>
probability
<p>If you were to flip a coin 150 times, what is the probability that it would land tails 7 times in a row? How about 6 times in a row? Is there some formula that can calculate this probability?</p>
<p>Here are some details; I will only work out the case where you want $7$ tails in a row, and the general case is similar. I am interpreting your question to mean "what is the probability that, at least once, you flip at least 7 tails in a row?" </p> <p>Let $a_n$ denote the number of ways to flip $n$ coins such that at no point do you flip more than $6$ consecutive tails. Then the number you want to compute is $1 - \frac{a_{150}}{2^{150}}$. The last few coin flips in such a sequence of $n$ coin flips must be one of $H, HT, HTT, HTTT, HTTTT, HTTTTT$, or $HTTTTTT$. After deleting this last bit, what remains is another sequence of coin flips with no more than $6$ consecutive tails. So it follows that</p> <p>$$a_{n+7} = a_{n+6} + a_{n+5} + a_{n+4} + a_{n+3} + a_{n+2} + a_{n+1} + a_n$$</p> <p>with initial conditions $a_k = 2^k, 0 \le k \le 6$. Using a computer it would not be very hard to compute $a_{150}$ from here, especially if you use the matrix method that David Speyer suggests.</p> <p>In any case, let's see what we can say approximately. The asymptotic growth of $a_n$ is controlled by the largest positive root of the characteristic polynomial $x^7 = x^6 + x^5 + x^4 + x^3 + x^2 + x + 1$, which is a little less than $2$. Rearranging this identity gives $2 - x = \frac{1}{x^7}$, so to a first approximation the largest root is $r \approx 2 - \frac{1}{128}$. This means that $a_n$ is approximately $\lambda \left( 2 - \frac{1}{128} \right)^n$ for some constant $\lambda$, which means that $\frac{a_{150}}{2^{150}}$ is roughly</p> <p>$$\lambda \left( 1 - \frac{1}{256} \right)^{150} \approx \lambda e^{ - \frac{150}{256} } \approx 0.56 \lambda$$</p> <p>although $\lambda$ still needs to be determined.</p> <p><strong>Edit:</strong> So let's approximate $\lambda$. 
I claim that the generating function for $a_n$ is</p> <p>$$A(x) = 1 + \sum_{n \ge 1} a_{n-1} x^n = \frac{1}{1 - x - x^2 - x^3 - x^4 - x^5 - x^6 - x^7}.$$</p> <p>This is because, by iterating the argument in the second paragraph, we can decompose any valid sequence of coin flips into a sequence of one of seven blocks $H, HT, ...$ uniquely, except that the initial segment does not necessarily start with $H$. To simplify the above expression, write $A(x) = \frac{1 - x}{1 - 2x + x^8}$. Now, the partial fraction decomposition of $A(x)$ has the form</p> <p>$$A(x) = \frac{\lambda}{r(1 - rx)} + \text{other terms}$$</p> <p>where $\lambda, r$ are as above, and it is this first term which determines the asymptotic behavior of $a_n$ as above. To compute $\lambda$ we can use l'Hopital's rule; we find that $\lambda$ is equal to</p> <p>$$\lim_{x \to \frac{1}{r}} \frac{r(1 - rx)(1 - x)}{1 - 2x + x^8} = \lim_{x \to \frac{1}{r}} \frac{-r(r+1) + 2r^2x}{-2 + 8x^7} = \frac{r^2-r}{2 - \frac{8}{r^7}} \approx 1.$$</p> <p>So my official guess at the actual value of the answer is $1 - 0.56 = 0.44$. Anyone care to validate it?</p> <hr> <p>Sequences like $a_n$ count the number of words in objects called <a href="http://en.wikipedia.org/wiki/Regular_language">regular languages</a>, whose enumerative behavior is described by <a href="http://en.wikipedia.org/wiki/Recurrence_relation#Linear_homogeneous_recurrence_relations_with_constant_coefficients">linear recurrences</a> and which can also be analyzed using <a href="http://en.wikipedia.org/wiki/Finite-state_machine">finite state machines</a>. Those are all good keywords to look up if you are interested in generalizations of this method. I discuss some of these issues in my <a href="http://web.mit.edu/~qchu/Public/TopicsInGF.pdf">notes on generating functions</a>, but you can find a more thorough introduction in the relevant section of Stanley's <a href="http://www-math.mit.edu/~rstan/ec/">Enumerative Combinatorics</a>.</p>
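<p>The recurrence is easy to run exactly with integer arithmetic. Here is a short Python sketch (mine, not part of the original answer) that computes $a_{150}$ and the resulting probability:</p>

```python
# Probability of at least one run of 7 consecutive tails in `flips` fair-coin
# flips, via the recurrence a_{n+7} = a_{n+6} + ... + a_n with a_k = 2^k for
# 0 <= k <= 6, where a_n counts flip sequences with no run of 7 tails.
def prob_run_of_seven(flips=150):
    a = [2 ** k for k in range(7)]     # a_0, ..., a_6
    while len(a) <= flips:
        a.append(sum(a[-7:]))          # exact integer arithmetic throughout
    return 1 - a[flips] / 2 ** flips

print(prob_run_of_seven())             # close to the 0.44 estimated above
```

<p>As a sanity check, for $7$ flips this gives $1 - 127/128 = 1/128$, the probability that all $7$ flips are tails.</p>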
<p>I'll sketch a solution; details are left to you.</p> <p>As you flip your coin, think about what data you would want to keep track of to see whether $7$ heads have come up yet. (The question asks about tails; by symmetry, the probability is the same.) You'd want to know whether you have already won and, if not, how many heads are at the end of your current sequence. In other words, there are $8$ states:</p> <p>$A$: We have not flipped $7$ heads in a row yet, and the last flip was $T$.</p> <p>$B$: We have not flipped $7$ heads in a row yet, and the last two flips were $TH$.</p> <p>$C$: We have not flipped $7$ heads in a row yet, and the last three flips were $THH$. </p> <p>$\ldots$</p> <p>$G$: We have not flipped $7$ heads in a row yet, and the last seven flips were $THHHHHH$.</p> <p>$H$: We've flipped $7$ heads in a row!</p> <p>If we are in state $A$ then, with probability $1/2$ we move to state $B$ and with probability $1/2$ we stay in state $A$. If we are in state $B$ then, with probability $1/2$ we move to state $C$ and with probability $1/2$ we move back to state $A$. $\ldots$ If we are in state $G$, with probability $1/2$ we move forward to state $H$ and with probability $1/2$ we move back to state $A$. Once we are in state $H$ we stay there.</p> <p>In short, define $M$ to be the matrix $$\begin{pmatrix} 1/2 &amp; 1/2 &amp; 1/2 &amp; 1/2 &amp; 1/2 &amp; 1/2 &amp; 1/2 &amp; 0 \\ 1/2 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 1/2 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 1/2 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 1/2 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1/2 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1/2 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1/2 &amp; 1 \end{pmatrix}$$</p> <p>Then the entries of $M^n$ give the probability of transitioning from one given state to another in $n$ coin flips. (Please, please, please, do not go on until you understand why this works! 
This is one of the most standard uses of matrix multiplication.) You are interested in the lower left entry of $M^{150}$.</p> <p>Of course, a computer algebra system can compute this number for you quite rapidly. Rather than do this, I will discuss some interesting math which comes out of this.</p> <hr> <p>(1) The <a href="http://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem">Perron-Frobenius theorem</a> tells us that $1$ is an eigenvalue of $M$ (with corresponding eigenvector $(0,0,0,0,0,0,0,1)^T$, in this case) and all the other eigenvalues are less than $1$. If $\lambda$ is the largest eigenvalue less than $1$, then the probability of getting $7$ heads in a row, when we flip $n$ times, is approximately $1-c \lambda^n$ for some constant $c$. </p> <p>(2) You might wonder what sorts of questions about coin-flip configurations can be answered by this method. For example, could we understand the case where we flip $3$ heads in a row before we flip $2$ tails in a row? (Answer: Yes.) Could we understand the question of whether the first $2k$ flips are a palindrome for some $k$? (Answer: No, not by this method.) In general, the question is which properties can be recognized by <a href="http://en.wikipedia.org/wiki/Finite-state_machine">finite state automata</a>, also called the regular languages. There is a lot of study of this subject. </p> <p>(3) See chapter $8.4$ of <em>Concrete Mathematics</em>, by Graham, Knuth and Patashnik, for many more coin flipping problems.</p>
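<p>To make the matrix method concrete, here is a Python sketch (my code, using exact rational arithmetic) that builds $M$ and reads off the lower left entry of $M^{150}$:</p>

```python
from fractions import Fraction

SIZE = 8                      # states A..G (run of 0..6 heads) plus H (won)
half = Fraction(1, 2)

# Transition matrix from the answer: M[i][j] is the probability of moving
# from state j to state i in one flip.
M = [[Fraction(0)] * SIZE for _ in range(SIZE)]
for j in range(7):
    M[0][j] = half            # flipping a tail resets the run (back to A)
    if j < 6:
        M[j + 1][j] = half    # flipping a head extends the run
M[7][6] = half                # seventh head in a row: move to H
M[7][7] = Fraction(1)         # H is absorbing

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(SIZE)) for j in range(SIZE)]
            for i in range(SIZE)]

P = [[Fraction(int(i == j)) for j in range(SIZE)] for i in range(SIZE)]
for _ in range(150):
    P = mat_mul(P, M)         # P = M^150 after the loop

print(float(P[7][0]))         # probability of 7 heads in a row in 150 flips
```

<p>This prints a value close to the $0.44$ estimate from the generating-function answer above.</p>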
logic
<p>This is a very "soft" question, but regarding language in logic and proofs, should</p> <p>"Either A or B"</p> <p>be interpreted as "A or B, but not both"?</p> <p>I have always avoided saying "either" when my intent is a standard, inclusive or, because saying "either" to me makes it feel like an exclusive or.</p>
<p>No, you cannot depend on that. If it were that simple, we wouldn't need clunky phrases like "exclusive or" to make clear when an "or" is exclusive.</p> <p>Linguistically, "either" is simply a marker that warns you in advance that an "or" is going to follow. Nothing more.</p>
<p>In everyday speech, "or" is usually exclusive even without "either." In mathematics and logic, though, "or" is inclusive unless explicitly specified otherwise, even with "either."</p> <p>This is not a fundamental law of the universe; it is simply a virtually universal convention in these subjects. The reason is that inclusive "or" is vastly more common.</p>
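<p>For what it's worth, the two readings differ in exactly one row of the truth table; a quick Python sketch:</p>

```python
# Inclusive "or" (the mathematics/logic convention) versus exclusive "or"
# ("A or B, but not both"): they disagree only when A and B are both true.
rows = [(a, b, a or b, a != b) for a in (False, True) for b in (False, True)]
for a, b, inclusive, exclusive in rows:
    print(f"A={a!s:5} B={b!s:5}  inclusive={inclusive!s:5}  exclusive={exclusive}")
```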
linear-algebra
<p>I'm majoring in mathematics and currently enrolled in Linear Algebra. It's very different, but I like it (I think). My question is this: What doors does this course open? (I saw a post about Linear Algebra being the foundation for Applied Mathematics -- but I like doing math for the sake of math, not so much the applications.) Is this a stand-alone class, or will the new things I'm learning come into play later on? </p>
<p>Linear Algebra is indeed one of the foundations of modern mathematics. There are a lot of things which use the language and tools developed in linear algebra:</p> <ul> <li><p>Multidimensional Calculus, i.e. Analysis for functions of many variables, i.e. vectors (for example, the first derivative becomes a matrix)</p></li> <li><p>Differential Geometry, which investigates structures which locally look like a vector space, and functions on them.</p></li> <li><p>Functional Analysis, which is essentially linear algebra on infinite-dimensional vector spaces, and which is the foundation of quantum mechanics.</p></li> <li><p>Multivariate Statistics, which investigates vectors whose entries are random. For instance, to describe the relation between two components of such a random vector, one can calculate the correlation matrix. Furthermore, one can apply a technique called singular value decomposition (which is close to calculating the eigenvalues of a matrix) to find which components have the main influence on the data.</p></li> <li><p>Tagging on to Multivariate Statistics and multidimensional calculus, there are a number of Machine Learning techniques which require you to find a (local) optimum of a nonlinear function (the likelihood function), for example for neural nets. Generally speaking, one can try to find the parameters which maximize the likelihood, e.g. by applying the gradient descent method, which uses vector arithmetic. (Thanks, frogeyedpeas!)</p></li> <li><p>Control Theory and Dynamical Systems theory are mainly concerned with differential equations where matrices are factors in front of the functions. 
It helps tremendously to know the eigenvalues of the matrices involved to predict how the system will behave, and also how to change the matrices in front to make sure the system behaves like you want it to - in Control Theory, this is related to the poles and zeros of the transfer function, but in essence it's all about placing the eigenvalues in the right place. This is not only relevant for mechanical systems, but also for electrical engineering. (Thanks, Look behind you!) </p></li> <li><p>Optimization in general and Linear Programming in particular are closely related to multidimensional calculus, namely about finding minima (or maxima) of functions, but you can use the structure of vector spaces to simplify your problems. (Thanks, Look behind you!) </p></li> </ul> <p>On top of that, there are a lot of applications in engineering and physics which use tools of linear algebra and the fields listed above to solve real-world problems (often calculating eigenvalues and solving differential equations). </p> <p>In essence, a lot of things in the mathematical toolbox in one variable can be lifted up to the multivariable case with the help of Linear Algebra.</p> <p>Edit: This list is by no means complete; these were just the topics which came to my mind at first thought. Not mentioning a particular field doesn't mean that this field is irrelevant, but just that I don't feel qualified to write a paragraph about it.</p>
<p>By now, we can roughly say that all we <em>fully</em> understand in Mathematics is <strong>linear</strong>.</p> <p>So I guess Linear Algebra is a good topic to master.</p> <p>Both in Mathematics and in Physics one usually reduces the problem to a linear one and then solves it with the techniques of Linear Algebra. This happens in Algebra with linear representations, in Functional Analysis with the study of Hilbert Spaces, in Differential Geometry with the tangent spaces, and so on in almost every field of Mathematics. Indeed I think there should be at least 3 courses of Linear Algebra (undergraduate, graduate, advanced), each time with different insights on the subject.</p>
game-theory
<p><em>Remark:</em> The problem is the <a href="https://sites.math.washington.edu/~morrow/336_11/papers/yisong.pdf" rel="nofollow noreferrer">prisoners and lightbulb problem</a>, but without the usual probabilistic frame (the ogre chooses as he pleases). Also, the strategy is symmetric and the dwarves don't have a sense of time. The link above does not give a solution for these hypotheses.</p> <p>In more detail, the problem goes as follows: <span class="math-container">$100$</span> immortal dwarves are captured by an immortal ogre in order for him to play a game. The dwarves are in separate cells and never communicate, and each day the ogre arbitrarily chooses one dwarf and brings him to a room with a lightbulb which is either on or off. The dwarf can leave it as it is or switch it. On the first day, the dwarves decide on a strategy, and the lightbulb is off.</p> <p>The dwarves are able to go out if one day one of them can say that all <span class="math-container">$100$</span> dwarves have been taken to the lightbulb (and be right about it).</p> <blockquote> <p>Two important hypotheses:</p> <ul> <li>the choice of the ogre is arbitrary (<strong>not random</strong>, arbitrary), but the game isn't unfair, so he promises that he plans to take each dwarf an infinite number of times to the lightbulb room,</li> <li>the dwarves have <strong>no sense of time</strong> (so in particular they cannot know how many dwarves went to the room before them, or if the ogre took them to the room several times in a row). Equivalently, we could say the ogre takes a dwarf to the room whenever he wants.</li> </ul> </blockquote> <p><span class="math-container">$ $</span></p> <p><strong>Question:</strong> before the game, all <span class="math-container">$100$</span> dwarves meet one (last?) time to decide on a strategy. </p> <ul> <li><p>A. Can they find a winning strategy?</p></li> <li><p>B. [Contains hint for 1...] Can they find a winning <strong>symmetric</strong> strategy (i.e. 
each dwarf has the same action policy)?</p></li> </ul> <hr> <p><strong>I don't have an answer for B.</strong> I am putting this question here since I think some (very simple) combinatorics may be needed to find a proof for a symmetric strategy.</p> <p>Here is my answer for A: one of the dwarves is chosen as a 'leader': only he can switch on the light, and only he can speak to the ogre. So each time he visits, he switches the light on (or leaves it on if it already is) and counts the number of times he has switched it on. All <span class="math-container">$99$</span> other dwarves can only switch the light off once if they find it on (and then do nothing more). Once the leader has switched the light on <span class="math-container">$100$</span> times (it is off at the beginning), he tells the ogre that all dwarves have seen the lightbulb. Using the fact that after any number of days, the ogre will still bring each dwarf to the room again, it is easy to show that the dwarves will win.</p>
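<p>The strategy in A is easy to sanity-check by simulation. Below is a hedged Python sketch (the helper names and schedules are mine, not part of the problem): dwarf $0$ is the leader, and an error is raised if he ever declares before all $100$ dwarves have visited.</p>

```python
def run(schedule, n=100):
    """Simulate strategy A against an arbitrary schedule of visits.

    schedule: iterable of dwarf indices chosen by the ogre, day by day.
    Returns the (1-based) day on which the leader declares, or None.
    """
    light = False
    count = 0                  # times the leader has switched the light on
    used = [False] * n         # whether follower i has spent their switch-off
    visited = [False] * n
    for day, d in enumerate(schedule, start=1):
        visited[d] = True
        if d == 0:                       # the leader
            if not light:
                light = True
                count += 1
            if count == n:               # initial "off" + (n-1) follower offs
                assert all(visited), "leader declared too early!"
                return day
        elif light and not used[d]:      # follower: switches off at most once
            light = False
            used[d] = True
    return None

# Round-robin schedule: the leader declares, correctly, on day 9901.
print(run([i % 100 for i in range(20000)]))
# A schedule that only ever brings the leader: no (false) declaration.
print(run([0] * 1000))
```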
<p>If the dwarves are aware of the separation between rounds, then the problems are trivial, as you can see from @Arthur 's answer. Assuming that the dwarves lose their sense of time, the problems are much more interesting. A suggestion for 2): Everybody plays assuming that the light was off to begin with. The only modification in the strategy: each of the 99 non-leader dwarves turns the light off twice if they find it on. If the leader finds the light on during his first visit, then he leaves it on, and knows that this is where the game started, because he must have been the first visitor. If he finds it off, then he turns it on (according to the original strategy), and assumes that there was already zero or one visitor. In any case, after having to switch on the light for the 198th time, the leader can make the statement. </p> <p>The 198th time definitely comes: the non-leader dwarves turn the light off 198 times in total (even if one of these turn-offs is spent on day one on a light that was already on), and the light is turned on again after each time it is turned off. When the light is turned on for the 198th time, it cannot be that only 98 or fewer of the non-leader dwarves had been in the room before that: otherwise, they would have turned the light off at most 196 times, plus maybe there was one extra turn-on by the leader (which can happen if the leader was taken to the room on day one and the light was off), for at most 197 turn-ons in total.</p>
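<p>This modified strategy can be sanity-checked the same way. A hedged Python sketch (names and schedule are mine; dwarf $0$ is the leader, and the initial light state is a parameter):</p>

```python
def run_unknown_start(schedule, light, n=100):
    """Simulate the modified strategy: each of the n-1 followers turns the
    light off up to twice; the leader leaves the light on if he finds it on
    at his first visit, otherwise turns it on whenever he finds it off, and
    declares after his 2(n-1)-th switch-on (198 for n = 100)."""
    first_visit = True
    count = 0                           # leader's switch-ons so far
    offs = [0] * n                      # switch-offs used by each follower
    visited = [False] * n
    for day, d in enumerate(schedule, start=1):
        visited[d] = True
        if d == 0:                      # the leader
            if first_visit and light:
                pass                    # leave it on: he was the first visitor
            elif not light:
                light = True
                count += 1
            first_visit = False
            if count == 2 * (n - 1):    # the 198th switch-on
                assert all(visited), "leader declared too early!"
                return day
        elif light and offs[d] < 2:     # follower: switches off up to twice
            light = False
            offs[d] += 1
    return None

# The declaration happens, and is correct, for either initial light state.
for start in (False, True):
    print(run_unknown_start([i % 100 for i in range(40000)], start))
```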
<p>I believe there is no symmetric strategy for the prisoners. This problem appeared in <em>Mathematical Puzzles, a Connoisseur's Collection</em> by Peter Winkler. In the solution provided, he went above and beyond and gave a complete characterization of all successful strategies. Here is the exact excerpt from the book, which Winkler warns is only "sketchy." </p> <blockquote> <p>Let us focus on one prisoner, say Alice. Her strategy can be assumed to be deterministic and based solely on the sequence of light-states she has so far observed.</p> <p>Suppose that Alice's strategy calls for her (in some circumstance) to change the state (of the light) after finding it in the state in which she last left it. Then the adversary could have brought her back immediately to the room, "wasting" her previous visit; in effect, this piece of Alice's strategy can only give the adversary an extra option. We may assume, therefore, that Alice never changes the state when she finds it where she last left it.</p> <p>Next, suppose Alice is required at some point to leave the state as she found it. Then we claim we can assume she will never act again! Why? Because if the adversary doesn't want her ever to act again, he can insure that she never sees a state different from the state she now finds. He can do this because if Alice did become permanently inactive, at least one of the states (on or off) will recur infinitely often; suppose it's the "off" state. Then he can schedule Alice so that she sees "off" now and at every subsequent visit, hence, by the previous argument, she will never act again. So, once more, the adversary always has the option of silencing Alice so we may assume that it is his only option.</p> <p>Obviously, Alice can't then begin with the instruction to leave the state as she finds it, since in that case she is forever inactive and no one will ever know that she has visited the room. Say she is supposed to turn the light on if it's off, otherwise leave it on. 
Then she won't be doing anything until she again finds the light off, at which point she may only turn it on again or go inactive forever. Thus, she is limited to turning the light on some number of times <span class="math-container">$j$</span> (which may as well be constant, else the adversary has more options). We call this strategy <span class="math-container">$+j$</span>, where <span class="math-container">$j$</span> is a positive integer or infinity. Similar arguments apply if she is instructed to turn the light off on first visit, leading to strategy <span class="math-container">$-j$</span>.</p> <p>The only remaining possibility is that she is instructed to change the state of the light at first visit, in which case she must proceed as above depending on whether she turned the first light on or off. This again only gives the adversary an additional option.</p> <p>We are reduced to each prisoner having a strategy <span class="math-container">$+ j$</span> or <span class="math-container">$- j$</span> for various <span class="math-container">$j$</span>. If they all turn lights only off (or only on), no one will learn anything; thus, we may assume Alice's strategy is <span class="math-container">$+j$</span> and Bob's is <span class="math-container">$- k$</span>. </p> </blockquote> <p>At this point, the question is answered. Every strategy is of the form <span class="math-container">$\pm j$</span> for <span class="math-container">$j\in \mathbb N\cup \{\infty\}$</span>, and it is clear that no one of these works when all prisoners use it. Here is the rest of the solution in case anyone is curious.</p> <blockquote> <p>If Charlie turns lights on, Alice will never be able to tell the difference between Bob and Charlie both having finished, and Bob and Charlie each having one task left. 
If Charlie turns lights off, it's Bob who will be "left in the dark."</p> <p>Putting all this together, we have that for a prisoner to be able to determine that everyone has visited, she must turn the light on, while everyone else turns it off (or vice-versa). In fact, if her strategy is <span class="math-container">$+j_1$</span> and the others are <span class="math-container">$- j_2, \dots, -j_{n}$</span> then it's easy to check that having each <span class="math-container">$j_i$</span> finite, but at least <span class="math-container">$2$</span>, and <span class="math-container">$j_1$</span> greater than the sum of the other <span class="math-container">$j_i$</span>'s minus the least of them, is necessary and sufficient.</p> </blockquote>
combinatorics
<p>Consider a dynamical system with state space of size $2^n$, represented as a sequence of $n$ black or white characters, such as $BWBB\ldots WB$.</p> <p>At every step, we choose a random pair $(i,j)$ with $i&lt;j$ and copy the $i$-th color into the $j$-th position. For example, if the state is $BWW$ and we choose $(1,3)$, then the next state would be $BWB$.</p> <p>Starting with an initial state $BWW\ldots W$, we will reach the final state $BB\ldots B$ in finite time with probability $1$ (because any initial segment of blacks persists forever).</p> <blockquote> <p>What is the expected time to reach this final state $BB\ldots B$?</p> </blockquote> <p>Low-dimensional analysis shows that the answer is exactly $(n-1)^2$ whenever $n&gt;2$; this was "verified" up to $n=13$ or so. But we can't find a proof of this conjecture, nor even show that the answer is an integer. One expects that, if the answer is that easy, there should also be an elementary proof.</p> <hr> <p>I tried various kinds of induction. For example, I tried to use induction to compute the expected time to reach the state $BB\ldots B*$ where $*$ is <em>something</em>, and reduced the initial conjecture to a claim that the expected time from $BB\ldots B*$ to $BB\ldots BB$ is <em>exactly $1$</em>. But I don't know how to prove that it is $1$.</p>
<p>This is now a complete answer.</p> <p>It turns out that Markus was right all along: with this particular starting configuration, the number of black balls <strong>is</strong> a Markov chain, and at any point in time, the distribution of the sequence is uniform conditioned on the number of black balls in the space of sequences starting with a black ball.</p> <hr> <p>Let $\Omega$ be the set of sequences of $0$ and $1$ of length $n$ <strong>starting with $1$</strong>.</p> <p>To show that the number of black balls is a Markov chain it is enough to show that if we are given a random sequence in $\Omega$ distributed uniformly conditioned on the number of black balls and apply one step of the process, then we still get a sequence distributed uniformly conditioned on the number of black balls.</p> <p>More formally, let $\Omega' = \{1 \ldots n\}$ be the possible numbers of black balls and $f : \Omega \to \Omega'$ be the counting map.<br> Let $F,F'$ be the free vector spaces on the bases $\Omega$ and $\Omega'$ respectively.<br> Your process is described by a linear endomorphism $T : F \to F$, and $f$ induces a linear map $f : F \to F'$.<br> Let $g : F' \to F$ be the map sending $1.[k]$ for $k \in \Omega'$ to $\frac 1{|f^{-1}(k)|} \sum_{f^{-1}(k)} 1.[\omega]$.</p> <p>We obviously have that $f \circ g = id_{F'}$.<br> If we prove that the image of $g$ is stable under $T$ then there is a map $T' : F' \to F'$ such that $T \circ g = g \circ T'$.<br> Then we get (I will omit the composition symbols from now on)<br> $gfTgf = gf(Tg)f = gf(gT')f = g(fg)T'f = gT'f = Tgf$.</p> <p>By induction it follows that $T^ngf = (Tgf)^n$ and $fT^ngf = (fTg)^nf = T'^n f$.</p> <p>Since we are only interested in the behaviour of $fT^n(V)$ for the initial vector $V = 1.[10\ldots0] \in F$, and because we have $gfV=V$, we are interested in the behaviour of $fT^n(V) = fT^ngf(V) = T'^n f(V) = T'^n(1.[1])$.</p> <p>That is, $T'$ is the transition matrix of a Markov chain on $\Omega'$ and we want to know what's the 
expected time to get from $1$ to $n$.</p> <hr> <p>Intuitively, if you interpret $gf$ as a process that randomly permutes the $n-1$ rightmost balls, this says that the two processes<br> "randomize ; apply step ; randomize" and "randomize ; apply step" are equivalent.</p> <p>Then for any $k$, the process "initialize ; randomize ; (apply step ; randomize) x $k$" is equivalent to the process "initialize ; randomize ; apply step x $k$".</p> <p>And <strong>since the initial configuration is already randomized</strong>, "initialize ; randomize" is equivalent to "initialize". </p> <p>Therefore the original process is equivalent to "initialize ; randomize ; (apply step ; randomize) x $k$", where the number of black balls is <strong>obviously</strong> a Markov chain.</p> <hr> <p>Now I will show that the image of $g$ is stable under $T$. Let $N = n(n-1)/2$ be the number of pairs $(i,j)$ with $1 \le i &lt; j \le n$.</p> <p>To prove this you essentially have to count things up. The "surprising" part is that to obtain a particular sequence as a result of a non-trivial copy move, the number of ways to obtain it <strong>does not depend on the order of the balls</strong>. If you want to get a sequence with $4$ ones by copying a $1$ onto a $0$, <strong>you have to choose two $1$s</strong> and then replace the right one with a $0$.</p> <p>Even though you get some input sequences more times than others, since they are all equiprobable, the resulting probability for each output sequence doesn't depend on the order of the balls.</p> <p>Conversely, some inputs have a tendency to increase the sum, and some others have a tendency to decrease the sum, but really, when you sum up every permutation of the inputs, you do get something uniform inside each equivalence class.</p> <hr> <p>To compute $T(g(k))$, consider a random sequence that has sum $k$ and is uniformly distributed, so that every sequence of sum $k$ has probability $1/\binom {n-1}{k-1}$ to be the input.</p> <p>For each possible output sequence 
of sum $k$, consider how many times it can be reached.<br> To get a sequence of sum $k$ you need to make a move that does nothing, and for each output sequence there are $k(k-1)/2+(n-k)(n-k-1)/2$ moves that do this, so every sequence of sum $k$ is reached in total with probability $(k(k-1)+(n-k)(n-k-1))/(2N \binom {n-1}{k-1})$; this is $ (k(k-1)+(n-k)(n-k-1))/2N \;.g(k)$.</p> <p>Now consider the output sequences of sum $k+1$ (if $k&lt;n$). To get a sequence of sum $k+1$ you need to copy a $1$ onto a $0$.<br> For each output, you have $k$ sequences of sum $k$ that can produce it, one that produces it in one case, the next one in two cases, and so on. Since every sequence of sum $k$ is equally probable, each output sequence can be reached with equal probability $k(k+1)/(2N \binom {n-1}{k-1})$, so summing up everything this is $k(k+1)\binom {n-1}{k}/(2N\binom {n-1}{k-1})\;. g(k+1) = (k+1)(n-k)/2N \;. g(k+1)$</p> <p>Finally, consider the output sequences of sum $k-1$ (if $k&gt;1$). To get a sequence of sum $k-1$ you need to copy a $0$ onto a $1$.<br> For each output, you have $n-k$ sequences of sum $k$ that can produce it, one that produces it in one case, the next one in two cases, and so on. Since every sequence of sum $k$ is equally probable, each output sequence can be reached with equal probability $(n-k)(n-k+1)/(2N \binom {n-1}{k-1})$, so summing up everything this is $(n-k)(n-k+1)\binom {n-1}{k-2}/(2N\binom {n-1}{k-1}) \;. 
g(k-1) = (n-k)(k-1)/2N \cdot g(k-1)$.</p> <p>And $k(k-1)+(n-k)(n-k-1) + (k+1)(n-k) + (n-k)(k-1) = n(k-1)+(n-k)n = n^2-n = 2N$, so the three terms sum to $1$ as needed.</p> <hr> <p>Finally, we can use the fact that the sequences are uniformly distributed conditioned on the number of black balls to prove that $E[T_n - T_{n-1}] = 1$.</p> <p>Consider, at each time, the probability that we go from a particular sequence with $n-1$ $1$s to the final sequence with $n$ $1$s.</p> <p>At every time, the $n-1$ sequences with sum $n-1$ are equally probable (say with probability $p_t$); then a simple counting argument says that the final sequence comes from $1\ldots10$ with probability $(n-1)p_t/N = 2p_t/n$, and it comes from one of the other $n-2$ sequences with probability $(1+2+\ldots+(n-2))p_t/N = (n-2)(n-1)p_t/2N = (n-2)p_t/n$.</p> <p>Summing up those events for every time $t$, the probability that the last step comes from $1\ldots10$ is $2/n$, which means that $P(T_{n-1} &lt; T_n) = 2/n$ and that $P(T_{n-1} = T_n) = (n-2)/n$.</p> <p>But if $T_{n-1} &lt; T_n$, then the expected time for the last step is $N/(n-1) = n/2$ (because we are stuck on $1\ldots10$ until we move to $1\ldots 11$, which happens with probability $2/n$).</p> <p>Thus $E[T_n - T_{n-1}] = 2/n \times n/2 + 0 \times (n-2)/n = 1$.</p> <hr> <p>You can in fact easily prove @leonbloy's conjecture from the observation that $P(T_{n} &gt; T_{n-1}) = 2/n$.</p> <p>If you hide the last $n-k$ balls, then what you see is a mix of the same process on the first $k$ balls and steps that would copy one of the visible balls onto one of the hidden balls, and so steps that don't visibly do anything.
Therefore, even though you see a slowed process, this doesn't influence the probabilities: $P(T_{k} &gt; T_{k-1}) = 2/k$.</p> <p>From there, it's easy to get the expected time between each step: if we have equality then $T_{k} - T_{k-1} = 0$,<br> and if not, at every step in between you have a probability $(k-1)/N$ of coloring the $k$th ball black, so it takes $N/(k-1)$ steps on average.<br> Finally, the expected difference is $E[T_k - T_{k-1}] = N/(k-1) \times 2/k = n(n-1)/(k(k-1))$.</p>
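The one-step computation above can be checked exhaustively for small $n$. The snippet below is my own verification sketch (not part of the original argument): it enumerates every sum-$k$ sequence whose first ball is black, applies all $N$ moves (each move copies the left ball of a pair onto the right one), and confirms both the uniformity inside each output class and the three transition weights.

```python
from fractions import Fraction
from itertools import combinations, product

def transition_weights(n, k):
    """Apply every move (copy the left ball of a pair onto the right one)
    to every sum-k sequence whose first ball is black; return the exact
    probability of landing in each sum class."""
    N = n * (n - 1) // 2
    inputs = [(1,) + rest for rest in product((0, 1), repeat=n - 1)
              if 1 + sum(rest) == k]
    counts = {}  # output sequence -> number of (input, move) pairs hitting it
    for seq in inputs:
        for i, j in combinations(range(n), 2):
            out = list(seq)
            out[j] = seq[i]
            counts[tuple(out)] = counts.get(tuple(out), 0) + 1

    # "uniform inside each equivalence class": all outputs of a given sum
    # are hit the same number of times
    by_sum = {}
    for out, c in counts.items():
        by_sum.setdefault(sum(out), set()).add(c)
    assert all(len(cs) == 1 for cs in by_sum.values())

    total = len(inputs) * N
    weights = {}
    for out, c in counts.items():
        s = sum(out)
        weights[s] = weights.get(s, Fraction(0)) + Fraction(c, total)
    return weights

n, k = 6, 3
N = n * (n - 1) // 2
w = transition_weights(n, k)
assert w[k] == Fraction(k * (k - 1) + (n - k) * (n - k - 1), 2 * N)
assert w[k + 1] == Fraction((k + 1) * (n - k), 2 * N)
assert w[k - 1] == Fraction((n - k) * (k - 1), 2 * N)
```

Exact rational arithmetic (`fractions.Fraction`) keeps the comparison free of floating-point noise.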
<p>@leonbloy pointed out an interesting observation:</p> <blockquote> <p>Related conjecture (supported by simulations): Let $T_i$ be the incremental number of steps it takes to colour the first $i$ positions of the sequence over and above the number of steps it takes to color the first $i-1$ characters. Then $$ E(T_i)=\frac{\binom{n}{2}}{\binom{i}{2}} = \frac{n(n-1)}{i(i-1)} $$ This agrees with the original result $E(T_2+T_3 + \cdots + T_n)=(n-1)^2$ and your conjecture $E(T_n)=1$.</p> </blockquote> <p>I will try and "prove" this, but it relies on another conjecture (also supported by simulation). This may just be a way of rewriting @leonbloy's conjecture, but perhaps it is easier to prove.</p> <p>We will do a sort of pseudo-induction. We can prove it is true directly for $T_2$, where our sequence is $BW...W$. The only way to color the second character is to pick the pair $(1,2)$, which has probability $1/{n \choose 2}$. We then have the following setup:</p> <p>$$ E(T_2) = 1 + \frac{1}{\binom{n}{2}}\cdot 0 \ + \left(1-\frac{1}{\binom{n}{2}}\right)E(T_2) \\ E(T_2) = \binom{n}{2} = \frac{\binom{n}{2}}{\binom{2}{2}} $$</p> <p>i.e. we must draw one pair, and there is a $1/{n \choose 2}$ chance we are done and a $1 - 1/{n \choose 2}$ chance we are back where we started.</p> <p>Now consider a character sequence where the first $k$ characters have been colored in and the rest are unknown, i.e. we have $B\ldots B*$. We will prove that the expected number of steps to color the first $k+1$ characters, $E(T_{k+1})$, should be: </p> <p>$$E(T_{k+1}) = \frac{\binom{n}{2}}{\binom{k+1}{2}} = \frac{n(n-1)}{k(k+1)}$$</p> <p>To do so we need some more information about the sequence besides the fact that the first $k$ characters are colored. In particular, if we can differentiate the cases where the $(k+1)^{\text{th}}$ character happened to have been colored or not in the process of coloring the first $k$ characters, we can calculate $E(T_{k+1})$.
Let $p_k$ be the probability that the $(k+1)^{\text{th}}$ character is already colored. Then we have a $p_k$ probability that our sequence looks like $B\ldots BB*$, i.e. it already has the first $k+1$ characters colored, and a $1 - p_k$ probability that it looks like $B\ldots BW*$, i.e. the $(k+1)^{\text{th}}$ character is white (W) and thus we need to calculate the expected number of steps to paint it black. Let us use $W_k$ to denote the number of steps needed to color the $(k+1)^{\text{th}}$ character of the sequence $B\ldots BW*$ black. Then:</p> <p>$$ E(T_{k+1}) = p_k \cdot 0 + (1 - p_k)E(W_k) = (1 - p_k)E(W_k) $$</p> <p>We can easily calculate $E(W_k)$. The pairs we can draw that will color the $(k+1)^{\text{th}}$ character are of the form $(i,k+1)$ where $i \in [1,k]$. Thus we have $k$ possible pairs that will paint the desired character, giving a probability of $k / \binom{n}{2}$. Thus:</p> <p>$$ E(W_k) = 1 + \frac{k}{\binom{n}{2}}\cdot 0 \ + \left(1-\frac{k}{\binom{n}{2}}\right)E(W_k) \\ E(W_k) = \frac{\binom{n}{2}}{k} = \frac{n(n-1)}{2k} $$</p> <p>Thus substituting this back, we get:</p> <p>$$ E(T_{k+1}) = (1 - p_k)\frac{n(n-1)}{2k} $$</p> <p>Here is where we get stuck and need the conjecture. </p> <blockquote> <p><strong>Conjecture (verified by simulation):</strong> After following a number of steps until the first $k$ characters of the sequence are colored, the probability, $p_k$, that the $(k+1)^{\text{th}}$ character is already colored is given by: $$ p_k = \frac{k-1}{k+1} $$ Evidence provided at bottom of answer.</p> </blockquote> <p>Assuming it is true, we get:</p> <p>$$ \begin{align} E(T_{k+1}) &amp;= \left(1 - \frac{k-1}{k+1} \right)\frac{n(n-1)}{2k} \\ &amp;= \frac{2}{k+1}\cdot \frac{n(n-1)}{2k} \\ &amp;= \frac{n(n-1)}{k(k+1)} = \frac{\binom{n}{2}}{\binom{k+1}{2}} \\ \end{align} $$</p> <p>as required.
This would prove the OP's conjecture, since the expected number of steps till completion is given by:</p> <p>$$ \begin{align} E(T_2 + T_3 + \cdots + T_n) &amp;= \sum_{i=2}^n\frac{n(n-1)}{i(i-1)} \\ &amp;= n(n-1) \cdot \sum_{i=2}^n\frac{1}{i(i-1)} \\ &amp;= n(n-1) \cdot \sum_{i=2}^n\left(\frac{1}{i-1} - \frac{1}{i}\right) \\ &amp;= n(n-1) \cdot \left ( \frac{1}{1} -\frac{1}{2} + \frac{1}{2} -\frac{1}{3} + \frac{1}{3} - \cdots + \frac{1}{n-1} - \frac{1}{n} \right) \\ &amp;= n(n-1) \cdot \left ( \frac{1}{1} - \frac{1}{n} \right) \\ &amp;= n(n-1) \cdot \left (\frac{n-1}{n} \right) \\ &amp;= (n-1)^2 \end{align} $$</p> <hr> <blockquote> <p>Evidence that $p_k = \frac{k-1}{k+1}$:</p> </blockquote> <p>The array below is a matrix $M$ where entry $i,j$ represents the probability of the $(j+1)^{\text{th}}$ character of a sequence of length $i$ being colored after the first $j$ are colored. As you can see, the values are invariant of the length of the sequence (the rows). In actuality I ran larger tests but did not save the results. For tonight, my computer needs a break.
I might post a bigger sample tomorrow.</p> <pre><code>       1        2        3        4        5        6       7        8       9
1    0.0  0.00000  0.00000  0.00000  0.00000  0.00000  0.0000  0.00000  0.0000
2    0.0  0.00000  0.00000  0.00000  0.00000  0.00000  0.0000  0.00000  0.0000
3    0.0  0.32890  0.00000  0.00000  0.00000  0.00000  0.0000  0.00000  0.0000
4    0.0  0.33455  0.49570  0.00000  0.00000  0.00000  0.0000  0.00000  0.0000
5    0.0  0.33545  0.49400  0.60520  0.00000  0.00000  0.0000  0.00000  0.0000
6    0.0  0.33605  0.49825  0.59395  0.66200  0.00000  0.0000  0.00000  0.0000
7    0.0  0.33055  0.50465  0.59865  0.67025  0.71760  0.0000  0.00000  0.0000
8    0.0  0.33335  0.50275  0.60105  0.66430  0.71635  0.7476  0.00000  0.0000
9    0.0  0.33005  0.50605  0.60035  0.66255  0.71310  0.7490  0.77605  0.0000
10   0.0  0.33480  0.49800  0.60330  0.66855  0.70855  0.7497  0.77940  0.8023
</code></pre> <p>The program used to simulate the events is here (python3):</p> <pre><code>import random

import numpy as np
from pandas import DataFrame


def rand_pair(seq):
    # Uniformly random pair (i, j) with i &lt; j.
    i, j = random.sample(range(len(seq)), 2)
    return (i, j) if i &lt; j else (j, i)


def t_i(n, k, seq=None):
    if seq is None:
        seq = ['B'] + ['W'] * (n - 1)
    assert k &lt; len(seq)
    while seq[:k] != ['B'] * k:
        i, j = rand_pair(seq)
        seq[j] = seq[i]  # copy the left ball's colour onto the right one
    # Was the (k+1)-th character colored along the way?
    return 1 if seq[k] == 'B' else 0


def ti_time(n, k, iter_n=20000):
    """
    Estimates the probability of the (k+1)-th character being colored
    after coloring the first k characters.
    """
    return sum(t_i(n, k) for _ in range(iter_n)) / iter_n


def main():
    N_LIM = 7
    arr = np.zeros((N_LIM, N_LIM))
    for n in range(3, N_LIM + 1):
        for k in range(2, n):
            arr[n - 1][k - 1] = ti_time(n, k)
    df = DataFrame(arr)
    df.index += 1
    df.columns += 1
    print(df)
    return df
</code></pre>
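For an end-to-end check of the original result, here is a separate quick simulation (my own sketch, under the same model: a step picks a uniformly random pair and copies the colour of the left ball onto the right one). The average number of steps until everything is black should land close to $(n-1)^2$.

```python
import random

def steps_to_all_black(n, rng):
    """Run one full process: ball 1 starts black, each step copies the left
    ball of a random pair onto the right one; count steps until all black."""
    seq = [1] + [0] * (n - 1)
    steps = 0
    while sum(seq) < n:
        i, j = sorted(rng.sample(range(n), 2))
        seq[j] = seq[i]
        steps += 1
    return steps

rng = random.Random(0)
n, trials = 5, 20000
avg = sum(steps_to_all_black(n, rng) for _ in range(trials)) / trials
# Expected value is (n-1)^2 = 16; with 20000 trials the standard error is
# roughly 0.1, so a 0.5 tolerance is comfortable.
assert abs(avg - (n - 1) ** 2) < 0.5
```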
differentiation
<p>Is there a topology <span class="math-container">$T$</span> on the set of real numbers <span class="math-container">$\mathbb{R}$</span>, such that the set of <span class="math-container">$T$</span>-continuous functions from <span class="math-container">$\mathbb{R}$</span> to <span class="math-container">$\mathbb{R}$</span> is precisely the set of differentiable functions on <span class="math-container">$\mathbb{R}$</span>?</p>
<p>No. Indeed, if such a topology exists, then the functions <span class="math-container">$f_{1},f_{2}:\mathbb R \to \mathbb R$</span> defined by <span class="math-container">$f_{1}(x)=-x$</span> and <span class="math-container">$f_{2}(x)=x$</span> are continuous. Now we can restrict the domains <span class="math-container">$f_1:(-\infty,0] \to \mathbb R$</span> and <span class="math-container">$f_2:[0,\infty) \to \mathbb R$</span> and they will stay continuous.</p> <p>Now define <span class="math-container">$f(x)=|x|$</span>; it is continuous by the gluing lemma applied to the functions <span class="math-container">$f_1,f_2$</span>.</p> <p>But it is not differentiable.</p> <p>Edit: As Mark Saving correctly points out, we need to prove that <span class="math-container">$(-\infty,0]$</span> and <span class="math-container">$[0,\infty)$</span> are closed.</p> <p>To prove that, we first prove that <span class="math-container">$T$</span> is <span class="math-container">$T_1$</span>. Let <span class="math-container">$a \neq b \in \mathbb R$</span>. Choose an open set <span class="math-container">$B \in T$</span> such that <span class="math-container">$B \neq \mathbb R, \emptyset$</span> - it exists, as otherwise <span class="math-container">$T$</span> is the trivial topology. Define <span class="math-container">$A = \mathbb R \setminus B$</span>. Now the function <span class="math-container">$f$</span> defined by <span class="math-container">$a$</span> on <span class="math-container">$A$</span> and <span class="math-container">$b$</span> on <span class="math-container">$B$</span> can't be continuous, as it is not differentiable (its image consists of two points, so it is not even continuous in the standard topology on <span class="math-container">$\mathbb R$</span>).
Now the preimages under <span class="math-container">$f$</span> are <span class="math-container">$\emptyset, A, B, \mathbb R$</span>; thus <span class="math-container">$A$</span> can't be open (as otherwise every preimage would be open and thus <span class="math-container">$f$</span> would be continuous), and as <span class="math-container">$f$</span> is not continuous and <span class="math-container">$A$</span> is the only preimage which is not open, there is an open set <span class="math-container">$U \in T$</span> such that <span class="math-container">$f^{-1}(U)=A$</span>, and thus <span class="math-container">$U$</span> is an open set containing <span class="math-container">$a$</span> and not <span class="math-container">$b$</span>. And thus <span class="math-container">$T$</span> is <span class="math-container">$T_1$</span>.</p> <p>Thus every singleton is closed in <span class="math-container">$T$</span>.</p> <p>Now we use the following theorem: if <span class="math-container">$A$</span> is a closed set in <span class="math-container">$\mathbb R$</span> (with the standard topology), then there exists a differentiable function <span class="math-container">$f:\mathbb R \to \mathbb R$</span> such that <span class="math-container">$f^{-1}(\{0\})=A$</span>. Now, as <span class="math-container">$f$</span> is differentiable, it is continuous in <span class="math-container">$T$</span>, and because <span class="math-container">$\{0\}$</span> is closed we have that <span class="math-container">$A= f^{-1}(\{0\})$</span> is closed in <span class="math-container">$T$</span>; thus every closed set in the standard topology is closed in <span class="math-container">$T$</span>.</p>
<p>To build on @RT1's answer, we may use the pasting lemma provided we first show that <span class="math-container">$(-\infty,0]$</span> and <span class="math-container">$[0,\infty)$</span> are closed sets in <span class="math-container">$T$</span>.</p> <p>Suppose such a <span class="math-container">$T$</span> exists. Clearly, <span class="math-container">$T$</span> is not the trivial topology. Let <span class="math-container">$U\in T$</span> be a nonempty set such that <span class="math-container">$U\neq \mathbb R$</span>. Then there exists <span class="math-container">$y\in\mathbb R$</span> such that <span class="math-container">$y\not\in U$</span>. Let <span class="math-container">$f:\mathbb R\to\mathbb R$</span> be a differentiable function such that <span class="math-container">$f(x)=y$</span> for <span class="math-container">$x\geq 1$</span> and <span class="math-container">$x\leq 0$</span>, and <span class="math-container">$f(x)\in U$</span> for some <span class="math-container">$x\in (0,1)$</span>. Then <span class="math-container">$V:=f^{-1}(U)$</span> is a nonempty subset of <span class="math-container">$(0,1)$</span>. Since <span class="math-container">$f$</span> is <span class="math-container">$T$</span>-continuous, <span class="math-container">$V\in T$</span>.</p> <p>Now, let <span class="math-container">$(a,b)\subset\mathbb R$</span>. Define <span class="math-container">$g:\mathbb R\to\mathbb R$</span> by <span class="math-container">$$g(x)=\frac{a-x}{a-b}.$$</span> Then observe that <span class="math-container">$g^{-1}(V)\subset(a,b)$</span>.
Since <span class="math-container">$V\in T$</span> and since <span class="math-container">$g$</span> is <span class="math-container">$T$</span>-continuous, we conclude that <span class="math-container">$(a,b)$</span> contains a set in <span class="math-container">$T$</span>.</p> <p>Next, let <span class="math-container">$c\in\mathbb R$</span> and consider <span class="math-container">$h:\mathbb R\to\mathbb R$</span> defined by <span class="math-container">$h(x)=x-c$</span>. Then <span class="math-container">$h$</span> is <span class="math-container">$T$</span>-continuous, so <span class="math-container">$$h^{-1}(V)=V+c\in T.$$</span> Thus, translations of sets in <span class="math-container">$T$</span> are also in <span class="math-container">$T$</span>.</p> <p>Now let <span class="math-container">$W\subset\mathbb R$</span> be open in the usual topology. Then for each <span class="math-container">$x\in W$</span>, there exists <span class="math-container">$\varepsilon&gt;0$</span> such that <span class="math-container">$(x-\varepsilon,x+\varepsilon)\subset W$</span>. Since any interval contains a nonempty set in <span class="math-container">$T$</span>, we conclude that there exists a nonempty <span class="math-container">$\tilde U_x\in T$</span> such that <span class="math-container">$\tilde U_x\subset(x-\varepsilon/2,x+\varepsilon/2)$</span>. The set <span class="math-container">$\tilde U_x$</span> may not contain <span class="math-container">$x$</span>, but it does contain some point <span class="math-container">$\tilde x\in(x-\varepsilon/2,x+\varepsilon/2)$</span>.
Therefore, define <span class="math-container">$U_x:=\tilde U_x+(x-\tilde x)$</span>, which has the following properties:</p> <ul> <li><span class="math-container">$x\in U_x$</span>,</li> <li><span class="math-container">$U_x\in T$</span> since it is a translation of <span class="math-container">$\tilde U_x\in T$</span>, and</li> <li><span class="math-container">$U_x\subset (x-\varepsilon,x+\varepsilon)$</span> since <span class="math-container">$\tilde U_x\subset (x-\varepsilon/2,x+\varepsilon/2)$</span> and <span class="math-container">$|x-\tilde x|&lt;\varepsilon/2$</span>.</li> </ul> <p>Thus, <span class="math-container">$U_x\subset W$</span>, and <span class="math-container">$$W=\bigcup_{x\in W}U_x\in T.$$</span></p> <p>We conclude that <span class="math-container">$(0,\infty)$</span> and <span class="math-container">$(-\infty,0)$</span> are in <span class="math-container">$T$</span>, so <span class="math-container">$(-\infty,0]$</span> and <span class="math-container">$[0,\infty)$</span> are closed, so we may use the pasting lemma argument.</p>
matrices
<p>I'm trying to prove the following: Let $A$ be a $k\times k$ matrix, let $D$ have size $n\times n$, and $C$ have size $n\times k$. Then,</p> <p>$$\det\left(\begin{array}{cc} A&amp;0\\ C&amp;D \end{array}\right) = \det(A)\det(D).$$</p> <p>Can I just say that $AD - 0C = AD$, and I'm done?</p>
<p>If <span class="math-container">$A$</span> is singular, its rows are linearly dependent, hence the rows of the entire matrix are linearly dependent, hence both sides of the equation vanish.</p> <p>If <span class="math-container">$A$</span> is not singular, we have</p> <p><span class="math-container">$$\pmatrix{I&amp;0\\-CA^{-1}&amp;I}\pmatrix{A&amp;0\\C&amp;D}=\pmatrix{A&amp;0\\0&amp;D}\;.$$</span></p> <p>The determinants of the two new matrices are perhaps easier to derive from the <a href="http://en.wikipedia.org/wiki/Laplace_expansion" rel="nofollow noreferrer">Laplace expansion</a> than that of the entire matrix. They are <span class="math-container">$1$</span> and <span class="math-container">$\det A \det D$</span>, respectively, and the result follows.</p> <p>Another way to express this is that if <span class="math-container">$A$</span> is not singular you can get rid of <span class="math-container">$C$</span> by Gaussian elimination.</p>
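The identity is easy to sanity-check numerically. The snippet below is my own illustration; it uses exact integer arithmetic with a naive cofactor-expansion determinant rather than any particular library routine.

```python
import random

def det(M):
    """Determinant by cofactor expansion along the first row (exact for
    integer matrices; fine for the small sizes used here)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

rng = random.Random(42)
k, n = 3, 2
A = [[rng.randint(-3, 3) for _ in range(k)] for _ in range(k)]
D = [[rng.randint(-3, 3) for _ in range(n)] for _ in range(n)]
C = [[rng.randint(-3, 3) for _ in range(k)] for _ in range(n)]

# Assemble the (k+n) x (k+n) block matrix [[A, 0], [C, D]]
B = [Arow + [0] * n for Arow in A] + [Crow + Drow for Crow, Drow in zip(C, D)]

assert det(B) == det(A) * det(D)
```

Because the top-right block is zero, the value of `C` never matters; rerunning with different random `C` leaves `det(B)` unchanged.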
<p>As @user153012 is asking for a proof in full detail, here is a brute-force approach using an explicit expression of a determinant of an $n$ by $n$ matrix, say $A = (a[i,j])$, $$\det A = \sum_{\sigma\in S_n}\operatorname{sgn}\sigma \prod_i a[{i,\sigma(i)}],$$ where $S_n$ is the symmetric group on $[n] = \{1,\dots, n\}$ and $\operatorname{sgn}\sigma$ denotes the signature of $\sigma$.</p> <p>In matrix $$B = \left(\begin{array}{cc} A&amp;0\\ C&amp;D \end{array}\right),$$ we have $$b[i,j] = \begin{cases}a[i,j] &amp; \text{if }i \le k, j \le k;\\ d[i-k, j-k] &amp; \text{if }i &gt; k, j &gt; k; \\ 0 &amp; \text{if }i \le k, j &gt; k; \\ c[i-k,j] &amp; \text{otherwise}.\end{cases}$$ Observe in $$\det B = \sum_{\sigma\in S_{n+k}}\operatorname{sgn}\sigma\prod_i b[i, \sigma(i)],$$ if $\sigma(i) = j$ such that $i\le k$ and $j &gt; k$, then the corresponding summand $\prod_i b[i,\sigma(i)]$ is $0$. Any permutation $\sigma\in S_{n+k}$ for which no such $i$ and $j$ exist can be uniquely "decomposed" into two permutations, $\pi$ and $\tau$, where $\pi\in S_k$ and $\tau\in S_n$ such that $\sigma(i) = \pi(i)$ for $i \le k$ and $\sigma(k+i) = k+\tau(i)$ for $i \le n$. Moreover, we have $\operatorname{sgn}\sigma = \operatorname{sgn}\pi\operatorname{sgn}\tau$. Denote the collection of such permutations by $S_n'$. 
Therefore, we can write $$\begin{eqnarray}\det B &amp;=&amp; \sum_{\sigma\in S_n'}\operatorname{sgn}\sigma\prod_{i=1}^k b[i,\sigma(i)]\prod_{i=k+1}^{k+n} b[i,\sigma(i)] \\ &amp;=&amp; \sum_{\sigma\in S_n'}\operatorname{sgn}\sigma\prod_{i=1}^k a[i,\sigma(i)]\prod_{i=1}^nd[i,\sigma(i+k)-k] \\ &amp; = &amp; \sum_{\pi\in S_k,\tau\in S_n}\operatorname{sgn}\pi\operatorname{sgn}\tau\prod_{i=1}^k a[i,\pi(i)]\prod_{i=1}^nd[i,\tau(i)] \\ &amp;=&amp; \sum_{\pi\in S_k}\operatorname{sgn}\pi\prod_{i=1}^k a[i,\pi(i)]\sum_{\tau\in S_{n}}\operatorname{sgn}\tau\prod_{i=1}^nd[i,\tau(i)] \\ &amp; = &amp; \det A\det D.\end{eqnarray}$$ QED.</p> <p><strong>Update</strong>: As @Marc van Leeuwen mentioned in the comment, a similar formula holds for permanents. The proof is basically the same as the proof for the determinant, except one has to drop all the signatures of permutations.</p>
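The structure of this proof can be made concrete for small matrices. The sketch below (my own illustration) computes $\det B$ directly from the permutation sum, checks that every permutation sending some $i \le k$ to $j > k$ contributes a zero summand, and confirms the resulting factorization.

```python
from itertools import permutations

def sign(p):
    """Signature of a permutation given as a tuple of 0-based images."""
    s, seen = 1, set()
    for i in range(len(p)):
        if i in seen:
            continue
        # walk the cycle containing i; a cycle of length L contributes (-1)^(L-1)
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        s *= (-1) ** (length - 1)
    return s

def prod_entries(M, p):
    out = 1
    for i, j in enumerate(p):
        out *= M[i][j]
    return out

def leibniz_det(M):
    # det M = sum over permutations of sgn(p) * prod_i M[i, p(i)]
    return sum(sign(p) * prod_entries(M, p) for p in permutations(range(len(M))))

k, n = 2, 2
A = [[1, 2], [3, 4]]
D = [[5, 6], [7, 8]]
B = [[1, 2, 0, 0],
     [3, 4, 0, 0],
     [9, 8, 5, 6],
     [7, 6, 7, 8]]

# Block-mixing permutations (0-based: some i < k with p[i] >= k) hit the
# zero block, so their summands vanish
for p in permutations(range(k + n)):
    if any(i < k <= p[i] for i in range(k + n)):
        assert prod_entries(B, p) == 0

assert leibniz_det(B) == leibniz_det(A) * leibniz_det(D)
```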
game-theory
<h2>The problem:</h2> <p>Three scorpions are chasing a single ant on the edge graph of a cube. The scorpions have the same speed (<span class="math-container">$v$</span>), while the ant is <span class="math-container">$3$</span> times as fast (<span class="math-container">$3v$</span>). They can move in any direction and instantly turn around. Additionally, the scorpions are <em>dumb</em>, meaning that they always go after the ant on the shortest possible path.</p> <p><a href="https://i.sstatic.net/Gegvd.png" rel="noreferrer"><img src="https://i.sstatic.net/Gegvd.png" alt="Three scorpions chasing an ant on a cube"></a></p> <p>The scorpions and the ant can be thought of as points; they have no volume.</p> <p><a href="https://i.sstatic.net/MVjM1.png" rel="noreferrer"><img src="https://i.sstatic.net/MVjM1.png" alt="They can be thought of as points"></a></p> <p>And the edges of the cube can be thought of as a planar graph, with all side lengths being <span class="math-container">$1$</span>.</p> <p><a href="https://i.sstatic.net/rCwGW.png" rel="noreferrer"><img src="https://i.sstatic.net/rCwGW.png" alt="The cube can be thought of as a graph with side lengths 1"></a></p> <p><strong>Question:</strong> Is there a strategy the ant can use to forever avoid the scorpions? In what positions can the ant accomplish that?</p> <h2>My attempt:</h2> <p>There is definitely a position where the scorpions win, and that's when they can corner the ant:</p> <p><a href="https://i.sstatic.net/7QFVc.png" rel="noreferrer"><img src="https://i.sstatic.net/7QFVc.png" alt="Ant cornered"></a></p> <p>Also, if the scorpions are <span class="math-container">$1+ \frac{1}{3}$</span> edges away from the corner where the ant sits, they can still get into the position drawn above to catch the ant.</p> <p>Otherwise it gets really complicated and messy. It seems though that if the ant just goes as far away as possible from the scorpions at every moment, they won't be able to catch it.
Obviously this would need a mathematical proof though.</p> <p>I tried to write up the distance function between the scorpions and the ant, but since it's over even if just <span class="math-container">$1$</span> scorpion touches the ant, we have to take the <span class="math-container">$\min$</span> over them.</p> <p>First of all, this is the area they can move in:</p> <p><span class="math-container">$$C = \{(x_1,x_2,x_3) \in \Bbb{R}^3 \mid \\ \exists \text{ a permutation } (i_1,i_2,i_3) \text{ of } (1,2,3): x_{i_1} \in \{0,1\},\ x_{i_2} \in \{0,1\},\ x_{i_3} \in [0,1]\}$$</span></p> <p>And this is the distance from the ant to the closest scorpion, which we always want to maximize as the ant:</p> <p><span class="math-container">$$d_{\text{closest scorpion}}(x \in C) = \\ = \min (d_{\text{scorpion 1}}(x,s_1),d_{\text{scorpion 2}}(x,s_2),d_{\text{scorpion 3}}(x,s_3))$$</span></p> <p>And</p> <p><span class="math-container">$$d_{\text{scorpion i}}(x,s_i) = \text{Metropolis distance}(x,s_i)$$</span></p> <p>where</p> <p><span class="math-container">$$\text{Metropolis distance}(x,y) = |x_1-y_1|+|x_2-y_2|+|x_3-y_3|$$</span></p> <p>Maximizing <span class="math-container">$d_{\text{closest scorpion}}$</span> is tough, because neither <span class="math-container">$\min$</span> nor the <span class="math-container">$\text{Metropolis distance}$</span> is differentiable. But if that's done, we could look at the gradient vector to help the ant decide where to go.</p> <p>I'm kinda stuck at this point, so any help would be appreciated!</p>
<p>The distance you proposed, called the Manhattan distance or <span class="math-container">$L_{1}$</span> distance:</p> <p><span class="math-container">$$ d_{L_{1}}(x, y) = \sum_{i=1}^{3} |x_{i} - y_{i}| $$</span></p> <p>is not the distance the ant needs to walk from point <span class="math-container">$x$</span> to point <span class="math-container">$y$</span> on the cube. The distance the ant needs to walk, <span class="math-container">$d_{G}(x,y)$</span>, is the length of the shortest path between <span class="math-container">$x$</span> and <span class="math-container">$y$</span>. For example, if <span class="math-container">$x=(0.1, 0, 0)$</span> and <span class="math-container">$y= (0.2, 0, 1)$</span>: <span class="math-container">$$ d_{G}(x,y) = 1.3 \neq 1.1 = d_{L_{1}}(x,y). $$</span></p> <p>The ant starts on some edge <span class="math-container">$v_{1}v_{2}$</span> of the cube graph <span class="math-container">$G$</span>, let's say <span class="math-container">$\delta \geq 0$</span> away from the vertex <span class="math-container">$v_{1}$</span> and <span class="math-container">$1 - \delta$</span> from the vertex <span class="math-container">$v_{2}$</span>. The ant can wait for the scorpions, or it can move to one of the four edges nearby. You can use the line graph <span class="math-container">$L(G)$</span> of the graph <span class="math-container">$G$</span>, drawn in red in the figure below, to derive a state space:</p> <p><a href="https://i.sstatic.net/EqcqV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EqcqV.png" alt="Line graph of G"></a></p> <p>Each red vertex of <span class="math-container">$L(G)$</span> is an edge in <span class="math-container">$G$</span> and has 4 neighbors.
If <span class="math-container">$d_{G}(S, v_{1}) \leq \frac{\delta}{3}$</span> for some scorpion <span class="math-container">$S$</span>, then the ant cannot cross <span class="math-container">$v_{1}$</span>; these two edges (incident to <span class="math-container">$v_{1}$</span>) are not accessible to the ant, and the corresponding two vertices are not present in the state space. Similarly for vertex <span class="math-container">$v_{2}$</span> if <span class="math-container">$d_{G}(S, v_{2}) \leq \frac{1 - \delta}{3}$</span>.</p> <p>To show your intuition:</p> <blockquote> <p>"Otherwise it gets really complicated and messy. It seems though that if the ant just goes as far away as possible from the scorpions at every moment, they won't be able to catch it. Obviously this would need a mathematical proof though."</p> </blockquote> <p>is true, you could try to do a case analysis of how the state space depends on the initial positions. Positions that make the state space a tree mean that the scorpions win. Otherwise the state space has cycles and the ant can just walk endlessly around a cycle. (The line graph of a cycle is a cycle, so a cycle in the state space corresponds to a cycle on the cube.)</p>
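The claim that every red vertex of $L(G)$ has four neighbours can be verified mechanically. Below is a small self-contained sketch (my own construction, standard library only) that builds the cube's edge set and the line-graph adjacency.

```python
from itertools import combinations, product

# Vertices of the cube are bit triples; edges join triples differing in one bit
verts = list(product((0, 1), repeat=3))
edges = [frozenset((u, v)) for u, v in combinations(verts, 2)
         if sum(a != b for a, b in zip(u, v)) == 1]

# Line graph L(G): one node per edge of G; two nodes are adjacent
# exactly when the corresponding edges share a vertex
line_adj = {e: [f for f in edges if f != e and e & f] for e in edges}

assert len(edges) == 12                                    # the cube has 12 edges
assert all(len(nbrs) == 4 for nbrs in line_adj.values())   # L(G) is 4-regular
```

Each endpoint of a cube edge has degree 3, so an edge meets $2+2=4$ other edges, which is exactly what the final assertion confirms.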
<p>Let's consider the starting positions to be at the corners.</p> <p>As you already found, if the scorpions start at the positions cornering the ant, then the ant has no hope.</p> <p><a href="https://i.sstatic.net/62xmZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/62xmZ.png" alt="Ant_Scorpion_1"></a></p> <p>On the contrary, we can always identify a face of the cube with at most <span class="math-container">$2$</span> scorpions and an opposite face with at most <span class="math-container">$1$</span>, with the ant being at most one non-blocked edge away from the face with fewer scorpions.<br> At <span class="math-container">$t=1/3$</span>, we can still associate the scorpions' positions to the nearest corners, the same as in the original configuration, and the ant is quick enough to position herself on the less crowded face, as far as possible from the scorpion there.</p> <p>If the ant plays that strategy wisely, she can always survive.</p>
logic
<p>What is the basis for a proof? I am trying to understand what a proof is. What is the simplest example of a proof? For example, can you have a 'proof' of $1+1=2$?</p>
<p>What is a proof? Well, commonly in mathematics this either means "a rigorously convincing argument" or a "formal proof". I will talk about the latter here, and I will specifically take the axiomatic approach to what a proof is.</p> <p>Well, first we need to understand what an "<strong>inference rule</strong>" is. An inference rule is a rule which allows us to deduce a formal assertion from previous assertions.</p> <p>For example, "If $\alpha\land\beta$ [read: $\alpha$ and $\beta$] is true, then $\alpha$ is true". This means that if we assume something which has a certain <em>form</em> then it is permitted to derive a certain statement. For example:</p> <blockquote> <p>You are wearing a black coat. Therefore you are wearing a coat.</p> </blockquote> <p>We assumed that you are wearing a coat <strong>and</strong> that the coat is black, and we deduced that you are wearing a coat. It seems trivial, but in fact inference rules come to formalize very basic reasoning, or to quote Mr. Spock, "<em>It's only logical</em>".</p> <p>Now what is a proof? </p> <p>Well... before we talk about that we need a <strong>framework</strong>. We work with mathematical logic. In this context we have symbols to <em>write</em> our proof (e.g., $\land$ is the symbol for "and"). We have inference rules, and we have axioms of logic which are a bit like inference rules, but slightly different. In less formal environments there are usually a lot of inference rules and fewer axioms, while in the course I took in my undergrad there were many axioms but only one inference rule.</p> <p>In mathematical logic we also have rules to tell us what sort of strings are valid formulas, sentences or terms in the language. Much like "Chair me dog" is not a valid English sentence. We only care about syntax, about the form, not about the content. Nonsense like "I ate the invisible chair for dinner last night." is syntactically correct, but has no meaning -- we don't care.
So legal strings of characters are called formulas or <strong>sentences</strong>. Formulas and sentences describe some relations, or properties of objects. <strong>Terms</strong> are also valid strings of letters in the language, but those describe objects rather than properties.</p> <p>So now can we get to the point? What is a mathematical proof??</p> <p>Well, one last thing before we get to that. <strong>Axioms</strong> -- we prove things from axioms. What are axioms? Axioms are simply sentences which we <em>assume</em> to be true. We may not even have axioms, in which case we prove something from the axioms of logic and the inference rules, and then whatever we proved is always true in this framework.</p> <p><strong>Now</strong> we can get to the point!</p> <p>What is a proof? Well, we prove $\varphi$ from a collection of axioms $T$. This means that there is a <em>finite</em> sequence of sentences $\varphi_1,\ldots,\varphi_k$ such that:</p> <ol> <li>$\varphi_k=\varphi$, we finish with what we wanted to prove.</li> <li>For every $i\le k$, $\varphi_i$ is either an axiom (a sentence from $T$) or it was derived from previous statements using axioms of logic and inference rules.</li> </ol> <hr> <p>Now to give a proper example I would have to be specific and tell you what the symbols are, what the inference rules and the logical axioms are, the rules for creating sentences and formulas, what the axioms are, and what we want to prove from them...</p> <p>Well, this is a bit too much for a single post at this hour. However, you should definitely go to your nearest library and open up a book about mathematical logic, preferably an introductory book.</p> <p>This raises a question: if <strong>that</strong> is what you need in order to write a proof, why is this site not full of such posts? Well, in mathematics there is a <em>big</em> difference between a formal proof and a rigorous proof. Mathematicians [usually] care about rigor, rather than absolute formality.
They also interchange these two terms more often than they should.</p> <p>However, proving something rigorously just means explaining what the proof should look like, in words or symbols. The framework is often agreed upon "silently" in the background, and it takes practice, but one can usually make the needed switches without much notice.</p> <hr> <p>Finally, when you say "can you have a proof that $1+1=2$?" the answer is yes, but in what context? Is it the context of the natural numbers with their common axioms (the properties we assume natural numbers have), or is it in the context of the real numbers with the axioms of those?</p> <p>The two systems are very different, and formal proofs - even of the same claim - in different frameworks can be very different.</p> <p>However, in this case they are not <strong>too</strong> different. We use the symbol $2$ to abbreviate the formal term $1+1$. So either proof would be relatively short.</p>
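The definition of a formal proof given above -- a finite sequence in which each sentence is an axiom or follows from earlier ones by an inference rule -- is concrete enough to mechanize. The toy checker below is entirely my own illustration; it uses modus ponens as the single inference rule and an invented tuple encoding `('->', p, q)` for "p implies q".

```python
def is_proof(sequence, axioms):
    """Check that every formula is an axiom or follows from two earlier
    formulas by modus ponens: from p and ('->', p, q), conclude q."""
    so_far = []
    for phi in sequence:
        ok = phi in axioms or any(
            imp == ('->', p, phi)
            for p in so_far for imp in so_far
            if isinstance(imp, tuple) and imp[0] == '->'
        )
        if not ok:
            return False
        so_far.append(phi)
    return True

# Axioms: A, A -> B, B -> C.  The sequence below is a proof of C.
axioms = {'A', ('->', 'A', 'B'), ('->', 'B', 'C')}
proof = ['A', ('->', 'A', 'B'), 'B', ('->', 'B', 'C'), 'C']
assert is_proof(proof, axioms)
assert not is_proof(['C'], axioms)  # C alone is justified by nothing earlier
```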
<p>Belgi and Asaf have answered the question assuming "proof" means "formal proof". As André indicated in the comments, there is an informal notion of proof: A proof is a convincing argument that some fact is true.</p> <p>Officially, all that counts is the formal proof. On the other hand, almost all proofs one encounters are informal (though, at least in theory, they can be translated into formal proofs.)</p> <p>Here's an example of an informal proof. I'll begin with some definitions</p> <p>Definition: By an "integer" we mean any number of the form $0, \pm 1, \pm2 , \pm 3,...$. (So, something like $\frac{2}{3}$ or $\pi$ is not an integer.)</p> <p>Definition: If $n$ is an integer, we say "n is even" if there is another integer $k$ with the property that $2k = n$.</p> <p><strong>Theorem</strong>: The number $n=6$ is even.</p> <p><strong>Proof</strong>: In order to prove this, we need to find an integer $k$ with the property that $2k = 6$. Well, $k=3$ works, so $n=6$ is even. $\square$</p> <p>(The symbol $\square$ is often used to denote the end of a proof.)</p> <p>Ok, that seems like a waste of time. But what have we gained? No longer are we depending on someone's word that 6 is even. We now know it is true. Ok, it still seems like a waste of time.</p> <p>Here's another one.</p> <p><strong>Theorem</strong>: If $n$ is even and $m$ is even, then so is $m+n$.</p> <p><strong>Proof</strong>: We need to show that we can write $m+n = 2k$ for some integer $k$. By assumption, since $m$ is even $m = 2a$ for some integer $a$. (I'm not using $k$ since I want to use that letter for another purpose later). Since $n$ is even, $n = 2b$ for some integer b.</p> <p>Then, $m+n = 2a + 2b = 2(a+b)$. So, if I let $k = a+b$ (which is still an integer), then this is the $k$ I'm searching for. This shows that $m+n$ is even. $\square$</p> <p>Now what have I gained? 
Before this proof, you could have experimented with many even numbers, adding them together and observing that you always got an even number out. (For example, $2+6 = 8$, $14+ 290 = 304$, etc.)</p> <p>But even if you were to spend your whole life checking that adding even numbers together gives evens, there is no guarantee that there <strong>aren't</strong> even numbers that add up to a non-even number. Perhaps it happens very rarely that 2 evens add up to a non-even, perhaps you just needed to check one more case to find that elusive example.</p> <p>The proof shows that no matter which 2 even numbers you pick, their sum is even. It guarantees that your examples aren't flukes, that they are happening for a reason and that they will happen every time.</p> <p>Now, let me point out a potential pitfall of this proof. Right where I say "...$k = a+b$ (which is still an integer)..." - <em>why</em> is it still an integer? The answer to this question is going to depend on what axioms you are working with and this is going to get back to what Asaf and Belgi were saying.</p>
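The definitions above translate directly into code, and both theorems can be spot-checked (my sketch; of course the tests cover finitely many cases, while the proof covers them all):

```python
# The definition above, in code: "n is even" means 2*k == n for some
# integer k (for integers, that is just divisibility by 2).
def is_even(n):
    return n % 2 == 0            # equivalent to exhibiting k = n // 2

# Theorem 1: n = 6 is even, witnessed by k = 3.
assert 2 * 3 == 6 and is_even(6)

# Theorem 2, spot-checked: the sum of two even numbers is even.
for m in range(-50, 51, 2):      # all even m in [-50, 50]
    for n in range(-50, 51, 2):
        assert is_even(m + n)
print("all spot checks passed")
```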
geometry
<p>Today, I encountered a rather interesting problem in a waiting room:</p> <p>$\qquad \qquad \qquad \qquad$<img src="https://i.sstatic.net/ktmV7.png" alt="enter image description here"></p> <p>Notice how the light is being cast on the wall? There is a curve that defines the boundary between light and shadow. In my response below, I will prove, algebraically, that this curve is a hyperbola. I'm also interested in seeing a variety of other solutions, so please feel free to post your own.</p>
<p>To begin, I noted two things. Given how light works, the curve must be a radial projection (from the light source) of the bottom boundary of the shade (a circle) onto the wall. Second, only the half of the circle closest to the wall would actually contribute to the shadow on the wall, and thus the curve. And so begins the math:</p> <p>In this particular waiting room, the shade was actually touching the wall. That is, there was a single point on the bottom boundary of the shade touching the wall, which was the point at the 'apex' of this curve where the shadow began. Therefore, I imagined the scenario as follows:</p> <p>The bottom boundary of the shade can be represented as the unit circle $\alpha = \langle \cos(\theta) + 1, \sin(\theta), 0 \rangle$. The wall can be thought of as the $yz$-plane, and perhaps the light source was centered at $P = (1, 0, 1)$. Note that the half of the circle contributing to the shadow is $\frac{\pi}{2} \leq \theta \leq \frac{3\pi}{2}$. </p> <p>Next, recall that the curve is a radial projection of the circle, so we first wish to find a parametrization of the line from $P$ to an arbitrary point on the circle $\alpha$. Such a parametrization is as follows: $l(t) = \langle 1 + \cos(\theta)t, \sin(\theta)t, 1-t \rangle$. Note that at $t = 0$, we are at the point $P$, and at $t = 1$, we are at the point $\langle 1 + \cos(\theta), \sin(\theta), 0 \rangle$ on the circle. </p> <p>We want to know where this family of lines intersects the $yz$-plane. Well, the $x$-coordinate at the intersection points is $0$, so we get the following equations:</p> <p>$$x = 1 + \cos(\theta)t = 0$$ $$y = \sin(\theta)t$$ $$z = 1 - t$$</p> <p>Hence, $t = -\sec(\theta)$. Plugging this in, we get the following system of equations describing the curve in the $yz$-plane:</p> <p>$$y = -\tan(\theta)$$ $$z = 1 + \sec(\theta)$$</p> <p>For simplicity, we can just pretend we're back in the $xy$-plane, and $x = -\tan(\theta)$ and $y = 1 + \sec(\theta)$. 
From here, we can get out of a parametrized equation and into rectangular form by noting that $y^2 - x^2 - 2y = 0$. </p> <p><a href="http://en.wikipedia.org/wiki/Hyperbola#Quadratic_equation">Recall</a> that a general equation $A_{xx}x^2 + 2A_{xy}xy + A_{yy}y^2 + 2B_xx + 2B_yy + C = 0$ for constants $A_{\circ \circ}$ and $B_{\circ}$ describes a hyperbola provided that $ \det \left[ \begin{array}{ c c } A_{xx} &amp; A_{xy} \\ A_{xy} &amp; A_{yy} \end{array} \right] &lt; 0$. Well, in our case, $A_{xx} = -1$, $A_{xy} = 0$, and $A_{yy} = 1$. </p> <p><strong>Therefore, we conclude that the curve is a hyperbola!</strong></p>
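The computation can be sanity-checked numerically (a sketch I added, not part of the proof: it samples the parametrization over the relevant half of the circle and confirms both the rectangular equation and the sign of the determinant):

```python
import math

# Check that the parametrized curve x = -tan(t), y = 1 + sec(t), for t in
# (pi/2, 3*pi/2), satisfies y^2 - x^2 - 2y = 0, and that the quadratic-form
# determinant test classifies it as a hyperbola.
for k in range(1, 20):
    t = math.pi / 2 + 0.1 * k          # stay inside (pi/2, 3*pi/2)
    if abs(math.cos(t)) < 1e-9:        # skip points where sec(t) blows up
        continue
    x = -math.tan(t)
    y = 1 + 1 / math.cos(t)
    assert abs(y * y - x * x - 2 * y) < 1e-6

# A_xx = -1, A_xy = 0, A_yy = 1, so the determinant is negative:
det = (-1) * 1 - 0 * 0
assert det < 0
print("curve satisfies y^2 - x^2 - 2y = 0; det =", det)
```

Algebraically the check is just $(1+\sec)^2 - \tan^2 - 2(1+\sec) = \sec^2 - \tan^2 - 1 = 0$.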
<p>Oh... this question was my homework a year ago, and this is my proof, which is short as per my laziness.</p> <p><img src="https://i.sstatic.net/9HKoe.png" alt="enter image description here"></p> <p>The bad grey drawing is the locus required. Obviously, for this part the light rays just touch the plate. Now $$\dfrac{PA}{PD}=\dfrac{CA}{CB}=\text{constant}&gt;1$$</p> <p>Hence, the locus is a hyperbola, since the ratio of each point's distance from the top of the candle to its distance from the line of the candle (extended) is a constant greater than $1$: the focus-directrix characterization with eccentricity $e&gt;1$.</p> <p>This case is similar to the lamp, except that there is darkness below and light above the boundary, contrary to a lamp. But this has no effect on the boundary.</p>
linear-algebra
<p>Why do we care about eigenvalues of graphs?</p> <p>Of course, any novel question in mathematics is interesting, but there is an entire discipline of mathematics devoted to studying these eigenvalues, so they must be important.</p> <p>I always assumed that spectral graph theory extends graph theory by providing tools to prove things we couldn't otherwise, somewhat like how representation theory extends finite group theory. But most results I see in spectral graph theory seem to concern eigenvalues not as means to an end, but as objects of interest in their own right.</p> <p>I also considered practical value as motivation, e.g. using a given set of eigenvalues to put bounds on essential properties of graphs, such as maximum vertex degree. But I can't imagine a situation in which I would have access to a graph's eigenvalues before I would know much more elementary information like maximum vertex degree.</p> <p>(<em>EDIT:</em> for example, dtldarek points out that $\lambda_2$ is related to diameter, but then why would we need $\lambda_2$ when we already have diameter? Is this somehow conceptually beneficial?)</p> <blockquote> <p>So, what is the meaning of graph spectra intuitively? And for what practical purposes are they used? Why is finding the eigenvalues of a graph's adjacency/Laplacian matrices more than just a novel problem?</p> </blockquote>
<p>This question already has a number of nice answers; I want to emphasize the breadth of this topic.</p> <p>Graphs can be represented by matrices - adjacency matrices and various flavours of Laplacian matrices. This almost immediately raises the question as to what are the connections between the spectra of these matrices and the properties of the graphs. Let's call the study of these connections "the theory of graph spectra". (But I am not entirely happy with this definition, see below.) It is tempting to view the map from graphs to eigenvalues as a kind of Fourier theory, but there are difficulties with this analogy. First, graphs in general are not determined by their eigenvalues. Second, which of the many adjacency matrices should we use?</p> <p>The earliest work on graph spectra was carried out in the context of the Hueckel molecular orbital theory in Quantum Chemistry. This led, among other things, to work on the matching polynomial; this gives us eigenvalues without adjacency matrices (which is why I feel the above definition of the topic is unsatisfactory). A more recent manifestation of this stream of ideas is the work on the spectra of fullerenes.</p> <p>The second source of the topic arises in Seidel's work on regular two-graphs, which started with questions about regular simplices in real projective space and led to extraordinarily interesting questions about sets of equiangular lines in real space. The complex analogs of these questions are now of interest to quantum physicists - see SIC-POVMs. (It is not clear what role graph theory can play here.) In parallel with Seidel's work was the fundamental paper by Hoffman and Singleton on Moore graphs of diameter two. In both cases, the key observation was that certain extremal classes of graphs could be characterized very naturally by conditions on their spectra. This work gained momentum because a number of sporadic simple groups were first constructed as automorphism groups of graphs. 
For graph theorists it flowered into the theory of distance-regular graphs, starting with the work of Biggs and his students, and still very active. </p> <p>One feature of the paper of Hoffman and Singleton is that its conclusion makes no reference to spectra. So it offers an important graph theoretical result for which the "book proof" uses eigenvalues. Many of the results on distance-regular graphs preserve this feature.</p> <p>Hoffman is also famous for his eigenvalue bounds on chromatic numbers, and related bounds on the maximum size of independent sets and cliques. This is closely related to Lovász's work on Shannon capacity. Both the Erdős-Ko-Rado theorem and many of its analogs can now be obtained using extensions of these techniques.</p> <p>Physicists have proposed algorithms for graph isomorphism based on the spectra of matrices associated to discrete and continuous walks. The connections between continuous quantum walks and graph spectra are very strong.</p>
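As a concrete starting point (my example, not from the answer above): the spectrum of a graph is just the eigenvalue list of its adjacency matrix, and for highly structured graphs it is known in closed form. For the cycle $C_n$ the eigenvalues are $2\cos(2\pi k/n)$:

```python
import numpy as np

# Adjacency spectrum of the 5-cycle C5; its eigenvalues are known to be
# 2*cos(2*pi*k/5) for k = 0, ..., 4.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i][(i + 1) % n] = A[(i + 1) % n][i] = 1   # edge i -- i+1 (mod n)

eigs = np.sort(np.linalg.eigvalsh(A))           # eigvalsh: symmetric matrices
expected = np.sort([2 * np.cos(2 * np.pi * k / n) for k in range(n)])
assert np.allclose(eigs, expected)

# Simple spectral fact, visible immediately: for a k-regular graph the
# largest adjacency eigenvalue is exactly k (here k = 2).
assert abs(eigs[-1] - 2) < 1e-9
print(eigs)
```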
<p>I can't speak much to what traditional Spectral Graph Theory is about, but my personal research has included the study of what I call "Spectral Realizations" of graphs. A spectral realization is a special geometric realization (vertices are not-necessarily-distinct points, edges are not-necessarily-non-degenerate line segments, in some $\mathbb{R}^n$) derived from the eigenvectors of a graph's adjacency matrix.</p> <blockquote> <p>In particular, if the <em>rows</em> of a matrix constitute a basis for some eigenspace of the adjacency matrix of a graph $G$, then the <em>columns</em> of that matrix are coordinate vectors of (a projection of) a spectral realization.</p> </blockquote> <p>A spectral realization of a graph has two nice properties:</p> <ul> <li>It's <em>harmonious</em>: Every graph automorphism induces a rigid isometry of the realization; you can <em>see</em> the graph's automorphic structure!</li> <li>It's <em>eigenic</em>: Moving each vertex to the vector-sum of its immediate neighbors is equivalent to <em>scaling</em> the figure; the scale factor is the corresponding eigenvalue.</li> </ul> <p>Well, the properties are nice <em>in theory</em>. Usually, a spectral realization is a jumble of collapsed segments, or is embedded in high-dimensional space; such circumstances make a realization difficult to "see". Nevertheless, a spectral realization can be a helpful first pass at visualizing a graph. 
Moreover, a graph with a high degree of symmetry can admit some visually-interesting low-dimensional spectral realizations; for example, the skeleton of the truncated octahedron has this modestly-elaborate collection:</p> <p><img src="https://i.sstatic.net/ffIyp.png" alt="Spectral Realizations of the Truncated Octahedron"></p> <p>For a gallery of hundreds of these things, see the PDF linked at my Bloog post, <a href="http://daylateanddollarshort.com/bloog/spectral-realizations-of-graphs/" rel="noreferrer">"Spectral Realizations of Graphs"</a>.</p> <p>Since many mathematical objects decompose into eigen-objects, it probably comes as no surprise that <em>any geometric realization of a graph is the sum of spectral realizations of that graph</em>. (Simply decomposing the realization's coordinate matrix into eigen-matrices gets most of the way to that result, although the eigen-matrices themselves usually represent "affine images" of properly-spectral realizations. The fact that affine images decompose into a sum of <em>similar</em> images takes <a href="http://daylateanddollarshort.com/bloog/extending-a-theorem-of-barlotti/" rel="noreferrer">an extension of a theorem of Barlotti</a>.) There's likely something interesting to be said about how each spectral component influences the properties of the combined figure.</p> <p>Anyway ... That's why <em>I</em> care about the eigenvalues of graphs.</p>
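The "eigenic" property is easy to verify in the simplest case (my own minimal example, not from the gallery above): realizing the cycle $C_n$ as a regular $n$-gon uses coordinates drawn from a pair of adjacency eigenvectors, and summing the two neighbours of each vertex scales the whole figure by $\lambda = 2\cos(2\pi/n)$.

```python
import math

# Eigenic check for the cycle C_n realized as a regular n-gon: vertex k
# sits at (cos(2*pi*k/n), sin(2*pi*k/n)); its neighbors are k-1 and k+1,
# and their vector sum is 2*cos(2*pi/n) times vertex k's position
# (the angle-addition identities cos(a-d)+cos(a+d) = 2 cos a cos d, etc.).
n = 8
lam = 2 * math.cos(2 * math.pi / n)    # the corresponding eigenvalue

def pos(k):
    a = 2 * math.pi * k / n
    return (math.cos(a), math.sin(a))

for k in range(n):
    (x1, y1), (x2, y2) = pos(k - 1), pos(k + 1)
    x, y = pos(k)
    assert abs((x1 + x2) - lam * x) < 1e-12
    assert abs((y1 + y2) - lam * y) < 1e-12
print("neighbor sums scale the n-gon by", lam)
```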
probability
<p>I've a confession to make. I've been using PDF's and PMF's without actually knowing what they are. My understanding is that density equals area under the curve, but if I look at it that way, then it doesn't make sense to refer to the &quot;mass&quot; of a random variable in discrete distributions. How can I interpret this? Why do we use &quot;mass&quot; and &quot;density&quot; to describe these functions rather than something else?</p> <p>P.S. Please feel free to change the question itself in a more understandable way if you feel this is a logically wrong question.</p>
<p>(This answer takes as its starting point the OP's question in the comments, "Let me understand mass before going to density. Why do we call a point in the discrete distribution as mass? Why can't we just call it a point?")</p> <p>We could certainly call it a point. The utility of the term "probability mass function," though, is that it tells us something about how the function in the discrete setting relates to the function in the continuous setting because of the associations we already have with "mass" and "density." And I think to understand why we use these terms in the first place we have to start with what we call the density function. (In fact, I'm not sure we would even be using "probability mass" without the corresponding "probability density" function.)</p> <p>Let's say we have some function $f(x)$ that we haven't named yet but we know that $\int_a^b f(x) dx$ yields the probability that we see an outcome between $a$ and $b$. What should we call $f(x)$? Well, what are its properties? Let's start with its units. We know that, in general, the units on a definite integral $\int_a^b f(x) dx$ are the units of $f(x)$ times the units of $dx$. In our setting, the integral gives a probability, and $dx$ has units in say, length. So the units of $f(x)$ must be probability per unit length. This means that $f(x)$ must be telling us something about how much probability is concentrated per unit length near $x$; i.e., how <em>dense</em> the probability is near $x$. So it makes sense to call $f(x)$ a "probability density function." (In fact, one way to view $\int_a^b f(x) dx$ is that, if $f(x) \geq 0$, $f(x)$ is <em>always</em> a density function. From this point of view, height is area density, area is volume density, speed is distance density, etc. 
One of my colleagues uses an approach like this when he discusses applications of integration in second-semester calculus.)</p> <p>Now that we've named $f(x)$ a density function, what should we call the corresponding function in the discrete setting? It's not a density function; its units are probability rather than probability per unit length. So what is it? Well, when we say "density" without a qualifier we are normally talking about "mass density," and when we integrate a density function over an object we obtain the mass of that object. With this in mind, the relationship between the probability function in the continuous setting to that of the probability function in the discrete setting is exactly that of density to mass. So "probability mass function" is a natural term to grab to apply to the corresponding discrete function. </p>
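The units argument can be made concrete numerically (a sketch of mine, not from the answer): a pmf's values are themselves probabilities and sum to $1$, while a pdf's values are probabilities <em>per unit length</em>, can exceed $1$, and only integrate to $1$.

```python
import math

# Discrete: geometric pmf P(X = n) = p*(1-p)^(n-1); each value IS a probability.
p = 0.5
pmf = [p * (1 - p) ** (n - 1) for n in range(1, 200)]
assert all(0 <= v <= 1 for v in pmf)
assert abs(sum(pmf) - 1) < 1e-12            # masses sum to 1

# Continuous: exponential pdf f(x) = r*exp(-r*x) with r = 4 has f(0) = 4 > 1,
# so its values cannot be probabilities; only its integral is one.
r = 4.0
f = lambda x: r * math.exp(-r * x)
assert f(0) > 1

h = 1e-4                                    # crude Riemann sum over [0, 20]
integral = sum(f(k * h) * h for k in range(200000))
assert abs(integral - 1) < 1e-3
print("pmf sums to 1; pdf integrates to 1 even though f(0) =", f(0))
```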
<p>Probability mass functions are used for discrete distributions. A pmf assigns a probability to each point in the sample space, whereas the integral of a probability density function gives the probability that a random variable falls within some interval. </p>
linear-algebra
<p>My professor keeps mentioning the word "isomorphic" in class, but has yet to define it... I've asked him and his response is that something that is isomorphic to something else means that they have the same vector structure. I'm not sure what that means, so I was hoping anyone could explain its meaning to me, using knowledge from elementary linear algebra only. He started discussing it in the current section of our textbook: General Vector Spaces.</p> <p>I've also heard that this is an abstract algebra term, so I'm not sure if isomorphic means the same thing in both subjects, but I know absolutely no abstract algebra, so in your definition if you keep either keep abstract algebra out completely, or use very basic abstract algebra knowledge, that would be appreciated.</p>
<p>Isomorphisms are defined in many different contexts; but, they all share a common thread.</p> <p>Given two objects $G$ and $H$ (which are of the same type; maybe groups, or rings, or vector spaces... etc.), an <em>isomorphism</em> from $G$ to $H$ is a bijection $\phi:G\rightarrow H$ which, in some sense, <em>respects the structure</em> of the objects. In other words, they basically identify the two objects as actually being the same object, <em>after renaming of the elements</em>.</p> <p>In the example that you mention (vector spaces), an isomorphism between $V$ and $W$ is a bijection $\phi:V\rightarrow W$ which respects scalar multiplication, in that $\phi(\alpha\vec{v})=\alpha\phi(\vec{v})$ for all $\vec{v}\in V$ and $\alpha\in K$, and also respects addition in that $\phi(\vec{v}+\vec{u})=\phi(\vec{v})+\phi(\vec{u})$ for all $\vec{v},\vec{u}\in V$. (Here, we've assumed that $V$ and $W$ are both vector spaces over the same base field $K$.)</p>
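Concretely (a small sketch of mine, not from the answer): any invertible matrix gives such a structure-respecting bijection from $\mathbb{R}^2$ to itself, and the two conditions can be spot-checked on random vectors.

```python
import numpy as np

# An invertible matrix T defines phi : R^2 -> R^2.  We spot-check that phi
# respects scalar multiplication and addition, and that it is a bijection.
rng = np.random.default_rng(0)
T = np.array([[2.0, 1.0],
              [1.0, 1.0]])          # det(T) = 1 != 0, so T is invertible
phi = lambda v: T @ v

for _ in range(100):
    u = rng.standard_normal(2)
    v = rng.standard_normal(2)
    a = rng.standard_normal()
    assert np.allclose(phi(a * v), a * phi(v))        # respects scaling
    assert np.allclose(phi(u + v), phi(u) + phi(v))   # respects addition

# Bijectivity: the inverse matrix undoes phi (and is itself linear).
w = np.array([3.0, -1.0])
assert np.allclose(np.linalg.inv(T) @ phi(w), w)
print("phi is a vector-space isomorphism of R^2 onto itself")
```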
<p>Two vector spaces $V$ and $W$ are said to be isomorphic if there exists an invertible linear transformation (aka an isomorphism) $T$ from $V$ to $W$.</p> <p>The idea of a homomorphism is a transformation of an algebraic structure (e.g. a vector space) that preserves its algebraic properties. So a homomorphism of a vector space should preserve the basic algebraic properties of the vector space, in the following sense:</p> <p>$1$. Scalar multiplication and vector addition in $V$ is carried over to scalar multiplication and vector addition in $W$:</p> <blockquote> <p><strong>For any vectors $x,y$ in $V$ and scalars $a,b$ from the underlying field, $T(ax+by)=aT(x)+bT(y)$.</strong></p> </blockquote> <p>$2$. The identity element of $V$ is carried over to the identity element of $W$:</p> <blockquote> <p><strong>If $0_V$ is the identity vector in $V$, then $T(0_V)$ is the identity vector in $W$.</strong></p> </blockquote> <p>$3$. Vector inversion in $V$ is carried over to vector inversion in $W$.</p> <blockquote> <p><strong>$T(-v)=-T(v)$ for all $v$ in $V$.</strong></p> </blockquote> <p>$1$ is precisely the property that defines linear transformations, and $2$ and $3$ are redundant (they follow from $1$). So linear transformations are the homomorphisms of vector spaces.</p> <p>An isomorphism is a homomorphism that can be reversed; that is, an invertible homomorphism. So a vector space isomorphism is an invertible linear transformation. The idea of an invertible transformation is that it transforms spaces of a particular "size" into spaces of the same "size." Since dimension is the analogue for the "size" of a vector space, an isomorphism must preserve the dimension of the vector space.</p> <p>So this is the idea of the (finite-dimensional) vector space isomorphism: a linear (i.e. structure-preserving) dimension-preserving (i.e. 
size-preserving, invertible) transformation.</p> <p>Because isomorphic vector spaces are the same size and have the same algebraic properties, mathematicians think of them as "the same, for all intents and purposes."</p>
number-theory
<p>Our algebra teacher usually gives us a paper of $20-30$ questions for our homework. But each week, he tells us to do all the questions whose number is of a specific form.</p> <p>For example, last week it was all the questions of the form $3k+2$, and the week before that it was $3k+1$. I asked my teacher why he always uses the linear form of numbers... Instead of answering, he told me to determine which form of numbers from the new paper we should solve this week.</p> <p>I wanted to be a little creative, so I used the prime numbers and I told everyone to do the numbers of the form $\lfloor \sqrt{p} \rfloor$ where $p$ is a prime number. At that time, I couldn't understand the smile on our teacher's face.</p> <p>Everything was O.K., until I decided to do the homework. Then I realized I had made a huge mistake. $\lfloor \sqrt{p} \rfloor$ generated all of the questions of our paper and my classmates wanted to kill me. I went to my teacher to ask him to change the form, but he said he would only do it if I could solve this:</p> <blockquote> <p>Prove that $\lfloor \sqrt{p} \rfloor$ generates all natural numbers.</p> </blockquote> <p>What I tried: Suppose there is a $k$ for which there is no prime number $p$ with $\lfloor \sqrt{p} \rfloor=k$. From this we can say there exist two consecutive perfect squares with no prime number between them. So if we prove that for every $x$ there exists a prime number between $x^2$ and $x^2+2x+1$ we are done.</p> <p>I tried using Bertrand's postulate but it didn't work.</p> <p>I would appreciate any help from here :) </p>
<blockquote> <p>If we prove that for every <em>x</em> there exists a prime number between $x^2$ and $x^2+2x+1$, we are done.</p> </blockquote> <p>This is <a href="http://en.wikipedia.org/wiki/Legendre%27s_conjecture">Legendre's conjecture</a>, which remains unsolved. Hence <a href="http://static1.wikia.nocookie.net/__cb20111127113630/batman/images/2/22/The_Joker_smile.jpg">the big smile on your teacher's face</a>.</p>
<p>Any of the accepted conjectures on sieves and random-like behavior of primes would predict that the chance of finding counterexamples to the conjecture in $(x^2, (x+1)^2)$ decrease rapidly with $x$, since they correspond to random events that are (up to logarithmic factors) $x$ standard deviations from the mean, and probabilities of those are suppressed very rapidly. This makes the computational evidence for the conjecture more reliable than just the fact of checking up to the millions and billions.</p>
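Although the statement itself is open, it is easy to check computationally for small $k$ (a sketch I added, using a plain sieve; this is evidence in the spirit of the previous paragraph, not a proof):

```python
# For each k up to N, check that some prime p has floor(sqrt(p)) == k,
# i.e. that a prime lies in [k^2, (k+1)^2).
N = 300
limit = (N + 1) ** 2

# Sieve of Eratosthenes up to (N+1)^2.
sieve = [True] * (limit + 1)
sieve[0:2] = [False, False]
for i in range(2, int(limit ** 0.5) + 1):
    if sieve[i]:
        sieve[i * i :: i] = [False] * len(sieve[i * i :: i])

for k in range(1, N + 1):
    assert any(sieve[n] for n in range(k * k, (k + 1) ** 2)), k
print("floor(sqrt(p)) hits every k up to", N)
```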
matrices
<p>I am unsure how to go about doing this inverse product problem:</p> <p>The question says to find the value of each matrix expression where A and B are the invertible 3 x 3 matrices such that $$A^{-1} = \left(\begin{array}{ccc}1&amp; 2&amp; 3\\ 2&amp; 0&amp; 1\\ 1&amp; 1&amp; -1\end{array}\right) $$ and $$B^{-1}=\left(\begin{array}{ccc}2 &amp;-1 &amp;3\\ 0&amp; 0 &amp;4\\ 3&amp; -2 &amp; 1\end{array}\right) $$</p> <p>The actual question is to find $ (AB)^{-1}$. </p> <p>$ (AB)^{-1}$ is just $ A^{-1}B^{-1}$ and we already know matrices $ A^{-1}$ and $ B^{-1}$ so taking the product should give us the matrix $$\left(\begin{array}{ccc}11 &amp;-7 &amp;14\\ 7&amp; -4 &amp;7\\ -1&amp; 1 &amp; 6\end{array}\right) $$ yet the answer is $$ \left(\begin{array}{ccc} 3 &amp;7 &amp;2 \\ 4&amp; 4 &amp;-4\\ 0 &amp; 7 &amp; 6 \end{array}\right) $$</p> <p>What am I not understanding about the problem or what am I doing wrong? Isn't this just matrix multiplication?</p>
<p>Actually, the inverse of a matrix product does not work that way. Suppose that we have two invertible matrices, $A$ and $B$. Then it holds: $$ (AB)^{-1}=B^{-1}A^{-1}, $$ and, in general: $$ \left(\prod_{k=0}^NA_k\right)^{-1}=\prod_{k=0}^NA^{-1}_{N-k} $$</p>
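With the matrices from the question, the reversed-order rule is easy to confirm numerically (a check I added, assuming NumPy): $B^{-1}A^{-1}$ reproduces the book's answer, while the OP's product $A^{-1}B^{-1}$ is the inverse of $BA$ instead.

```python
import numpy as np

# The given inverses from the question.
A_inv = np.array([[1, 2, 3], [2, 0, 1], [1, 1, -1]])
B_inv = np.array([[2, -1, 3], [0, 0, 4], [3, -2, 1]])

A = np.linalg.inv(A_inv)
B = np.linalg.inv(B_inv)

# (AB)^{-1} equals B^{-1} A^{-1} ...
assert np.allclose(np.linalg.inv(A @ B), B_inv @ A_inv)
# ... which is exactly the book's answer:
assert np.allclose(B_inv @ A_inv, [[3, 7, 2], [4, 4, -4], [0, 7, 6]])
# The OP's product A^{-1} B^{-1} is the OP's matrix, i.e. the inverse of BA:
assert np.allclose(A_inv @ B_inv, [[11, -7, 14], [7, -4, 7], [-1, 1, 6]])
assert np.allclose(np.linalg.inv(B @ A), A_inv @ B_inv)
print("B^-1 A^-1 matches the book's answer")
```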
<p>Note that the matrix multiplication is not <em>commutative</em>, i.e., you'll <strong>not</strong> always have: <span class="math-container">$AB = BA$</span>.</p> <p>Now, say the matrix <span class="math-container">$A$</span> has the inverse <span class="math-container">$A^{-1}$</span> (i.e. <span class="math-container">$A \cdot A^{-1} = A^{-1}\cdot A = I$</span>); and <span class="math-container">$B^{-1}$</span> is the inverse of <span class="math-container">$B$</span> (i.e. <span class="math-container">$B\cdot B^{-1} = B^{-1} \cdot B = I$</span>).</p> <h2>Claim</h2> <p><span class="math-container">$B^{-1}A^{-1}$</span> is the inverse of <span class="math-container">$AB$</span>. So basically, what I need to prove is: <span class="math-container">$(B^{-1}A^{-1})(AB) = (AB)(B^{-1}A^{-1}) = I$</span>.</p> <p>Note that, although matrix multiplication is not commutative, it is, however, <strong>associative</strong>. So:</p> <ul> <li><p><span class="math-container">$(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}IB = (B^{-1}I)B = B^{-1}B=I$</span></p> </li> <li><p><span class="math-container">$(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = (AI)A^{-1} = AA^{-1}=I$</span></p> </li> </ul> <p>So, the inverse of <span class="math-container">$AB$</span> is indeed <span class="math-container">$B^{-1}A^{-1}$</span>, and <strong>NOT</strong> <span class="math-container">$A^{-1}B^{-1}$</span>.</p>
probability
<p>A pair of dice is rolled repeatedly until each outcome (2 through 12) has occurred at least once. What is the expected number of rolls necessary for this to occur?</p> <p>Notes: This is not very deep conceptually, but because of the unequal probabilities for the outcomes, it seems that the calculations involved are terribly messy. It must have been done already (dice have been studied for centuries!) but I can't find a discussion in any book, or on line. Can anybody give a reference?</p>
<p>This is the coupon collector's problem with unequal probabilities. There is a treatment of this problem in <strong>Example 5.17</strong> of the 10th edition of <em>Introduction to Probability Models</em> by Sheldon Ross (page 322). He solves the problem by embedding it into a Poisson process. Anyway, the answer is </p> <p>$$ E[T]=\int_0^\infty \left(1-\prod_{j=1}^m (1-e^{-p_j t})\right) dt, $$</p> <p>when there are $m$ events with probability $p_j, j=1, \dots, m$ of occurring.</p> <p>In your particular problem with two fair dice, my calculations give</p> <p>$$E[T] = {769767316159\over 12574325400} \approx 61.2173.$$</p>
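Ross's integral can be evaluated numerically with nothing beyond the standard library (a rough trapezoidal-rule sketch I added; the cutoff at $t = 2000$ is safe because the integrand decays like $e^{-t/36}$):

```python
import math

# Probabilities of the eleven dice totals 2, 3, ..., 12.
p = [k / 36 for k in [1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1]]

def integrand(t):
    prod = 1.0
    for pj in p:
        prod *= 1.0 - math.exp(-pj * t)
    return 1.0 - prod

# Composite trapezoidal rule on [0, 2000] with step 0.01.
h, T = 0.01, 2000.0
n = int(T / h)
E = h * ((integrand(0.0) + integrand(T)) / 2 + sum(integrand(k * h) for k in range(1, n)))
print(E)   # about 61.217, matching the exact fraction above
```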
<p>Suppose the probabilities are $p_i$. For each set $S$, the probability that in the first $n$ terms we have only seen throws from $S$ is $$ p_S^n, \quad p_S \triangleq \sum_{i \in S} p_i. $$ The probability that we have <em>not</em> seen all outcomes by the $n$th throw is $$ r_n = \sum_{S \neq \emptyset} (-1)^{|S|+1} p_{\overline{S}}^n. $$ Since the number of throws $T$ is a positive integer, the required expectation is $$ E = \sum_{n \geq 0} \Pr[T &gt; n] = \sum_{n \geq 0} r_n $$ (the sum must start at $n = 0$, where $r_0 = 1$). Changing the order of summation, the summand corresponding to $S$ contributes to the sum $$ (-1)^{|S|+1} \sum_{n \geq 0} p_{\overline{S}}^n = \frac{(-1)^{|S|+1}}{1 - p_{\overline{S}}} = \frac{(-1)^{|S|+1}}{p_S}. $$ So we get the formula $$ E = \sum_{S \neq \emptyset} \frac{(-1)^{|S|+1}}{p_S}. $$ You can now in principle plug in the values of the dice and get the result. Since there are $11$ outcomes, you need to sum $2^{11} - 1 = 2047$ reciprocals.</p>
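The subset sum can be evaluated exactly with rational arithmetic (a check I added; summing each geometric series from $n = 0$ gives $E = \sum_{S \neq \emptyset} (-1)^{|S|+1}/p_S$, which reproduces the value quoted in the accepted answer):

```python
from fractions import Fraction
from itertools import combinations

# Exact inclusion-exclusion sum over all 2^11 - 1 = 2047 nonempty subsets
# of the eleven dice-total probabilities.
p = [Fraction(k, 36) for k in [1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1]]

E = Fraction(0)
for size in range(1, len(p) + 1):
    for S in combinations(p, size):        # combinations works by position,
        E += (-1) ** (size + 1) / sum(S)   # so repeated values of p_i are fine

print(E, "which is about", float(E))   # about 61.217
```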
probability
<p>On average, how many times must I roll a dice until I get a $6$?</p> <p>I got this question from a book called Fifty Challenging Problems in Probability. </p> <p>The answer is $6$, and I understand the solution the book has given me. However, I want to know why the following logic does not work: The chance that we do not get a $6$ is $5/6$. In order to find the number of dice rolls needed, I want the probability of there being a $6$ in $n$ rolls being $1/2$ in order to find the average. So I solve the equation $(5/6)^n=1/2$, which gives me $n=3.8$-ish. That number makes sense to me intuitively, where the number $6$ does not make sense intuitively. I feel like on average, I would need to roll about $3$-$4$ times to get a $6$. Sometimes, I will have to roll less than $3$-$4$ times, and sometimes I will have to roll more than $3$-$4$ times.</p> <p>Please note that I am not asking how to solve this question, but what is wrong with my logic above.</p> <p>Thank you!</p>
<p>You can calculate the average this way also.</p> <p>The probability of rolling your first <span class="math-container">$6$</span> on the <span class="math-container">$n$</span>-th roll is <span class="math-container">$$\left[1-\left(\frac{5}{6}\right)^n\right]-\left[1-\left(\frac{5}{6}\right)^{n-1}\right]=\left(\frac{5}{6}\right)^{n-1}-\left(\frac{5}{6}\right)^{n}$$</span></p> <p>So the weighted average on the number of rolls would be <span class="math-container">$$\sum_{n=1}^\infty \left(n\left[\left(\frac{5}{6}\right)^{n-1}-\left(\frac{5}{6}\right)^{n}\right]\right)=6$$</span></p> <p>Again, as noted already, the difference between mean and median comes in to play. The distribution has a long tail way out right pulling the mean to <span class="math-container">$6$</span>. <img src="https://i.sstatic.net/iItFE.jpg" alt="enter image description here" /></p> <p>For those asking about this graph, it is the expression above, without the Summation. It is not cumulative. (The cumulative graph would level off at <span class="math-container">$y=6$</span>). This graph is just <span class="math-container">$y=x\left[\left(\frac{5}{6}\right)^{x-1}-(\left(\frac{5}{6}\right)^{x}\right]$</span></p> <p>It's not a great graph, honestly, as it is kind of abstract in what it represents. But let's take <span class="math-container">$x=4$</span> as an example. There is about a <span class="math-container">$0.0965$</span> chance of getting the first roll of a <span class="math-container">$6$</span> on the <span class="math-container">$4$</span>th roll. And since we're after a weighted average, that is multiplied by <span class="math-container">$4$</span> to get the value at <span class="math-container">$x=4$</span>. 
It doesn't mean much except to illustrate why the mean number of throws to get the first <span class="math-container">$6$</span> is higher than around <span class="math-container">$3$</span> or <span class="math-container">$4.$</span></p> <p>You can imagine an experiment with <span class="math-container">$100$</span> trials. About <span class="math-container">$17$</span> times it will only take <span class="math-container">$1$</span> throw(<span class="math-container">$17$</span> throws). About <span class="math-container">$14$</span> times it will take <span class="math-container">$2$</span> throws (<span class="math-container">$28$</span> throws). About <span class="math-container">$11$</span> times it will take <span class="math-container">$3$</span> throws(<span class="math-container">$33$</span> throws). About <span class="math-container">$9$</span> times it will take <span class="math-container">$4$</span> throws(<span class="math-container">$36$</span> throws) etc. Then you would add up ALL of those throws and divide by <span class="math-container">$100$</span> and get <span class="math-container">$\approx 6.$</span></p>
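Both quantities in this discussion, the mean of $6$ and the "$3.8$-ish" median, drop out of a few lines of code (a sketch of mine):

```python
# Mean vs. median of the number of rolls needed to see the first 6.
p, q = 1 / 6, 5 / 6

# Weighted average: sum of n * P(first 6 on roll n); the tail past n = 500
# is astronomically small, so truncation is harmless.
mean = sum(n * (q ** (n - 1) - q ** n) for n in range(1, 500))
assert abs(mean - 6) < 1e-9

# Median: the smallest n with P(X <= n) = 1 - (5/6)^n >= 1/2.
n = 1
while 1 - q ** n < 0.5:
    n += 1
print("mean =", round(mean), " median =", n)   # mean 6, median 4
```

The gap between the two is exactly the long right tail in the graph above: the median sits near $4$, while rare long waits pull the mean up to $6$.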
<p>The probability of the time of first success is given by the <a href="http://en.wikipedia.org/wiki/Geometric_distribution" rel="noreferrer">Geometric distribution</a>.</p> <p>The distribution formula is:</p> <p>$$P(X=k) = pq^{k-1}$$</p> <p>where $q=1-p$.</p> <p>It's very simple to explain this formula. Let's assume that we consider as a success getting a 6 when rolling a die. Then the probability of getting a success at the first try is</p> <p>$$P(X=1) = p = pq^0= \frac{1}{6}$$</p> <p>To get a success at the second try, we have to fail once and then get our 6:</p> <p>$$P(X=2)=qp=pq^1=\frac{1}{6}\frac{5}{6}$$</p> <p>and so on.</p> <p>The expected value of this distribution answers this question: on average, how many tries until my first success? The expected value for the Geometric distribution is:</p> <p>$$E(X)=\displaystyle\sum^\infty_{n=1}npq^{n-1}=\frac{1}{p}$$</p> <p>or, in our example, $6$.</p> <p><strong>Edit:</strong> We are assuming multiple independent tries with the same probability, obviously.</p>
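<p>A quick numeric sanity check of the expectation series (a sketch; the truncation point 2000 is arbitrary but far into the negligible tail):</p>

```python
p = 1 / 6
q = 1 - p

# P(X = k) = p * q**(k - 1); truncate the series where the tail is negligible
total_mass = sum(p * q ** (k - 1) for k in range(1, 2000))
expected = sum(k * p * q ** (k - 1) for k in range(1, 2000))
# total_mass is essentially 1 and expected is essentially 1/p = 6
```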
differentiation
<p>How do I perform the following?</p> <p>$$\frac{d}{dx} \int_0^x \int_0^x f(y,z) \;dy\; dz$$</p> <p>Help/hints would be appreciated. The Leibniz rule for integration does not seem to be applicable.</p>
<p>The Leibniz integral rule does work.</p> <p>$$\begin{align}\frac{d}{dx} \int_{0}^{x} dy \left[\int_{0}^{x} f(y,z) dz \right] &amp;=\int_{0}^{x} f(x,z) dz + \int_{0}^{x} dy \frac{\partial}{\partial x}\left[\int_{0}^{x} f(y,z) dz \right]\\ &amp;= \int_{0}^{x} f(x,z) dz + \int_{0}^{x} dy \left[f(y,x) + \int_{0}^{x} \frac{\partial f(y,z)}{\partial x} dz\right]\\ &amp;= \int_{0}^{x} f(x,z) dz + \int_{0}^{x} f(y,x) dy \end{align}$$ You only need to remember when the integration limit depends on $x$, $\frac{d}{dx}$ on the integral will pick up extra terms for the integration limits. In general:</p> <p>$$\frac{d}{dx} \int_{a(x)}^{b(x)} g(x,y) dy = g(x,b(x)) b'(x) - g(x,a(x)) a'(x) + \int_{a(x)}^{b(x)} \frac{\partial g(x,y)}{\partial x} dy$$</p>
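<p>The identity above can be sanity-checked numerically. Here is a sketch with an arbitrarily chosen integrand $f(y,z)=y^2z$ (for which both sides work out to $5x^4/6$), using midpoint quadrature and a central difference; the grid sizes are arbitrary choices:</p>

```python
def f(y, z):
    return y * y * z  # arbitrary smooth test integrand

def midpoints(x, n):
    h = x / n
    return h, [(i + 0.5) * h for i in range(n)]

def F(x, n=400):
    # F(x) = ∫₀ˣ ∫₀ˣ f(y,z) dy dz via the composite midpoint rule
    h, pts = midpoints(x, n)
    return sum(f(y, z) for y in pts for z in pts) * h * h

def rhs(x, n=4000):
    # ∫₀ˣ f(x,z) dz + ∫₀ˣ f(y,x) dy, the right-hand side of the identity
    h, pts = midpoints(x, n)
    return sum(f(x, z) for z in pts) * h + sum(f(y, x) for y in pts) * h

x, eps = 1.3, 1e-4
lhs = (F(x + eps) - F(x - eps)) / (2 * eps)  # numerical d/dx of the double integral
# lhs and rhs(x) agree, both approximating 5 * x**4 / 6 for this f
```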
<p>Hint: Use the following fact:</p> <blockquote> <p>Let $\phi(\alpha)=\int_{u_1}^{u_2}f(x,\alpha)dx, ~~a\leq\alpha\leq b$, where the functions $u_1$ and $u_2$ may depend on the parameter $\alpha$. Then $$\frac{d\phi}{d\alpha}=\int_{u_1}^{u_2}f_{\alpha}dx+f(u_2,\alpha)\frac{du_2}{d\alpha}-f(u_1,\alpha)\frac{du_1}{d\alpha}$$ </p> </blockquote> <p>Here we can consider $x$ as $\alpha$ and consider $\int_0^{x}f(y,z)dy$ as $g(x,z)$.</p>
logic
<p>A huge group of people live a bizarre box based existence. Every day, everyone changes the box that they're in, and every day they share their box with exactly one person, and never share a box with the same person twice.</p> <p>One of the people of the boxes gets sick. The illness is spread by co-box-itation. What is the minimum number of people who are ill on day n?</p> <hr> <p><strong>Additional information (not originally included in problem):</strong></p> <p>Potentially relevant OEIS sequence: <a href="http://oeis.org/A007843">http://oeis.org/A007843</a></p>
<p>I used brute force, minimizing at each step, until day 20; then I was comfortable enough to make an educated guess about the minimum number of patients on successive days. A pattern emerged quite evidently, and the numbers matched exactly the sequence <a href="http://oeis.org/A007843/list">oeis.org/A007843</a>. The first thing I noticed, after writing down all the results of the first 20 days, is that whenever the minimum number of patients becomes a power of 2, it is clear how long the illness stays contained at that level in the minimal case: $\log_2 x$ days, where $x$ is the minimum number of patients. So I rewrote the minimum number of patients in terms of powers of 2 $ (2^0, 2^1, 2^2, 2^2+2^1, 2^3, 2^3+2^1, 2^3+2^2, 2^3+2^2+2^1, 2^4, 2^4+2^1, 2^4+2^2, 2^4+2^2+2^1, 2^4+2^3, 2^4+2^3+2^1, 2^4+2^3+2^2, 2^4+2^3+2^2+2^1, ...).$ Each item in this sequence represents the minimum number of patients at some point in time, in consecutive order. Note that the exponent of the last term in each item gives the number of days for which the illness stays contained at the level given by the item's value, that is (1 patient at Day 0, 2 patients for 1 day, 4 patients for 2 days, 6 patients for 1 day, 8 patients for 3 days, 10 patients for 1 day, 12 patients for 2 days, ...). So the sequence over successive days starting at Day 0 is (1, 2, 4, 4, 6, 8, 8, 8, 10, 12, 12, 14, 16, 16, 16, 16, 18, 20, 20, 22, 24, 24, 24, 26, 28, 28, 30, ...) as displayed at the <a href="http://oeis.org/A007843">OEIS sequence</a>. (I wanted to post this as a comment but it seems that I am not allowed to?)</p>
<p>Just in case this helps someone:</p> <p><img src="https://i.sstatic.net/DKsDj.png" alt="enter image description here"></p> <p>(In each step we must cover an $N\times N$ board with $N$ non-self-attacking rooks, diagonal forbidden). This gives the sequence (I start numbering day 1 for N=2): (2,4,4,6,8,8,8,10,12,12,14,16,16,16)</p> <p>Updated: a. Brief explanation: each column-row corresponds to a person; the numbered $n$ cells show the pairings of sick people corresponding to day $n$ (day 1: pair {1,2}; day 2: pairs {1,4}, {2,3})</p> <p>b. This, like most answers here, assumes that we are interested in a sequence of pairings that minimizes the number of sick people <strong>for all $n$</strong>. But it can be argued that the question is not clear on this, and that one might be interested in minimizing the number of sick people for one fixed $n$. In this case, the problem is simpler, see Daniel Martin's answer.</p>
linear-algebra
<p>I gave the following problem to students:</p> <blockquote> <p>Two $n\times n$ matrices $A$ and $B$ are <em>similar</em> if there exists a nonsingular matrix $P$ such that $A=P^{-1}BP$.</p> <ol> <li><p>Prove that if $A$ and $B$ are two similar $n\times n$ matrices, then they have the same determinant and the same trace.</p></li> <li><p>Give an example of two $2\times 2$ matrices $A$ and $B$ with same determinant, same trace but that are not similar.</p></li> </ol> </blockquote> <p>Most of the ~20 students got the first question right. However, almost none of them found a correct example to the second question. Most of them gave examples of matrices that have same determinant and same trace. </p> <p>But computations show that their examples are similar matrices. They didn't bother to check that though, so they just tried <em>random</em> matrices with same trace and same determinant, hoping it would be a correct example.</p> <p><strong>Question</strong>: how to explain that none of the random trial gave non similar matrices?</p> <p>Any answer based on density or measure theory is fine. In particular, you can assume any reasonable distribution on the entries of the matrix. If it matters, the course is about matrices with real coefficients, but you can assume integer coefficients, since when choosing numbers <em>at random</em>, most people will choose integers.</p>
<p>If <span class="math-container">$A$</span> is a <span class="math-container">$2\times 2$</span> matrix with determinant <span class="math-container">$d$</span> and trace <span class="math-container">$t$</span>, then the characteristic polynomial of <span class="math-container">$A$</span> is <span class="math-container">$x^2-tx+d$</span>. If this polynomial has distinct roots (over <span class="math-container">$\mathbb{C}$</span>), then <span class="math-container">$A$</span> has distinct eigenvalues and hence is diagonalizable (over <span class="math-container">$\mathbb{C}$</span>). In particular, if <span class="math-container">$d$</span> and <span class="math-container">$t$</span> are such that the characteristic polynomial has distinct roots, then any other <span class="math-container">$B$</span> with the same determinant and trace is similar to <span class="math-container">$A$</span>, since they are diagonalizable with the same eigenvalues.</p> <p>So to give a correct example in part (2), you need <span class="math-container">$x^2-tx+d$</span> to have a double root, which happens only when the discriminant <span class="math-container">$t^2-4d$</span> is <span class="math-container">$0$</span>. If you choose the matrix <span class="math-container">$A$</span> (or the values of <span class="math-container">$t$</span> and <span class="math-container">$d$</span>) &quot;at random&quot; in any reasonable way, then <span class="math-container">$t^2-4d$</span> will usually not be <span class="math-container">$0$</span>. (For instance, if you choose <span class="math-container">$A$</span>'s entries uniformly from some interval, then <span class="math-container">$t^2-4d$</span> will be nonzero with probability <span class="math-container">$1$</span>, since the vanishing set in <span class="math-container">$\mathbb{R}^n$</span> of any nonzero polynomial in <span class="math-container">$n$</span> variables has Lebesgue measure <span class="math-container">$0$</span>.) 
Assuming that students did something like pick <span class="math-container">$A$</span> &quot;at random&quot; and then built <span class="math-container">$B$</span> to have the same trace and determinant, this would explain why none of them found a correct example.</p> <p>Note that this is very much special to <span class="math-container">$2\times 2$</span> matrices. In higher dimensions, the determinant and trace do not determine the characteristic polynomial (they just give two of the coefficients), and so if you pick two matrices with the same determinant and trace they will typically have different characteristic polynomials and not be similar.</p>
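<p>The measure-zero claim is easy to observe empirically. A sketch that draws random small integer matrices (the entry range, seed, and trial count are arbitrary choices) and counts the repeated-eigenvalue cases:</p>

```python
import random

def trace(M):
    return M[0][0] + M[1][1]

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

rng = random.Random(1)
trials, repeated = 10_000, 0
for _ in range(trials):
    M = [[rng.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    if trace(M) ** 2 - 4 * det(M) == 0:  # discriminant of x^2 - t*x + d
        repeated += 1
# `repeated` is a small fraction of `trials`: a random draw almost always has
# distinct eigenvalues, and then any B with the same trace and det is similar

A = [[1, 0], [0, 1]]  # a correct example for part (2): same trace and det,
B = [[1, 1], [0, 1]]  # but P⁻¹AP = A = I ≠ B for every invertible P
```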
<p>As Eric points out, such $2\times2$ matrices are special. In fact, there are only two such pairs of matrices. The number depends on how you count, but the point is that such matrices have a <em>very</em> special form.</p> <p>Eric proved that the two matrices must have a double eigenvalue. Let the eigenvalue be $\lambda$. It is a little exercise<sup>1</sup> to show that $2\times2$ matrices with double eigenvalue $\lambda$ are similar to a matrix of the form $$ C_{\lambda,\mu} = \begin{pmatrix} \lambda&amp;\mu\\ 0&amp;\lambda \end{pmatrix}. $$ Using suitable diagonal matrices shows that $C_{\lambda,\mu}$ is similar to $C_{\lambda,1}$ if $\mu\neq0$. On the other hand, $C_{\lambda,0}$ and $C_{\lambda,1}$ are not similar; one is a scaling and the other one is not.</p> <p>Therefore, up to similarity transformations, the only possible example is $A=C_{\lambda,0}$ and $B=C_{\lambda,1}$ (or vice versa). Since scaling doesn't really change anything, <strong>the only examples</strong> (up to similarity, scaling, and swapping the two matrices) are $$ A = \begin{pmatrix} 1&amp;0\\ 0&amp;1 \end{pmatrix}, \quad B = \begin{pmatrix} 1&amp;1\\ 0&amp;1 \end{pmatrix} $$ and $$ A = \begin{pmatrix} 0&amp;0\\ 0&amp;0 \end{pmatrix}, \quad B = \begin{pmatrix} 0&amp;1\\ 0&amp;0 \end{pmatrix}. $$ If adding multiples of the identity is included among the symmetries (scaling can then be dropped from the list), then there is only one matrix pair up to the symmetries.</p> <p>If you are familiar with the <a href="https://en.wikipedia.org/wiki/Jordan_normal_form" rel="noreferrer">Jordan normal form</a>, it gives a different way to see it. Once the eigenvalues are fixed to be equal, the only free property (up to similarity) is whether there are one or two blocks in the normal form. 
The Jordan normal form is invariant under similarity transformations, so it gives a very quick way to solve problems like this.</p> <hr> <p><sup>1</sup> You only need to show that any matrix is similar to an upper triangular matrix. The eigenvalues (which now coincide) are on the diagonal. You can skip this exercise if you have Jordan normal forms at your disposal.</p>
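<p>The diagonal-conjugation step (that $C_{\lambda,\mu}$ is similar to $C_{\lambda,1}$ when $\mu\neq0$) can be checked mechanically; a sketch in exact rational arithmetic with arbitrarily chosen $\lambda=3$, $\mu=7$:</p>

```python
from fractions import Fraction as Fr

def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

lam, mu = Fr(3), Fr(7)                     # arbitrary double eigenvalue and μ ≠ 0
C1 = [[lam, Fr(1)], [Fr(0), lam]]          # C_{λ,1}
P = [[Fr(1), Fr(0)], [Fr(0), mu]]          # diagonal similarity matrix
P_inv = [[Fr(1), Fr(0)], [Fr(0), 1 / mu]]
conj = matmul(matmul(P_inv, C1), P)
# conj equals C_{λ,μ} = [[λ, μ], [0, λ]]
```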
logic
<p>In Real Analysis class, a professor told us 4 claims: let x be a real number, then:</p> <blockquote> <p>1) $0\leq x \leq 0$ implies $x = 0$</p> <p>2) $0 \leq x &lt; 0$ implies $x = 0$</p> <p>3) $0 &lt; x \leq 0$ implies $x = 0$</p> <p>4) $0 &lt; x &lt; 0$ implies $x = 0$</p> </blockquote> <p>Of course, claim #1 comes from the fact that the reals are totally ordered by the $\leq$ relation, and when you think about it from the perspective of the trichotomy property, it makes sense, because claim #1 says: $0 \leq x$, and, $x \leq 0$, and then, the only number which satisfies both propositions, is $x = 0$.</p> <p>But I am not sure I understand the reason behind claims #2, #3 and #4. Let's analyze claim #2:</p> <p>It starts saying $0 \leq x$, that means that x is a number greater than or equal to $0$, but then it says $x &lt; 0$, therefore x is less than zero. I think no number can satisfy both propositions, as it would contradict the trichotomy property, because x is less than $0$ AND greater than or equal to $0$.</p> <p>Same thing with claim #4, since $0 &lt; x &lt; 0$ means: $x &gt; 0$ AND $x &lt; 0$, and no real number satisfy both propositions at the same time. Therefore, saying $x = 0$ is the same as saying $x = 1$, or $2$, or $42$ (this is because the antecedent if always false).</p> <p>Am I missing something? My professor then told us that these were axioms, but I think that axioms should not contradict well established properties (like the trichotomy property) or, at the very least, make some sense. Are claims #2 to #4 well accepted and used "axioms" in real analysis?</p>
<p>A false proposition implies <em>any</em> other proposition, so given the assumption on the left of 2), 3), or 4), which are each false for any real $x,$ one could put <em>any</em> statement after "implies" in these and the overall statement would hold.</p> <p>Here's a made-up example where a proof could use e.g. claim 4) during the proof. The overall goal of the proof would be to show under certain hypotheses that $x=0.$ Suppose somehow the proof split into case A and case B, and that in treating case A one could show each of $x \le 0$ and $0 \le x.$ Here the use of 1), namely $0 \le x \le 0$ implies $x=0,$ would suffice and finish case A in a mathematically sound way. On the other hand suppose in case B one could show each of $x&lt;0$ and $0&lt;x,$ thus arriving at the left side $0&lt;x&lt;0$ of claim 4). Though it is <em>logically</em> correct to conclude here that again $x=0,$ since $0&lt;x&lt;0$ is false, in my opinion a better mathematical write-up of case B would be, once having arrived at the two statements $x&lt;0$ and $0&lt;x,$ just to say something like "thus case B cannot arise after all" or "so case B is contradictory". </p> <p>I myself haven't seen a proof by a good mathematical expositor which used anything like claims 2), 3), or 4) during the argument; as outlined in the above made-up proof scenario such proofs would just say such things as "this case does not arise" at the appropriate times.</p>
<p>$A\implies B$ means ($B$ or (not $A$)).</p> <p>Let $A:0&lt;x&lt;0$ and $B:x=0$.</p> <p>$A$ is false, therefore $not(A)$ is true. Thus $A\implies B$ is true.</p> <p>See <a href="https://en.wikipedia.org/wiki/Vacuous_truth">https://en.wikipedia.org/wiki/Vacuous_truth</a> .</p>
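<p>The truth table for material implication can be spelled out mechanically (a trivial sketch):</p>

```python
def implies(a, b):
    # material implication: A ⟹ B is defined as (not A) or B
    return (not a) or b

table = {(a, b): implies(a, b) for a in (False, True) for b in (False, True)}
# the only False row is (True, False); both rows with a False antecedent
# are vacuously True, which is why claims 2-4 hold for every real x
```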
probability
<p>$X \sim \mathcal{P}( \lambda) $ and $Y \sim \mathcal{P}( \mu)$ meaning that $X$ and $Y$ are Poisson distributions. What is the probability distribution law of $X + Y$. I know it is $X+Y \sim \mathcal{P}( \lambda + \mu)$ but I don't understand how to derive it.</p>
<p>This only holds if $X$ and $Y$ are independent, so we suppose this from now on. We have for $k \ge 0$: \begin{align*} P(X+ Y =k) &amp;= \sum_{i = 0}^k P(X+ Y = k, X = i)\\ &amp;= \sum_{i=0}^k P(Y = k-i , X =i)\\ &amp;= \sum_{i=0}^k P(Y = k-i)P(X=i)\\ &amp;= \sum_{i=0}^k e^{-\mu}\frac{\mu^{k-i}}{(k-i)!}e^{-\lambda}\frac{\lambda^i}{i!}\\ &amp;= e^{-(\mu + \lambda)}\frac 1{k!}\sum_{i=0}^k \frac{k!}{i!(k-i)!}\mu^{k-i}\lambda^i\\ &amp;= e^{-(\mu + \lambda)}\frac 1{k!}\sum_{i=0}^k \binom ki\mu^{k-i}\lambda^i\\ &amp;= \frac{(\mu + \lambda)^k}{k!} \cdot e^{-(\mu + \lambda)} \end{align*} Hence, $X+ Y \sim \mathcal P(\mu + \lambda)$.</p>
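<p>The convolution computed above is easy to verify numerically; a sketch with arbitrarily chosen rates $\lambda=2$ and $\mu=3$:</p>

```python
from math import exp, factorial

def pois(k, lam):
    # Poisson(lam) probability mass at k
    return exp(-lam) * lam ** k / factorial(k)

lam, mu = 2.0, 3.0
max_err = max(
    abs(sum(pois(i, lam) * pois(k - i, mu) for i in range(k + 1))
        - pois(k, lam + mu))
    for k in range(30)
)
# max_err sits at floating-point rounding level: the convolution of
# Poisson(λ) and Poisson(μ) matches Poisson(λ+μ) term by term
```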
<p>Another approach is to use characteristic functions. If $X\sim \mathrm{po}(\lambda)$, then the characteristic function of $X$ is (if this is unknown, just calculate it) $$ \varphi_X(t)=E[e^{itX}]=e^{\lambda(e^{it}-1)},\quad t\in\mathbb{R}. $$ Now suppose that $X$ and $Y$ are <em>independent</em> Poisson distributed random variables with parameters $\lambda$ and $\mu$ respectively. Then due to the independence we have that $$ \varphi_{X+Y}(t)=\varphi_X(t)\varphi_Y(t)=e^{\lambda(e^{it}-1)}e^{\mu(e^{it}-1)}=e^{(\mu+\lambda)(e^{it}-1)},\quad t\in\mathbb{R}. $$ As the characteristic function completely determines the distribution, we conclude that $X+Y\sim\mathrm{po}(\lambda+\mu)$.</p>
probability
<p>What is the distribution of a random variable that is the product of two normal random variables?</p> <p>Let <span class="math-container">$X\sim N(\mu_1,\sigma_1), Y\sim N(\mu_2,\sigma_2)$</span> and <span class="math-container">$Z=XY$</span></p> <p>That is, what is its probability density function, its expected value, and its variance?</p>
<p>I will assume $X$ and $Y$ are independent. By scaling, we may assume for simplicity that $\sigma_1 = \sigma_2 = 1$. You might then note that $XY = (X+Y)^2/4 - (X-Y)^2/4$ where $X+Y$ and $X-Y$ are independent normal random variables; $(X+Y)^2/2$ and $(X-Y)^2/2$ have noncentral chi-squared distributions with $1$ degree of freedom. If $f_1$ and $f_2$ are the densities for those, the PDF for $XY$ is $$ f_{XY}(z) = 2 \int_0^\infty f_1(t) f_2(2z+t)\ dt$$ </p>
<p>For the special case that both Gaussian random variables $X$ and $Y$ have zero mean and unit variance, and are independent, the answer is that $Z=XY$ has the probability density $p_Z(z)={\rm K}_0(|z|)/\pi$. The brute force way to do this is via the transformation theorem: \begin{align} p_Z(z)&amp;=\frac{1}{2\pi}\int_{-\infty}^\infty{\rm d}x\int_{-\infty}^\infty{\rm d}y\;{\rm e}^{-(x^2+y^2)/2}\delta(z-xy) \\ &amp;= \frac{1}{\pi}\int_0^\infty\frac{{\rm d}x}{x}{\rm e}^{-(x^2+z^2/x^2)/2}\\ &amp;= \frac{1}{\pi}{\rm K}_0(|z|) \ . \end{align}</p>
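<p>As a supplement, the question also asks for the expected value and variance: for independent $X$ and $Y$ these are $E[XY]=\mu_1\mu_2$ and $\operatorname{Var}(XY)=\mu_1^2\sigma_2^2+\mu_2^2\sigma_1^2+\sigma_1^2\sigma_2^2$ (both follow directly from independence). A quick simulation sketch (seed and sample size are arbitrary) checks the standard case, where these reduce to $0$ and $1$:</p>

```python
import random

rng = random.Random(42)
n = 200_000
prods = [rng.gauss(0, 1) * rng.gauss(0, 1) for _ in range(n)]
mean = sum(prods) / n
var = sum((z - mean) ** 2 for z in prods) / n
# mean is close to 0 and var close to 1, matching E[XY] = 0 and
# Var(XY) = 1 for independent standard normals
```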
game-theory
<p>I'm thinking about a game theory problem related to factorization. Here it is:</p> <p>Q: two players A and B are playing this factorization game. At the very start, we have the natural number $270000=2^4\times 3^3\times 5^4$ on a chalkboard.</p> <p>On each turn, a player chooses one number $N$ from the chalkboard, erases it, and then writes down two new natural numbers $X$ and $Y$ satisfying</p> <p>(1) $N=X\times Y$ (2) $gcd(X,Y)&gt;1$ (So, they are "NOT" coprime)</p> <p>A player loses if he cannot do this process in his turn.</p> <p>So, in this game, possible states at the $k$-th turn are sequences of natural numbers $a_1,a_2,\dots,a_k$ with $a_i&gt;1$ and $a_1\times a_2\dots \times a_k=270000$.</p> <p>EXAMPLE of this game:<br> A's turn-$2^{4}\times3^{3}\times5^{4}$<br> B's turn-$2^2\times3\times5^2,2^2\times3^2\times5^2$<br> A's turn-$2^2\times3\times5^2,2\times3\times5,2\times3\times5$<br> B's turn-$2\times5,2\times3\times5,2\times3\times5,2\times3\times5$<br> So B loses in the above case.</p> <p>Actually, in this game, the first player (so, A) has a winning strategy.<br> What is this winning strategy for A?</p> <p>-Attempted approach: I tried to determine the "winning positions" and "losing positions" for this combinatorial game, but classifying these positions was not so obvious.</p> <p>What is A's winning strategy? Thanks for any help in advance.</p>
<p>This is nim in disguise. I suggest you represent a pile by a sequence of the exponents of the primes, so $270000$ would be represented by $(4,3,4)$. You can then sort the starting numbers in order, as $(4,4,3)$ would play exactly the same. A legal move is replacing a sequence with two sequences such that: the sum of the corresponding positions in the new sequences matches the number in the original sequence and at least one position of the new sequences is greater than zero in both. For example, from $(4,3,4)$ you can move to $(2,3,4)+(2,0,0)$ or to $(3,2,1)+(1,1,3)$ or to $(4,1,2)+(0,2,2)$, but not to $(4,3,0)+(0,0,4)$. You are trying to find the nim values of various positions. </p> <p>I would start with single position sequences, so the original number is a prime power. $(1)$ is a losing position, so it is $*0$. $(2)$ is clearly $*1$ as you have to move to $(1)+(1)$. $(3)$ is losing, so it is $*0$. $(4)$ is $*1$ because you can only move to $*0$. From $(5)$ you can only move to $*1$, so it is $*0$ and losing. A single even pile is winning as you can move to two piles half the size, then mirror your opponent's play, so it is $*1$. A single odd pile is losing and is $*0$.</p> <p>I claim that as first player I can win any game of the form $(a,b)$ with $a\gt 1, b \gt 1$. The point is that an entry of $0$ in a sequence is equivalent to an entry of $1$ as neither can be divided and neither can provide the matching entry. I will move to either $(a,1)+(0,b-1)$ or to $(a-1,1)+(1,b-1)$, whichever makes the large numbers have the same parity. Then I either leave $*0+*0$ or $*1+*1$, both of which are $*0$ and losing for my opponent. </p> <p>I believe a similar argument can be made for longer sequences, but have not fleshed it out. You can kill off one of the entries of the sequence by leaving $0$ or $1$ in that location in one of the daughter sequences and one of them will win.</p>
<p>It seems from your question that you might only be interested in the case of $270000$. One winning move in $270000=2^{4}\times3^{3}\times5^{4}$ is for $A$ to split it into $1350,200$ $=\left(2^{1}\times3^{3}\times5^{2}\right),\left(2^{3}\times3^{0}\times5^{2}\right)$. After that, any splitting $B$ does in one of those factors can be mirrored (splitting a power of $3$ instead of $2$ or vice versa) in the other, and future moves by $B$ can be mirrored as well (the subfactor of $2$ in $1350$ doesn't affect anything since it can never be split). </p> <p>For example, if $B$ moves in the $200$ component to $\left(2^{1}\times3^{0}\times5^{0}\right),\left(2^{2}\times3^{0}\times5^{2}\right)$ then $A$ can move in the $1350$ component to $\left(2^{1}\times3^{1}\times5^{0}\right),\left(2^{0}\times3^{2}\times5^{2}\right)$. Since $A$ can always mirror a move by $B$, $A$ won't be left in a position without a move.</p> <p>There are other winning moves for $A$ like $27000,10$ and $7500,36$, but it is more difficult to prove that they are winning, since there is no longer a straightforward mirroring strategy to follow.</p> <p>If you were asking about the general case of an arbitrary starting number, things get a bit tricky (and please clarify in your question). <a href="https://math.stackexchange.com/users/1827/ross-millikan">Ross Millikan</a>'s <a href="https://math.stackexchange.com/a/1673620/26369">answer</a> contains a strategy for the first player to win when they can in the one and two prime case, but this doesn't generalize well. In particular, $27000=2^{3}\times3^{3}\times5^{3}$ is a losing position (though this is not obvious). It may be essentially the only nontrivial losing position, but I have not quite proved this yet. (I've been working on typing up my results in the general case, but they will be several pages long and I'm not sure an MSE answer is the best venue.)</p>
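<p>The Grundy-value reasoning in both answers can be cross-checked with a short Sprague–Grundy computation; a sketch in which a number on the board is encoded as its tuple of prime exponents (the encoding and helper names are my own choices):</p>

```python
from functools import lru_cache
from itertools import product

def canon(exps):
    # drop exhausted primes and sort, since only the multiset of exponents matters
    return tuple(sorted(e for e in exps if e > 0))

@lru_cache(maxsize=None)
def grundy(exps):
    """Grundy value of a single number with prime-exponent tuple `exps`."""
    options = set()
    for xs in product(*(range(e + 1) for e in exps)):
        ys = tuple(e - x for e, x in zip(exps, xs))
        # legal split N = X*Y with gcd(X,Y) > 1: some prime positive in both halves
        if any(x >= 1 and y >= 1 for x, y in zip(xs, ys)):
            options.add(grundy(canon(xs)) ^ grundy(canon(ys)))
    g = 0
    while g in options:  # minimum excludant
        g += 1
    return g

start = grundy(canon((4, 3, 4)))  # 270000 = 2^4 * 3^3 * 5^4
after_move = grundy(canon((1, 3, 2))) ^ grundy(canon((3, 0, 2)))
# start is nonzero, so 270000 is a first-player win, and after_move is 0,
# so splitting into 1350 = 2*3^3*5^2 and 200 = 2^3*5^2 is a winning move
```

The single-pile values it produces also match the hand analysis: even prime-power exponents give $*1$ and odd ones give $*0$.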
linear-algebra
<p>Some days ago, I was thinking on a problem, which states that <span class="math-container">$$AB-BA=I$$</span> does not have a solution in <span class="math-container">$M_{n\times n}(\mathbb R)$</span> and <span class="math-container">$M_{n\times n}(\mathbb C)$</span>. (Here <span class="math-container">$M_{n\times n}(\mathbb F)$</span> denotes the set of all <span class="math-container">$n\times n$</span> matrices with entries from the field <span class="math-container">$\mathbb F$</span> and <span class="math-container">$I$</span> is the identity matrix.)</p> <p>Although I couldn't solve the problem, I came up with this problem:</p> <blockquote> <p>Does there exist a field <span class="math-container">$\mathbb F$</span> for which that equation <span class="math-container">$AB-BA=I$</span> has a solution in <span class="math-container">$M_{n\times n}(\mathbb F)$</span>?</p> </blockquote> <p>I'd really appreciate your help.</p>
<p>Let $k$ be a field. The first Weyl algebra $A_1(k)$ is the free associative $k$-algebra generated by two letters $x$ and $y$ subject to the relation $$xy-yx=1,$$ which is usually called the Heisenberg or Weyl commutation relation. This is an extremely important example of a non-commutative ring which appears in many places, from the algebraic theory of differential operators to quantum physics (the equation above <em>is</em> Heisenberg's indeterminacy principle, in a sense) to the pinnacles of Lie theory to combinatorics to pretty much anything else.</p> <p>For us right now, this algebra shows up because </p> <blockquote> <p>$A_1(k)$-modules are essentially the same thing as solutions to the equation $PQ-QP=I$ with $P$ and $Q$ endomorphisms of a vector space. </p> </blockquote> <p>Indeed:</p> <ul> <li><p>if $M$ is a left $A_1(k)$-module then $M$ is in particular a $k$-vector space and there is a homomorphism of $k$-algebras $\phi_M:A_1(k)\to\hom_k(M,M)$ to the endomorphism algebra of $M$ viewed as a vector space. Since $x$ and $y$ generate the algebra $A_1(k)$, $\phi_M$ is completely determined by the two endomorphisms $P=\phi_M(x)$ and $Q=\phi_M(y)$; moreover, since $\phi_M$ is an algebra homomorphism, we have $PQ-QP=\phi_M(xy-yx)=\phi_M(1_{A_1(k)})=\mathrm{id}_M$. We thus see that $P$ and $Q$ are endomorphisms of the vector space $M$ which satisfy our desired relation.</p></li> <li><p>Conversely, if $M$ is a vector space and $P$, $Q:M\to M$ are two linear endomorphisms, then one can show more or less automatically that there is a unique algebra morphism $\phi_M:A_1(k)\to\hom_k(M,M)$ such that $\phi_M(x)=P$ and $\phi_M(y)=Q$. 
This homomorphism turns $M$ into a left $A_1(k)$-module.</p></li> <li><p>These two constructions, one going from an $A_1(k)$-module to a pair $(P,Q)$ of endomorphisms of a vector space $M$ such that $PQ-QP=\mathrm{id}_M$, and the other going the other way, are mutually inverse.</p></li> </ul> <p>A conclusion we get from this is that your question </p> <blockquote> <p>for what fields $k$ do there exist $n\geq1$ and matrices $A$, $B\in M_n(k)$ such that $AB-BA=I$?</p> </blockquote> <p>is essentially equivalent to</p> <blockquote> <p>for what fields $k$ does $A_1(k)$ have finite dimensional modules?</p> </blockquote> <p>Now, it is very easy to see that $A_1(k)$ is an infinite dimensional algebra, and that in fact the set $\{x^iy^j:i,j\geq0\}$ of monomials is a $k$-basis. Two of the key properties of $A_1(k)$ are the following:</p> <blockquote> <p><strong>Theorem.</strong> If $k$ is a field of characteristic zero, then $A_1(k)$ is a simple algebra—that is, $A_1(k)$ does not have any non-zero proper bilateral ideals. Its center is trivial: it is simply the $1$-dimensional subspace spanned by the unit element.</p> </blockquote> <p>An immediate corollary of this is the following</p> <blockquote> <p><strong>Proposition.</strong> If $k$ is a field of characteristic zero, then $A_1(k)$ does not have any non-zero finite dimensional modules. Equivalently, there do not exist $n\geq1$ and a pair of matrices $P$, $Q\in M_n(k)$ such that $PQ-QP=I$.</p> </blockquote> <p><em>Proof.</em> Suppose $M$ is a finite dimensional $A_1(k)$-module. Then we have an algebra homomorphism $\phi:A_1(k)\to\hom_k(M,M)$ such that $\phi(a)(m)=am$ for all $a\in A_1(k)$ and all $m\in M$. Since $A_1(k)$ is infinite dimensional and $\hom_k(M,M)$ is finite dimensional (because $M$ is finite dimensional!) the kernel $I=\ker\phi$ cannot be zero —in fact, it must have finite codimension. Now $I$ is a bilateral ideal, so the theorem implies that it must be equal to $A_1(k)$. 
But then $M$ must be zero dimensional, for $1\in A_1(k)$ acts on it at the same time as the identity and as zero. $\Box$</p> <p>This proposition can also be proved by taking traces, as everyone else has observed on this page, but the fact that $A_1(k)$ is simple is an immensely more powerful piece of knowledge (there are examples of algebras which do not have finite dimensional modules and which are not simple, by the way :) )</p> <p><em>Now let us suppose that $k$ is of characteristic $p&gt;0$.</em> What changes in terms of the algebra? The most significant change is </p> <blockquote> <p><strong>Observation.</strong> The algebra $A_1(k)$ is not simple. Its center $Z$ is generated by the elements $x^p$ and $y^p$, which are algebraically independent, so that $Z$ is in fact isomorphic to a polynomial ring in two variables. We can write $Z=k[x^p,y^p]$.</p> </blockquote> <p>In fact, once we notice that $x^p$ and $y^p$ are central elements —and this is proved by a straightforward computation— it is easy to write down non-trivial bilateral ideals. For example, $(x^p)$ works; the key point in showing this is the fact that since $x^p$ is central, the <em>left</em> ideal which it generates coincides with the <em>bilateral</em> ideal, and it is very easy to see that the <em>left</em> ideal is proper and non-zero.</p> <p>Moreover, a little playing with this will give us the following. Not only does $A_1(k)$ have bilateral ideals: it has bilateral ideals of <em>finite codimension</em>. For example, the ideal $(x^p,y^p)$ is easily seen to have codimension $p^2$; more generally, we can pick two scalars $a$, $b\in k$ and consider the ideal $I_{a,b}=(x^p-a,y^p-b)$, which has the same codimension $p^2$. Now this got rid of the obstruction to finding finite-dimensional modules that we had in the characteristic zero case, so we can hope for finite dimensional modules now!</p> <p>More: this actually gives us a method to produce pairs of matrices satisfying the Heisenberg relation. 
We can just pick a proper bilateral ideal $I\subseteq A_1(k)$ of finite codimension, consider the finite dimensional $k$-algebra $B=A_1(k)/I$ and look for finitely generated $B$-modules: every such module provides us with a finite dimensional $A_1(k)$-module and the observations above produce from it pairs of matrices which are related in the way we want.</p> <p>So let us do this explicitly in the simplest case: let us suppose that $k$ is algebraically closed, let $a$, $b\in k$ and let $I=I_{a,b}=(x^p-a,y^p-b)$. The algebra $B=A_1(k)/I$ has dimension $p^2$, with $\{x^iy^j:0\leq i,j&lt;p\}$ as a basis. The exact same proof that the Weyl algebra is simple when the ground field is of characteristic zero proves that $B$ is simple, and in the same way the same proof that proves that the center of the Weyl algebra is trivial in characteristic zero shows that the center of $B$ is $k$; going from $A_1(k)$ to $B$ we have modded out the obstruction to carrying out these proofs. In other words, the algebra $B$ is what's called a (finite dimensional) central simple algebra. Wedderburn's theorem now implies that in fact $B\cong M_p(k)$, as this is the only semisimple algebra of dimension $p^2$ with trivial center. A consequence of this is that there is a unique (up to isomorphism) simple $B$-module $S$, of dimension $p$, and that all other finite dimensional $B$-modules are direct sums of copies of $S$. </p> <p>Now, since $k$ is algebraically closed (much less would suffice) there is an $\alpha\in k$ such that $\alpha^p=a$. 
Let $V=k^p$ and consider the $p\times p$-matrices $$Q=\begin{pmatrix}0&amp;&amp;&amp;&amp;b\\1&amp;0\\&amp;1&amp;0\\&amp;&amp;1&amp;0\\&amp;&amp;&amp;\ddots&amp;\ddots\end{pmatrix}$$ which is all zeroes except for $1$s in the first subdiagonal and a $b$ on the top right corner, and $$P=\begin{pmatrix}-\alpha&amp;1\\&amp;-\alpha&amp;2\\&amp;&amp;-\alpha&amp;3\\&amp;&amp;&amp;\ddots&amp;\ddots\\&amp;&amp;&amp;&amp;-\alpha&amp;p-1\\&amp;&amp;&amp;&amp;&amp;-\alpha\end{pmatrix}.$$ One can show that $P^p=aI$, $Q^p=bI$ and that $PQ-QP=I$, so they provide us with a morphism of algebras $B\to\hom_k(k^p,k^p)$, that is, they turn $k^p$ into a $B$-module. It <em>must</em> be isomorphic to $S$, because the two have the same dimension and there is only one module of that dimension; this determines <em>all</em> finite dimensional modules, which are direct sums of copies of $S$, as we said above.</p> <p>This generalizes the example Henning gave, and in fact one can show that <em>all</em> $p$-dimensional $A_1(k)$-modules can be constructed in this way from quotients by ideals of the form $I_{a,b}$. Doing direct sums for various choices of $a$ and $b$, this gives us lots of finite dimensional $A_1(k)$-modules and, therefore, lots of pairs of matrices satisfying the Heisenberg relation. I think we obtain in this way all the semisimple finite dimensional $A_1(k)$-modules but I would need to think a bit before claiming it for certain.</p> <p>Of course, this only deals with the simplest case. The algebra $A_1(k)$ has non-semisimple finite-dimensional quotients, which are rather complicated (and I think there are plenty of wild algebras among them...) so one can get many, many more examples of modules and of pairs of matrices.</p>
<p>As noted in the comments, this is impossible in characteristic 0.</p> <p>But $M_{2\times 2}(\mathbb F_2)$ contains the example $\pmatrix{0&amp;1\\0&amp;0}, \pmatrix{0&amp;1\\1&amp;0}$.</p> <p>In general, in characteristic $p$, we can use the $p\times p$ matrices $$\pmatrix{0&amp;1\\&amp;0&amp;2\\&amp;&amp;\ddots&amp;\ddots\\&amp;&amp;&amp;0&amp;p-1\\&amp;&amp;&amp;&amp;0}, \pmatrix{0\\1&amp;0\\&amp;\ddots&amp;\ddots\\&amp;&amp;1&amp;0\\&amp;&amp;&amp;1&amp;0}$$ which works even over general unital rings of finite characteristic.</p>
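<p>A quick sketch verifying the general pair with numpy (the matrices are exactly the two displayed above; integer arithmetic followed by reduction mod $p$ stands in for working over $\Bbb F_p$):</p>

```python
import numpy as np

def char_p_pair(p):
    """The p x p pair displayed above: A has 1, 2, ..., p-1 on the
    superdiagonal, B has 1s on the subdiagonal."""
    A = np.diag(np.arange(1, p), k=1)
    B = np.eye(p, k=-1, dtype=int)
    return A, B

for p in (2, 3, 5, 7):
    A, B = char_p_pair(p)
    # over F_p the commutator AB - BA is the identity
    assert ((A @ B - B @ A) % p == np.eye(p, dtype=int)).all()
print("AB - BA = I over F_p for p = 2, 3, 5, 7")
```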
linear-algebra
<blockquote> <p>Why does $\mathbf{a}\times \mathbf{b}$ follow the right hand rule? Why not the left hand rule? Why is it $a b \sin (x)$ times the perpendicular vector? Why is $\sin (x)$ used with the vectors, while $\cos(x)$ appears in the scalar product?</p> </blockquote> <p>So why is the cross product defined in the way that it is? I am mainly interested in the right hand rule part of the definition, as it seems out of reach.</p>
<p>The cross product originally came from the <em>quaternions</em>, which extend the complex numbers with two other 'imaginary units' $j$ and $k$, that have noncommutative multiplication (i.e. you can have $uv \neq vu$), but satisfy the relations</p> <p>$$ i^2 = j^2 = k^2 = ijk = -1 $$</p> <p>AFAIK, this is the exact form in which Hamilton originally conceived them. Presumably the choice that $ijk = -1$ is simply due to the convenience in writing this formula compactly, although it could have just as easily been an artifact of how he arrived at them.</p> <p>Vector algebra comes from separating the quaternions into scalars (the real multiples of $1$) and vectors (the real linear combinations of $i$, $j$, and $k$). The cross product is literally just the vector component of the ordinary product of two vector quaternions. (The scalar component is the negative of the dot product.)</p> <p>The association of $i$, $j$, and $k$ to the unit vectors along the $x$, $y$, and $z$ axes is just lexicographic convenience; you're just associating them in alphabetical order.</p>
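<p>This is easy to see concretely. A minimal sketch of the Hamilton product in pure Python (the sample vectors are arbitrary): multiplying two pure-imaginary quaternions gives scalar part $-\mathbf v\cdot\mathbf w$ and vector part $\mathbf v\times\mathbf w$.</p>

```python
def qmul(q, r):
    """Hamilton product of quaternions represented as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

v = (0, 1, 2, 3)   # pure quaternion for the vector (1, 2, 3)
w = (0, 4, 5, 6)   # pure quaternion for the vector (4, 5, 6)
print(qmul(v, w))  # (-32, -3, 6, -3): scalar part -v.w, vector part v x w
```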
<p>If the right hand rule seems too arbitrary to you, use a definition of the cross product that doesn't make use of it (explicitly). Here's one way to construct the cross product:</p> <p>Recall that the (signed) volume of a <a href="https://en.wikipedia.org/wiki/Parallelepiped">parallelepiped</a> in $\Bbb R^3$ with sides $a, b, c$ is given by</p> <p>$$\textrm{Vol} = \det(a,b,c)$$</p> <p>where $\det(a,b,c) := \begin{vmatrix}a_1 &amp; b_1 &amp; c_1 \\ a_2 &amp; b_2 &amp; c_2 \\ a_3 &amp; b_3 &amp; c_3\end{vmatrix}$.</p> <p>Now let's fix $b$ and $c$ and allow $a$ to vary. Then what is the volume in terms of $a = (a_1, a_2, a_3)$? Let's see:</p> <p>$$\begin{align}\textrm{Vol} = \begin{vmatrix}a_1 &amp; b_1 &amp; c_1 \\ a_2 &amp; b_2 &amp; c_2 \\ a_3 &amp; b_3 &amp; c_3\end{vmatrix} &amp;= a_1\begin{vmatrix} b_2 &amp; c_2 \\ b_3 &amp; c_3\end{vmatrix} - a_2\begin{vmatrix} b_1 &amp; c_1 \\ b_3 &amp; c_3\end{vmatrix} + a_3\begin{vmatrix} b_1 &amp; c_1 \\ b_2 &amp; c_2\end{vmatrix} \\ &amp;= a_1(b_2c_3-b_3c_2)+a_2(b_3c_1-b_1c_3)+a_3(b_1c_2-b_2c_1) \\ &amp;= (a_1, a_2, a_3)\cdot (b_2c_3-b_3c_2,b_3c_1-b_1c_3,b_1c_2-b_2c_1)\end{align}$$</p> <p>So apparently the volume of a parallelepiped will always be the vector $a$ dotted with this interesting vector $(b_2c_3-b_3c_2,b_3c_1-b_1c_3,b_1c_2-b_2c_1)$. We call that vector the cross product and denote it $b\times c$.</p> <hr> <p>From the above construction we can define the cross product in either of two equivalent ways:</p> <p><strong>Implicit Definition</strong><br> Let $b,c\in \Bbb R^3$. Then define the vector $d = b\times c$ by $$a\cdot d = \det(a,b,c),\qquad \forall a\in\Bbb R^3$$</p> <p><strong>Explicit Definition</strong><br> Let $b=(b_1,b_2,b_3)$, $c=(c_1,c_2,c_3)$. Then define the vector $b\times c$ by $$b\times c = (b_2c_3-b_3c_2,b_3c_1-b_1c_3,b_1c_2-b_2c_1)$$</p> <hr> <p>Now you're probably wondering where that arbitrary right-handedness went. Surely it must be hidden in there somewhere. It is.
It's in the ordered basis I'm implicitly using to give the coordinates of each of my vectors. If you choose a right-handed coordinate system, then you'll get a right-handed cross product. If you choose a left-handed coordinate system, then you'll get a <em>left</em>-handed cross product. So this definition essentially shifts the choice of chirality onto the basis for the space. This is actually rather pleasing (at least to me).</p> <hr> <p>The other properties of the cross product are readily verified from this definition. For instance, try checking that $b\times c$ is orthogonal to both $b$ and $c$. If you know the properties of determinants it should be immediately clear. Another property of the cross product, $\|b\times c\| = \|b\|\|c\|\sin(\theta)$, is easily determined by the geometry of our construction. Draw a picture and see if you can verify this one.</p>
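<p>The two definitions above are easy to test against each other numerically; a small pure-Python sketch (the sample vectors are arbitrary):</p>

```python
def cross(b, c):
    """Explicit definition of b x c."""
    return (b[1]*c[2] - b[2]*c[1],
            b[2]*c[0] - b[0]*c[2],
            b[0]*c[1] - b[1]*c[0])

def det3(a, b, c):
    """det(a, b, c) with the vectors as columns, expanded along the first column."""
    return (a[0]*(b[1]*c[2] - b[2]*c[1])
          - a[1]*(b[0]*c[2] - b[2]*c[0])
          + a[2]*(b[0]*c[1] - b[1]*c[0]))

a, b, c = (1, 2, 3), (4, 5, 6), (7, 8, 10)
# implicit definition: a . (b x c) = det(a, b, c) for every a
assert sum(x*y for x, y in zip(a, cross(b, c))) == det3(a, b, c)
# orthogonality: b x c is perpendicular to b (det with a repeated column is 0)
assert sum(x*y for x, y in zip(b, cross(b, c))) == 0
print("a.(b x c) = det(a,b,c) =", det3(a, b, c))
```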
combinatorics
<p>In one of his interviews, <a href="https://www.youtube.com/shorts/-qvC0ISkp1k" rel="noreferrer">Clip Link</a>, Neil DeGrasse Tyson discusses a coin toss experiment. It goes something like this:</p> <ol> <li>Line up 1000 people, each given a coin, to be flipped simultaneously</li> <li>Ask each one to flip if <strong>heads</strong> the person can continue</li> <li>If the person gets <strong>tails</strong> they are out</li> <li>The game continues until <strong>1*</strong> person remains</li> </ol> <p>He says the &quot;winner&quot; should not feel too surprised or lucky because there would be another winner if we re-run the experiment! This leads him to talk about our place in the Universe.</p> <p>I realised, however, that there need <strong>not be a winner at all</strong>, and that the winner should feel lucky and be surprised! (Because the last, say, three people can all flip tails)</p> <p>Then, I ran an experiment by writing a program with the following parameters:</p> <ol> <li><strong>Bias of the coin</strong>: 0.0001 - 0.8999 (8999 values)</li> <li><strong>Number of people</strong>: 10000</li> <li><strong>Number of times experiment run per Bias</strong>: 1000</li> </ol> <p>I plotted the <strong>Probability of 1 Winner</strong> vs <strong>Bias</strong></p> <p><a href="https://i.sstatic.net/iVUuXvZj.png" rel="noreferrer"><img src="https://i.sstatic.net/iVUuXvZj.png" alt="enter image description here" /></a></p> <p>The plot was interesting with <strong>zig-zag</strong> for low bias (for heads) and a smooth one after <strong>p = 0.2</strong>. 
(Also, there is a 73% chance of a single winner for a fair coin).</p> <p><strong>Is there an analytic expression for the function <span class="math-container">$$f(p) = (\textrm{probability of $1$ winner with a coin of bias $p$}) \textbf{?}$$</span></strong></p> <p>I tried doing something and got here: <span class="math-container">$$ f(p)=p\left(\sum_{i=0}^{e n d} X_i=N-1\right) $$</span> where <span class="math-container">$X_i=\operatorname{Binomial}\left(N-\sum_{j=0}^{i-1} X_j, p\right)$</span> and <span class="math-container">$X_0=\operatorname{Binomial}(N, p)$</span></p>
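<p>For reference, the probability of ending with exactly one winner is easy to estimate by direct simulation; a minimal Monte Carlo sketch (the parameters are illustrative):</p>

```python
import random

def unique_winner(n, p, rng):
    """One run: everyone flips; heads survive. True iff exactly one remains."""
    alive = n
    while alive > 1:
        alive = sum(rng.random() < p for _ in range(alive))
    return alive == 1

def estimate(n, p, trials, seed=0):
    rng = random.Random(seed)
    return sum(unique_winner(n, p, rng) for _ in range(trials)) / trials

print(estimate(1000, 0.5, 2000))  # close to 0.72 for a fair coin
```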
<p>It is known (to a nonempty set of humans) that when <span class="math-container">$p=\frac12$</span>, there is no limiting probability. Presumably the analysis can be (might have been) extended to other values of <span class="math-container">$p$</span>. Even more surprisingly, the reason I know this is because it ends up having an application in number theory! In any case, a reference for this limit's nonexistence is <a href="https://www.kurims.kyoto-u.ac.jp/%7Ekyodo/kokyuroku/contents/pdf/1274-9.pdf" rel="noreferrer">Primitive roots: a survey</a> by Li and Pomerance (see the section &quot;The source of the oscillation&quot; starting on page 79). As the number of coins increases to infinity, the probability of winning (when <span class="math-container">$p=\frac12$</span>) oscillates between about <span class="math-container">$0.72134039$</span> and about <span class="math-container">$0.72135465$</span>, a difference of about <span class="math-container">$1.4\times10^{-5}$</span>.</p>
<p>There is a pretty simple formula for the probability of a unique winner, although it involves an infinite sum. To derive the formula, suppose that there are <span class="math-container">$n$</span> people, and that you continue tossing until everyone is out, since they all got tails. Then you want the probability that at the last time before everyone was out, there was just one person left. If <span class="math-container">$p$</span> is the probability that the coin-flip is heads, to find the probability that this happens after <span class="math-container">$k+1$</span> steps with just person <span class="math-container">$i$</span> left, you can multiply the probability that person <span class="math-container">$i$</span> survives for <span class="math-container">$k$</span> steps, which is <span class="math-container">$p^k$</span>, by the probability that he is out on the <span class="math-container">$k+1^{\rm st}$</span> step, which is <span class="math-container">$1-p$</span>. You also have to multiply this by the probability that none of the other <span class="math-container">$n-1$</span> people survive for <span class="math-container">$k$</span> steps, which is <span class="math-container">$1-p^k$</span> for each of them. 
Multiplying these probabilities together gives <span class="math-container">$$ p^k (1-p) (1-p^k)^{n-1} $$</span> and summing over the <span class="math-container">$n$</span> possible choices for <span class="math-container">$i$</span> and all <span class="math-container">$k$</span> gives <span class="math-container">$$ f(p)= (1-p)\sum_{k\ge 1} n p^k (1 - p^k)^{n-1}.\qquad\qquad(*) $$</span> (I am assuming that <span class="math-container">$n&gt;1$</span>, so <span class="math-container">$k=0$</span> is impossible.)</p> <p>Now, the summand in (*) can be approximated by <span class="math-container">$n p^k \exp(-n p^k)$</span>, so if <span class="math-container">$n=p^{-L-\epsilon}$</span>, <span class="math-container">$L\ge 0$</span> large and integral, <span class="math-container">$0\le \epsilon \le1$</span>, <span class="math-container">$f(p)$</span> will be about <span class="math-container">$$ (1-p) \sum_{j\ge 1-L} p^{j-\epsilon} \exp(-p^{j-\epsilon}) $$</span> and we can further approximate this by summing over all integers: if <span class="math-container">$L$</span> becomes large and <span class="math-container">$\epsilon$</span> approaches some <span class="math-container">$0\le \delta \le 1$</span>, <span class="math-container">$f(p)$</span> will approach <span class="math-container">$$ g(\delta):=(1-p)\sum_{j\in\Bbb Z} p^{j-\delta} \exp(-p^{j-\delta}). $$</span> The average of this over <span class="math-container">$\delta$</span> has the simple formula <span class="math-container">$$ \int_0^1 g(\delta) d\delta = (1-p)\int_{\Bbb R} p^x \exp(-p^x) dx = -\frac{1-p}{\log p}, $$</span> which is <span class="math-container">$1/(2 \log 2)\approx 0.72134752$</span> if <span class="math-container">$p=\frac 1 2$</span>, but as others have pointed out, <span class="math-container">$g(\delta)$</span> oscillates, so the large-<span class="math-container">$n$</span> limit for <span class="math-container">$f(p)$</span> will not exist. 
You can expand <span class="math-container">$g$</span> in Fourier series to get <span class="math-container">$$ g(\delta)=-\frac{1-p}{\log p}\left(1+2\sum_{n\ge 1} \Re\left(e^{2\pi i n \delta} \,\,\Gamma(1 + \frac{2\pi i n}{\log p})\right) \right). $$</span> Since <span class="math-container">$\Gamma(1+ri)$</span> falls off exponentially as <span class="math-container">$|r|$</span> becomes large, the peak-to-peak amplitude of the largest oscillation will be <span class="math-container">$$ h(p):=-\frac{4(1-p)}{\log p} \left|\Gamma(1+\frac{2\pi i}{\log p})\right|, $$</span> which, as has already been pointed out, is <span class="math-container">$\approx 1.426\cdot 10^{-5}$</span> for <span class="math-container">$p=1/2$</span>. For some smaller <span class="math-container">$p$</span> it will be larger, although for very small <span class="math-container">$p$</span>, it will decrease as the value of the gamma function approaches 1 and <span class="math-container">$|\log p|$</span> increases. This doesn't mean that the overall oscillation disappears, though, since other terms in the Fourier series will become significant. To illustrate this, here are some graphs of <span class="math-container">$g(\delta)$</span>. From top to bottom, the <span class="math-container">$p$</span> values are <span class="math-container">$0.9$</span>, <span class="math-container">$0.5$</span>, <span class="math-container">$0.2$</span>, <span class="math-container">$0.1$</span>, <span class="math-container">$10^{-3}$</span>, <span class="math-container">$10^{-6}$</span>, <span class="math-container">$10^{-12}$</span>, and <span class="math-container">$10^{-24}$</span>. As <span class="math-container">$p$</span> becomes small, <span class="math-container">$$ g(0)=(1-p)\sum_{j\in\Bbb Z} p^{j} \exp(-p^{j}) \ \ \qquad {\rm approaches} \ \ \qquad p^0 \exp(-p^0) = \frac 1 e. 
$$</span></p> <p><a href="https://i.sstatic.net/KX5RNjGy.png" rel="noreferrer"><img src="https://i.sstatic.net/KX5RNjGy.png" alt="Graphs of g(delta)" /></a></p> <p><strong>References</strong></p> <ul> <li><a href="https://research.tue.nl/en/publications/on-the-number-of-maxima-in-a-discrete-sample" rel="noreferrer">&quot;On the number of maxima in a discrete sample&quot;, J. J. A. M. Brands, F. W. Steutel, and R. J. G. Wilms, Memorandum COSOR 92-16, 1992, Technische Universiteit Eindhoven.</a></li> <li><a href="https://doi.org/10.1214/aoap/1177005360" rel="noreferrer">&quot;The Asymptotic Probability of a Tie for First Place&quot;, Bennett Eisenberg, Gilbert Stengle, and Gilbert Strang, <em>The Annals of Applied Probability</em> <strong>3</strong>, #3 (August 1993), pp. 731 - 745.</a></li> <li><a href="https://www.jstor.org/stable/2325134" rel="noreferrer">Problem E3436, &quot;Tossing Coins Until All Show Heads&quot;, solution, Lennart Råde, Peter Griffin, O. P. Lossers, <em>The American Mathematical Monthly</em>, <strong>101</strong>, #1 (January 1994), pp. 78-80.</a></li> </ul>
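<p>Formula (*) converges quickly and is easy to evaluate directly; a sketch (the truncation tolerance is illustrative):</p>

```python
import math

def f(p, n):
    """(1-p) * sum_{k>=1} n p^k (1 - p^k)^(n-1), truncated when terms vanish."""
    total, k = 0.0, 1
    while n * p**k > 1e-18:          # each remaining term is below n p^k
        total += n * p**k * (1 - p**k)**(n - 1)
        k += 1
    return (1 - p) * total

print(f(0.5, 1000))                   # about 0.7213
print(-(1 - 0.5) / math.log(0.5))     # the average of g: 1/(2 log 2) = 0.72134752...
```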
linear-algebra
<p>In his online lectures on Computational Science, Prof. Gilbert Strang often interprets divergence as the "transpose" of the gradient, for example <a href="http://youtu.be/gYME3EbIqV4?t=31m50s">here</a> (at 32:30), however he does not explain the reason.</p> <p>How is it that the divergence can be interpreted as the transpose of the gradient?</p>
<p>Here is a considerably less sophisticated point of view than some of the other answers. Recall that the dot product of vectors can be obtained by transposing the first vector. That is, $$ \textbf{v}^T \textbf{w} \;=\; \begin{bmatrix}v_x &amp; v_y &amp; v_z\end{bmatrix}\begin{bmatrix}w_x \\ \\ w_y \\ \\w_z\end{bmatrix} \;=\; v_x w_x + v_y w_y + v_z w_z \;=\; \textbf{v}\cdot \textbf{w}. $$ (Here we are thinking of column vectors as being the "standard" vectors.)</p> <p>In the same way, divergence can be thought of as involving the transpose of the $\nabla$ operator. First recall that, if $g$ is a real-valued function, then the gradient of $g$ is given by the formula $$ \nabla g \;=\; \begin{bmatrix}\partial_x \\ \\ \partial_y \\ \\ \partial_z \end{bmatrix}g \;=\; \begin{bmatrix}\partial_xg \\ \\ \partial_yg \\ \\ \partial_zg \end{bmatrix} $$ Similarly, if $F=(F_x,F_y,F_z)$ is a vector field, then the divergence of $F$ is given by the formula $$ \nabla^T F \;=\; \begin{bmatrix}\partial_x &amp; \partial_y &amp; \partial_z\end{bmatrix}\begin{bmatrix}F_x \\ \\ F_y \\ \\ F_z\end{bmatrix} \;=\; \partial_xF_x + \partial_yF_y + \partial_zF_z. $$ Thus, the divergence corresponds to the transpose $\nabla^T$ of the $\nabla$ operator.</p> <p>This transpose notation is often advantageous. For example, the formula $$ \nabla^T (gF) \;=\; (\nabla^T g)F \,+\, g(\nabla^T F) $$ (where $\nabla^T g$ is the transpose of the gradient of $g$) seems much more obvious than $$ \text{div} (gF) \;=\; (\text{grad } g)\cdot F \,+\, g\;\text{div } F. $$ Indeed, this is the formula that leads to the integration by parts used in the video: $$ \int\!\!\int g (\nabla^T F)\,dx\,dy \;=\; -\int\!\!\int (\nabla g)^T F \,dx\,dy. $$</p>
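<p>The product rule $\nabla^T (gF) = (\nabla g)^T F + g(\nabla^T F)$ can be checked numerically; a sketch with central differences (the test fields are arbitrary smooth choices):</p>

```python
h = 1e-6

def partial(f, i, p):
    """Central-difference partial derivative of f at point p, in direction i."""
    q1, q2 = list(p), list(p)
    q1[i] += h
    q2[i] -= h
    return (f(q1) - f(q2)) / (2 * h)

def div(F, p):   # nabla^T F
    return sum(partial(lambda q, i=i: F(q)[i], i, p) for i in range(3))

def grad(g, p):  # nabla g
    return [partial(g, i, p) for i in range(3)]

g = lambda q: q[0] * q[1] + q[2] ** 2                 # arbitrary scalar field
F = lambda q: [q[1] * q[2], q[0] ** 2, q[0] + q[2]]   # arbitrary vector field
gF = lambda q: [g(q) * c for c in F(q)]

p0 = [0.3, -1.2, 0.7]
lhs = div(gF, p0)
rhs = sum(a * b for a, b in zip(grad(g, p0), F(p0))) + g(p0) * div(F, p0)
print(abs(lhs - rhs) < 1e-6)  # True
```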
<p>A "dual pair" in functional analysis consists of a topological vector space E and its dual space $E&#39;$ of continuous linear functionals, or some subspace of this.</p> <p>That is for real vector spaces, for every element $e \in E$ and $e&#39; \in E&#39;$, we can write $$ \langle e, e&#39; \rangle \in \mathbb{R} $$ Example: Let $E$ be a Hilbert space, then $E = E&#39;$ and the dual pairing is given by the scalar product.</p> <p>In the case at hand we have two function spaces and the dual pairing is defined to be $$ \int_{\Omega} u(x, y) v(x, y) d x d y $$ When you have some operator $$ T: E \to E $$ it is often possible to define the "transposed operator" T' to be the operator $$ T&#39;: E&#39; \to E&#39; $$ by the requirement that $$ \langle T e, e&#39; \rangle = \langle e, T&#39; e&#39; \rangle $$ for all e, e'. In the context of Hilbert spaces, it is more common to talk about "adjoint operators". The name "transpose" is motivated by the fact that for linear operators on finite dimensional vector spaces, the transpose is given by the transposed (conjugate, for complex ground field) matrix of the matrix that represents $T$ with respect to a fixed basis.</p> <p>In the case at hand, when we write down $$ \int_{\Omega} (- div \; grad u(x, y)) v(x, y) d x d y $$ you'll see that this is the same as $$ \int_{\Omega} (grad \; u(x, y)) \cdot (grad \; v(x, y)) d x d y $$ by integration by parts, if the boundary terms are zero. The $\cdot$ denotes the canonical scalar product of vectors in $\mathbb{R}^n$. So, if the boundary terms are zero, we have</p> <p>$$ \langle - div \; e, e&#39; \rangle = \langle e, grad \; e&#39; \rangle $$ where - strictly speaking - the dual pairing on each side is different, because the first is a dual pairing of functions with values in $\mathbb{R}$, while the second is for functions with values in $\mathbb{R}^2$. But neglecting this technical detail, the operator grad is in this sense the transposed operator of the operator div.</p>
combinatorics
<p>I need help to answer the following question:</p> <blockquote> <p>Is it possible to place 26 points inside a rectangle that is $20\, cm$ by $15\,cm$ so that the distance between every pair of points is greater than $5\, cm$?</p> </blockquote> <p>I haven't learned any mathematical ways to find a solution; whether it maybe yes or no, to a problem like this so it would be very helpful if you could help me with this question.</p>
<p>No, it is not. If we assume that $P_1,P_2,\ldots,P_{26}$ are $26$ distinct points inside the given rectangle, such that $d(P_i,P_j)\geq 5\,cm$ for any $i\neq j$, we may consider $\Gamma_1,\Gamma_2,\ldots,\Gamma_{26}$ as the circles centered at $P_1,P_2,\ldots,P_{26}$ with radius $2.5\,cm$. We have that such circles are disjoint and fit inside a $25\,cm \times 20\,cm$ rectangle. That is impossible, since the total area of $\Gamma_1,\Gamma_2,\ldots,\Gamma_{26}$, namely $26\cdot 6.25\pi\approx 510.5\,cm^2$, exceeds the $500\,cm^2$ area of that rectangle.</p> <p><a href="https://i.sstatic.net/waP4m.png" rel="noreferrer"><img src="https://i.sstatic.net/waP4m.png" alt="enter image description here"></a></p> <p>Highly non-trivial improvement: it is impossible to fit $25$ points inside a $20\,cm\times 15\,cm$ rectangle in such a way that distinct points are separated by a distance $\geq 5\,cm$.</p> <p><a href="https://i.sstatic.net/JVXsh.png" rel="noreferrer"><img src="https://i.sstatic.net/JVXsh.png" alt="enter image description here"></a></p> <p>Proof: the original rectangle can be covered by $24$ hexagons with diameter $(5-\varepsilon)\,cm$. Assuming it is possible to place $25$ points according to the given constraints, by the pigeonhole principle / Dirichlet's box principle at least two distinct points inside the rectangle lie in the same hexagon, so they have a distance $\leq (5-\varepsilon)\,cm$, contradiction.</p> <p>Further improvement: <a href="https://i.sstatic.net/Cfuil.png" rel="noreferrer"><img src="https://i.sstatic.net/Cfuil.png" alt="enter image description here"></a></p> <p>the depicted partitioning of a $15\,cm\times 20\,cm$ rectangle $R$ into $22$ parts with diameter $(5-\varepsilon)\,cm$ proves that we may fit at most $\color{red}{22}$ points in $R$ in such a way that they are $\geq 5\,cm$ from each other.</p>
<p>Jack D'Aurizio's answer is nice, but I think the following is probably the solution intended by whoever posed the puzzle:</p> <p>Note that $26=5^2+1$. So perhaps we can divide our $20\times15$ rectangle into $5^2$ pieces, apply the pigeonhole principle, and be done. That would require that our pieces each have diameter at most 5. Well, what's the most obvious way to divide a $20\times15$ rectangle into $5^2$ pieces? Answer: chop it into $5\times5$ rectangles, each of size $4\times3$. And lo, the diagonal of each of those rectangles has length exactly 5 and we're done.</p>
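<p>The pigeonhole argument can be demonstrated directly; a small sketch (pure Python; the random points are only illustrative, the assertion holds for <em>any</em> 26 points):</p>

```python
import math
import random

# 25 cells of size 4 x 3 tile the 20 x 15 rectangle; each has diameter 5
assert math.hypot(4, 3) == 5.0

def same_cell_pair(points):
    """With 26 points and 25 cells, some cell must receive two points."""
    seen = {}
    for (x, y) in points:
        key = (min(int(x // 4), 4), min(int(y // 3), 4))
        if key in seen:
            return seen[key], (x, y)
        seen[key] = (x, y)

rng = random.Random(0)
pts = [(rng.uniform(0, 20), rng.uniform(0, 15)) for _ in range(26)]
p, q = same_cell_pair(pts)
print(math.dist(p, q))  # at most 5, the cell diagonal
```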
combinatorics
<p>Let $H_n$ be the $n^{th}$ harmonic number,</p> <p>$$ H_n = \sum_{i=1}^{n} \frac{1}{i} $$</p> <p>Question: Calculate the following</p> <p>$$\sum_{j=1}^{n} H_j^2.$$</p> <p>I have attempted a generating function approach but could not solve this. </p>
<p>This is an interesting exercise in partial summation. First, we have: $$\begin{eqnarray*}\sum_{j=1}^{n}H_j &amp;=&amp; n H_n-\sum_{j=1}^{n-1} \frac{j}{j+1} = n H_n - (n-1)+\sum_{j=1}^{n-1}\frac{1}{j+1}\\&amp;=&amp; n H_n-n+1+(H_n-1) = (n+1)\,H_n-n\tag{1}\end{eqnarray*} $$ hence: $$\begin{eqnarray*}\color{red}{\sum_{j=1}^n H_j^2} &amp;=&amp; \left((n+1)H_n^2-nH_n\right)-\sum_{j=1}^{n-1}\frac{(j+1)\,H_j-j}{j+1}\\&amp;=&amp;\left((n+1)H_n^2-nH_n\right)-\sum_{j=1}^{n-1}H_j+(n-1)-(H_n-1)\\&amp;=&amp;(n+1)\,H_n^2-nH_n-(n+1)\,H_n+n+H_n+(n-1)-H_n+1\\&amp;=&amp;\color{red}{(n+1)\,H_n^2-(2n+1)\,H_n+2n\phantom{\sum_{j=0}^{+\infty}}}.\tag{2}\end{eqnarray*}$$ Notice the deep analogy with: $$\int \log^2 x\,dx = x\log^2 x -2x\log x +2x.$$</p>
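<p>Identity $(2)$ is easy to verify with exact rational arithmetic; a sketch:</p>

```python
from fractions import Fraction

N = 30
H = [Fraction(0)]                      # H[0] = 0, H[n] = 1 + 1/2 + ... + 1/n
for i in range(1, N + 1):
    H.append(H[-1] + Fraction(1, i))

lhs = Fraction(0)
for n in range(1, N + 1):
    lhs += H[n] ** 2                   # running sum of H_j^2
    assert lhs == (n + 1) * H[n] ** 2 - (2 * n + 1) * H[n] + 2 * n
print("identity (2) verified for n = 1, ..., 30")
```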
<p><span class="math-container">$$\sum_{k =1}^xH_k^2 = (x+1)H_x^2 - (2x+1)H_x + 2x$$</span> <a href="https://math.stackexchange.com/questions/4305791/the-harmonic-series-sequence/4305799#4305799">Before we prove the above, consider the following (swapping the order of summation): <span class="math-container">$$\begin{align*} \color{blue}{\sum_{k =1}^{x}H_k} &amp;=\sum_{k =1}^{x}\sum_{n =1}^{k}\frac 1{n}=\sum_{n =1}^{x}\frac 1{n}\sum_{k =n}^{x}1 = \sum_{n =1}^{x}\frac 1{n}(x-n+1)\\ &amp;=\sum_{n =1}^{x}\frac {(x+1)-n}{n} =(x+1)\sum_{n =1}^{x}\frac 1{n} - \sum_{n =1}^{x}1 = \color{blue}{(x+1)H_x - x} \end{align*}$$</span></a> Now consider Abel's summation formula, <span class="math-container">$$\sum_{k =1}^xa_kb_k = s_xb_x- \sum_{k = 1}^{x-1}s_k(b_{k+1}-b_k)\text{ where } s_u =\sum_{k =1}^ua_k,$$</span> and put <span class="math-container">$a_k = b_k = H_k$</span>:</p> <p><span class="math-container">$$\begin{align*} \color{blue}{\sum_{k =1}^xH^2_k} &amp; = s_xH_x- \sum_{k =1}^{x-1}s_k(H_{k+1}- H_k)\\ &amp; =((x+1)H_x-x)H_x -\sum_{k =1}^{x-1}[(k+1)H_k - k]\left(\frac 1{k+1}\right)\\ &amp; = (x+1)H^2_x - xH_x - \sum_{k =1}^{x-1}H_k + \sum_{k =1}^{x-1}\left(1-\frac {1}{k+1}\right)\\ &amp; = (x+1)H^2_x-xH_x -xH_{x-1} + 2x - 2- \sum_{k =2}^{x}\frac 1{k}\\ &amp; = \color{blue}{(x+1)H^2_x-(2x+1)H_x +2x} \end{align*}$$</span></p> <p><a href="https://i.sstatic.net/3ORw2.jpg" rel="nofollow noreferrer">image</a></p>
linear-algebra
<p>Let $A=\begin{bmatrix}a &amp; b\\ c &amp; d\end{bmatrix}$.</p> <p>How could we show that $ad-bc$ is the area of a parallelogram with vertices $(0, 0),\ (a, b),\ (c, d),\ (a+c, b+d)$?</p> <p>Are the areas of the following parallelograms the same? </p> <p>$(1)$ parallelogram with vertices $(0, 0),\ (a, b),\ (c, d),\ (a+c, b+d)$.</p> <p>$(2)$ parallelogram with vertices $(0, 0),\ (a, c),\ (b, d),\ (a+b, c+d)$.</p> <p>$(3)$ parallelogram with vertices $(0, 0),\ (a, b),\ (c, d),\ (a+d, b+c)$.</p> <p>$(4)$ parallelogram with vertices $(0, 0),\ (a, c),\ (b, d),\ (a+d, b+c)$.</p> <p>Thank you very much.</p>
<p>Spend a little time with this figure due to <a href="http://en.wikipedia.org/wiki/Solomon_W._Golomb" rel="noreferrer">Solomon W. Golomb</a> and enlightenment is not far off:</p> <p><img src="https://i.sstatic.net/gCaz3.png" alt="enter image description here" /></p> <p>(Appeared in <em>Mathematics Magazine</em>, March 1985.)</p>
<p><img src="https://i.sstatic.net/PFTa4.png" alt="enter image description here"></p> <p>I know I'm extremely late with my answer, but there's a pretty straightforward geometrical approach to explaining it. I'm surprised no one has mentioned it. It does have a shortcoming though - it does not explain why the area flips its sign, because there's no such thing as negative area in geometry, just like you can't have a negative amount of apples (unless you are an economics major).</p> <p>It's basically:</p> <pre><code> Parallelogram = Rectangle - Extra Stuff. </code></pre> <p>If you simplify $(c+a)(b+d)-2bc-cd-ab$ you will get $ad-bc$.</p> <p>Also interesting to note that if you swap the vectors' places then you get a negative (opposite of what $ad-bc$ would produce) area, which is basically:</p> <pre><code> -Parallelogram = Rectangle - (2*Rectangle - Extra Stuff) </code></pre> <p>Or more concretely:</p> <p>$(c+a)(b+d) - [2*(c+a)(b+d) - (2bc+cd+ab)]$</p> <p>Also it's $bc-ad$, when simplified.</p> <p>The sad thing is that there's no good geometrical reason why the sign flips; you will have to turn to linear algebra to understand that.</p> <p>As others have noted, the determinant is the scale factor of a linear transformation, so a negative scale factor indicates a reflection.</p>
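<p>A quick numerical cross-check of the claim (pure Python; the sample values are arbitrary): the shoelace formula applied to the parallelogram spanned by $(a,b)$ and $(c,d)$ gives $ad-bc$, and traversing the vertices in the opposite order flips the sign.</p>

```python
def shoelace(pts):
    """Signed area of a polygon from its vertices, in traversal order."""
    n = len(pts)
    return sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
               for i in range(n)) / 2

a, b, c, d = 2, 1, 1, 3
pts = [(0, 0), (a, b), (a + c, b + d), (c, d)]
print(shoelace(pts), a * d - b * c)  # 5.0 5
print(shoelace(pts[::-1]))           # -5.0: reversed orientation flips the sign
```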
probability
<p>A teenage acquaintance of mine lamented:</p> <blockquote> <p>Every one of my friends is better friends with somebody else.</p> </blockquote> <p>Thanks to my knowledge of mathematics I could inform her that she's not alone and $e^{-1}\approx 37\%$ of all people could be expected to be in the same situation, which I'm sure cheered her up immensely.</p> <p>This number assumes that friendships are distributed randomly, such that each person in a population of $n$ chooses a best friend at random. Then the probability that any given person is not anyone's best friend is $(1-\frac{1}{n-1})^{n-1}$, which tends to $e^{-1}$ for large $n$.</p> <p>Afterwards I'm not sure this is actually the best way to analyze the claim. Perhaps instead we should imagine assigning a random "friendship strength" to each edge in the complete graph on $n$ vertices, in which case my friend's lament would be "every vertex I'm connected to has an edge with higher weight than my edge to it". This is not the same as "everyone choses a best friend at random", because it guarantees that there's at least one pair of people who're mutually best friends, namely the two ends of the edge with the highest weight.</p> <p>(Of course, some people are not friends at all; we can handle that by assigning low weights to their mutual edges. As long as everyone has at least one actual friend, this won't change who are whose best friends).</p> <p>(It doesn't matter which distribution the friendship weights are chosen by, as long as it's continuous -- because all that matters is the <em>relative</em> order between the weights. 
Equivalently, one may simply choose a random total order on the $n(n-1)/2$ edges in the complete graph).</p> <p><strong>In this model, what is the probability that a given person is not anyone's best friend?</strong></p> <p>By linearity of expectation, the probability of being <em>mutually</em> best friends with anyone is $\frac{n-1}{2n-3}\approx\frac 12$ (much better than in the earlier model), but that doesn't take into account the possibility that some poor soul has me as <em>their</em> best friend whereas I myself have other, better friends. Linearity of expectation doesn't seem to help here -- it tells me that the <em>expected</em> number of people whose best friend I am is $1$, but not the probability of this number being $0$.</p> <hr> <p><em>(Edit: Several paragraphs of numerical results now moved to a significantly expanded answer)</em></p>
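<p>A direct Monte Carlo estimate of this probability in the friendship-strength model (a sketch; $n$ and the trial count are illustrative):</p>

```python
import random

def share_not_best(n, trials, seed=0):
    """Fraction of people who are nobody's best friend, with i.i.d.
    uniform strengths on all C(n, 2) edges."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        w = {}
        for i in range(n):
            for j in range(i + 1, n):
                w[i, j] = w[j, i] = rng.random()
        # the set of people who are someone's best friend
        best = {max((k for k in range(n) if k != i), key=lambda k: w[i, k])
                for i in range(n)}
        total += n - len(best)
    return total / (n * trials)

print(share_not_best(50, 200))  # roughly 0.365 for n = 50
```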
<p>The probability for large $n$ is $e^{-1}$ in the friendship-strength model too. I can't even begin to <em>explain</em> why this is, but I have strong numerical evidence for it. More precisely, if $p(n)$ is the probability that someone in a population of $n$ isn't anyone's best friend, then it looks strongly like</p> <p>$$ p(n) = \Bigl(1-\frac{1}{2n-7/3}\Bigr)^{2n-7/3} + O(n^{-3})$$ as $n$ tends to infinity.</p> <p>The factor of $2$ may hint at some asymptotic connection between the two models, but it has to be subtle, because the best-friend relation certainly doesn't look the same in the two models -- as noted in the question, in the friendship-strength model we expect half of all people to be <em>mutually</em> best friends with someone, whereas in the model where everyone chooses a best friend independently, the <em>total</em> expected number of mutual friendships is only $\frac12$.</p> <p>The offset $7/3$ was found experimentally, but there's good evidence that it is exact. If it means anything, it's a mystery to me what.</p> <p><strong>How to compute the probability.</strong> Consider the complete graph on $n$ vertices, and assign random friendship weights to each edge. Imagine processing the edges in order from the strongest friendship towards the weakest. For each vertex/person, the <em>first time</em> we see an edge ending there will tell that person who their best friend is.</p> <p>The graphs we build can become very complex, but for the purposes of counting we only need to distinguish three kinds of nodes:</p> <ul> <li><strong>W</strong> - Waiting people who don't yet know any of their friends. (That is, vertices that are not an endpoint of any edge processed yet).</li> <li><strong>F</strong> - Friends, people who are friends with someone, but are not anyone's best friend <em>yet</em>. 
Perhaps one of the Waiting people will turn out to have them as their best friend.</li> <li><strong>B</strong> - Best friends, who know they are someone's best friend.</li> </ul> <p>At each step in the processing of the graph, it can be described as a triple $(w,f,b)$ stating the number of each kind of node. We have $w+f+b=n$, and the starting state is $(n,0,0)$ with everyone still waiting.</p> <ul> <li>If we see a <strong>WW</strong> edge, two waiting people become mutually best friends, and we move to state $(w-2,f,b+2)$. There are $w(w-1)/2$ such edges.</li> <li>If we see a <strong>WF</strong> edge, the <strong>F</strong> node is now someone's best friend and becomes a <strong>B</strong>, and the <strong>W</strong> node becomes <strong>F</strong>. The net effect is to move us to state $(w-1,f,b+1)$. There are $wf$ such edges.</li> <li>If we see a <strong>WB</strong> edge, the <strong>W</strong> node becomes <strong>F</strong>, but the <strong>B</strong> stays a <strong>B</strong> -- we don't care how <em>many</em> people's best friends one is, as long as there is someone. We move to $(w-1,f+1,b)$, and there are $wb$ edges of this kind.</li> <li>If we see an <strong>FF</strong> or <strong>FB</strong> or <strong>BB</strong> edge, it represents a friendship where both people already have better friends, so the state doesn't change.</li> </ul> <p>Thus, for each state, the next <strong>WW</strong> or <strong>WF</strong> or <strong>WB</strong> edge we see determines which state we move to, and since all edges are equally likely, the probabilities to move to the different successor states are $\frac{w-1}{2n-w-1}$ and $\frac{2f}{2n-w-1}$ and $\frac{2b}{2n-w-1}$, respectively.</p> <p>Since $w$ decreases at every move between states, we can fill out a table of the probabilities that each state is ever visited simply by considering all possible states in order of decreasing $w$.
When all edges have been seen we're in some state $(0,f,n-f)$, and summing over all these we can find the <em>expected</em> $f$ for a random weight assignment.</p> <p>By linearity of expectation, the probability that any <em>given</em> node is <strong>F</strong> at the end must then be $\langle f\rangle/n$.</p> <p>Since there are $O(n^2)$ states with $w+f+b=n$ and a constant amount of work for each state, this algorithm runs in time $O(n^2)$.</p> <p><strong>Numerical results.</strong> Here are exact results for $n$ up to 18:</p> <pre><code> n   approx    exact
--------------------------------------------------
 1   100%      1/1
 2   0.00%     0/1
 3   33.33%    1/3
 4   33.33%    1/3
 5   34.29%    12/35
 6   34.81%    47/135
 7   35.16%    731/2079
 8   35.40%    1772/5005
 9   35.58%    20609/57915
10   35.72%    1119109/3132675
11   35.83%    511144/1426425
12   35.92%    75988111/211527855
13   36.00%    1478400533/4106936925
14   36.06%    63352450072/175685635125
15   36.11%    5929774129117/16419849744375
16   36.16%    18809879890171/52019187845625
17   36.20%    514568399840884/1421472473796375
18   36.24%    120770557736740451/333297887934886875
</code></pre> <p>After this point, exact rational arithmetic with 64-bit denominators starts overflowing. It does look like $p(n)$ tends towards $e^{-1}$.
(As an aside, the sequence of numerators and denominators are both unknown to OEIS).</p> <p>To get further, I switched to native machine floating point (Intel 80-bit) and got the $p(n)$ column in this table:</p> <pre><code> n p(n) A B C D E F G H --------------------------------------------------------------------- 10 .3572375 1.97+ 4.65- 4.65- 4.65- 2.84+ 3.74+ 3.64+ 4.82- 20 .3629434 2.31+ 5.68- 5.68- 5.68- 3.47+ 4.67+ 4.28+ 5.87- 40 .3654985 2.62+ 6.65- 6.64- 6.64- 4.09+ 5.59+ 4.90+ 6.84- 80 .3667097 2.93+ 7.59- 7.57- 7.57- 4.70+ 6.49+ 5.51+ 7.77- 100 .3669469 3.03+ 7.89- 7.87- 7.86- 4.89+ 6.79+ 5.71+ 8.07- 200 .3674164 3.33+ 8.83- 8.79- 8.77- 5.50+ 7.69+ 6.32+ 8.99- 400 .3676487 3.64+ 9.79- 9.69- 9.65- 6.10+ 8.60+ 6.92+ 9.90- 800 .3677642 3.94+ 10.81- 10.60- 10.52- 6.70+ 9.50+ 7.52+ 10.80- 1000 .3677873 4.04+ 11.17- 10.89- 10.80- 6.90+ 9.79+ 7.72+ 11.10- 2000 .3678334 4.34+ 13.18- 11.80- 11.63- 7.50+ 10.69+ 8.32+ 12.00- 4000 .3678564 4.64+ 12.74+ 12.70- 12.41- 8.10+ 11.60+ 8.92+ 12.90- 8000 .3678679 4.94+ 13.15+ 13.60- 13.14- 8.70+ 12.50+ 9.52+ 13.81- 10000 .3678702 5.04+ 13.31+ 13.89- 13.36- 8.90+ 12.79+ 9.72+ 14.10- 20000 .3678748 5.34+ 13.86+ 14.80- 14.03- 9.50+ 13.69+ 10.32+ 15.00- 40000 .3678771 5.64+ 14.44+ 15.70- 14.67- 10.10+ 14.60+ 10.92+ 15.91- </code></pre> <p>The 8 other columns show how well $p(n)$ matches various attempts to model it. In each column I show $-\log_{10}|p(n)-f_i(n)|$ for some test function $f_i$ (that is, how many digits of agreement there are between $p(n)$ and $f_i(n)$), and the sign of the difference between $p$ and $f_i$.</p> <ul> <li>$f_{\tt A}(n)=e^{-1}$</li> </ul> <p>In the first column we compare to the constant $e^{-1}$. It is mainly there as evidence that $p(n)\to e^{-1}$. 
More precisely, it looks like $p(n) = e^{-1} + O(n^{-1})$ -- whenever $n$ gets 10 times larger, another digit of $e^{-1}$ is produced.</p> <ul> <li>$f_{\tt C}(n)=\Bigl(1-\frac{1}{2n-7/3}\Bigr)^{2n-7/3}$</li> </ul> <p>I came across this function by comparing $p(n)$ to $(1-\frac{1}{n-1})^{n-1}$ (the probability in the choose-best-friends-independently model) and noticing that they almost matched between $n$ and $2n$. The offset $7/3$ was found by trial and error. With this value it looks like $f_{\tt C}(n)$ approximates $p(n)$ to about $O(n^{-3})$, since making $n$ 10 times larger gives <em>three</em> additional digits of agreement.</p> <ul> <li>$ f_{\tt B}=\Bigl(1-\frac{1}{2n-2.332}\Bigr)^{2n-2.332}$ and $f_{\tt D}=\Bigl(1-\frac{1}{2n-2.334}\Bigr)^{2n-2.334} $</li> </ul> <p>These columns provide evidence that $7/3$ in $f_{\tt C}$ is likely to be exact, since varying it just slightly in each direction gives clearly worse approximation to $p(n)$. These columns don't quite achieve three more digits of precision for each decade of $n$.</p> <ul> <li>$ f_{\tt E}(n)=e^{-1}\bigl(1-\frac14 n^{-1}\bigr)$ and $f_{\tt F}(n)=e^{-1}\bigl(1-\frac14 n^{-1} - \frac{11}{32} n^{-2}\bigr)$</li> </ul> <p>Two and three terms of the asymptotic expansion of $f_{\tt C}$ in $n$. $f_{\tt F}$ also improves cubically, but with a much larger error than $f_{\tt C}$. This seems to indicate that the specific structure of $f_{\tt C}$ is important for the fit, rather than just the first terms in its expansion.</p> <ul> <li>$ f_{\tt G}(n)=e^{-1}\bigl(1-\frac12 (2n-7/3)^{-1}\bigr)$ and $f_{\tt H}(n)=e^{-1}\bigl(1-\frac12 (2n-7/3)^{-1}-\frac{5}{24} (2n-7/3)^{-2}\bigr) $</li> </ul> <p>Here's a surprise! Expanding $f_{\tt C}$ in powers of $2n-7/3$ instead of powers of $n$ not only gives better approximations than $f_{\tt E}$ and $f_{\tt F}$, but also approximates $p(n)$ better than $f_{\tt C}$ itself does, by a factor of about $10^{0.2}\approx 1.6$. 
This seems to be even more mysterious than the fact that $f_{\tt C}$ matches.</p> <p>At $n=40000$ the computation of $p(n)$ takes about a minute, and the comparisons begin to push the limit of computer floating point. The 15-16 digits of precision in some of the columns are barely even representable in double precision. Funnily enough, the calculation of $p(n)$ itself seems to be fairly robust compared to the approximations.</p>
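<p>For readers who want to reproduce the exact column in the table, here is a short Python sketch of the dynamic program described above, using exact rational arithmetic (the function name and data layout are my own choices):</p>

```python
from fractions import Fraction

def p_nobody_best_friend(n):
    """Probability that a given person is nobody's best friend, via the
    O(n^2) state space of triples (w, f, b) with w + f + b = n."""
    # prob[(w, f)] = probability that state (w, f, n - w - f) is ever visited
    prob = {(n, 0): Fraction(1)}
    expected_f = Fraction(0)
    for w in range(n, 0, -1):              # w strictly decreases at each move
        for (ww, f), pr in list(prob.items()):
            if ww != w or pr == 0:
                continue
            b = n - w - f
            denom = 2 * n - w - 1
            if denom == 0:                 # lone node (n = 1): no edges at all
                expected_f += pr
                continue
            moves = [((w - 2, f),     Fraction(w - 1, denom)),  # WW edge
                     ((w - 1, f),     Fraction(2 * f, denom)),  # WF edge
                     ((w - 1, f + 1), Fraction(2 * b, denom))]  # WB edge
            for (w2, f2), q in moves:
                if q:
                    prob[(w2, f2)] = prob.get((w2, f2), Fraction(0)) + pr * q
    # terminal states have w = 0; sum f over them, then use linearity
    expected_f += sum(pr * f for (w, f), pr in prob.items() if w == 0)
    return expected_f / n
```

<p>Running it for small $n$ reproduces the exact values in the table, e.g. $p(5)=12/35$ and $p(6)=47/135$.</p>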
<p>Here's another take at this interesting problem. Consider a group of $n+1$ persons $x_0,\dots,x_n$ with $x_0$ being myself. Define the probability $$ P_{n+1}(i)=P(\textrm{each of $x_1,\dots,x_i$ has me as his best friend}). $$ Then we can compute the wanted probability using the <em>inclusion-exclusion formula</em> as follows: \begin{eqnarray} P_{n+1} &amp;=&amp; P(\textrm{I am nobody's best friend}) \\ &amp;=&amp; 1-P(\textrm{I am somebody's best friend}) \\ &amp;=&amp; \sum_{i=0}^n (-1)^i\binom{n}{i}P_{n+1}(i). \tag{$*$} \end{eqnarray}</p> <p>To compute $P_{n+1}(i)$, note that for the condition to be true, it is necessary that of all friendships between one of $x_1,\dots,x_i$ and anybody, the one with the highest weight is a friendship with me. The probability of that being the case is $$ \frac{i}{in-i(i-1)/2}=\frac{2}{2n-i+1}. $$ Suppose, then, that that is the case and let this friendship be $(x_0, x_i)$. Then I am certainly the best friend of $x_i$. The probability that I am also the best friend of each of $x_1,\dots,x_{i-1}$ is unchanged. So we can repeat the argument and get $$ P_{n+1}(i) =\frac{2}{2n}\cdot\frac{2}{2n-1}\cdots\frac{2}{2n-i+1} =\frac{2^i(2n-i)!}{(2n)!}. $$ Plugging this into $(*)$ gives a formula for $P_{n+1}$ that agrees with Henning's results.</p> <p>To prove $P_{n+1}\to e^{-1}$ for $n\to\infty$, use that the $i$-th term of $(*)$ converges to, but is numerically smaller than, the $i$-th term of $$ e^{-1}=\sum_{i=0}^\infty\frac{(-1)^i}{i!}. $$</p> <p>By the way, in the alternative model where each person chooses a best friend at random, we would instead have $P_{n+1}(i)=1/n^i$ and $P_{n+1}=(1-1/n)^n$.</p>
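<p>The closed formula is easy to check numerically against Henning's exact table. A short Python sketch (function and variable names are mine), taking the group size $n+1$ as input:</p>

```python
from fractions import Fraction
from math import comb

def p_inclusion_exclusion(group_size):
    """P_{n+1} = sum_i (-1)^i C(n,i) 2^i (2n-i)!/(2n)!, with n = group_size - 1."""
    n = group_size - 1
    total = Fraction(0)
    for i in range(n + 1):
        # P_{n+1}(i) = prod_{k=0}^{i-1} 2 / (2n - k)  =  2^i (2n-i)! / (2n)!
        term = Fraction(1)
        for k in range(i):
            term *= Fraction(2, 2 * n - k)
        total += (-1) ** i * comb(n, i) * term
    return total
```

<p>For instance, this gives $1/3$ for three people and $12/35$ for five, matching the table in the other answer.</p>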
game-theory
<p>You are allowed to roll a die up to six times. Anytime you stop, you get the dollar amount of the face value of your last roll.</p> <p>Question: What is the best strategy?</p> <p>According to my calculation, for the strategy 6,5,5,4,4, the expected value is $142/27\approx 5.26$, which I consider quite high. So this might be the best strategy.</p> <p>Here, 6,5,5,4,4 means in the first roll you stop only when you get a 6; if you did not get a 6 in the first roll, then in the second roll you stop only when you roll a number 5 or higher (i.e. 5 or 6), etc.</p>
<p>Just work backwards. At each stage, you accept a roll that is >= the expected gain from the later stages:<br> Expected gain from 6th roll: 7/2<br> Therefore strategy for 5th roll is: accept if >= 4<br> Expected gain from 5th roll: (6 + 5 + 4)/6 + (7/2)(3/6) = 17/4<br> Therefore strategy for 4th roll is: accept if >= 5<br> Expected gain from 4th roll: (6 + 5)/6 + (17/4)(4/6) = 14/3<br> Therefore strategy for 3rd roll is: accept if >= 5<br> Expected gain from 3rd roll: (6 + 5)/6 + (14/3)(4/6) = 89/18<br> Therefore strategy for 2nd roll is: accept if >= 5<br> Expected gain from 2nd roll: (6 + 5)/6 + (89/18)(4/6) = 277/54<br> Therefore strategy for 1st roll is: accept only if 6<br> Expected gain from 1st roll: 6/6 + (277/54)(5/6) = 1709/324 </p> <p>So your strategy is 6,5,5,5,4 for an expectation of $5.27469...</p>
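<p>The backward induction above is easy to mechanize. Here is a small Python sketch with exact arithmetic (names are mine) that reproduces the thresholds 6,5,5,5,4 and the value $1709/324$:</p>

```python
from fractions import Fraction

def die_game(rolls=6):
    """Backward induction: accept roll k iff its face >= thresholds[k-1].
    Returns (thresholds, expected value of the game)."""
    value = Fraction(7, 2)        # on the last roll you must accept: E = 7/2
    thresholds = [1]              # so the final "threshold" is 1 (take anything)
    for _ in range(rolls - 1):
        # smallest face strictly above the continuation value
        # (truncation is safe here: the continuation values are non-integers)
        t = int(value) + 1
        accepted = [x for x in range(1, 7) if x >= t]
        value = (Fraction(sum(accepted), 6)
                 + value * Fraction(6 - len(accepted), 6))
        thresholds.append(t)
    return thresholds[::-1], value
```

<p>Calling <code>die_game(6)</code> returns the thresholds <code>[6, 5, 5, 5, 4, 1]</code> and the value <code>Fraction(1709, 324)</code>.</p>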
<p>Let $X_n$ be your winnings in a game of length $n$ (in your case $n = 6$), if you are playing optimally. Here, "optimally" means that at roll $m$, you will accept if the value is greater than $\mathbb{E} X_{n-m}$, which is your expected winnings if you continued to play with this strategy. </p> <p>Let $X \sim \mathrm{Unif}(1,2,3,4,5,6) $ (you can also insert any distribution you like here). Then $X_n$ can be defined as $X_1 = X$ and for $n \geq 2$,</p> <p>$$ X_n = \begin{cases} X_{n-1}, \quad \mathrm{if} \quad X &lt; \mathbb{E}X_{n-1} \\ X, \quad \mathrm{if} \quad X \geq \mathbb{E}X_{n-1} \end{cases} $$ </p> <p>where $X$ is the current roll, independent of $X_{n-1}$. So your decisions can be determined by computing $\mathbb{E} X_n$ for each $n$ recursively. For the dice case, $\mathbb{E} X_1 = \mathbb{E}X = 7/2$ (meaning on the fifth roll, accept if you get >7/2, i.e. 4, 5, or 6), and so,</p> <p>$$\mathbb{E} X_2 = \mathbb{E} X_1 \mathrm{P}[X = 1,2,3] + \mathbb{E} [X | X \geq 4] \mathrm{P}[X = 4,5,6]$$ $$ = (7/2)(3/6) + (4 + 5 + 6)/3 (1/2) = 17/4 $$ </p> <p>So on the fourth roll, accept if you get > 17/4, i.e. 5 or 6, and so on (you need to round the answer up at each step, which makes it hard to give a closed form for $\mathbb{E} X_n$ unfortunately). </p>
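<p>As a sanity check on the recursion, a quick Monte Carlo simulation of the resulting strategy (accept only a 6 on roll 1, a 5 or 6 on rolls 2 through 4, a 4 or better on roll 5, anything on roll 6) lands close to $1709/324 \approx 5.2747$. A Python sketch, with names of my own choosing:</p>

```python
import random

def simulate(thresholds=(6, 5, 5, 5, 4, 1), trials=200_000, seed=1):
    """Average payoff of the given threshold strategy over many games."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        for t in thresholds:
            roll = rng.randint(1, 6)
            if roll >= t:       # stop and take this roll
                break
        total += roll           # payoff is the last roll made
    return total / trials
```

<p>With 200,000 games the sample mean should agree with $5.2747$ to about two decimal places.</p>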
differentiation
<p>I was asked to find the derivative of the function $$ \sin^{-1}(e^x + 3). $$ First I was thinking of the chain rule, but something looked strange about the function. The domain of $\sin^{-1}$ is $[-1,1]$ and $e^x + 3$ is always strictly greater than $3$. I think that there is a mistake in the problem, but assuming that there isn't: <strong>what exactly is the derivative of a function like this one that "doesn't exist"?</strong> (Is it true to say that the function doesn't exist?)</p>
<p>The derivative exists, but as the result of a formal process. For $ \sin^{-1}(e^x + 3)$ put $u=e^x+3.$ Chain rule gives us</p> <p>$$ \frac{d}{dx}\sin^{-1}(e^x + 3) =\frac{d}{dx}\sin^{-1}(u)= \frac{1}{\sqrt{1-u^2}}\frac{du}{dx}=\frac{e^x}{\sqrt{1-(e^x+3)^2}}.$$</p> <p>The derivative map didn't even notice the domain or range of $\sin^{-1}(x),$ since in this type of computation we're simply applying an algorithm. </p> <p>Another thing to notice, is that $e^x+3&gt;3,$ as you pointed out, so the derivative isn't defined either, since the denominator will always be the square root of a negative number.</p> <p>In a way, what you're touching on here is much deeper. Look at $\ln(x)$ for example. We never consider it as a composition of $x$ with $\ln(x),$ since this seems silly. However, $x$ has domain $\Bbb{R},$ and $\ln$ has domain $\Bbb{R}_{&gt;0}.$ The functions that the derivative map $\frac{d}{dx}$ can evaluate aren't necessarily going to make sense. But as a formal process these will exist, because this is a linear map on a collection of functions which are defined in a formal manner. </p>
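<p>To see concretely that the formal derivative only "lives" over the complex numbers, one can evaluate it with a complex square root (a Python sketch; the function name is mine). At $x=0$ the formula gives $1/\sqrt{-15}$, a purely imaginary number:</p>

```python
import cmath
import math

def formal_derivative(x):
    """Chain-rule 'derivative' of arcsin(e^x + 3), evaluated over C."""
    u = math.exp(x) + 3
    # 1 - u^2 < 0 for every real x, so the real square root never exists
    return math.exp(x) / cmath.sqrt(1 - u * u)
```

<p>So the formal expression is perfectly computable, just never as a real number.</p>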
<p>You can probably make sense of this as an operation on formal power series, which can be manipulated consistently and are often useful even if they don't converge.</p> <p>For the power series for $\arcsin$: </p> <p><a href="https://math.stackexchange.com/questions/105024/finding-the-power-series-of-arcsin-x">Finding the power series of $\arcsin x$</a></p> <p>Then substitute the power series for $e^x +3$, expand and differentiate term by term. The answer will be pretty ugly. It will be the formal power series for the "function" you get by naively applying the chain rule to compute the "derivative". </p> <p>I doubt that this is what the person who wrote the question had in mind. I suspect either a typo, or plain thoughtlessness.</p>
linear-algebra
<p>I'm looking for a book to learn Algebra. The programme is the following. The units marked with a $\star$ are the ones I'm most interested in (in the sense that I know nothing about them) and those with a $\circ$ are those which I'm mildly comfortable with. The ones that aren't marked shouldn't be of importance. Any important topic inside a unit will be boldfaced.</p> <p><strong>U1:</strong> <em>Vector Algebra.</em> Points in the $n$-dimensional space. Vectors. Scalar product. Norm. Lines and planes. Cross product.</p> <p>$\circ$ <strong>U2:</strong> <em>Vector Spaces.</em> Definition. Subspaces. Linear independence. Linear combination. Generating systems. Basis. Dimension. Sum and intersection of subspaces. Direct sum. Spaces with inner products.</p> <p>$\circ$ <strong>U3:</strong> <em>Matrices and determinants.</em> Matrix Spaces. Sum and product of matrices. Linear equations. Gauss-Jordan elimination. Rank. <strong>Rouché–Frobenius Theorem. Determinants. Properties. Determinant of a product. Determinants and inverses.</strong></p> <p>$\star$ <strong>U4:</strong> <em>Linear transformations.</em> Definition. Kernel and image. Monomorphisms, epimorphisms and isomorphisms. Composition of linear transformations. Inverse linear transformations.</p> <p><strong>U5:</strong> <em>Complex numbers and polynomials.</em> Complex numbers. Operations. Binomial and trigonometric form. De Moivre's Theorem. Solving equations. Polynomials. Degree. Operations. Roots. Remainder theorem. Factorization. FTA. <strong>Lagrange interpolation.</strong></p> <p>$\star$ <strong>U6:</strong> <em>Linear transformations and matrices.</em> Matrix of a linear transformation. Matrix of the composition. Matrix of the inverse. Base changes. </p> <p>$\star$ <strong>U7:</strong> <em>Eigenvalues and eigenvectors.</em> Eigenvalues and eigenvectors. Characteristic polynomial. Applications. Invariant subspaces. Diagonalization. 
</p> <p>To let you know, I own a copy of Apostol's Calculus $\mathrm{I}$, which covers some of those topics, namely:</p> <ul> <li>Linear Spaces</li> <li>Linear Transformations and Matrices.</li> </ul> <p>I also have a copy of Apostol's second book, Calculus $\mathrm{II}$, which continues with</p> <ul> <li>Determinants</li> <li>Eigenvalues and eigenvectors</li> <li>Eigenvalues of operators in Euclidean spaces.</li> </ul> <p>I was recommended <em>Linear Algebra</em> by Armando Rojo and have <em>Linear Algebra</em> by <a href="http://www.uv.es/~ivorra/">Carlos Ivorra</a>, which seems quite a good text. </p> <p>What do you recommend? </p>
<p>"Linear Algebra Done Right" by Sheldon Axler is an excellent book.</p>
<p>Gilbert Strang has a ton of resources on his webpage, most of which are quite good:</p> <p><a href="http://www-math.mit.edu/~gs/">http://www-math.mit.edu/~gs/</a></p>
matrices
<p>I have just watched the first half of the 3rd lecture of Gilbert Strang on the open course ware with link:</p> <p><a href="http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/">http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/</a></p> <p>It seems that with a matrix multiplication $AB=C$, the entries, as scalars, are formed from the dot product computations of the rows of $A$ with the columns of $B$. Visual interpretations from mechanics of overlapping forces come to mind immediately because that is the source for the dot product (inner product).</p> <p>I see the rows of $C$ as being the dot product of the rows of $B$, with the dot product of a particular row of $A$. Similar to the above and it is easy to see this from the individual entries in the matrix $C$ as to which elements change to give which dot products.</p> <p>For understanding matrix multiplication there is the geometrical interpretation, that the matrix multiplication is a change in the reference system since matrix $B$ can be seen as a transformation operator for rotation, scaling, reflection and skew. It is easy to see this by constructing example $B$ matrices with these effects on $A$. This decomposition is a strong argument and is strongly convincing of its generality. This interpretation is strong but not smooth because I would find smoother an explanation which would be an interpretation beginning from the dot product of vectors and using this to explain the process and the interpretation of the results (one which is a bit easier to see without many examples of the putting numbers in and seeing what comes out which students go through).</p> <p>I can hope that sticking to dot products throughout the explanation and THEN seeing how these can be seen to produce scalings, rotations, and skewings would be better. 
But, after some simple graphical examples I saw this doesn't work as the order of the columns in matrix $B$ are important and don't show in the graphical representation. </p> <p>The best explanation I can find is at <a href="http://answers.yahoo.com/question/index?qid=20081012135509AA1xtKz">Yahoo Answers</a>. It is convincing but a bit disappointing (explains why this approach preserves the "composition of linear transformations"; thanks @Arturo Magidin). So the question is: <strong>Why does matrix multiplication happen as it does, and are there good practical examples to support it? Preferably not via rotations/scalings/skews</strong> (thanks @lhf).</p>
<p><strong>Some comments first.</strong> There are several serious confusions in what you write. For example, in the third paragraph, having seen that the entries of <span class="math-container">$AB$</span> are obtained by taking the dot product of the corresponding row of <span class="math-container">$A$</span> with column of <span class="math-container">$B$</span>, you write that you view <span class="math-container">$AB$</span> as a dot product of <strong>rows</strong> of <span class="math-container">$B$</span> and rows of <span class="math-container">$A$</span>. It's not. </p> <p>For another example, you talk about matrix multiplication "happening". Matrices aren't running wild in the hidden jungles of the Amazon, where things "happen" without human beings. Matrix multiplication is <strong>defined</strong> a certain way, and then the definition is why matrix multiplication is done the way it is done. You may very well ask <em>why</em> matrix multiplication is defined the way it is defined, and whether there are other ways of defining a "multiplication" on matrices (yes, there are; read further), but that's a completely separate question. "Why does matrix multiplication happen the way it does?" is pretty incoherent on its face.</p> <p>Another example of confusion is that not every matrix corresponds to a "change in reference system". This is only true, viewed from the correct angle, for <em>invertible</em> matrices.</p> <p><strong>Standard matrix multiplication.</strong> Matrix multiplication is defined the way it is because it corresponds to composition of linear transformations. Though this is valid in extremely great generality, let's focus on linear transformations <span class="math-container">$T\colon \mathbb{R}^n\to\mathbb{R}^m$</span>. 
Since linear transformations satisfy <span class="math-container">$T(\alpha\mathbf{x}+\beta\mathbf{y}) = \alpha T(\mathbf{x})+\beta T(\mathbf{y})$</span>, if you know the value of <span class="math-container">$T$</span> at each of <span class="math-container">$\mathbf{e}_1,\ldots,\mathbf{e}_n$</span>, where <span class="math-container">$\mathbf{e}_i$</span> is the (column) <span class="math-container">$n$</span>-vector that has <span class="math-container">$0$</span>s in each coordinate except the <span class="math-container">$i$</span>th coordinate where it has a <span class="math-container">$1$</span>, then you know the value of <span class="math-container">$T$</span> at every single vector of <span class="math-container">$\mathbb{R}^n$</span>.</p> <p>So in order to describe the value of <span class="math-container">$T$</span>, I just need to tell you what <span class="math-container">$T(\mathbf{e}_i)$</span> is. For example, we can take <span class="math-container">$$T(\mathbf{e}_i) = \left(\begin{array}{c}a_{1i}\\a_{2i}\\ \vdots\\ a_{mi}\end{array}\right).$$</span> Then, since <span class="math-container">$$\left(\begin{array}{c}k_1\\k_2\\ \vdots\\k_n\end{array}\right) = k_1\mathbf{e}_1 + \cdots +k_n\mathbf{e}_n,$$</span> we have <span class="math-container">$$T\left(\begin{array}{c}k_1\\k_2\\ \vdots\\ k_n\end{array}\right) = k_1T(\mathbf{e}_1) + \cdots +k_nT(\mathbf{e}_n) = k_1\left(\begin{array}{c}a_{11}\\a_{21}\\ \vdots\\a_{m1}\end{array}\right) + \cdots + k_n\left(\begin{array}{c}a_{1n}\\a_{2n}\\ \vdots\\ a_{mn}\end{array}\right).$$</span></p> <p>It is very fruitful, then, to keep track of the <span class="math-container">$a_{ij}$</span> in some way, and given the expression above, we keep track of them in a matrix, which is just a rectangular array of real numbers. 
We then think of <span class="math-container">$T$</span> as being "given" by the matrix <span class="math-container">$$\left(\begin{array}{cccc} a_{11} &amp; a_{12} &amp; \cdots &amp; a_{1n}\\ a_{21} &amp; a_{22} &amp; \cdots &amp; a_{2n}\\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots\\ a_{m1} &amp; a_{m2} &amp; \cdots &amp; a_{mn} \end{array}\right).$$</span> If we want to keep track of <span class="math-container">$T$</span> this way, then for an arbitrary vector <span class="math-container">$\mathbf{x} = (x_1,\ldots,x_n)^t$</span> (the <span class="math-container">${}^t$</span> means "transpose"; turn every row into a column, every column into a row), we have that <span class="math-container">$T(\mathbf{x})$</span> corresponds to: <span class="math-container">$$\left(\begin{array}{cccc} a_{11} &amp; a_{12} &amp; \cdots &amp; a_{1n}\\ a_{21} &amp; a_{22} &amp; \cdots &amp; a_{2n}\\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots\\ a_{m1} &amp; a_{m2} &amp; \cdots &amp; a_{mn} \end{array}\right) \left(\begin{array}{c} x_1\\x_2\\ \vdots\\ x_n\end{array}\right) = \left(\begin{array}{c} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n\\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n\\ \vdots\\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n \end{array}\right).$$</span>
If <span class="math-container">$T$</span> corresponds as above to a certain <span class="math-container">$m\times n$</span> matrix, then <span class="math-container">$S$</span> will likewise correspond to a certain <span class="math-container">$n\times p$</span> matrix, say <span class="math-container">$$\left(\begin{array}{cccc} b_{11} &amp; b_{12} &amp; \cdots &amp; b_{1p}\\ b_{21} &amp; b_{22} &amp; \cdots &amp; b_{2p}\\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots\\ b_{n1} &amp; b_{n2} &amp; \cdots &amp; b_{np} \end{array}\right).$$</span> What is <span class="math-container">$T\circ S$</span>? First, it is a linear transformation because composition of linear transformations yields a linear transformation. Second, it goes from <span class="math-container">$\mathbb{R}^p$</span> to <span class="math-container">$\mathbb{R}^m$</span>, so it should correspond to an <span class="math-container">$m\times p$</span> matrix. Which matrix? If we let <span class="math-container">$\mathbf{f}_1,\ldots,\mathbf{f}_p$</span> be the (column) <span class="math-container">$p$</span>-vectors given by letting <span class="math-container">$\mathbf{f}_j$</span> have <span class="math-container">$0$</span>s everywhere and a <span class="math-container">$1$</span> in the <span class="math-container">$j$</span>th entry, then the matrix above tells us that <span class="math-container">$$S(\mathbf{f}_j) = \left(\begin{array}{c}b_{1j}\\b_{2j}\\ \vdots \\b_{nj}\end{array}\right) = b_{1j}\mathbf{e}_1+\cdots + b_{nj}\mathbf{e}_n.$$</span></p> <p>So, what is <span class="math-container">$T\circ S(\mathbf{f}_j)$</span>? This is what goes in the <span class="math-container">$j$</span>th column of the matrix that corresponds to <span class="math-container">$T\circ S$</span>. 
Evaluating, we have: <span class="math-container">\begin{align*} T\circ S(\mathbf{f}_j) &amp;= T\Bigl( S(\mathbf{f}_j)\Bigr)\\\ &amp;= T\Bigl( b_{1j}\mathbf{e}_1 + \cdots + b_{nj}\mathbf{e}_n\Bigr)\\\ &amp;= b_{1j} T(\mathbf{e}_1) + \cdots + b_{nj}T(\mathbf{e}_n)\\\ &amp;= b_{1j}\left(\begin{array}{c} a_{11}\\\ a_{21}\\\ \vdots\\\ a_{m1}\end{array}\right) + \cdots + b_{nj}\left(\begin{array}{c} a_{1n}\\a_{2n}\\\ \vdots\\\ a_{mn}\end{array}\right)\\\ &amp;= \left(\begin{array}{c} a_{11}b_{1j} + a_{12}b_{2j} + \cdots + a_{1n}b_{nj}\\\ a_{21}b_{1j} + a_{22}b_{2j} + \cdots + a_{2n}b_{nj}\\\ \vdots\\\ a_{m1}b_{1j} + a_{m2}b_{2j} + \cdots + a_{mn}b_{nj} \end{array}\right). \end{align*}</span> So if we want to write down the matrix that corresponds to <span class="math-container">$T\circ S$</span>, then the <span class="math-container">$(i,j)$</span>th entry will be <span class="math-container">$$a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{in}b_{nj}.$$</span> So we <strong>define</strong> the "composition" or product of the matrix of <span class="math-container">$T$</span> with the matrix of <span class="math-container">$S$</span> to be precisely the matrix of <span class="math-container">$T\circ S$</span>. 
We can make this definition without reference to the linear transformations that gave it birth: if the matrix of <span class="math-container">$T$</span> is <span class="math-container">$m\times n$</span> with entries <span class="math-container">$a_{ij}$</span> (let's call it <span class="math-container">$A$</span>); and the matrix of <span class="math-container">$S$</span> is <span class="math-container">$n\times p$</span> with entries <span class="math-container">$b_{rs}$</span> (let's call it <span class="math-container">$B$</span>), then the matrix of <span class="math-container">$T\circ S$</span> (let's call it <span class="math-container">$A\circ B$</span> or <span class="math-container">$AB$</span>) is <span class="math-container">$m\times p$</span> and with entries <span class="math-container">$c_{k\ell}$</span>, where <span class="math-container">$$c_{k\ell} = a_{k1}b_{1\ell} + a_{k2}b_{2\ell} + \cdots + a_{kn}b_{n\ell}$$</span> <strong>by definition</strong>. Why? Because then the matrix of the composition of two functions is <em>precisely</em> the product of the matrices of the two functions. We can work with the matrices directly without having to think about the functions. </p> <p>In point of fact, there is nothing about the dot product which is at play in this definition. It is essentially by happenstance that the <span class="math-container">$(i,j)$</span> entry can be obtained as a dot product of <em>something</em>. In fact, the <span class="math-container">$(i,j)$</span>th entry is obtained as the <strong>matrix product</strong> of the <span class="math-container">$1\times n$</span> matrix consisting of the <span class="math-container">$i$</span>th row of <span class="math-container">$A$</span>, with the <span class="math-container">$n\times 1$</span> matrix consisting of the <span class="math-container">$j$</span>th column of <span class="math-container">$B$</span>. Only if you transpose this column can you try to interpret this as a dot product. 
(In fact, the modern view is <em>the other way around</em>: we <em>define</em> the dot product of two vectors as a special case of a more general inner product, called the Frobenius inner product, which is defined in terms of matrix multiplication, <span class="math-container">$\langle\mathbf{x},\mathbf{y}\rangle =\mathrm{trace}(\overline{\mathbf{y}^t}\mathbf{x})$</span>). </p> <p>And because product of matrices corresponds to composition of linear transformations, all the nice properties that composition of linear functions has will <em>automatically</em> also be true for product of matrices, <em>because products of matrices is nothing more than a book-keeping device for keeping track of the composition of linear transformations</em>. So <span class="math-container">$(AB)C = A(BC)$</span>, because composition of functions is associative. <span class="math-container">$A(B+C) = AB + AC$</span> because composition of linear transformations distributes over sums of linear transformations (sums of matrices are defined entry-by-entry because that agrees <em>precisely</em> with the sum of linear transformations). <span class="math-container">$A(\alpha B) = \alpha(AB) = (\alpha A)B$</span>, because composition of linear transformations behaves that way with scalar multiplication (products of matrices by scalar are defined the way they are <em>precisely</em> so that they will correspond to the operation with linear transformations). </p> <p>So we <strong>define</strong> product of matrices <strong>explicitly</strong> so that it will match up composition of linear transformations. There really is no deeper hidden reason. 
It seems a bit incongruous, perhaps, that such a simple reason results in such a complicated formula, but such is life.</p> <p>Another reason why it is somewhat misguided to try to understand matrix product in terms of dot product is that the matrix product keeps track of <em>all</em> the information lying around about the two compositions, but the dot product loses a lot of information about the two vectors in question. Knowing that <span class="math-container">$\mathbf{x}\cdot\mathbf{y}=0$</span> only tells you that <span class="math-container">$\mathbf{x}$</span> and <span class="math-container">$\mathbf{y}$</span> are perpendicular, it doesn't really tell you anything else. There is a lot of informational loss in the dot product, and trying to explain matrix product in terms of the dot product requires that we "recover" all of this lost information in some way. In practice, it means keeping track of all the original information, which makes trying to shoehorn the dot product into the explanation unnecessary, because you will <em>already</em> have all the information to get the product directly.</p> <p><strong>Examples that are not just "changes in reference system".</strong> Note that <strong>any</strong> linear transformation corresponds to a matrix. But the only linear transformations that can be thought of as "changes in perspective" are the linear transformations that map <span class="math-container">$\mathbb{R}^n$</span> to itself, and which are one-to-one and onto. There are <em>lots</em> of linear transformations that aren't like that. 
For example, the linear transformation <span class="math-container">$T$</span> from <span class="math-container">$\mathbb{R}^3$</span> to <span class="math-container">$\mathbb{R}^2$</span> defined by <span class="math-container">$$T\left(\begin{array}{c} a\\b\\c\end{array}\right) = \left(\begin{array}{c}b\\2c\end{array}\right)$$</span> is not a "change in reference system" (because lots of nonzero vectors go to zero, but there is no way to just "change your perspective" and start seeing a nonzero vector as zero) but is a linear transformation nonetheless. The corresponding matrix is <span class="math-container">$2\times 3$</span>, and is <span class="math-container">$$\left(\begin{array}{cc} 0 &amp; 1 &amp; 0\\ 0 &amp; 0 &amp; 2 \end{array}\right).$$</span> Now consider the linear transformation <span class="math-container">$U\colon\mathbb{R}^2\to\mathbb{R}^2$</span> given by <span class="math-container">$$U\left(\begin{array}{c}x\\y\end{array}\right) = \left(\begin{array}{c}3x+2y\\ 9x + 6y\end{array}\right).$$</span> Again, this is not a "change in perspective", because the vector <span class="math-container">$\binom{2}{-3}$</span> is mapped to <span class="math-container">$\binom{0}{0}$</span>. 
It has a matrix, <span class="math-container">$2\times 2$</span>, which is <span class="math-container">$$\left(\begin{array}{cc} 3 &amp; 2\\ 9 &amp; 6 \end{array}\right).$$</span> So the composition <span class="math-container">$U\circ T$</span> has matrix: <span class="math-container">$$\left(\begin{array}{cc} 3 &amp; 2\\ 9 &amp; 6 \end{array}\right) \left(\begin{array}{ccc} 0 &amp; 1 &amp; 0\\ 0 &amp; 0 &amp; 2 \end{array}\right) = \left(\begin{array}{ccc} 0 &amp; 3 &amp; 4\\ 0 &amp; 9 &amp; 12 \end{array}\right),$$</span> which tells me that <span class="math-container">$$U\circ T\left(\begin{array}{c}x\\y\\z\end{array}\right) = \left(\begin{array}{c} 3y + 4z\\ 9y+12z\end{array}\right).$$</span></p> <p><strong>Other matrix products.</strong> Are there other ways to define the product of two matrices? Sure. There's the <a href="http://en.wikipedia.org/wiki/Matrix_multiplication#Hadamard_product" rel="noreferrer">Hadamard product</a>, which is the "obvious" thing to try: you can multiply two matrices of the same size (and only of the same size), and you do it entry by entry, just the same way that you add two matrices. This has some nice properties, but it has nothing to do with linear transformations. There's the <a href="http://en.wikipedia.org/wiki/Kronecker_product" rel="noreferrer">Kronecker product</a>, which takes an <span class="math-container">$m\times n$</span> matrix times a <span class="math-container">$p\times q$</span> matrix and gives an <span class="math-container">$mp\times nq$</span> matrix. This one is associated to the <em>tensor product</em> of linear transformations. They are defined differently because they are meant to model other operations that one does with matrices or vectors. </p>
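<p>The central point, that the matrix product is just book-keeping for composition, can also be checked mechanically. Here is a short Python sketch using the $U$ and $T$ from the example above (the helper names are mine):</p>

```python
def mat_vec(A, x):
    """Apply the linear map with matrix A to the column vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def mat_mul(A, B):
    """(i,j) entry: sum_k A[i][k] * B[k][j], the composition formula."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

T = [[0, 1, 0],
     [0, 0, 2]]        # T(a, b, c) = (b, 2c)
U = [[3, 2],
     [9, 6]]           # U(x, y) = (3x + 2y, 9x + 6y)

# matrix of the composition U after T: [[0, 3, 4], [0, 9, 12]]
UT = mat_mul(U, T)
```

<p>For any vector, applying <code>UT</code> directly gives the same result as applying <code>T</code> and then <code>U</code>, which is exactly what the definition of the product was designed to guarantee.</p>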
<p>I think part of the problem people have with getting used to linear transformations vs. matrices is that they have probably never seen an example of a linear transformation defined without reference to a matrix or a basis. So here is such an example. Let $V$ be the vector space of real polynomials of degree at most $3$, and let $f : V \to V$ be the derivative.</p> <p>$V$ does not come equipped with a natural choice of basis. You might argue that $\{ 1, x, x^2, x^3 \}$ is natural, but it's only convenient: there's no reason to privilege this basis over $\{ 1, (x+c), (x+c)^2, (x+c)^3 \}$ for any $c \in \mathbb{R}$ (and, depending on what my definitions are, it is literally impossible to do so). More generally, $\{ a_0(x), a_1(x), a_2(x), a_3(x) \}$ is a basis for any collection of polynomials $a_i$ of degree $i$.</p> <p>$V$ also does not come equipped with a natural choice of dot product, so there's no way to include those in the discussion without making an arbitrary choice. It really is just a vector space equipped with a linear transformation.</p> <p>Since we want to talk about composition, let's write down a second linear transformation. $g : V \to V$ will send a polynomial $p(x)$ to the polynomial $p(x + 1)$. Note that, once again, I do not need to refer to a basis to define $g$. </p> <p>Then the abstract composition $gf : V \to V$ is well-defined; it sends a polynomial $p(x)$ to the polynomial $p&#39;(x + 1)$. I don't need to refer to a basis or multiply any matrices to see this; all I am doing is composing two functions.</p> <p>Now let's do everything in a particular basis to see that we get the same answer using the <em>correct and natural definition of matrix multiplication.</em> We'll use the basis $ \{ 1, x, x^2, x^3 \}$. 
In this basis $f$ has matrix</p> <p>$$\left[ \begin{array}{cccc} 0 &amp; 1 &amp; 0 &amp; 0 \\\ 0 &amp; 0 &amp; 2 &amp; 0 \\\ 0 &amp; 0 &amp; 0 &amp; 3 \\\ 0 &amp; 0 &amp; 0 &amp; 0 \end{array} \right]$$</p> <p>and $g$ has matrix</p> <p>$$\left[ \begin{array}{cccc} 1 &amp; 1 &amp; 1 &amp; 1 \\\ 0 &amp; 1 &amp; 2 &amp; 3 \\\ 0 &amp; 0 &amp; 1 &amp; 3 \\\ 0 &amp; 0 &amp; 0 &amp; 1 \end{array} \right].$$</p> <p>Now I encourage you to go through all the generalities in Arturo's post in this example to verify that $gf$ has the matrix it is supposed to have. </p>
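<p>Following the suggestion above, one way to carry out the verification mechanically is in Python with numpy (my own addition; the answer itself stays basis-free until the last step). Coefficient vectors live in the basis $\{1, x, x^2, x^3\}$, and the product of the two matrices should represent $p(x)\mapsto p'(x+1)$.</p>

```python
import numpy as np

# Matrices of f (derivative) and g (p(x) -> p(x+1)) in the basis
# {1, x, x^2, x^3}; column j holds the image of the j-th basis vector.
F = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]])
G = np.array([[1, 1, 1, 1],
              [0, 1, 2, 3],
              [0, 0, 1, 3],
              [0, 0, 0, 1]])

GF = G @ F   # matrix of the composition g∘f : p(x) |-> p'(x+1)
print(GF)    # [[0 1 2 3], [0 0 2 6], [0 0 0 3], [0 0 0 0]]

def evaluate(coeffs, x):
    """Evaluate c0 + c1*x + c2*x^2 + c3*x^3 at x."""
    return sum(c * x**k for k, c in enumerate(coeffs))

p = np.array([5, -1, 2, 4])      # p(x) = 5 - x + 2x^2 + 4x^3
q = GF @ p                       # should be the coefficients of p'(x+1)

for x in [-2.0, 0.0, 1.5]:
    # p'(x) = -1 + 4x + 12x^2, so p'(x+1) = -1 + 4(x+1) + 12(x+1)^2
    assert np.isclose(evaluate(q, x), -1 + 4*(x + 1) + 12*(x + 1)**2)
```

<p>The third column of <code>GF</code>, for instance, says $x^3\mapsto 3+6x+3x^2$, which is indeed $3(x+1)^2 = (x^3)'$ shifted by one.</p>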

<p>For the quadratic form $X^TAX;\ X\in\mathbb{R}^n, A\in\mathbb{R}^{n \times n}$ (which expands to $\Sigma_{i=1}^n\Sigma_{j=1}^nA_{ij}x_ix_j$), I tried to take the derivative w.r.t. $X$ ($\Delta_X X^TAX$) and ended up with the following:</p> <p>The $k^{th}$ element of the derivative can be represented as</p> <p>$\Delta_{X_k}X^TAX=[\Sigma_{i=1}^n(A_{ik}x_k+A_{ki})x_i] + A_{kk}x_k(1-x_k)$</p> <p>Does this result look right? Is there an alternative form?</p> <p>I'm trying to get to the $\mu_0$ of Gaussian Discriminant Analysis by maximizing the log likelihood, and I need to take the derivative of a quadratic form. Either the result I mentioned above is wrong (it shouldn't be, because I went over my arithmetic several times) or the form I arrived at above is not terribly useful for my problem (because I'm unable to proceed).</p> <p>I can give more details about the problem, or the steps I took to arrive at the above result, but I didn't want to clutter the question to start off. Please let me know if more details are necessary.</p> <p>Any link to related material is also much appreciated.</p>
<p>Let $Q(x) = x^T A x$. Then expanding $Q(x+h)-Q(x)$ and dropping the higher order term, we get $DQ(x)(h) = x^TAh+h^TAx = x^TAh+x^TA^Th = x^T(A+A^T)h$, or more typically, $\frac{\partial Q(x)}{\partial x} = x^T(A+A^T)$.</p> <p>Notice that the derivative with respect to a <strong>column</strong> vector is a <strong>row</strong> vector!</p>
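<p>A quick finite-difference sanity check of this formula (a sketch of mine using numpy, not part of the answer): for a quadratic, central differences recover the derivative essentially exactly, so the numerical gradient should match $x^T(A+A^T)$.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # generic, deliberately non-symmetric
x = rng.standard_normal(n)

def Q(v):
    """The quadratic form Q(v) = v^T A v."""
    return v @ A @ v

# Analytic derivative from the answer: dQ/dx = x^T (A + A^T).
# (With 1-D numpy arrays the row/column distinction is invisible,
# but the numbers are the same.)
grad_analytic = x @ (A + A.T)

# Independent check by central finite differences along each axis
h = 1e-6
grad_numeric = np.array([(Q(x + h * e) - Q(x - h * e)) / (2 * h)
                         for e in np.eye(n)])

assert np.allclose(grad_analytic, grad_numeric, atol=1e-4)
```

<p>Note that if $A$ is symmetric, the formula collapses to the familiar $2x^TA$.</p>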
<p>You could also take the derivative of the scalar sum. <span class="math-container">\begin{equation} \begin{aligned} {\bf x^TAx} = \sum\limits_{j=1}^{n}x_j\sum\limits_{i=1}^{n}x_iA_{ji} \end{aligned} \end{equation}</span> The derivative with respect to the <span class="math-container">$k$</span>-th variable is then (product rule): <span class="math-container">\begin{equation} \begin{aligned} \frac{d {\bf x^TAx}}{d x_k} &amp; = \sum\limits_{j=1}^{n}\frac{dx_j}{dx_k}\sum\limits_{i=1}^{n}x_iA_{ji} + \sum\limits_{j=1}^{n}x_j\sum\limits_{i=1}^{n} \frac{dx_i}{dx_k}A_{ji} \\ &amp; = \sum\limits_{i=1}^{n}x_iA_{ki} + \sum\limits_{j=1}^{n}x_jA_{jk} \end{aligned} \end{equation}</span></p> <p>If you then arrange these derivatives into a column vector, you get: <span class="math-container">\begin{equation} \begin{aligned} \begin{bmatrix} \sum\limits_{i=1}^{n}x_iA_{1i} + \sum\limits_{j=1}^{n}x_jA_{j1} \\ \sum\limits_{i=1}^{n}x_iA_{2i} + \sum\limits_{j=1}^{n}x_jA_{j2} \\ \vdots \\ \sum\limits_{i=1}^{n}x_iA_{ni} + \sum\limits_{j=1}^{n}x_jA_{jn} \\ \end{bmatrix} = {\bf Ax} + ({\bf x}^T{\bf A})^T = ({\bf A} + {\bf A}^T){\bf x} \end{aligned} \end{equation}</span></p> <p>or if you choose to arrange them in a row, then you get: <span class="math-container">\begin{equation} \begin{aligned} \begin{bmatrix} \sum\limits_{i=1}^{n}x_iA_{1i} + \sum\limits_{j=1}^{n}x_jA_{j1} &amp; \sum\limits_{i=1}^{n}x_iA_{2i} + \sum\limits_{j=1}^{n}x_jA_{j2} &amp; \dots &amp; \sum\limits_{i=1}^{n}x_iA_{ni} + \sum\limits_{j=1}^{n}x_jA_{jn} \end{bmatrix} \\ = ({\bf Ax} + ({\bf x}^T{\bf A})^T)^T = (({\bf A} + {\bf A}^T){\bf x})^T = {\bf x}^T({\bf A} + {\bf A}^T) \end{aligned} \end{equation}</span></p> <hr />
<blockquote> <p>Given a field $F$ and a subfield $K$ of $F$. Let $A$, $B$ be $n\times n$ matrices such that all the entries of $A$ and $B$ are in $K$. Is it true that if $A$ is similar to $B$ in $F^{n\times n}$ then they are similar in $K^{n\times n}$?</p> </blockquote> <p>Any help ... thanks!</p>
<p>If the fields are infinite, there is an easy proof.</p> <p>Let $F \subseteq K$ be a field extension with $F$ infinite. (Note that the roles of $F$ and $K$ are swapped relative to the question: here $F$ is the subfield.) Let $A, B \in \mathcal{M}_n(F)$ be two square matrices that are similar over $K$. So there is a matrix $M \in \mathrm{GL}_n(K)$ such that $AM = MB$. We can write: $$ M = M_1 e_1 + \dots + M_r e_r, $$ with $M_i \in \mathcal{M}_n(F)$ and $\{ e_1, \dots, e_r \}$ an $F$-linearly independent subset of $K$. Since the entries of $A$ and $B$ lie in $F$, comparing coefficients of the $e_i$ in $AM = MB$ gives $A M_i = M_i B$ for every $i = 1,\dots, r$. Consider the polynomial $$ P(t_1, \dots, t_r) = \det( t_1 M_1 + \dots + t_r M_r) \in F[t_1, \dots, t_r ]. $$ Since $\det M \neq 0$, $P(e_1, \dots, e_r) \neq 0$, hence $P$ is not the zero polynomial. Since $F$ is infinite, there exist $\lambda_1, \dots, \lambda_r \in F$ such that $P(\lambda_1, \dots, \lambda_r) \neq 0$. Picking $N = \lambda_1 M_1 + \dots + \lambda_r M_r$, we have $N \in \mathrm{GL}_n(F)$ and $A N = N B$.</p>
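<p>The argument can be followed step by step on a small concrete instance. A sketch in Python with numpy (the matrices and the complex similarity below are my own choices for illustration, not from the answer): split a complex intertwiner $M$ into real and imaginary parts, then look for a real combination with nonzero determinant.</p>

```python
import numpy as np

# Example matrices (my choice): B = -A, with A a rotation by 90 degrees.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
B = np.array([[0.0,  1.0],
              [-1.0, 0.0]])

# A complex intertwiner: M = (1 + 2i) * diag(1, -1) satisfies A M = M B.
M = (1 + 2j) * np.diag([1.0, -1.0])
assert np.allclose(A @ M, M @ B)

# Write M = M1 + i*M2 over the R-basis {1, i} of C; each real part
# intertwines A and B on its own, as in the proof.
M1, M2 = M.real, M.imag
assert np.allclose(A @ M1, M1 @ B)
assert np.allclose(A @ M2, M2 @ B)

# P(t) = det(M1 + t*M2) is a nonzero polynomial, so over the infinite
# field R some value of t gives an invertible real N with A N = N B.
for t in np.linspace(-2.0, 2.0, 9):
    N = M1 + t * M2
    if abs(np.linalg.det(N)) > 1e-9:
        break
assert np.allclose(A @ N, N @ B)
assert not np.iscomplexobj(N)
```

<p>The loop is the "pick $\lambda_i$ with $P(\lambda)\neq 0$" step; over $\mathbb{R}$ almost every $t$ works.</p>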
<blockquote> <p><strong>THEOREM 1.</strong> Let <span class="math-container">$E$</span> be a field, let <span class="math-container">$F$</span> be a subfield, and let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be <span class="math-container">$n$</span> by <span class="math-container">$n$</span> matrices with coefficients in <span class="math-container">$F$</span>. If <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are similar over <span class="math-container">$E$</span>, they are similar over <span class="math-container">$F$</span>.</p> </blockquote> <p>This is an immediate consequence of</p> <blockquote> <p><strong>THEOREM 2.</strong> In the above setting, let <span class="math-container">$X$</span> be an indeterminate, and let <span class="math-container">$g_k(A)\in F[X]$</span>, <span class="math-container">$1\le k\le n$</span>, be the monic gcd of the determinants of all the <span class="math-container">$k$</span> by <span class="math-container">$k$</span> submatrices of <span class="math-container">$X-A$</span>. Then <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are similar over <span class="math-container">$F$</span> if and only if <span class="math-container">$g_k(A)=g_k(B)$</span> for all <span class="math-container">$k$</span>.</p> </blockquote> <p>References:</p> <p><a href="http://books.google.com/books?id=_K04QgAACAAJ&amp;dq=inauthor:%22Nathan+Jacobson%22&amp;hl=en&amp;ei=4YxHTqjgJYfBswaugvzBBw&amp;sa=X&amp;oi=book_result&amp;ct=result&amp;resnum=3&amp;ved=0CDYQ6AEwAg" rel="nofollow noreferrer">Basic Algebra I: Second Edition</a>, Jacobson, N., Section 3.10.</p> <p><a href="http://books.google.com/books?id=YHuYOwAACAAJ&amp;dq=intitle%3Asurvey%20intitle%3Aalgebra%20inauthor%3Abirkhoff&amp;source=gbs_book_other_versions" rel="nofollow noreferrer">A Survey of Modern Algebra</a>, Birkhoff, G. and Mac Lane, S., 2008. 
In the 1999 edition it was in Section XI.8, titled &quot;The Calculation of Invariant Factors&quot;.</p> <p><a href="http://books.google.com/books?id=Dc-TU2Iub6sC&amp;dq=intitle:algebre+intitle:4+intitle:7+inauthor:bourbaki&amp;source=gbs_navlinks_s" rel="nofollow noreferrer">Algèbre: Chapitres 4 à 7</a>, Nicolas Bourbaki. Translation: <a href="http://books.google.com/books?id=-10_AQAAIAAJ&amp;dq=bibliogroup:%22Algebra+II%22&amp;hl=en&amp;ei=wYRnTvHiFc3BtAa69eT8Cg&amp;sa=X&amp;oi=book_result&amp;ct=result&amp;resnum=1&amp;ved=0CCkQ6AEwAA" rel="nofollow noreferrer">Algebra II</a>.</p> <p>(I haven't found online references.)</p> <p>Here is the sketch of a proof of Theorem 2.</p> <p><strong>EDIT</strong> [This edit follows Soarer's interesting comment.] Each of the formulas <span class="math-container">$fv:=f(A)v$</span> and <span class="math-container">$fv:=f(B)v$</span> (for all <span class="math-container">$f\in F[X]$</span> and all <span class="math-container">$v\in F^n$</span>) defines on <span class="math-container">$F^n$</span> a structure of finitely generated module over the principal ideal domain <span class="math-container">$F[X]$</span>. Moreover, <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are similar if and only if the corresponding modules are isomorphic. The good news is that a wonderful theory for the finitely generated modules over a principal ideal domain is freely available to us. <strong>TIDE</strong></p> <blockquote> <p><strong>THEOREM 3.</strong> Let <span class="math-container">$A$</span> be a principal ideal domain and <span class="math-container">$M$</span> a finitely generated <span class="math-container">$A$</span>-module. 
Then <span class="math-container">$M$</span> is isomorphic to <span class="math-container">$\oplus_{i=1}^nA/(a_i)$</span>, where the <span class="math-container">$a_i$</span> are elements of <span class="math-container">$A$</span> satisfying <span class="math-container">$a_1\mid a_2\mid\cdots\mid a_n$</span>. [As usual <span class="math-container">$(a)$</span> is the ideal generated by <span class="math-container">$a$</span> and <span class="math-container">$a\mid b$</span> means &quot;<span class="math-container">$a$</span> divides <span class="math-container">$b$</span>&quot;.] Moreover the ideals <span class="math-container">$(a_i)$</span> are uniquely determined by these conditions.</p> </blockquote> <p>Let <span class="math-container">$K$</span> be the field of fractions of <span class="math-container">$A$</span>, and <span class="math-container">$S$</span> a submodule of <span class="math-container">$A^n$</span>. The maximum number of linearly independent elements of <span class="math-container">$S$</span> is also the dimension of the vector subspace of <span class="math-container">$K^n$</span> generated by <span class="math-container">$S$</span>. 
Thus this integer, called the <strong>rank</strong> of <span class="math-container">$S$</span>, only depends on the isomorphism class of <span class="math-container">$S$</span> and is additive with respect to finite direct sums.</p> <blockquote> <p><strong>THEOREM 4.</strong> In the above setting we have:</p> <p>(a) <span class="math-container">$S$</span> is free of rank <span class="math-container">$r\le n$</span>.</p> <p>(b) There is a basis <span class="math-container">$u_1,\dots,u_n$</span> of <span class="math-container">$A^n$</span> and there are elements <span class="math-container">$a_1,\dots,a_r$</span> of <span class="math-container">$A$</span> such that <span class="math-container">$a_1u_1,\dots,a_ru_r$</span> is a basis of <span class="math-container">$S$</span>, and <span class="math-container">$a_1\mid a_2\mid\cdots\mid a_r$</span>.</p> </blockquote> <p>Let <span class="math-container">$A$</span> be a commutative ring with one. Recall that <span class="math-container">$A$</span> is a <strong>principal ideal ring</strong> if all its ideals are principal, and that <span class="math-container">$A$</span> is a <strong>Bézout ring</strong> if all its <em>finitely generated</em> ideals are principal.</p> <blockquote> <p><strong>LEMMA.</strong> Let <span class="math-container">$A$</span> be a Bézout ring and let <span class="math-container">$c,d$</span> be in <span class="math-container">$A$</span>. Let <span class="math-container">$\Phi$</span> be a set of ideals of <span class="math-container">$A$</span> such that: <span class="math-container">$(c)$</span> and <span class="math-container">$(d)$</span> are in <span class="math-container">$\Phi$</span>; <span class="math-container">$(c)$</span> is maximal in <span class="math-container">$\Phi$</span>; and <span class="math-container">$(ac+bd)\in\Phi$</span> for all <span class="math-container">$a,b\in A$</span>. 
Then <span class="math-container">$c$</span> divides <span class="math-container">$d$</span> [equivalently <span class="math-container">$(c)$</span> contains <span class="math-container">$(d)$</span>].</p> </blockquote> <p><strong>Proof.</strong> We have <span class="math-container">$(c,d)=(ac+bd)$</span> for some <span class="math-container">$a,b\in A$</span>. This ideal belongs to <span class="math-container">$\Phi$</span>, contains <span class="math-container">$(c)$</span> and is thus equal to <span class="math-container">$(c)$</span>. Hence we get <span class="math-container">$(d)\subset(c,d)=(c)$</span>. <strong>QED</strong></p> <blockquote> <p><strong>PROPOSITION 1.</strong> Let <span class="math-container">$A$</span> be a principal ideal ring and <span class="math-container">$f$</span> an <span class="math-container">$A$</span>-valued bilinear map defined on a product of two <span class="math-container">$A$</span>-modules. Then the image of <span class="math-container">$f$</span> is an ideal.</p> </blockquote> <p><strong>Proof.</strong> Let <span class="math-container">$\Phi$</span> be the set of all ideals of the form <span class="math-container">$(f(x,y))$</span>; pick <span class="math-container">$x,y$</span> such that <span class="math-container">$(f(x,y))$</span> is maximal in <span class="math-container">$\Phi$</span>; and let <span class="math-container">$(f(u,v))$</span> be another element of <span class="math-container">$\Phi$</span>. 
It suffices to show that <span class="math-container">$f(x,y)\mid f(u,v)$</span>.</p> <p>Claim: <span class="math-container">$f(x,y)\mid f(x,v)$</span> and <span class="math-container">$f(x,y)\mid f(u,y)$</span>.</p> <p>Since we have <span class="math-container">$af(x,y)+bf(x,v)=f(x,ay+bv)$</span> and <span class="math-container">$af(x,y)+bf(u,y)=f(ax+bu,y)$</span>, the claim follows from the lemma.</p> <p>By the claim we have <span class="math-container">$f(u,y)=af(x,y)$</span> and <span class="math-container">$f(x,v)=bf(x,y)$</span> for some <span class="math-container">$a,b\in A$</span>. Setting <span class="math-container">$u'=u-ax$</span>, <span class="math-container">$v'=v-by$</span> we get <span class="math-container">$f(x,v')=0=f(u',y)$</span> and thus <span class="math-container">$af(x,y)+bf(u',v')=f(ax+bu',y+v')$</span>. Now the lemma yields the conclusion. <strong>QED</strong></p> <p>We assume now that <span class="math-container">$A$</span> is a principal ideal <em>domain</em>.</p> <p><strong>Proof of Theorem 4.</strong> We assume (as we may) that <span class="math-container">$S$</span> is nonzero, and we let <span class="math-container">$f:A^n\times A^n\to A$</span> be the dot product. By Proposition 1 the set <span class="math-container">$f(S\times A^n)$</span> is an ideal. Let <span class="math-container">$a_1=f(s_1,y_1)$</span> be a generator of this ideal. [Naively: <span class="math-container">$a_1$</span> is a gcd of the coordinates of the elements of <span class="math-container">$S$</span>.] Clearly, <span class="math-container">$u_1:=s_1/a_1$</span> is in <span class="math-container">$A^n$</span> and <span class="math-container">$f(u_1,y_1)=1$</span>. Moreover we have <span class="math-container">$$ A^n=Au_1\oplus (y_1)^\perp,\qquad S=As_1\oplus(S\cap(y_1)^\perp), $$</span> where <span class="math-container">$(y_1)^\perp$</span> is the orthogonal of <span class="math-container">$y_1$</span>. 
[The corresponding projection <span class="math-container">$A^n\twoheadrightarrow Au_1$</span> is given by <span class="math-container">$x\mapsto f(x,y_1)\,u_1$</span>.] Then (a) follows by induction on <span class="math-container">$r$</span>. Let us prove (b). By (a) we know that <span class="math-container">$(y_1)^\perp$</span> and <span class="math-container">$S\cap(y_1)^\perp$</span> are free of rank <span class="math-container">$n-1$</span> and <span class="math-container">$r-1$</span>. By the induction hypothesis there is a basis <span class="math-container">$u_2,\dots,u_n$</span> of <span class="math-container">$(y_1)^\perp$</span> and there are elements <span class="math-container">$a_2,\dots,a_r$</span> of <span class="math-container">$A$</span> such that <span class="math-container">$a_2u_2,\dots,a_ru_r$</span> is a basis of <span class="math-container">$S\cap (y_1)^\perp$</span> and <span class="math-container">$a_2\mid a_3\mid\cdots\mid a_r$</span>. It only remains to show <span class="math-container">$a_1\mid a_2$</span>. We have <span class="math-container">$a_1\mid f(s,y)$</span> for all <span class="math-container">$(s,y)\in S\times A^n$</span>. There is a <span class="math-container">$y$</span> in <span class="math-container">$A^n$</span> such that <span class="math-container">$f(u_2,y)=1$</span>. Indeed, since the determinant of <span class="math-container">$(f(u_i,e_j))$</span> is a unit, no prime of <span class="math-container">$A$</span> can divide <span class="math-container">$f(u_2,e_i)$</span> for all <span class="math-container">$i$</span>, and we get <span class="math-container">$a_1\mid f(a_2u_2,y)=a_2$</span>. 
<strong>QED</strong></p> <p><strong>Proof of Theorem 3.</strong> First statement: Let <span class="math-container">$v_1,\dots,v_n$</span> be generators of the <span class="math-container">$A$</span>-module <span class="math-container">$M$</span>, let <span class="math-container">$(e_i)$</span> be the canonical basis of <span class="math-container">$A^n$</span>, and let <span class="math-container">$\phi:A^n\twoheadrightarrow M$</span> be the <span class="math-container">$A$</span>-linear surjection mapping <span class="math-container">$e_i$</span> to <span class="math-container">$v_i$</span>. Applying Theorem 4 to the submodule <span class="math-container">$\operatorname{Ker}\phi$</span> of <span class="math-container">$A^n$</span>, we get a basis <span class="math-container">$u_1,\dots,u_n$</span> of <span class="math-container">$A^n$</span> and elements <span class="math-container">$a_1,\dots,a_r$</span> of <span class="math-container">$A$</span> such that <span class="math-container">$a_1u_1,\dots,a_ru_r$</span> is a basis of <span class="math-container">$\operatorname{Ker}\phi$</span> and <span class="math-container">$a_1\mid a_2\mid\cdots\mid a_r$</span>, and we set <span class="math-container">$a_{r+1}=\cdots=a_n=0$</span>. Then <span class="math-container">$M$</span> is isomorphic to <span class="math-container">$\oplus_{i=1}^nA/(a_i)$</span>, where the <span class="math-container">$a_i$</span> are as in Theorem 3.</p> <p>Second statement: Assume that <span class="math-container">$M$</span> is also isomorphic to <span class="math-container">$\oplus_{i=1}^mA/(b_i)$</span>, where the <span class="math-container">$b_i$</span> satisfy the same conditions as the <span class="math-container">$a_i$</span>. We only need to prove <span class="math-container">$m=n$</span> and <span class="math-container">$(a_i)=(b_i)$</span> for all <span class="math-container">$i$</span>. Let <span class="math-container">$p\in A$</span> be a prime. 
By the Chinese Remainder Theorem [see below] it suffices to prove the above equality in the case where <span class="math-container">$M$</span> is the direct sum of a finite family of modules of the form <span class="math-container">$M_i:=A/(p^{i+1})$</span> for <span class="math-container">$i\ge0$</span>. For each <span class="math-container">$j\ge0$</span> the quotient <span class="math-container">$p^jM/p^{j+1}M$</span> is an <span class="math-container">$A/(p)$</span> vector space of finite dimension <span class="math-container">$n_j$</span>. We claim that the multiplicity of <span class="math-container">$A/(p^{i+1})$</span> in <span class="math-container">$M$</span> is then <span class="math-container">$n_i-n_{i+1}$</span>.</p> <p>Here is a way to see this. Form the polynomial <span class="math-container">$M(X):=\sum n_jX^j$</span> (where <span class="math-container">$X$</span> is an indeterminate). We have <span class="math-container">$$ M_i(X)=1+X+X^2+\cdots+X^i=\frac{X^{i+1}-1}{X-1}\ , $$</span> and we must solve <span class="math-container">$\sum\,m_i\,M_i(X)=\sum\,n_j\,X^j$</span> for the <span class="math-container">$m_i$</span>, where the <span class="math-container">$n_j$</span> are considered as known quantities (almost all equal to zero). Multiplying through by <span class="math-container">$X-1$</span> we get <span class="math-container">$$ \sum\,m_{i-1}\,X^i-\sum\,m_i=\sum\,(n_{i-1}-n_i)\,X^i, $$</span> whence the formula. <strong>QED</strong></p> <blockquote> <p><strong>PROPOSITION 2.</strong> Let <span class="math-container">$0\to A^r\overset f{\to}A^n\to M\to0$</span> be an exact sequence of <span class="math-container">$A$</span>-modules. 
Then there are bases of <span class="math-container">$A^r$</span> and <span class="math-container">$A^n$</span> making the matrix of <span class="math-container">$f$</span> of the form <span class="math-container">$$ \begin{bmatrix} a_1\\ &amp;\ddots\\ &amp;&amp;a_r\\ {}\\ {}\\ {} \end{bmatrix} $$</span> where only the nonzero entries are indicated. The ideals <span class="math-container">$(a_i)$</span> coincide with the ones given by Theorem 3. Moreover, if <span class="math-container">$\alpha$</span> is the matrix of <span class="math-container">$f$</span> relative to arbitrary bases of <span class="math-container">$A^r$</span> and <span class="math-container">$A^n$</span>, then the ideal of <span class="math-container">$A$</span> generated by the <span class="math-container">$k$</span>-minors of <span class="math-container">$\alpha$</span> is <span class="math-container">$(a_1a_2\cdots a_k)$</span>.</p> </blockquote> <p><strong>Proof.</strong> It suffices to prove the last sentence because the other statements follow immediately from Theorems 3 and 4. Let <span class="math-container">$\beta$</span> and <span class="math-container">$\gamma$</span> be rectangular matrices with entries in <span class="math-container">$A$</span> such that the product <span class="math-container">$\beta\gamma$</span> is defined. Clearly, if an element of <span class="math-container">$A$</span> divides each entry of <span class="math-container">$\beta$</span>, or if it divides each entry of <span class="math-container">$\gamma$</span>, then it divides each entry of <span class="math-container">$\beta\gamma$</span>. A similar statement holds if we replace <span class="math-container">$\beta$</span> and <span class="math-container">$\gamma$</span> with <span class="math-container">$\bigwedge^k\beta$</span> and <span class="math-container">$\bigwedge^k\gamma$</span>. 
Thus, multiplying <span class="math-container">$\beta$</span> on the left or on the right by an invertible matrix does not change the ideal of <span class="math-container">$A$</span> generated by the <span class="math-container">$k$</span>-minors. <strong>QED</strong></p> <p><strong>Proof of Theorem 2.</strong> We will apply Proposition 2 to the principal ideal domain <span class="math-container">$F[X]$</span>. It suffices to find an exact sequence of the form <span class="math-container">$$ 0\to F[X]^n\xrightarrow{X-A}F[X]^n\xrightarrow\phi F^n\to0. $$</span> We do this in a slightly more general setting:</p> <p>Let <span class="math-container">$K$</span> be a commutative ring, let <span class="math-container">$M$</span> be a <span class="math-container">$K$</span>-module, let <span class="math-container">$f$</span> be an endomorphism of <span class="math-container">$M$</span>, let <span class="math-container">$X$</span> be an indeterminate, and let <span class="math-container">$M[X]$</span> be the <span class="math-container">$K[X]$</span>-module of polynomials in <span class="math-container">$X$</span> with coefficients in <span class="math-container">$M$</span>. [In particular, any <span class="math-container">$K$</span>-basis of <span class="math-container">$M$</span> is a <span class="math-container">$K[X]$</span>-basis of <span class="math-container">$M[X]$</span>.] Equip <span class="math-container">$M$</span> and <span class="math-container">$M[X]$</span> with the <span class="math-container">$K[X]$</span>-module structures characterized by <span class="math-container">$$ X^i\cdot x=f^ix,\qquad X^i\cdot X^jx=X^{i+j}x $$</span> for all <span class="math-container">$i,j$</span> in <span class="math-container">$\mathbb N$</span> and all <span class="math-container">$x$</span> in <span class="math-container">$M$</span>. 
Let <span class="math-container">$\phi$</span> be the <span class="math-container">$K[X]$</span>-linear map from <span class="math-container">$M[X]$</span> to <span class="math-container">$M$</span> satisfying <span class="math-container">$\phi(X^ix)=f^ix$</span> for all <span class="math-container">$i,x$</span>, and write again <span class="math-container">$f:M[X]\to M[X]$</span> for the <span class="math-container">$K[X]$</span>-linear extension of <span class="math-container">$f:M\to M$</span>. It is enough to check that the sequence <span class="math-container">$$ 0\to M[X]\xrightarrow{X-f}M[X]\xrightarrow{\phi}M\to0 $$</span> is exact. The only nontrivial inclusion to verify is <span class="math-container">$\operatorname{Ker}\phi\subset\operatorname{Im}(X-f)$</span>. For <span class="math-container">$x=\sum_{i\ge0}X^ix_i$</span> in <span class="math-container">$\operatorname{Ker}\phi$</span>, we have <span class="math-container">$$ x=\sum_{i\ge0}X^ix_i-\sum_{i\ge0}f^ix_i=\sum_{i\ge1}\,(X^i-f^i)\,x_i=(X-f) \sum_{i\ge1}\,\sum_{j+k=i-1}X^jf^kx_i. $$</span> [Non-rigorous wording of the argument: Since <span class="math-container">$f$</span> is a root of the polynomial <span class="math-container">$P(X)=\sum X^ix_i$</span>, the linear polynomial <span class="math-container">$X-f$</span> divides <span class="math-container">$P(X)$</span>.] <strong>QED</strong></p> <p>Here is a proof of the Chinese Remainder Theorem.</p> <blockquote> <p><strong>CHINESE REMAINDER THEOREM.</strong> Let <span class="math-container">$A$</span> be a commutative ring and <span class="math-container">$\mathfrak a_1,\dots,\mathfrak a_n$</span> ideals such that <span class="math-container">$\mathfrak a_p+\mathfrak a_q=A$</span> for <span class="math-container">$p\not=q$</span>. Then the natural morphism from <span class="math-container">$A$</span> to the product of the <span class="math-container">$A/\mathfrak a_p$</span> is surjective. 
Moreover the intersection of the <span class="math-container">$\mathfrak a_p$</span> coincides with their product.</p> </blockquote> <p><strong>Proof.</strong> Multiplying the equalities <span class="math-container">$A=\mathfrak a_1+\mathfrak a_p$</span> for <span class="math-container">$p=2,\dots,n$</span> we get <span class="math-container">$$ A=\mathfrak a_1+\mathfrak a_2\cdots\mathfrak a_n.\qquad(*) $$</span> In particular there is an <span class="math-container">$a_1$</span> in <span class="math-container">$A$</span> such that <span class="math-container">$$ a_1\equiv1\bmod\mathfrak a_1,\quad a_1\equiv0\bmod\mathfrak a_p\ \forall\ p&gt;1. $$</span> Similarly we can find elements <span class="math-container">$a_p$</span> in <span class="math-container">$A$</span> such that <span class="math-container">$a_p\equiv\delta_{pq}\bmod\mathfrak a_q$</span> (Kronecker delta). This proves the first claim. Let <span class="math-container">$\mathfrak a$</span> be the intersection of the <span class="math-container">$\mathfrak a_p$</span>. Multiplying <span class="math-container">$(*)$</span> by <span class="math-container">$\mathfrak a$</span> we get <span class="math-container">$$ \mathfrak a=\mathfrak a_1\mathfrak a+\mathfrak a\mathfrak a_2\cdots\mathfrak a_n\subset\mathfrak a_1\,(\mathfrak a_2\cap\cdots\cap\mathfrak a_n)\subset\mathfrak a. $$</span> This gives the second claim, directly for <span class="math-container">$n=2$</span>, by induction for <span class="math-container">$n&gt;2$</span>. <strong>QED</strong></p>
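<p>Theorem 2 lends itself to direct computation. Here is a small sketch in Python with sympy (my tooling choice, not from the answer): compute each $g_k$ as the monic gcd of the $k$ by $k$ minors of $X-A$ over $\mathbb{Q}[X]$, and compare the lists for two matrices.</p>

```python
from itertools import combinations
import sympy as sp

x = sp.symbols('x')

def minor_gcds(A):
    """g_k = monic gcd of all k-by-k minors of x*I - A, as in Theorem 2."""
    n = A.rows
    XA = x * sp.eye(n) - A
    gs = []
    for k in range(1, n + 1):
        g = sp.Poly(0, x, domain='QQ')
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                m = XA.extract(list(rows), list(cols)).det()
                g = g.gcd(sp.Poly(m, x, domain='QQ'))
        gs.append(g.monic().as_expr())
    return gs

A = sp.Matrix([[0, 1], [0, 0]])      # nilpotent Jordan block
B = sp.Matrix([[0, 0], [1, 0]])      # its transpose: similar to A
C = sp.zeros(2, 2)                   # not similar to A

assert minor_gcds(A) == minor_gcds(B)    # similar matrices: same g_k
assert minor_gcds(A) != minor_gcds(C)    # detected as non-similar
```

<p>For $A$ and its transpose the lists agree ($g_1=1$, $g_2=x^2$), while the zero matrix gives $g_1=x$; the quotients $g_k/g_{k-1}$ are the invariant factors of Theorem 3.</p>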
<p>Going through Spivak's Calculus on Manifolds: in his definition of a differentiable function from a subset $A$ of $\mathbb{R}^n$ to $\mathbb{R}^m$, $f$ is said to be differentiable if it can be extended to a differentiable function on an open set containing $A$.</p> <p>Why is this? So we don't have to worry about taking limits at boundary points? Why is that even a problem? If that's not the problem, what is wrong with defining $f$ to be differentiable on $A$ if it is differentiable at each point in $A$?</p>
<p>Here is a motivating example. Say we have $f:[0,1)\rightarrow \mathbb{R}$. When should we call $f$ differentiable on $[0,1)$?</p> <p>In particular, how should we determine if $f$ is differentiable at $0$? Even if we have that $f$ is differentiable on $(0,1)$, and that $$\lim_{h\rightarrow 0^+}\frac{f(h)-f(0)}{h}$$ exists, we have no way of determining anything about $$\lim_{h\rightarrow 0^-}\frac{f(h)-f(0)}{h}$$ since $f(x)$ is not defined for $x&lt;0$. </p> <p>We could simply say that $f$ is differentiable on $[0,1)$ if it is differentiable on $(0,1)$ and right differentiable at $0$. However, then if we took, for example, $f(x)=|x|$, we would find that $f$ is not differentiable at $0$ with domain $(-1,1)$ (or any open interval containing $[0,1)$), and yet $f$ is differentiable at $0$ on $[0,1)$. Intuitively, $f$ should be differentiable at $0$ either all the time or never, rather than only sometimes, so this would not be a very good definition.</p> <p>Instead, we simply insist that $f$ be differentiable in the usual way at $x=0$, which requires that $f$ be defined on an open neighborhood around $0$. </p> <p>You can see where this is going. In general, if $f$ has domain $\mathcal{D}$, we want to call $f$ differentiable whenever $f$ is differentiable at every $d\in\mathcal{D}$, which requires that we can extend $f$ to include open neighborhoods around every $d$.</p>
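<p>The $f(x)=|x|$ discussion above is easy to see numerically. A tiny Python sketch (mine, just for illustration): the right-hand difference quotients at $0$ are identically $+1$, the left-hand ones identically $-1$, so the two one-sided limits disagree.</p>

```python
# One-sided difference quotients of f(x) = |x| at 0.  The right-hand
# quotient is +1 for every h > 0 and the left-hand quotient is -1, so
# f is right-differentiable at 0 but not differentiable there.
f = abs

for h in [0.1, 0.01, 0.001]:
    right = (f(h) - f(0)) / h        # always +1.0
    left = (f(-h) - f(0)) / (-h)     # always -1.0
    print(h, right, left)
```

<p>On the domain $[0,1)$ only the <code>right</code> quotient is even defined, which is exactly the ambiguity the answer describes.</p>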
<p>Short answer: the requirement of the domain set to be open is not needed to define differentiability. A longer answer follows.</p> <h2>Topological spaces and normed spaces</h2> <p>Let $V$ and $W$ be normed spaces over $\mathbb{R}$. The norms induce the metric topologies on $V$ and $W$, and so $V$ and $W$ are also topological spaces.</p> <p>Let $E \subset V$. Then $E$ is a topological space which carries the subspace topology inherited from $V$, called a topological subspace of $V$. It is important to note that $E$ is a topological space in itself, and therefore you can never fall off from $E$. To be precise, every open neighborhood of a point $a \in E$ is a subset of $E$. These open neighborhoods are exactly the intersections of open sets of $V$ with $E$, by the definition of a subspace topology. Since $E \subset V$, it can be tempting for intuition to view it from the outside as being embedded in $V$. However, a better intuition is obtained by imagining living in $E$; there is no way out of $E$ by neighborhoods.</p> <h2>The domain need not be open</h2> <p>A function $f : E \to W$ is <em>differentiable</em> at $p \in E$, if there exists a linear function $(D_p f) : V \to W$ such that $$ \frac{f(x) - (f(p) + (D_p f)(x - p))}{\lVert x - p \rVert} $$ has a limit at $p$ equal to zero. Recall that a limit in a topological space is defined using open neighborhoods, so that again there is no way to somehow fall off from $E$. </p> <p>We conclude that the definition of differentiability does not require $E$ to be an open set in $V$. </p> <p>What if $E = \{0\}$, say? Then - by the definition of a limit - every element of $W$ is vacuously a limit. In particular, zero is a limit, and therefore $f$ is differentiable on $E$. The derivative at a singular set is whatever you want it to be. </p> <p>Footnote 1: To avoid confusion, note that $E$ is an open set in $E$, by the definition of subspace topology. 
This is not the same as $E$ being open in $V$.</p> <p>Footnote 2: A limit in a singleton set shows that the limit is not always unique even in Hausdorff topological spaces.</p> <p>Footnote 3: The same discussion holds for other forms of differentiation, such as directional derivatives.</p> <h2>An interesting example</h2> <p>This example is adapted from <a href="https://math.stackexchange.com/questions/773213/is-the-function-fx-x-on-pm-frac1nn-in-bbb-n-differentiable-at-0">this</a> question. Let $f : \mathbb{R} \to \mathbb{R}$ be differentiable at $0$. Let $E = \{0\} \cup \{1 / n\}_{n \in \mathbb{N}^{&gt;0}}$. Then $(f|E)$ is differentiable at $0$, and $f'(0) = (f|E)'(0)$. Every point in $E$, except $0$, is isolated. The point $0$ is an accumulation point.</p> <h2>Another example</h2> <p>Let $f : \mathbb{R}^n \to \mathbb{R}^m$ be differentiable. Then $(f|\mathbb{Q}^n)$ is differentiable, and $f'(x) = (f|\mathbb{Q}^n)'(x)$, for all $x \in \mathbb{Q}^n$. </p> <h2>An open domain is sufficient for locality</h2> <p>Why would anyone require the domain set $E$ to be open? One answer is locality.</p> <p>Let $\mathbb{R}$ be equipped with any norm. Since all norms in a finite-dimensional vector space are equivalent, this norm generates the Euclidean topology. Let $E = \{x \in \mathbb{R} : x \geq 0\}$, and $f : \mathbb{R} \to \mathbb{R}$ be such that $$ f(x) = \begin{cases} 1, &amp; x \in E, \\ 0, &amp; x \in \mathbb{R} \setminus E. \end{cases} $$ Then $(f|E)$ is differentiable, and $(f|(\mathbb{R} \setminus E))$ is differentiable, but $f$ is not differentiable, or even continuous. To be able to state that $f$ is differentiable - a synthesis of localized analyses - we need a theorem which allows us to glue differentiable functions together to form a differentiable function, under some conditions. </p> <p>A sufficient condition for such locality is for the domains of the functions to be open, and the functions to agree in value on the overlaps. However, this condition is not necessary for locality. 
Consider a constant function $f$, and its restrictions to $E$ and $(\mathbb{R} \setminus E) \cup \{0\}$. The sets $E$ and $(\mathbb{R} \setminus E) \cup \{0\}$ are both non-open sets, $(f|E)$ and $(f|((\mathbb{R} \setminus E) \cup \{0\}))$ are differentiable, and $f$ is differentiable.</p> <p>Footnote 4: The gluing lemma for differentiable functions is analogous to the gluing (pasting) lemma for continuous functions.</p> <h2>Open questions</h2> <p>Is locality the only reason some authors require the domain to be an open set? What is a necessary and sufficient condition on the domain sets for the gluing lemma to hold for differentiable functions?</p>
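<p>A quick numerical illustration of the "interesting example" above (a sketch, not part of the argument; the choice $f(x)=x^2$ is arbitrary): the difference quotients at $0$ taken only through the points $1/n \in E$ already converge to the ordinary derivative $f'(0)=0$.</p>

```python
# Sketch (hypothetical f, not from the answer): difference quotients of
# f(x) = x**2 at 0, restricted to the points 1/n of E, tend to f'(0) = 0.
def f(x):
    return x * x

quotients = [(f(1.0 / n) - f(0.0)) / (1.0 / n) for n in (1, 10, 100, 10**6)]
print(quotients)  # values shrink toward f'(0) = 0
```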
linear-algebra
<p>I know what a projection operator is, but I am unable to explain it in words without using mathematical symbols. Can anyone help me?</p> <p>I don't need examples or the definition - I want to know why and how its need arose, and what is the idea behind it?</p>
<p>This is how I used to imagine projections:</p> <p>If a mouse:</p> <p><img src="https://img4.wikia.nocookie.net/__cb20120803011325/villains/images/c/cf/Jerry.png" alt=""></p> <p>gets run over by a steamroller:</p> <p><img src="https://i.sstatic.net/ZxZKT.gif" alt="enter image description here"></p> <p>It will look like this:</p> <p><img src="https://i.sstatic.net/8atiO.jpg" alt="enter image description here"></p> <p>Now if it gets run over by a steamroller another time, it will still look like this:</p> <p><img src="https://i.sstatic.net/8atiO.jpg" alt="enter image description here"></p>
<p>I do think the wiki's explanation is pretty good:</p> <blockquote> <p>A projection is a mapping of a set (or other mathematical structure) into a subset (or sub-structure), which is equal to its square for mapping composition (or, in other words, which is idempotent).</p> </blockquote> <p>Even more simply: it is an idempotent homomorphism.</p>
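<p>The idempotence in both answers can be made concrete with a small sketch (assuming nothing beyond the definition): project a vector of $\mathbb{R}^2$ onto the $x$-axis, then project again; the second application changes nothing, just as the second pass of the steamroller changes nothing.</p>

```python
# Sketch: orthogonal projection of the plane onto the x-axis; applying it
# twice gives the same result as applying it once (idempotence).
def project_onto_x_axis(v):
    x, y = v
    return (x, 0.0)

v = (3.0, 4.0)
once = project_onto_x_axis(v)
twice = project_onto_x_axis(once)
print(once, twice)  # (3.0, 0.0) (3.0, 0.0)
```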
probability
<p>If <span class="math-container">$X\sim \Gamma(a_1,b)$</span> and <span class="math-container">$Y \sim \Gamma(a_2,b)$</span>, I need to prove <span class="math-container">$X+Y\sim\Gamma(a_1+a_2,b)$</span> if <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are independent.</p> <p>I am trying to apply the convolution formula for independent random variables and to multiply the gamma densities, but I am stuck.</p>
<p>Now that the homework deadline is presumably long past, here is a proof for the case of $b=1$, adapted from an <a href="https://stats.stackexchange.com/a/51623/6633">answer</a> of mine on stats.SE, which fleshes out the details of what I said in a comment on the question.</p> <p>If $X$ and $Y$ are independent continuous random variables, then the probability density function of $Z=X+Y$ is given by the convolution of the probability density functions $f_X(x)$ and $f_Y(y)$ of $X$ and $Y$ respectively. Thus, $$f_{X+Y}(z) = \int_{-\infty}^{\infty} f_X(x)f_Y(z-x)\,\mathrm dx. $$ But when $X$ and $Y$ are nonnegative random variables, $f_X(x) = 0$ when $x &lt; 0$, and for positive number $z$, $f_Y(z-x) = 0$ when $x &gt; z$. Consequently, for $z &gt; 0$, the above integral can be simplified to $$\begin{align} f_{X+Y}(z) &amp;= \int_0^z f_X(x)f_Y(z-x)\,\mathrm dx\\ &amp;=\int_0^z \frac{x^{a_1-1}e^{-x}}{\Gamma(a_1)}\frac{(z-x)^{a_2-1}e^{-(z-x)}}{\Gamma(a_2)}\,\mathrm dx\\ &amp;= e^{-z}\int_0^z \frac{x^{a_1-1}(z-x)^{a_2-1}}{\Gamma(a_1)\Gamma(a_2)}\,\mathrm dx &amp;\scriptstyle{\text{now substitute}}~ x = zt~ \text{and think}\\ &amp;= e^{-z}z^{a_1+a_2-1}\int_0^1 \frac{t^{a_1-1}(1-t)^{a_2-1}}{\Gamma(a_1)\Gamma(a_2)}\,\mathrm dt &amp; \scriptstyle{\text{of Beta}}(a_1,a_2)~\text{random variables}\\ &amp;= \frac{e^{-z}z^{a_1+a_2-1}}{\Gamma(a_1+a_2)} \end{align}$$</p>
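<p>The convolution above can be checked numerically (a sketch, not part of the proof; the parameters $a_1$, $a_2$, $z$ and the step count are arbitrary choices): integrate $f_X(x)f_Y(z-x)$ over $[0,z]$ and compare with the $\Gamma(a_1+a_2,1)$ density at the same $z$.</p>

```python
import math

# Sketch: numerically evaluate the convolution integral for b = 1 and compare
# with the Gamma(a1 + a2, 1) density; a1, a2, z are arbitrary choices.
def gamma_pdf(x, a):
    return x ** (a - 1) * math.exp(-x) / math.gamma(a)

def convolution_at(z, a1, a2, steps=100_000):
    h = z / steps
    # interior sum; the endpoint values vanish when a1, a2 > 1
    return h * sum(gamma_pdf(i * h, a1) * gamma_pdf(z - i * h, a2)
                   for i in range(1, steps))

a1, a2, z = 2.0, 3.5, 4.0
print(convolution_at(z, a1, a2), gamma_pdf(z, a1 + a2))  # both ≈ 0.179
```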
<p>It's easier to use Moment Generating Functions to prove that. <span class="math-container">$$ M(t;\alpha,\beta ) = Ee^{tX} = \int_{0}^{+\infty} e^{tx} f(x;\alpha,\beta)dx = \int_{0}^{+\infty} e^{tx} \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha-1}e^{-\beta x}dx \\ = \frac{\beta^\alpha}{\Gamma(\alpha)} \int_{0}^{+\infty} x^{\alpha-1}e^{-(\beta - t) x}dx = \frac{\beta^\alpha}{\Gamma(\alpha)} \frac{\Gamma(\alpha)}{(\beta - t)^\alpha} = \frac{1}{(1- \frac{t}{\beta})^\alpha} $$</span> By using the property of independent random variables, we know <span class="math-container">$$M_{X + Y}(t) = M_{X}(t)M_{Y}(t) $$</span> So if <span class="math-container">$X \sim Gamma(\alpha_1,\beta), Y \sim Gamma(\alpha_2,\beta), $</span> <span class="math-container">$$M_{X + Y}(t) = \frac{1}{(1- \frac{t}{\beta})^{\alpha_1}} \frac{1}{(1- \frac{t}{\beta})^{\alpha_2}} = \frac{1}{(1- \frac{t}{\beta})^{\alpha_1 + \alpha_2}}$$</span> You can see the MGF of the product is still in the format of Gamma distribution. Finally we can get <span class="math-container">$X + Y \sim Gamma(\alpha_1 + \alpha_2, \beta)$</span></p>
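<p>The MGF formula (valid for $t &lt; \beta$) can likewise be checked numerically (a sketch; the truncation point and step count are arbitrary choices): integrate $e^{tx}$ against the $\Gamma(\alpha,\beta)$ density and compare with $(1-t/\beta)^{-\alpha}$.</p>

```python
import math

# Sketch: check M(t) = (1 - t/beta)**(-alpha) by numerically integrating
# E[e^{tX}]; the truncation bound and step count are arbitrary choices.
def gamma_pdf(x, alpha, beta):
    return beta ** alpha * x ** (alpha - 1) * math.exp(-beta * x) / math.gamma(alpha)

def mgf_numeric(t, alpha, beta, upper=80.0, steps=200_000):
    h = upper / steps
    return h * sum(math.exp(t * i * h) * gamma_pdf(i * h, alpha, beta)
                   for i in range(1, steps))

alpha, beta, t = 2.0, 1.0, 0.5
print(mgf_numeric(t, alpha, beta), (1 - t / beta) ** (-alpha))  # both ≈ 4.0
```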
logic
<p>What books/notes should one read to learn model theory? As I do not have much background in logic it would be ideal if such a reference does not assume much background in logic. Also, as I am interested in arithmetic geometry, is there a reference with a view towards such a topic?</p>
<p>I really like <em>Model Theory: An Introduction</em> by David Marker. It starts from scratch and has a lot of algebraic examples. </p>
<p>For a free alternative, Peter L. Clark has posted his notes <a href="http://alpha.math.uga.edu/%7Epete/MATH8900.html" rel="nofollow noreferrer">Introduction to Model Theory</a> on his website. He says no prior knowledge of logic is assumed and the applications are primarily in the areas of Algebra, Algebraic Geometry and Number Theory.</p>
logic
<p>It is said that our current basis for mathematics are the ZFC-axioms. </p> <p><strong>Question:</strong> Where are these axioms in our mathematics? When do we use them? I have now studied math for a year, and have yet to run into a single one of these ZFC axioms. How can this be if they are supposed to be the basis for everything I have done so far? </p>
<p>I think this is a very good question. I don't have the time right now to write a complete answer, but let me make a few quick points (and I'll add more when I get the chance):</p> <ul> <li><p><strong>Most mathematics is not done in ZFC</strong>. Most mathematics, in fact, isn't done axiomatically at all: rather, we simply use propositions which seem "intuitively obvious" without comment. This is true even when it looks like we're being rigorous: for example, when we formally define the real numbers in a real analysis class (by Cauchy sequences, Dedekind cuts, or however), we (usually) <em>don't</em> set forth a list of axioms of set theory which we're using to do this. The reason is, that the facts about sets which we need seem to be utterly tame: for example, that the intersection of two sets is again a set. </p></li> <li><p><strong>ZFC arose in response to a historical need.</strong> The history of modern logic is fascinating, and I don't want to do it injustice; let me just say (wildly oversimplifying) that you only really need to axiomatize mathematics if there's real danger of different people using different axioms implicitly, without realizing it. One standard example here is the axiom of choice, which very reasonable people alternately find perfectly intuitive and clearly false. So ZFC, very roughly speaking, won the job of being the "default" axiom system for mathematics: you're perfectly free to prove theorems using (say) NF instead, but it's considered gauche if you don't explicitly say that's what you're doing. Are there reasons to prefer some other system to ZFC? I'm a very pro-ZFC person, but even I'd have to say yes. The point isn't that ZFC is perfect, though; it's that it's strong enough to address the vast majority of our mathematical needs, while also being reasonable enough that it doesn't cause huge problems most of the time. 
This strength, by the way, is crucial: we don't want to have to keep updating our axiomatic framework to allow us to do basic mathematics, so overshooting in terms of strength is (I would argue) preferable (the counterargument is that overshooting runs a greater risk of inconsistency; but that's an issue for a different question, or at least a bit later when I have more time to write).</p></li> <li><p><strong>Even in the ZFC context, ZFC is usually overkill.</strong> OK, let's say you buy the ZFC sales pitch (I certainly did - and I just love the complimentary toaster!). Then you have some class of theorems you want to prove, and - after expressing them in the language of ZFC (which is frankly a tedious process, and one of the practical objections to ZFC emerges from this) - proceed to prove them from the ZFC axioms. But then you notice that you didn't use most of the ZFC axioms at all! This is in fact the norm - <em>replacement</em> especially is overkill in most situations. This isn't a problem, though: ZFC doesn't claim to be minimal, in any sense. And in fact the study of what axioms are <em>needed</em> to prove a given theorem is quite rich: see e.g. <em>reverse mathematics</em>.</p></li> </ul> <p>Tl;dr: I would say that the sentence "ZFC is the foundation for modern mathematics," while broadly correct, is hiding a <em>lot</em> of context. In particular:</p> <ul> <li><p>Most of the time you're not going to actually be using axioms at all.</p></li> <li><p>ZFC's claim to prominence is primarily sociological/historical; we could equally well have gone with NF, or something completely different.</p></li> <li><p>The ZFC axioms are wildly overpowered for most of mathematics; in particular, you probably won't really need the whole of ZFC for almost anything you do.</p></li> <li><p>Most of all: <strong>ZFC doesn't come first</strong>. 
<em>Mathematics</em> comes first; ZFC is a <em>mathematical</em> theory that, among other things, "absorbs" the vast majority of mathematics in a certain way. But you can do math without ZFC. (It's just that you run the risk of accidentally invoking an "obvious" set-theoretic principle which isn't so obvious, and so conflicting with other results which invoke the "obvious" negation of your "obvious" axiom. ZFC provides a general language for us to do math in, so we don't have to worry about things like this. But in practice, this almost never occurs.)</p></li> </ul> <hr> <p>Note that you could also ask this question with regard to formal logic - specifically, classical first-order logic - in general; and there has been a lot written about this (I'll add some citations when I have more time). But that's going <em>very</em> far afield.</p> <p>Really tl;dr (and, I should add, in conflict with a number of people - this is my opinion): foundations do not <em>enable</em>, but rather <em>serve</em>, mathematics.</p>
<p>"My house is supposedly built on concrete pad foundations, but I've been fixing the pipes upstairs, and I haven't seen any foundation yet."</p> <p>This is analogous - you don't see them because they're so deep beneath the surface of where you're working. If you lifted the floorboards and poked around, you'd find the foundations.</p> <p>Though they might actually be wooden pile or columns-on-ball-bearings, in the same way you might actually be using a system besides ZFC. Though until you go and check, you probably won't know the difference.</p> <p>As going down to the foundation level is way beyond scope for most house repairs, so is going down to axioms beyond scope for most mathematics.</p>
number-theory
<p>It is well known that <span class="math-container">$\sqrt{2}$</span> is irrational, and by modifying the proof (replacing <em>even</em> with <em>divisible by <span class="math-container">$3$</span></em>), one can prove that <span class="math-container">$\sqrt{3}$</span> is irrational, as well. On the other hand, clearly <span class="math-container">$\sqrt{n^2} = n$</span> for any positive integer <span class="math-container">$n$</span>. It seems that any positive integer has a square root that is either an integer or irrational number.</p> <blockquote> <p><span class="math-container">$1.$</span> How do we prove that if <span class="math-container">$a \in \mathbb N$</span>, then <span class="math-container">$\sqrt a$</span> is an integer or an irrational number <span class="math-container">$?$</span></p> </blockquote> <p>I also notice that I can modify the proof that <span class="math-container">$\sqrt{2}$</span> is irrational to prove that <span class="math-container">$\sqrt[3]{2}, \sqrt[4]{2}, ...$</span> are all irrational. This suggests we can extend the previous result to other radicals.</p> <blockquote> <p><span class="math-container">$2.$</span> Can we extend <span class="math-container">$1?$</span> That is, can we show that for any <span class="math-container">$a, b \in \mathbb{N}$</span>, <span class="math-container">$a^{1/b}$</span> is either an integer or irrational?</p> </blockquote>
<p><strong>Theorem</strong>: If <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are positive integers, then <span class="math-container">$a^{1/b}$</span> is either irrational or an integer.</p> <p>If <span class="math-container">$a^{1/b}=x/y$</span> where <span class="math-container">$y$</span> does not divide <span class="math-container">$x$</span>, then <span class="math-container">$a=(a^{1/b})^b=x^b/y^b$</span> is not an integer (since <span class="math-container">$y^b$</span> does not divide <span class="math-container">$x^b$</span>), giving a contradiction.</p> <p>I subsequently found a variant of this proof on Wikipedia, under <a href="https://en.wikipedia.org/wiki/Square_root_of_2#Proofs_of_irrationality" rel="noreferrer">Proof by unique factorization</a>.</p> <p>The bracketed claim is proved below.</p> <p><strong>Lemma</strong>: If <span class="math-container">$y$</span> does not divide <span class="math-container">$x$</span>, then <span class="math-container">$y^b$</span> does not divide <span class="math-container">$x^b$</span>.</p> <p><a href="https://en.wikipedia.org/wiki/Fundamental_theorem_of_arithmetic" rel="noreferrer">Unique prime factorisation</a> implies that there exists a prime <span class="math-container">$p$</span> and positive integer <span class="math-container">$t$</span> such that <span class="math-container">$p^t$</span> divides <span class="math-container">$y$</span> while <span class="math-container">$p^t$</span> does not divide <span class="math-container">$x$</span>. Therefore <span class="math-container">$p^{bt}$</span> divides <span class="math-container">$y^b$</span> while <span class="math-container">$p^{bt}$</span> does not divide <span class="math-container">$x^b$</span> (since otherwise <span class="math-container">$p^t$</span> would divide <span class="math-container">$x$</span>). 
Hence <span class="math-container">$y^b$</span> does not divide <span class="math-container">$x^b$</span>.</p>
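<p>The theorem suggests a simple computational test (a sketch; the helper name is invented): since $a^{1/b}$ is rational exactly when some integer $n$ satisfies $n^b = a$, exact integer arithmetic settles each case.</p>

```python
# Sketch (helper name invented): classify a**(1/b) using only integer
# arithmetic — by the theorem it is either an integer or irrational.
def has_integer_root(a, b):
    r = round(a ** (1.0 / b))           # float guess, then verify exactly
    return any(n >= 0 and n ** b == a for n in (r - 1, r, r + 1))

print(has_integer_root(64, 3))  # True: 64**(1/3) = 4, an integer
print(has_integer_root(2, 2))   # False: sqrt(2) is therefore irrational
```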
<p>These (standard) results are discussed in detail in</p> <p><a href="http://alpha.math.uga.edu/%7Epete/4400irrationals.pdf" rel="nofollow noreferrer">http://alpha.math.uga.edu/~pete/4400irrationals.pdf</a></p> <p>This is the second handout for a first course in number theory at the advanced undergraduate level. Three different proofs are discussed:</p> <ol> <li><p>A generalization of the proof of irrationality of <span class="math-container">$\sqrt{2}$</span>, using the decomposition of any positive integer into a perfect <span class="math-container">$k$</span>th power times a <span class="math-container">$k$</span>th power-free integer, followed by Euclid's Lemma. (For some reason, I don't give all the details of this proof. Maybe I should...)</p> </li> <li><p>A proof using the functions <span class="math-container">$\operatorname{ord}_p$</span>, very much along the lines of the one Carl Mummert mentions in his answer.</p> </li> <li><p>A proof by establishing that the ring of integers is integrally closed. This is done directly from unique factorization, but afterwards I mention that it is a special case of the Rational Roots Theorem.</p> </li> </ol> <p>Let me also remark that every proof I have ever seen of this fact uses the Fundamental Theorem of Arithmetic (existence and uniqueness of prime factorizations) in some form. [<b>Edit</b>: I have now seen Robin Chapman's answer to the question, so this is no longer quite true.] However, if you want to prove any particular case of the result, you can use a brute force case-by-case analysis that avoids FTA.</p>
logic
<p>I'm reading Behnke's <em>Fundamentals of mathematics</em>:</p> <blockquote> <p>If the number of axioms is finite, we can reduce the concept of a consequence to that of a tautology.</p> </blockquote> <p>I got curious on this: Are there infinite sets of axioms? The only thing I could think about is the possible existence of unknown axioms and perhaps some belief that this number of axioms is infinite.</p>
<p>In the first-order Peano axioms the principle of mathematical induction is not one axiom, but a &quot;template&quot; called an <a href="http://en.wikipedia.org/wiki/Axiom_schema" rel="nofollow noreferrer">axiom scheme</a>. For every possible expression (or &quot;predicate&quot;) with a free variable, <span class="math-container">$P(n)$</span>, we have the axiom:</p> <p><span class="math-container">$$(P(0) \land \left(\forall n: P(n)\implies P(n+1)\right))\implies \\\forall n: P(n)$$</span></p> <p>So, if <span class="math-container">$P(x)$</span> is the predicate <span class="math-container">$x\cdot 0 = 1$</span>, then we'd have the messy axiom:</p> <p><span class="math-container">$$(0\cdot 0=1 \land \left(\forall n: n\cdot 0 =1\implies (n+1)\cdot 0=1\right))\implies \\\forall n: n\cdot 0 = 1$$</span></p> <p>Our inclination is to think of this axiom scheme as a single axiom when preceded by &quot;<span class="math-container">$\forall P$</span>&quot;, but in first-order theory, there is only one &quot;type.&quot; In first-order number theory, that type is &quot;natural number.&quot; So there is no room in the language for the concept of <span class="math-container">$\forall P$</span>. 
<a href="http://en.wikipedia.org/wiki/Mathematical_induction#Axiom_of_induction" rel="nofollow noreferrer">In second order theory</a>, we can say <span class="math-container">$\forall P$</span>.</p> <p>In set theory, you have a similar rule, the <a href="http://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory#3._Axiom_schema_of_specification_.28also_called_the_axiom_schema_of_separation_or_of_restricted_comprehension.29" rel="nofollow noreferrer">&quot;axiom of specification&quot;</a> which lets you construct a set from any predicate, <span class="math-container">$P(x,y)$</span>, with two free variables:</p> <p><span class="math-container">$$\forall S:\exists T: \forall x: (x\in T\iff (x\in S\land P(x,S)))$$</span></p> <p>(The axiom lets you do more, but this is a simple case.)</p> <p>which essentially means that there exists a set:</p> <p><span class="math-container">$$\{x\in S: P(x,S)\}$$</span></p> <p>Again, there is no such object inside set theory as a &quot;predicate.&quot;</p> <p>For most human axiom systems, even when the axioms are infinite, we have a level of verifiability. We usually desire an ability to verify a proof using mechanistic means, and therefore, given any step in a proof, we desire the ability to verify the step in a finite amount of time.</p>
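<p>For contrast, in a higher-order setting the whole scheme collapses into a single statement that quantifies over the predicate itself. A sketch in Lean 4 (illustrative only, not tied to the answer's notation):</p>

```lean
-- Sketch: in Lean the induction principle is a single statement because we
-- may quantify over the predicate P itself — unavailable in first-order PA.
example (P : Nat → Prop) (h0 : P 0) (hs : ∀ n, P n → P (n + 1)) :
    ∀ n, P n := by
  intro n
  induction n with
  | zero => exact h0
  | succ k ih => exact hs k ih
```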
<p>Many important theories, most significantly first-order Peano arithmetic, and ZFC, the most commonly used axiomatic set theory, have an infinite number of axioms. So does the theory of algebraically closed fields. </p>
logic
<p>I am new to discrete mathematics, and I am trying to simplify this statement. I'm using a chart of logical equivalences, but I can't seem to find anything that really helps reduce this. </p> <p><a href="https://i.sstatic.net/Hvpyg.png" rel="noreferrer"><img src="https://i.sstatic.net/Hvpyg.png" alt=""></a></p> <p>Which of these would help me to solve this problem? I realize I can covert $p \to q$ into $\lnot p \lor q$, but I'm stuck after that step. Any push in the right direction would be awesome.</p>
<p>Is it a tautology? Yes.</p> <p>Why? Just draw the truth table and see that the truth value of the main implication is always 'true'. Here, I say '0' is 'false' and '1' is 'true': \begin{array}{|c|c|c|c|c|} \hline p &amp; q &amp; p\to q &amp; p\land(p\to q) &amp; (p\land(p\to q))\to q \\ \hline 0 &amp; 0&amp; 1&amp;0&amp;1\\ \hline 0 &amp; 1&amp; 1&amp;0&amp;1\\ \hline 1 &amp; 0&amp; 0&amp;0&amp;1\\ \hline 1 &amp; 1&amp; 1&amp; 1&amp; 1\\ \hline \end{array}</p> <p>Edit: you can also say \begin{align}p \land (p \to q) &amp;\equiv p \land (\lnot p \lor q)\\ &amp;\equiv (p \land \lnot p) \lor (p \land q )\\ &amp;\equiv \text {false} \lor (p \land q)\\&amp;\equiv p \land q, \end{align}</p> <p>that implies $q $.</p> <p>Edit 2: this inference method is called <em>modus ponens</em>, and it is the simplest one.</p> <p>For instance, say $p = $ it rains, $q = $ I get wet. </p> <p>So, if we know that the implication <em>if it rains, then I get wet</em> is true (that is, $p\to q $) and <em>It rains</em> ($p $), what can we infer? It is obvious that <em>I get wet</em> ($q $).</p> <p>Note that even in the rows where $p \land (p\to q) $ is false, the main implication $(p\land (p\to q)) \to q$ is still true.</p>
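<p>The truth table can also be checked by brute force (a small sketch, not part of the answer): evaluate the formula in all four rows and confirm every value is true.</p>

```python
from itertools import product

# Sketch: exhaustively check that (p ∧ (p → q)) → q holds in all four rows.
def implies(a, b):
    return (not a) or b

rows = [implies(p and implies(p, q), q)
        for p, q in product([False, True], repeat=2)]
print(rows)  # [True, True, True, True] — a tautology
```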
<p>One approach is just to start writing a truth table - after all, there are only four cases. And this can be reduced: If $q$ is true, the statement is true (this is two cases). If $p$ is false, the statement is true because</p> <p>$$p \wedge (p \to q)$$</p> <p>is false. The only case that's left is when $p$ is true and $q$ is false, and I'll leave it to you to verify that the proposition is true in this case too.</p>
combinatorics
<p>Let $S$ be a set of size $n$. There is an easy way to count the number of subsets with an even number of elements. Algebraically, it comes from the fact that</p> <p>$\displaystyle \sum_{k=0}^{n} {n \choose k} = (1 + 1)^n$</p> <p>while</p> <p>$\displaystyle \sum_{k=0}^{n} (-1)^k {n \choose k} = (1 - 1)^n$.</p> <p>It follows that </p> <p>$\displaystyle \sum_{k=0}^{n/2} {n \choose 2k} = 2^{n-1}$. </p> <p>A direct combinatorial proof is as follows: fix an element $s \in S$. If a given subset has $s$ in it, add it in; otherwise, take it out. This defines a bijection between the number of subsets with an even number of elements and the number of subsets with an odd number of elements.</p> <p>The analogous formulas for the subsets with a number of elements divisible by $3$ or $4$ are more complicated, and divide into cases depending on the residue of $n \bmod 6$ and $n \bmod 8$, respectively. The algebraic derivations of these formulas are as follows (with $\omega$ a primitive third root of unity): observe that</p> <p>$\displaystyle \sum_{k=0}^{n} \omega^k {n \choose k} = (1 + \omega)^n = (-\omega^2)^n$</p> <p>while</p> <p>$\displaystyle \sum_{k=0}^{n} \omega^{2k} {n \choose k} = (1 + \omega^2)^n = (-\omega)^n$</p> <p>and that $1 + \omega^k + \omega^{2k} = 0$ if $k$ is not divisible by $3$ and equals $3$ otherwise. (This is a special case of the discrete Fourier transform.) It follows that</p> <p>$\displaystyle \sum_{k=0}^{n/3} {n \choose 3k} = \frac{2^n + (-\omega)^n + (-\omega)^{2n}}{3}.$</p> <p>$-\omega$ and $-\omega^2$ are sixth roots of unity, so this formula splits into six cases (or maybe three). Similar observations about fourth roots of unity show that</p> <p>$\displaystyle \sum_{k=0}^{n/4} {n \choose 4k} = \frac{2^n + (1+i)^n + (1-i)^n}{4}$</p> <p>where $1+i = \sqrt{2} e^{ \frac{\pi i}{4} }$ is a scalar multiple of an eighth root of unity, so this formula splits into eight cases (or maybe four). 
</p> <p><strong>Question:</strong> Does anyone know a direct combinatorial proof of these identities? </p>
<p>There's a very pretty combinatorial proof of the general identity <span class="math-container">$$\sum_{k \geq 0} \binom{n}{rk} = \frac{1}{r} \sum_{j=0}^{r-1} (1+\omega^j)^n,$$</span> for <span class="math-container">$\omega$</span> a primitive <span class="math-container">$r$</span>th root of unity, in Benjamin, Chen, and Kindred, "<a href="https://scholarship.claremont.edu/hmc_fac_pub/535/" rel="nofollow noreferrer">Sums of Evenly Spaced Binomial Coefficients</a>," <em>Mathematics Magazine</em> 83 (5), pp. 370-373, December 2010.</p> <p>They show that both sides count the number of closed <span class="math-container">$n$</span>-walks on <span class="math-container">$C_r$</span> beginning at vertex 0, where <span class="math-container">$C_r$</span> is the directed cycle on <span class="math-container">$r$</span> elements with the addition of a loop at each vertex, and a walk is <em>closed</em> if it ends where it starts. </p> <p><em>Left-hand side</em>: In order for an <span class="math-container">$n$</span>-walk to be closed, it has to take <span class="math-container">$kr$</span> forward moves and <span class="math-container">$n-kr$</span> stationary moves for some <span class="math-container">$k$</span>.</p> <p><em>Right-hand side</em>: The number of closed walks starting at vertex <span class="math-container">$j$</span> is the same regardless of the choice of <span class="math-container">$j$</span>, and so it suffices to prove that the total number of closed <span class="math-container">$n$</span>-walks on <span class="math-container">$C_r$</span> is <span class="math-container">$\sum_{j=0}^{r-1} (1+\omega^j)^n.$</span> For each <span class="math-container">$n$</span>-walk with initial vertex <span class="math-container">$j$</span>, assign each forward step a weight of <span class="math-container">$\omega^j$</span> and each stationary step a weight of <span class="math-container">$1$</span>. 
Define the weight of an <span class="math-container">$n$</span>-walk itself to be the product of the weights of the steps in the walk. Thus the sum of the weights of all <span class="math-container">$n$</span>-walks starting at <span class="math-container">$j$</span> is <span class="math-container">$(1+\omega^j)^n$</span>, and <span class="math-container">$\sum_{j=0}^{r-1} (1+\omega^j)^n$</span> gives the total weight of all <span class="math-container">$n$</span>-walks on <span class="math-container">$C_r$</span>. The open <span class="math-container">$n$</span>-walks can then be partitioned into orbits such that the sum of the weights of the walks in each orbit is <span class="math-container">$0$</span>. Thus the open <span class="math-container">$n$</span>-walks contribute a total of <span class="math-container">$0$</span> to the sum <span class="math-container">$\sum_{j=0}^{r-1} (1+\omega^j)^n$</span>. Since a closed <span class="math-container">$n$</span>-walk has weight <span class="math-container">$1$</span>, <span class="math-container">$\sum_{j=0}^{r-1} (1+\omega^j)^n$</span> must therefore give the number of closed <span class="math-container">$n$</span>-walks on <span class="math-container">$C_r$</span>.</p> <p><HR></p> <p>They then make a slight modification of the argument above to give a combinatorial proof of <span class="math-container">$$\sum_{k \geq 0} \binom{n}{a+rk} = \frac{1}{r} \sum_{j=0}^{r-1} \omega^{-ja}(1+\omega^j)^n,$$</span> where <span class="math-container">$0 \leq a &lt; r$</span>.</p> <p><HR></p> <p>Benjamin and Scott, in "<a href="https://scholarship.claremont.edu/hmc_fac_pub/124/" rel="nofollow noreferrer">Third and Fourth Binomial Coefficients</a>" (<em>Fibonacci Quarterly</em>, 49 (2), pp. 99-101, May 2011) give different combinatorial arguments for the specific cases you're asking about, <span class="math-container">$\sum_{k \geq 0} \binom{n}{3k}$</span> and <span class="math-container">$\sum_{k \geq 0} \binom{n}{4k}$</span>. 
I prefer the more general argument above, though, so I'll just leave this one as a link and not summarize it. </p>
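<p>Both sides of the $r=3$ case of the identity can be compared numerically (a sketch, with the range of $n$ chosen arbitrarily): count the subsets directly and evaluate the cube-roots-of-unity formula.</p>

```python
import cmath
from math import comb

# Sketch: compare the direct count of C(n, 3k) with the roots-of-unity
# formula (2**n + (1 + w)**n + (1 + w**2)**n) / 3 for small n.
def direct(n):
    return sum(comb(n, k) for k in range(0, n + 1, 3))

def via_roots(n):
    w = cmath.exp(2j * cmath.pi / 3)    # primitive cube root of unity
    return round(sum((1 + w**j) ** n for j in range(3)).real / 3)

print([direct(n) for n in range(1, 8)])     # [1, 1, 2, 5, 11, 22, 43]
print([via_roots(n) for n in range(1, 8)])  # the same list
```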
<p>Fix two elements s<sub>1</sub>,s<sub>2</sub>&isin;S and divide subsets of S into two parts (subsets of S containing only s<sub>2</sub>)&cup;(subsets of S which contain s<sub>1</sub> if they contain s<sub>2</sub>). The second part contains an equal number of sets for all remainders mod 3 (because Z/3 acts there adding s<sub>1</sub>, then s<sub>2</sub>, then removing both of them) -- namely, 2<sup>n-2</sup>. And for the first part we have a bijection with subsets <em>(edit: with 2 mod 3 elements)</em> of a set with (n-2) elements.</p> <p>So we get a recurrence relation that gives an answer 2<sup>n-2</sup>+2<sup>n-4</sup>+... -- i.e. (2<sup>n</sup>-1)/3 for even and (2<sup>n</sup>-2)/3 for odd n.</p> <hr> <p><strong>Errata.</strong> For n=0,1,5 mod 6 one should add "+1" to the answer from the previous paragraph (e.g. for n=6 the correct answer is 1+20+1=22 and not 21).</p> <p>Let me try to rephrase the solution to make it obvious. For n=2k divide S into k pairs and consider the action of the group Z/3Z on each pair described in the first paragraph. We get an action of (Z/3Z)<sup>k</sup> on subsets of S, and after removal of its only fixed point (the k-element set consisting of the second points from each pair) we get a bijection between subsets which have 0, 1 or 2 elements mod 3. So there are (2<sup>n</sup>-1)/3 sets with i mod 3 elements excluding the fixed point <em>and</em> to count that point one should add "+1" for i=k mod 3.</p> <p>And for n=2k+1 there are 2 fixed points&nbsp;&mdash; including or not the (2k+1)-th element of S&nbsp;&mdash; with k+1 and k elements respectively.</p>
linear-algebra
<p>In <a href="https://math.stackexchange.com/questions/1696686/is-linear-algebra-laying-the-foundation-for-something-important">a recent question</a>, it was discussed how LA is a foundation to other branches of mathematics, be they pure or applied. One answer argued that linear problems are <em>fully understood</em>, and hence a natural target to reduce pretty much anything to. </p> <p>Now, it's evident enough that such a linearisation, if possible, tends to make hard things easier. Find a Hilbert space hidden in your domain, obtain an orthonormal basis, and <em>bam</em>, any point/state can be described as a mere sequence of numbers, any mapping boils down to a matrix; we have some good theorems for existence of inverses / eigenvectors/exponential objects / etc..</p> <p>So, LA sure is <em>convenient</em>.</p> <p>OTOH, it seems unlikely that any nontrivial mathematical system could ever be said to be <em>thoroughly understood</em>. Can't we always find new questions within any such framework that haven't been answered yet? I'm not firm enough with G&ouml;del's incompleteness theorems to judge whether they are relevant here. The first incompleteness theorem says that discrete disciplines like number theory can't be both complete and consistent. Surely this is all the more true for e.g. topology.</p> <p>Is LA for some reason exempt from such arguments, or does it for some other reason deserve to be called <em>the</em> best understood branch of mathematics?</p>
<p>It's closer to true that all the questions in finite-dimensional linear algebra that can be asked in an introductory course can be answered in an introductory course. This is wildly far from true in most other areas. In number theory, algebraic topology, geometric topology, set theory, and theoretical computer science, for instance, here are some questions you could ask within a week of beginning the subject: how many primes are there separated by two? How many homotopically distinct maps are there between two given spaces? How can we tell apart two four dimensional manifolds? Are there sets in between a given set and its powerset in cardinality? Are various natural complexity classes actually distinct?</p> <p>None of these questions are very close to completely answered, only one is really anywhere close, in fact one is known to be unanswerable in general, and partial answers to each of these have earned massive accolades from the mathematical community. No such phenomena can be observed in finite dimensional linear algebra, where we can classify in a matter of a few lectures all possible objects, give explicit algorithms to determine when two examples are isomorphic, determine precisely the structure of the spaces of maps between two vector spaces, and understand in great detail how to represent morphisms in various ways amenable to computation. Thus linear algebra becomes both the archetypal example of a completely successful mathematical field, and a powerful tool when other mathematical fields can be reduced to it. </p> <p>This is an extended explanation of the claim that linear algebra is "thoroughly understood." That doesn't mean "totally understood," as you claim!</p>
<p>The hard parts of linear algebra have been given new and different names, such as representation theory, invariant theory, quantum mechanics, functional analysis, Markov chains, C*-algebras, numerical methods, commutative algebra, and K-theory. Those are full of mysteries and open problems.</p> <p>What is left over as the "linear algebra" taught to students is a much smaller subject that was mostly finished a hundred years ago. </p>
geometry
<p>In the Wikipedia article <a href="http://en.wikipedia.org/wiki/Circle#Area_enclosed" rel="noreferrer">http://en.wikipedia.org/wiki/Circle#Area_enclosed</a> it is stated that the circle is the closed curve which has the maximum area for a given arc length. First of all, I would like to see different proofs of this result. (If there are any elementary ones!)</p> <p>One interesting thing to wonder about this problem is: how does one come to propose this type of problem? Does anyone take all closed curves and calculate their areas to come to this conclusion? I don't think that's the right intuition.</p>
<p>Here is a physicist's answer:</p> <p>Imagine a rope enclosing a two-dimensional gas, with vacuum outside the rope. The gas will expand, pushing the rope to enclose a maximal area at equilibrium.</p> <p>When the system is at equilibrium, the tension in the rope must be constant, because if there were a tension gradient at some point, there would be a non-zero net force at that point in the direction of the rope, but at equilibrium the net force must be zero in all directions.</p> <p>The gas exerts a force outward on the rope, so tension must cancel this force. Take a small section of rope, so that it can be thought of as a part of some circle, called the osculating circle. The force on this rope segment due to pressure is <span class="math-container">$P l$</span>, with <span class="math-container">$P$</span> pressure and <span class="math-container">$l$</span> the length. The net force due to tension is <span class="math-container">$2 T \sin(l/2R)$</span>, with <span class="math-container">$T$</span> tension and <span class="math-container">$R$</span> the radius of the osculating circle.</p> <p>Because the pressure is the same everywhere, and the force from pressure must be canceled by the force from tension, the net tension force must be the same for any rope segment of the same length. That means the radius of the osculating circle is the same everywhere, so the rope must be a circle.</p> <p>For a simple experimental demonstration, we replace the gas with a soap film. A soap film will minimize its area, so if we put a loop of string inside a soap film, then break the film inside the string, the remaining film outside the string will pull the string into a circle.</p> <p><a href="https://i.sstatic.net/xXnWf.png" rel="noreferrer"><img src="https://i.sstatic.net/xXnWf.png" alt="soap film circle" /></a></p> <p>image credit: Carboni, Giorgio. 
&quot;Experiments on Surface Phenomena and Colloids&quot;, <a href="http://www.funsci.com/fun3_en/exper2/exper2.htm" rel="noreferrer">http://www.funsci.com/fun3_en/exper2/exper2.htm</a></p>
<p>As Qiaochu Yuan pointed out, this is a consequence of the isoperimetric inequality that relates the length $L$ and the area $A$ for any closed curve $C$:</p> <p>$$ 4\pi A \leq L^2 \ . $$</p> <p>Taking a circumference of radius $r$ such that $2\pi r = L$, you obtain</p> <p>$$ A \leq \frac{L^2}{4\pi} = \frac{4 \pi^2 r^2}{4\pi} = \pi r^2 \ . $$</p> <p>That is, the area $A$ enclosed by the curve $C$ is at most the area enclosed by the circumference of the same length.</p> <p><strong>As for the proof of the isoperimetric inequality</strong>, here is the one I learnt as an undergraduate, which is elementary and beautiful, I think.</p> <p>Go round your curve $C$ counterclockwise. For a plane vector field $(P,Q)$, <a href="http://en.wikipedia.org/wiki/Green%27s_theorem">Green's theorem</a> says</p> <p>$$ \oint_{\partial D}(Pdx + Qdy) = \int_D \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dxdy\ . $$</p> <p>Apply it to the vector field $(P,Q) = (-y,x)$, where $D$ is the region enclosed by your curve $C = \partial D$. You obtain</p> <p>$$ A = \frac{1}{2} \oint_{\partial D} (-ydx + xdy) \ . $$</p> <p>Now, parametrize $C= \partial D$ by arc length: </p> <p>$$ \gamma : [0,L] \longrightarrow \mathbb{R}^2 \ ,\qquad \gamma (s) = (x(s), y(s)) \ . $$</p> <p>Taking into account that</p> <p>$$ 0= xy \vert_0^L = \int_0^L x&#39;yds + \int_0^L xy&#39;ds \ , $$</p> <p>we get</p> <p>$$ A = \int_0^L xy&#39;ds = -\int_0^L x&#39;yds \ . $$</p> <p><strong>So enough for now with our curve $C$. Let's look for a nice circumference to compare with!</strong></p> <p>First of all, $[0,L]$ being compact, the function $x: [0,L] \longrightarrow \mathbb{R}$ will have a global maximum and a global minimum. Changing the origin of our parametrization if necessary, we may assume the minimum is attained at $s=0$. Let the maximum be attained at $s=s_0 \in [0,L]$. Let $q = \gamma (0)$ and $p = \gamma (s_0)$. 
(If there is more than one minimum or maximum, just choose one of each.)</p> <p>Since $x&#39;(0) = x&#39;(s_0) = 0$, we have vertical tangent lines at both points $p,q$ of our curve $C$. Draw a circumference between these parallel lines, tangent to both of them (a little away from $C$, to avoid making a mess). So the radius of this circumference will be $r = \frac{x(s_0)-x(0)}{2}$, half the distance between the two vertical tangent lines.</p> <p>Let's take the origin of coordinates at the center of this circumference. We parametrize it with the same $s$, the arc length of $C$:</p> <p>$$ \sigma (s) = (\overline{x}(s), \overline{y}(s)) \ , \quad s \in [0, L] \ . $$</p> <p>Of course, $\overline{x}(s)^2 + \overline{y}(s)^2 = r^2$ for all $s$. If we choose $\overline{x}(s) = x(s)$, this forces us to take $ \overline{y}(s) = \pm \sqrt{r^2 - \overline{x}(s)^2}$. So that $\sigma (s)$ also goes round our circumference counterclockwise, we choose the minus sign if $0\leq s \leq s_0$ and the plus sign if $s_0 \leq s \leq L$.</p> <p><strong>We are almost done, just a few computations left.</strong></p> <p>Let $\overline{A}$ denote the area enclosed by our circumference. So, we have</p> <p>$$ A = \int_0^L xy&#39;ds = \int_0^L \overline{x}y&#39;ds \qquad \text{and} \qquad \overline{A}= \pi r^2 = -\int_0^L\overline{y}\overline{x}&#39;ds = -\int_0^L\overline{y} x&#39;ds \ . $$</p> <p>Hence,</p> <p>$$ \begin{align} A + \pi r^2 &amp;= A + \overline{A} = \int_0^L (\overline{x}y&#39; - \overline{y}x&#39;)ds \\\ &amp;\leq \int_0^L \vert \overline{x}y&#39; - \overline{y}x&#39;\vert ds \\\ &amp;= \int_0^L \vert (\overline{x}, \overline{y})\cdot (y&#39;, -x&#39;)\vert ds \\\ &amp;\leq \int_0^L \sqrt{\overline{x}^2 + \overline{y}^2} \cdot \sqrt{(y&#39;)^2+ (-x&#39;)^2}ds \\\ &amp;= \int_0^L rds = rL \ . 
\end{align} $$</p> <p>The last inequality is Cauchy-Schwarz's one and the last but one equality is due to the fact that $s$ is the arc-length of $C$.</p> <p>Summing up:</p> <p>$$ A + \pi r^2 \leq rL \ . $$</p> <p>Now, since the geometric mean is always smaller than the arithmetic one,</p> <p>$$ \sqrt{A\pi r^2} \leq \frac{A + \pi r^2}{2} \leq \frac{rL}{2} \ . $$</p> <p>Thus</p> <p>$$ A \pi r^2 \leq \frac{r^2L^2}{4} \qquad \Longrightarrow \qquad 4\pi A \leq L^2 \ . $$</p>
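<p>The inequality $4\pi A \leq L^2$ is also easy to sanity-check numerically. The sketch below (my own, not part of the proof; the function name is mine) discretizes a few ellipses, approximates $L$ by summed chord lengths and $A$ by the shoelace formula, and checks that equality essentially holds only for the circle:</p>

```python
from math import cos, sin, pi, hypot

def length_and_area(a, b, m=100_000):
    # ellipse x = a cos t, y = b sin t, discretized into m chords
    pts = [(a * cos(2 * pi * k / m), b * sin(2 * pi * k / m)) for k in range(m + 1)]
    # polygonal arc length: sum of chord lengths
    L = sum(hypot(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
    # shoelace formula for the enclosed area (Green's theorem in discrete form)
    A = 0.5 * abs(sum(x1 * y2 - x2 * y1 for (x1, y1), (x2, y2) in zip(pts, pts[1:])))
    return L, A

for a, b in [(1.0, 1.0), (2.0, 1.0), (5.0, 0.5)]:
    L, A = length_and_area(a, b)
    assert 4 * pi * A <= L * L          # the isoperimetric inequality

# near-equality (up to discretization error) exactly for the circle a = b
L, A = length_and_area(1.0, 1.0)
assert abs(4 * pi * A - L * L) < 1e-6
```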
differentiation
<p>What would be the derivative of a square root? For example, if I have $2 \sqrt{x}$ or $\sqrt{x}$.</p> <p>I'm unsure how to find the derivatives of these, especially when they appear in something like implicit differentiation.</p>
<p>Let $f(x) = \sqrt{x}$, then $$f'(x) = \lim_{h \to 0} \dfrac{\sqrt{x+h} - \sqrt{x}}{h} = \lim_{h \to 0} \dfrac{\sqrt{x+h} - \sqrt{x}}{h} \times \dfrac{\sqrt{x+h} + \sqrt{x}}{\sqrt{x+h} + \sqrt{x}} = \lim_{h \to 0} \dfrac{x+h-x}{h (\sqrt{x+h} + \sqrt{x})}\\ = \lim_{h \to 0} \dfrac{h}{h (\sqrt{x+h} + \sqrt{x})} = \lim_{h \to 0} \dfrac1{(\sqrt{x+h} + \sqrt{x})} = \dfrac1{2\sqrt{x}}$$ In general, you can use the fact that if $f(x) = x^{t}$, then $f'(x) = tx^{t-1}$. </p> <p>Taking $t=1/2$ gives us that $f'(x) = \dfrac12 x^{-1/2}$, which is the same as we obtained above.</p> <p>Also, recall that $\dfrac{d (c f(x))}{dx} = c \dfrac{df(x)}{dx}$. Hence, you can pull out the constant and then differentiate.</p>
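<p>A quick numerical sanity check of the limit above (my own sketch; the helper names are mine). The rationalized form $1/(\sqrt{x+h}+\sqrt{x})$ from the derivation is also numerically better behaved than the raw quotient:</p>

```python
from math import sqrt, isclose

def dq(x, h):
    # raw difference quotient (sqrt(x+h) - sqrt(x)) / h
    return (sqrt(x + h) - sqrt(x)) / h

def dq_conjugate(x, h):
    # the same quantity after multiplying by the conjugate, as in the derivation
    return 1.0 / (sqrt(x + h) + sqrt(x))

for x in (0.25, 1.0, 4.0, 9.0):
    exact = 1.0 / (2.0 * sqrt(x))       # the claimed derivative 1/(2*sqrt(x))
    assert isclose(dq(x, 1e-6), exact, rel_tol=1e-4)
    assert isclose(dq_conjugate(x, 1e-6), exact, rel_tol=1e-5)
```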
<p>$\sqrt x=x^{1/2}$, so you just use the power rule: the derivative is $\frac12x^{-1/2}$.</p>
number-theory
<p>Background: Let $n$ be an integer and let $p$ be a prime. If $p^{e} || n$, we write $v_{p}(n) = e$. A natural number $n$ is a sum of two integer squares if and only if for each prime $p \equiv 3 \pmod 4$, $v_{p}(n)$ is even. Every natural number is a sum of four squares. A natural number $n$ is a sum of three squares if and only if it is not of the form $4^{k}u$ where $u \equiv 7 \pmod 8$. </p> <p>I would like to know why it is harder to prove the above result for sums of three squares as opposed to sums of two squares or four squares.</p> <p>I've heard somewhere that one way to see this involves modular forms... but I don't remember any details. I would also like to know if there is a formula for the number of ways of representing a natural number n as a sum of three squares (or more generally, $m$ squares) that is similar in spirit to the formulas for the number of ways of representing a natural number as the sum of two squares and four squares. </p>
<p>The modular forms explanation is basically due to the fact that $3$ is odd and so the generating function for representations of sums of three squares is a modular form of half-integer weight.</p> <p>In general if $r_k(n)$ is the number of representations of $n$ as a sum of $k$ squares then $$\sum_{n=0}^\infty r_k(n)q^n=\theta(z)^k$$ where $q=\exp(\pi i z)$ and $$\theta(z)=1+2\sum_{n=1}^\infty q^{n^2}.$$ Then $f_k(z)=\theta(z)^k$ is a modular form of weight $k/2$ for the group $\Gamma_0(4)$. This means that $$f_k((az+b)/(cz+d))=(cz+d)^{k/2}f_k(z)$$ whenever the matrix $\begin{pmatrix}a&amp;b\\\\c&amp;d\end{pmatrix}$ lies in $\Gamma_0(4)$, that is $a$, $b$, $c$ and $d$ are integers, $4\mid c$ and $ad-bc=1$.</p> <p>This definition is easy to understand when $k$ is even, but for odd $k$ one needs to take the correct branch of $(cz+d)^{k/2}$, and this is awkward. The space of modular forms of weight $k/2$ is finite-dimensional for all $k$, and is one-dimensional for small enough $k$. For these small $k$ the space is spanned by an "Eisenstein series". Computing the Eisenstein series isn't too hard for even $k$, but is much nastier for odd $k$ where again square roots need to be dealt with. See Koblitz's book on modular forms and elliptic functions for the calculation for $k\ge5$ odd. The calculation for $k=3$ is even nastier as the Eisenstein series does not converge absolutely. In fact the cases where $k$ is divisible by $4$ are even easier, as even weight modular forms behave nicer.</p> <p>For large $k$, Eisenstein series are no longer enough, one needs also "cusp forms". While fascinating, cusp forms have coefficients which aren't given by nice formulae unlike Eisenstein series.</p> <p>Of course there is a formula for $r_3(n)$, due to Gauss in his <em>Disquisitiones Arithmeticae</em>. It involves class numbers of quadratic fields (or to Gauss numbers of classes of integral quadratic forms).</p>
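<p>The characterization stated in the question (Legendre's three-square theorem: $n$ is a sum of three squares iff it is not of the form $4^k(8m+7)$) is easy to confirm by brute force for small $n$. A sketch of mine, with my own helper names:</p>

```python
def is_sum_of_three_squares(n):
    # brute force over a <= b <= c with a^2 + b^2 + c^2 = n
    r = int(n ** 0.5) + 1
    return any(a * a + b * b + c * c == n
               for a in range(r) for b in range(a, r) for c in range(b, r))

def excluded(n):
    # is n of the form 4^k * (8m + 7)?
    while n % 4 == 0:
        n //= 4
    return n % 8 == 7

for n in range(1, 400):
    assert is_sum_of_three_squares(n) == (not excluded(n))
```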
<p>An explanation from a different direction than you may be expecting: one reason why the sum-of-three-squares problem is so much harder is that there's no normed division algebra of dimension three. A key element of both the two-squares and four-squares proof is expressing the product of two numbers in the relevant form as another number in that form; these product formulas 'come from' the formulas for multiplying complex and quaternionic numbers respectively. That is, taking norms of $(a+bi)(c+di) = (ac-bd) + (ad+bc)i$ gives rise to the formula $(a^2+b^2)(c^2+d^2) = (ac-bd)^2+(ad+bc)^2$, and the equivalent formula for quaternions gives Euler's four-square identity. John Baez wrote a particularly interesting article on why you don't keep getting division algebras as you keep doubling the dimension; you should be able to find it on his homepage if you're curious.</p>
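<p>The product formula mentioned above is a polynomial identity (the Brahmagupta&ndash;Fibonacci identity), so it can be spot-checked directly; here is a quick randomized check (my own sketch):</p>

```python
import random

random.seed(0)
for _ in range(1000):
    a, b, c, d = (random.randint(-100, 100) for _ in range(4))
    # taking norms of (a+bi)(c+di) = (ac-bd) + (ad+bc)i gives
    # (a^2+b^2)(c^2+d^2) = (ac-bd)^2 + (ad+bc)^2
    assert (a*a + b*b) * (c*c + d*d) == (a*c - b*d)**2 + (a*d + b*c)**2
```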
probability
<p>I've studied mathematics and statistics at undergraduate level and am pretty happy with the main concepts. However, I've come across measure theory several times, and I know <a href="https://math.stackexchange.com/a/118227/77151">it is a basis for probability theory</a>, and, unsurprisingly, looking at a basic introduction such as this <a href="https://www.ee.washington.edu/techsite/papers/documents/UWEETR-2006-0008.pdf" rel="noreferrer">Measure Theory Tutorial (pdf)</a>, I see there are concepts such as events, sample spaces, and ways of getting from them to real numbers, that seem familiar.</p> <p>So measure theory seems like an area of pure mathematics that I probably <em>ought</em> to study (as discussed very well <a href="https://math.stackexchange.com/questions/502132/what-is-the-motivation-of-measure-theory-when-there-is-probability-theory">here</a>), but I have a lot of other areas I'd like to look at. For example, I'm studying and using calculus and Taylor series at a more advanced level and I've never studied real analysis properly -- and I can tell! In the future I'd like to study the theory of differential equations and integral transforms, and to do that I think I will need to study complex analysis. But I don't have the same kind of "I don't know what I'm doing" feeling when I do probability and statistics as when I look at calculus, series, or integral transforms, so those seem a lot more urgent to me from a foundational perspective. </p> <p>So my real question is, <strong>are there some applications relating to probability and statistics that I can't tackle without measure theory</strong>, or for that matter applications in other areas? Or is it more: I'm glad those measure theory guys have got the foundations worked out, I can trust they did a good job and get on with using what's built on top?</p>
<p>First, there are things that are much easier given the abstract formulation of measure theory. For example, let <span class="math-container">$X,Y$</span> be independent random variables and let <span class="math-container">$f:\mathbb{R}\to\mathbb{R}$</span> be a continuous function. Are <span class="math-container">$f\circ X$</span> and <span class="math-container">$f\circ Y$</span> independent random variables? The answer is utterly trivial in the measure-theoretic formulation of probability, but very hard to express in terms of cumulative distribution functions. Similarly, convergence in distribution is really hard to work with in terms of cumulative distribution functions but easily expressed with measure theory.</p> <p>Then there are things that one can consume without much understanding, but that require measure theory to actually understand and be comfortable with. It may be easy to get a good intuition for sequences of coin flips, but what about continuous-time stochastic processes? How irregular can sample paths be?</p> <p>Then there are powerful methods that actually require measure theory. One can get a lot from a little measure theory. The <a href="http://en.wikipedia.org/wiki/Borel%E2%80%93Cantelli_lemma" rel="noreferrer">Borel-Cantelli lemmas</a> or the <a href="http://en.wikipedia.org/wiki/Kolmogorov%27s_zero%E2%80%93one_law" rel="noreferrer">Kolmogorov 0-1-law</a> are not hard to prove but hard to even state without measure theory. Yet, they are immensely useful. Some results in probability theory require very deep measure theory. The two-volume book <em>Probability With a View Towards Statistics</em> by Hoffmann-J&oslash;rgensen contains a lot of very advanced measure theory. </p> <p>All that being said, there are a lot of statisticians who live happily avoiding any measure theory. There are, however, no real analysts who can really do without measure theory.</p>
<p>The usual answer is that measure theory not only provides the right language for rigorous statements, but also allows one to achieve progress not possible without it. The only place I found a different point of view is a remarkable book by Edwin Jaynes, <a href="http://omega.albany.edu:8008/JaynesBook.html">Probability Theory: The Logic of Science</a>, which is a real pleasure to read. Here is an extract from Appendix B.3: Willy Feller on measure theory:</p> <blockquote> <p>In contrast to our policy, many expositions of probability theory begin at the outset to try to assign probabilities on infinite sets, both countable or uncountable. Those who use measure theory are, in effect, supposing the passage to an infinite set already accomplished before introducing probabilities. For example, Feller advocates this policy and uses it throughout his second volume (Feller, 1966).</p> <p>In discussing this issue, Feller (1966) notes that specialists in various applications sometimes ‘deny the need for measure theory because they are unacquainted with problems of other types and with situations where vague reasoning did lead to wrong results’. If Feller knew of any case where such a thing has happened, this would surely have been the place to cite it – yet he does not. Therefore we remain, just as he says, unacquainted with instances where wrong results could be attributed to failure to use measure theory.</p> <p>But, as noted particularly in Chapter 15, there are many documentable cases where careless use of infinite sets has led to absurdities. We know of no case where our ‘cautious approach’ policy leads to inconsistency or error; or fails to yield a result that is reasonable.</p> <p>We do not use the notation of measure theory because it presupposes the passage to an infinite limit already carried out at the beginning of a derivation – in defiance of the advice of Gauss, quoted at the start of Chapter 15. 
But in our calculations we often pass to an infinite limit at the end of a derivation; then we are in effect using ‘Lebesgue measure’ directly in its original meaning. We think that failure to use current measure theory notation is not ‘vague reasoning’; quite the opposite. It is a matter of doing things in the proper order.</p> </blockquote> <p>You should finish reading the whole text in the reference I gave above. </p>
linear-algebra
<p>In my opinion the two are almost the same. However, there should be some differences: for example, any two elements can be multiplied in a field, but this is not allowed in a vector space, where only scalar multiplication is allowed and the scalars come from the field.</p> <p>Could anyone give me at least one example where a field and a vector space are the same? Every field is a vector space, but not every vector space is a field. I need an example of a vector space that is also a field.</p> <p>Thanks in advance. (I'm not from a mathematical background.)</p>
<p>It is true that vector spaces and fields both have operations we often call multiplication, but these operations are fundamentally different, and, like you say, we sometimes call the operation on vector spaces <em>scalar multiplication</em> for emphasis.</p> <p>The operations on a field <span class="math-container">$\mathbb{F}$</span> are</p> <ul> <li><span class="math-container">$+$</span>: <span class="math-container">$\mathbb{F} \times \mathbb{F} \to \mathbb{F}$</span></li> <li><span class="math-container">$\times$</span>: <span class="math-container">$\mathbb{F} \times \mathbb{F} \to \mathbb{F}$</span></li> </ul> <p>The operations on a vector space <span class="math-container">$\mathbb{V}$</span> over a field <span class="math-container">$\mathbb{F}$</span> are</p> <ul> <li><span class="math-container">$+$</span>: <span class="math-container">$\mathbb{V} \times \mathbb{V} \to \mathbb{V}$</span></li> <li><span class="math-container">$\,\cdot\,$</span>: <span class="math-container">$\mathbb{F} \times \mathbb{V} \to \mathbb{V}$</span></li> </ul> <p>One of the field axioms says that any nonzero element <span class="math-container">$c \in \mathbb{F}$</span> has a multiplicative inverse, namely an element <span class="math-container">$c^{-1} \in \mathbb{F}$</span> such that <span class="math-container">$c \times c^{-1} = 1 = c^{-1} \times c$</span>. 
There is no corresponding property among the vector space axioms.</p> <p>It's an important example---and possibly the source of the confusion between these objects---that any field <span class="math-container">$\mathbb{F}$</span> is a vector space over itself, and in this special case the operations <span class="math-container">$\cdot$</span> and <span class="math-container">$\times$</span> coincide.</p> <p>On the other hand, for any field <span class="math-container">$\mathbb{F}$</span>, the Cartesian product <span class="math-container">$\mathbb{F}^n := \mathbb{F} \times \cdots \times \mathbb{F}$</span> has a natural vector space structure over <span class="math-container">$\mathbb{F}$</span>, but for <span class="math-container">$n &gt; 1$</span> it does not in general have a <em>natural</em> multiplication rule satisfying the field axioms, and hence does not have a natural field structure.</p> <p><strong>Remark</strong> As @hardmath points out in the below comments, one can often realize a finite-dimensional vector space <span class="math-container">$\mathbb{F}^n$</span> over a field <span class="math-container">$\mathbb{F}$</span> as a field in its own right <em>if</em> one makes additional choices. If <span class="math-container">$f$</span> is a polynomial irreducible over <span class="math-container">$\mathbb{F}$</span>, say with <span class="math-container">$n := \deg f$</span>, then we can form the set <span class="math-container">$$\mathbb{F}[x] / \langle f(x) \rangle$$</span> over <span class="math-container">$\mathbb{F}$</span>: This just means that we consider the vector space of polynomials with coefficients in <span class="math-container">$\mathbb{F}$</span> and declare two polynomials to be equivalent if their difference is some multiple of <span class="math-container">$f$</span>. 
Now, polynomial addition and multiplication determine operations <span class="math-container">$+$</span> and <span class="math-container">$\times$</span> on this set, and it turns out that because <span class="math-container">$f$</span> is irreducible, these operations give the set the structure of a field. If we denote by <span class="math-container">$\alpha$</span> the image of <span class="math-container">$x$</span> under the map <span class="math-container">$\mathbb{F}[x] \to \mathbb{F}[x] / \langle f(x) \rangle$</span> (since we identify <span class="math-container">$f$</span> with <span class="math-container">$0$</span>, we can think of <span class="math-container">$\alpha$</span> as a root of <span class="math-container">$f$</span>), then by construction <span class="math-container">$\{1, \alpha, \alpha^2, \ldots, \alpha^{n - 1}\}$</span> is a basis of (the underlying vector space of) <span class="math-container">$\mathbb{F}[x] / \langle f \rangle$</span>; in particular, we can identify the span of <span class="math-container">$1$</span> with <span class="math-container">$\Bbb F$</span>, which we may hence regard as a subfield of <span class="math-container">$\mathbb{F}[x] / \langle f(x) \rangle$</span>; we thus call the latter a <em>field extension</em> of <span class="math-container">$\Bbb F$</span>. 
In particular, this basis defines a vector space isomorphism <span class="math-container">$$\mathbb{F}^n \to \mathbb{F}[x] / \langle f(x) \rangle, \qquad (p_0, \ldots, p_{n - 1}) \mapsto p_0 + p_1 \alpha + \ldots + p_{n - 1} \alpha^{n - 1}.$$</span> Since <span class="math-container">$\alpha$</span> depends on <span class="math-container">$f$</span>, this isomorphism <em>does</em> depend on a choice of irreducible polynomial <span class="math-container">$f$</span> of degree <span class="math-container">$n$</span>, so the field structure defined on <span class="math-container">$\mathbb{F}^n$</span> by declaring the vector space isomorphism to be a field isomorphism is not natural.</p> <p><strong>Example</strong> Taking <span class="math-container">$\Bbb F := \mathbb{R}$</span> and <span class="math-container">$f(x) := x^2 + 1 \in \mathbb{R}[x]$</span> gives a field <span class="math-container">$$\mathbb{C} := \mathbb{R}[x] / \langle x^2 + 1 \rangle.$$</span> In this case, the image of <span class="math-container">$x$</span> under the canonical quotient map <span class="math-container">$\mathbb{R}[x] \to \mathbb{R}[x] / \langle x^2 + 1 \rangle$</span> is usually denoted <span class="math-container">$i$</span>, and this field is exactly the complex numbers, which we have realized as a (real) vector space of dimension <span class="math-container">$2$</span> over <span class="math-container">$\mathbb{R}$</span> with basis <span class="math-container">$\{1, i\}$</span>.</p>
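<p>The construction $\mathbb{R}[x]/\langle x^2+1\rangle$ can be made concrete in a few lines of code. In this sketch (my own; the function names are mine) an element $p_0 + p_1\alpha$ is stored as the pair $(p_0, p_1)$, multiplication reduces by $\alpha^2 = -1$, and every nonzero element has an inverse, which is exactly the field axiom that a general vector space lacks:</p>

```python
from math import isclose

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def mul(u, v):
    # (a + b*alpha)(c + d*alpha) = ac + (ad + bc)*alpha + bd*alpha^2, with alpha^2 = -1
    a, b = u
    c, d = v
    return (a * c - b * d, a * d + b * c)

def inv(u):
    # (a + b*alpha)^(-1) = (a - b*alpha) / (a^2 + b^2); the norm is nonzero for u != 0
    a, b = u
    n = a * a + b * b
    return (a / n, -b / n)

# (1 + 2*alpha)(3 - alpha) = 5 + 5*alpha, matching multiplication in C
assert mul((1, 2), (3, -1)) == (5, 5)

# every nonzero element is invertible: u * u^(-1) = 1
u = (2.0, -3.0)
p = mul(u, inv(u))
assert isclose(p[0], 1.0) and isclose(p[1], 0.0)
```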
<p>A <em>field</em> is an algebraic structure allowing the four basic operations $+$, $-$, $\cdot$, and $:\,$, such that the usual rules of algebra hold, e.g., $(x+y)\cdot z=(x\cdot z) +(y\cdot z)$, etcetera, and division by $0$ is forbidden. The elements of a given field should be considered as "numbers". The systems ${\mathbb Q}$, ${\mathbb R}$, and ${\mathbb C}$ are fields, but there are many others, e.g., the field ${\mathbb F}_2$ consisting only of the two elements $0$, $1$ and satisfying (apart from the obvious relations) $1+1=0$.</p> <p>A <em>vector space</em> $X$ is in the first place an "additive structure" satisfying the rules we associate with such structures, e.g., $a+({-a})=0$, etc. In addition any vector space has associated with it a certain field $F$, the <em>field of scalars</em> for that vector space. The elements $x$, $y\in X$ cannot only be added and subtracted, but they can be as well <em>scaled</em> by "numbers" $\lambda\in F$. The vector $x$ scaled by the factor $\lambda$ is denoted by $\lambda x$. This scaling satisfies the laws we are accustomed to from the scaling of vectors in ${\mathbb R}^3$: $$\lambda(x+y)=\lambda x+\lambda y,\qquad (\lambda+\mu)x=\lambda x+\mu x\ .$$</p> <p>Asking "What is the difference between a vector space and a field" is similar to asking "What is the difference between tension and charge" in electrodynamics. In both cases the simple answer would be: "They are different notions making sense in the same discipline".</p>
matrices
<p>What is an <em>intuitive</em> meaning of the null space of a matrix? Why is it useful?</p> <p>I'm not looking for textbook definitions. My textbook gives me the definition, but I just don't "get" it.</p> <p>E.g.: I think of the <em>rank</em> $r$ of a matrix as the minimum number of dimensions that a linear combination of its columns would have; it tells me that, if I combined the vectors in its columns in some order, I'd get a set of coordinates for an $r$-dimensional space, where $r$ is minimum (please correct me if I'm wrong). So that means I can relate <em>rank</em> (and also dimension) to actual coordinate systems, and so it makes sense to me. But I can't think of any physical meaning for a null space... could someone explain what its meaning would be, for example, in a coordinate system?</p> <p>Thanks!</p>
<p>If $A$ is your matrix, the null-space is, simply put, the set of all vectors $v$ such that $A \cdot v = 0$. It's good to think of the matrix as a linear transformation; if you let $h(v) = A \cdot v$, then the null-space is again the set of all vectors that are sent to the zero vector by $h$. Think of this as the set of vectors that <em>lose their identity</em> as $h$ is applied to them.</p> <p>Note that the null-space is equivalently the set of solutions to the homogeneous equation $A \cdot v = 0$.</p> <p>Nullity is the complement to the rank of a matrix. They are both really important; here is a <a href="https://math.stackexchange.com/questions/21100/importance-of-rank-of-a-matrix">similar question</a> on the rank of a matrix; you can find some nice answers there.</p>
<p>This is <a href="https://math.stackexchange.com/a/987657">an answer</a> I got from <a href="https://math.stackexchange.com/q/987146">my own question</a>, it's pretty awesome!</p> <blockquote> <p>Let's suppose that the matrix A represents a physical system. As an example, let's assume our system is a rocket, and A is a matrix representing the directions we can go based on our thrusters. So what do the null space and the column space represent?</p> <p>Well let's suppose we have a direction that we're interested in. Is it in our column space? If so, then we can move in that direction. The column space is the set of directions that we can achieve based on our thrusters. Let's suppose that we have three thrusters equally spaced around our rocket. If they're all perfectly functional then we can move in any direction. In this case our column space is the entire range. But what happens when a thruster breaks? Now we've only got two thrusters. Our linear system will have changed (the matrix A will be different), and our column space will be reduced.</p> <p>What's the null space? The null space are the set of thruster intructions that completely waste fuel. They're the set of instructions where our thrusters will thrust, but the direction will not be changed at all.</p> <p>Another example: Perhaps A can represent a rate of return on investments. The range are all the rates of return that are achievable. The null space are all the investments that can be made that wouldn't change the rate of return at all.</p> <p>Another example: room illumination. The range of A represents the area of the room that can be illuminated. The null space of A represents the power we can apply to lamps that don't change the illumination in the room at all.</p> </blockquote> <p>-- <a href="https://math.stackexchange.com/a/987657">NicNic8</a></p>
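<p>A small numerical illustration of the rocket picture (my own sketch, assuming NumPy is available): for a rank-1 matrix every "thruster instruction" along the null-space direction is wasted, and rank plus nullity accounts for all the input dimensions:</p>

```python
import numpy as np

# every column is a multiple of (1, 2): the achievable directions (column
# space) form a line, so a whole line of inputs is "wasted" (sent to zero)
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

v = np.array([2.0, -1.0])        # a null-space vector: 1*2 + 2*(-1) = 0
assert np.allclose(A @ v, 0.0)

# rank-nullity: rank + dim(null space) = number of columns = 2
assert np.linalg.matrix_rank(A) == 1
```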
logic
<p>In a "math structures" class at the community college I'm attending (uses the book Discrete Math by Epp, and is basically a discrete math "light" edition), we've been covering some basic logic.</p> <p>I've been reading some of the logic questions on here to get used to notation, etc. However, when I came across the question <a href="https://math.stackexchange.com/questions/279015/visualizing-concepts-in-mathematical-logic">Visualizing Concepts in Mathematical Logic</a>, I didn't understand what the $\vdash$ symbol means.</p> <p>It's not in Discrete Math by Epp, nor is it in my mom's old logic book from when she went to college.</p> <p>Wikipedia's <a href="http://en.wikipedia.org/wiki/List_of_mathematical_symbols" rel="nofollow noreferrer">Math Symbols</a> page says it means "can be derived from" when used in a logic context. However, that doesn't make any sense in the above question, as there is nothing on the left of the $\vdash$.</p> <p><strong>So, what does $\vdash$ mean, especially in the context of the question linked above?</strong></p>
<p>Let $S$ be a set of (logical) formulae and $\psi$ be a formula. Then $S \vdash \psi$ means that $\psi$ can be derived from the formulae in $S$. Intuitively, $S$ is a list of assumptions, and $S \vdash \psi$ if we can prove $\psi$ from the assumptions in $S$.</p> <p>$\vdash \psi$ is shorthand for $\varnothing \vdash \psi$. That is, $\psi$ can be derived with no assumptions, so that, in some sense, $\psi$ is 'true'.</p> <hr> <p>More precisely, systems of logic consist of certain axioms and rules of inference (one such rule being "from $\phi$ and $\phi \to \psi$ we can infer $\psi$"). What it means for $\psi$ to be 'derivable' from a set $S$ of formulae is that in a finite number of steps you can work with (i) the formulae in $S$, (ii) the axioms of your logical system, and (iii) the rules of inference, and end up with $\psi$.</p> <p>In particular, if $\vdash \psi$ then $\psi$ can be derived solely from the axioms by using the rules of inference in your logical system.</p>
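<p>As a concrete illustration (my own addition, not part of the original answer): in a Hilbert-style system whose only axiom schemes are $K\colon \phi \to (\chi \to \phi)$ and $S\colon (\phi \to (\chi \to \rho)) \to ((\phi \to \chi) \to (\phi \to \rho))$, with modus ponens as the sole rule of inference, the following five steps witness $\vdash \psi \to \psi$:</p> <ol> <li>$\psi \to ((\psi \to \psi) \to \psi)$ &nbsp;[instance of $K$]</li> <li>$\big(\psi \to ((\psi \to \psi) \to \psi)\big) \to \big((\psi \to (\psi \to \psi)) \to (\psi \to \psi)\big)$ &nbsp;[instance of $S$]</li> <li>$(\psi \to (\psi \to \psi)) \to (\psi \to \psi)$ &nbsp;[modus ponens, 1 and 2]</li> <li>$\psi \to (\psi \to \psi)$ &nbsp;[instance of $K$]</li> <li>$\psi \to \psi$ &nbsp;[modus ponens, 4 and 3]</li> </ol> <p>No assumptions are used anywhere, which is exactly what the empty left-hand side of $\vdash \psi \to \psi$ records.</p>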
<p>⊢ means "can be derived from" or "proves", and denotes syntactic entailment. For example, let G be a set of sentences in logic, and A be any sentence in logic. G ⊢ A (read: G proves A) iff A can be derived using only the sentences in G as assumptions. Thus, if for a certain A we have ⊢ A, then A can be derived without any open assumptions.</p> <p>Note that ⊢ is different than ⊨, which stands for semantic entailment.</p>
probability
<p>I am going to give a presentation about the <em>indicator functions</em>, and I am looking for some interesting examples to include. The examples can even be overkill solutions, since I am mainly interested in demonstrating creative ways of using them.</p> <p>I would be grateful if you share your examples. The diversity of answers is appreciated.</p> <p>To give you an idea, here are my examples. Most of my examples are in probability and combinatorics, so examples from other fields would be even better.</p> <ol> <li><p>Calculating the expected value of a random variable using linearity of expectations. Most famously, the number of fixed points in a random permutation.</p> </li> <li><p>Showing how <span class="math-container">$|A \Delta B| = |A|+|B|-2|A \cap B|$</span> and <span class="math-container">$(A-B)^2 = A^2+B^2-2AB$</span> are related.</p> </li> <li><p>An overkill proof for <span class="math-container">$\sum \deg(v) = 2|E|$</span>.</p> </li> </ol>
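<p>The first example in the list can be verified by brute force. A short sketch (my own illustration, enumerating all of $S_n$ exactly rather than sampling): writing the number of fixed points as a sum of indicators $X_i = 1_{\{\sigma(i) = i\}}$, linearity of expectation gives $E[\sum_i X_i] = n \cdot \frac{1}{n} = 1$ for every $n$, and exhaustive enumeration agrees:</p>

```python
from itertools import permutations

def avg_fixed_points(n):
    """Exact expected number of fixed points of a uniformly random permutation of n items."""
    perms = list(permutations(range(n)))
    # The number of fixed points of p is the sum of indicators 1{p(i) == i}.
    total = sum(sum(1 for i, v in enumerate(p) if v == i) for p in perms)
    return total / len(perms)

# Linearity of expectation predicts exactly 1, independent of n.
assert avg_fixed_points(4) == 1.0
assert avg_fixed_points(5) == 1.0
```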
<p>Whether it's overkill is open to debate, but I feel that the <a href="https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle" rel="noreferrer">inclusion-exclusion principle</a> is best seen through the prism of indicator functions.</p> <p>Basically, the classical formula is just what you get numerically from the (clear) identity: <span class="math-container">$$ 1 - 1_{\bigcup_{i=1}^n A_i} = 1_{\bigcap_{i=1}^n \overline A_i} = \prod_{i=1}^n (1-1_{A_i}) = \sum_{J \subseteq [\![ 1,n ]\!]} (-1)^{|J|} 1_{\bigcap_{j\in J} A_j}.$$</span></p>
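<p>A quick numeric sanity check of this identity (the choice of sets, multiples of 2, 3, and 5 below 30, is my own example): verify the pointwise identity $1 - 1_{\bigcup A_i} = \prod_i (1 - 1_{A_i})$, then sum over the universe to recover the usual inclusion–exclusion count:</p>

```python
from itertools import combinations

universe = range(30)
sets = [set(range(0, 30, k)) for k in (2, 3, 5)]  # multiples of 2, 3, 5 below 30

def ind(S, x):
    """Indicator function 1_S(x)."""
    return 1 if x in S else 0

# Pointwise identity: 1 - 1_{union} equals the product of (1 - 1_{A_i}).
union = set.union(*sets)
for x in universe:
    rhs = 1
    for S in sets:
        rhs *= 1 - ind(S, x)
    assert 1 - ind(union, x) == rhs

# Summing the expanded product over x gives the classical formula.
direct = len(union)
ie = sum((-1) ** (r + 1) * len(set.intersection(*J))
         for r in range(1, len(sets) + 1)
         for J in combinations(sets, r))
assert direct == ie == 22
```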
<p>Indicator functions are often very useful in conjunction with Fubini’s theorem.</p> <p>Suppose you want to show: <span class="math-container">$$\newcommand\dif{\mathop{}\!\mathrm{d}} \int_Y \int_{X_y} f(x, y) \dif x \dif y = \int_X \int_{Y_x} f(x,y) \dif y \dif x$$</span> where the two subsets <span class="math-container">$X_y \subseteq X$</span> and <span class="math-container">$Y_x \subseteq Y$</span> describe the same relation <span class="math-container">$x \in X_y \iff y \in Y_x$</span>.</p> <p>Because the inner integral’s domain depends on the outer variable, you cannot use Fubini right away to swap the two integrals directly.</p> <p>But you can do it if you use an indicator function to describe the set <span class="math-container">$$Z = \left\{ (x,y) \in X \times Y \mid x \in X_y \right\} = \left\{ (x,y) \in X \times Y \mid y \in Y_x \right\}.$$</span></p> <p>Finally: <span class="math-container">\begin{align*} \int_Y \int_{X_y} f(x, y) \dif x \dif y &amp; = \int_Y \int_X 1_Z(x,y) f(x,y) \dif x \dif y \\ &amp; = \int_X \int_Y 1_Z(x,y) f(x,y) \dif y \dif x \\ &amp; = \int_X \int_{Y_x} f(x,y) \dif y \dif x. \end{align*}</span></p>
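<p>The same bookkeeping works for discrete double sums, where Fubini is just reordering finite summation. A small sketch (the region $Z = \{(x,y) : x \le y\}$ and the integrand $f(x,y) = xy + 1$ are my own arbitrary choices): the indicator $1_Z$ turns each variable-range inner sum into a full-range sum, after which the two sums swap freely:</p>

```python
# Region Z = {(x, y) : x <= y} inside X x Y, with an arbitrary integrand f.
X = range(5)
Y = range(5)
f = lambda x, y: x * y + 1
ind_Z = lambda x, y: 1 if x <= y else 0  # indicator function of the region Z

# sum_y sum_{x <= y} f(x, y)  ==  sum_x sum_{y >= x} f(x, y):
# both sides become full-range double sums weighted by the indicator.
lhs = sum(sum(ind_Z(x, y) * f(x, y) for x in X) for y in Y)
rhs = sum(sum(ind_Z(x, y) * f(x, y) for y in Y) for x in X)
assert lhs == rhs == 80
```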