matrices
<p>Let $A$ be a symmetric $n\times n$ matrix.</p> <p>I found <a href="https://ece.uwaterloo.ca/~dwharder/NumericalAnalysis/04LinearAlgebra/posdef/" rel="noreferrer">a method on the web to check if $A$ is <strong>positive definite</strong></a>:</p> <blockquote> <p>$A$ is positive-definite if all the diagonal entries are positive, and each diagonal entry is greater than the sum of the absolute values of all other entries in the corresponding row/column.</p> </blockquote> <p>I couldn't find a proof for this statement. I also couldn't find a reference in my linear algebra books.</p> <p>I've a few questions.</p> <ol> <li><p>How do we prove the above statement?</p></li> <li><p>Is the following <strong>slightly weaker</strong> statement true? </p></li> </ol> <blockquote> <p>A symmetric matrix $A$ is positive-definite if all the diagonal entries are positive, each diagonal entry is greater than <strong>or equal to</strong> the sum of the absolute values of all other entries in the corresponding row/column, and there exists one diagonal entry which is strictly greater than the sum of the absolute values of all other entries in the corresponding row/column.</p> </blockquote>
<p>These matrices are called (strictly) <a href="http://en.wikipedia.org/wiki/Diagonally_dominant_matrix">diagonally dominant</a>. The standard way to show they are positive definite is with the <a href="http://en.wikipedia.org/wiki/Gershgorin_circle_theorem">Gershgorin Circle Theorem</a>. Your weaker condition does not give positive definiteness; a counterexample is $ \left[ \begin{matrix} 1 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 1 \\ 0 &amp; 1 &amp; 1 \end{matrix} \right] $.</p>
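Both the strict-dominance criterion and the counterexample to the weaker statement are easy to check numerically. A minimal NumPy sketch (the helper name is ours, for illustration only):

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    # Positive diagonal, and each diagonal entry strictly greater than
    # the sum of absolute values of the other entries in its row.
    d = np.diag(A)
    off = np.abs(A).sum(axis=1) - np.abs(d)
    return bool(np.all(d > 0) and np.all(d > off))

# A strictly diagonally dominant symmetric matrix: positive definite.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 5.0]])
print(is_strictly_diagonally_dominant(A))  # True
print(np.linalg.eigvalsh(A))               # all eigenvalues positive

# The counterexample above: weakly dominant with one strict row,
# but singular (eigenvalues 0, 1, 2), hence not positive definite.
B = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])
print(is_strictly_diagonally_dominant(B))  # False
print(np.linalg.eigvalsh(B))
```

The eigenvalue check matches Gershgorin's conclusion: strict dominance with a positive diagonal keeps every Gershgorin disc, and hence every eigenvalue, strictly to the right of zero.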
<p>Before continuing, let me add the caution that a symmetric matrix can violate your rules and still be positive definite. Consider</p> <p><span class="math-container">$$ H \; = \; \left( \begin{array}{rrr} 3 &amp; 2 &amp; 0 \\ 2 &amp; 3 &amp; 2 \\ 0 &amp; 2 &amp; 3 \end{array} \right) . $$</span> Its middle row violates diagonal dominance, since $3 &lt; 2 + 2$, yet $H$ is positive definite: the congruence to a diagonal matrix with positive entries worked out below shows this, by Sylvester's Law of Inertia, <a href="https://en.wikipedia.org/wiki/Sylvester%27s_law_of_inertia" rel="noreferrer">https://en.wikipedia.org/wiki/Sylvester%27s_law_of_inertia</a>.</p> <p>A proof is given <a href="https://planetmath.org/propertiesofdiagonallydominantmatrix" rel="noreferrer">here</a> as a consequence of Gershgorin's circle theorem. For additional information, see <a href="http://en.wikipedia.org/wiki/Diagonally_dominant_matrix" rel="noreferrer">http://en.wikipedia.org/wiki/Diagonally_dominant_matrix</a> and <a href="http://mathworld.wolfram.com/DiagonallyDominantMatrix.html" rel="noreferrer">http://mathworld.wolfram.com/DiagonallyDominantMatrix.html</a>, or just Google &quot;diagonally dominant symmetric&quot;.</p> <p>Later methodology, amounting to repeatedly completing the square:</p> <p><span class="math-container">$$ \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc $$</span></p> <p><span class="math-container">$$ P^T H P = D $$</span></p> <p><span class="math-container">$$\left( \begin{array}{rrr} 1 &amp; 0 &amp; 0 \\ - \frac{ 2 }{ 3 } &amp; 1 &amp; 0 \\ \frac{ 4 }{ 5 } &amp; - \frac{ 6 }{ 5 } &amp; 1 \\ \end{array} \right) \left( \begin{array}{rrr} 3 &amp; 2 &amp; 0 \\ 2 &amp; 3 &amp; 2 \\ 0 &amp; 2 &amp; 3 \\ \end{array} \right) \left( \begin{array}{rrr} 1 &amp; - \frac{ 2 }{ 3 } &amp; \frac{ 4 }{ 5 } \\ 0 &amp; 1 &amp; - \frac{ 6 }{ 5 } \\ 0 &amp; 0 &amp; 1 \\ \end{array} \right) = \left( \begin{array}{rrr} 3 &amp; 0 &amp; 0 \\ 0 &amp;
\frac{ 5 }{ 3 } &amp; 0 \\ 0 &amp; 0 &amp; \frac{ 3 }{ 5 } \\ \end{array} \right) $$</span> <span class="math-container">$$ $$</span></p> <p><span class="math-container">$$ Q^T D Q = H $$</span></p> <p><span class="math-container">$$\left( \begin{array}{rrr} 1 &amp; 0 &amp; 0 \\ \frac{ 2 }{ 3 } &amp; 1 &amp; 0 \\ 0 &amp; \frac{ 6 }{ 5 } &amp; 1 \\ \end{array} \right) \left( \begin{array}{rrr} 3 &amp; 0 &amp; 0 \\ 0 &amp; \frac{ 5 }{ 3 } &amp; 0 \\ 0 &amp; 0 &amp; \frac{ 3 }{ 5 } \\ \end{array} \right) \left( \begin{array}{rrr} 1 &amp; \frac{ 2 }{ 3 } &amp; 0 \\ 0 &amp; 1 &amp; \frac{ 6 }{ 5 } \\ 0 &amp; 0 &amp; 1 \\ \end{array} \right) = \left( \begin{array}{rrr} 3 &amp; 2 &amp; 0 \\ 2 &amp; 3 &amp; 2 \\ 0 &amp; 2 &amp; 3 \\ \end{array} \right) $$</span></p> <p><span class="math-container">$$ \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc $$</span></p> <p>Algorithm discussed at <a href="http://math.stackexchange.com/questions/1388421/reference-for-linear-algebra-books-that-teach-reverse-hermite-method-for-symmetr">reference for linear algebra books that teach reverse Hermite method for symmetric matrices</a><br /> <a href="https://en.wikipedia.org/wiki/Sylvester%27s_law_of_inertia" rel="noreferrer">https://en.wikipedia.org/wiki/Sylvester%27s_law_of_inertia</a><br /> <span class="math-container">$$ H = \left( \begin{array}{rrr} 3 &amp; 2 &amp; 0 \\ 2 &amp; 3 &amp; 2 \\ 0 &amp; 2 &amp; 3 \\ \end{array} \right) $$</span> <span class="math-container">$$ D_0 = H $$</span> <span class="math-container">$$ E_j^T D_{j-1} E_j = D_j $$</span> <span class="math-container">$$ P_{j-1} E_j = P_j $$</span> <span class="math-container">$$ E_j^{-1} Q_{j-1} = Q_j $$</span> <span class="math-container">$$ P_j Q_j = Q_j P_j = I $$</span> <span class="math-container">$$ P_j^T H P_j = D_j $$</span> <span 
class="math-container">$$ Q_j^T D_j Q_j = H $$</span></p> <p><span class="math-container">$$ H = \left( \begin{array}{rrr} 3 &amp; 2 &amp; 0 \\ 2 &amp; 3 &amp; 2 \\ 0 &amp; 2 &amp; 3 \\ \end{array} \right) $$</span></p> <p>==============================================</p> <p><span class="math-container">$$ E_{1} = \left( \begin{array}{rrr} 1 &amp; - \frac{ 2 }{ 3 } &amp; 0 \\ 0 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 1 \\ \end{array} \right) $$</span> <span class="math-container">$$ P_{1} = \left( \begin{array}{rrr} 1 &amp; - \frac{ 2 }{ 3 } &amp; 0 \\ 0 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 1 \\ \end{array} \right) , \; \; \; Q_{1} = \left( \begin{array}{rrr} 1 &amp; \frac{ 2 }{ 3 } &amp; 0 \\ 0 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 1 \\ \end{array} \right) , \; \; \; D_{1} = \left( \begin{array}{rrr} 3 &amp; 0 &amp; 0 \\ 0 &amp; \frac{ 5 }{ 3 } &amp; 2 \\ 0 &amp; 2 &amp; 3 \\ \end{array} \right) $$</span></p> <p>==============================================</p> <p><span class="math-container">$$ E_{2} = \left( \begin{array}{rrr} 1 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; - \frac{ 6 }{ 5 } \\ 0 &amp; 0 &amp; 1 \\ \end{array} \right) $$</span> <span class="math-container">$$ P_{2} = \left( \begin{array}{rrr} 1 &amp; - \frac{ 2 }{ 3 } &amp; \frac{ 4 }{ 5 } \\ 0 &amp; 1 &amp; - \frac{ 6 }{ 5 } \\ 0 &amp; 0 &amp; 1 \\ \end{array} \right) , \; \; \; Q_{2} = \left( \begin{array}{rrr} 1 &amp; \frac{ 2 }{ 3 } &amp; 0 \\ 0 &amp; 1 &amp; \frac{ 6 }{ 5 } \\ 0 &amp; 0 &amp; 1 \\ \end{array} \right) , \; \; \; D_{2} = \left( \begin{array}{rrr} 3 &amp; 0 &amp; 0 \\ 0 &amp; \frac{ 5 }{ 3 } &amp; 0 \\ 0 &amp; 0 &amp; \frac{ 3 }{ 5 } \\ \end{array} \right) $$</span></p> <p>==============================================</p> <p><span class="math-container">$$ \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc $$</span></p> <p><span 
class="math-container">$$ P^T H P = D $$</span></p> <p><span class="math-container">$$\left( \begin{array}{rrr} 1 &amp; 0 &amp; 0 \\ - \frac{ 2 }{ 3 } &amp; 1 &amp; 0 \\ \frac{ 4 }{ 5 } &amp; - \frac{ 6 }{ 5 } &amp; 1 \\ \end{array} \right) \left( \begin{array}{rrr} 3 &amp; 2 &amp; 0 \\ 2 &amp; 3 &amp; 2 \\ 0 &amp; 2 &amp; 3 \\ \end{array} \right) \left( \begin{array}{rrr} 1 &amp; - \frac{ 2 }{ 3 } &amp; \frac{ 4 }{ 5 } \\ 0 &amp; 1 &amp; - \frac{ 6 }{ 5 } \\ 0 &amp; 0 &amp; 1 \\ \end{array} \right) = \left( \begin{array}{rrr} 3 &amp; 0 &amp; 0 \\ 0 &amp; \frac{ 5 }{ 3 } &amp; 0 \\ 0 &amp; 0 &amp; \frac{ 3 }{ 5 } \\ \end{array} \right) $$</span> <span class="math-container">$$ $$</span></p> <p><span class="math-container">$$ Q^T D Q = H $$</span></p> <p><span class="math-container">$$\left( \begin{array}{rrr} 1 &amp; 0 &amp; 0 \\ \frac{ 2 }{ 3 } &amp; 1 &amp; 0 \\ 0 &amp; \frac{ 6 }{ 5 } &amp; 1 \\ \end{array} \right) \left( \begin{array}{rrr} 3 &amp; 0 &amp; 0 \\ 0 &amp; \frac{ 5 }{ 3 } &amp; 0 \\ 0 &amp; 0 &amp; \frac{ 3 }{ 5 } \\ \end{array} \right) \left( \begin{array}{rrr} 1 &amp; \frac{ 2 }{ 3 } &amp; 0 \\ 0 &amp; 1 &amp; \frac{ 6 }{ 5 } \\ 0 &amp; 0 &amp; 1 \\ \end{array} \right) = \left( \begin{array}{rrr} 3 &amp; 2 &amp; 0 \\ 2 &amp; 3 &amp; 2 \\ 0 &amp; 2 &amp; 3 \\ \end{array} \right) $$</span></p> <p><span class="math-container">$$ \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc $$</span></p>
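The congruence $P^T H P = D$ worked out above can be verified numerically; a short NumPy check (our own sketch):

```python
import numpy as np

H = np.array([[3.0, 2.0, 0.0],
              [2.0, 3.0, 2.0],
              [0.0, 2.0, 3.0]])

# P = E_1 E_2, the product of the two column operations from the answer.
P = np.array([[1.0, -2/3,  4/5],
              [0.0,  1.0, -6/5],
              [0.0,  0.0,  1.0]])

D = P.T @ H @ P
print(np.round(D, 12))  # diag(3, 5/3, 3/5): a positive diagonal, so H is
                        # positive definite by Sylvester's law of inertia
```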
probability
<p>Your friend flips a coin 7 times and you flip a coin 8 times; the person who got the most tails wins. If you get an equal amount, your friend wins.</p> <p>There is a 50% chance of you winning the game and a 50% chance of your friend winning.</p> <p>How can I prove this? The way I see it, you get one more flip than your friend so you have a 50% chance of winning if there is a 50% chance of getting a tails.</p> <p>I even wrote a little script to confirm this suspicion:</p> <pre><code>from random import choice

coin = ['H', 'T']

def flipCoin(count, side):
    num = 0
    for i in range(0, count):
        if choice(coin) == side:
            num += 1
    return num

games = 0
wins = 0
plays = 88888

for i in range(0, plays):
    you = flipCoin(8, 'T')
    friend = flipCoin(7, 'T')
    games += 1
    if you &gt; friend:
        wins += 1

print('Games: ' + str(games) + ' Wins: ' + str(wins))

probability = wins / games * 100.0
print('Probability: ' + str(probability) + ' from ' + str(plays) + ' games.')
</code></pre> <p>and as expected,</p> <pre><code>Games: 88888 Wins: 44603
Probability: 50.17887678876789 from 88888 games.
</code></pre> <p>But how can I prove this?</p>
<p>Well, let there be two players $A$ and $B$. Let them flip $7$ coins each. Whoever gets more tails wins, ties are discounted. It's obvious that both players have an equal probability of winning $p=0.5$.</p> <p>Now let's extend this. As both players have equal probability of winning the first seven tosses, I think we can discard them and view the 8th toss as a tiebreaker. So let's give player $A$ the 8th toss: if he gets a tail, he wins, otherwise, he loses. So with $p = 0.5$, he will either win or lose this 8th toss. Putting it like this, we can see that the 8th toss for player $A$ is equivalent to giving both players another toss and discarding ties, so both players have winning probabilities of $0.5$.</p>
<p>The probability distribution of the number of tails flipped by you is binomial with parameters $n = 8$, and $p$, where we will take $p$ to be the probability of obtaining tails in a single flip. Then the random number of tails you flipped $Y$ has the probability mass function $$\Pr[Y = k] = \binom{8}{k} p^k (1-p)^{8-k}.$$ Similarly, the number of tails $F$ flipped by your friend is $$\Pr[F = k] = \binom{7}{k} p^k (1-p)^{7-k},$$ assuming that the coin you flip and the coin your friend flips have the same probability of tails.</p> <p>Now, suppose we are interested in calculating $$\Pr[Y &gt; F],$$ the probability that you get strictly more tails than your friend (and therefore, you win). An exact calculation would then require the evaluation of the sum $$\begin{align*} \Pr[Y &gt; F] &amp;= \sum_{k=0}^7 \sum_{j=k+1}^8 \Pr[Y = j \cap F = k] \\ &amp;= \sum_{k=0}^7 \sum_{j=k+1}^8 \binom{8}{j} p^j (1-p)^{8-j} \binom{7}{k} p^k (1-p)^{7-k}, \end{align*} $$ since your outcome $Y$ is independent of his outcome $F$. For such a small number of trials, this is not hard to compute: $$\begin{align*} \Pr[Y &gt; F] &amp;= p^{15}+7 p^{14} q+77 p^{13} q^2+203 p^{12} q^3+903 p^{11} q^4+1281 p^{10} q^5 \\ &amp;+3115 p^9 q^6+2605 p^8 q^7+3830 p^7 q^8+1890 p^6 q^9+1722 p^5 q^{10} \\ &amp;+462 p^4 q^{11}+252 p^3 q^{12}+28 p^2 q^{13}+8 p q^{14}, \end{align*}$$ where $q = 1-p$. For $p = 1/2$--a fair coin--this is exactly $1/2$.</p> <p>That seems surprising! But there is an intuitive interpretation. Think of your final toss as a tiebreaker, in the event that both of you got the same number of tails after 7 trials each. If you win the final toss with a tail, you win because your tail count is now strictly higher. If not, your tail count is still the same as his, and under the rules, he wins. But the chances of either outcome for the tiebreaker is the same.</p>
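The double sum above is small enough to evaluate directly. A Python sketch (the function name is ours):

```python
from math import comb

def prob_you_win(p, n_you=8, n_friend=7):
    # Exact Pr[Y > F] for independent binomial tail counts:
    # you flip n_you times, your friend flips n_friend times.
    q = 1 - p
    total = 0.0
    for k in range(n_friend + 1):           # friend's number of tails
        for j in range(k + 1, n_you + 1):   # yours, strictly greater
            total += (comb(n_you, j) * p**j * q**(n_you - j)
                      * comb(n_friend, k) * p**k * q**(n_friend - k))
    return total

print(prob_you_win(0.5))  # 1/2 (up to floating point) for a fair coin
print(prob_you_win(0.3))  # a biased coin breaks the symmetry
```

Consistent with the tiebreaker interpretation, only $p = 1/2$ returns exactly $1/2$.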
logic
<p>As a physicist trying to understand the foundations of modern mathematics (in particular Model Theory) $-$ I have a hard time coping with the border between syntax and semantics. I believe a lot would become clearer for me, if I stated what I think the Gödel's Completeness Theorem is about (after studying various materials including Wikipedia it seems redundant for me) and someone knowledgeable would clarify my misconceptions. So here it goes:</p> <p>As I understand, if we have a set $U$ with a particular structure (functions, relations etc.) we can interpret it (through a particular signature, e.g. group signature $\{ e,\cdot \}$ ), as a model $\mathfrak{A}$ for a certain mathematical theory $\mathcal{T}$ (a theory being a set of axioms and its consequences). The theory is satisfied by $\mathfrak{A}$ only if $U$'s structure satisfies the axioms.</p> <p>Enter Gödel's theorem: For every first order theory $\mathcal{T}$ :</p> <p>$$\left( \exists \textrm{model } \mathfrak{A}: \mathfrak{A} \models \mathcal{T} \right) \iff \mathcal{T} \textrm{ is consistent}$$ So I'm confused. Isn't $\mathcal{T}$ being consistent a natural requirement which implicates that a set $U$ with a corresponding structure always exists (because of the ZFC's set theory freedom in constructing sets as we please without any concerns regarding what constitutes the set)? And that in turn always allows us to create a model $\mathfrak{A}$ with an interpretation of the signature of the theory $\mathcal{T}$ in terms of $U$'s structure?</p> <p>Where am I making mistakes? What concepts do I need to understand better in order to be able to properly comprehend this theorem and what model theory is and is not about? Please help!</p>
<p>It may help to look at things from a more general perspective. Presentations that focus on just first-order logic may obscure the fact that specific choices are implicit in the definitions of first-order logic; the general perspective highlights these choices. I want to write this up in detail, as a reference.</p> <h3>General "logics"</h3> <p>We define a particular type of general "logic" with negation. This definition is intended to be very general. In particular, it accommodates much broader types of "syntax" and "semantics" than first-order logic. </p> <p>A general "logic" will consist of:</p> <ul> <li><p>A set of "sentences" $L$. These do not have to be sentences in the sense of first-order logic, they can be any set of objects.</p></li> <li><p>A function $N: L \to L$ that assigns to each $x \in L$ a "negation" or "denial" $N(x)$.</p></li> <li><p>A set of "deductive rules", which are given as a closure operation on the powerset of $L$. So we have a function $c: 2^L \to 2^L$ such that</p> <ol> <li><p>$S \subseteq c(S)$ for each $S \subseteq L$</p></li> <li><p>$c(c(S)) = c(S)$ for each $S \subseteq L$</p></li> <li><p>If $S \subseteq S'$ then $c(S) \subseteq c(S')$. </p></li> </ol></li> <li><p>A set of "models" $M$. These do not have to be structures in the sense of first-order logic. The only assumption is that each $m \in M$ comes with a set $v_m \subseteq L$ of sentences that are "satisfied" (in some sense) by $M$: </p> <ol> <li><p>If $S \subseteq L$ and $x \in v_m$ for each $x \in S$ then $y \in v_m $ for each $y \in c(S)$</p></li> <li><p>There is no $m \in M$ and $x \in L$ with $x \in v_m$ and $N(x) \in v_m$</p></li> </ol></li> </ul> <p>The exact nature of the "sentences", "deductive rules", and "models", and the definition of a model "satisfying" a sentence are irrelevant, as long as they satisfy the axioms listed above. These axioms are compatible with both classical and intuitionistic logic. 
They are also compatible with infinitary logics such as $L_{\omega_1, \omega}$, with modal logics, and other logical systems.</p> <p>The main restriction in a general "logic" is that we have included a notion of negation or denial in the definition of a general "logic" so that we can talk about consistency.</p> <ul> <li><p>We say that a set $S \subseteq L$ is <strong>syntactically consistent</strong> if there is no $x \in L$ such that $x$ and $N(x)$ are both in $c(S)$.</p></li> <li><p>We say $S$ is <strong>semantically consistent</strong> if there is an $m \in M$ such that $x \in v_m$ for all $x \in S$. </p></li> </ul> <p>The definition of a general "logic" is designed to imply that each semantically consistent theory is syntactically consistent. </p> <h3>First-order logic as a general logic</h3> <p>To see how the definition of a general "logic" works, here is how to view first-order logic in any fixed signature as a general "logic". Fix a signature $\sigma$.</p> <ul> <li><p>$L$ will be the set of all $\sigma$-sentences. </p></li> <li><p>$N$ will take a sentence $x$ and return $\lnot x$, the canonical negation of $x$. </p></li> <li><p>$c$ will take $S \subseteq L$ and return the set of all $\sigma$-sentences provable from $S$. </p></li> <li><p>$M$ will be the set of <em>all</em> $\sigma$-structures. For each $m \in M$, $v_m$ is given by the usual Tarski definition of truth.</p></li> </ul> <p>With these definitions, syntactic consistency and semantic consistency in the general sense match up with syntactic consistency and semantic consistency as usually defined for first-order logic.</p> <h3>The completeness theorem</h3> <p>Gödel's completeness theorem simply says that, if we treat first-order logic in a fixed signature as a general "logic" (as above) then syntactic consistency is equivalent to semantic consistency. 
</p> <p>The benefit of the general perspective is that we can see how things could go wrong if we change just one part of the interpretation of first-order logic with signature $\sigma$ as a general "logic":</p> <ol> <li><p>If we were to replace $c$ with a weaker operator, syntactic consistency may not imply semantic consistency. For example, we could take $c(S) = S$ for all $S$. Then there would be syntactically consistent theories that have no model. In practical terms, making $c$ weaker means removing deduction rules.</p></li> <li><p>If we were to replace $M$ with a smaller class of models, syntactic consistency may not imply semantic consistency. For example, if we take $M$ to be just the set of <em>finite</em> $\sigma$-structures, there are syntactically consistent theories that have no model. In practical terms, making $M$ smaller means excluding some structures from consideration.</p></li> <li><p>If we were to replace $c$ with a stronger closure operator, semantic consistency may not imply syntactic consistency. For example, we could take $c(S) = L$ for all $S$. Then there would be semantically consistent theories that are syntactically inconsistent. In practical terms, making $c$ stronger means adding new deduction rules.</p></li> </ol> <p>On the other hand, some changes would preserve the equivalence of syntactic and semantic consistency. For example, if we take $M$ to be just the set of <em>finite or countable</em> $\sigma$-structures, we can still prove the corresponding completeness theorem for first-order logic. In this sense, the choice of $M$ to be the set of <em>all</em> $\sigma$-structures is arbitrary.</p> <h3>Other completeness theorems</h3> <p>We say that the "completeness theorem" for a general "logic" is the theorem that syntactic consistency is equivalent to semantic consistency in that logic.</p> <ul> <li><p>There is a natural completeness theorem for intuitionistic first-order logic. 
Here we let $c$ be the closure operator derived from any of the usual deductive systems for intuitionistic logic, and let $M$ be the set of Kripke models. </p></li> <li><p>There is a completeness theorem for second-order logic (in a fixed signature) with Henkin semantics. Here we let $c$ be the closure operator derived from the usual deductive system for second-order logic, and let $M$ be the set of Henkin models. On the other hand, if we let $M$ be the set of all "full" models, the corresponding completeness theorem fails, because this class of models is too small.</p></li> <li><p>There are similar completeness theorems for propositional and first-order modal logics using Kripke frames.</p></li> </ul> <p>In each of those three cases, the historical development began with a deductive system, and the corresponding set of models was identified later. But, in other cases, we may begin with a set of models and look for a deductive system (including, in this sense, a set of axioms) that leads to a generalized completeness theorem. This is related to a common problem in model theory, which is to determine whether a given class of structures is "axiomatizable".</p>
<p>The usual form of the completeness theorem is this: $ T \models \phi \implies T \vdash\phi$, or that if $\phi$ is true in all models $\mathcal{M} \models T$, then there is a proof of $\phi$ from $T$. This is a non-trivial statement: structures and models are about sets with operations and relations that satisfy sentences, while proofs ignore the sets with structure and just give rules for deriving new sentences from old. </p> <p>If you go to second order logic, this is no longer true. We can have a theory $PA$, which only has one model $\mathbb N \models PA$, but there are sentences $PA \models \phi$ with $PA \not\vdash \phi$ ("true but not provable" sentences). This follows from the incompleteness theorem, which says that truth in the particular model $\mathbb N$ cannot be pinned down by proofs. The way first order logic avoids this is by the fact that a first order theory can't pin down only one model $\mathbb N \models PA$. It has to also admit non-standard models (this follows from Löwenheim-Skolem).</p> <p>This theorem, along with the soundness theorem $T\vdash \phi \implies T\models \phi$, gives a strong correspondence between the syntax and semantics of first order logic.</p> <p>Your main confusion is that consistency here is a syntactic notion, so it doesn't directly have anything to do with models. The correspondence between the usual form of the completeness theorem and your form comes from using a contradiction in place of $\phi$ and taking the contrapositive. So if $T \not \vdash \bot$ ($T$ is consistent), then $T \not \models \bot$. That is, if $T$ is consistent, then there exists a model $\mathcal M \models T$ such that $\mathcal M \not \models \bot \iff \mathcal M \models \top$, but that's a tautology, so we just get that there exists a model of $T$. </p>
logic
<p>According to <a href="http://en.wikipedia.org/wiki/Logical_connective#Order_of_precedence">the precedence of logical connectives</a>, the operator $\rightarrow$ gets higher precedence than the $\leftrightarrow$ operator. But what about the associativity of the $\rightarrow$ operator?</p> <p>The implication operator ($\rightarrow$) does not have the associative property. That means that $(p \rightarrow q) \rightarrow r$ is not equivalent to $p \rightarrow (q \rightarrow r)$. Because of that, the question comes up of how $p \rightarrow q \rightarrow r$ should be interpreted.</p> <p>The proposition $p \rightarrow q \rightarrow r$ can be defined in multiple ways that make sense:</p> <ul> <li>$(p \rightarrow q) \rightarrow r$ (left associativity)</li> <li>$p \rightarrow (q \rightarrow r)$ (right associativity)</li> <li>$(p \rightarrow q) \land (q \rightarrow r)$</li> </ul> <p>Which one of these definitions is used?</p> <p>I could not locate any book/webpage that mentions the associativity of logical operators in discrete mathematics.</p> <p>Please also cite the reference (book/reliable webpage) that you use to answer my question (as I'm planning to add this to the Wikipedia page about logical connectives).</p> <p>Thanks.</p> <p>PS: I got this question when I saw this problem: Check whether the following compound proposition is a tautology or not:</p> <p>$$ \mathrm{p} \leftrightarrow (\mathrm{q} \wedge \mathrm{r}) \rightarrow \neg\mathrm{r} \rightarrow \neg\mathrm{p}$$</p>
<p>When you enter $p \Rightarrow q \Rightarrow r$ in <a href="http://en.wikipedia.org/wiki/Mathematica">Mathematica</a> (with <code>p \[Implies] q \[Implies] r</code>), it displays $p \Rightarrow (q \Rightarrow r)$.</p> <p>That makes it plausible that the $\rightarrow$ operator is generally accepted as right-associative.</p>
<p>Some logical operators are associative: both <span class="math-container">$\wedge$</span> and <span class="math-container">$\vee$</span> are associative, as a simple check of truth tables verifies. Likewise, the biconditional <span class="math-container">$\leftrightarrow$</span> is associative.</p> <p>However, the implication <span class="math-container">$\rightarrow$</span> is <em>not</em> associative. Compare <span class="math-container">$(p\rightarrow q)\rightarrow r$</span> and <span class="math-container">$p\rightarrow(q\rightarrow r)$</span>. If all of <span class="math-container">$p$</span>, <span class="math-container">$q$</span>, and <span class="math-container">$r$</span> are false, then <span class="math-container">$p\rightarrow (q\rightarrow r)$</span> is true, because the antecedent is false; but <span class="math-container">$(p\rightarrow q)\rightarrow r$</span> is false, because <span class="math-container">$r$</span> is false, but <span class="math-container">$p\rightarrow q$</span> is true. They also disagree when <span class="math-container">$p$</span> and <span class="math-container">$r$</span> are false but <span class="math-container">$q$</span> is true: then <span class="math-container">$p\rightarrow(q\rightarrow r)$</span> is true because the antecedent is false, but <span class="math-container">$(p\rightarrow q)$</span> is true and <span class="math-container">$r$</span> false, so <span class="math-container">$(p\rightarrow q)\rightarrow r$</span> is false.</p> <p>Since they take different values at some truth assignments, the two propositions are not equivalent, so <span class="math-container">$\to$</span> is not associative.</p>
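The truth-table argument above can be checked mechanically. A small Python sketch (names are ours):

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b is false only when a is true and b is false.
    return (not a) or b

rows = list(product([False, True], repeat=3))
left = [implies(implies(p, q), r) for p, q, r in rows]    # (p -> q) -> r
right = [implies(p, implies(q, r)) for p, q, r in rows]   # p -> (q -> r)

print(left == right)  # False: the two bracketings are not equivalent
# The two readings disagree exactly on the rows where p and r are
# both false, matching the cases worked out above.
```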
geometry
<p>It seems, at times, that physicists and mathematicians mean different things when they say the word "tensor." From my perspective, when I say tensor, I mean "an element of a tensor product of vector spaces." </p> <p>For instance, here is a segment about tensors from Zee's book <em>Einstein Gravity in a Nutshell</em>:</p> <blockquote> <p>We already saw in the preceding chapter that a vector is defined by how it transforms: $V^{'i} = R^{ij}V^j$ . Consider a collection of “mathematical entities” $T^{ij}$ with $i , j = 1, 2, . . . , D$ in $D$-dimensional space. If they transform under rotations according to $T^{ij} \to T^{'ij} = R^{ik}R^{jl}T^{kl}$ then we say that $T$ transforms like a tensor.</p> </blockquote> <p>This does not really make any sense to me. Even for "vectors," and before we get to "tensors," it seems like we'd have to be given a sense of what it means for an object to "transform." How do they divine these transformation rules?</p> <p>I am not completely formalism bound, but I have no idea how they would infer these transformation rules without a notion of what the object is <em>first</em>. For me, if I am given, say, $v \in \mathbb{R}^3$ endowed with whatever basis, I can <em>derive</em> that any linear map is given by matrix multiplication as it seems the physicists mean. But, I am having trouble even interpreting their statement. </p> <p>How do you derive how something "transforms" without having a notion of what it is? If you want to convince me that the moon is made of green cheese, I need to at least have a notion of what the moon is first. The same is true of tensors. </p> <p>My questions are:</p> <ul> <li>What exactly are the physicists saying, and can someone translate what they're saying into something more intelligible? How can they get these "transformation rules" without having a notion of what the thing is that they are transforming?</li> <li>What is the relationship between what physicists are expressing versus mathematicians? 
</li> <li>How can I talk about this with physicists without being accused of being a stickler for formalism and some kind of plague? </li> </ul>
<p>What a physicist probably means when they say &quot;tensor&quot; is &quot;a global section of a tensor bundle.&quot; I'll try and break it down to show the connection to what mathematicians mean by tensor.</p> <p>Physicists always have a manifold <span class="math-container">$M$</span> lying around. In classical mechanics or quantum mechanics, this manifold <span class="math-container">$M$</span> is usually flat spacetime, mathematically <span class="math-container">$\mathbb{R}^4$</span>. In general relativity, <span class="math-container">$M$</span> is the spacetime manifold whose geometry is governed by Einstein's equations.</p> <p>Now, with this underlying manifold <span class="math-container">$M$</span> we can discuss what it means to have a vector field on <span class="math-container">$M$</span>. Manifolds are locally euclidean, so we know what tangent vector means locally on <span class="math-container">$M$</span>. The question is, how do you make sense of a vector field globally? The answer is, you specify an open cover of <span class="math-container">$M$</span> by coordinate patches, say <span class="math-container">$\{U_\alpha\}$</span>, and you specify vector fields <span class="math-container">$V_\alpha=(V_\alpha)^i\frac{\partial}{\partial x^i}$</span> defined locally on each <span class="math-container">$U_\alpha$</span>. Finally, you need to ensure that on the overlaps <span class="math-container">$U_\alpha \cap U_\beta$</span> that <span class="math-container">$V_\alpha$</span> &quot;agrees&quot; with <span class="math-container">$V_\beta$</span>. 
When you take a course in differential geometry, you study vector fields and you show that the proper way to patch them together is via the following relation on their components: <span class="math-container">$$ (V_\alpha)^i = \frac{\partial x^i}{\partial y^j} (V_\beta)^j $$</span> (here, Einstein summation notation is used, and <span class="math-container">$y^j$</span> are coordinates on <span class="math-container">$U_\beta$</span>). With this definition, one can define a vector bundle <span class="math-container">$TM$</span> over <span class="math-container">$M$</span>, which should be thought of as the union of tangent spaces at each point. The compatibility relation above translates to saying that there is a well-defined global section <span class="math-container">$V$</span> of <span class="math-container">$TM$</span>. So, when a physicist says &quot;this transforms like this&quot; they're implicitly saying &quot;there is some well-defined global section of this bundle, and I'm making use of its compatibility with respect to different choices of coordinate charts for the manifold.&quot;</p> <p>So what does this have to do with mathematical tensors? Well, given vector bundles <span class="math-container">$E$</span> and <span class="math-container">$F$</span> over <span class="math-container">$M$</span>, one can form their tensor product bundle <span class="math-container">$E\otimes F$</span>, which essentially is defined by <span class="math-container">$$ E\otimes F = \bigcup_{p\in M} E_p\otimes F_p $$</span> where the subscript <span class="math-container">$p$</span> indicates &quot;take the fiber at <span class="math-container">$p$</span>.&quot; Physicists in particular are interested in iterated tensor powers of <span class="math-container">$TM$</span> and its dual, <span class="math-container">$T^*M$</span>. 
Whenever they write &quot;the tensor <span class="math-container">$T^{ij...}_{k\ell...}$</span> transforms like so and so&quot; they are talking about a global section <span class="math-container">$T$</span> of a tensor bundle <span class="math-container">$(TM)^{\otimes n} \otimes (T^*M)^{\otimes m}$</span> (where <span class="math-container">$n$</span> is the number of upper indices and <span class="math-container">$m$</span> is the number of lower indices) and they're making use of the well-definedness of the global section, just like for the vector field.</p> <p>Edit: to directly answer your question about how they get their transformation rules, when studying differential geometry one learns how to take compatibility conditions from <span class="math-container">$TM$</span> and <span class="math-container">$T^*M$</span> and turn them into compatibility relations for tensor powers of these bundles, thus eliminating any guesswork as to how some tensor should &quot;transform.&quot;</p> <p>For more on this point of view, Lee's book on Smooth Manifolds would be a good place to start.</p>
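<p>To see the transformation rule in action numerically, here is a small self-contained sketch (my own example, not from any text): the coordinate vector field <span class="math-container">$\partial/\partial x$</span>, which has Cartesian components <span class="math-container">$(1,0)$</span>, re-expressed in polar coordinates <span class="math-container">$(r,\theta)$</span> by contracting with the Jacobian <span class="math-container">$\partial y'^i/\partial x^j$</span>.</p>

```python
import math

# Sketch: components of the vector field d/dx in polar coordinates,
# via (V')^i = (dy'^i / dx^j) V^j with y' = (r, theta).  Function name is mine.
def polar_components(x, y):
    r = math.hypot(x, y)
    # dr/dx = x/r, dr/dy = y/r, dtheta/dx = -y/r**2, dtheta/dy = x/r**2
    J = [[x / r, y / r],
         [-y / r**2, x / r**2]]
    V_cart = [1.0, 0.0]  # Cartesian components of d/dx
    return [sum(J[i][j] * V_cart[j] for j in range(2)) for i in range(2)]

r, th = 2.0, 0.7
Vr, Vth = polar_components(r * math.cos(th), r * math.sin(th))
# expected from the chain rule: (cos(theta), -sin(theta)/r)
assert abs(Vr - math.cos(th)) < 1e-12
assert abs(Vth + math.sin(th) / r) < 1e-12
```

<p>The same contraction, applied once per index, is exactly what produces the patching rules for the higher tensor powers of <span class="math-container">$TM$</span> and <span class="math-container">$T^*M$</span>.</p>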
<p>Being a physicist by training maybe I can help.</p> <p>The "physicist" definition of a vector you allude to, in more mathematician-friendly terms would become something like</p> <blockquote> <p>Let $V$ be a vector space and fix a reference frame $\mathcal{F}$ (mathematicians' lingo: a basis.) A collection $\{v^1, \ldots, v^n\}$ of real numbers is called a vector if upon a change of reference frame $\mathcal{F}^\prime = R ^{-1} \mathcal{F}$ it becomes the collection $\{ v^{\prime 1}, \dots, v^{\prime n}\}$ where $v^{\prime i} =R^i_{\ j} v^j$.</p> </blockquote> <p>If you like, you are defining a vector as an equivalence class of $n$-tuples of real numbers.</p> <p>Yes, in many physics books most of what I wrote is tacitly implied/shrugged off as non-important. Anyway, the definition of tensors as collections of numbers transforming according to certain rules is not so esoteric/rare as far as I am aware, and as others have pointed out it's also how mathematicians thought about them back in the past.</p> <p>Physicists often prefer to describe objects in what they find to be more intuitive and less abstract terms, and one of their strengths is the ability to work with vaguely defined objects! (Yes, I consider it a strength and yes, it has its drawbacks and pitfalls, no need to start arguing about that).</p> <p>The case of tensors is similar, just think of the collection of numbers with indices as the components with respect to some basis. 
Be warned that sometimes what a physicist calls a tensor is actually a tensor field.</p> <p>As to why one would use the definition in terms of components rather than more elegant invariant ones: it takes less technology and is more down-to-earth than introducing the free module on a set and quotienting by the submodule encoding the multilinearity relations.</p> <p>Finally, regarding how to communicate with physicists: this has always been a struggle on both sides but</p> <ol> <li><p>Many physicists, at least in the general relativity area, are familiar with the definition of a tensor in terms of multilinear maps. In fact, that is how they are defined in all GR books I have looked at (Carroll, Misner-Thorne-Wheeler, Hawking-Ellis, Wald).</p></li> <li><p>It wouldn't hurt for you to get acquainted, if not proficient, with the index notation. It has its own strengths <em>and</em> is still intrinsic. See Wald or the first volume of Penrose-Rindler "Spinors and space-time" under abstract index notation for more on that.</p></li> </ol>
probability
<p>What is an intuitive interpretation of the 'events' $$\limsup A_n:=\bigcap_{n=0}^{\infty}\bigcup_{k=n}^{\infty}A_k$$ and $$\liminf A_n:=\bigcup_{n=0}^{\infty}\bigcap_{k=n}^{\infty}A_k$$ when $A_n$ are subsets of a measure space $(\Omega, F,\mu)$. Of the first it should be that 'an infinite number of those events is verified', but I don't see how to explain (or interpret) this. Thanks for any help!</p>
<p>Try reading it piece by piece. Recall that $A\cup B$ means that at least one of $A$, $B$ happens and $A\cap B$ means that both $A$ and $B$ happen. Infinite unions and intersections are interpreted similarly. In your case, $\bigcup_{k=n}^{\infty}A_k$ means that at least one of the events $A_k$ for $k\geq n$ happens. In other words "there exists $k\geq n$ such that $A_k$ happens".</p> <p>Now, let $B_n=\bigcup_{k=n}^{\infty}A_k$ to simplify notation a bit. This gives us $\bigcap_{n=0}^{\infty}\bigcup_{k=n}^{\infty}A_k = \bigcap_{n=0}^{\infty}B_n$. This is interpreted as "all of the events $B_n$ for $n\geq 0$ happen" which is the same as "for each $n\geq 0$ the event $B_n$ happens". Combined with the above interpretation, this tells us that $\limsup A_n$ means "for each $n\geq 0$ it happens that there is a $k\geq n$ such that $A_k$ happens". This is precisely the same as saying that infinitely many of the events $A_k$ happen.</p> <p>The other one is interpreted similarly: $\bigcap_{k=n}^{\infty}A_k$ means that for all $k\geq n$ the event $A_k$ happens. So, $\bigcup_{n=0}^{\infty}\bigcap_{k=n}^{\infty}A_k$ says that for at least one $n\geq0$ the event $\bigcap_{k=n}^{\infty}A_k$ will happen, i.e.: there is an $n\geq 0$ such that for all $k\geq n$ the event $A_k$ happens. In other words: $\liminf A_n$ is the event that from some point on, every event happens.</p> <p><strong>Edit:</strong> As requested by Diego, I'm adding a further explanation. Sets are naturally ordered by inclusion $\subseteq$. This is a partial order, even a lattice. (Putting aside the fact that the universe of sets is not a set.) 
In fact, every family of sets has an $\inf$ and $\sup$ with respect to $\subseteq$, which can be defined by: $$\inf_{\lambda\in\Lambda}A_\lambda =\bigcap_{\lambda\in\Lambda}A_\lambda$$ and $$\sup_{\lambda\in\Lambda}A_\lambda =\bigcup_{\lambda\in\Lambda}A_\lambda.$$</p> <p>Now, the usual definition of $\limsup$ and $\liminf$ (of sequences of real numbers) can be rephrased in terms of infima and suprema as follows: $$\liminf_{n\to\infty}a_n=\sup_{n\geq 0}\inf_{k\geq n} a_k$$ and $$\limsup_{n\to\infty}a_n=\inf_{n\geq 0}\sup_{k\geq n} a_k.$$</p> <p>We can now use the same definition for sets: $$\liminf_{n\to\infty}A_n=\sup_{n\geq 0}\inf_{k\geq n} A_k$$ and $$\limsup_{n\to\infty}A_n=\inf_{n\geq 0}\sup_{k\geq n} A_k.$$ Rewriting this in terms of $\bigcup$ and $\bigcap$, we get precisely the definitions from the question.</p>
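<p>For intuition, these definitions can be mimicked on a computer for an eventually periodic sequence of events, where truncating the infinite unions and intersections at a finite horizon loses nothing (a sketch of my own, not part of the answer above):</p>

```python
# Events alternate between {0, 1} and {1, 2}; only 1 occurs "eventually always",
# while 0, 1, 2 all occur infinitely often.
A = [{0, 1} if n % 2 == 0 else {1, 2} for n in range(100)]

def lim_sup(A, horizon=50):
    # in every tail's union = occurs infinitely often
    return set.intersection(*(set().union(*A[n:]) for n in range(horizon)))

def lim_inf(A, horizon=50):
    # in some tail's intersection = occurs from some point on
    return set().union(*(set.intersection(*A[n:]) for n in range(horizon)))

assert lim_sup(A) == {0, 1, 2}
assert lim_inf(A) == {1}
```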
<p>The $\limsup$ is the collection of all elements which appear in every tail of the sequence, namely results which occur infinitely often in the sequence of events.</p> <p>The $\liminf$ is the union of all elements appearing in <em>all</em> events from a certain point in time, namely results which occur in all but finitely many events of the sequence.</p>
logic
<p>I read that Presburger arithmetic is decidable while Peano arithmetic is undecidable. Peano arithmetic extends Presburger arithmetic just with the addition of the multiplication operator. Can someone please give me the 'intuitive' idea behind this?</p> <p>Or probably a formula in Peano arithmetic that cannot be proved. Does it have something to do with the self reference paradox in Gödel's Incompleteness Theorem?</p>
<p>Essentially, yes, but I believe the undecidability of Peano arithmetic (henceforth, PA) follows from the way the proof of Gödel's incompleteness theorem goes, rather than being a consequence of the fact that PA is incomplete. The proof (outlined below) starts by showing that PA can talk about computable relations, and goes on to show from this how you can construct an unprovable sentence. However, we can take a different approach to show that PA is undecidable: if PA can talk about computable relations, then you can formulate a sentence in the language of PA that is true if and only if a given algorithm halts / does not halt. (Briefly: An algorithm is the same thing as a computable partial function, and an algorithm halts on some input if and only if the corresponding partial function is defined on that input.) So if an algorithm can decide whether arbitrary sentences in PA are provable or not, we have an algorithm which solves the halting problem... but no such algorithm exists. So the key point is that PA is rich enough to talk about computation. An essential ingredient for talking about computation is the ability to encode a pair of numbers as a number and the ability to recover the original pair from the encoding. It's not immediately clear to me that $+$ is insufficient to do this, but it's certainly plausible that you need at least two binary operations.</p> <p>Here is a sketch of the proof of Gödel's first incompleteness theorem: First let's select a sufficiently powerful theory $T$, e.g. PA, and assume that it is a consistent theory, i.e. 
does not prove a contradiction.</p> <ol> <li>We show that we can encode formulae and proofs in the models of $T$.</li> <li>We show that $T$ is powerful enough to talk about computable relations.</li> <li>We show that there is a computable relation $\mathrm{Prf}(m, n)$ which holds if and only if $m$ encodes a valid proof of the sentence encoded by $n$.</li> <li>The above shows that there is a computable relation $Q(m, n)$ which holds if and only if $n$ encodes a formula $F(-)$ with one free variable and $m$ does <em>not</em> encode a valid proof of $F(n)$.</li> <li>So we can define a formula $P(x)$ by $\forall m. Q(m, x)$. This means $P(x)$ holds if and only if there is no valid proof of $F(x)$, assuming $x$ encodes a formula $F(-)$ with one free variable.</li> <li>But $P(-)$ is a formula with one free variable, and it can be encoded by some number $n$. So is the sentence $P(n)$ provable or not? Suppose it were. Then, that means $P(n)$ is true, so the theory asserts that there is no valid proof of $P(n)$ — a contradiction. </li> <li>So we are forced to conclude that the sentence $P(n)$ is not provable. This is the Gödel sentence which we wished to prove the existence of, so we are done.</li> </ol> <p>Note I haven't said anything about whether $P(n)$ is actually <em>true</em>. It turns out there is some subtlety here. Gödel's <em>completeness</em> theorem tells us that everything that can be proven in a first-order theory is true in every model of that theory and that every sentence which is true in every model of a first-order theory can be proven from the axioms of that theory. With some stronger consistency assumptions, we can also show that $\lnot P(n)$ is also not provable in PA, and this means that there are models of PA where $P(n)$ is true and models where it is false.</p> <p>The key point is that the phrases "$n$ encodes a formula ..." and "$m$ encodes a valid proof of ..." are strictly <em>outside</em> the theory. 
The interpretation of a particular number as a formula or proof is defined externally, and we only define it for "standard" numbers. The upshot of this is that in a model $\mathcal{M}$ of PA where $P(n)$ is false, there is some non-standard number $m \in \mathcal{M}$ which $\mathcal{M}$ "believes" is a valid proof of $P(n)$, but because it's non-standard, we cannot translate it into a real proof.</p>
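<p>To make the "encode a pair of numbers as a number" step concrete, here is the classical Cantor pairing function, an illustration of the coding idea rather than Gödel's actual numbering:</p>

```python
import math

# Cantor pairing: a bijection N x N -> N and its inverse.
def pair(m, n):
    return (m + n) * (m + n + 1) // 2 + n

def unpair(z):
    w = (math.isqrt(8 * z + 1) - 1) // 2  # largest w with w*(w+1)/2 <= z
    n = z - w * (w + 1) // 2
    return w - n, n

# round-trips exactly, so tuples (and hence formulae and proofs, coded as
# tuples of symbol numbers) can be stored in single natural numbers
assert all(unpair(pair(m, n)) == (m, n) for m in range(50) for n in range(50))
```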
<p>While this may not be the intuitive idea you're looking for, I'll note that the same ideas behind Gödel's proof that Peano Arithmetic is incomplete (that is, that there are true sentences of PA that can't be proven) can also be applied to show that Presburger Arithmetic is 'hard' in a tangible sense; essentially, one can encode multiplication of small enough numbers (specifically, the predicate '$x\times y = z$' with $x$, $y$ and $z$ 'only' doubly-exponential in the length of the formula) into a formula involving only addition. This means that if sentences in Presburger Arithmetic of length $n$ could be decided without multiplying numbers of size at least (roughly) $2^{2^n}$, then Presburger Arithmetic would suffer from the same Gödelian paradox that Peano Arithmetic does and be incomplete. Paul Young wrote up a much better exposition of this for a conference a couple of decades ago than I can possibly give; have a look at <a href="http://books.google.com/books?id=2vvg3mRzDtwC&amp;pg=PA503&amp;lpg=PA503">http://books.google.com/books?id=2vvg3mRzDtwC&amp;pg=PA503&amp;lpg=PA503</a> for the details.</p>
linear-algebra
<p>What is an <em>intuitive</em> meaning of the null space of a matrix? Why is it useful?</p> <p>I'm not looking for textbook definitions. My textbook gives me the definition, but I just don't "get" it.</p> <p>E.g.: I think of the <em>rank</em> $r$ of a matrix as the minimum number of dimensions that a linear combination of its columns would have; it tells me that, if I combined the vectors in its columns in some order, I'd get a set of coordinates for an $r$-dimensional space, where $r$ is minimum (please correct me if I'm wrong). So that means I can relate <em>rank</em> (and also dimension) to actual coordinate systems, and so it makes sense to me. But I can't think of any physical meaning for a null space... could someone explain what its meaning would be, for example, in a coordinate system?</p> <p>Thanks!</p>
<p>If $A$ is your matrix, the null-space is simply put, the set of all vectors $v$ such that $A \cdot v = 0$. It's good to think of the matrix as a linear transformation; if you let $h(v) = A \cdot v$, then the null-space is again the set of all vectors that are sent to the zero vector by $h$. Think of this as the set of vectors that <em>lose their identity</em> as $h$ is applied to them.</p> <p>Note that the null-space is equivalently the set of solutions to the homogeneous equation $A \cdot v = 0$.</p> <p>Nullity is the complement to the rank of a matrix. They are both really important; here is a <a href="https://math.stackexchange.com/questions/21100/importance-of-rank-of-a-matrix">similar question</a> on the rank of a matrix; you can find some nice answers there.</p>
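<p>Here is a tiny concrete illustration of "losing identity" (my own sketch, with a made-up rank-1 matrix): adding a null-space vector to any input leaves the output of $x \mapsto Ax$ unchanged, so $A$ cannot tell the two inputs apart.</p>

```python
def matvec(A, v):
    # plain matrix-vector product, enough for this illustration
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, 2],
     [2, 4]]   # rank 1: the second row is twice the first
v = [2, -1]    # a null-space vector: each row dotted with v gives 0

x = [3, 5]
assert matvec(A, v) == [0, 0]                                  # v is sent to zero
assert matvec(A, [x[0] + v[0], x[1] + v[1]]) == matvec(A, x)   # v is invisible to A
```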
<p>This is <a href="https://math.stackexchange.com/a/987657">an answer</a> I got from <a href="https://math.stackexchange.com/q/987146">my own question</a>, it's pretty awesome!</p> <blockquote> <p>Let's suppose that the matrix A represents a physical system. As an example, let's assume our system is a rocket, and A is a matrix representing the directions we can go based on our thrusters. So what do the null space and the column space represent?</p> <p>Well let's suppose we have a direction that we're interested in. Is it in our column space? If so, then we can move in that direction. The column space is the set of directions that we can achieve based on our thrusters. Let's suppose that we have three thrusters equally spaced around our rocket. If they're all perfectly functional then we can move in any direction. In this case our column space is the entire range. But what happens when a thruster breaks? Now we've only got two thrusters. Our linear system will have changed (the matrix A will be different), and our column space will be reduced.</p> <p>What's the null space? The null space is the set of thruster instructions that completely waste fuel. They're the set of instructions where our thrusters will thrust, but the direction will not be changed at all.</p> <p>Another example: Perhaps A can represent a rate of return on investments. The range are all the rates of return that are achievable. The null space is the set of all investments that can be made that wouldn't change the rate of return at all.</p> <p>Another example: room illumination. The range of A represents the area of the room that can be illuminated. The null space of A represents the power we can apply to lamps that don't change the illumination in the room at all.</p> </blockquote> <p>-- <a href="https://math.stackexchange.com/a/987657">NicNic8</a></p>
linear-algebra
<p>I have the following <span class="math-container">$n\times n$</span> matrix:</p> <p><span class="math-container">$$A=\begin{bmatrix} a &amp; b &amp; \ldots &amp; b\\ b &amp; a &amp; \ldots &amp; b\\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots\\ b &amp; b &amp; \ldots &amp; a\end{bmatrix}$$</span></p> <p>where <span class="math-container">$0 &lt; b &lt; a$</span>.</p> <blockquote> <p>I am interested in the expression for the determinant <span class="math-container">$\det[A]$</span> in terms of <span class="math-container">$a$</span>, <span class="math-container">$b$</span> and <span class="math-container">$n$</span>. This seems like a trivial problem, as the matrix <span class="math-container">$A$</span> has such a nice structure, but my linear algebra skills are pretty rusty and I can't figure it out. Any help would be appreciated.</p> </blockquote>
<p>Add row 2 to row 1, add row 3 to row 1,..., add row $n$ to row 1, we get $$\det(A)=\begin{vmatrix} a+(n-1)b &amp; a+(n-1)b &amp; a+(n-1)b &amp; \cdots &amp; a+(n-1)b \\ b &amp; a &amp; b &amp;\cdots &amp; b \\ b &amp; b &amp; a &amp;\cdots &amp; b \\ \vdots &amp; \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ b &amp; b &amp; b &amp; \ldots &amp; a \\ \end{vmatrix}$$ $$=(a+(n-1)b)\begin{vmatrix} 1 &amp; 1 &amp; 1 &amp; \cdots &amp; 1 \\ b &amp; a &amp; b &amp;\cdots &amp; b \\ b &amp; b &amp; a &amp;\cdots &amp; b \\ \vdots &amp; \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ b &amp; b &amp; b &amp; \ldots &amp; a \\ \end{vmatrix}.$$ Now add $(-b)$ of row 1 to row 2, add $(-b)$ of row 1 to row 3,..., add $(-b)$ of row 1 to row $n$, we get $$\det(A)=(a+(n-1)b)\begin{vmatrix} 1 &amp; 1 &amp; 1 &amp; \cdots &amp; 1 \\ 0 &amp; a-b &amp; 0 &amp;\cdots &amp; 0 \\ 0 &amp; 0 &amp; a-b &amp;\cdots &amp; 0 \\ \vdots &amp; \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ 0 &amp; 0 &amp; 0 &amp; \ldots &amp; a-b \\ \end{vmatrix}=(a+(n-1)b)(a-b)^{n-1}.$$</p>
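<p>The closed form $(a+(n-1)b)(a-b)^{n-1}$ is easy to sanity-check numerically; here is a pure-Python sketch (the cofactor-expansion <code>det</code> is a throwaway helper of mine, fine only for tiny matrices):</p>

```python
def det(M):
    # Laplace expansion along the first row; O(n!) but fine for small n
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

a, b, n = 5, 2, 4
A = [[a if i == j else b for j in range(n)] for i in range(n)]
assert det(A) == (a + (n - 1) * b) * (a - b) ** (n - 1)  # both sides equal 297
```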
<p>SFAICT this route hasn't been mentioned yet, so:</p> <p>Consider the decomposition</p> <p>$$\small\begin{pmatrix}a&amp;b&amp;\cdots&amp;b\\b&amp;a&amp;\cdots&amp;b\\\vdots&amp;&amp;\ddots&amp;\vdots\\b&amp;\cdots&amp;b&amp;a\end{pmatrix}=\begin{pmatrix}a-b&amp;&amp;&amp;\\&amp;a-b&amp;&amp;\\&amp;&amp;\ddots&amp;\\&amp;&amp;&amp;a-b\end{pmatrix}+\begin{pmatrix}\sqrt b\\\sqrt b\\\vdots\\\sqrt b\end{pmatrix}\cdot\begin{pmatrix}\sqrt b&amp;\sqrt b&amp;\cdots&amp;\sqrt b\end{pmatrix}$$</p> <p>Having this decomposition allows us to use the Sherman-Morrison-Woodbury formula for determinants:</p> <p>$$\det(\mathbf A+\mathbf u\mathbf v^\top)=(1+\mathbf v^\top\mathbf A^{-1}\mathbf u)\det\mathbf A$$</p> <p>where $\mathbf u$ and $\mathbf v$ are column vectors. The corresponding components are simple, and thus the formula is easily applied (letting $\mathbf e$ denote the column vector whose components are all $1$'s):</p> <p>$$\begin{align*} \begin{vmatrix}a&amp;b&amp;\cdots&amp;b\\b&amp;a&amp;\cdots&amp;b\\\vdots&amp;&amp;\ddots&amp;\vdots\\b&amp;\cdots&amp;b&amp;a\end{vmatrix}&amp;=\left(1+(\sqrt{b}\mathbf e)^\top\left(\frac{\sqrt{b}}{a-b}\mathbf e\right)\right)(a-b)^n\\ &amp;=\left(1+\frac{nb}{a-b}\right)(a-b)^n=(a+(n-1)b)(a-b)^{n-1} \end{align*}$$</p> <p>where we used the fact that $\mathbf e^\top\mathbf e=n$.</p>
geometry
<p>Coming from a physics background, my understanding of geometry (in a very generic sense) is that it involves taking a space and adding some extra structure to it. The extra structure takes some local data about the space as its input and outputs answers to local or global questions about the space + structure. We can use it to probe either the structure itself or the underlying space it lives on. For example, we can take a smooth manifold and add a Riemannian metric and a connection, and then we can ask about distances between points, curvature, geodesics, etc. In symplectic geometry, we take an even-dimensional manifold and add a symplectic form, and then we can ask about... well, honestly, I don't know. But I'm sure there is interesting stuff you can ask.</p> <p>Knowing very little about algebraic geometry, I am wondering what the &quot;geometry&quot; part is. I am assuming that the spaces in this case are algebraic varieties, but what is the extra structure that gets added? What sorts of questions can we answer with this extra structure that we couldn't answer without it?</p> <p>I have to guess that this is a little more complicated than just taking a manifold and adding a metric, otherwise I would expect to be able to find this explained in a relatively straightforward way somewhere. If it turns out the answer is &quot;it's hard to explain, and you just need to read an algebraic geometry text,&quot; then that's fine. In that case, it would be interesting to try to get a sense of <em>why</em> it's more complicated. (I have a guess, which is that varieties tend to be a lot less tame than manifolds, so you have to jump through more technical hoops to tack on extra stuff to them, but that's pure speculation.)</p>
<p>This is a big complicated question and many different kinds of answers could be given at many different levels of sophistication. The very short answer is that the geometry in algebraic geometry comes from considering only polynomial functions as the meaningful functions. Here is essentially the simplest nontrivial example I know of:</p> <p>Consider the intersection of the unit circle <span class="math-container">$\{ x^2 + y^2 = 1 \}$</span> with a vertical line <span class="math-container">$\{ x = c \}$</span>, for different values of the parameter <span class="math-container">$c$</span>. If <span class="math-container">$-1 &lt; c &lt; 1$</span> we get two intersection points. If <span class="math-container">$c &gt; 1$</span> or <span class="math-container">$c &lt; -1$</span> we get no (real) intersection points. But something special happens at <span class="math-container">$c = \pm 1$</span>: in this case the vertical lines <span class="math-container">$x = \pm 1$</span> are tangent to the circle. This tangency is invisible if we just consider the &quot;set-theoretic intersection&quot; of the circle and the line, which consists of a single point; for various reasons (e.g. to make <a href="https://en.wikipedia.org/wiki/B%C3%A9zout%27s_theorem" rel="noreferrer">Bezout's theorem</a> true) we'd like a way to formalize the intuition that this intersection has &quot;multiplicity two&quot; in some sense, and so is geometrically more interesting than just a single point.</p> <p>This can be done by taking what is called the <a href="https://en.wikipedia.org/wiki/Scheme-theoretic_intersection" rel="noreferrer">scheme-theoretic intersection</a>. This is a complicated name for a simple idea: instead of asking directly what the intersection is, we ask what the ring of <em>polynomial functions</em> on the intersection is. 
The ring of polynomial functions on the unit circle is the <a href="https://en.wikipedia.org/wiki/Quotient_ring" rel="noreferrer">quotient ring</a> <span class="math-container">$\mathbb{R}[x, y]/(x^2 + y^2 - 1)$</span>, while the ring of polynomial functions on the vertical line is the quotient ring <span class="math-container">$\mathbb{R}[x, y]/(x - c) \cong \mathbb{R}[y]$</span>. It turns out that the ring of polynomial functions on the intersection is the quotient by both of the defining polynomials, which gives, say at <span class="math-container">$x = 1$</span> to be concrete,</p> <p><span class="math-container">$$\mathbb{R}[x, y]/(x^2 + y^2 - 1, x - 1) \cong \mathbb{R}[y]/y^2.$$</span></p> <p>This is a funny ring: it has a nontrivial <a href="https://en.wikipedia.org/wiki/Nilpotent" rel="noreferrer">nilpotent</a>! That nilpotent <span class="math-container">$y$</span> is exactly telling us the sense in which the intersection has &quot;multiplicity two&quot;; it's saying that a function on the scheme-theoretic intersection records not only its value at the intersection point but its derivative with respect to tangent vectors at the intersection point, reflecting the geometric fact that the unit circle and the line are tangent and so share a tangent space. 
In other words it is saying, roughly speaking, that the intersection is &quot;two points infinitesimally close together, connected by an infinitesimally short vector.&quot;</p> <p>Adding nilpotents to geometry takes some getting used to but it turns out to be very useful; among other things it is possible to define tangent spaces in algebraic geometry this way (<a href="https://en.wikipedia.org/wiki/Zariski_tangent_space" rel="noreferrer">Zariski tangent spaces</a>), hence to define <a href="https://en.wikipedia.org/wiki/Lie_algebra" rel="noreferrer">Lie algebras</a> of <a href="https://en.wikipedia.org/wiki/Algebraic_group" rel="noreferrer">algebraic groups</a> in a purely algebraic way.</p> <p>So, this is one story you can tell about what kind of geometry algebraic geometry captures, and there are many others, for example the rich story of <a href="https://en.wikipedia.org/wiki/Arithmetic_geometry" rel="noreferrer">arithmetic geometry</a> and its applications to number theory. It's difficult to say anything remotely complete here because algebraic geometry is <em>absurdly</em> general and the sorts of geometry it is capable of capturing veer off in wildly different directions depending on what you're interested in.</p>
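<p>The "value plus derivative" behaviour of <span class="math-container">$\mathbb{R}[y]/(y^2)$</span> can be made quite concrete: it is the ring of dual numbers, the same structure used in forward-mode automatic differentiation. A minimal sketch of my own (the class is made up for illustration):</p>

```python
class Dual:
    """Elements a + b*y of R[y]/(y^2), i.e. dual numbers with y**2 = 0."""
    def __init__(self, a, b=0):
        self.a, self.b = a, b
    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        # (a + b*y)(c + d*y) = ac + (ad + bc)*y, since y**2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

y = Dual(0, 1)
assert y * y == Dual(0, 0)   # a nonzero nilpotent, just like in the quotient ring

# evaluating f(t) = t**3 at 2 + y records both f(2) = 8 and f'(2) = 12:
t = Dual(2, 1)
assert t * t * t == Dual(8, 12)
```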
<p>A classical manifold is a space that locally looks like <span class="math-container">$\mathbb{R}^n$</span>; or, via results like the Whitney embedding theorem, a suitably nice subspace of some <span class="math-container">$\mathbb{R}^N$</span>. If &quot;looks like&quot; involves some notion of smoothness, for example, then we can expand into differential geometry and talk about constructions like tangent spaces and differential forms. If we stick to just continuity, then we can still work with some constructions like homology and cohomology (just not, say, de Rham cohomology), and we can deal with more pathological spaces and functions between them.</p> <p>A natural question to ask, then, is what's so special about <span class="math-container">$\mathbb{R}^n$</span>? We can consider spaces that locally look like an arbitrary Banach space, for example. (I don't think this is a particularly popular approach, at least, at the undergrad/early grad school level, but Abraham, Marsden, and Ratiu works in this category.) The starting point of algebraic geometry is wanting to deal with spaces over an arbitrary commutative ring. It's not clear how continuity or smoothness should map over to this case, but at the very least polynomials make sense over an arbitrary ring, and we can look at the space like <span class="math-container">$V(f) = \{x\in k^n:\, f(x) = 0\}$</span> for a polynomial <span class="math-container">$f\in k[X_1, \dots, X_n]$</span>. But that's not exactly what we want either; for the important case of <span class="math-container">$k$</span> finite, for example, <span class="math-container">$V(f)$</span> is just a finite collection of points.</p> <p>The analogy that turns out to work is going in the opposite direction, and trying to generalize the idea of functions on a manifold. 
To that end, algebraic geometry works with locally ringed spaces, which are pairs <span class="math-container">$(X, \mathcal{O}_X)$</span> with <span class="math-container">$X$</span> a topological space and <span class="math-container">$\mathcal{O}_X$</span> a sheaf of rings on <span class="math-container">$X$</span> satisfying properties roughly analogous to what you'd expect for, say, smooth functions on a manifold. In rough terms, what you wind up with is a space that locally looks like the spectrum of a commutative ring--- but unlike the case of real manifolds, that ring can vary along the space. That's admittedly a vague analogy, and it takes a lot of technical results to even talk about the resulting object. But if you're familiar with vector bundles, for example, then consider Swan's Theorem: For a smooth, connected, closed manifold <span class="math-container">$X$</span>, the sections functor <span class="math-container">$\Gamma(\cdot)$</span> gives an equivalence between vector bundles over <span class="math-container">$X$</span> and finitely generated projective modules over the ring <span class="math-container">$C^\infty(X)$</span>.</p> <p>So, what makes this algebraic thing we've constructed look geometric? Smoothness doesn't make sense outside of <span class="math-container">$\mathbb{R}^n$</span>, but if we're working with polynomials, they have a formal derivative that allows us to do roughly the same thing. More generally, a local ring <span class="math-container">$(R, \mathfrak{m})$</span> has a cotangent space <span class="math-container">$\mathfrak{m}/\mathfrak{m}^2$</span> that's roughly analogous to the cotangent space of a manifold; and with a bit of work, we can get something that at least has some of the formal properties one wants for a tangent or cotangent space. 
Even though the topology we're working with turns out to be much more complicated than the case of manifolds (the Zariski topology, for example, is generally non-Hausdorff), we still have a notion of cohomology (the simplest being Cech cohomology with a sheaf). There's a massive jump in abstraction and technical requirements compared with the more geometric case, but algebraic geometry turns out to be the right extension of more familiar geometry when dealing with things such as, say, number fields.</p>
probability
<p>I just learned about the Monty Hall problem and found it quite amazing. So I thought about extending the problem a bit to understand more about it.<hr> In this modification of the Monty Hall Problem, instead of three doors, we have four (or maybe $n$) doors, one with a car and the other three (or $n-1$) with a goat each (I want the car).</p> <p>We need to choose any one of the doors. After we have chosen the door, Monty deliberately reveals one of the doors that has a goat and asks us if we wish to change our choice.</p> <p>So should we switch the door we have chosen, or does it not matter if we switch or stay with our choice?</p> <p>It would be even better if we knew the probability of winning upon switching given that Monty opens $k$ doors.</p>
<p><em>I decided to make an answer out of <a href="https://math.stackexchange.com/questions/608957/monty-hall-problem-extended/609552#comment1283624_608977">my comment</a>, just for the heck of it.</em></p> <hr> <h2>$n$ doors, $k$ revealed</h2> <p>Suppose we have $n$ doors, with a car behind $1$ of them. The probability of choosing the door with the car behind it on your first pick, is $\frac{1}{n}$. </p> <p>Monty then opens $k$ doors, where $0\leq k\leq n-2$ (he has to leave your original door and at least one other door closed).</p> <p>The probability of picking the car if you choose a different door, is the chance of not having picked the car in the first place, which is $\frac{n-1}{n}$, times the probability of picking it <em>now</em>, which is $\frac{1}{n-k-1}$. This gives us a total probability of $$ \frac{n-1}{n}\cdot \frac{1}{n-k-1} = \frac{1}{n} \cdot \frac{n-1}{n-k-1} \geq \frac{1}{n} $$</p> <p><strong>No doors revealed</strong><br> If Monty opens no doors, $k = 0$ and that reduces to $\frac{1}{n}$, which means your odds remain the same.</p> <p><strong>At least one door revealed</strong><br> For all $k &gt; 0$, $\frac{n-1}{n-k-1} &gt; 1$ and so the probability of picking the car on your second guess is greater than $\frac{1}{n}$.</p> <p><strong>Maximum number of doors revealed</strong><br> If $k$ is at its maximum value of $n-2$, the probability of picking a car after switching becomes $$\frac{1}{n}\cdot \frac{n-1}{n-(n-2)-1} = \frac{1}{n}\cdot \frac{n-1}{1} = \frac{n-1}{n}$$ For $n=3$, this is the solution to the original Monty Hall problem.</p> <p>Switch.</p>
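<p>The general formula is easy to corroborate by simulation; here is a Monte Carlo sketch (the helper name and trial count are my own choices):</p>

```python
import random

random.seed(0)  # deterministic runs

def switch_win_prob(n, k, trials=200_000):
    """Estimate P(win | switch) with n doors and Monty opening k goat doors."""
    wins = 0
    for _ in range(trials):
        car, pick = random.randrange(n), random.randrange(n)
        # Monty opens k goat doors, never the player's pick and never the car
        openable = [d for d in range(n) if d != pick and d != car]
        opened = set(random.sample(openable, k))
        # switch uniformly at random among the other closed doors
        options = [d for d in range(n) if d != pick and d not in opened]
        wins += random.choice(options) == car
    return wins / trials

# theory: (n-1) / (n * (n-k-1)); for n = 4, k = 1 this is 3/8
assert abs(switch_win_prob(4, 1) - 3 / 8) < 0.01
# classic case n = 3, k = 1 gives 2/3
assert abs(switch_win_prob(3, 1) - 2 / 3) < 0.01
```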
<p>By not switching, you win a car if and only if you chose correctly initially. This happens with probability $\frac{1}{4}$. If you switch, you win a car if and only if you chose <em>incorrectly</em> initially, and then of the remaining two doors, you choose correctly. This happens with probability $\frac{3}{4}\times\frac{1}{2}=\frac{3}{8}$. So if you choose to switch, you are more likely to win a car than if you do not switch.</p> <p>You never told me whether you'd prefer to win a car or a goat though, so I can't tell you what to do. </p> <p><a href="http://xkcd.com/1282" rel="noreferrer"><img src="https://imgs.xkcd.com/comics/monty_hall.png" alt="xkcd #1282" title="A few minutes later, the goat from behind door C drives away in the car."></a></p>
matrices
<p>I know that in <span class="math-container">$A\textbf{x}=\lambda \textbf{x}$</span>, <span class="math-container">$\textbf{x}$</span> is the right eigenvector, while in <span class="math-container">$\textbf{y}A =\lambda \textbf{y}$</span>, <span class="math-container">$\textbf{y}$</span> is the left eigenvector.</p> <p>But what is the significance of left and right eigenvectors? How do they differ from each other geometrically?</p>
<p>The (right) eigenvectors for $A$ correspond to lines through the origin that are sent to themselves (or $\{0\}$) under the action $x\mapsto Ax$. The action $y\mapsto yA$ for row vectors corresponds to an action of $A$ on hyperplanes: each row vector $y$ defines a hyperplane $H$ given by $H=\{\text{column vectors }x: yx=0\}$. The action $y\mapsto yA$ sends the hyperplane $H$ defined by $y$ to a hyperplane $H'$ given by $H'=\{x: Ax\in H\}$. (This is because $(yA)x=0$ iff $y(Ax)=0$.) A left eigenvector for $A$, then, corresponds to a hyperplane fixed by this action.</p>
<p>The left eigenvectors and the right eigenvectors together form what is known as a basis and dual basis pair.</p> <p><a href="http://en.wikipedia.org/wiki/Dual_basis">http://en.wikipedia.org/wiki/Dual_basis</a></p> <p>In simpler terms: if $A$ is diagonalizable and you arrange the right eigenvectors as the columns of a matrix $B$, and the left eigenvectors (suitably scaled) as the rows of a matrix $C$, then $BC = I$; in other words, $B$ is the inverse of $C$.</p>
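<p>For a diagonalizable matrix this is easy to verify numerically. A sketch with numpy (variable names are mine): the rows of the inverse of the right-eigenvector matrix are precisely left eigenvectors.</p>

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

w, B = np.linalg.eig(A)     # columns of B: right eigenvectors
C = np.linalg.inv(B)        # rows of C: left eigenvectors (the dual basis)

# each row of C is a left eigenvector: c_i A = w_i c_i
for i, lam in enumerate(w):
    assert np.allclose(C[i] @ A, lam * C[i])

# and B, C are mutually inverse: B C = I
assert np.allclose(B @ C, np.eye(2))
```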
logic
<p>This is not a complaint about my proofs course being difficult, or how I can learn to prove things better, as all other questions of this flavour on Google seem to be. I am asking in a purely technical sense (but still only with regards to mathematics; that's why I deemed this question most appropriate to this Stack Exchange).</p> <p>To elaborate: it seems to me that if you have a few (mathematical) assumptions and there is a logical conclusion which can be made from those assumptions, that conclusion shouldn't be too hard to draw. It literally <strong>follows</strong> from the assumptions! However, this clearly isn't the case (for a lot of proofs, at least). The Poincaré Conjecture took just short of a century to prove. I haven't read <a href="http://www.ims.cuhk.edu.hk/%7Eajm/vol10/10_2.pdf" rel="nofollow noreferrer">the proof itself</a>, but it being ~320 pages long doesn't really suggest easiness. And there are countless better examples of difficulty. In 1993, Wiles announced the final proof of Fermat's Last Theorem, which was originally stated by Fermat in 1637 and was &quot;considered inaccessible to prove by contemporaneous mathematicians&quot; (<a href="https://en.wikipedia.org/wiki/Wiles%27s_proof_of_Fermat%27s_Last_Theorem" rel="nofollow noreferrer">Wikipedia article on the proof</a>).</p> <p>So clearly, in many cases, mathematicians have to bend over backwards to prove certain logical conclusions. What is the reason for this? Is it humans' lack of intelligence? Lack of creativity? There is the field of <a href="https://en.wikipedia.org/wiki/Automated_theorem_proving" rel="nofollow noreferrer">automated theorem proving</a> which I tried to seek some insight from, but it looks like the algorithms produced from this field are subpar when compared to humans, and even these algorithms are obscenely difficult to implement. So seemingly certain proofs are actually inherently difficult. 
So I plead again - why is this?</p> <p>(EDIT) To rephrase my question: are there any <strong>inherent mathematical reasons</strong> for why difficult proofs exist?</p>
<p>Although this question may superficially look opinion-based, in actual fact there is an objective answer. The core reason is that the halting problem cannot be solved computably, and statements about halting behaviour get arbitrarily difficult to prove, and elementary arithmetic is sufficient to express notions that are at least as general as statements about halting behaviour.</p> <p>Now the details. First understand <a href="https://math.stackexchange.com/q/2486348/21820">the incompleteness theorem</a>. Next, observe that any reasonable foundational system can reason about programs, via the use of Gödel coding to express and reason about finite program execution. Notice that all this reasoning about programs can occur within a tiny fragment of PA (1st-order Peano Arithmetic) called <a href="https://en.wikipedia.org/wiki/Peano_arithmetic#Equivalent_axiomatizations" rel="noreferrer">PA<sup>−</sup></a>. Thus the incompleteness theorem implies that, no matter what your foundational system is (as long as it is reasonable), there will always be true arithmetical sentences that it cannot prove, and these sentences can be explicitly written down and are not that long.</p> <p>Furthermore, the same reduction to the halting problem implies that you cannot even computably determine whether some arithmetical sentence is a theorem of your favourite foundational system S or not. This actually implies that there is <strong>no computable bound</strong> on the length of a shortest proof of a given theorem! To be precise, there is no computable function <span class="math-container">$f$</span> such that for every string <span class="math-container">$X$</span> we have that either <span class="math-container">$X$</span> is not a theorem of S or there is a proof of <span class="math-container">$X$</span> over S of length at most <span class="math-container">$f(X)$</span>.
This provides the (at first acquaintance surprising) answer to your question:</p> <blockquote> <p>Logically forced conclusions from an explicitly described set of assumptions may take a huge number of steps to deduce, so huge that there is no computable bound on that number of steps! So, yes, proofs are actually inherently hard!</p> </blockquote> <p>Things are even worse; if you believe that S does not prove any false arithmetic sentence (which you should; otherwise why are you using S?), then we can explicitly construct an arithmetical sentence Q such that S proves Q but you must believe that no proof of Q over S has fewer than <span class="math-container">$2^{10000}$</span> symbols!</p> <p>And in case you think that such phenomena may not occur in the mathematics that non-logicians come up with, consider the fact that <a href="http://raganwald.com/assets/fractran/Conway-On-Unsettleable-Arithmetical-Problems.pdf" rel="noreferrer">the generalized Collatz problem is undecidable</a>, and the fact that <a href="https://en.wikipedia.org/wiki/Hilbert%27s_tenth_problem" rel="noreferrer">Hilbert's tenth problem</a> was proposed with no idea that it would be computably unsolvable. Similarly, many other discrete combinatorial problems such as <a href="https://en.wikipedia.org/wiki/Wang_tile#Domino_problem" rel="noreferrer">Wang tiling</a> were eventually found to be computably unsolvable.</p>
<p>You already received good answers, but I want to add a point that has not been covered: combinatorics.</p> <p>True, you may start with a few simple assumptions and operations, but how many ordered ways are there to combine those assumptions? In a naive brute-force approach, suppose the shortest solution (ignore the fact that this is uncomputable; we just need a lower bound) takes $n$ steps, each step being the use of an assumption or the application of an operator. Then there are $n!$ ways to order those $n$ steps, but only one correct solution. And we do not even know $n$ in advance!</p> <p>So proofs are hard because naive brute force is not a working option.</p>
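<p>To get a feel for the numbers involved, even a modest proof length is already hopeless for blind search (a one-line Python check):</p>

```python
import math

# a naive search over all orderings of a mere 20-step proof already faces
# 20! candidates, on the order of 2.4 * 10**18
assert math.factorial(20) == 2_432_902_008_176_640_000
```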
probability
<p>If someone asked me what it meant for $X$ to be standard normally distributed, I would tell them it means $X$ has probability density function $f(x) = \frac{1}{\sqrt{2\pi}}\mathrm e^{-x^2/2}$ for all $x \in \mathbb{R}$.</p> <p>More rigorously, I could alternatively say that $f$ is the Radon-Nikodym derivative of the distribution measure of $X$ w.r.t. the Lebesgue measure on $\mathbb{R}$, or $f = \frac{\mathrm d \mu_X}{\mathrm d\lambda}$. As I understand it, $f$ re-weights the values $x \in \mathbb{R}$ in such a way that $$ \int_B \mathrm d\mu_X = \int_B f\, \mathrm d\lambda $$ for all Borel sets $B$. In particular, the graph of $f$ lies below one everywhere: <a href="https://i.sstatic.net/2Uit5.jpg" rel="noreferrer"><img src="https://i.sstatic.net/2Uit5.jpg" alt="normal pdf"></a> </p> <p>so it seems like $f$ is re-weighting each $x \in \mathbb{R}$ to a smaller value, but I don't really have any intuition for this. I'm seeking more insight into viewing $f$ as a change of measure, rather than a sort of distribution describing how likely $X$ is. </p> <p>In addition, does it make sense to ask "which came first?" The definition for the standard normal pdf as just a function used to compute probabilities, or the pdf as a change of measure?</p>
<p>Your understanding of the basic math itself seems pretty solid, so I'll just try to provide some extra intuition.</p> <p>When we integrate a function $g$ with respect to the Lebesgue measure $\lambda$, we find its "area under the curve" or "volume under the surface", etc... This is obvious since the Lebesgue measure assigns the ordinary notion of length (area, etc) to all possible integration regions over the domain of $g$. Therefore, I say that integrating with respect to the Lebesgue measure (which is equivalent in value to Riemannian integration) is a <em>calculation to find the "volume" of some function.</em></p> <p>Let's pretend for a moment that when performing integration, we are always forced to do it over the <em>entire</em> domain of the integrand. Meaning we are only allowed to compute $$\int_B g \,d\lambda\ \ \ \ \text{if}\ \ \ \ B=\mathbb{R}^n$$ where $\mathbb{R}^n$ is assumed to be the entire domain of $g$.</p> <p>With that restriction, what could we do if we only cared about the volume of $g$ over the region $B$? Well, we could define an <a href="https://en.wikipedia.org/wiki/Indicator_function" rel="noreferrer">indicator function</a> for the set $B$ and integrate its product with $g$, $$\int_{\mathbb{R}^n} \mathbf{1}_B g \,d\lambda$$</p> <p>When we do something like this, we are taking the mindset that our goal is to nullify $g$ wherever we don't care about it... but that isn't the only way to think about it. We can instead try to nullify $\mathbb{R}^n$ itself wherever we don't care about it. We would compute the integral then as, $$\int_{\mathbb{R}^n} g \,d\mu$$ where $\mu$ is a measure that behaves just like $\lambda$ for Borel sets that are subsets of $B$, but returns zero for Borel sets that have no intersection with $B$. 
Using this measure, it doesn't matter that $g$ has value outside of $B$, because $\mu$ will give that support no consideration.</p> <p>Obviously, these integrals are just different ways to think about the same thing, $$\int_{\mathbb{R}^n} g \,d\mu = \int_{\mathbb{R}^n} \mathbf{1}_B g \,d\lambda$$ The function $\mathbf{1}_B$ is clearly the density of $\mu$, its Radon–Nikodym derivative with respect to the Lebesgue measure, or by directly matching up symbols in the equation, $$d\mu = f\,d\lambda$$ where here $f = \mathbf{1}_B$. The reason for showing you all this was to show how we can <em>think of changing measure as a way to tell an integral how to only compute the volume we <strong>care</strong> about</em>. Changing measure allows us to discount parts of the support of $g$ instead of discounting parts of $g$ itself, and the Radon–Nikodym chain rule formalizes their equivalence.</p> <p>The cool thing about this is that our measures don't have to be as bipolar as the $\mu$ I constructed above. They don't have to completely not care about support outside $B$, but instead can just care about support outside $B$ <em>less</em> than inside $B$.</p> <p>Think about how we might find the total mass of some physical object. We integrate over all of space (the <em>entire</em> domain where particles can exist) but use a measure $m$ that returns larger values for regions in space where there is "more mass" and smaller values (down to zero) for regions in space where there is "less mass". It doesn't have to be just mass vs no-mass, it can be everything in between too, and the Radon–Nikodym derivative of this measure is indeed the literal "density" of the object.</p> <p>So what about probability? Just like with the mass example, we are encroaching on the world of physical modeling and leaving abstract mathematics. Formally, a measure is a probability measure if it returns 1 for the Borel set that is the union of all the other Borel sets.
When we consider these Borel sets to model physical "events", this notion makes intuitive modeling sense... we are just defining the probability (measure) of <em>anything</em> happening to be 1.</p> <p>But why 1? <strong>Arbitrary convenience.</strong> In fact, some people don't use 1! Some people use 100. Those people are said to use the "percent" convention. What is the probability that if I flip this coin, it lands on heads or tails? 100... percent. We could have used literally any positive real number, but 1 is just a nice choice. Note that the Lebesgue measure is not a probability measure because $\lambda(\mathbb{R}^n) = \infty$.</p> <p>Anyway, what people are doing with probability is designing a measure that models how much significance they give to various events - which are Borel sets, which are regions in the domain; they are just defining how much they value parts of the domain itself. As we saw before with the measure $\mu$ I constructed, the easiest way to write down your measure is by writing its density.</p> <p>Fun to note: "expected value" of $g$ is just its volume with respect to the given probability measure $P$, and "covariance" of $g$ with $h$ is just their inner product with respect to $P$. Letting $\Omega$ be the entire domain of both $g$ and $h$ (also known as the sample space), if $g$ and $h$ have zero mean, $$\operatorname{cov}(g, h) = \int_{x \in \Omega}g(x)h(x)f(x)\ dx = \int_{\Omega}gh\ dP = \langle g, h \rangle_P$$</p> <p>I'll let you show that the <a href="https://en.wikipedia.org/wiki/Pearson_correlation_coefficient#Definition" rel="noreferrer">correlation coefficient</a> for $g$ and $h$ is just the "cosine of the angle between them".</p> <p>Hope this helps! Measure theory is definitely the modern way of viewing things, and people began to understand "weighted Riemannian integrals" well before they realized the other viewpoint: "weighting" the domain instead of the integrand. 
Many people attribute this viewpoint's birth to Lebesgue integration, where the operation of integration was first (notably) restated in terms of an arbitrary measure, as opposed to Riemannian integration which tacitly <em>always</em> assumed the Lebesgue measure.</p> <p>I noticed you brought up the normal distribution specifically. The normal distribution is special for a lot of reasons, but it is by no means some de-facto probability density. There are an infinite number of equally valid probability measures (with their associated densities). The normal distribution is really only so important because of the <a href="https://en.wikipedia.org/wiki/Central_limit_theorem" rel="noreferrer">central limit theorem</a>.</p>
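<p>The two viewpoints give the same number, and for the standard normal measure $P$ this is easy to check numerically: weighting the integrand by the density $f$ reproduces the expected value, here $E[X^2] = 1$. A small Python sketch (the grid and tolerance are my choices):</p>

```python
import math

f = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)  # standard normal density
g = lambda x: x * x                                          # E[X^2] = 1

# "weight the integrand": Riemann sum of g * f against the Lebesgue measure
dx = 1e-3
xs = (i * dx for i in range(-8000, 8001))
volume = sum(g(x) * f(x) for x in xs) * dx

assert abs(volume - 1.0) < 1e-3      # matches the "volume" of g under P
```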
<p>The case you are referring to is valid. In your example, the Radon-Nikodym derivative serves as a reweighting of the Lebesgue measure, and it turns out that this derivative is the pdf of the given distribution. </p> <p>However, the Radon-Nikodym derivative is a more general concept. Your example converts the Lebesgue measure to a normal probability measure, whereas a Radon-Nikodym derivative can be used to convert any measure to another measure, as long as they meet certain technical conditions. </p> <p>A quick recap of the intuition behind measure: a measure is a set function that takes a set as an input and returns a non-negative number as output. Length, volume, weight, and probability are all examples of measures. </p> <p>So what if I have one measure that returns length in meters and another measure that returns length in kilometers? A Radon-Nikodym derivative converts between these two measures. What is the Radon-Nikodym derivative in this case? It is the constant number 1000. </p> <p>Similarly, another Radon-Nikodym derivative can be used to convert a measure that returns weight in kg to another measure that returns weight in lbs. </p> <p>Back to your example: the pdf is used to convert the Lebesgue measure to a normal probability measure, but this is just one example of a change of measure. </p> <p>Starting from the Lebesgue measure, you can define Radon-Nikodym derivatives that generate other useful measures (not necessarily probability measures). </p> <p>Hope this clarifies it. </p>
logic
<p>It is said that our current basis for mathematics are the ZFC-axioms. </p> <p><strong>Question:</strong> Where are these axioms in our mathematics? When do we use them? I have now studied math for a year, and have yet to run into a single one of these ZFC axioms. How can this be if they are supposed to be the basis for everything I have done so far? </p>
<p>I think this is a very good question. I don't have the time right now to write a complete answer, but let me make a few quick points (and I'll add more when I get the chance):</p> <ul> <li><p><strong>Most mathematics is not done in ZFC</strong>. Most mathematics, in fact, isn't done axiomatically at all: rather, we simply use propositions which seem "intuitively obvious" without comment. This is true even when it looks like we're being rigorous: for example, when we formally define the real numbers in a real analysis class (by Cauchy sequences, Dedekind cuts, or however), we (usually) <em>don't</em> set forth a list of axioms of set theory which we're using to do this. The reason is, that the facts about sets which we need seem to be utterly tame: for example, that the intersection of two sets is again a set. </p></li> <li><p><strong>ZFC arose in response to a historical need.</strong> The history of modern logic is fascinating, and I don't want to do it injustice; let me just say (wildly oversimplifying) that you only really need to axiomatize mathematics if there's real danger of different people using different axioms implicitly, without realizing it. One standard example here is the axiom of choice, which very reasonable people alternately find perfectly intuitive and clearly false. So ZFC, very roughly speaking, won the job of being the "default" axiom system for mathematics: you're perfectly free to prove theorems using (say) NF instead, but it's considered gauche if you don't explicitly say that's what you're doing. Are there reasons to prefer some other system to ZFC? I'm a very pro-ZFC person, but even I'd have to say yes. The point isn't that ZFC is perfect, though; it's that it's strong enough to address the vast majority of our mathematical needs, while also being reasonable enough that it doesn't cause huge problems most of the time. 
This strength, by the way, is crucial: we don't want to have to keep updating our axiomatic framework to allow us to do basic mathematics, so overshooting in terms of strength is (I would argue) preferable (the counterargument is that overshooting runs a greater risk of inconsistency; but that's an issue for a different question, or at least a bit later when I have more time to write).</p></li> <li><p><strong>Even in the ZFC context, ZFC is usually overkill.</strong> OK, let's say you buy the ZFC sales pitch (I certainly did - and I just love the complimentary toaster!). Then you have some class of theorems you want to prove, and - after expressing them in the language of ZFC (which is frankly a tedious process, and one of the practical objections to ZFC emerges from this) - proceed to prove them from the ZFC axioms. But then you notice that you didn't use most of the ZFC axioms at all! This is in fact the norm - <em>replacement</em> especially is overkill in most situations. This isn't a problem, though: ZFC doesn't claim to be minimal, in any sense. And in fact the study of what axioms are <em>needed</em> to prove a given theorem is quite rich: see e.g. <em>reverse mathematics</em>.</p></li> </ul> <p>Tl;dr: I would say that the sentence "ZFC is the foundation for modern mathematics," while broadly correct, is hiding a <em>lot</em> of context. In particular:</p> <ul> <li><p>Most of the time you're not going to actually be using axioms at all.</p></li> <li><p>ZFC's claim to prominence is primarily sociological/historical; we could equally well have gone with NF, or something completely different.</p></li> <li><p>The ZFC axioms are wildly overpowered for most of mathematics; in particular, you probably won't really need the whole of ZFC for almost anything you do.</p></li> <li><p>Most of all: <strong>ZFC doesn't come first</strong>. 
<em>Mathematics</em> comes first; ZFC is a <em>mathematical</em> theory that, among other things, "absorbs" the vast majority of mathematics in a certain way. But you can do math without ZFC. (It's just that you run the risk of accidentally invoking an "obvious" set-theoretic principle which isn't so obvious, and so conflicting with other results which invoke the "obvious" negation of your "obvious" axiom. ZFC provides a general language for us to do math in, so we don't have to worry about things like this. But in practice, this almost never occurs.)</p></li> </ul> <hr> <p>Note that you could also ask this question with regard to formal logic - specifically, classical first-order logic - in general; and there has been a lot written about this (I'll add some citations when I have more time). But that's going <em>very</em> far afield.</p> <p>Really tl;dr (and, I should add, in conflict with a number of people - this is my opinion): foundations do not <em>enable</em>, but rather <em>serve</em>, mathematics.</p>
<p>"My house is supposedly built on concrete pad foundations, but I've been fixing the pipes upstairs, and I haven't seen any foundation yet."</p> <p>This is analogous - you don't see them because they're so deep beneath the surface of where you're working. If you lifted the floorboards and poked around, you'd find the foundations.</p> <p>Though they might actually be wooden pile or columns-on-ball-bearings, in the same way you might actually be using a system besides ZFC. Though until you go and check, you probably won't know the difference.</p> <p>As going down to the foundation level is way beyond scope for most house repairs, so is going down to axioms beyond scope for most mathematics.</p>
matrices
<p>I have one triangle in <span class="math-container">$3D$</span> space that I am tracking in a simulation. Between time steps I have the previous normal of the triangle and the current normal of the triangle along with both the current and previous <span class="math-container">$3D$</span> vertex positions of the triangles.</p> <p>Using the normals of the triangular plane I would like to determine a rotation matrix that would align the normals of the triangles thereby setting the two triangles parallel to each other. I would then like to use a translation matrix to map the previous onto the current, however this is not my main concern right now.</p> <p>I have found <a href="http://forums.cgsociety.org/archive/index.php/t-741227.html" rel="noreferrer">this</a> website that says I must</p> <ul> <li>determine the cross product of these two vectors (to determine a rotation axis)</li> <li>determine the dot product ( to find rotation angle)</li> <li>build quaternion (not sure what this means)</li> <li>the transformation matrix is the quaternion as a <span class="math-container">$3 \times 3$</span> (not sure)</li> </ul> <p>Any help on how I can solve this problem would be appreciated.</p>
<p>Suppose you want to find a rotation matrix $R$ that rotates unit vector $a$ onto unit vector $b$.</p> <p>Proceed as follows:</p> <p>Let $v = a \times b$</p> <p>Let $s = \|v\|$ (sine of angle)</p> <p>Let $c = a \cdot b$ (cosine of angle)</p> <p>Then the rotation matrix R is given by: $$R = I + [v]_{\times} + [v]_{\times}^2\frac{1-c}{s^2},$$</p> <p>where $[v]_{\times}$ is the skew-symmetric cross-product matrix of $v$, $$[v]_{\times} \stackrel{\rm def}{=} \begin{bmatrix} \,\,0 &amp; \!-v_3 &amp; \,\,\,v_2\\ \,\,\,v_3 &amp; 0 &amp; \!-v_1\\ \!-v_2 &amp; \,\,v_1 &amp;\,\,0 \end{bmatrix}.$$</p> <p>The last part of the formula can be simplified to $$ \frac{1-c}{s^2} = \frac{1-c}{1-c^2} = \frac{1}{1+c}, $$ revealing that it is <em>not</em> applicable only for $\cos(\angle(a, b)) = -1$, i.e., if $a$ and $b$ point into exactly opposite directions.</p>
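<p>In case it helps, here is one way this formula might look in code: a numpy sketch (the function name is mine; as noted above, the formula breaks down when $a = -b$):</p>

```python
import numpy as np

def rotation_from_a_to_b(a, b):
    """Rotation matrix R with R @ a == b, for unit vectors a, b (a != -b)."""
    v = np.cross(a, b)
    c = np.dot(a, b)                      # cosine of the angle
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])   # skew-symmetric cross-product matrix
    # uses the simplification (1 - c) / s^2 = 1 / (1 + c); fails for c == -1
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 0.0, 1.0])
R = rotation_from_a_to_b(a, b)
assert np.allclose(R @ a, b)              # rotates a onto b
assert np.allclose(R @ R.T, np.eye(3))    # R is orthogonal
```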
<p>Using <a href="https://math.stackexchange.com/a/180436/76513">Kjetil's answer</a>, with <a href="https://math.stackexchange.com/users/17349/process91">process91</a>'s comment, we arrive at the following procedure.</p> <h2>Derivation</h2> <p>We are given two unit column vectors, $A$ and $B$ ($\|A\|=1$ and $\|B\|=1$). The $\|\circ\|$ denotes the L-2 norm of $\circ$.</p> <p>First, note that the rotation from $A$ to $B$ is just a 2D rotation on a plane with the normal $A \times B$. A 2D rotation by an angle $\theta$ is given by the following augmented matrix: $$G=\begin{pmatrix} \cos\theta &amp; -\sin\theta &amp; 0 \\ \sin\theta &amp; \cos\theta &amp; 0 \\ 0 &amp; 0 &amp; 1 \end{pmatrix}.$$</p> <p>Of course we don't want to actually <em>compute</em> any trig functions. Given our unit vectors, we note that $\cos\theta=A\cdot B$, and $\sin\theta=\|A\times B\|$. Thus $$G=\begin{pmatrix} A\cdot B &amp; -\|A\times B\| &amp; 0 \\ \|A\times B\| &amp; A\cdot B &amp; 0 \\ 0 &amp; 0 &amp; 1\end{pmatrix}.$$</p> <p>This matrix represents the rotation from $A$ to $B$ in the base consisting of the following column vectors:</p> <ol> <li><p>normalized <a href="http://en.wikipedia.org/wiki/Vector_projection#Vector_projection" rel="noreferrer">vector projection</a> of $B$ onto $A$: $$u={(A\cdot B)A \over \|(A\cdot B)A\|}=A$$</p></li> <li><p>normalized <a href="http://en.wikipedia.org/wiki/Vector_projection#Vector_rejection" rel="noreferrer">vector rejection</a> of $B$ onto $A$: $$v={B-(A\cdot B)A \over \|B- (A\cdot B)A\|}$$</p></li> <li><p>the cross product of $B$ and $A$: $$w=B \times A$$</p></li> </ol> <p>Those vectors are all orthogonal, and form an orthogonal basis. This is the detail that Kjetil had missed in his <a href="https://math.stackexchange.com/a/180436/76513">answer</a>. 
You could also normalize $w$ and get an orthonormal basis, if you needed one, but it doesn't seem necessary.</p> <p>The basis change matrix for this basis is: $$F=\begin{pmatrix}u &amp; v &amp; w \end{pmatrix}^{-1}=\begin{pmatrix} A &amp; {B-(A\cdot B)A \over \|B- (A\cdot B)A\|} &amp; B \times A\end{pmatrix}^{-1}$$</p> <p>Thus, in the original base, the rotation from $A$ to $B$ can be expressed as right-multiplication of a vector by the following matrix: $$U=F^{-1}G F.$$</p> <p>One can easily show that $U A = B$, and that $\|U\|_2=1$. Also, $U$ is the same as the $R$ matrix from <a href="https://math.stackexchange.com/a/476311/76513">Rik's answer</a>.</p> <h2>2D Case</h2> <p>For the 2D case, given $A=\left(x_1,y_1,0\right)$ and $B=\left(x_2,y_2,0\right)$, the matrix $G$ is the forward transformation matrix itself, and we can simplify it further. We note $$\begin{aligned} \cos\theta &amp;= A\cdot B = x_1x_2+y_1y_2 \\ \sin\theta &amp;= \| A\times B\| = x_1y_2-x_2y_1 \end{aligned}$$</p> <p>Finally, $$U\equiv G=\begin{pmatrix} x_1x_2+y_1y_2 &amp; -(x_1y_2-x_2y_1) \\ x_1y_2-x_2y_1 &amp; x_1x_2+y_1y_2 \end{pmatrix}$$ and $$U^{-1}\equiv G^{-1}=\begin{pmatrix} x_1x_2+y_1y_2 &amp; x_1y_2-x_2y_1 \\ -(x_1y_2-x_2y_1) &amp; x_1x_2+y_1y_2 \end{pmatrix}$$</p> <h2>Octave/Matlab Implementation</h2> <p>The basic implementation is very simple. You could improve it by factoring out the common expressions of <code>dot(A,B)</code> and <code>cross(B,A)</code>. Also note that $||A\times B||=||B\times A||$.</p> <pre><code>GG = @(A,B) [ dot(A,B) -norm(cross(A,B)) 0;\ norm(cross(A,B)) dot(A,B) 0;\ 0 0 1]; FFi = @(A,B) [ A (B-dot(A,B)*A)/norm(B-dot(A,B)*A) cross(B,A) ]; UU = @(Fi,G) Fi*G*inv(Fi); </code></pre> <p>Testing:</p> <pre><code>&gt; a=[1 0 0]'; b=[0 1 0]'; &gt; U = UU(FFi(a,b), GG(a,b)); &gt; norm(U) % is it length-preserving? ans = 1 &gt; norm(b-U*a) % does it rotate a onto b? 
ans = 0 &gt; U U = 0 -1 0 1 0 0 0 0 1 </code></pre> <p>Now with random vectors:</p> <pre><code>&gt; vu = @(v) v/norm(v); &gt; ru = @() vu(rand(3,1)); &gt; a = ru() a = 0.043477 0.036412 0.998391 &gt; b = ru() b = 0.60958 0.73540 0.29597 &gt; U = UU(FFi(a,b), GG(a,b)); &gt; norm(U) ans = 1 &gt; norm(b-U*a) ans = 2.2888e-16 &gt; U U = 0.73680 -0.32931 0.59049 -0.30976 0.61190 0.72776 -0.60098 -0.71912 0.34884 </code></pre> <h2>Implementation of Rik's Answer</h2> <p>It is computationally a bit more efficient to use <a href="https://math.stackexchange.com/a/476311/76513">Rik's</a> answer. This is also an Octave/MatLab implementation.</p> <pre><code>ssc = @(v) [0 -v(3) v(2); v(3) 0 -v(1); -v(2) v(1) 0] RU = @(A,B) eye(3) + ssc(cross(A,B)) + \ ssc(cross(A,B))^2*(1-dot(A,B))/(norm(cross(A,B))^2) </code></pre> <p>The results produced are same as above, with slightly smaller numerical errors since there are less operations being done.</p>
linear-algebra
<blockquote> <p>Suppose $A$ and $B$ are similar matrices. Show that $A$ and $B$ have the same eigenvalues with the same geometric multiplicities.</p> <p><strong>Similar matrices</strong>: Suppose $A$ and $B$ are $n\times n$ matrices over $\mathbb R$ or $\mathbb C$. We say $A$ and $B$ are similar, or that $A$ is similar to $B$, if there exists a matrix $P$ such that $B = P^{-1}AP$.</p> </blockquote>
<p><span class="math-container">$B = P^{-1}AP \ \Longleftrightarrow \ PBP^{-1} = A$</span>. If <span class="math-container">$Av = \lambda v$</span>, then <span class="math-container">$PBP^{-1}v = \lambda v \ \Longrightarrow \ BP^{-1}v = \lambda P^{-1}v$</span>. So, if <span class="math-container">$v$</span> is an eigenvector of <span class="math-container">$A$</span>, with eigenvalue <span class="math-container">$\lambda$</span>, then <span class="math-container">$P^{-1}v$</span> is an eigenvector of <span class="math-container">$B$</span> with the same eigenvalue. So, every eigenvalue of <span class="math-container">$A$</span> is an eigenvalue of <span class="math-container">$B$</span> and since you can interchange the roles of <span class="math-container">$A$</span> and <span class="math-container">$B$</span> in the previous calculations, every eigenvalue of <span class="math-container">$B$</span> is an eigenvalue of <span class="math-container">$A$</span> too. Hence, <span class="math-container">$A$</span> and <span class="math-container">$B$</span> have the same eigenvalues.</p> <p>Geometrically, in fact, also <span class="math-container">$v$</span> and <span class="math-container">$P^{-1}v$</span> are the same vector, written in different coordinate systems. Geometrically, <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are matrices associated with the same endomorphism. So, they have the same eigenvalues and geometric multiplicities.</p>
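<p>Both claims are easy to illustrate numerically; a numpy sketch (a generic random matrix is almost surely invertible and has distinct eigenvalues):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))             # generic, hence invertible
B = np.linalg.inv(P) @ A @ P                # B is similar to A

# same characteristic polynomial, hence same eigenvalues
assert np.allclose(np.poly(A), np.poly(B))

# if A v = lambda v, then B (P^{-1} v) = lambda (P^{-1} v)
w, V = np.linalg.eig(A)
u = np.linalg.inv(P) @ V[:, 0]
assert np.allclose(B @ u, w[0] * u)
```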
<p>The matrices $A$ and $B$ describe the same linear transformation $L$ of some vector space $V$ with respect to different bases. For any $\lambda\in{\mathbb C}$ the set $E_\lambda:=\lbrace x\in V\ |\ Lx=\lambda x\rbrace$ is a well-defined subspace of $V$ and therefore has a clear-cut dimension ${\rm dim}(E_\lambda)\geq0$ which is independent of any basis one might choose for $V$. Of course, for most $\lambda\in{\mathbb C}$ this dimension is $0$, which means $E_\lambda=\{{\bf 0}\}$. If $\lambda$ is actually an eigenvalue of $L$ then ${\rm dim}(E_\lambda)$ is called the <em>(geometric) multiplicity</em> of this eigenvalue. </p> <p>So there is actually nothing to prove.</p>
combinatorics
<p>Let me define a sequence of distinct natural numbers $x_n$ whose terms are determined as follows: $x_1$ is $1$, and $x_2$ is the smallest natural number different from $x_1$ such that $x_1+x_2$ is divisible by $2$, namely $3$. Similarly, $x_n$ is the smallest natural number distinct from all earlier terms such that $x_1+\cdots+x_n$ is divisible by $n$. Thus $x_1=1$, $x_2=3$, $x_3=2$.</p> <p>Now the problem is to show that (i) $x_n$ is surjective (ii) $x_{(x_n)}$ is $n$, for example $x_2=3$ and $x_3=2$</p>
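<p>For experimentation, the construction is easy to compute greedily, and the involution property $x_{(x_n)}=n$ can be checked on an initial segment. A Python sketch (names are mine):</p>

```python
def greedy_sequence(N):
    """Return [x_1, ..., x_N]: x_n is the smallest natural number not
    used before such that x_1 + ... + x_n is divisible by n."""
    xs, used, total = [], set(), 0
    for n in range(1, N + 1):
        m = 1
        while m in used or (total + m) % n != 0:
            m += 1
        xs.append(m)
        used.add(m)
        total += m
    return xs

xs = greedy_sequence(50)
assert xs[:3] == [1, 3, 2]                          # matches x_1, x_2, x_3
x = {i + 1: v for i, v in enumerate(xs)}            # 1-indexed lookup
assert all(x[x[n]] == n for n in x if x[n] in x)    # x_{x_n} = n where defined
```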
<p>As other posters have noticed, this is closely related to the golden ratio $\varphi = \frac{1+\sqrt{5}}{2}$. </p> <p>@Ross Millikan, your answer is a great place to start. </p> <p>$\frac{1}{\varphi}+\frac{1}{\varphi ^2}=1$, so we can partition the natural numbers greater than 1 into numbers of the forms $\lceil {n \varphi} \rceil, \lceil {n \varphi ^2} \rceil$. (Add 1 to the Beatty sequences.)</p> <p>Define $a_1=1, a_{\lceil {n \varphi} \rceil}=\lceil {n \varphi ^2} \rceil, a_{\lceil {n \varphi ^2} \rceil}=\lceil {n \varphi} \rceil$. Clearly $a_{a_n}=n$.</p> <p>That's where the next crucial observation that $\sum\limits_{i=1}^{n}{a_i}=n \lceil{\frac{n}{\varphi}} \rceil$ comes in.</p> <p>We proceed by induction on $n$ to show that $\sum\limits_{i=1}^{n}{a_i}=n \lceil{\frac{n}{\varphi}} \rceil$ and that $a_n$ is the smallest integer satisfying the divisibility relation $n \mid \sum\limits_{i=1}^{n}{a_i}$.</p> <p>When $n=1$, we have $1=1 \lceil{\frac{1}{\varphi}} \rceil$, which is true. When $n=2$, we have $1+3=2 \lceil{\frac{2}{\varphi}} \rceil$, which is true.</p> <p>Suppose it holds for $n=k \geq 2$. Then $\sum\limits_{i=1}^{k}{a_i}=k \lceil{\frac{k}{\varphi}} \rceil$. Consider 2 cases:</p> <p><strong>Case 1:</strong> $k+1=\lceil{m \varphi} \rceil$ for some $m \in \mathbb{Z}^+$. Then $a_{k+1}=\lceil{m \varphi ^2} \rceil=m+\lceil{m \varphi} \rceil=m+k+1$.</p> <p>Now $k+1=m \varphi +1- \{m \varphi \}$ so $m-1&lt;m-\frac{\{m \varphi \}}{\varphi}=\frac{k}{\varphi}&lt;m&lt;m+\frac{1-\{m \varphi \}}{\varphi}=\frac{k+1}{\varphi}&lt;m+1$. 
Thus $\lceil{\frac{k}{\varphi}} \rceil=m, \lceil{\frac{k+1}{\varphi}} \rceil=m+1$.</p> <p>$\sum\limits_{i=1}^{k+1}{a_i}=k \lceil{\frac{k}{\varphi}} \rceil+a_{k+1}=km+(m+k+1)=(k+1)(m+1)=(k+1) \lceil{\frac{k+1}{\varphi}} \rceil$.</p> <p><em>Minimality:</em> Note that $a_{k+1}=m+k+1&lt;2(k+1)$, and because we have a partition, $a_{k+1} \not =a_i \, \forall i$, so it suffices to show that $a_i=m$ for some $i$ with $1 \leq i \leq k$, or equivalently, $a_m \leq k$. If $m=\lceil{b \varphi ^2 } \rceil$ for some $b \in \mathbb{Z}^+$, we are done since $a_m&lt;m \leq k$. Otherwise $m=\lceil{b \varphi} \rceil$ for some $b \in \mathbb{Z}^+$, so $a_m=\lceil{b \varphi ^2 } \rceil \leq \lceil{\lceil b \varphi \rceil \varphi } \rceil=\lceil{m \varphi } \rceil=k+1$. Since $k+1=\lceil{m \varphi} \rceil \not =\lceil{b \varphi ^2 } \rceil = a_m$ (Partition), we now have the desired inequality $a_m \leq k$.</p> <p><strong>Case 2:</strong> $k+1=\lceil{m \varphi ^2} \rceil=m+\lceil{m \varphi} \rceil$ for some $m \in \mathbb{Z}^+$. Then $a_{k+1}=\lceil{m \varphi} \rceil=k+1-m$.</p> <p>Now $k+1=m \varphi ^2 +1-\{m \varphi ^2 \}$.</p> <p>As we have a partition, $k+1$ cannot be written in the form $\lceil{i \varphi} \rceil$. Thus $\exists c \in \mathbb{Z}^+$ such that $\lceil{c \varphi} \rceil&lt;k+1&lt;\lceil{(c+1) \varphi} \rceil$. 
<strong>(Since $k&gt;1$.)</strong></p> <p>This gives $\lceil{c \varphi} \rceil=k, \lceil{(c+1) \varphi} \rceil=k+2$.</p> <p>Thus: $k=c \varphi +1-\{c \varphi \}, k+2=(c+1) \varphi +1-\{(c+1) \varphi \}$, so \begin{align} c&lt;c+\frac{1-\{c \varphi \}}{\varphi}=\frac{k}{\varphi}=m \varphi -\frac{\{m \varphi ^2 \}}{\varphi}&lt;m \varphi &amp; &lt;m \varphi +\frac{1-\{m \varphi ^2 \}}{\varphi} \\ &amp; =\frac{k+1}{\varphi} \\ &amp; =c+1-\frac{\{(c+1) \varphi \}}{\varphi} \\ &amp; &lt;c+1 \end{align}</p> <p>We get $\lceil \frac{k}{\varphi} \rceil =\lceil m \varphi \rceil =\lceil \frac{k+1}{\varphi} \rceil =c+1$, so $k+1=m+c+1, a_{k+1}=c+1$.</p> <p>$\sum\limits_{i=1}^{k+1}{a_i}=k \lceil{\frac{k}{\varphi}} \rceil +a_{k+1}=k(c+1)+(c+1)=(k+1)(c+1)=(k+1) \lceil{\frac{k+1}{\varphi}} \rceil$</p> <p><em>Minimality:</em> Also note that $a_{k+1}=k+1-m&lt;k+1$, so $a_{k+1}$ is the smallest integer satisfying the divisibility relation, and because we have a partition, $a_{k+1} \not =a_i \, \forall i$.</p> <p>We are thus done by induction. Now, since $x_n$ is unique and $a_n$ satisfies the given conditions, $x_n=a_n \, \forall n \in \mathbb{Z}^+$ and we are done. Both i) $x_n$ is surjective and ii) $x_{x_n}=n$ follow trivially.</p>
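<p>For anyone who wants to see this in action before wading through the induction, here is a short numeric check (a sketch in plain Python; floating-point $\varphi$ is accurate enough for the range tested) that the greedy sequence from the question matches the closed form $a_n$ above, is an involution, and satisfies the partial-sum identity:</p>

```python
import math

phi = (1 + math.sqrt(5)) / 2
N = 200

# Greedy definition from the question: x_n is the smallest unused
# natural number making x_1 + ... + x_n divisible by n.
x, used, total = [0], set(), 0   # x[0] is a dummy so indexing is 1-based
for n in range(1, N + 1):
    m = 1
    while m in used or (total + m) % n != 0:
        m += 1
    x.append(m)
    used.add(m)
    total += m

# Closed form from this answer: a_1 = 1 and
# a_{ceil(n*phi)} = ceil(n*phi^2), a_{ceil(n*phi^2)} = ceil(n*phi).
a = {1: 1}
for n in range(1, N + 1):
    a[math.ceil(n * phi)] = math.ceil(n * phi ** 2)
    a[math.ceil(n * phi ** 2)] = math.ceil(n * phi)

assert all(x[n] == a[n] for n in range(1, N + 1))               # closed form matches
assert all(x[x[n]] == n for n in range(1, N + 1) if x[n] <= N)  # involution

# Partial-sum identity used in the induction: sum_{i<=n} x_i = n * ceil(n/phi)
s = 0
for n in range(1, N + 1):
    s += x[n]
    assert s == n * math.ceil(n / phi)
```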
<p>I think it is inevitable here to just pose an expression for the solution and then prove that it satisfies the requirements, rather than deduce the solution directly from the requirements. So I'll use the form of the solution that was suggested; on the other hand I will prove everything about it directly rather than refer to theory.</p> <p>The question discriminates against $0$, denying it citizenship of the natural numbers, and this bias (for me) complicates both formulae and understanding. Therefore I'll adapt the question as follows by first proclaiming $\def\N{\mathbf N}0\in\N$ (naturalisation of $0$), and then shifting everything one step down: $a_i=x_{i+1}-1$ for all $i\in\N$. Then the following formulation is clearly equivalent to the question: the sequence $(a_i)_{i\in\N}$ is determined by the requirement that each $a_i\in\N$ is minimal such that $a_i\neq a_j$ for $0\leq j&lt;i$, and $a_0+a_1+\ldots+a_i$ is divisible by $i+1$ (in particular one has $a_0=0$). Show that the map $i\mapsto a_i$ is an involution of $\N$, in other words $a_{a_i}=i$ for all $i\in\N$ (this of course implies that the map is surjective).</p> <p>I will use floor and ceiling functions extensively; also I'll denote the complementary fractional part $\def\dn#1{\lfloor#1\rfloor}\def\up#1{\lceil#1\rceil}\up x-x$ of a real number $x$ by $\def\cfr#1{\operatorname{cf}(#1)}\cfr x$, and the golden ratio by $\phi$. The latter satisfies $1+\phi=\phi^2$ and therefore also $1/\phi+1=\phi$ and $1/\phi^2+1/\phi=1$, which identities will often be used without mention.</p> <p>For an integer $k&gt;0$ two complementary conditions will be important. 
The condition denoted $\def\lo{\operatorname{low}}\lo(k)$ will be said to hold if $$ \cfr{k\phi}=\cfr{k/\phi}&lt;1/\phi=1-1/\phi^2 \quad\text{or equivalently}\quad \cfr{k/\phi^2}&gt;1/\phi^2=1-1/\phi $$ while the condition denoted $\def\hi{\operatorname{high}}\hi(k)$ will be said to hold if $$ \cfr{k\phi}=\cfr{k/\phi}&gt;1/\phi=1-1/\phi^2 \quad\text{or equivalently}\quad \cfr{k/\phi^2}&lt;1/\phi^2=1-1/\phi $$ (the equivalences hold because $\cfr{k\phi}+\cfr{k/\phi^2}=1$). The cases are mutually exclusive, and equality cannot happen since that would mean $(k+1)/\phi\in\mathbf N$, which is impossible because $\phi$ is irrational.</p> <p>When $\lo(l)$, then putting $n=\up{l/\phi}$ one has $\cfr{l/\phi}=n-l/\phi&lt;1/\phi$, and so $n\phi-l=\phi(n-l/\phi)&lt;1$ whence $l=\dn{n\phi}$; conversely $\lo(\dn{n\phi})$ holds for every integer $n&gt;0$. Similarly when $\hi(h)$, then putting $n=\up{h/\phi^2}$ one has $\cfr{h/\phi^2}=n-h/\phi^2&lt;1/\phi^2$, so $n\phi^2-h=\phi^2(n-h/\phi^2)&lt;1$ whence $h=\dn{n\phi^2}$; conversely $\hi(\dn{n\phi^2})$ holds for every integer $n&gt;0$.</p> <p>Now define $(a_i)_{i\in\N}$ explicitly by $$ a_i= \begin{cases} 0 &amp; \text{if $i=0$,} \\ \up{i\phi}=i+\up{i/\phi} &amp; \text{if $\lo(i)$,} \\ \dn{i/\phi}=i-\up{i/\phi^2} &amp; \text{if $\hi(i)$.} \\ \end{cases} $$ Writing indices $l$ with $\lo(l)$ as $l=\dn{n\phi}$ we get $a_l=l+n=\dn{n\phi+n}=\dn{n\phi^2}$. Similarly writing indices $h$ with $\hi(h)$ as $h=\dn{n\phi^2}$, we get $a_h=h-n=\dn{n\phi^2-n}=\dn{n\phi}$. We see that $$ a_{\dn{n\phi}}=\dn{n\phi^2} \quad\text{and}\quad a_{\dn{n\phi^2}}=\dn{n\phi} \qquad\text{for all $n\in\N$,} $$ and since this covers all cases, clearly $i\mapsto A_i$ is an involution of $\N$.</p> <p>On to the divisibility condition, which follows from the following stronger fact. 
Although straightforward by induction, it does not really give any deep understanding why this divisibility holds.</p> <p><strong>Lemma 1</strong> <em>For all $k\in\N$ one has $\sum_{i&lt;k}a_i=k\dn{k/\phi}$.</em></p> <p><em>Proof</em> by induction on $k$. For $k=0$ the sum is empty and the right hand side is $0$. Now assume the lemma for $k$, deduce it for $k+1$. If $\hi(k)$, then $\phi\up{k/\phi}-k&gt;1$ whence $\up{k/\phi}&gt;(k+1)/\phi$ and $\dn{(k+1)/\phi}=\dn{k/\phi}$; also $a_k=\dn{k/\phi}$ in this case, and so $$ \sum_{i\leq k}a_i=k\dn{k/\phi}+\dn{k/\phi} =(k+1)\dn{k/\phi}=(k+1)\dn{(k+1)/\phi}. $$ If on the other hand $\lo(k)$, then $\phi\up{k/\phi}-k&lt;1$ whence $\up{k/\phi}&lt;(k+1)/\phi$ and $\dn{(k+1)/\phi}=\dn{k/\phi}+1$; also $a_k=\up{k\phi}$ in this case, and so $$ \sum_{i\leq k}a_i=k\dn{k/\phi}+\up{k\phi} =k\dn{k/\phi}+\dn{k/\phi}+1+k=(k+1)(\dn{k/\phi}+1)=(k+1)\dn{(k+1)/\phi}. $$</p> <p>Finally, to show that each $a_i$ is the <em>minimal</em> number to satisfy the requirements of injectivity and divisibility, the following will be used.</p> <p><strong>Lemma 2</strong> <em>Whenever $0\leq i&lt;\dn{k/\phi}$ one has $i\in\{a_0,a_1,\ldots,a_{k-1}\}$; moreover $\dn{k/\phi}\in\{a_0,a_1,\ldots,a_{k-1}\}$ holds if and only if $\lo(k)$.</em></p> <p><em>Proof</em>. From $i\leq\dn{k/\phi}-1$ and $a_i\in\{\up{i\phi},\dn{i/\phi}\}$ it follows (using $\phi&gt;1$) that $a_i&lt;k$, so $i=a_{a_i}\in\{a_0,a_1,\ldots,a_{k-1}\}$. Similarly for $i=\dn{k/\phi}$ one gets $a_i\leq k$, so we will have $\dn{k/\phi}\in\{a_0,a_1,\ldots,a_{k-1}\}$ <em>unless</em> $a_i=k$. But $a_i=k=a_{a_k}$ means that $i=a_k$ while we have $i&lt;k$, so this can only occur if $a_k&lt;k$, that is if $\hi(k)$; conversely if $\hi(k)$ then $i=\dn{k/\phi}=a_k$ and indeed $a_i=k$.</p> <p>Now to wrap everything up, the sequence $(a_i)_{i\in\N}$ defines an involution of $\N$ that by lemma $1$ satisfies the divisibility condition. 
To show that each $a_k$ is minimal to satisfy the divisibility condition and distinctness from previous values $a_i$, note first that unless $\lo(k)$ holds, the value $a_k=\dn{k/\phi}&lt;k+1$ is the first one in its congruence class modulo $k+1$, so divisibility alone shows it is minimal. When $\lo(k)$ does hold, lemma $2$ shows that all values $i\leq\dn{k/\phi}$ are already present among the previous values $a_0,a_1,\ldots,a_{k-1}$. But in this case $a_k=\up{k\phi}=\dn{k/\phi}+k+1$ is the second one in its congruence class modulo $k+1$, after the forbidden value $\dn{k/\phi}$, so one has minimality here as well.</p> <p><em>Addendum</em> Lemma 2 was only used for a small part above (just the case $i=\dn{k/\phi}$). However its full strength is useful to prove that the pairs $(k,a_k)$ also form the set of cold positions (safe pairs, losing positions) in the game of Wythoff's Nim, where players take turns modifying a pair of natural numbers, decreasing either one of them, or both by the same amount, and a player incapable of moving (in position $(0,0)$) loses. Positions are classified inductively as cold/hot by: $(0,0)$ is cold; any position from which one can move to a cold position is hot; any position where one can move only to hot positions is cold. The fact that $\{\,(k,a_k)\mid k\in\N\,\}$ is the set of cold positions for this game translates into the following statement.</p> <p><strong>Proposition</strong> $(a_i)_{i\in\N}$ is the lexicographically minimal sequence for which the map $i\mapsto a_i$ is injective $\N\to\N$, and the map $i\mapsto a_i-i$ is injective $\def\Z{\mathbf Z}\N\to\Z$.</p> <p><em>Proof</em>. Since $\dn{n\phi^2}-\dn{n\phi}=n$, one easily sees that both maps are in fact bijections. Establishing minimality requires a bit more work. Lemma 2 shows that when $\hi(k)$ holds, the value $a_k=\dn{k/\phi}$ is the first value not yet used. 
For the case $\lo(k)$, first observe that the values $a_i-i$ for $0\leq i&lt;k$ form a contiguous segment of $\Z$: the index $i=0$ occupies $0$, the values $0&lt;l&lt;k$ with $\lo(l)$ are $l=\dn{n\phi}$ running through $0&lt;n&lt;k/\phi$ and occupy the corresponding positive values $\dn{n\phi^2}-\dn{n\phi}=n$, while the values $0&lt;h&lt;k$ with $\hi(h)$ are $h=\dn{n\phi^2}$ running through $0&lt;n&lt;k/\phi^2$, and occupy the corresponding negative values $\dn{n\phi}-\dn{n\phi^2}=-n$. Thus there are $k$ successive values $a_i-i$ that are forbidden for $a_k-k$, and the actual value $a_k=\up{k\phi}$ occupies the first positive free value $\up{k\phi}-k=\up{k/\phi}$. This means that to prove minimality in this case, it suffices to exclude as candidates for $a_k$ the values up to $a_k-(k+1)=\up{k\phi}-k-1=\dn{k/\phi}$ inclusive, but this is just what lemma 2 does. </p>
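<p>A small computation (a sketch, $0$-indexed as in the answer above) confirms Lemma 1, the involution property, and the injectivity of $i\mapsto a_i-i$ from the addendum for the first few hundred indices:</p>

```python
import math

phi = (1 + math.sqrt(5)) / 2
N = 300

# 0-indexed sequence from this answer: a_0 = 0, and
# a_{floor(n phi)} = floor(n phi^2), a_{floor(n phi^2)} = floor(n phi).
a = {0: 0}
for n in range(1, N + 1):
    lo, hi = math.floor(n * phi), math.floor(n * phi ** 2)
    a[lo], a[hi] = hi, lo

# Involution on every index we defined
for i, ai in a.items():
    assert a[ai] == i

# Lemma 1: sum_{i<k} a_i = k * floor(k/phi)
s = 0
for k in range(N + 1):
    assert s == k * math.floor(k / phi)
    s += a[k]

# Addendum: i -> a_i - i is injective (cold positions of Wythoff's game)
diffs = [a[i] - i for i in range(N + 1)]
assert len(diffs) == len(set(diffs))
```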
geometry
<p>I have a hyperplane in $n$-dimensional space: $w'x + b = 0$, and a point $x_0$. The shortest distance from this point to the hyperplane is $d = \frac{|w \cdot x_0+ b|}{||w||}$. I have no problem proving this in 2- and 3-dimensional space using algebraic manipulations, but I fail to do so for an $n$-dimensional space. Can someone show a nice explanation for it?</p>
<p>There are many ways to solve this problem. In principle one can use Lagrange multipliers and solve a large system of equations, but my attempt to do so met with a roadblock. However, since you are working in $\mathbb{R}^n$ we have the privilege of orthogonal projection via the dot product. To this end we need to construct a vector from the plane to $x_0$ to project onto a vector perpendicular to the plane. Then we compute the <em>length of the projection</em> to determine the distance from the plane to the point.</p> <p>First, you have an affine hyperplane defined by $w \cdot x + b=0$ and a point $x_0$. Suppose that $X \in \mathbb{R}^n$ is a point satisfying $w \cdot X+b=0$, i.e. it is a point on the plane. You should construct the vector $x_0 - X$ which points from $X$ to $x_0$ so that you can project it onto the unique vector <em>perpendicular</em> to the plane. Some quick reasoning should tell you that this vector is, in fact, $w$. So we want to compute $\| \text{proj}_{w} (x_0-X)\|$. Some handy formulas give us $$ d=\| \text{proj}_{w} (x_0-X)\| = \left\| \frac{(x_0-X)\cdot w}{w \cdot w} w \right\| = |x_0 \cdot w - X \cdot w|\frac{\|w\|}{\|w\|^2} = \frac{|x_0 \cdot w - X \cdot w|}{\|w\|}$$ We chose $X$ such that $w\cdot X=-b$ so we get $$ d=\| \text{proj}_{w} (x_0-X)\| = \frac{|x_0 \cdot w +b|}{\|w\|} $$ as desired.</p> <p>This almost seems like cheating and purely heuristic based on Euclidean geometry. Indeed, I would be more satisfied with a solution via Lagrange multipliers since it would not have required the fact that $\mathbb{R}^n$ has an inner product and just needed the topology of $\mathbb{R}^n$ instead. But we have the inner product, so maybe geometry will suffice for us this time.</p> <p>To make this argument more concrete you should do each step in $\mathbb{R}^2$ for a line $y=mx+b$ and a point $(x_0,y_0)$.</p>
<p>Here is a Lagrange multiplier based solution.</p> <p>The goal is to minimize <span class="math-container">$ (x_0 - x)'(x_0 - x) $</span> subject to <span class="math-container">$ w'x + b = 0 $</span></p> <p>The Lagrangian is <span class="math-container">$ (x_0 - x)'(x_0 - x) - L(w'x + b) $</span></p> <p>The derivative of the Lagrangian is <span class="math-container">$ -2(x_0 - x) - Lw = 0 $</span></p> <p>Dot with <span class="math-container">$ w $</span>, we get <span class="math-container">$ -2w'(x_0 - x) - Lw'w = 0 \implies L = -\frac{2w'(x_0 - x)}{w'w} $</span></p> <p>Dot with <span class="math-container">$ (x_0 - x) $</span>, we get <span class="math-container">\begin{alignat*}{1} &amp; -2(x_0 - x)'(x_0 - x) - L(x_0 - x)'w = 0 \\ &amp; \implies -2(x_0 - x)'(x_0 - x) = -\frac{2w'(x_0 - x)}{w'w} (x_0 - x)'w \\ &amp; \implies (x_0 - x)'(x_0 - x) = \frac{\left(w'(x_0 - x)\right)^2}{w'w} \\&amp; \implies (x_0 - x)'(x_0 - x) = \frac{\left(w'x_0 + b\right)^2}{w'w} \end{alignat*}</span></p> <p>Taking square root gives the answer we wanted.</p>
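<p>A numeric cross-check of both derivations (a sketch with numpy; the dimension and seed are arbitrary): the projection argument also identifies the closest point $x^* = x_0 - \frac{w\cdot x_0 + b}{\|w\|^2}\,w$ explicitly, which we can compare against random points of the hyperplane.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n = 7
w = rng.standard_normal(n)
b = rng.standard_normal()
x0 = rng.standard_normal(n)

# Distance formula derived above
d = abs(w @ x0 + b) / np.linalg.norm(w)

# Closest point given by the projection argument
x_star = x0 - ((w @ x0 + b) / (w @ w)) * w
assert np.isclose(w @ x_star + b, 0.0)              # x* lies on the hyperplane
assert np.isclose(np.linalg.norm(x0 - x_star), d)   # and realizes the distance d

# No other point of the hyperplane is closer
for _ in range(1000):
    X = rng.standard_normal(n)
    X = X - ((w @ X + b) / (w @ w)) * w             # project X onto the hyperplane
    assert np.linalg.norm(x0 - X) >= d - 1e-9
```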
logic
<p>I'm wondering if there are any non-standard theories (built upon ZFC with some axioms weakened or replaced) that make formal sense of hypothetical set-like objects whose "cardinality" is "in between" the finite and the infinite. In a world like that non-finite may not necessarily mean infinite and there might be a "set" with countably infinite "power set".</p>
<p>There's a few things I can think of which might fit the bill:</p> <ul> <li><p>We could work in a <em>non-<span class="math-container">$\omega$</span> model</em> of ZFC. In such a model, there are sets the model <em>thinks</em> are finite, but which are actually infinite; so there's a distinction between &quot;internally infinite&quot; and &quot;externally infinite.&quot; (A similar thing goes on in <em>non-standard analysis</em>.)</p> </li> <li><p>Although their existence is ruled out by the axiom of choice, it is consistent with ZF that there are sets which are not finite but are <strong>Dedekind-finite</strong>: they don't have any non-trivial self-injections (that is, Hilbert's Hotel doesn't work for them). Such sets are similar to genuine finite sets in a number of ways: for instance, you can show that a Dedekind-finite set can be even (= partitionable into pairs) or odd (= partitionable into pairs and one singleton) or neither but not both. And in fact it is consistent with ZF that the Dedekind-finite cardinalities are linearly ordered, in which case they form a nonstandard model of true arithmetic; see <a href="https://mathoverflow.net/questions/172329/does-sageevs-result-need-an-inaccessible">https://mathoverflow.net/questions/172329/does-sageevs-result-need-an-inaccessible</a>.</p> </li> <li><p>You could also work in <strong>non-classical logic</strong> - for instance, in a <strong>topos</strong>. I don't know much about this area, but lots of subtle distinctions between classically-equivalent notions crop up; I strongly suspect you'd find some cool stuff here.</p> </li> </ul>
<p>Well, there are a few notions of "infinite" sets that aren't equivalent in $\mathsf{ZF}.$ One sort is called Dedekind-infinite ("D-infinite", for short) which is a set with a countably infinite subset, or equivalently, a set which has a proper subset of the same cardinality. So, a set is D-finite if and only if the Pigeonhole Principle holds on that set. The more common notion is Tarski-infinite (usually just called "infinite"), which describes sets for which there is no injection into any set of the form $\{0,1,2,...,n\}.$</p> <p>It turns out, then, that the following are equivalent in $\mathsf{ZF}$:</p> <ol> <li>Every D-finite set is finite.</li> <li>D-finite unions of D-finite sets are D-finite.</li> <li>Images of D-finite sets are D-finite.</li> <li>Power sets of D-finite sets are D-finite.</li> </ol> <p>Without a weak Choice principle (anything that implies $\aleph_0$ to be the smallest infinite cardinality, rather than simply a minimal infinite cardinality), the following may occur: </p> <ol> <li>There may be infinite, D-finite sets. In particular, there may be infinite sets whose cardinality is not comparable to $\aleph_0.$ Put another way, there may be infinite sets such that removing an element from such a set makes a set with strictly smaller cardinality.</li> <li>There may be a D-finite set of D-finite sets whose union is D-infinite.</li> <li>There may be a surjective function from a D-finite set to a D-infinite set.</li> <li>There may be a D-finite set whose power set is D-infinite.</li> </ol>
game-theory
<p>There are $n$ boys and $n$ girls. Each of them is given a hat of only 4 possible (known) colors and doesn't know its color. Each can only see the colors of the hats of those of the other gender, and no contact is allowed; then each is asked to guess the color of their own hat <strong>at the same time</strong>. Determine whether there exists an $n$ for which there is a strategy guaranteeing that at least one child guesses right under any circumstances.</p> <p>When there are only 3 possible colors, the problem has been solved ($n=2$ works) through simple algebra. But when it comes to 4 colors, the problem seems much harder. Please help.</p> <p>Solution for 3 colors ($n=2$): we use 0, 1, 2 to represent the colors; the boys are $a, b$ and the girls are $c, d$, respectively. Each boy knows the value of $c, d$, while each girl knows the value of $a, b$. Now consider the four numbers: $$a+b+c,$$ $$d+a-b,$$ $$d+b-c,$$ $$d+c-a.$$ It's easy to show that at least one of them is divisible by 3. As a result, the strategy is: $A, B, C, D$ guess $c+d$, $c-d$, $-a-b$, $b-a \pmod 3$ respectively, and one of them must be right.</p>
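<p>For what it's worth, the 3-color strategy above is easy to verify by brute force over all $3^4 = 81$ hat assignments (a quick sketch in Python):</p>

```python
from itertools import product

# Boys A, B wear colors a, b and see c, d; girls C, D wear c, d and see a, b.
# Everyone announces simultaneously, following the strategy from the question.
wins = []
for a, b, c, d in product(range(3), repeat=4):
    wins.append(any([
        a == (c + d) % 3,    # A guesses c + d
        b == (c - d) % 3,    # B guesses c - d
        c == (-a - b) % 3,   # C guesses -a - b
        d == (b - a) % 3,    # D guesses b - a
    ]))
assert len(wins) == 81 and all(wins)  # someone is right in every case
```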
<p>There is always such a strategy, no matter the number of colors, when $n$ is sufficiently large (depending on the number of colors). See the bottom of this answer for an explicit value of $n$ that works. For $4$ colors, it gives $n = 4^{144}$.</p> <p>Let $k$ be the number of colors. The idea is that a child can, on beforehand, make $k$ statements about the other group's colors, in such a way that in every scenario exactly one statement is true, and map each statement to a color. A simple example is, when Tom makes the $k$ obvious statements about Jade's color and maps them to, well, those colors. When the experiment starts, the girls may then assume that the statement corresponding to Tom's color is false. (Indeed, if Tom guesses right, nothing matters anymore.) In the example, they may assume Jade's color is not the same as Tom's. We will use more interesting statements.</p> <p>By the pigeonhole principle, among a group of $1+(b-1)k$ girls, there is a group of $b$ which has the same color. Fix $b$ for the moment and suppose $n \geq 1+(b-1)k$. (The choice of $b$ will depend on $k$ only.) Fix such a group $G$ of $1+(b-1)k$ girls. The children agree about an enumeration $G_1, \ldots, G_{\binom{1+(b-1)k}{b}}$ of the subsets of size $b$ of $G$.</p> <p>The key is that, given a group of $k$ girls that may assume they have the same color, they can each guess a different color, so that at least one of them will guess right. 
The problem is that the girls can never know which group of $k$ girls has the same color, but the boys can limit down the possibilities:</p> <p><strong>Lemma.</strong> Given a piece of information about the girls' colors, which takes values in a set of size $N$, the boys can limit down the number of values it can take to $k-1$, provided there are at least $\binom N{k-1}$ boys.</p> <p>Formally, if $C$ is the set of colors, and given a function $f : C^n \to I = \{1, \ldots, N\}$ known to the boys and girls, there exists a strategy which takes $\binom N{k-1}$ boys and ensures that, if all those boys guess wrong, then there is a subset $J \subseteq I$ of size $k-1$ such that $f(x) \in J$. Here $x \in C^n$ denotes the vector of the girls' colors.</p> <p><em>Proof.</em> In the case $N = k-1$ (or smaller), this is clear: just let one boy map each element of $I$ to a different color. The point is that this is possible for very large $N$. When Tom chooses $k-1$ indices $i_j \in I$, maps each group $i_j$ to a color, and the statement "$f(x)$ is none of the $i_j$" to the $k$th color, then the girls may either assume that $f(x) \neq i_j$ for some $i_j$, or that $f(x)$ is one of the $i_j$, depending on Tom's color. In the latter case, we are happy: we've narrowed down the possibilities of $f(x)$ to $k-1$ numbers. We need a strategy for the former case, where the girls can only exclude one of the $k-1$ indices Tom chose, and we have no control over which it will be. It suffices to find subsets $U_j$ of size $k-1$ of $I$ such that, for each choice of elements $u_j \in U_j$, the complement $I - \cup_j\{u_j\}$ has cardinality at most $k-1$. This is of course possible: simply take all subsets of $I$ of size $k-1$. Thus, provided that there are at least $\binom{|I|}{k-1}$ boys, the girls can assume that there is a subset $J \subseteq I$ of size $a \leq k-1$, such that $f(x) \in J$. W.l.o.g. 
we may assume $a = k-1$; the girls can always make $J$ larger in a way they agreed on beforehand. $\square$</p> <p>Let $I = \{1, \ldots, \binom{1+(b-1)k}{b}\}$. The boys can now make statements about which group $G_i$ has the same color (where the group with smallest index is chosen if there are multiple such groups). Call that group (with smallest index) $H$. This is our $f(x)$. Thus, provided that there are at least $\binom{|I|}{k-1}$ boys, the girls can assume that there is a subset $J \subseteq I$ of size $k-1$, such that $H = G_j$ for some $j \in J$. This $J$ is known to all the girls, by looking at the boys' colors.</p> <p>Now that we've limited the possibilities for $H$ to $\{G_j : j \in J \}$, we would like to apply our key idea to each of these groups: in each group, let the girls say all possible colors. The problem is that those groups are not necessarily disjoint. But the children know how to deal with that:</p> <p><strong>Lemma.</strong> Given integers $a,k \geq 1$ and $b \geq k+(a-1)(k-1)$, and given $a$ sets $G_1, \ldots, G_a$ of size $b$, there exist an $m \leq a$ and disjoint sets $T_1, \ldots, T_m$ of size $k$ such that each $G_j$ contains some $T_i$.</p> <p><em>Proof.</em> By induction on $a$. For $a=1$ this is trivial. Let $a&gt;1$ and suppose each intersection $G_a \cap G_i$ is at most of size $k-1$. Then we can select $k$ elements of $G_a$ that are not in any other $G_i$, take these to form a $T_j$, and proceed by induction. If some $|G_a \cap G_i| \geq k$, choose $k$ elements in their intersection and let them form a $T_j$. This $T_j$ works for any $G_l$ that contains those $k$ elements. Remove these elements from all $G_l$ and proceed by induction. $\square$</p> <p>The lemma implies, with $a = k-1$, that there exist disjoint groups $T_j$ of $k$ girls, such that the girls may assume each $G_j$ contains a $T_j$. 
(In practice, the girls must agree about such a choice of $T_j$'s for every possible $J \subseteq I$ of size $k-1$.) In particular, there exist disjoint groups of $k$ girls, at least one of which is contained in $H$. That is, at least one of which consists of girls with the same color. In each group $T_j$, let the girls guess all different colors. Then at least one girl guesses correctly (unless one of the boys guesses correctly).</p> <hr> <p>We conclude that $$n = \max \left(\binom{|I|}{k-1} , 1+(b-1)k \right) $$ suffices, where $|I| = \binom{1+(b-1)k}{b}$ and $b = k+(a-1)(k-1)$ and $a = k-1$. Using the bound $\binom xy \leq x^y$ and estimating $b \leq k^2$ and $1+(b-1)k \leq k^3$ we get that $$n \geq \left( (k^3)^{k^2}\right)^{k-1} = k^{3k^2(k-1)}$$ suffices; for $k = 4$ this is the $n = 4^{144}$ mentioned at the start.</p>
<p>Here is what I've tried so far to find a possible solution. First I tested your solution for 3 colors against all 81 hat combinations to make sure it works since it's still pretty mind-boggling to me. I also went back and looked at n=1 with 2 colors, where one person simply says the color they see, and the other person says the opposite of the color they see. I tried to find a pattern that might apply for 4 colors. </p> <p>In both examples the players have positionally unique strategies that are sensitive to the order of the inputs, &amp; every group input has a unique group output.</p> <p>With n=1, we have 1 variable <strong><em>a</em></strong>, which can be expressed as <strong><em>a</em></strong> and <strong><em>-a</em></strong> to describe each player's strategy. </p> <p>With n=2, we have 2 variables <strong><em>a</em></strong> and <strong><em>b</em></strong>, which can be expressed together as follows to form the strategies you posted:</p> <blockquote> <p><strong><em>a+b</em></strong>, <strong><em>a-b</em></strong>, <strong><em>-(a+b)</em></strong>, and <strong>-<em>(a-b)</em></strong> -- replacing a &amp; b in the first two expressions with c &amp; d </p> </blockquote> <p>With 3 variables we can write the following 8 possible strategies:</p> <blockquote> <p><strong><em>a+b+c</em></strong>, <strong><em>a+b-c</em></strong>, <strong><em>a-b+c</em></strong>, <strong><em>a-b-c</em></strong>, and their <strong>4 opposites</strong> -- making the whole expression negative and replacing a, b, &amp; c with d, e, &amp; f. </p> </blockquote> <p>I ran into trouble when deciding how many players to have. With 6 players (a through f, resulting in 4096 hat combinations), we have to choose which 2 of the 8 available expressions to ignore. I haven't found a combination of 6 of them that produces a unique output (after %4) for every input. 
With 8 players, we can use all 8 expressions, but each expression only refers to three other players, which is not all the information each player has available.</p> <p>Hopefully this idea could go somewhere, but I've personally hit a block.</p>
linear-algebra
<p>How can we prove that the inverse of an upper (lower) triangular matrix is upper (lower) triangular?</p>
<p>Another method is as follows. An invertible upper triangular matrix has the form $A=D(I+N)$ where $D$ is diagonal (with the same diagonal entries as $A$) and $N$ is upper triangular with zero diagonal. Then $N^n=0$ where $A$ is $n$ by $n$, so $N$ is nilpotent. Both $D$ and $I+N$ have upper triangular inverses: $D^{-1}$ is diagonal, and $(I+N)^{-1}=I-N+N^2-\cdots +(-1)^{n-1}N^{n-1}$. So $A^{-1}=(I+N)^{-1}D^{-1}$ is upper triangular.</p>
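<p>A quick numerical illustration of this factorization (a sketch with numpy; the diagonal is forced away from zero so that $A$ is invertible):</p>

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
# A random invertible upper triangular matrix
A = np.triu(rng.standard_normal((n, n)))
A[np.diag_indices(n)] = rng.uniform(1.0, 2.0, size=n)  # nonzero diagonal

# Factor A = D(I + N), with D diagonal and N strictly upper triangular
D_inv = np.diag(1.0 / np.diag(A))
N = D_inv @ A - np.eye(n)
assert np.allclose(np.linalg.matrix_power(N, n), 0.0)  # N is nilpotent

# (I + N)^{-1} = I - N + N^2 - ... + (-1)^{n-1} N^{n-1}
inv_IN = sum((-1.0) ** k * np.linalg.matrix_power(N, k) for k in range(n))
A_inv = inv_IN @ D_inv
assert np.allclose(A @ A_inv, np.eye(n))               # really the inverse
assert np.allclose(A_inv, np.triu(A_inv))              # and upper triangular
```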
<p>Personally, I prefer arguments which are more geometric to arguments rooted in matrix algebra. With that in mind, here is a proof.</p> <p>First, two observations on the geometric meaning of an upper triangular invertible linear map.</p> <ol> <li><p>Define <span class="math-container">$S_k = {\rm span} (e_1, \ldots, e_k)$</span>, where <span class="math-container">$e_i$</span> the standard basis vectors. Clearly, the linear map <span class="math-container">$T$</span> is upper triangular if and only if <span class="math-container">$T S_k \subset S_k$</span>.</p> </li> <li><p>If <span class="math-container">$T$</span> is in addition invertible, we must have the stronger relation <span class="math-container">$T S_k = S_k$</span>.</p> <p>Indeed, if <span class="math-container">$T S_k$</span> was a strict subset of <span class="math-container">$S_k$</span>, then <span class="math-container">$Te_1, \ldots, Te_k$</span> are <span class="math-container">$k$</span> vectors in a space of dimension strictly less than <span class="math-container">$k$</span>, so they must be dependent: <span class="math-container">$\sum_i \alpha_i Te_i=0$</span> for some <span class="math-container">$\alpha_i$</span> not all zero. This implies that <span class="math-container">$T$</span> sends the <b>nonzero</b> vector <span class="math-container">$\sum_i \alpha_i e_i$</span> to zero, so <span class="math-container">$T$</span> is not invertible.</p> </li> </ol> <p>With these two observations in place, the proof proceeds as follows. Take any <span class="math-container">$s \in S_k$</span>. Since <span class="math-container">$TS_k=S_k$</span> there exists some <span class="math-container">$s' \in S_k$</span> with <span class="math-container">$Ts'=s$</span> or <span class="math-container">$T^{-1}s = s'$</span>. In other words, <span class="math-container">$T^{-1} s$</span> lies in <span class="math-container">$S_k$</span>, so <span class="math-container">$T^{-1}$</span> is upper triangular.</p>
linear-algebra
<blockquote> <p>Alice and Bob play the following game with an $n \times n$ matrix, where $n$ is odd. Alice fills in one of the entries of the matrix with a real number, then Bob, then Alice and so forth until the entire matrix is filled. At the end, the determinant of the matrix is taken. If it is nonzero, Alice wins; if it is zero, Bob wins. Determine who wins playing perfect strategy each time. </p> </blockquote> <p>When $n$ is even it's easy to see why Bob wins every time. and for $n$ equal to $3$ I have brute-forced it. Bob wins. But for $n = 5$ and above I can't see who will win on perfect strategy each time. Any clever approaches to solving this problem?</p>
<p>I tried to approach it from the Leibniz formula for determinants</p> <p>$$\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n A_{i,\sigma_i}.$$</p> <p>There are $n!$ terms in this sum. Alice will have $\frac{n^2+1}{2}$ moves whereas Bob has $\frac{n^2-1}{2}$ moves. There are $n^2$ variables (matrix entries). Each of them taken alone appears in $(n-1)!$ terms in this summation. Whenever Bob picks a zero in his first move for any entry in the matrix, $(n-1)!$ of these terms go to zero. For instance, consider a $5 \times 5$ matrix, so there are 120 terms. In his first move, whenever Bob makes any matrix entry zero, he zeros out 24 of these terms. In his second move, he has to pick the matrix entry with the least presence in the 24 terms already zeroed out. There can be multiple such matrix entries. In fact, it can be seen that there is surely another matrix entry appearing in 24 non-zero terms in the above sum. Since $n$ is odd in this case, the last chance will always be Alice's. Because of that, one doesn't have to bother about these terms summing to zero. What Bob has to do if he wants to win is:</p> <ul> <li><p>He has to make sure he touches at least once (in effect zeroes) each of these 120 terms. In the $n=5$ case, he has 12 chances. In these 12 chances he has to make sure that he zeros out all 120 terms. In one sense, it means that he has to average at least 10 terms per chance. I looked at the $n=3$ case: Bob has 4 chances there and 6 terms, and he can zero out all of them in 3 moves. </p></li> <li><p>He has to make sure that Alice doesn't get hold of all the matrix entries in any single one of the 120 terms, because then it will be non-zero, and since the last chance is hers, Bob won't be able to zero it out, so she will win. </p></li> </ul> <p>As per the above explanation, in the $5 \times 5$ case, he just has to average killing 10 terms in each chance, which seems quite easy to do. 
I feel this method should generalize, and many clever people here could do it. </p> <p>EDIT----------------</p> <p>In response to @Ross Millikan, I tried to look at the $5 \times 5$ case; this is the approach. Consider a $5 \times 5$ matrix with its entries labelled by the English alphabet, row-wise, so that the matrix of interest is </p> <p>\begin{align} \begin{bmatrix} a &amp; b &amp; c &amp; d&amp; e \\ f&amp; g &amp; h &amp;i&amp; j \\k&amp; l&amp; m&amp; n&amp; o \\ p&amp; q&amp; r&amp; s&amp; t\\ u&amp; v&amp; w&amp; x&amp; y \end{bmatrix} \end{align}</p> <p>Without loss of generality (WLOG), let Alice pick up $a$ (making any entry non-zero is advantageous for her). Let's say Bob picks up $b$ (again WLOG, since picking up any entry is the same). This lets Bob zero out 24 of the 120 terms. Alice has to pick up an entry in this first row as well, otherwise she will be at a disadvantage (since then Bob gets to pick 3 entries in total from the first row and zeroes out 72 terms). So in the first row, Alice picks 3 entries and Bob picks 2 (say $b$ and $d$), and hence he zeros out 48 of the 120 terms. Now note that the next move is Bob's. Let us swap the first and second columns; this changes nothing but the sign of the determinant. Look at the modified matrix</p> <p>\begin{align} \begin{bmatrix} 0 &amp; \otimes &amp; \otimes &amp; 0 &amp; \otimes \\ g &amp; f &amp; h &amp;i&amp; j \\l&amp; k&amp; m&amp; n&amp; o \\ q&amp; p&amp; r&amp; s&amp; t\\ v&amp; u&amp; w&amp; x&amp; y \end{bmatrix} \end{align}</p> <p>where $0$ marks entries Bob has filled and $\otimes$ marks entries filled by Alice. Now in the first column, let's say Bob gets hold of $g$ and $q$, and Alice gets hold of $l$ and $v$. Again, Alice has to do this; any other move will put her at a disadvantage. 
Bob has made 4 moves already, the next move is his, and now the matrix will look like </p> <p>\begin{align} \begin{bmatrix} 0 &amp; \otimes &amp; \otimes &amp; 0 &amp; \otimes \\ 0 &amp; f &amp; h &amp;i&amp; j \\ \otimes &amp; k &amp; m&amp; n&amp; o \\ 0 &amp; p&amp; r&amp; s&amp; t\\ \otimes &amp; u&amp; w&amp; x&amp; y \end{bmatrix} \end{align}</p> <p>Now we are left with the lower-right $4 \times 4$ block, Bob has 8 moves left, and the first move is his. Comparing this with the $4 \times 4$ case, intuitively it looks like Bob should win. </p>
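<p>(Sanity check of the even-$n$ claim mentioned at the start: the natural mirroring strategy — Bob copies Alice's entry into the same column of a paired row, so paired rows end up identical — can be simulated directly. The pairing of rows $2t$ and $2t+1$ is one convenient choice; exact rational arithmetic avoids floating-point issues.)</p>

```python
import random
from fractions import Fraction

def det(m):
    """Determinant by Laplace expansion along the first row (exact)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def play_even(n, rng):
    """Alice plays random cells/values; Bob mirrors into the paired row."""
    assert n % 2 == 0
    board = [[None] * n for _ in range(n)]
    free = [(i, j) for i in range(n) for j in range(n)]
    while free:
        # Alice's move: any free cell, any value.
        i, j = free.pop(rng.randrange(len(free)))
        board[i][j] = Fraction(rng.randint(-9, 9))
        # Bob's reply: copy the value into the same column of the paired
        # row (rows 2t and 2t+1 are paired); the partner cell is always
        # still free, since cells are consumed in mirror pairs.
        p = i + 1 if i % 2 == 0 else i - 1
        free.remove((p, j))
        board[p][j] = board[i][j]
    return board

rng = random.Random(0)
for _ in range(20):
    # Rows 2t and 2t+1 end up identical, so the determinant is always 0.
    assert det(play_even(4, rng)) == 0
```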
<p>The problem states that <span class="math-container">$n$</span> is odd, i.e., <span class="math-container">$n \geq 3$</span>.</p> <p>With Alice and Bob filling the determinant slots in turns, the objective is for Alice to obtain a non-zero determinant value and for Bob to obtain a zero determinant value.</p> <p>So, Alice strives to get <em>all rows and columns to be linearly independent</em> of each other whereas Bob strives to get <em>at least two rows or columns linearly dependent</em>. Note that these are just equivalent restatements of Alice's and Bob's objectives: a non-zero and a zero determinant, respectively.</p> <p>Intuitively, it feels like the game is stacked against Alice because her criterion is more restrictive than Bob's. But let's get a mathematical proof while simultaneously outlining Bob's winning strategy.</p> <p>Since Alice starts the game, Bob gets the even-numbered moves. So, Bob chooses <span class="math-container">$r = [r_0, r_1, \dots, r_{n-1}]$</span>, a row vector of scalars, and <span class="math-container">$c, c^T = [c_0, c_1, \dots, c_{n-1}]$</span>, a column vector of scalars, which he will use to create linear-dependence relationships between vectors: for some <span class="math-container">$u \ne v, w \ne x$</span>,</p> <p><span class="math-container">$$r_u \times R_u + r_v \times R_v = \mathbf{0}$$</span> <span class="math-container">$$c_w \times C_w + c_x \times C_x = \mathbf{0}^T$$</span></p> <p>where <span class="math-container">$\mathbf{0}$</span> is the row vector with all columns set to zero.</p> <p>He doesn't fill <span class="math-container">$r, c$</span> immediately, but only when Alice makes the first move in a given column or row, which is not necessarily the first move of the game (update: see the <em>Notes</em> section; Bob decides the value for <span class="math-container">$r$</span> or <span class="math-container">$c$</span> on the last move in a pair of rows or columns). 
[We will shortly prove that Alice will be making the first move in any given column or row]. When Alice makes her move, Bob calculates the value of <span class="math-container">$r_v$</span> or <span class="math-container">$c_x$</span> based on the value that Alice has previously filled. Once the vector cell for <span class="math-container">$r, c$</span> is filled, he doesn't change it for the remainder of the game.</p> <p>With this strategy in place, he strives to always play his moves (the even numbered moves in the game) ensuring that the linear dependence relation for the pair of rows (or columns) is maintained. The example below shows one of Bob's moves for rows <span class="math-container">$r_k, r_{k+1}$</span>. Note that these could be any pair of rows, not necessarily consecutive ones. Also, this doesn't have to be a pair of rows, it also works with pairs of columns.</p> <p><a href="https://i.sstatic.net/XmjYE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XmjYE.png" alt="enter image description here" /></a></p> <p>Bob never makes a move that fills the first cell in a row or column as part of his strategy. He always follows. Therefore, Alice is forced to make that move.</p> <p>It would be impossible for Alice to beat this strategy because even if she chooses to fill a different row or column, she would be the first one to initiate the move for that row or column and Bob will follow this strategy there with the even numbered moves. 
It is easy to see that Alice always makes the odd-numbered move in any row or column, and Bob follows with the even-numbered move, filling in the cell so that the linear-dependence condition is met.</p> <p>So, even though Alice gets the last move, Bob will make a winning move (in an earlier even-numbered move) that makes two rows or two columns linearly dependent, causing the determinant to evaluate to zero regardless of what Alice does.</p> <hr /> <p><strong>Game play for specific problem</strong></p> <p><em>(posed by @Misha Lavrov in comments)</em></p> <p>The moves are numbered sequentially. Odd-numbered moves are by Alice and even-numbered moves are by Bob. The 'x' and 'o' are just indicators and can be any real number filled in by Alice or Bob respectively. The problem posed is for <span class="math-container">$n=5$</span> where Alice and Bob have made the following <span class="math-container">$4$</span> moves and it is Alice's turn.</p> <p><a href="https://i.sstatic.net/OprlI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OprlI.png" alt="enter image description here" /></a></p> <p>Note that if Alice makes her move in any of the yellow or green cells, Bob continues with the pairing strategy described above.</p> <p>The game gets interesting if Alice fills any of the blue cells. There are three possibilities:</p> <p><em>Alice's move (type 1):</em></p> <p><a href="https://i.sstatic.net/yuCVv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yuCVv.png" alt="enter image description here" /></a></p> <p>Alice fills one of the first two cells in row 5. Bob can continue the game in columns 1 and 2, and it does not matter what he or Alice fills in any of the cells in those columns, because Bob only needs to have the last move in one pair of rows or columns in order to win.</p> <p>What happens if Alice chooses her next move outside of the first two columns? 
That takes us to Alice's moves - type 2 and 3.</p> <p><em>Alice's move (type 2):</em></p> <p><a href="https://i.sstatic.net/N7WUF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N7WUF.png" alt="enter image description here" /></a></p> <p>Alice can choose to fill any cell in columns <span class="math-container">$3,4$</span>. This becomes the first cell in a pair of columns filled by Alice and Bob falls back to his <strong>following</strong> strategy.</p> <p><em>Alice's move (type 3):</em></p> <p><a href="https://i.sstatic.net/t16Yz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t16Yz.png" alt="enter image description here" /></a></p> <p>This is also similar to type 2 in that Bob uses the <strong>following</strong> strategy.</p> <p>So, regardless of where Alice fills, Bob can use the following strategy to force the last move in a pair of columns or rows to be his and he can ensure that he fills the last cell in that pair with a value that ensures the pair (of rows or columns) is linearly dependent.</p> <p>While the above example shows adjacent columns, Bob's strategy works for any pair of rows or columns. He ensures he always has the last move in any pair of rows or columns and hence he is guaranteed a win when he finishes any pair of rows or columns. The rest of the moves are redundant.</p> <p>This is guaranteed when <span class="math-container">$n$</span> is odd since he always makes the second move in the game.</p> <hr /> <p><strong>Short proof</strong></p> <p>In at least one pair of rows or columns Bob always makes the last move since he plays the second move in the game and each player alternates. 
Alice cannot leave any pair of rows or columns completely untouched, because Bob could then fill a row or column of that pair with zeros and win by default within <span class="math-container">$2n$</span> moves.</p> <p><em><strong>Notes:</strong></em></p> <ul> <li>I originally mentioned that Bob chooses <span class="math-container">$r_k$</span> and <span class="math-container">$c_x$</span> after the first move by Alice in row <span class="math-container">$k$</span> or column <span class="math-container">$x$</span>. In fact, he doesn't have to make the decision until he fills the last cell in the row or column.</li> </ul>
linear-algebra
<p>Recently, I answered <a href="https://math.stackexchange.com/q/1378132/80762">this question about matrix invertibility</a> using a solution technique I called a &quot;<strong>miracle method</strong>.&quot; The question and answer are reproduced below:</p> <blockquote> <p><strong>Problem:</strong> Let <span class="math-container">$A$</span> be a matrix satisfying <span class="math-container">$A^3 = 2I$</span>. Show that <span class="math-container">$B = A^2 - 2A + 2I$</span> is invertible.</p> <p><strong>Solution:</strong> Suspend your disbelief for a moment and suppose <span class="math-container">$A$</span> and <span class="math-container">$B$</span> were scalars, not matrices. Then, by power series expansion, we would simply be looking for <span class="math-container">$$ \frac{1}{B} = \frac{1}{A^2 - 2A + 2} = \frac{1}{2}+\frac{A}{2}+\frac{A^2}{4}-\frac{A^4}{8}-\frac{A^5}{8} + \cdots$$</span> where the coefficient of <span class="math-container">$A^n$</span> is <span class="math-container">$$ c_n = \frac{1+i}{2^{n+2}} \left((1-i)^n-i (1+i)^n\right). $$</span> But we know that <span class="math-container">$A^3 = 2$</span>, so <span class="math-container">$$ \frac{1}{2}+\frac{A}{2}+\frac{A^2}{4}-\frac{A^4}{8}-\frac{A^5}{8} + \cdots = \frac{1}{2}+\frac{A}{2}+\frac{A^2}{4}-\frac{A}{4}-\frac{A^2}{4} + \cdots $$</span> and by summing the resulting coefficients on <span class="math-container">$1$</span>, <span class="math-container">$A$</span>, and <span class="math-container">$A^2$</span>, we find that <span class="math-container">$$ \frac{1}{B} = \frac{2}{5} + \frac{3}{10}A + \frac{1}{10}A^2. $$</span> Now, what we've just done should be total nonsense if <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are really matrices, not scalars. 
But try setting <span class="math-container">$B^{-1} = \frac{2}{5}I + \frac{3}{10}A + \frac{1}{10}A^2$</span>, compute the product <span class="math-container">$BB^{-1}$</span>, and you'll find that, <strong>miraculously</strong>, this answer works!</p> </blockquote> <p>I discovered this solution technique some time ago while exploring a similar problem in Wolfram <em>Mathematica</em>. However, I have no idea why any of these manipulations should produce a meaningful answer when scalar and matrix inversion are such different operations. <strong>Why does this method work?</strong> Is there something deeper going on here than a serendipitous coincidence in series expansion coefficients?</p>
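<p>(For anyone who wants to check the arithmetic themselves: here is a quick verification in Python with exact rationals, using the companion matrix of $x^3 - 2$ as a concrete $A$ satisfying $A^3 = 2I$. The matrix helpers are hand-rolled so nothing outside the standard library is needed.)</p>

```python
from fractions import Fraction

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def scal(c, X):
    return [[Fraction(c) * x for x in row] for row in X]

def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

I = [[Fraction(i == j) for j in range(3)] for i in range(3)]
# Companion matrix of x^3 - 2: it satisfies A^3 = 2I exactly.
A = [[Fraction(v) for v in row] for row in [[0, 0, 2], [1, 0, 0], [0, 1, 0]]]
A2 = matmul(A, A)
assert matmul(A2, A) == scal(2, I)

B = add(A2, add(scal(-2, A), scal(2, I)))          # B = A^2 - 2A + 2I
Binv = add(scal(Fraction(2, 5), I),
           add(scal(Fraction(3, 10), A), scal(Fraction(1, 10), A2)))
# The "miracle" inverse works on both sides, exactly.
assert matmul(B, Binv) == I and matmul(Binv, B) == I
```

Expanding $(A^2-2A+2I)(\frac{2}{5}I+\frac{3}{10}A+\frac{1}{10}A^2)$ and substituting $A^3=2I$, $A^4=2A$ gives constant term $\frac{2}{10}\cdot 3 - \frac{4}{10} + \frac{8}{10} = 1$ and vanishing coefficients on $A$ and $A^2$, which is exactly what the check confirms.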
<p>The real answer is the set of $n\times n$ matrices forms a Banach algebra - that is, a Banach space with a multiplication that distributes the right way. In the reals, the multiplication is the same as scaling, so the distinction doesn't matter and we don't think about it. But with matrices, scaling and multiplying matrices is different. The point is that there is no miracle. Rather, the argument you gave only uses tools from Banach algebras (notably, you didn't use commutativity). So it generalizes nicely. </p> <p>This kind of trick is used all the time to great effect. One classic example is proving that when $\|A\|&lt;1$ there is an inverse of $1-A$. One takes the argument about geometric series from real analysis, checks that everything works in a Banach algebra, and then you're done. </p>
<p>Think about how you derive the finite version of the geometric series formula for scalars. You write:</p> <p>$$x \sum_{n=0}^N x^n = \sum_{n=1}^{N+1} x^n = \sum_{n=0}^N x^n + x^{N+1} - 1.$$</p> <p>This can be written as $xS=S+x^{N+1}-1$. So you move the $S$ over, and you get $(x-1)S=x^{N+1}-1$. Thus $S=(x-1)^{-1}(x^{N+1}-1)$.</p> <p>There is only one point in this calculation where you needed to be careful about commutativity of multiplication, and that is in the step where you multiply both sides by $(x-1)^{-1}$. In the above I was careful to write this on the <em>left</em>, because $xS$ originally multiplied $x$ and $S$ with $x$ on the left. Thus, provided we do this one multiplication step on the left, everything we did works when $x$ is a member of any ring with identity such that $x-1$ has a multiplicative inverse. </p> <p>As a result, if $A-I$ is invertible, then </p> <p>$$\sum_{n=0}^N A^n = (A-I)^{-1}(A^{N+1}-I).$$</p> <p>Moreover, if $\| A \| &lt; 1$ (in any operator norm), then the $A^{N+1}$ term decays as $N \to \infty$. As a result, the partial sums are Cauchy, and so if the ring in question is also complete with respect to this norm, you obtain</p> <p>$$\sum_{n=0}^\infty A^n = (I-A)^{-1}.$$</p> <p>In particular, in this situation we recover the converse: if $\| A \| &lt; 1$ then $I-A$ is invertible.</p>
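<p>(A numerical sanity check of the last statement, with a small $2\times 2$ matrix of norm less than $1$; pure Python with exact rationals, so the only approximation is truncating the series.)</p>

```python
from fractions import Fraction

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

# A with operator norm < 1, so the geometric (Neumann) series converges.
A = [[Fraction(0), Fraction(1, 2)], [Fraction(1, 4), Fraction(0)]]
I = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]

# Exact inverse of I - A via the 2x2 cofactor formula.
M = [[I[i][j] - A[i][j] for j in range(2)] for i in range(2)]
d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

# Partial sums S = I + A + A^2 + ... + A^60.
S, P = I, I
for _ in range(60):
    P = matmul(P, A)
    S = add(S, P)

err = max(abs(float(S[i][j] - Minv[i][j])) for i in range(2) for j in range(2))
assert err < 1e-15   # the truncated series is already extremely close
```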
combinatorics
<p>A stack of silver coins is on the table. </p> <p>At each step we can either remove a silver coin and write the number of gold coins on one piece of paper, or add a gold coin and write the number of silver coins on another piece of paper. </p> <p>We stop when only gold coins are left.</p> <p>Prove that the sums of the numbers on these two papers are equal.</p> <p>I tried playing the game, and the claim seems to hold every time, but I can't get further than that. Can it be done by induction?</p>
<p>The state of the game can be described by $$ (g,s,G,S), $$ where $g$ is the number of golden coins on the table, $s$ is the number of silver coins on the table, $G$ is the sum of the numbers on the first paper, and $S$ is the sum of the numbers on the second paper. The initial state is $(0,n,0,0)$, and we want to show that if the state of the game is $(g,0,G,S)$, then $G=S$.</p> <p>If we are at $(g_i,s_i,G_i,S_i)$ and add a golden coin, the state changes to $$ (g_{i+1},s_{i+1},G_{i+1},S_{i+1}) = (g_i+1,s_i,G_i,S_i+s_i), $$ and if we remove a silver coin, the state changes to $$ (g_{i+1},s_{i+1},G_{i+1},S_{i+1}) = (g_i,s_i-1,G_i+g_i,S_i). $$</p> <p>One plan to solve the problem is to find an <em>invariant</em>, for example, a function from $(g,s,G,S)$ to integers, such that these transformations do not change the value of that function. Looking at the equations for a while suggests something with $gs$ because that's how we would get changes of size $g$ and $s$. A bit more looking gives us $$ f(g,s,G,S) = gs+G-S. $$ Once we have found the above formula, it is easy to verify that a step does not affect the value of $gs+G-S$. </p> <p>Thus if we start from $f(0,n,0,0)=0$ and end with $f(g,0,G,S) = G-S$, we can see that $G=S$. </p>
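<p>(The invariant $gs+G-S$ is easy to confirm by simulating random play; a short Python sketch, where the move choice at each step is arbitrary.)</p>

```python
import random

def play(n, rng):
    """Random legal play starting from n silver coins; returns (G, S)."""
    g, s, G, S = 0, n, 0, 0
    while s > 0:
        if rng.random() < 0.5:
            # Remove a silver coin; write the number of gold coins.
            G += g
            s -= 1
        else:
            # Add a gold coin; write the number of silver coins.
            S += s
            g += 1
        # The invariant: g*s + G - S never changes (it starts at 0).
        assert g * s + G - S == 0
    return G, S

rng = random.Random(2024)
for n in range(1, 10):
    for _ in range(100):
        G, S = play(n, rng)
        assert G == S   # at the end s = 0, so the invariant gives G = S
```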
<p>When you add a gold coin, you write $n$ for the number of silver coins left.</p> <p>Every time you remove one of those $n$ silver coins, that gold coin gets counted once as part of the number of gold coins - a total of $n$ times, since all the silver coins are eventually removed.</p>
combinatorics
<p>Let <span class="math-container">$G=(V,E)$</span> be a directed acyclic graph. Define the set of all directed paths in <span class="math-container">$G$</span> by <span class="math-container">$\Gamma$</span>. Given a subset <span class="math-container">$W\subseteq V$</span>, let <span class="math-container">$\Gamma_W\subseteq \Gamma$</span> be the set of all paths <span class="math-container">$\gamma\in\Gamma$</span> supported on <span class="math-container">$V\backslash W$</span> (i.e all vertices in <span class="math-container">$\gamma$</span> belong to <span class="math-container">$V\backslash W$</span>). Now define <span class="math-container">$l(W)$</span> to be: <span class="math-container">$$l(W)=\max_{\gamma\in \Gamma_W} |\gamma|$$</span> Where <span class="math-container">$|\gamma|$</span> is the number of vertices in <span class="math-container">$\gamma$</span>.</p> <p>I want to prove (or disprove) the following claim:</p> <p><span class="math-container">${\bf Claim:}$</span> For every <span class="math-container">$\epsilon&gt;0$</span> and every <span class="math-container">$k&gt;0$</span>, there are constants <span class="math-container">$L$</span> and <span class="math-container">$N$</span> such that for any directed acyclic graph <span class="math-container">$G=(V, E)$</span> satisfying <span class="math-container">$|V|&gt;N$</span> with the sum of incoming and outgoing degrees bounded by <span class="math-container">$k$</span>, there exists a subset <span class="math-container">$W\subseteq V$</span> such that <span class="math-container">$\frac{|W|}{|V|}&lt;\epsilon$</span> and <span class="math-container">$l(W)&lt;L$</span>.</p> <p>The claim is true for directed trees (see edit 1 for a proof) but the same proof idea fails to work in more general DAGs. Moreover, the statement fails to be true if we remove the constant degree requirement for <span class="math-container">$G$</span>. 
Indeed, the maximal DAG compatible with the ordering of the vertices indexed from 1 to n (an edge from <span class="math-container">$i$</span> to <span class="math-container">$j$</span> whenever <span class="math-container">$i&lt;j$</span>) cannot be &quot;blocked&quot; for any <span class="math-container">$\epsilon&gt;0$</span> by any set <span class="math-container">$W$</span> of size linear in <span class="math-container">$n$</span>.</p> <p>Any direction or idea would be welcome.</p> <p>Edit 1:</p> <p>For trees, a standard proof goes like this: for <span class="math-container">$0\leq i\leq L-1$</span>, define <span class="math-container">$W_L^i$</span> to be the set of all vertices reachable from the root by a directed path of length <span class="math-container">$i \pmod L$</span>. Since the graph is a tree, such a path is unique for every vertex, and therefore, for a given <span class="math-container">$L$</span>, the sets <span class="math-container">$\{W_L^i\}_{0\leq i\leq L-1}$</span> give a partition of <span class="math-container">$V$</span>. Therefore, choosing <span class="math-container">$L=\frac{1}{\epsilon}$</span>, there is some <span class="math-container">$i_0$</span> such that <span class="math-container">$|W_L^{i_0}|$</span> is at most <span class="math-container">$\frac{|V|}{L}=\epsilon |V|$</span>. It is left to show that every <span class="math-container">$W_L^i$</span> is indeed <span class="math-container">$L$</span>-blocking, but this is trivial since any step in a directed path down the tree increases the distance from the root by exactly 1, so the longest path containing no vertices from <span class="math-container">$W_L^i$</span> has length at most <span class="math-container">$L-1$</span> (connecting two consecutive levels of <span class="math-container">$W_L^i$</span>).</p> <p>Edit 2:</p> <p>In general, the claim is true for any DAG in the special case of <span class="math-container">$\epsilon = \frac{2k}{2k+1}$</span> and <span class="math-container">$L=1$</span>. 
To see this, consider the following algorithm:</p> <p>1 - Choose a vertex <span class="math-container">$v$</span> in the graph that still has neighbours. Keep <span class="math-container">$v$</span>, and remove all of its neighbours (in both directions) from the graph.</p> <p>2 - If any non-isolated vertex is left, go back to 1; otherwise stop.</p> <p>The resulting graph is completely disconnected (<span class="math-container">$L=1$</span>), and we removed at most an <span class="math-container">$\epsilon=\frac{2k}{2k+1}$</span> fraction of the vertices from the graph. The claim follows.</p> <p>Edit 3:</p> <p>As Misha Lavrov showed, the previous bound can be made tighter, and we can prove the claim for <span class="math-container">$\epsilon=\frac{k}{k+1}$</span>. I discovered that this bound is not tight when the DAG has total degree bounded by 3. In this case, I will prove the claim for any <span class="math-container">$\epsilon&gt;\frac{1}{2}$</span>, where the previous bound gives only <span class="math-container">$\epsilon=\frac{2}{3}$</span>. Define the in-degree and out-degree of a vertex <span class="math-container">$v$</span> in <span class="math-container">$G$</span> by <span class="math-container">$in(v)$</span> and <span class="math-container">$out(v)$</span> respectively. By assumption, for all <span class="math-container">$v \in V$</span>, <span class="math-container">$in(v)+out(v)\leq 3$</span>. Define 4 sets <span class="math-container">$\{V_i\}_{i=0}^3$</span> by: <span class="math-container">$$V_i=\{v\in V | in(v)=i\}$$</span></p> <p>Obviously, <span class="math-container">$\{V_i\}_{i=0}^3$</span> forms a partition of <span class="math-container">$V$</span>. Therefore, at least one of the sets <span class="math-container">$V_1$</span>, <span class="math-container">$V_2$</span> has cardinality at most <span class="math-container">$\frac{n}{2}$</span>; w.l.o.g., assume it is <span class="math-container">$V_1$</span>. 
Let <span class="math-container">$G'$</span> be the subgraph of <span class="math-container">$G$</span> induced by <span class="math-container">$V_2$</span>. Obviously, for all <span class="math-container">$v\in V_2$</span>, <span class="math-container">$out(v)\leq 1$</span>, and therefore <span class="math-container">$G'$</span> is a disjoint union of directed trees, each with a single sink. Using the proof for trees, for any <span class="math-container">$\epsilon&gt;0$</span> we can find a subset <span class="math-container">$W\subseteq V_2$</span> such that <span class="math-container">$\frac{|W|}{|V_2|}&lt;\epsilon$</span> and <span class="math-container">$W$</span> is <span class="math-container">$\frac{1}{\epsilon}$</span>-blocking in <span class="math-container">$G'$</span>. Finally, define <span class="math-container">$W'= V_1 \cup W$</span>. On one hand, <span class="math-container">$|W'|$</span> is upper bounded by <span class="math-container">$(\frac{1}{2}+\epsilon)n$</span>; on the other hand, <span class="math-container">$W'$</span> is <span class="math-container">$(\frac{1}{\epsilon}+2)$</span>-blocking: a directed path in <span class="math-container">$G$</span> either stays in <span class="math-container">$V_2$</span>, where it is blocked by <span class="math-container">$W$</span>, or it leaves <span class="math-container">$V_2$</span>, where it is either blocked by <span class="math-container">$V_1$</span> or can gain at most one starting vertex from <span class="math-container">$V_0$</span> and one ending vertex from <span class="math-container">$V_3$</span>.</p> <p>This proves the claim.</p> <p>PS. <a href="https://mathoverflow.net/questions/309293/blocking-directed-paths-on-a-dag-with-a-linear-number-of-vertex-defects">Crossposted</a> at MO.</p>
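<p>PPS. The greedy procedure from Edit 2 is easy to test; the sketch below (the random bounded-degree DAG generator is just for illustration) runs it and checks that the kept vertices span no edges at all, and that at most a $\frac{k}{k+1}$ fraction of the vertices was removed — the tighter bound from Edit 3, which holds because a vertex of total degree at most $k$ has at most $k$ neighbours.</p>

```python
import random

def random_bounded_dag(n, k, rng):
    """Random DAG on 0..n-1 with in-degree + out-degree <= k at each vertex."""
    deg = [0] * n
    edges = set()
    for _ in range(4 * n * k):
        u, v = sorted(rng.sample(range(n), 2))   # orient u -> v, so no cycles
        if deg[u] < k and deg[v] < k and (u, v) not in edges:
            edges.add((u, v))
            deg[u] += 1
            deg[v] += 1
    return edges

def greedy_block(n, edges):
    """The Edit-2 greedy: keep a vertex, discard all of its neighbours."""
    nbrs = {v: set() for v in range(n)}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    alive, kept = set(range(n)), set()
    while alive:
        v = alive.pop()          # keep v ...
        kept.add(v)
        alive -= nbrs[v]         # ... and remove all of its neighbours
    return kept

rng = random.Random(1)
n, k = 200, 5
edges = random_bounded_dag(n, k, rng)
kept = greedy_block(n, edges)
# The kept vertices span no edge, i.e. W = V \ kept is 1-blocking ...
assert not any(u in kept and v in kept for u, v in edges)
# ... and each kept vertex accounts for at most k removed ones.
assert len(kept) * (k + 1) >= n
```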
<p>I know it is a bad style but I have no time for proper typing now (maybe I'll do it later), so <a href="https://drive.google.com/file/d/1FoH0Rj90ZAS-Oo-4xvHeS8CrQJ_Zu5yh/view?usp=sharing" rel="nofollow noreferrer">here</a> is a set of handwritten notes with a counterexample. Questions are welcome, as usual :)</p>
<p>The following is a writeup of Fedja's solution (with some details added).</p> <hr /> <p><span class="math-container">$\Large{\text{The Problem}}$</span></p> <p>For every <span class="math-container">$\epsilon &gt; 0$</span> and <span class="math-container">$d \ge 1$</span>, are there <span class="math-container">$L,N$</span> so that for any directed acyclic graph with at least <span class="math-container">$N$</span> vertices and max-degree at most <span class="math-container">$d$</span>, one can remove at most <span class="math-container">$\epsilon N$</span> vertices to destroy all paths of length <span class="math-container">$L$</span>?</p> <p>Here, &quot;max-degree&quot; is the maximum over all the out-degrees and the in-degrees of the vertices.</p> <hr /> <p><span class="math-container">$\Large{\text{The Counterexample}}$</span></p> <p>Let <span class="math-container">$k = 10^{100}$</span>. We disprove the problem for <span class="math-container">$\epsilon := 10^{-5}$</span> and <span class="math-container">$d := 2k/\epsilon = 2\cdot 10^{105}$</span>.</p> <p>Take <span class="math-container">$L \ge 1$</span>. We shall find arbitrarily large directed acyclic graphs with max-degree at most <span class="math-container">$d$</span> so that removing any <span class="math-container">$\epsilon N$</span> vertices does not destroy all paths of length <span class="math-container">$L$</span>.</p> <p>Take <span class="math-container">$N$</span> a large power of <span class="math-container">$2$</span>. Put a random graph on vertices <span class="math-container">$1,2,\dots,N$</span> by including the directed edge <span class="math-container">$u \to v$</span>, for <span class="math-container">$u &lt; v$</span>, with probability <span class="math-container">$\frac{\alpha}{v-u}$</span>, where <span class="math-container">$\alpha$</span> satisfies <span class="math-container">$\sum_{v=2}^N \frac{\alpha}{v-1} = k$</span>. 
Note <span class="math-container">$\alpha \approx \frac{k}{\log N}$</span> (so indeed <span class="math-container">$\frac{\alpha}{v-u} \in [0,1]$</span>). Call the formed random graph <span class="math-container">$\mathcal{G}_N$</span>.</p> <p>Note the expected number of edges in the graph is <span class="math-container">$\sum_{u=1}^N \sum_{v = u+1}^N \frac{\alpha}{v-u} \approx \alpha N\log(N) \approx kN$</span>. So with probability at least <span class="math-container">$\frac{1}{3}$</span>, say, the number of edges in our random graph is at most <span class="math-container">$2kN$</span>. Assume that our random graph has at most <span class="math-container">$2kN$</span> edges. Remove all vertices whose in-degree is at least <span class="math-container">$\frac{k}{\epsilon}$</span>, and remove all vertices whose out-degree is at least <span class="math-container">$\frac{k}{\epsilon}$</span>. We removed at most <span class="math-container">$4\epsilon N$</span> vertices.</p> <p>The remaining (random) graph, denoted <span class="math-container">$\overline{\mathcal{G}}_N$</span>, is such that all vertices have in-degree plus out-degree bounded above by <span class="math-container">$\frac{2k}{\epsilon} = d$</span>. 
We will prove that, with probability <span class="math-container">$1-o_{N \to \infty}(1)$</span> (conditioned on the at-least-<span class="math-container">$1/3$</span> probability event of the initial graph having at most <span class="math-container">$2kN$</span> edges) we cannot remove <span class="math-container">$\epsilon N$</span> vertices to block all paths of length <span class="math-container">$L$</span>.</p> <hr /> <p><span class="math-container">$\Large{\text{Proof that the Counterexample Works}}$</span></p> <p>For the proof that the counterexample works, we may keep the vertices with in-degree or out-degree at least <span class="math-container">$\frac{k}{\epsilon}$</span> (call them for a brief moment <span class="math-container">$\textit{popular}$</span> vertices). Indeed, if we can block all paths of length <span class="math-container">$L$</span> by removing <span class="math-container">$\epsilon N$</span> vertices in the graph with the popular vertices removed, then we can block all paths of length <span class="math-container">$L$</span> by removing <span class="math-container">$5\epsilon N$</span> vertices in the graph with the popular vertices not removed. Let <span class="math-container">$c = 10^{-3}$</span>. 
Note <span class="math-container">$c &gt; 30\epsilon$</span>.</p> <p><span class="math-container">$\textbf{Definition}$</span>: Call a graph <em><span class="math-container">$L$</span>-tame</em> if its vertices can be partitioned into <span class="math-container">$L$</span> sets <span class="math-container">$V_0,\dots,V_{L-1}$</span> such that (1) the number of edges <span class="math-container">$u \to v$</span> with <span class="math-container">$u \in V_i, v \in V_j$</span> for <span class="math-container">$i \le j$</span> is at most <span class="math-container">$ckN$</span> and (2) for every <span class="math-container">$1 \le i \le L-1$</span> and every <span class="math-container">$u \in V_i$</span>, there exists an edge <span class="math-container">$u \to v$</span> with <span class="math-container">$v \in V_{i-1}$</span>.</p> <p>We prove the following lemma in the next section.</p> <blockquote> <p><span class="math-container">$\textbf{Main Lemma}$</span>: For every <span class="math-container">$L \in \mathbb{N}$</span>, <span class="math-container">$\Pr[\mathcal{G}_N$</span> is <span class="math-container">$L$</span>-tame<span class="math-container">$] \to 0$</span> as <span class="math-container">$N \to \infty$</span>.</p> </blockquote> <p>Assuming the Main Lemma, we now go on to prove that the counterexample works (with high probability). 
We begin with a lemma.</p> <blockquote> <p><span class="math-container">$\textbf{Lemma 1}$</span>: In <span class="math-container">$\mathcal{G}_N$</span>, the probability that any fixed <span class="math-container">$\epsilon N$</span> vertices have more than <span class="math-container">$ckN$</span> outgoing edges is at most <span class="math-container">$e^{3k\epsilon N}e^{-ckN}$</span>.</p> </blockquote> <blockquote> <p>Proof: By Markov's inequality, letting <span class="math-container">$E_o$</span> denote the number of outgoing edges from our fixed <span class="math-container">$\epsilon N$</span> vertices, it suffices to show that <span class="math-container">$\mathbb{E}[e^{E_o}] \le e^{3k \epsilon N}$</span>. And this follows from the fact that vertex <span class="math-container">$1$</span> has the most out-edges in expectation: <span class="math-container">\begin{align*} \mathbb{E}[e^{E_o}] &amp;\le \left(\prod_{v=2}^N (1-\frac{\alpha}{v-1}+\frac{\alpha}{v-1}e)\right)^{\epsilon N} \\ &amp;\le e^{\left(\sum_{v=2}^N \frac{2\alpha}{v-1}\right)\epsilon N} \\ &amp;\le e^{3k\epsilon N}.\end{align*}</span> <span class="math-container">$\square$</span></p> </blockquote> <blockquote> <p><span class="math-container">$\textbf{Lemma 2}$</span>: The probability that <span class="math-container">$\mathcal{G}_N$</span> has <span class="math-container">$5\epsilon N$</span> vertices so that removing them destroys all paths of length <span class="math-container">$L$</span> is at most <span class="math-container">$o(1)$</span> as <span class="math-container">$N \to \infty$</span>.</p> </blockquote> <blockquote> <p>Proof: Suppose that we can block <span class="math-container">$5\epsilon N$</span> vertices so that the longest path is of length at most <span class="math-container">$L-1$</span>. 
By Lemma 1, the probability that there are <span class="math-container">$5\epsilon N$</span> vertices that have more than <span class="math-container">$ckN$</span> outgoing edges is at most <span class="math-container">${N \choose 5\epsilon N} e^{15k\epsilon N}e^{-ckN} \le e^Ne^{15k\epsilon N}e^{-ckN} \le e^{-\frac{1}{2}ckN}$</span>, which is <span class="math-container">$o(1)$</span>, so we may suppose that no set of <span class="math-container">$5\epsilon N$</span> vertices has more than <span class="math-container">$ckN$</span> outgoing edges. We then argue that our graph is <span class="math-container">$L$</span>-tame, which will finish the proof of Lemma 2, in light of the Main Lemma.</p> </blockquote> <blockquote> <p>Define <span class="math-container">$V_0$</span> to be the blocked vertices and the vertices of outdegree <span class="math-container">$0$</span>, and, for <span class="math-container">$1 \le i \le L-1$</span>, <span class="math-container">$V_i$</span> to be the vertices <span class="math-container">$v$</span> such that the longest unblocked path starting at <span class="math-container">$v$</span> has length <span class="math-container">$i$</span> (i.e., the number of edges in the longest path is <span class="math-container">$i$</span>). Note that since the graph is acyclic, there can be no edges from <span class="math-container">$V_i$</span> to <span class="math-container">$V_j$</span> if <span class="math-container">$1 \le i \le j$</span>. Therefore, the only edges from <span class="math-container">$V_i$</span> to <span class="math-container">$V_j$</span> with <span class="math-container">$i \le j$</span> are out-edges from the blocked points. Hence, property (1) of an <span class="math-container">$L$</span>-tame graph is satisfied. And property (2) is satisfied by the definition of the <span class="math-container">$V_i$</span>'s.
<span class="math-container">$\square$</span></p> </blockquote> <p>The proof that the counterexample works now immediately follows: conditioning on an event of probability at least <span class="math-container">$1/3$</span> inflates probabilities by a factor of at most <span class="math-container">$3$</span>, so the probability that there are <span class="math-container">$5\epsilon N$</span> vertices that can be removed to destroy all paths of length <span class="math-container">$L$</span> remains <span class="math-container">$o(1)$</span> as <span class="math-container">$N \to \infty$</span>.</p> <hr /> <p><span class="math-container">$\Large{\text{Proof of Main Lemma}}$</span></p> <p>We use the crude union bound <span class="math-container">$$\Pr[\mathcal{G}_N \text{ is } \text{$L$-tame}] \le \sum_{[N] = V_0\sqcup\dots\sqcup V_{L-1}} \Pr[V_0,\dots,V_{L-1} \text{ satisfies (1),(2)}].$$</span> For fixed <span class="math-container">$V_0,\dots,V_{L-1}$</span>, the conditions (1) and (2) in the definition of <span class="math-container">$L$</span>-tame refer to disjoint sets of edges, so by independence, we have <span class="math-container">$$\Pr[V_0,\dots,V_{L-1} \text{ satisfies (1),(2)}] = \Pr[V_0,\dots,V_{L-1} \text{ satisfy (1)}]\times \Pr[V_0,\dots,V_{L-1} \text{ satisfy (2)}].$$</span></p> <p>In the next section, we show <span class="math-container">$$\Pr[V_0,\dots,V_{L-1} \text{ satisfy (1)}] \le e^{-kN/160}$$</span> for each (fixed) partition <span class="math-container">$V_0,\dots,V_{L-1}$</span> of <span class="math-container">$[N]$</span>.
In the section after, we show that <span class="math-container">$$\sum_{[N] = V_0\sqcup\dots\sqcup V_{L-1}} \Pr[V_0,\dots,V_{L-1} \text{ satisfy (2)}] \le (N+1)^L e^{2N} k^N.$$</span> Combining the results of the two sections gives that <span class="math-container">$$\Pr[\mathcal{G}_N \text{ is } \text{$L$-tame}] \le e^{-kN/160}(N+1)^L e^{2N}k^N,$$</span> which clearly goes to <span class="math-container">$0$</span> as <span class="math-container">$N \to \infty$</span>.</p> <hr /> <p><span class="math-container">$\Large{\text{Bounding $\Pr[V_0,\dots,V_{L-1} \text{ satisfy (1)}]$}}$</span></p> <p>In this section, fix a partition <span class="math-container">$[N] = V_0\sqcup\dots\sqcup V_{L-1}$</span>. We shall show <span class="math-container">$$\Pr[V_0,\dots,V_{L-1} \text{ satisfy (1)}] \le e^{-kN/160}.$$</span> To this end, define an edge <span class="math-container">$\vec{e}$</span> to be <em>bad</em> if it connects <span class="math-container">$u \to v$</span> with <span class="math-container">$u \in V_i$</span> and <span class="math-container">$v \in V_j$</span> where <span class="math-container">$i \le j$</span>. Let <span class="math-container">$p_{\vec{e}} = \frac{\alpha}{v-u}$</span> be the probability of inclusion of the edge <span class="math-container">$\vec{e}$</span>.</p> <blockquote> <p><span class="math-container">$\textbf{Lemma 3}$</span>: It holds that <span class="math-container">$$\sum_{\vec{e} \text{ bad}} p_{\vec{e}} \ge \left[\frac{1}{16}\log N + 3\sum_{i=0}^{L-1} P_i\log P_i\right]\alpha N,$$</span> where <span class="math-container">$P_i := \frac{|V_i|}{N}$</span>.</p> </blockquote> <blockquote> <p>Proof: We induct on <span class="math-container">$N$</span> (for <span class="math-container">$N$</span> a power of <span class="math-container">$2$</span>). For <span class="math-container">$N=1$</span>, the inequality becomes <span class="math-container">$0 \ge 0$</span>.
Now suppose the inequality holds for <span class="math-container">$N$</span> (for any partition), and let's work with a partition of <span class="math-container">$[2N]$</span>. Let <span class="math-container">$$P_i^- = \frac{|V_i\cap[1,N]|}{N}$$</span> <span class="math-container">$$P_i^+ = \frac{|V_i\cap[N+1,2N]|}{N},$$</span> so that, by the induction hypothesis, <span class="math-container">$$\sum_{\substack{\vec{e} \text{ bad} \\ \vec{e} \text{ internal}}} p_{\vec{e}} \ge \left[\frac{2}{16}\log N+3\sum_{i=0}^{L-1}\left(P_i^-\log P_i^- + P_i^+\log P_i^+\right)\right]\alpha N,$$</span> where an edge <span class="math-container">$\vec{e}$</span> is <em>internal</em> if it lies entirely in <span class="math-container">$[1,N]$</span> or in <span class="math-container">$[N+1,2N]$</span>, and <em>external</em> otherwise. Above, we critically used the fact that the edge probability <span class="math-container">$\frac{\alpha}{v-u}$</span> depends only on <span class="math-container">$v-u$</span>, so the induction hypothesis applies verbatim to the shifted block <span class="math-container">$[N+1,2N]$</span>. Now, using the trivial <span class="math-container">$\frac{\alpha}{v-u} \ge \frac{\alpha}{2N}$</span>, we have <span class="math-container">\begin{align*} \sum_{\substack{\vec{e} \text{ bad} \\ \vec{e} \text{ external}}} p_{\vec{e}} &amp;\ge \frac{\alpha}{2N}\sum_{i \le j} |V_i\cap [1,N]|\cdot |V_j\cap [N+1,2N]| \\ &amp;= \frac{1}{2}\alpha N\sum_{i \le j} P_i^-P_j^+.\end{align*}</span> Thus, <span class="math-container">$$\sum_{\vec{e} \text{ bad}} p_{\vec{e}} \ge \left[\frac{1}{16}\log N + \frac{3}{2}\sum_{i=0}^{L-1}\left(P_i^-\log P_i^- + P_i^+\log P_i^+\right)+\frac{1}{4}\sum_{i \le j} P_i^-P_j^+\right]\alpha(2N),$$</span> and we wish to obtain <span class="math-container">$$\sum_{\vec{e} \text{ bad}} p_{\vec{e}} \ge \left[\frac{1}{16}\log(2N)+3\sum_{i=0}^{L-1} P_i\log P_i\right]\alpha(2N).$$</span> So, multiplying through by <span class="math-container">$2$</span>, we just need to show <span class="math-container">$$3\sum_{i=0}^{L-1}\left(P_i^-\log P_i^-+P_i^+\log P_i^+-2 P_i\log P_i\right)+\frac{1}{2}\sum_{i \le j} P_i^-P_j^+ \ge \frac{1}{8}\log 2.$$</span> Note,
for <span class="math-container">$x \in [-1,1]$</span>, <span class="math-container">$(1+x)\log(1+x)+(1-x)\log(1-x) \ge x^2$</span> (Taylor expand), so, for any <span class="math-container">$x,y \ge 0$</span>, we have <span class="math-container">\begin{align*} x\log x+y\log y-2(\frac{x+y}{2})\log(\frac{x+y}{2}) &amp;= \frac{x+y}{2}\left[\frac{2x}{x+y}\log(\frac{2x}{x+y})+\frac{2y}{x+y}\log(\frac{2y}{x+y})\right] \\ &amp;\ge \frac{x+y}{2}\left(\frac{x-y}{x+y}\right)^2 \\ &amp;= \frac{(x-y)^2}{2(x+y)}.\end{align*}</span> Taking <span class="math-container">$x = P_i^-$</span> and <span class="math-container">$y = P_i^+$</span>, and noting <span class="math-container">$P_i = \frac{P_i^-+P_i^+}{2}$</span>, we thus obtain <span class="math-container">$$P_i^-\log P_i^-+P_i^+\log P_i^+ - 2P_i\log P_i \ge \frac{(P_i^--P_i^+)^2}{4P_i},$$</span> so it suffices to show <span class="math-container">$$3\sum_i \frac{(P_i^--P_i^+)^2}{4P_i}+\frac{1}{2}\sum_{i \le j} P_i^-P_j^+ \ge \frac{1}{8}\log 2.$$</span> Let <span class="math-container">$$I_1 = \{i : P_i^+ \le \frac{1}{2}P_i^-\}$$</span> <span class="math-container">$$I_2 = \{i : P_i^+ &gt; \frac{1}{2} P_i^-\}.$$</span> For <span class="math-container">$i \in I_1$</span>, we have <span class="math-container">$$P_i^--P_i^+ \ge \frac{1}{2} P_i^-$$</span> and <span class="math-container">$$P_i = \frac{P_i^-+P_i^+}{2} \le \frac{3}{4}P_i^-,$$</span> so <span class="math-container">$$3\sum_i \frac{(P_i^--P_i^+)^2}{4P_i} \ge 3\sum_{i \in I_1} \frac{\frac{1}{4}(P_i^-)^2}{3P_i^-} = \frac{1}{4}\sum_{i \in I_1} P_i^-.$$</span> Also, <span class="math-container">$$\frac{1}{2}\sum_{i \le j} P_i^- P_j^+ \ge \frac{1}{2}\sum_{\substack{i \le j \\ i,j \in I_2}} P_i^-P_j^+ \ge \frac{1}{4}\sum_{\substack{i \le j \\ i,j \in I_2}} P_i^-P_j^- \ge \frac{1}{8}\left(\sum_{i \in I_2} P_i^-\right)^2.$$</span> Letting <span class="math-container">$x = \sum_{i \in I_1} P_i^-$</span> and noting <span class="math-container">$\sum_{i \in I_2} P_i^- = 1-x$</span>, we obtain <span class="math-container">$$3\sum_i \frac{(P_i^--P_i^+)^2}{4P_i}+\frac{1}{2}\sum_{i \le j} P_i^-P_j^+ \ge \frac{1}{4}x+\frac{1}{8}(1-x)^2 = \frac{1}{8}(1+x^2) \ge \frac{1}{8}
\ge \frac{\log 2}{8}.$$</span> We are done. <span class="math-container">$\square$</span></p> </blockquote> <blockquote> <p><span class="math-container">$\textbf{Corollary}$</span>: The probability that there are at most <span class="math-container">$ckN$</span> bad edges is at most <span class="math-container">$e^{-\frac{kN}{160}}$</span>.</p> </blockquote> <blockquote> <p>Proof: Let <span class="math-container">$\mathcal{P} = \sum_{\vec{e} \text{ bad}} p_{\vec{e}}$</span> denote the expected number of bad edges. By Lemma 3, together with <span class="math-container">$\sum_{i=0}^{L-1} P_i\log P_i \ge -\log L$</span> (the entropy of a distribution on <span class="math-container">$L$</span> values is at most <span class="math-container">$\log L$</span>), we have <span class="math-container">$$\mathcal{P} \ge \left[\frac{1}{16}\log N-3\log L\right]\alpha N,$$</span> so for <span class="math-container">$N$</span> large enough, we have <span class="math-container">$$\mathcal{P} \ge \frac{1}{20}(\log N)\alpha N = \frac{kN}{20}.$$</span> Now, let <span class="math-container">$|E_b|$</span> denote the (random) number of bad edges. For <span class="math-container">$\tau &gt; 0$</span>, using independence, we have <span class="math-container">$$\mathbb{E} e^{\tau(\mathcal{P}-|E_b|)} = \prod_{\vec{e} \text{ bad}} \left[(1-p_{\vec{e}})e^{p_{\vec{e}}\tau}+p_{\vec{e}}e^{(p_{\vec{e}}-1)\tau}\right].$$</span></p> </blockquote> <blockquote> <p>We claim that <span class="math-container">$f(\tau) := (1-p)e^{p\tau}+pe^{(p-1)\tau} \le e^{\frac{p}{2}\tau^2}$</span> for all <span class="math-container">$\tau \in [0,1]$</span>, for any <span class="math-container">$p \in [0,1]$</span>.
Indeed, since <span class="math-container">$f(0)=1$</span> and <span class="math-container">$f'(0)=0$</span>, Taylor's theorem gives <span class="math-container">\begin{align*} f(\tau) &amp;\le 1+\frac{1}{2}\left(\max_{\xi \in [0,\tau]} f''(\xi)\right)\tau^2 \\ &amp;= 1+\frac{1}{2}\max_{\xi \in [0,\tau]} \left[(1-p)p^2e^{p\xi}+p(1-p)^2e^{(p-1)\xi}\right]\tau^2 \\ &amp;\le 1+\frac{1}{2}\left(\max_{\xi \in [0,\tau]} p(1-p)e^{p\xi}\right)\tau^2 \\ &amp;\le 1+\frac{p}{2}\tau^2 \\ &amp;\le e^{\frac{p}{2}\tau^2},\end{align*}</span> where the third line uses <span class="math-container">$e^{(p-1)\xi} \le e^{p\xi}$</span> and the fourth uses <span class="math-container">$(1-p)e^{p\xi} \le (1-p)e^{p} \le 1$</span> for <span class="math-container">$\xi \in [0,\tau] \subseteq [0,1]$</span>.</p> </blockquote>
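As a quick numerical sanity check of this claim (a sketch over a grid of <span class="math-container">$(\tau, p)$</span> values with an arbitrary resolution, not a substitute for the derivation above):

```python
import math

# Check f(tau) = (1-p) e^{p tau} + p e^{(p-1) tau} <= e^{p tau^2 / 2}
# on a grid of (tau, p) in [0,1]^2.
def f(tau, p):
    return (1 - p) * math.exp(p * tau) + p * math.exp((p - 1) * tau)

violations = [
    (t / 100, p / 100)
    for t in range(101)
    for p in range(101)
    if f(t / 100, p / 100) > math.exp((p / 100) * (t / 100) ** 2 / 2) + 1e-12
]
assert violations == []  # no grid point violates the claimed bound
```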
Since there are at most <span class="math-container">$(N+1)^L$</span> choices for <span class="math-container">$(N_0,\dots,N_{L-1})$</span>, it suffices to show <span class="math-container">$$\sum_{\substack{[N] = V_0\sqcup\dots\sqcup V_{L-1} \\ |V_i| = N_i \hspace{1.5mm} \forall 0 \le i \le L-1}} \Pr[V_0,\dots,V_{L-1} \text{ satisfy (2)}] \le e^{2N} k^N.$$</span> Note that we may rewrite <span class="math-container">$$\sum_{\substack{[N] = V_0\sqcup\dots\sqcup V_{L-1} \\ |V_i| = N_i \hspace{1.5mm} \forall 0 \le i \le L-1}} \Pr[V_0,\dots,V_{L-1} \text{ satisfy (2)}]$$</span> <span class="math-container">$$ = \sum_{V_0 \subseteq [N], |V_0| = N_0} \sum_{\substack{V_1 \subseteq [N], |V_1| = N_1 \\ V_1 \cap V_0 = \emptyset}} \dots \sum_{\substack{V_{L-1} \subseteq [N], |V_{L-1}| = N_{L-1} \\ V_{L-1} \cap V_j = \emptyset \hspace{1.5mm} \forall 0 \le j \le L-2}} \Pr[V_0,\dots,V_{L-1} \text{ satisfy (2)}].$$</span></p> <p>We write <span class="math-container">$V_{j+1} \rightrightarrows V_j$</span> if each vertex in <span class="math-container">$V_{j+1}$</span> has an edge to <span class="math-container">$V_j$</span>, so that the above becomes <span class="math-container">$$\sum_{V_0 \subseteq [N], |V_0| = N_0} \sum_{\substack{V_1 \subseteq [N], |V_1| = N_1 \\ V_1 \cap V_0 = \emptyset}} \dots \sum_{\substack{V_{L-1} \subseteq [N], |V_{L-1}| = N_{L-1} \\ V_{L-1} \cap V_j = \emptyset \hspace{1.5mm} \forall 0 \le j \le L-2}} \Pr[V_{j+1} \rightrightarrows V_j \hspace{1.5mm} \hspace{1.5mm} \forall 0 \le j \le L-2],$$</span> which by repeated application of independence is equal to <span class="math-container">$$\sum_{V_0 \subseteq [N], |V_0| = N_0} \sum_{\substack{V_1 \subseteq [N], |V_1| = N_1 \\ V_1 \cap V_0 = \emptyset}} \dots \sum_{\substack{V_{L-1} \subseteq [N], |V_{L-1}| = N_{L-1} \\ V_{L-1} \cap V_j = \emptyset \hspace{1.5mm} \forall 0 \le j \le L-2}} \prod_{j=0}^{L-2} \Pr[V_{j+1} \rightrightarrows V_j].$$</span></p> <blockquote> <p><span 
class="math-container">$\textbf{Lemma 4}$</span>: Suppose <span class="math-container">$V_0,\dots,V_j$</span> are fixed, pairwise-disjoint subsets of <span class="math-container">$[N]$</span> with <span class="math-container">$|V_i| = N_i$</span> for <span class="math-container">$0 \le i \le j$</span>. Then <span class="math-container">$$\sum_{\substack{V_{j+1} \subseteq [N], |V_{j+1}| = N_{j+1} \\ V_{j+1} \cap V_i = \emptyset \hspace{1.5mm} \forall 0 \le i \le j}} \Pr[V_{j+1} \rightrightarrows V_j] \le e^{N_{j+1}}k^{N_{j+1}}\left(\frac{N_j}{N_{j+1}}\right)^{N_{j+1}}.$$</span></p> </blockquote> <blockquote> <p>Proof: Since the edges leaving distinct vertices are independent, and by a union bound, <span class="math-container">$$\Pr[V_{j+1} \rightrightarrows V_j] = \prod_{u \in V_{j+1}}\Pr\left[u \text{ has an edge to } V_j\right] \le \prod_{u \in V_{j+1}}\left(\sum_{v \in V_j} \Pr\left[u \text{ has an edge to } v\right]\right).$$</span> For <span class="math-container">$u \in [N]$</span>, let <span class="math-container">$\sigma(u) = \sum_{v \in V_j} \Pr[u \text{ has an edge to } v]$</span>. Then, since each product of <span class="math-container">$N_{j+1}$</span> distinct factors appears <span class="math-container">$N_{j+1}!$</span> times in the multinomial expansion, <span class="math-container">\begin{align*}\sum_{\substack{V_{j+1} \subseteq [N], |V_{j+1}| = N_{j+1} \\ V_{j+1} \cap V_i = \emptyset \hspace{1.5mm} \forall 0 \le i \le j}} \prod_{u \in V_{j+1}} \sigma(u) &amp;\le \frac{1}{N_{j+1}!}\left(\sum_{u \in \{1,\dots,N\}} \sigma(u)\right)^{N_{j+1}} \\ &amp;\le \frac{1}{N_{j+1}!}\left(kN_j\right)^{N_{j+1}},\end{align*}</span> where the last step uses that the expected number of edges into <span class="math-container">$V_j$</span> is at most <span class="math-container">$kN_j$</span>. The Lemma then follows from <span class="math-container">$N_{j+1}! \ge (N_{j+1}/e)^{N_{j+1}}$</span>, a weak form of Stirling's approximation.
<span class="math-container">$\square$</span></p> </blockquote> <p>Repeated application of Lemma 4 thus gives <span class="math-container">\begin{align*}\sum_{\substack{[N] = V_0\sqcup\dots\sqcup V_{L-1} \\ |V_i| = N_i \hspace{1.5mm} \forall 0 \le i \le L-1}} \Pr[V_0,\dots,V_{L-1} \text{ satisfy (2)}] &amp;\le e^{N_1+\dots+N_{L-1}} k^{N_1+\dots+N_{L-1}}\prod_{j=0}^{L-2} \left(\frac{N_j}{N_{j+1}}\right)^{N_{j+1}} \\ &amp;\le e^N k^N \prod_{j=0}^{L-2} e^{N_j} \\ &amp;\le e^{2N}k^N,\end{align*}</span> where we used <span class="math-container">$N_j/N_{j+1} \le e^{N_j/N_{j+1}}$</span>.</p>
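As an aside, the Lemma 3 inequality is deterministic once the partition is fixed, so it can be spot-checked numerically for small <span class="math-container">$N$</span> a power of <span class="math-container">$2$</span>. A sketch with <span class="math-container">$\alpha = 1$</span> and arbitrary example partitions (not part of the proof):

```python
import math
import random

# Lemma 3: the sum of alpha/(v-u) over bad edges u -> v (u < v, part(u) <= part(v))
# is at least [log(N)/16 + 3 * sum_i P_i log P_i] * alpha * N.
def bad_edge_weight(part, alpha=1.0):
    N = len(part)
    return sum(alpha / (v - u)
               for u in range(1, N + 1)
               for v in range(u + 1, N + 1)
               if part[u - 1] <= part[v - 1])

def lemma3_bound(part, L, alpha=1.0):
    N = len(part)
    P = [part.count(i) / N for i in range(L)]
    entropy_term = sum(p * math.log(p) for p in P if p > 0)
    return (math.log(N) / 16 + 3 * entropy_term) * alpha * N

random.seed(0)
N, L = 64, 4
parts = [[random.randrange(L) for _ in range(N)] for _ in range(5)]
parts.append([0] * N)  # everything in V_0: every edge is bad
assert all(bad_edge_weight(p) >= lemma3_bound(p, L) for p in parts)
```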
linear-algebra
<p>I am helping my brother with Linear Algebra. I am not able to motivate him to understand what the double dual space is. Is there a nice way of explaining the concept? Thanks for your advice, examples and theories.</p>
<p>If $V$ is a finite dimensional vector space over, say, $\mathbb{R}$, the dual of $V$ is the set of linear maps to $\mathbb{R}$. This is a vector space because it makes sense to add functions $(\phi + \psi)(v) = \phi(v) + \psi(v)$ and multiply them by scalars $(\lambda\phi)(v) = \lambda(\phi(v))$ and these two operations satisfy all the usual axioms.</p> <p>If $V$ has dimension $n$, then the dual of $V$, which is often written $V^\vee$ or $V^*$, also has dimension $n$. Proof: pick a basis for $V$, say $e_1, \ldots, e_n$. Then for each $i$ there is a unique linear function $\phi_i$ such that $\phi_i(e_i) = 1$ and $\phi_i(e_j) = 0$ whenever $i \neq j$. It's a good exercise to see that these maps $\phi_i$ are linearly independent and span $V^*$.</p> <p>So given a basis for $V$ we have a way to get a basis for $V^*$. It's true that $V$ and $V^*$ are isomorphic, but the isomorphism depends on the choice of basis (check this by seeing what happens if you change the basis).</p> <p>Now let's talk about the double dual, $V^{**}$. First, what does it mean? Well, it means what it says. After all, $V^*$ is a vector space, so it makes sense to take its dual. An element of $V^{**}$ is a function that eats elements of $V^*$, i.e. a function that eats functions that eat elements of $V$. This can be a little hard to grasp the first few times you see it. I will use capital Greek letters for elements of $V^{**}$. </p> <p>Now, here is the trippy thing. Let $v \in V$. I am going to build an element $\Phi_v$ of $V^{**}$. An element of $V^{**}$ should be a function that eats functions that eat vectors in $V$ and returns a number. So we are going to set $$ \Phi_v(f) = f(v). $$</p> <p>You should check that the association $v \mapsto \Phi_v$ is linear (so $\Phi_{\lambda v} = \lambda\Phi_v$ and $\Phi_{v + w} = \Phi_v + \Phi_w$) and is an isomorphism (one-to-one and onto)! 
This isomorphism didn't depend on choosing a basis, so there's a sense in which $V$ and $V^{**}$ have more in common than $V$ and $V^*$ do.</p> <p>In fancier language, $V$ and $V^*$ are isomorphic, but not naturally isomorphic (you have to make a choice of basis); $V$ and $V^{**}$ are naturally isomorphic.</p> <p>Final remark: someone will surely have already said this by the time I've edited and submitted this post, but when $V$ is infinite dimensional, it's not always true anymore that $V = V^{**}$. The map $v \mapsto \Phi_v$ is injective, but not necessarily surjective, in this case.</p>
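The dual basis construction above can be made concrete: if the basis vectors are the columns of an invertible matrix $B$, the dual basis functionals $\phi_i$ are the rows of $B^{-1}$, since $\phi_i(e_j) = (B^{-1}B)_{ij}$. A small sketch (the particular matrix is an arbitrary invertible example):

```python
# Columns of B form a basis e_1, e_2, e_3 of R^3 (an arbitrary invertible example).
B = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]  # det = 2

# Rows of Binv = B^{-1} (computed by hand) are the dual basis functionals phi_i.
Binv = [[ 0.5, -0.5,  0.5],
        [ 0.5,  0.5, -0.5],
        [-0.5,  0.5,  0.5]]

def matmul(X, Y):
    # 3x3 matrix product, entry (i, j) = sum_k X[i][k] * Y[k][j]
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# phi_i(e_j) = 1 if i == j else 0, i.e. Binv @ B is the identity:
assert matmul(Binv, B) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```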
<p>Actually it's quite simple: If you have a vector space, <em>any</em> vector space, you can define linear functions on that space. The set of <em>all</em> those functions is the dual space of the vector space. The important point here is that it doesn't matter what this original vector space is. You have a vector space $V$, you have a corresponding dual $V^*$.</p> <p>OK, now you have linear functions. Now if you add two linear functions, you get again a linear function. Also if you multiply a linear function with a factor, you get again a linear function. Indeed, you can check that linear functions fulfill all the vector space axioms this way. Or in short, the dual space is a vector space in its own right.</p> <p>But if $V^*$ is a vector space, then it comes with everything a vector space comes with. But as we have seen in the beginning, one thing every vector space comes with is a dual space, the space of all linear functions on it. Therefore also the dual space $V^*$ has a corresponding dual space, $V^{**}$, which is called double dual space (because "dual space of the dual space" is a bit long).</p> <p>So we have the dual space, but we also want to know what sort of functions are in that double dual space. Well, such a function takes a vector from $V^*$, that is, a linear function on $V$, and maps that to a scalar (that is, to a member of the field the vector space is based on). Now, if you have a linear function on $V$, you already know a way to get a scalar from that: Just apply it to a vector from $V$. Indeed, it is not hard to show that if you just choose an arbitrary fixed element $v\in V$, then the function $F_v\colon\phi\mapsto\phi(v)$ indeed is a linear function on $V^*$, and thus a member of the double dual $V^{**}$. That way we have not only identified certain members of $V^{**}$ but in addition a natural mapping from $V$ to $V^{**}$, namely $F\colon v\mapsto F_v$. 
It is not hard to prove that this mapping is linear and injective, so that the functions in $V^{**}$ corresponding to vectors in $V$ form a subspace of $V^{**}$. Indeed, if $V$ is finite-dimensional, it's even <em>all</em> of $V^{**}$. That's easy to see if you know that $\dim(V^*)=\dim(V)$ and therefore $\dim(V^{**})=\dim(V^*)=\dim(V)$. On the other hand, since $F$ is injective, $\dim(F(V))=\dim(V)$. For finite-dimensional vector spaces, the only subspace of the same dimension as the full space is the full space itself. However, if $V$ is infinite-dimensional, $V^{**}$ is larger than $V$. In other words, there are functions in $V^{**}$ which are not of the form $F_v$ with $v\in V$.</p> <p>Note that since $V^{**}$ again is a vector space, it <em>also</em> has a dual space, which again has a dual space, and so on. So in principle you have an infinite series of duals (although they are all different only for infinite-dimensional vector spaces).</p>
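Since functionals are themselves functions, the map $F\colon v\mapsto F_v$ described above can be played with directly using functions as first-class values. An illustrative sketch (the particular functionals are arbitrary examples):

```python
# Vectors in R^3 as tuples; a functional is a plain function from vectors to numbers.
def F(v):
    return lambda phi: phi(v)  # F_v eats a functional and evaluates it at v

phi = lambda v: 2 * v[0] - v[2]  # an example element of V*: phi(x, y, z) = 2x - z
psi = lambda v: v[1]             # another: psi(x, y, z) = y

v = (1, 4, 2)
assert F(v)(phi) == phi(v) == 0
assert F(v)(psi) == 4

# Linearity of F: F_{v+w}(phi) = F_v(phi) + F_w(phi)
w = (3, -1, 5)
vw = tuple(a + b for a, b in zip(v, w))
assert F(vw)(phi) == F(v)(phi) + F(w)(phi)
```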
logic
<p>I am just a high school student, and I haven't seen much in mathematics (calculus and abstract algebra).</p> <p>Mathematics is a system of axioms which you choose yourself for a set of undefined entities, such that those entities satisfy certain basic rules you laid down in the first place on your own.</p> <p>Now using these laid-down rules and a set of other rules for a subject called logic which was established similarly, you define certain quantities and name them using the undefined entities and then go on to prove certain statements called theorems.</p> <p>Now what is a proof exactly? Suppose in an exam, I am asked to prove Pythagoras' theorem. Then I prove it using only one certain system of axioms and logic. It isn't proved in all the axiom-systems in which it could possibly hold true, and what stops me from making another set of axioms that have Pythagoras' theorem as an axiom, and then just state in my system/exam &quot;this is an axiom, hence can't be proven&quot;?</p> <p><strong>EDIT</strong> : How is the term &quot;wrong&quot; defined in mathematics then? You can say that proving Fermat's Last Theorem using the number theory axioms was a difficult task but then it can be taken as an axiom in another set of axioms.</p> <p>Is mathematics as rigorous and as thought-through as it is believed and expected to be? It seems to me that there are many loopholes in problems as well as the subject itself, but there is a false backbone of rigour that seems true until you start questioning the very fundamentals.</p>
<p>There are really two very different kinds of proofs:</p> <ul> <li><p><em>Informal proofs</em> are what mathematicians write on a daily basis to convince themselves and other mathematicians that particular statements are correct. These proofs are usually written in prose, although there are also geometrical constructions and "proofs without words". </p></li> <li><p><em>Formal proofs</em> are mathematical objects that model informal proofs. Formal proofs contain absolutely every logical step, with the result that even simple propositions have amazingly long formal proofs. Because of that, formal proofs are used mostly for theoretical purposes and for computer verification. Only a small percentage of mathematicians would be able to write down any formal proof whatsoever off the top of their head. </p></li> </ul> <p>With a little humor, I should say there is a third kind of proof: </p> <ul> <li><em>High-school proofs</em> are arguments that teachers force their students to reproduce in high school mathematics classes. These have to be written according to very specific rules described by the teacher, which are seemingly arbitrary and not shared by actual informal or formal proofs outside high-school mathematics. High-school proofs include the "two-column proofs" where the "steps" are listed on one side of a vertical line and the "reasons" on the other. The key thing to remember about high-school proofs is that they are only an imitation of "real" mathematical proofs.</li> </ul> <p>Most mathematicians learn about mathematical proofs by reading and writing them in classes. Students develop proof skills over the course of many years in the same way that children learn to speak - without learning the rules first. So, as with natural languages, there is no firm definition of "what is an informal proof", although there are certainly common patterns. 
</p> <p>If you want to learn about proofs, the best way is to read some real mathematics written at a level you find comfortable. There are many good sources, so I will point out only two: <a href="http://www.maa.org/pubs/mathmag.html">Mathematics Magazine</a> and <a href="http://www.maa.org/mathhorizons/">Math Horizons</a> both have well-written articles on many areas of mathematics. </p>
<p>Starting from the end, if you take Pythagoras' Theorem as an axiom, then proving it is very easy. A proof just consists of a single line, stating the axiom itself. The modern way of looking at axioms is not as things that can't be proven, but rather as those things that we explicitly state as things that hold. </p> <p>Now, exactly what a proof is depends on what you choose as the rules of inference in your logic. It is important to understand that a proof is a typographical entity. It is a list of symbols. There are certain rules of how to combine certain lists of symbols to extend an existing proof by one more line. These rules are called inference rules. </p> <p>Now, remembering that all of this happens just on a piece of paper - the proof consist just of marks on paper, where what you accept as valid proof is anything that is obtained from the axioms by following the inference rules - we would somehow like to relate this to properties of actual mathematical objects. To understand that, another technicality is required. If we are to write a proof as symbols on a piece of paper we had better have something telling us which symbols are we allowed to use, and how to combine them to obtain what are called terms. This is provided by the formal concept of a language. Now, to relate symbols on a piece of paper to mathematical objects we turn to semantics. First the language needs to be interpreted (another technical thing). Once the language is interpreted each statement (a statement is a bunch of terms put together in a certain way that is trying to convey a property of the objects we are interested in) becomes either true or false. </p> <p>This is important: Before an interpretation was made, we could still prove things. A statement was either provable or not. Now, with an interpretation at hand, each statement is also either true or false (in that particular interpretation). So, now comes the question whether or not the rules of inference are <em>sound</em>. 
That is to say, whether those things that are provable from the axioms are actually true in each and every interpretation where these axioms hold. Of course we absolutely must choose the inference rules so that they are sound. </p> <p>Another question is whether we have completeness. That is, if a statement is true under each and every interpretation where the axioms hold, does it follow that a proof exists? This is a very subtle question since it relates semantics (a concept that is quite elusive) to provability (a concept that is very concrete and completely mechanical). Typically, proving that a logical system is complete is quite hard. </p> <p>I hope this satisfies your curiosity, and thumbs up for your interest in these issues!</p>
linear-algebra
<p>I am reading the Wikipedia article about <a href="https://en.wikipedia.org/wiki/Support_vector_machine" rel="noreferrer">Support Vector Machine</a> and I don't understand how they compute the distance between two hyperplanes.</p> <p>In the article, </p> <blockquote> <p>By using geometry, we find the distance between these two hyperplanes is $\frac{2}{\|\mathbf{w}\|}$</p> </blockquote> <p>I don't understand how the find that result.</p> <p><img src="https://i.sstatic.net/7sFL3.png" alt="is a link to the image"> </p> <h2>What I tried</h2> <p>I tried setting up an example in two dimensions with an hyperplane having the equation $y = -2x+5$ and separating some points $A(2,0)$, $B(3,0)$ and $C(0,4)$, $D(0,6)$ . </p> <p>If I take a vector $\mathbf{w}(-2,-1)$ normal to that hyperplane and compute the margin with $\frac{2}{\|\mathbf{w}\|}$ I get $\frac{2}{\sqrt{5}}$ when in my example the margin is equal to 2 (distance between $C$ and $D$). </p> <p><strong>How did they come up with $\frac{2}{\|\mathbf{w}\|}$</strong> ?</p>
<p>Let $\textbf{x}_0$ be a point in the hyperplane $\textbf{wx} - b = -1$, i.e., $\textbf{wx}_0 - b = -1$. To measure the distance between hyperplanes $\textbf{wx}-b=-1$ and $\textbf{wx}-b=1$, we only need to compute the perpendicular distance from $\textbf{x}_0$ to plane $\textbf{wx}-b=1$, denoted as $r$.</p> <p>Note that $\frac{\textbf{w}}{\|\textbf{w}\|}$ is a unit normal vector of the hyperplane $\textbf{wx}-b=1$. We have $$ \textbf{w}(\textbf{x}_0 + r\frac{\textbf{w}}{\|\textbf{w}\|}) - b = 1 $$ since $\textbf{x}_0 + r\frac{\textbf{w}}{\|\textbf{w}\|}$ should be a point in hyperplane $\textbf{wx}-b = 1$ according to our definition of $r$.</p> <p>Expanding this equation, we have \begin{align*} &amp; \textbf{wx}_0 + r\frac{\textbf{w}\textbf{w}}{\|\textbf{w}\|} - b = 1 \\ \implies &amp;\textbf{wx}_0 + r\frac{\|\textbf{w}\|^2}{\|\textbf{w}\|} - b = 1 \\ \implies &amp;\textbf{wx}_0 + r\|\textbf{w}\| - b = 1 \\ \implies &amp;\textbf{wx}_0 - b = 1 - r\|\textbf{w}\| \\ \implies &amp;-1 = 1 - r\|\textbf{w}\|\\ \implies &amp; r = \frac{2}{\|\textbf{w}\|} \end{align*}</p>
<p><a href="https://i.sstatic.net/oDLS5.png" rel="noreferrer"><img src="https://i.sstatic.net/oDLS5.png" alt="SVM" /></a></p> <p>Let <span class="math-container">$\textbf{x}_+$</span> be a positive example on one gutter, such that <span class="math-container">$$\textbf{w} \cdot \textbf{x}_+ - b = 1$$</span></p> <p>Let <span class="math-container">$\textbf{x}_-$</span> be a negative example on another gutter, such that <span class="math-container">$$\textbf{w} \cdot \textbf{x}_- - b = -1$$</span></p> <p>The width of margin is the <a href="https://en.wikipedia.org/wiki/Scalar_projection" rel="noreferrer">scalar projection</a> of <span class="math-container">$\textbf{x}_+ - \textbf{x}_-$</span> on unit normal vector , that is the dot production of <span class="math-container">$\textbf{x}_+ - \textbf{x}_-$</span> and <span class="math-container">$\frac{\textbf{w}}{\|\textbf{w}\|}$</span></p> <p><span class="math-container">\begin{align} width &amp; = (\textbf{x}_+ - \textbf{x}_-) \cdot \frac{\textbf{w}}{\|\textbf{w}\|} \\ &amp; = \frac {(\textbf{x}_+ - \textbf{x}_-) \cdot {\textbf{w}}}{\|\textbf{w}\|} \\ &amp; = \frac{\textbf{x}_+ \cdot \textbf{w} \,{\bf -}\, \textbf{x}_-\cdot \textbf{w}}{\|\textbf{w}\|} \\ &amp; = \frac{1-b-(-1-b)}{\lVert \textbf{w} \rVert} \\ &amp; = \frac{2}{\|\textbf{w}\|} \end{align}</span></p> <p>The above refers to <a href="https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-034-artificial-intelligence-fall-2010/lecture-videos/lecture-16-learning-support-vector-machines/" rel="noreferrer">MIT 6.034 Artificial Intelligence</a></p>
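The derivations above are easy to verify numerically; here is a quick check with an arbitrarily chosen <span class="math-container">$\mathbf{w}$</span> and <span class="math-container">$b$</span> (example values, not the ones from the question):

```python
import math

w = [3.0, 4.0]
b = 2.0
norm_w = math.hypot(*w)  # ||w|| = 5

# x = (b +/- 1) w / ||w||^2 lies on the gutter w.x - b = +/- 1:
x_plus = [(b + 1) * wi / norm_w**2 for wi in w]
x_minus = [(b - 1) * wi / norm_w**2 for wi in w]
dot = lambda a, c: sum(ai * ci for ai, ci in zip(a, c))
assert abs(dot(w, x_plus) - b - 1) < 1e-12
assert abs(dot(w, x_minus) - b + 1) < 1e-12

# The distance between the two gutters matches 2/||w||:
margin = math.dist(x_plus, x_minus)
assert abs(margin - 2 / norm_w) < 1e-12  # 2/5 = 0.4 here
```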
differentiation
<p>In 6th grade I was first introduced to the idea of a function in the form of tables. The input would be $n$ and the output $f_n$ would be some modification of the input. I remember finding a pattern in the function $f(n)=n^2$. Here is what the table looked like:</p> <p>\begin{array}{|c|c|} \hline n&amp; f_n\\ \hline 1&amp;1 \\ \hline 2&amp;4\\ \hline 3&amp;9\\ \hline 4&amp;16\\ \hline 5&amp;25\\ \hline ...&amp;...\\ \hline n&amp;n^2\\ \hline \end{array}</p> <p>I would then take the outputs $f_n$ and find the differences between each one: $f_n-f_{n-1}$. This would produce:</p> <p>\begin{array}{|c|c|} \hline n&amp; f_n-f_{n-1}\\ \hline 1&amp;1 \\ \hline 2&amp;3\\ \hline 3&amp;5\\ \hline 4&amp;7\\ \hline 5&amp;9\\ \hline ...&amp;...\\ \hline \end{array}</p> <p>Repeating this process (of finding the differences) for the outputs of $f_n-f_{n-1}$ would yield a continuous string of $2$s. As a 6th grader I called this process 'breaking down the function' and at the time it was just another pattern I had found.</p> <p>Looking back at my work as a freshman in high school, I realize that the end result of 'breaking down a function' corresponds to the penultimate derivative (before the derivative equals zero). For example: breaking down $y=x^3$ gives a continuous string of $6$s, and the third derivative of $x^3$ is $6$ (while the second derivative is $6x$). </p> <p>Is there any significance to this pattern found by finding the differences between each output of a function over and over again? Does it have anything to do with derivatives? I know my question is naive, but I'm only a high school freshman in algebra II. A non-calculus (or intuitively explained calculus concepts) answer would be very helpful [note that I used an online derivative calculator to find the derivatives of these functions and I apologize for any incorrect calculus terminology].</p>
<p>Yes, this has plenty to do with the derivative. In particular, what you describe is the <em>backwards difference operator</em>, which is just defined as $$\nabla f(n)=f(n)-f(n-1).$$ This is an operator of interest on its own, but the connection to calculus is that we can consider this as telling us the "average" slope between $n-1$ and $n$.</p> <p>What you are doing is iterating the operator. In particular, one often writes $$\nabla^{k+1} f(n)=\nabla^k f(n)-\nabla^k f(n-1)$$ to mean that $\nabla^k f(n)$ is the result of applying this operator $k$ times. For instance, one has that $\nabla^3 n^3 = 6$, as you note. More generally $\nabla^k n^k = k!$, and this lets us recover a polynomial function from its table, which is what you were up to in sixth grade.</p> <p>However, we can take things further by trying to interpret these numbers - and there <em>is</em> a natural interpretation. For instance, $\nabla^2 f(n)$ represents how quickly $f$ is "accelerating" over the interval $[n-2,n]$, since it tells us about how the average slope changes between the interval $[n-2,n-1]$ and the interval $[n-1,n]$. If we keep going, we get that $\nabla^3 f(n)$ tells us how the acceleration changes between the intervals $[n-3,n-1]$ and $[n-2,n]$. We can keep going like this for physical interpretations.</p> <p>However, this operator has a problem: We'd like to interpret the values as accelerations or as slopes, but $\nabla^k f(n)$ depends on the values of $f$ across the interval $[n-k,n]$. That is, it keeps taking up information from further and further away from the point of interest. The way one fixes this is to try to measure the slope over a smaller distance $h$ rather than measure it over a length of $1$: $$\nabla_h f(n)=\frac{f(n)-f(n-h)}h$$ which is now the average slope of $f$ between $n-h$ and $n$. So, if we make $h$ smaller, we start to need to know $f$ across a smaller range.
This gives better meanings to higher order differences like $\nabla_h^k f(n)$, since now they only depend on a small portion of $f$.</p> <p>The derivative is just what happens to $\nabla_h$ when you send $h$ to $0$. It captures only <em>local</em> information about the function - so, it captures <em>instantaneous</em> slope or <em>instantaneous</em> acceleration and so on. In particular, one can work out that $\nabla f(n)$ is just the average of the derivative over the interval $[n-1,n]$. One can also work out that $\nabla^2 f(n)$ is a weighted average* of the second derivative over the interval $[n-2,n]$ and $\nabla^3 f(n)$ is another weighted average of the third derivative over $[n-3,n]$. </p> <p>In particular, if the $k^{th}$ derivative is constant, then it coincides with $\nabla^k f(n)$. One can also show that if the $k^{th}$ derivative is linear, then $\nabla^k f(n)$ differs from it by at worst a constant. In particular, $\nabla$ is good at capturing "global" effects (like the highest order term in a polynomial and its coefficient) but bad at capturing "local" effects (like instantaneous changes in the slope). So, in some sense, $\nabla$ is just a rough approximation of the derivative, and has similar interpretations; it just doesn't work nearly as cleanly.</p> <p>(*Unfortunately, "weighted average" here is hard to explain rigorously without calculus. For the benefit of readers with more background, I really mean "convolution" assuming that $f$ is actually differentiable enough times for any of this to make sense)</p>
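The iterated backward differences described above are easy to compute directly; here is a minimal sketch (the function names are my own) showing that the $k$-th differences of $f(n)=n^k$ settle to the constant $k!$:

```python
# A sketch of the backward difference operator  ∇f(n) = f(n) - f(n-1),
# iterated k times on a table of consecutive outputs of f.
def backward_differences(values):
    """One application of the difference operator to consecutive outputs."""
    return [b - a for a, b in zip(values, values[1:])]

def iterate(values, k):
    for _ in range(k):
        values = backward_differences(values)
    return values

table = [n**3 for n in range(10)]   # outputs of f(n) = n^3
print(iterate(table, 3))            # every entry is 3! = 6
```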
<p>While I was in Algebra 2 I discovered the same exact things. Pretty cool and exciting, right?</p> <p>Does $f(x)-f(x-1)$ have anything to do with the derivative? Kind of.</p> <p>The derivative is defined as:</p> <p>$$\lim_{h \to 0} \frac{f(x+h)-f(x)}{h}=f'(x)$$</p> <p>Note you found:</p> <p>$$f(x)-f(x-1)=\frac{f(x)-f(x-1)}{(x)-(x-1)}$$</p> <p>This resembles the derivative, and is a weak approximation to it. </p> <p>$$x^2-(x-1)^2=2x-1$$</p> <p>Whereas </p> <p>$$\frac{d}{dx} x^2=2x$$</p> <p>Concerning the other thing you are observing (taking the difference $n$ times gives a constant for a degree-$n$ polynomial): </p> <p>Let's denote by $\nabla f$ the operation $f(x)-f(x-1)$, and by $D_n(x)$ a polynomial of degree $n$ with input $x$. Let $\backsim$ denote "resembles".</p> <p>Then (our intuition may suggest) </p> <p>$$\underbrace{\nabla \nabla \nabla..\nabla}_{n \text{ times}} D_n(x) \backsim \frac{d^n}{dx^n}D_n(x)=c$$</p> <p>Where $c$ is a constant. Above I used the fact that the $n$th derivative of an $n$ degree polynomial is constant.</p> <p>However the proof of what you see and call "breaking down the function" does not involve calculus; you just need to prove:</p> <p>$$\nabla D_n(x)=D_{n-1}(x)$$</p> <p>for $n \geq 1$ using the binomial theorem.</p>
logic
<p>I just started learning model theory on my own, and I was wondering if there are any interesting examples of two structures of a language L which are not isomorphic, but are elementarily equivalent (this means that any L-sentence satisfied by one of them is satisfied by the second).</p> <p>I am using the notation of David Marker's book &quot;Model theory: an introduction&quot;.</p>
<p>First, I'm glad you are reading my book! :)</p> <p>Let me make a couple of comments on Pete's answer--this is my first time here and I don't see how to leave comments.</p> <ol start="2"> <li><p>Any two dense linear orders without endpoints are elementarily equivalent. In particular <span class="math-container">$(Q,&lt;)$</span> and <span class="math-container">$(R,&lt;)$</span> are elementarily equivalent. So there is no first order way of expressing the completeness of the reals.</p> </li> <li><p>Any two algebraically closed fields of the same characteristic are elementarily equivalent. So the field of algebraic numbers is elementarily equivalent to the field of complex numbers. This means you can prove first order things about the algebraic numbers using complex analysis or the complex numbers using Galois Theory or countability.</p> </li> <li><p>Similarly the real field is elementarily equivalent to the field of real algebraic numbers or to the field of real Puiseux series. One can for example use the Puiseux series to prove asymptotic properties of semialgebraic functions.</p> </li> </ol> <p>Finally, Pete's comment 5) about infinite models of the theory of finite fields being elementarily equivalent isn't quite right. This is only true if the relative algebraic closures of the prime fields are isomorphic.
For example,</p> <p>a) take an ultraproduct of finite fields <span class="math-container">$F_p$</span> where the ultrafilter contains <span class="math-container">$\{2,4,8,\ldots\}$</span>; then the resulting model has characteristic <span class="math-container">$2,$</span> while if the ultrafilter contains the set of primes, then the ultraproduct has characteristic <span class="math-container">$0.$</span></p> <p>b) if the ultrafilter contains the set of primes congruent to <span class="math-container">$1 \bmod 4,$</span> then in the ultraproduct <span class="math-container">$-1$</span> is a square, while if the ultrafilter contains the set of primes congruent to <span class="math-container">$3 \bmod 4,$</span> then in the ultraproduct <span class="math-container">$-1$</span> is not a square.</p>
<p>Laws, yes<sup>[1]</sup>: if there weren't such examples, there wouldn't be a subject called model theory and therefore Marker's book would not exist.</p> <p>There are, however, not very many explicit examples which can be given, with proof, right after the definition of elementary equivalence, a pedagogical problem that I encountered recently when I taught a short summer course on model theory. When I first gave the definition, all I was able to come up with was the following argument: for any language <span class="math-container">$L$</span>, the class of <span class="math-container">$L$</span>-structures is a proper class [any nonempty set can be made into the "universe", or underlying set of, an <span class="math-container">$L$</span>-structure] whereas since there are at most <span class="math-container">$2^{\max\{|L|,\aleph_0\}}$</span> different theories in the language <span class="math-container">$L$</span>, the <span class="math-container">$L$</span>-structures up to elementary equivalence must form a set.</p> <p>But if you just want examples without proof, sure, here goes:</p> <p>1) In the empty language, two sets are elementarily equivalent iff they are both infinite or both finite of the same cardinality.</p> <p>2) Any two dense linear orders without endpoints are elementarily equivalent. [The same holds for DLOs <em>with</em> endpoints, but the two classes of structures are not elementarily equivalent to each other.]</p> <p>3) Any two algebraically closed fields of the same characteristic are elementarily equivalent.</p> <p>4) Any two real-closed fields are elementarily equivalent.</p> <p><s>5) Any two infinite models of the theory of finite fields are elementarily equivalent.</s> (Nope! See Dave Marker's answer.) </p> <p>In each case, such structures exist of every infinite cardinality, so the class of isomorphism classes of such structures is a proper class, not a set. </p> <p>[1]: "M-O-O-N, that spells model theory!"</p>
game-theory
<p>Consider a $9 \times 9$ matrix that consists of $9$ block matrices of $3 \times 3$. Let each $3 \times 3$ block be a game of tic-tac-toe. For each game, label the $9$ cells of the game from $1$ to $9$ in order from left to right, top to bottom; call this the cell number. Label the $9$ games of the big matrix $1$ to $9$ in the same order; call this the game number.</p> <p>The rule is the following:</p> <p>$1$. Player $1$ starts with any game number and any cell number.</p> <p>$2$. Player $2$ can make a move in the game whose game number is the cell number where player $1$ made the last move.</p> <p>$3$. It continues like this, where player $1$ then plays in the game whose game number is the cell number where player $2$ made the last move.</p> <p>$4$. Special case: when a player is supposed to play in game $X$, but game $X$ is already won (may not be full)/lost (may not be full)/drawn (is full), then he may choose to play in any game he wants.</p> <p>$5$. Winning: whenever a player has three winning games such that the three games line up either horizontally, vertically or across the diagonals, he wins.<br> <img src="https://i.sstatic.net/GQwiN.png" alt="Tic tac toe"></p> <p>It is easy to see why we call it tic-tac-toe $\times$ tic-tac-toe.</p> <p>Now the question:</p> <blockquote> <p>We know tic-tac-toe has a non-losing strategy. Does tic-tac-toe $\times$ tic-tac-toe have a non-losing strategy? If so, what is it? In general, what is a good strategy?</p> </blockquote> <p>PS: This is a fun game. What was originally a 'good move' now sends your opponent to a 'good game position', so it is more complicated.</p>
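Rules $2$-$4$ can be sketched as a small legality check; everything below (function names, the status encoding) is made up purely for illustration:

```python
# A minimal sketch of the "next game" rule. The board state is a dict mapping
# game number (1-9) to that sub-game's status.
OPEN, WON_X, WON_O, DRAWN = "open", "won_x", "won_o", "drawn"

def legal_games(last_cell, status):
    """Games the next player may move in, given the opponent's last cell number."""
    # Rules 2/3: you must play in the game whose number is the last cell number...
    if status[last_cell] == OPEN:
        return [last_cell]
    # Rule 4: ...unless that game is already decided; then play in any open game.
    return [g for g in range(1, 10) if status[g] == OPEN]

status = {g: OPEN for g in range(1, 10)}
status[5] = WON_X
print(legal_games(3, status))  # [3]
print(legal_games(5, status))  # every open game, since game 5 is decided
```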
<p>The first question, if there is a non-losing strategy, I have an answer for: Yes. </p> <p>Since this is a finite two-person perfect information game without chance, at least one player must have a non-losing strategy, guaranteed by <a href="http://en.wikipedia.org/wiki/Zermelo%27s_theorem_%28game_theory%29" rel="noreferrer">Zermelo's theorem</a> (of game theory).</p> <p>For Tic-Tac-Toe related games, it can be proven that the first player has this non-losing strategy. (Whether it is a winning strategy depends on whether or not the second player also has a non-losing strategy.)</p> <p>The argument goes something like this (Player 1 = $P_1$, Player 2 = $P_2$): Suppose there is a non-losing strategy $S$ for $P_2$. Then $P_1$ will start the game with a random move $X$, and for whatever $P_2$ will do, follow strategy $S$ (thus $P_1$ takes on the role of being the second player). Since $S$ is a non-losing strategy, $P_1$ will not lose, which means $S$ is a non-losing strategy for $P_1$.</p> <p>Note that, if strategy $S$ ever calls for making the move $X$ (which was the original random move), $P_1$ may simply do another random move $X_2$ and then keep following $S$ as if $X_2$ had been the original random move. This is further explained in pages 12-13 <a href="http://uu.diva-portal.org/smash/get/diva2:631339/FULLTEXT01.pdf" rel="noreferrer">here</a>.</p> <p>(EDIT: Since the first move of $P_1$ affects what move $P_2$ can make (by rule 2), the latter argument may not apply to this game. Anyone?)</p>
<p>I think it is possible to "control" the board by having many sub-games "point" to a square that has already been won in the larger game, preventing your opponent from blocking you in that square, and driving you towards marking other squares, so eventually you have 2 in a row in many sub-games, eventually forcing your opponent to let you go on a sub-game-winning spree.</p> <p>For example, taking square 3 on a number of boards will essentially give your opponent sub-game #3, but from there on, you could start taking squares 1 and 2, or 5 and 7, or 6 and 9; all of which "point" to square 3 in their respective games. Thus, in order to block you in a sub-game that already has such a "pointer", they must allow you to take a move wherever you want after their turn, forcing them to allow you to either take a square (at leisure) or continue to set yourself up for more "pointers". Opponents placing moves elsewhere tend to fall even further behind, as they cannot overtake your offensive lead, and can't block you efficiently.</p> <p>There is also a "gambit" strategy, where you keep selecting the same block in each sub-game thereby sacrificing one sub-game for the sake of getting a head-start in many others. </p> <p>EDIT: Elaborating on the strategy explanation</p>
matrices
<p>What is the importance of eigenvalues/eigenvectors? </p>
<h3>Short Answer</h3> <p><em>Eigenvectors make understanding linear transformations easy</em>. They are the "axes" (directions) along which a linear transformation acts simply by "stretching/compressing" and/or "flipping"; eigenvalues give you the factors by which this stretching/compression occurs. </p> <p>The more directions you have along which you understand the behavior of a linear transformation, the easier it is to understand the linear transformation; so you want to have as many linearly independent eigenvectors as possible associated to a single linear transformation.</p> <hr> <h3>Slightly Longer Answer</h3> <p>There are a lot of problems that can be modeled with linear transformations, and the eigenvectors give very simple solutions. For example, consider the system of linear differential equations \begin{align*} \frac{dx}{dt} &amp;= ax + by\\\ \frac{dy}{dt} &amp;= cx + dy. \end{align*} This kind of system arises when you describe, for example, the population growth of two species that affect one another. For example, you might have that species $x$ is a predator on species $y$; the more $x$ you have, the fewer $y$ will be around to reproduce; but the fewer $y$ that are around, the less food there is for $x$, so fewer $x$s will reproduce; but then fewer $x$s are around so that takes pressure off $y$, which increases; but then there is more food for $x$, so $x$ increases; and so on and so forth. It also arises when you have certain physical phenomena, such as a particle in a moving fluid, where the velocity vector depends on the position along the fluid.</p> <p>Solving this system directly is complicated.
But suppose that you could do a change of variable so that instead of working with $x$ and $y$, you could work with $z$ and $w$ (which depend linearly on $x$ and $y$; that is, $z=\alpha x+\beta y$ for some constants $\alpha$ and $\beta$, and $w=\gamma x + \delta y$, for some constants $\gamma$ and $\delta$) and the system transforms into something like \begin{align*} \frac{dz}{dt} &amp;= \kappa z\\\ \frac{dw}{dt} &amp;= \lambda w \end{align*} that is, you can "decouple" the system, so that now you are dealing with two <em>independent</em> functions. Then solving this problem becomes rather easy: $z=Ae^{\kappa t}$, and $w=Be^{\lambda t}$. Then you can use the formulas for $z$ and $w$ to find expressions for $x$ and $y$.</p> <p>Can this be done? Well, it amounts <em>precisely</em> to finding two linearly independent eigenvectors for the matrix $\left(\begin{array}{cc}a &amp; b\\c &amp; d\end{array}\right)$! $z$ and $w$ correspond to the eigenvectors, and $\kappa$ and $\lambda$ to the eigenvalues. By taking an expression that "mixes" $x$ and $y$, and "decoupling it" into one that acts independently on two different functions, the problem becomes a lot easier. </p> <p>That is the essence of what one hopes to do with the eigenvectors and eigenvalues: "decouple" the ways in which the linear transformation acts into a number of independent actions along separate "directions", that can be dealt with independently. A lot of problems come down to figuring out these "lines of independent action", and understanding them can really help you figure out what the matrix/linear transformation is "really" doing. </p>
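The decoupling described above can be sketched numerically: diagonalize the coefficient matrix, solve the independent equations $z'=\kappa z$, $w'=\lambda w$, and map the solution back. The matrix entries below are made-up illustrative values, not tied to any particular system in the answer:

```python
import numpy as np

# Coefficient matrix of the system x' = A x (illustrative values).
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

eigvals, V = np.linalg.eig(A)          # columns of V are eigenvectors

def solve(x0, t):
    # x(t) = V diag(e^{lambda_i t}) V^{-1} x0
    c = np.linalg.solve(V, x0)         # coordinates of x0 in the eigenbasis
    return V @ (np.exp(eigvals * t) * c)

x0 = np.array([1.0, 0.0])
t = 0.5
print(solve(x0, t))                    # solution of x' = A x at time t
```

Each eigencoordinate evolves on its own by a simple exponential, which is exactly the "decoupling" in the answer above.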
<h3>A short explanation</h3> <p>Consider a matrix <span class="math-container">$A$</span>, for example one representing a physical transformation (e.g., rotation). When this matrix is used to transform a given vector <span class="math-container">$x$</span>, the result is <span class="math-container">$y = A x$</span>.</p> <p>Now an interesting question is</p> <blockquote> <p>Are there any vectors <span class="math-container">$x$</span> which do not change their direction under this transformation, but allow the vector magnitude to vary by a scalar <span class="math-container">$ \lambda $</span>?</p> </blockquote> <p>Such a question is of the form <span class="math-container">$$A x = \lambda x $$</span></p> <p>So, such special <span class="math-container">$x$</span> are called <em>eigenvectors</em>, and the change in magnitude is given by the <em>eigenvalue</em> <span class="math-container">$ \lambda $</span>.</p>
logic
<p>In some areas of mathematics it is everyday practice to prove the existence of things by entirely non-constructive arguments that say nothing about the object in question other than it exists, e.g. the celebrated probabilistic method and many things found in this thread: <a href="https://math.stackexchange.com/questions/1452844/what-are-some-things-we-can-prove-they-must-exist-but-have-no-idea-what-they-ar">What are some things we can prove they must exist, but have no idea what they are?</a></p> <p><strong>Now what about proofs themselves?</strong> Is there some remote (or not so remote) area of mathematics or logic which contains a concrete theorem that is actually capable of being proved by showing that a proof must exist without actually giving such a proof?</p> <p>Naively, I imagine that this would require formalizing a proof in such a way that it can be regarded as a mathematical object itself and then maybe showing that the set of such is non-empty. The only thing in this direction that I've seen so far is the "category of proofs", but that's not the answer.</p> <p>This may sound like mathematical science fiction, but initially so does e.g. the idea of proving that some statement is unprovable in a framework, which has become standard in axiomatic set theory.</p> <p>Feel free to change my speculative tags.</p>
<p>There are various ways to interpret the question. One interesting class of examples consists of "speed up" theorems. These generally involve two formal systems, $T_1$ and $T_2$, and family of statements which are provable in both $T_1$ and $T_2$, but for which the shortest formal proofs in $T_1$ are much longer than the shortest formal proofs in $T_2$. </p> <p>One of the oldest such theorems is due to Gödel. He noticed that statements such as "This theorem cannot be proved in Peano Arithmetic in fewer than $10^{1000}$ symbols" are, in fact, provable in Peano Arithmetic. </p> <p>Knowing this, we know that we could make a formal proof by cases that examines every Peano Arithmetic formal proof with fewer than $10^{1000}$ symbols and checks that none of them proves the statement. So we can prove indirectly that a formal proof of the statement in Peano Arithmetic exists. </p> <p>But, because the statement is true, the shortest formal proof of the statement in Peano Arithmetic will in fact require more than $10^{1000}$ symbols. So nobody will be able to write out that formal proof completely. We can replace $10^{1000}$ with any number we wish, to obtain results whose shortest formal proof in Peano arithmetic must have at least that many symbols. </p> <p>Similarly, if we prefer another formal system such as ZFC, we can consider statements such as "This theorem cannot be proved in ZFC in fewer than $10^{1000}$ symbols". In this way each sufficiently strong formal system will have some results which we know are formally provable, but for which the shortest formal proof in that system is too long to write down. </p>
<p><a href="https://en.wikipedia.org/wiki/Leon_Henkin#The_completeness_proof">Henkin's completeness proof</a> is an example of what you seek: It demonstrates that for a certain statement there is a proof, but does not establish what that proof is.</p>
linear-algebra
<p>Given two square matrices <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, how do you show that <span class="math-container">$$\det(AB) = \det(A) \det(B)$$</span> where <span class="math-container">$\det(\cdot)$</span> is the determinant of the matrix?</p>
<p>Let's consider the function <span class="math-container">$B\mapsto \det(AB)$</span> as a function of the columns of <span class="math-container">$B=\left(v_1|\cdots |v_i| \cdots | v_n\right)$</span>. It is straightforward to verify that this map is multilinear, in the sense that <span class="math-container">$$\det\left(A\left(v_1|\cdots |v_i+av_i'| \cdots | v_n\right)\right)=\det\left(A\left(v_1|\cdots |v_i| \cdots | v_n\right)\right)+a\det\left(A\left(v_1|\cdots |v_i'| \cdots | v_n\right)\right).$$</span> It is also alternating, in the sense that if you swap two columns of <span class="math-container">$B$</span>, you multiply your overall result by <span class="math-container">$-1$</span>. These properties both follow directly from the corresponding properties for the function <span class="math-container">$A\mapsto \det(A)$</span>.</p> <p>The determinant is completely characterized by these two properties, and the fact that <span class="math-container">$\det(I)=1$</span>. Moreover, any function that satisfies these two properties must be a multiple of the determinant. If you have not seen this fact, you should try to prove it.
I don't know of a reference online, but I know it is contained in Bretscher's linear algebra book.</p> <p>In any case, because of this fact, we must have that <span class="math-container">$\det(AB)=c\det(B)$</span> for some constant <span class="math-container">$c$</span>, and setting <span class="math-container">$B=I$</span>, we see that <span class="math-container">$c=\det(A)$</span>.</p> <hr /> <p>For completeness, here is a proof of the necessary lemma that any multilinear, alternating function is a multiple of the determinant.</p> <p>We will let <span class="math-container">$f:(\mathbb F^n)^n\to \mathbb F$</span> be a multilinear, alternating function, where, to allow for this proof to work in characteristic 2, we will say that a multilinear function is alternating if it is zero when two of its inputs are equal (this is equivalent to picking up a sign when you swap two inputs, everywhere except in characteristic 2). Let <span class="math-container">$e_1, \ldots, e_n$</span> be the standard basis vectors.
Then <span class="math-container">$f(e_{i_1},e_{i_2}, \ldots, e_{i_n})=0$</span> if any index occurs twice, and otherwise, if <span class="math-container">$\sigma\in S_n$</span> is a permutation, then <span class="math-container">$f(e_{\sigma(1)}, e_{\sigma(2)},\ldots, e_{\sigma(n)})=(-1)^\sigma$</span>, the sign of the permutation <span class="math-container">$\sigma$</span>.</p> <p>Using multilinearity, one can expand out evaluating <span class="math-container">$f$</span> on a collection of vectors written in terms of the basis:</p> <p><span class="math-container">$$f\left(\sum_{j_1=1}^n a_{1j_1}e_{j_1}, \sum_{j_2=1}^n a_{2j_2}e_{j_2},\ldots, \sum_{j_n=1}^n a_{nj_n}e_{j_n}\right) = \sum_{j_1=1}^n\sum_{j_2=1}^n\cdots \sum_{j_n=1}^n \left(\prod_{k=1}^n a_{kj_k}\right)f(e_{j_1},e_{j_2},\ldots, e_{j_n}).$$</span></p> <p>All the terms with <span class="math-container">$j_{\ell}=j_{\ell'}$</span> for some <span class="math-container">$\ell\neq \ell'$</span> will vanish before the <span class="math-container">$f$</span> term is zero, and the other terms can be written in terms of permutations. If <span class="math-container">$j_{\ell}\neq j_{\ell'}$</span> for any <span class="math-container">$\ell\neq \ell'$</span>, then there is a unique permutation <span class="math-container">$\sigma$</span> with <span class="math-container">$j_k=\sigma(k)$</span> for every <span class="math-container">$k$</span>. This yields:</p> <p><span class="math-container">$$\begin{align}\sum_{j_1=1}^n\sum_{j_2=1}^n\cdots \sum_{j_n=1}^n \left(\prod_{k=1}^n a_{kj_k}\right)f(e_{j_1},e_{j_2},\ldots, e_{j_n}) &amp;= \sum_{\sigma\in S_n} \left(\prod_{k=1}^n a_{k\sigma(k)}\right)f(e_{\sigma(1)},e_{\sigma(2)},\ldots, e_{\sigma(n)}) \\ &amp;= \sum_{\sigma\in S_n} (-1)^{\sigma}\left(\prod_{k=1}^n a_{k\sigma(k)}\right)f(e_{1},e_{2},\ldots, e_{n}) \\ &amp;= f(e_{1},e_{2},\ldots, e_{n}) \sum_{\sigma\in S_n} (-1)^{\sigma}\left(\prod_{k=1}^n a_{k\sigma(k)}\right). 
\end{align} $$</span></p> <p>In the last line, the thing still in the sum is the determinant, although one does not need to realize this fact, as we have shown that <span class="math-container">$f$</span> is completely determined by <span class="math-container">$f(e_1,\ldots, e_n)$</span>, and we simply define <span class="math-container">$\det$</span> to be such a function with <span class="math-container">$\det(e_1,\ldots, e_n)=1$</span>.</p>
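As a quick numerical sanity check of the identity (this illustrates the theorem, it is of course not a proof):

```python
import numpy as np

# Check det(AB) = det(A) det(B) on random matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

print(np.allclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B)))  # True
```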
<p>The proof using elementary matrices can be found e.g. on <a href="http://www.proofwiki.org/wiki/Determinant_of_Matrix_Product" rel="noreferrer">proofwiki</a>. It's basically the same proof as given in Jyrki Lahtonen's comment and Chandrasekhar's link.</p> <p>There is also a proof using block matrices, I googled a bit and I was only able to find it in <a href="http://books.google.com/books?id=N871f_bp810C&amp;pg=PA112&amp;dq=determinant+product+%22alternative+proof%22&amp;hl=en&amp;ei=EPlZToKHOo6aOuyapJIM&amp;sa=X&amp;oi=book_result&amp;ct=result&amp;resnum=2&amp;ved=0CC8Q6AEwAQ#v=onepage&amp;q=determinant%20product%20%22alternative%20proof%22&amp;f=false" rel="noreferrer">this book</a> and <a href="http://www.mth.kcl.ac.uk/%7Ejrs/gazette/blocks.pdf" rel="noreferrer">this paper</a>.</p> <hr /> <p>I like the approach which I learned from Sheldon Axler's Linear Algebra Done Right, <a href="http://books.google.com/books?id=ovIYVIlithQC&amp;printsec=frontcover&amp;dq=linear+algebra+done+right&amp;hl=en&amp;ei=H-xZTuutJoLrOdvrwaMM&amp;sa=X&amp;oi=book_result&amp;ct=result&amp;resnum=1&amp;ved=0CCkQ6AEwAA#v=onepage&amp;q=%2210.31%22&amp;f=false" rel="noreferrer">Theorem 10.31</a>. Let me try to reproduce the proof here.</p> <p>We will use several results in the proof, one of them is - as far as I can say - a little less known.
It is the <a href="http://www.proofwiki.org/wiki/Determinant_as_Sum_of_Determinants" rel="noreferrer">theorem</a> which says that if I have two matrices <span class="math-container">$A$</span> and <span class="math-container">$B$</span> which only differ in the <span class="math-container">$k$</span>-th row and whose other rows are the same, and the matrix <span class="math-container">$C$</span> has as its <span class="math-container">$k$</span>-th row the sum of the <span class="math-container">$k$</span>-th rows of <span class="math-container">$A$</span> and <span class="math-container">$B$</span> and its other rows are the same as in <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, then <span class="math-container">$|C|=|A|+|B|$</span>.</p> <p><a href="https://math.stackexchange.com/questions/668/whats-an-intuitive-way-to-think-about-the-determinant">Geometrically</a>, this corresponds to adding two parallelepipeds with the same base.</p>
Thus <span class="math-container">$$A= \begin{pmatrix} a_{11} &amp; a_{12}&amp; \ldots &amp; a_{1n}\\ a_{21} &amp; a_{22}&amp; \ldots &amp; a_{2n}\\ \vdots &amp; \vdots&amp; \ddots &amp; \vdots \\ a_{n1} &amp; a_{n2}&amp; \ldots &amp; a_{nn} \end{pmatrix}= \begin{pmatrix} \vec\alpha_1 \\ \vec\alpha_2 \\ \vdots \\ \vec\alpha_n \end{pmatrix}$$</span></p> <p>Directly from the definition of matrix product we can see that the rows of <span class="math-container">$A\cdot B$</span> are of the form <span class="math-container">$\vec\alpha_kB$</span>, i.e., <span class="math-container">$$A\cdot B=\begin{pmatrix} \vec\alpha_1B \\ \vec\alpha_2B \\ \vdots \\ \vec\alpha_nB \end{pmatrix}$$</span> Since <span class="math-container">$\vec\alpha_k=\sum_{i=1}^n a_{ki}\vec e_i$</span>, we can rewrite this equality as <span class="math-container">$$A\cdot B=\begin{pmatrix} \sum_{i_1=1}^n a_{1i_1}\vec e_{i_1} B\\ \vdots\\ \sum_{i_n=1}^n a_{ni_n}\vec e_{i_n} B \end{pmatrix}$$</span> Using the theorem on the sum of determinants multiple times we get <span class="math-container">$$ |{A\cdot B}|= \sum_{i_1=1}^n a_{1i_1} \begin{vmatrix} \vec e_{i_1}B\\ \sum_{i_2=1}^n a_{2i_2}\vec e_{i_2} B\\ \vdots\\ \sum_{i_n=1}^n a_{ni_n}\vec e_{i_n} B \end{vmatrix}= \ldots = \sum_{i_1=1}^n \ldots \sum_{i_n=1}^n a_{1i_1} a_{2i_2} \dots a_{ni_n} \begin{vmatrix} \vec e_{i_1} B \\ \vec e_{i_2} B \\ \vdots \\ \vec e_{i_n} B \end{vmatrix} $$</span></p> <p>Now notice that if <span class="math-container">$i_j=i_k$</span> for some <span class="math-container">$j\ne k$</span>, then the corresponding determinant in the above sum is zero (it has two <a href="http://www.proofwiki.org/wiki/Square_Matrix_with_Duplicate_Rows_has_Zero_Determinant" rel="noreferrer">identical rows</a>). 
Thus the only nonzero summands are those for which the <span class="math-container">$n$</span>-tuple <span class="math-container">$(i_1,i_2,\dots,i_n)$</span> represents a permutation of the numbers <span class="math-container">$1,\ldots,n$</span>. Hence we get <span class="math-container">$$|{A\cdot B}|=\sum_{\varphi\in S_n} a_{1\varphi(1)} a_{2\varphi(2)} \dots a_{n\varphi(n)} \begin{vmatrix} \vec e_{\varphi(1)} B \\ \vec e_{\varphi(2)} B \\ \vdots \\ \vec e_{\varphi(n)} B \end{vmatrix}$$</span> (Here <span class="math-container">$S_n$</span> denotes the set of all permutations of <span class="math-container">$\{1,2,\dots,n\}$</span>.) The matrix on the RHS of the above equality is the matrix <span class="math-container">$B$</span> with permuted rows. Using several transpositions of rows we can get the matrix <span class="math-container">$B$</span>. We will show that this can be done using <span class="math-container">$i(\varphi)$</span> transpositions, where <span class="math-container">$i(\varphi)$</span> denotes the number of <a href="http://en.wikipedia.org/wiki/Inversion_%28discrete_mathematics%29" rel="noreferrer">inversions</a> of <span class="math-container">$\varphi$</span>. Using this fact we get <span class="math-container">$$|{A\cdot B}|=\sum_{\varphi\in S_n} a_{1\varphi(1)} a_{2\varphi(2)} \dots a_{n\varphi(n)} (-1)^{i(\varphi)} |{B}| =|A|\cdot |B|.$$</span></p> <p>It remains to show that we need <span class="math-container">$i(\varphi)$</span> transpositions. We can transform the &quot;permuted matrix&quot; into the matrix <span class="math-container">$B$</span> as follows: we first move the first row of <span class="math-container">$B$</span> into the first place by exchanging it with the preceding row until it is in the correct position. (If it already is in the first position, we make no exchanges at all.) 
The number of transpositions we have used is exactly the number of inversions of <span class="math-container">$\varphi$</span> that contain the number 1. Now we can move the second row to the second place in the same way. We will use the same number of transpositions as the number of inversions of <span class="math-container">$\varphi$</span> containing 2 but not containing 1. (Since the first row is already in place.) We continue in the same way. We see that by using this procedure we obtain the matrix <span class="math-container">$B$</span> after <span class="math-container">$i(\varphi)$</span> row transpositions.</p>
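<p>As a sanity check on the argument above, here is a short Python sketch (not part of the original proof) that computes determinants by exactly this permutation expansion, with sign <span class="math-container">$(-1)^{i(\varphi)}$</span> given by the inversion count, and confirms <span class="math-container">$|A\cdot B|=|A|\cdot|B|$</span> on a pair of sample matrices:</p>

```python
import itertools

# Determinant via the permutation (Leibniz) expansion used in the proof:
# sum over all permutations phi of (-1)^(inversions of phi) * a_{1,phi(1)} * ... * a_{n,phi(n)}.
def det(M):
    n = len(M)
    total = 0
    for perm in itertools.permutations(range(n)):
        # parity via the inversion count i(phi), as in the proof
        inversions = sum(1 for j in range(n) for k in range(j + 1, n)
                         if perm[j] > perm[k])
        prod = 1
        for row, col in enumerate(perm):
            prod *= M[row][col]
        total += (-1) ** inversions * prod
    return total

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
B = [[1, 2, 1], [0, 1, 3], [2, 0, 1]]
print(det(matmul(A, B)), det(A) * det(B))  # both equal 88
```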
probability
<p>I'd like to have a correct general understanding of the importance of measure theory in probability theory. For now, it seems like mathematicians work with the notion of probability measure and prove theorems, because it automatically makes the theorem true no matter whether we work with a discrete or a continuous probability distribution.</p> <p>So for example, expected value - we can prove the Law of large numbers using its general definition (measure theoretic) $\operatorname{E} [X] = \int_\Omega X \, \mathrm{d}P$, and then derive the formula for discrete and continuous cases (discrete and continuous random variables), without having to prove it separately for each case (we have one proof instead of two). One could say that the Law of large numbers justifies the definition of expected value, by the way.</p> <p>Is it right to say that probability built on the general notion of a probability measure saves mathematicians work? What are the other advantages?</p> <p>Please correct me if I'm wrong, but I hope you get the idea of what sort of information I expect - it's the importance and role of measure theory in probability and an answer to the question: <strong>are there theorems in probability that do not hold for general probability measure, but are true only for either discrete or continuous probability distribution?</strong> If we can prove that no such theorems can exist, we can simply forget about the distinction between discrete and continuous distributions.</p> <p>If someone could come up with a clear, concise summary, I'd be grateful. I'm not an expert, so please take that into account.</p>
<p>2 reasons why measure theory is needed in probability:</p> <ol> <li>We need to work with random variables that are neither discrete nor continuous like <span class="math-container">$X$</span> below:</li> </ol> <p>Let <span class="math-container">$(\Omega, \mathscr{F}, \mathbb{P})$</span> be a probability space and let <span class="math-container">$Z, B$</span> be random variables in <span class="math-container">$(\Omega, \mathscr{F}, \mathbb{P})$</span> s.t.</p> <p><span class="math-container">$Z$</span> ~ <span class="math-container">$N(\mu,\sigma^2)$</span>, <span class="math-container">$B$</span> ~ Bin<span class="math-container">$(n,p)$</span>.</p> <p>Consider the random variable <span class="math-container">$X = Z1_A + B1_{A^c}$</span> where <span class="math-container">$A \in \mathscr{F}$</span>; for <span class="math-container">$\mathbb{P}(A) \in (0,1)$</span> it is neither discrete nor continuous.</p> <ol start="2"> <li>We need to work with certain sets:</li> </ol> <p>Consider <span class="math-container">$U$</span> ~ Unif<span class="math-container">$([0,1])$</span> s.t. <span class="math-container">$f_U(u) = 1_{[0,1]}$</span> on <span class="math-container">$([0,1], 2^{[0,1]}, \lambda)$</span>.</p> <p>In probability w/o measure theory:</p> <p>If <span class="math-container">$(i_1, i_2) \subseteq [0,1]$</span>, then <span class="math-container">$$P(U \in (i_1, i_2)) = \int_{i_1}^{i_2} 1 du = i_2 - i_1$$</span></p> <p>In probability w/ measure theory:</p> <p><span class="math-container">$$P(U \in (i_1, i_2)) = \lambda((i_1, i_2)) = i_2 - i_1$$</span></p> <p>So who needs measure theory, right? 
Well, what about if we try to compute</p> <p><span class="math-container">$$P(U \in \mathbb{Q} \cap [0,1])?$$</span></p> <p>We need measure theory to say <span class="math-container">$$P(U \in \mathbb{Q} \cap [0,1]) = \lambda(\mathbb{Q}) = 0$$</span></p> <p>I think Riemann integration <a href="https://math.stackexchange.com/questions/437711/is-dirichlet-function-riemann-integrable">doesn't give an answer</a> for <span class="math-container">$$\int_{\mathbb{Q} \cap [0,1]} 1 du$$</span>.</p> <p>Furthermore, <span class="math-container">$\exists A \subset {[0,1]}$</span> s.t. <span class="math-container">$P(U \in A)$</span> is undefined.</p> <hr /> <p>From Rosenthal's A First Look at Rigorous Probability Theory:</p> <p><a href="https://i.sstatic.net/ZCNE0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZCNE0.png" alt="enter image description here" /></a></p> <hr /> <p><a href="https://i.sstatic.net/tDptF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tDptF.png" alt="enter image description here" /></a></p> <hr /> <p><a href="https://i.sstatic.net/9Olra.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9Olra.png" alt="enter image description here" /></a></p> <hr /> <p><a href="https://i.sstatic.net/MPnLw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MPnLw.png" alt="enter image description here" /></a></p>
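<p>A quick simulation illustrates the first point. All parameters here (<span class="math-container">$\mu=0$</span>, <span class="math-container">$\sigma=1$</span>, <span class="math-container">$n=10$</span>, <span class="math-container">$p=0.3$</span>, <span class="math-container">$\mathbb{P}(A)=1/2$</span>, the seed) are made up for illustration: roughly half the mass of the mixed <span class="math-container">$X$</span> sits on the integer atoms of <span class="math-container">$B$</span>, and the rest is spread over a continuum, so <span class="math-container">$X$</span> has neither a pmf nor a pdf.</p>

```python
import random

random.seed(0)
N = 100_000
p_A = 0.5  # assumed P(A); any value strictly between 0 and 1 works

integer_outcomes = 0
for _ in range(N):
    if random.random() < p_A:
        x = random.gauss(0.0, 1.0)                         # Z ~ N(0, 1) on A
    else:
        x = sum(random.random() < 0.3 for _ in range(10))  # B ~ Bin(10, 0.3) on A^c
    if x == int(x):
        integer_outcomes += 1

# About half of all samples land exactly on an integer (the atoms contributed
# by B), while the Gaussian half essentially never does.
print(integer_outcomes / N)
```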
<p>Since measure-theoretic axiomatization of probability was formulated by Kolmogorov, I think you'd be very much interested in <a href="http://arxiv.org/pdf/math/0606533.pdf" rel="noreferrer">this article</a>. I had similar questions to you, and most of them were clarified after reading it - although I've also read Kolmogorov's original work after that. </p> <p>One of the ideas is that historically there were proofs for LLN and CLT available without explicit use of measure theory, however both Borel and Kolmogorov started using measure-theoretical tools to solve probabilistic problems on $[0,1]$ and similar spaces, such as treating a binary expansion of $x\in [0,1]$ as coordinates of a random walk. Then the idea was: it works well, what if we try to use this method much more often, and even say that this is the way to go actually? When the work of Kolmogorov was first out, not every mathematician agreed with his claim (to say the least). But you are somewhat right in saying that measure theory makes dealing with probability easier. It's like solving basic geometric problems using vector algebra.</p> <p>Regarding facts exclusively available for discrete/continuous distributions: usually a good probabilistic theorem is quite general and works fine with both cases. However, there are some things that hold for "continuous" measures only. A proper name for continuous is atomless: $\mu$ is atomless if for any measurable set $F$ with $\mu(F) &gt; 0$ there exists $E \subseteq F$ such that $0 &lt; \mu(E) &lt; \mu(F)$. Then the range of $\mu$ is convex, that is for all $0 \leq c \leq \mu(\Omega)$ there exists a set $C$ such that $\mu(C) = c$. Of course, that does not hold for measures with atoms. Not a very probabilistic fact though.</p>
geometry
<p>I have a doubt about the real meaning of the derivative of a vector field. This question seems silly at first but the doubt came when I was studying the definition of tangent space.</p> <p>If I understood correctly, a vector is a directional derivative operator, i.e.: a vector is an operator that can produce derivatives of scalar fields. If that's the case then a vector acts on a scalar field and tells me how the field changes at that point. </p> <p>However, if a vector is a derivative operator, a vector field defines a different derivative operator at each point. So differentiating a vector field would mean differentiating a derivative operator, and that seems strange to me at first. I thought for example that the total derivative of a vector field would produce rates of change of the field, but my studies led me to a different approach, where the total derivative produces rates of change only for scalar fields and for vector fields it produces the pushforward.</p> <p>So, what's the real meaning of differentiating a vector field knowing all of this?</p>
<p>As I understand it, these are your questions:</p> <ul> <li>How does one define the derivative of a vector field? Do we just take the "derivatives" of each vector in the field? If so, what does it mean to take the derivative of a differential operator, anyway?</li> <li>Why does the total derivative of a scalar field give information about rates of change, while the "total derivative" of a vector field gives the pushforward (which doesn't seem to relate to rates of change)?</li> </ul> <p>I think the best way to answer these questions is to provide a broader context:</p> <hr> <p>In calculus, we ask how to find derivatives of functions $F\colon \mathbb{R}^m \to \mathbb{R}^n$. The typical answer is the <em>total derivative</em> $DF\colon \mathbb{R}^m \to L(\mathbb{R}^m, \mathbb{R}^n)$, which assigns to each point $p \in \mathbb{R}^m$ a linear map $D_pF \in L(\mathbb{R}^m, \mathbb{R}^n)$. With respect to to the standard bases, this linear map can be represented as a matrix: $$D_pF = \begin{pmatrix} \left.\frac{\partial F^1}{\partial x^1}\right|_p &amp; \cdots &amp; \left.\frac{\partial F^1}{\partial x^m}\right|_p \\ \vdots &amp; &amp; \vdots \\ \left.\frac{\partial F^n}{\partial x^1}\right|_p &amp; \cdots &amp; \left.\frac{\partial F^n}{\partial x^m}\right|_p \end{pmatrix}$$</p> <p>Personally, I think this encodes the idea of "rate of change" very well. (Just look at all those partial derivatives!)</p> <p>Let's now specialize to the case $m = n$. Psychologically, how does one intuit these functions $F\colon \mathbb{R}^n \to \mathbb{R}^n$? There are two usual answers:</p> <blockquote> <p>(1) We intuit $F\colon \mathbb{R}^n \to \mathbb{R}^n$ as a map between two different spaces. Points from the domain space get sent to points in the codomain space.</p> <p>(2) We intuit $F\colon \mathbb{R}^n \to \mathbb{R}^n$ as a vector field. Every point in $\mathbb{R}^n$ is assigned an arrow in $\mathbb{R}^n$.</p> </blockquote> <p>This distinction is important. 
When we generalize from $\mathbb{R}^n$ to abstract manifolds, these two ideas will take on different forms. Consequently, this means that we will end up with different concepts of "derivative."</p> <hr> <p>In case (1), the maps $F\colon \mathbb{R}^m \to \mathbb{R}^n$ generalize to <em>smooth maps</em> between manifolds $F \colon M \to N$. In this setting, the concept of "total derivative" generalizes nicely to "pushforward." That is, it makes sense to talk about the pushforward of a smooth map $F \colon M \to N$.</p> <p>But you asked about vector fields, which brings us to case (2). In this case, we first have to be careful about what we mean by "vector" and "vector field."</p> <blockquote> <p>A <em>vector</em> $v_p \in T_pM$ at a point $p$ is (as you say) a directional derivative operator <em>at the point</em> $p$. This means that $v_p$ inputs a scalar field $f\colon M \to \mathbb{R}$ and outputs a real number $v_p(f) \in \mathbb{R}$.</p> <p>A <em>vector field</em> $v$ on $M$ is a map which associates to each point $p \in M$ a vector $v_p \in T_pM$. This means that a vector field defines a derivative operator at each point.</p> <p>Therefore: a vector field $v$ can be regarded as an operator which inputs scalar fields $f\colon M \to \mathbb{R}$ and outputs scalar fields $v(f)\colon M \to \mathbb{R}$.</p> </blockquote> <p>In this setting, it no longer makes sense to talk about the "total derivative" of a vector field. You've said it yourself: what would it even mean to talk about "derivatives" of vectors, anyway? 
This doesn't make sense, so we'll need to go a different route.</p> <hr> <p>In differential geometry, there are two ways of talking about the derivative of a vector field with respect to another vector field:</p> <ul> <li>Connections (usually denoted $\nabla_wv$ or $D_wv$)</li> <li>Lie derivatives (usually denoted $\mathcal{L}_wv$ or $[w,v]$)</li> </ul> <p>Intuitively, these notions capture the idea of "infinitesimal rate of change of a vector field $v$ in the direction of a vector field $w$."</p> <p><strong>Question:</strong> What do these constructions look like in $\mathbb{R}^n$?</p> <p>Taking advantage of the fact that we're in $\mathbb{R}^n$, we can look at our vector fields in the calculus way: as functions $v\colon \mathbb{R}^n \to \mathbb{R}^n$. As such, we can write the components as $v = (v^1,\ldots, v^n)$.</p> <blockquote> <p>The <em>(Levi-Civita) connection of $v$ with respect to $w$</em> is defined as $$\nabla_wv = (w(v^1), \ldots, w(v^n)),$$ where $$w(v^i) := w^1\frac{\partial v^i}{\partial x^1} + \ldots + w^n\frac{\partial v^i}{\partial x^n}.$$</p> <p>The <em>Lie derivative of $v$ with respect to $w$</em> has a technical definition in terms of flows that I don't want to go into, but the bottom line is that it's similar to Rod Carvalho's answer.</p> </blockquote> <p>Also, in $\mathbb{R}^n$ we have the pleasant formula</p> <p>$$\mathcal{L}_wv = \nabla_wv - \nabla_vw,$$</p> <p>which aids in computation.</p>
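<p>The pleasant formula <span class="math-container">$\mathcal{L}_wv = \nabla_wv - \nabla_vw$</span> can be checked numerically in <span class="math-container">$\mathbb{R}^2$</span>. The fields <span class="math-container">$v, w$</span> below are hypothetical examples chosen only for illustration, and the directional derivatives in the connection formula are approximated by central differences:</p>

```python
# Hypothetical sample fields on R^2, chosen only to illustrate the formula
# L_w v = nabla_w v - nabla_v w:
#   v(x, y) = (x*y, x),   w(x, y) = (y, x*x)
def v(p):
    x, y = p
    return (x * y, x)

def w(p):
    x, y = p
    return (y, x * x)

def nabla(a, b, p, h=1e-6):
    """(nabla_a b)(p): central-difference directional derivative of each
    component of the field b along the vector a(p)."""
    ax, ay = a(p)
    plus = b((p[0] + h * ax, p[1] + h * ay))
    minus = b((p[0] - h * ax, p[1] - h * ay))
    return tuple((u - l) / (2 * h) for u, l in zip(plus, minus))

p = (1.0, 2.0)
bracket = tuple(s - t for s, t in zip(nabla(w, v, p), nabla(v, w, p)))
# By hand: Dv(p) = [[2, 1], [1, 0]] and Dw(p) = [[0, 1], [2, 0]], so
# [w, v](p) = Dv(p) w(p) - Dw(p) v(p) = (5, 2) - (1, 4) = (4, -2).
```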
<p>Let $\mathbb{v} : \mathbb{R}^n \to \mathbb{R}^n$ be a vector field, and let $\varphi : \mathbb{R}^n \to \mathbb{R}$ be a <a href="http://en.wikipedia.org/wiki/Scalar_field" rel="noreferrer">scalar field</a>. Suppose that we would like to obtain the <a href="http://en.wikipedia.org/wiki/Directional_derivative" rel="noreferrer">directional derivative</a> of $\varphi$ at every $x$ in the direction of $\mathbb{v} (x)$, which is the following</p> <p>$$(D_{\mathbb{v}} \varphi) (x) := \displaystyle\lim_{t \rightarrow 0^+} \frac{\varphi (x + t \mathbb{v} (x)) - \varphi (x)}{t} = \langle \nabla \varphi (x), \mathbb{v} (x) \rangle$$</p> <p>This is the <a href="http://en.wikipedia.org/wiki/Lie_derivative" rel="noreferrer">Lie derivative</a> of $\varphi$ along $\mathbb{v}$. It's widely used in control theory, namely, in the study of <a href="http://en.wikipedia.org/wiki/Lyapunov_stability" rel="noreferrer">Lyapunov stability</a> of dynamical systems. If the vector field $\mathbb{v}$ is the gradient of a scalar field $\psi : \mathbb{R}^n \to \mathbb{R}$, then the Lie derivative of $\varphi$ along $\mathbb{v} (x) := \nabla \psi (x)$ is given by</p> <p>$$(D_{\mathbb{v}} \varphi) (x) = \langle \nabla \varphi (x), \mathbb{v} (x) \rangle = \langle \nabla \varphi (x), \nabla \psi (x) \rangle$$</p> <p>Has this answered, even if remotely, your question?</p> <hr> <p><strong>Update:</strong> Since my original post did not answer the OP's question, I will add this update. Let $\mathbb{u}, \mathbb{v} : \mathbb{R}^n \to \mathbb{R}^n$ be vector fields. Let $\mathbb{u}_i$ be the $i$-th component of $\mathbb{u}$, and note that $\mathbb{u}_i$ is a scalar field. 
We can compute the Lie derivative of $\mathbb{u}_i$ along $\mathbb{v}$, which is the scalar function</p> <p>$$(D_{\mathbb{v}} \mathbb{u}_i) (x) = \langle \nabla \mathbb{u}_i (x), \mathbb{v} (x) \rangle$$</p> <p>We could define the Lie derivative of vector field $\mathbb{u}$ along the vector field $\mathbb{v}$ as follows</p> <p>$$(D_{\mathbb{v}} \mathbb{u}) (x) := \left[\begin{array}{c} (D_{\mathbb{v}} \mathbb{u}_1) (x)\\ (D_{\mathbb{v}} \mathbb{u}_2) (x)\\ \vdots \\ (D_{\mathbb{v}} \mathbb{u}_n) (x)\end{array}\right]$$</p> <p>Finally, do note that $(D_{\mathbb{v}} \mathbb{u}) (x) = ((D \mathbb{u}) (x)) \, \mathbb{v} (x)$, where $(D \mathbb{u})$ is the <a href="http://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant" rel="noreferrer">Jacobian</a> of $\mathbb{u}$.</p>
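<p>The closing identity <span class="math-container">$(D_{\mathbb{v}} \mathbb{u}) (x) = ((D \mathbb{u}) (x)) \, \mathbb{v} (x)$</span> is easy to verify numerically. The field <span class="math-container">$\mathbb{u}$</span> below is a made-up example, its Jacobian is entered by hand, and the component-wise directional derivatives are approximated by central differences:</p>

```python
# A made-up field u on R^2 to verify (D_v u)(x) = ((D u)(x)) v(x):
# the component-by-component Lie derivative equals the Jacobian applied to v.
def u(p):
    x, y = p
    return (x * x + y, x * y)

def jacobian_u(p):  # written out by hand for this particular u
    x, y = p
    return ((2 * x, 1.0),
            (y, x))

v_dir = (1.0, 2.0)   # v(x) taken constant for simplicity
p = (1.0, 2.0)
h = 1e-6

# central-difference directional derivative of each component of u along v
p_plus = (p[0] + h * v_dir[0], p[1] + h * v_dir[1])
p_minus = (p[0] - h * v_dir[0], p[1] - h * v_dir[1])
fd = tuple((a - b) / (2 * h) for a, b in zip(u(p_plus), u(p_minus)))

J = jacobian_u(p)
jv = tuple(J[i][0] * v_dir[0] + J[i][1] * v_dir[1] for i in range(2))
# fd and jv agree; both equal (4.0, 4.0) at this point
```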
probability
<p><strong>Context:</strong> I'm a high school student, who has only ever had an introductory treatment, if that, on combinatorics. As such, the extent to which I have seen combinatoric applications is limited to situations such as "If you need a group of 2 men and 3 women and you have 8 men and 9 women, how many possible ways can you pick the group" (They do get slightly more complicated, but are usually similar).</p> <p><strong>Question:</strong> I apologise in advance for the naive question, but at an elementary level it seems as though combinatorics (and the ensuing probability that can make use of it), seems not overly rigorous. It doesn't seem as though you can "prove" that the number of arrangements you deemed is the correct number. What if you forget a case?</p> <p>I know that you could argue that you've considered all cases, by asking if there is another case other than the ones you've considered. But, that doesn't seem to be the way other areas of mathematics is done. If I wish to prove something, I couldn't just say "can you find a situation where the statement is incorrect" as we don't just assume it is correct by nature.</p> <p><strong>Is combinatorics rigorous?</strong></p> <p>Thanks</p>
<p>Combinatorics certainly <em>can</em> be rigourous but is not usually presented that way because doing it that way is:</p> <ul> <li>longer (obviously)</li> <li>less clear because the rigour can obscure the key ideas</li> <li>boring because once you know intuitively that something works you lose interest in a rigourous argument</li> </ul> <p>For example, compare the following two proofs that the binomial coefficient is $n!/k!(n - k)!$ where I will define the binomial coefficient as the number of $k$-element subsets of $\{1,\dots,n\}$.</p> <hr> <p><strong>Proof 1:</strong></p> <p>Take a permutation $a_1,\dots, a_n$ of $\{1,\dots,n\}$. Separate this into $a_1,\dots,a_k$ and $a_{k + 1}, \dots, a_n$. We can permute $1,\dots, n$ in $n!$ ways and since we don't care about the order of $a_1,\dots,a_k$ or $a_{k + 1},\dots,a_n$ we divide by $k!(n - k)!$ for a total of $n!/k!(n - k)!$.</p> <hr> <p><strong>Proof 2:</strong></p> <p>Let $B(n, k)$ denote the set of $k$-element subsets of $\{1,\dots,n\}$. We will show that there is a bijection</p> <p>$$ S_n \longleftrightarrow B(n, k) \times S_k \times S_{n - k}. $$</p> <p>The map $\to$ is defined as follows. Let $\pi \in S_n$. Let $A = \{\pi(1),\pi(2),\dots,\pi(k)\}$ and let $B = \{\pi(k + 1),\dots, \pi(n)\}$. For each finite subset $C$ of $\{1,\dots,n\}$ with $m$ elements, fix a bijection $g_C : C \longleftrightarrow \{1,\dots,m\}$ by writing the elements of $C$ in increasing order $c_1 &lt; \dots &lt; c_m$ and mapping $c_i \longleftrightarrow i$.</p> <p>Define maps $\pi_A$ and $\pi_B$ on $\{1,\dots,k\}$ and $\{1,\dots,n-k\}$ respectively by defining $$ \pi_A(i) = g_A(\pi(i)) \text{ and } \pi_B(j) = g_B(\pi(k + j)). 
$$</p> <p>We map the element $\pi \in S_n$ to the triple $(A, \pi_A, \pi_B) \in B(n, k) \times S_k \times S_{n - k}$.</p> <p>Conversely, given a triple $(A, \sigma, \rho) \in B(n, k) \times S_k \times S_{n - k}$ we define $\pi \in S_n$ by $$ \pi(i) = \begin{cases} g_A^{-1}(\sigma(i)) &amp; \text{if } i \in \{1,\dots,k\} \\ g_B^{-1}(\rho(i-k)) &amp; \text{if } i \in \{k + 1,\dots,n \} \end{cases} $$ where $B = \{1,\dots,n\} \setminus A$.</p> <p>This defines a bijection $S_n \longleftrightarrow B(n, k) \times S_k \times S_{n - k}$ and hence</p> <p>$$ n! = {n \choose k} k!(n - k)! $$</p> <p>as required.</p> <hr> <p><strong>Analysis:</strong></p> <p>The first proof was two sentences whereas the second was some complicated mess. People with experience in combinatorics will understand the second argument is happening behind the scenes when reading the first argument. To them, the first argument is all the rigour necessary. For students it is useful to teach the second method a few times to build a level of comfort with bijective proofs. But if we tried to do all of combinatorics the second way it would take too long and there would be rioting.</p> <hr> <p><strong><em>Post Scriptum</em></strong></p> <p>I will say that a lot of combinatorics textbooks and papers do tend to be written more in the line of the second argument (i.e. rigourously). Talks and lectures tend to be more in line with the first argument. However, higher level books and papers only prove "higher level results" in this way and will simply state results that are found in lower level sources. They will also move a lot faster and not explain each step exactly.</p> <p>For example, I didn't show that the map above was a bijection, merely stated it. In a lower level book there will be a proof that the two maps compose to the identity in both ways. 
In a higher level book, you might just see an example of the bijection and a statement that there is a bijection in general with the assumption that the person reading through the example could construct a proof on their own.</p>
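<p>For readers who like to see such identities machine-checked, here is a small brute-force Python sketch (mine, not the author's) confirming that the subset count really satisfies <span class="math-container">$n! = \binom{n}{k}\,k!\,(n-k)!$</span> for all small <span class="math-container">$n$</span>:</p>

```python
import itertools, math

# Brute-force check of n! = C(n,k) * k! * (n-k)!, with C(n,k) counted
# directly as the number of k-element subsets of {1,...,n}.
for n in range(1, 9):
    for k in range(n + 1):
        n_subsets = sum(1 for _ in itertools.combinations(range(1, n + 1), k))
        assert n_subsets == math.factorial(n) // (math.factorial(k) * math.factorial(n - k))
        assert math.factorial(n) == n_subsets * math.factorial(k) * math.factorial(n - k)
print("identity holds for all n <= 8")
```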
<p>Essentially, all (nearly all?) of combinatorics comes down to two things, the multiplication rule and the addition rule.</p> <blockquote> <p>If <span class="math-container">$A,B$</span> are finite sets then <span class="math-container">$|A\times B|=|A|\,|B|$</span>.</p> <p>If <span class="math-container">$A,B$</span> are finite sets and <span class="math-container">$A\cap B=\emptyset$</span>, then <span class="math-container">$|A\cup B|=|A|+|B|$</span>.</p> </blockquote> <p>These can be rigorously proved, and more sophisticated techniques can be rigorously derived from them, for example, the fact that the number of different <span class="math-container">$r$</span>-element subsets of an <span class="math-container">$n$</span>-element set is <span class="math-container">$C(n,r)$</span>.</p> <p>So, this far, combinatorics is perfectly rigorous. IMHO, the point at which it may become (or may appear to become) less rigorous is when it moves from pure to applied mathematics. So, with your specific example, you have to assume (or justify if you can) that counting the number of choices of <span class="math-container">$2$</span> men and <span class="math-container">$3$</span> women from <span class="math-container">$8$</span> men and <span class="math-container">$9$</span> women is the same as evaluating <span class="math-container">$$|\{A\subseteq M: |A|=2\}\times \{B\subseteq W: |B|=3\}|\ ,$$</span> where <span class="math-container">$|M|=8$</span> and <span class="math-container">$|W|=9$</span>.</p> <p>It should not be surprising that the applied aspect of the topic requires some assumptions that may not be mathematically rigorous. The same is true in many other cases: for example, modelling a physical system by means of differential equations. Solving the equations once you have them can be done (more or less) rigorously, but deriving the equations in the first place usually cannot.</p> <p>Hope this helps!</p>
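<p>Both rules, and the men-and-women count from the question, can be verified by brute force; a hypothetical Python sketch:</p>

```python
from itertools import combinations, product

# Multiplication rule: |A x B| = |A| * |B|
A = {1, 2, 3}
B = {"x", "y"}
assert len(list(product(A, B))) == len(A) * len(B)

# Addition rule: |A u C| = |A| + |C| for disjoint A, C
C = {10, 20}
assert len(A | C) == len(A) + len(C)

# The question's example: choose 2 men out of 8 and 3 women out of 9
men, women = range(8), range(9)
groups = list(product(combinations(men, 2), combinations(women, 3)))
assert len(groups) == 28 * 84 == 2352  # C(8,2) * C(9,3)
```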
probability
<p>Suppose I have a line segment of length $L$. I now select two points at random along the segment. What is the expected value of the distance between the two points, and why?</p>
<p>Byron has already answered your question, but I will attempt to provide a detailed solution...</p> <p>Let $X$ be a random variable uniformly distributed over $[0,L]$, i.e., the probability density function of $X$ is the following</p> <p>$$f_X (x) = \begin{cases} \frac{1}{L} &amp; \textrm{if} \quad{} x \in [0,L]\\ 0 &amp; \textrm{otherwise}\end{cases}$$</p> <p>Let us randomly pick two points in $[0,L]$ <em>independently</em>. Let us denote those by $X_1$ and $X_2$, which are random variables distributed according to $f_X$. The distance between the two points is a new random variable</p> <p>$$Y = |X_1 - X_2|$$</p> <p>Hence, we would like to find the expected value $\mathbb{E}(Y) = \mathbb{E}( |X_1 - X_2| )$. Let us introduce function $g$</p> <p>$$g (x_1,x_2) = |x_1 - x_2| = \begin{cases} x_1 - x_2 &amp; \textrm{if} \quad{} x_1 \geq x_2\\ x_2 - x_1 &amp; \textrm{if} \quad{} x_2 \geq x_1\end{cases}$$</p> <p>Since the two points are picked independently, the joint probability density function is the product of the pdf's of $X_1$ and $X_2$, i.e., $f_{X_1 X_2} (x_1, x_2) = f_{X_1} (x_1) f_{X_2} (x_2) = 1 / L^2$ in $[0,L] \times [0,L]$. Therefore, the expected value $\mathbb{E}(Y) = \mathbb{E}(g(X_1,X_2))$ is given by</p> <p>$$\begin{align} \mathbb{E}(Y) &amp;= \displaystyle\int_{0}^L\int_{0}^L g(x_1,x_2) \, f_{X_1 X_2} (x_1, x_2) \,d x_1 \, d x_2\\[6pt] &amp;= \frac{1}{L^2} \int_0^L\int_0^L |x_1 - x_2| \,d x_1 \, d x_2\\[6pt] &amp;= \frac{1}{L^2} \int_0^L\int_0^{x_1} (x_1 - x_2) \,d x_2 \, d x_1 + \frac{1}{L^2} \int_0^L\int_{x_1}^L (x_2 - x_1) \,d x_2 \, d x_1\\[6pt] &amp;= \frac{L^3}{6 L^2} + \frac{L^3}{6 L^2} = \frac{L}{3}\end{align}$$</p>
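<p>A quick Monte Carlo simulation (with an arbitrary choice of <span class="math-container">$L = 5$</span> and a fixed seed, both made up for this sketch) illustrates the answer <span class="math-container">$\mathbb{E}(Y) = L/3$</span>:</p>

```python
import random

# Monte Carlo estimate of E|X1 - X2| for X1, X2 independent Unif(0, L).
random.seed(42)
L = 5.0
n = 200_000
total = 0.0
for _ in range(n):
    total += abs(random.uniform(0.0, L) - random.uniform(0.0, L))
estimate = total / n
print(estimate, L / 3)  # the estimate lands close to L/3
```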
<p>Sorry. I posted a cryptic comment just before running off to class. What I meant was that if $X,Y$ are independent uniform $(0,1)$ random variables, then the triple $$(A,B,C):=(\min(X,Y),\ \max(X,Y)-\min(X,Y),\ 1-\max(X,Y))$$ is an exchangeable sequence. In particular, $\mathbb{E}(A)=\mathbb{E}(B)=\mathbb{E}(C),$ and since $A+B+C=1$ identically we must have $\mathbb{E}(B)=\mathbb{E}(\mbox{distance})={1\over 3}.$ </p> <p>Intuitively, the "average" configuration of two random points on a interval looks like this: <img src="https://i.sstatic.net/NTD7P.jpg" alt="enter image description here"></p>
geometry
<p>Is this really possible? Is there any other example of this other than the Koch Snowflake? If so can you prove that example to be true?</p>
<p>One can have a bounded region in the plane with finite area and infinite perimeter, and this (and not the reverse) is true for (the inside of) the <a href="http://en.wikipedia.org/wiki/Koch_snowflake" rel="noreferrer">Koch Snowflake</a>.</p> <p>On the other hand, the <a href="http://en.wikipedia.org/wiki/Isoperimetric_inequality" rel="noreferrer">Isoperimetric Inequality</a> says that if a bounded region has area $A$ and perimeter $L$, then $$4 \pi A \leq L^2,$$ and in particular, finite perimeter implies finite area. In fact, equality holds here if and only if the region is a disk (that is, if its boundary is a circle). See <a href="http://www.math.utah.edu/~treiberg/isoperim/isop.pdf" rel="noreferrer">these notes</a> (pdf) for much more about this inequality, including a few proofs.</p> <p>(As Peter LeFanu Lumsdaine observes in the comments below, proving this inequality in its full generality is technically demanding, but to answer the question of whether there's a bounded region with infinite area but finite perimeter, it's enough to know that there is <em>some</em> positive constant $\lambda$ for which $$A \leq \lambda L^2,$$ and it's easy to see this intuitively: Any closed, simple curve of length $L$ must be contained in the disc of radius $\frac{L}{2}$ centered at any point on the curve, and so the area of the region the curve encloses is smaller than the area of the disk, that is, $$A \leq \frac{\pi}{4} L^2.$$)</p> <p>NB that the Isoperimetric Inequality is not true, however, if one allows general surfaces (roughly, 2-dimensional shapes not contained in the plane). For example, if one starts with a disk and "pushes the inside out" without changing the circular boundary of the disk, then one can make a region with a given perimeter (the circumference of the boundary circle) but (finite) surface area as large as one likes.</p>
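<p>The inequality is easy to spot-check numerically. The shapes below are arbitrary examples; note equality for the disk and strict inequality for the others:</p>

```python
import math

# Spot-check of 4*pi*A <= L^2 on a few arbitrary planar shapes,
# stored as (area, perimeter) pairs.
shapes = {
    "disk of radius 1":  (math.pi,  2 * math.pi),
    "unit square":       (1.0,      4.0),
    "2 x 0.5 rectangle": (1.0,      5.0),
}
for name, (A, L) in shapes.items():
    assert 4 * math.pi * A <= L ** 2 + 1e-12, name

# Equality holds exactly for the disk: 4*pi*(pi) == (2*pi)^2.
A, L = shapes["disk of radius 1"]
assert abs(4 * math.pi * A - L ** 2) < 1e-12
```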
<p>It's a bit of a matter of semantics.</p> <p>What's a "shape" but a subset of the plane separated from the rest by a curve? But <em>which subset</em>?</p> <p>A circumference (for example) is a finite closed curve (with finite perimeter) that separates and defines two subsets of the plane - we conventionally pick the one with finite area and we call it "circle". But the circumference also defines the subset with infinite area that lays "outside" (which is a conventional concept). That other "outside shape" would be an example of a finite-perimeter curve with an infinite area.</p> <p>That sounds like cheating and playing with words. But think about it: what else could possibly an infinite area delimited by a finite curve look like? If you only allow yourself to look at the "inside" of any closed curve, it couldn't have an infinite area because you can always define a circumference "around it" whose circle would necessarily fully contain the first shape and also be of finite area. Any possible shape with infinite area and finite perimeter would have to be the "outside" delimited by a closed curve.</p> <p>So the answer to your question depends on whether you're interested in considering the "outside" of a closed curve (in which case all closed curves delimit such shapes), or whether you're not (in which case there cannot be any such shape).</p>
probability
<p>I have come across this text recently. I was confused, asked a friend, she was also not certain. Can you explain please? What author is talking about here? I don't understand. Is the problem with the phrase &quot;on average&quot;?</p> <blockquote> <p>Innumerable misconceptions about probability. For example, suppose I toss a fair coin 100 times. On every “heads”, I take one step to the north. On every “tails”, I take one step to the south. After the 100th step, how far away am I, on average, from where I started? (Most kids – and more than a few teachers – say “zero” ... which is not the right answer.)</p> </blockquote> <p>In a way it is pointless to talk about misconceptions, when you don't explain the misconceptions...</p> <p>Source: <a href="https://www.av8n.com/physics/pedagogy.htm" rel="noreferrer">https://www.av8n.com/physics/pedagogy.htm</a> Section 4.2 Miscellaneous Misconceptions, item number 5</p>
<p>Since the distance can't be negative, the average distance is strictly greater than zero as soon as at least one possible outcome doesn't land on zero.</p> <p>Let's say you land 2 steps above zero on the first try and 2 steps beneath zero on the second try. Then on average you will have thrown heads and tails equally many times, but the average distance from zero is 2.</p> <p>You can see this as follows: suppose you start with a variable <span class="math-container">$x$</span> which is initially 0 and every time you throw heads, you add 1, and every time you throw tails, you subtract 1.</p> <p>Then the average value of <span class="math-container">$x$</span> after 100 throws is 0, since you expect to throw as many heads as tails on average.</p> <p>But the average distance is the average value of <span class="math-container">$|x|$</span> after 100 throws. Throwing a lot of heads once in one &quot;run&quot; and throwing a lot of tails in another &quot;run&quot; cancels in the average value, but adds to the average distance.</p>
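<p>A simulation of the walk (seed and trial count chosen arbitrarily for this sketch) makes the distinction concrete: the average <em>position</em> is near 0, while the average <em>distance</em> is near 8 steps:</p>

```python
import random

# Simulate many 100-step fair walks and compare the average final position
# (which should be near 0) with the average final distance (which is not).
random.seed(1)
trials = 50_000
steps = 100
sum_pos = 0
sum_dist = 0
for _ in range(trials):
    s = sum(1 if random.random() < 0.5 else -1 for _ in range(steps))
    sum_pos += s          # signed final position x
    sum_dist += abs(s)    # distance |x| from the start
avg_position = sum_pos / trials
avg_distance = sum_dist / trials
print(avg_position, avg_distance)  # near 0 and near 8, respectively
```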
<p>The average distance (in steps) from where you started is approximately <span class="math-container">$\sqrt{\frac{200}{\pi}}\approx 7.978845608$</span> and as the number of steps <span class="math-container">$N$</span> tends to infinity is asymptotic to <span class="math-container">$\sqrt{\frac{2N}{\pi}}$</span>.</p> <p>The &quot;misconception&quot; here is a confusion between the expectation of <span class="math-container">$S_N=\sum_{i=1}^NX_i$</span> with <span class="math-container">$X_i$</span> independent random variables taking values <span class="math-container">$\{-1,1\}$</span> with equal probabilities <span class="math-container">$\frac{1}{2}$</span>, and the expectation of <span class="math-container">$|S_N|$</span>; I think this confusion is highly prevalent because we instinctively visualize position and its average, rather than distance, especially because here the symmetry of the random walk draws us to the simple &quot;symmetric&quot; value, <span class="math-container">$0=-0$</span>. There is another closely related value, simpler to calculate and in some ways more natural: the square root of the expectation of <span class="math-container">$S_N^2$</span>, that is the standard deviation <span class="math-container">$\sigma$</span> of <span class="math-container">$S_N$</span>, as <span class="math-container">$ES_N=0$</span>.</p> <p>There are <span class="math-container">$2^N$</span> possible paths after <span class="math-container">$N$</span> steps, each with probability <span class="math-container">$2^{-N}$</span>; thus, as noted in other answers, since all values of the random variable <span class="math-container">$|S_N|$</span> are nonnegative, it suffices to find one that is positive to prove that the average distance <span class="math-container">$E|S_N|&gt;0$</span>. 
You may take the path <span class="math-container">$\omega=(1,1,1,...,1,1)$</span>, &quot;all steps to the north&quot;: the final distance from where you started is <span class="math-container">$N$</span>, which contributes <span class="math-container">$\frac{N}{2^N}$</span> to the expectation, so <span class="math-container">$E|S_N|\geq\frac{N}{2^N}&gt;0$</span>.</p> <p>Hölder's inequality implies that the variance <span class="math-container">$\sigma^2_{S_N}=ES_N^2\geq (E|S_N|)^2$</span>; and actually <span class="math-container">$ES_N^2=N$</span> (by independence of the <span class="math-container">$X_i$</span> and <span class="math-container">$\sigma_{X_i}^2=1$</span>) while as <span class="math-container">$N\rightarrow\infty$</span>, <span class="math-container">$(E|S_N|)^2\sim\frac{2N}{\pi}&lt;N$</span>.</p> <p>The exact formula (whose proof you can find at <a href="https://mathworld.wolfram.com/RandomWalk1-Dimensional.html" rel="nofollow noreferrer">https://mathworld.wolfram.com/RandomWalk1-Dimensional.html</a>) for <span class="math-container">$E|S_N|$</span> is <span class="math-container">$\frac{(N-1)!!}{(N-2)!!}$</span> for <span class="math-container">$N$</span> even, and <span class="math-container">$\frac{N!!}{(N-1)!!}$</span> for <span class="math-container">$N$</span> odd. Both formulas have the same asymptotic, given above. As commented by Džuris, the average distance increases only when taking an odd-numbered step (see my reply for a direct proof, not relying on the exact formula, which is not trivial to arrive at).</p> <p>A decimal approximation of the value (rather than of the asymptotic estimate given at the beginning of this answer) is <span class="math-container">$E|S_{100}|\approx 7.95892373871787614981270502421704614$</span>, and of course the standard deviation of <span class="math-container">$S_{100}$</span> is <span class="math-container">$10$</span>.</p>
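<p>These numbers are easy to verify directly. The Python sketch below (exact rational arithmetic via the standard library) computes <span class="math-container">$E|S_N|$</span> straight from the binomial distribution of the number of heads, and checks it against the double-factorial formula and the asymptotic estimate quoted above:</p>

```python
from fractions import Fraction
from math import comb, sqrt, pi

def mean_abs_distance(n):
    """Exact E|S_n| for an n-step +/-1 random walk: if k of the n steps
    go north, the end point is 2k - n, with probability C(n, k) / 2^n."""
    return sum(abs(2 * k - n) * Fraction(comb(n, k), 2 ** n)
               for k in range(n + 1))

def double_factorial(m):
    result = 1
    while m > 1:
        result *= m
        m -= 2
    return result

n = 100
exact = mean_abs_distance(n)
# MathWorld's closed form for even n: (n-1)!! / (n-2)!!
closed = Fraction(double_factorial(n - 1), double_factorial(n - 2))
assert exact == closed
print(float(exact))        # 7.9589237387... as quoted above
print(sqrt(2 * n / pi))    # asymptotic estimate, 7.9788456...
```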
combinatorics
<p>How do you prove <span class="math-container">$n \choose k$</span> is maximum when <span class="math-container">$k$</span> is <span class="math-container">$\lceil n/2 \rceil$</span> or <span class="math-container">$\lfloor n/2 \rfloor$</span>?</p> <p>This <a href="http://clrs.skanev.com/C/01/10.html" rel="noreferrer">link</a> provides a proof of sorts but it is not satisfying. From what I understand, it focuses on product pairings present in <span class="math-container">$k! (n-k)!$</span> term which are of the form <span class="math-container">$i \times (i-1)$</span>. Since these are minimized when <span class="math-container">$i=n/2$</span>, we get the result. But what about the reasoning for the rest of the terms?</p>
<p>HINT:</p> <p>As $\displaystyle \binom nk&gt;0$ for $0\le k\le n$ where $n&gt;0,k$ are integers,</p> <p>check for which $k$ $$\frac{\binom n{k+1}}{\binom nk}=\frac{n-k}{k+1}\gtrless1$$</p>
<p>I have done this proof <a href="http://us2.metamath.org:88/mpegif/bcmax.html" rel="noreferrer">in Metamath</a> before; it may help to see the whole thing laid out.</p> <p>The proof follows from the fact that the binomial coefficient is monotone in the second argument, i.e. ${n\choose k'}\le{n\choose k''}$ when $0\le k'\le k''\le\lceil\frac n2\rceil$, which can be proven by induction. Given this, you just set $k''=\lceil\frac n2\rceil$ and $k'=k$ or $k'=n-k$ depending on whether $k\le\frac n2$, and you get ${n\choose k}={n\choose n-k}\le{n\choose \lceil n/2\rceil}={n\choose \lfloor n/2\rfloor}$ (where the equalities are deduced by symmetry of the binomial coefficient under $k\mapsto n-k$).</p> <p>To prove monotonicity, we prove ${n\choose k-1}\le{n\choose k}$ for $1\le k\le\lceil\frac n2\rceil$, and thus ${n\choose k}\le{n\choose k+1}\le\dots\le{n\choose l}$ for any $k\le l$ in the range. Now we have:</p> <p>$${n\choose k-1}=\frac{n!}{(k-1)!(n-k+1)!}=\frac{n!}{k!(n-k)!}\frac{k}{n-k+1}={n\choose k}\frac{k}{n-k+1},$$</p> <p>so ${n\choose k-1}\le{n\choose k}$ iff $\frac{k}{n-k+1}\le 1$. But that is equivalent to $$k\le n-k+1\iff 2k\le n+1\iff k\le \frac{n+1}2\iff k\le \left\lceil\frac{n}2\right\rceil,$$</p> <p>and we are done.</p>
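<p>Both the conclusion and the ratio test behind it can be checked mechanically. A small Python sketch:</p>

```python
from math import comb

def argmax_binomial(n):
    """All k at which C(n, k) attains its maximum."""
    values = [comb(n, k) for k in range(n + 1)]
    top = max(values)
    return [k for k, v in enumerate(values) if v == top]

# The maximum sits at n/2 for even n, and at (n-1)/2 and (n+1)/2 for odd n.
for n in range(1, 60):
    assert argmax_binomial(n) == sorted({n // 2, (n + 1) // 2})

# The ratio C(n, k+1)/C(n, k) = (n-k)/(k+1) exceeds 1 exactly while
# k < (n-1)/2, so the sequence climbs up to the middle and then falls.
n = 10
for k in range(n):
    ratio = (n - k) / (k + 1)
    assert (ratio > 1) == (k < (n - 1) / 2)
```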
geometry
<p>This problem occurred to me when I came across a similar problem where the radii were taken over only the primes. That question was unanswered, but it seems to me infinitely many circles of radius $1/2, 1/3, 1/4...$ can fit into a unit disk. The area of all those circles would be $\pi \sum_2^\infty 1/n^2 = \pi^3/6 -\pi$, which is less than the area of the unit disk $\pi$. But can the circles actually be packed with no overlaps?</p>
<p>This packing of the first circles, with radii $\dfrac{1}{2}, \ldots, \dfrac{1}{16}$, makes me optimistic that such a packing is possible: </p> <p><a href="https://i.sstatic.net/DfDFN.png" rel="noreferrer"><img src="https://i.sstatic.net/DfDFN.png" alt="enter image description here"></a></p> <p>Next step: one can cut the free room into strips, which can be packed with smaller circles... <br>A sketch is below: </p> <p><a href="https://i.sstatic.net/Eaqn0.png" rel="noreferrer"><img src="https://i.sstatic.net/Eaqn0.png" alt="enter image description here"></a></p> <p>$2$nd strip: circles with radii $\dfrac{1}{17}, \ldots, \dfrac{1}{47}$; $3$rd strip: circles with radii $\dfrac{1}{48},\ldots,\dfrac{1}{99}$ (for example).</p> <hr> <p><strong>Update:</strong> And this packing is, maybe, more elegant:</p> <p><a href="https://i.sstatic.net/gtaXQ.png" rel="noreferrer"><img src="https://i.sstatic.net/gtaXQ.png" alt="enter image description here"></a></p> <p>One note: when the circles are arranged in a row, the "tail" converges fast: <a href="https://i.sstatic.net/wKVbF.png" rel="noreferrer"><img src="https://i.sstatic.net/wKVbF.png" alt="enter image description here"></a></p> <p>While the radius is $\dfrac{1}{n}$, we have $y = \dfrac{2}{n}$ and $x = 2\sum\limits_{k=1}^n \dfrac{1}{k} \approx 2(\ln n +\gamma)$, where $\gamma \approx 0.577$; therefore the red line has the formula $y = 2 e^{\gamma}e^{-x/2}$. In the previous image this "tail" winds an infinite number of times around the main circle, but its width is very, very tiny. Each loop is $\approx e^{\pi}\approx 23.14$ times thinner than the previous one. So the total thickness of the tail (starting from the $n$-th circle) shows the same exponential decay, $\sim e^{-x/2}$.</p>
<p>Consider lining up all circles with radii $1/2...1/n$ such that each circle is tangent to the two circles next to it, and all the circles are tangent to a straight line. The function which approximates their heights as $n\rightarrow\infty$ is $\sqrt{e^{-n}}$, as shown below.</p> <p><a href="https://i.sstatic.net/ejRem.png" rel="noreferrer"><img src="https://i.sstatic.net/ejRem.png" alt="enter image description here"></a></p> <p>$\int_{-1/2}^{\infty} \sqrt{e^{-n}}\,dn = 2\sqrt[4]{e} \approx 2.568\ldots \lt \pi$, therefore the whole function can be contained inside the unit circle as a beautiful spiral, $r=1-\sqrt{e^{-\theta}}$.</p> <p><a href="https://i.sstatic.net/8R3Sm.png" rel="noreferrer"><img src="https://i.sstatic.net/8R3Sm.png" alt="enter image description here"></a></p> <p>This perfectly matches @Oleg567's elegant packing.</p>
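<p>A numerical sketch supports the exponential envelope behind both answers. It assumes only elementary geometry: two circles of radii $r$ and $r'$, tangent to each other and to the same line, have centers a horizontal distance $2\sqrt{rr'}$ apart, so the row's width grows like $2\ln n$ and the heights decay like $e^{-x/2}$:</p>

```python
from math import sqrt, log, exp

def centers(n):
    """x-coordinates of the centers of circles of radii 1/2, 1/3, ..., 1/n,
    each tangent to a common baseline and to the previous circle.  Two
    circles of radii r and r' tangent to each other and to the same line
    have centers a horizontal distance 2*sqrt(r*r') apart."""
    xs = {2: 0.0}
    for k in range(3, n + 1):
        xs[k] = xs[k - 1] + 2 * sqrt(1.0 / ((k - 1) * k))
    return xs

xs = centers(4000)

# The row's width grows only logarithmically: doubling k advances x by
# roughly 2*ln(2).
assert abs((xs[4000] - xs[2000]) - 2 * log(2)) < 0.01

# Consequently the height (the diameter 2/k) decays like a constant times
# exp(-x/2): the exponential envelope appearing in both answers.
envelope = [(2.0 / k) * exp(xs[k] / 2) for k in (1000, 2000, 4000)]
assert max(envelope) / min(envelope) < 1.01
```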
probability
<p>Linearity of expectation is a very simple and &quot;obvious&quot; statement, but has many non-trivial applications, e.g., to analyze randomized algorithms (for instance, the <a href="https://en.wikipedia.org/wiki/Coupon_collector%27s_problem#Calculating_the_expectation" rel="noreferrer">coupon collector's problem</a>), or in some proofs where dealing with non-independent random variables would otherwise make any calculation daunting.</p> <p>What are the cleanest, most elegant, or striking applications of the linearity of expectation you've encountered?</p>
<p><strong>Buffon's needle:</strong> rule a surface with parallel lines a distance <span class="math-container">$d$</span> apart. What is the probability that a randomly dropped needle of length <span class="math-container">$\ell\leq d$</span> crosses a line?</p> <p>Consider dropping <em>any</em> (continuous) curve of length <span class="math-container">$\ell$</span> onto the surface. Imagine dividing up the curve into <span class="math-container">$N$</span> straight line segments, each of length <span class="math-container">$\ell/N$</span>. Let <span class="math-container">$X_i$</span> be the indicator for the <span class="math-container">$i$</span>-th segment crossing a line. Then if <span class="math-container">$X$</span> is the total number of times the curve crosses a line, <span class="math-container">$$\mathbb E[X]=\mathbb E\left[\sum X_i\right]=\sum\mathbb E[X_i]=N\cdot\mathbb E[X_1].$$</span> That is to say, the expected number of crossings is proportional to the length of the curve (and independent of the shape).</p> <p>Now we need to fix the constant of proportionality. Take the curve to be a circle of diameter <span class="math-container">$d$</span>. Almost surely, this curve will cross a line twice. The length of the circle is <span class="math-container">$\pi d$</span>, so a curve of length <span class="math-container">$\ell$</span> crosses a line <span class="math-container">$\frac{2\ell}{\pi d}$</span> times.</p> <p>Now observe that a straight needle of length <span class="math-container">$\ell\leq d$</span> can cross a line either <span class="math-container">$0$</span> or <span class="math-container">$1$</span> times. So the probability it crosses a line is precisely this expectation value <span class="math-container">$\frac{2\ell}{\pi d}$</span>.</p>
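<p>The <span class="math-container">$\frac{2\ell}{\pi d}$</span> answer is easy to corroborate with a quick Monte Carlo sketch. By symmetry it suffices to sample the distance from the needle's midpoint to the nearest line and the needle's angle against the lines:</p>

```python
import random
from math import sin, pi

def buffon_crossing_probability(length, spacing, trials, seed=0):
    """Monte Carlo estimate of the probability that a needle of the given
    length crosses one of the parallel lines a distance `spacing` apart."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # distance from the needle's midpoint to the nearest line,
        # and the needle's angle against the lines
        y = rng.uniform(0, spacing / 2)
        theta = rng.uniform(0, pi / 2)
        if (length / 2) * sin(theta) >= y:
            hits += 1
    return hits / trials

p = buffon_crossing_probability(length=1.0, spacing=1.0, trials=200_000)
print(p)   # close to 2/pi, about 0.6366
```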
<p>As <a href="https://math.stackexchange.com/users/252071/lulu">lulu</a> mentioned in a comment, the fact that a uniformly random permutation <span class="math-container">$\pi\colon\{1,2,\dots,n\}\to\{1,2,\dots,n\}$</span> has in expectation <em>one</em> fixed point is a quite surprising statement, with a one-line proof.</p> <p>Let <span class="math-container">$X$</span> be the number of fixed points of such a uniformly random <span class="math-container">$\pi$</span>. Then <span class="math-container">$X=\sum_{k=1}^n \mathbf{1}_{\pi(k)=k}$</span>, and thus <span class="math-container">$$ \mathbb{E}[X] = \mathbb{E}\left[\sum_{k=1}^n \mathbf{1}_{\pi(k)=k}\right] = \sum_{k=1}^n \mathbb{E}[\mathbf{1}_{\pi(k)=k}] = \sum_{k=1}^n \mathbb{P}\{\pi(k)=k\} = \sum_{k=1}^n \frac{1}{n} = 1\,. $$</span></p>
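<p>A quick simulation (a sketch with an arbitrary fixed seed) bears this out: the sample mean hovers around <span class="math-container">$1$</span> regardless of <span class="math-container">$n$</span>:</p>

```python
import random

def average_fixed_points(n, trials, seed=0):
    """Sample mean of the number of fixed points of a uniformly random
    permutation of {0, ..., n-1}."""
    rng = random.Random(seed)
    items = list(range(n))
    total = 0
    for _ in range(trials):
        perm = items[:]
        rng.shuffle(perm)
        total += sum(1 for k in range(n) if perm[k] == k)
    return total / trials

# The expectation is 1 whatever the size n is.
for n in (2, 5, 50):
    assert abs(average_fixed_points(n, trials=50_000) - 1) < 0.05
```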
logic
<p>Asaf's answer <a href="https://math.stackexchange.com/questions/45145/why-are-metric-spaces-non-empty">here</a> reminded me of something that should have been bothering me ever since I learned about it, but which I had more or less forgotten about. In first-order logic, there is a convention to only work with non-empty models of a theory $T$. The reason usually given is that the sentences $(\forall x)(x = x)$ and $(\forall x)(x \neq x)$ both hold in the "empty model" of $T$, so if we want the set of sentences satisfied by a model to be consistent, we need to disallow the empty model.</p> <p>This smells fishy to me. I can't imagine that a sufficiently categorical setup of first-order logic (in terms of functors $C_T \to \text{Set}$ preserving some structure, where $C_T$ is the "free model of $T$" in an appropriate sense) would have this defect, or if it did it would have it for a reason. So something is incomplete about the standard setup of first-order logic, but I don't know what it could be. </p> <p>The above looks like an example of <a href="http://ncatlab.org/nlab/show/too+simple+to+be+simple" rel="noreferrer">too simple to be simple</a>, except that I can't explain it to myself in the same way that I can explain other examples. </p>
<p>Both $(\forall x)(x = x)$ and $(\forall x)(x \not = x)$ do hold in the empty model, and it's perfectly consistent. What we lose when we move to empty models, as Qiaochu Yuan points out, are certain inference rules that we're used to. </p> <p>For first-order languages that include equality, the set $S$ of statements that are true all models (empty or not) is a proper subset of the set $S_N$ of statements that are true in all nonempty models. Because the vast majority of models we are interested in are nonempty, in logic we typically look at sets of inference rules that generate $S_N$ rather than rules that generate $S$. </p> <p>One particular example where this is useful is the algorithm to put a formula into prenex normal form, which is only correct when we limit to nonempty models. For example, the formula $(\forall x)(x \not = x) \land \bot$ is false in every model, but its prenex normal form $(\forall x)(x \not = x \land \bot)$ is true in the empty model. The marginal benefit of considering the empty model doesn't outweigh the loss of the beautiful algorithm for prenex normal form that works for every other model. In the rare cases when we do need to consider empty models, we realize we have to work with alternative inference rules; it just isn't usually worth the trouble.</p> <p>From a different point of view, only considering nonempty models is analogous to only considering Hausdorff manifolds. But with the empty model there is only one object being ignored, which we can always treat as a special case if we need to think about it. </p>
<p>Isn't this a non-issue? </p> <p>Many of the most common set-ups for the logical axioms were developed long ago, in a time when mathematicians (not just logicians) thought that they wanted to care only about non-empty structures, and so they made sure that $\exists x\, x=x$ was derivable in their logical system. They had to do this in order to have the completeness theorem, that every statement true in every intended model was derivable. And so those systems continue to have that property today.</p> <p>Meanwhile, many mathematicians developed a fancy to consider the empty structure seriously. So logicians developed logical systems that handle this, in which $\exists x\, x=x$ is not derivable. For example, this is always how I teach first order logic, and it is no problem at all. But as you point out in your answer, one does need to use a different logical set-up. </p> <p>So if you care about it, then be sure to use the right logical axioms, since definitely you will not want to give up on the completeness theorem.</p>
linear-algebra
<p>If the column vectors of a matrix $A$ are all orthogonal and $A$ is a square matrix, can I say that the row vectors of matrix $A$ are also orthogonal to each other?</p> <p>From the equation $Q \cdot Q^{T}=I$ if $Q$ is orthogonal and square matrix, it seems that this is true but I still find it hard to believe. I have a feeling that I may still be wrong because those column vectors that are perpendicular are vectors within the column space. Taking the rows vectors give a totally different direction from the column vectors in the row space and so how could they always happen to be perpendicular?</p> <p>Thanks for any help.</p>
<p>Recall that two vectors are orthogonal if and only if their inner product is zero. You are incorrect in asserting that if the columns of $Q$ are orthogonal to each other then $QQ^T = I$; this follows if the columns of $Q$ form an <em>orthonormal</em> set (basis for $\mathbb{R}^n$); orthogonality is not sufficient. Note that "$Q$ is an orthogonal matrix" is <strong>not</strong> equivalent to "the columns of $Q$ are pairwise orthogonal".</p> <p>With that clarification, the answer is that if you only ask that the columns be pairwise orthogonal, then the rows need not be pairwise orthogonal. For example, take $$A = \left(\begin{array}{ccc}1&amp; 0 &amp; 0\\0&amp; 0 &amp; 1\\1 &amp; 0 &amp; 0\end{array}\right).$$ The columns are orthogonal to each other: the middle column is orthogonal to everything (being the zero vector), and the first and third columns are orthogonal. However, the rows are not orthogonal, since the first and third rows are equal and nonzero.</p> <p>On the other hand, if you require that the columns of $Q$ be an <strong>orthonormal</strong> set (pairwise orthogonal, and the inner product of each column with itself equals $1$), then it <em>does</em> follow: precisely as you argue. That condition <em>is</em> equivalent to "the matrix is orthogonal", and since $I = Q^TQ = QQ^T$ and $(Q^T)^T = Q$, it follows that if $Q$ is orthogonal then so is $Q^T$, hence the columns of $Q^T$ (i.e., the <em>rows</em> of $Q$) form an orthonormal set as well. </p>
<p>Even if $A$ is non-singular, orthogonality of columns by itself does not guarantee orthogonality of rows. Here is a 3x3 example: $$ A = \left( \begin{matrix} 1 &amp; \;2 &amp; \;\;5 \\ 2 &amp; \;2 &amp; -4 \\ 3 &amp; -2 &amp; \;\;1 \end{matrix} \right) $$ Column vectors are orthogonal, but row vectors are not orthogonal.</p> <p>On the other hand, orthonormality of columns guarantees orthonormality of rows, and vice versa.</p> <p>As a footnote, one of the forms of Hadamard's inequality concerns the absolute value of the determinant of a matrix given the norms of the column vectors. That absolute value will be maximum when those vectors are orthogonal. The determinant, in absolute value, will be equal to the product of the norms. In the case of the above matrix, as the columns are orthogonal, 84 is the maximum possible absolute value of the determinant $-$ <em>det(A)</em> is -84 $-$ for column vectors with the given norms ($\sqrt {14}, 2\sqrt 3$ and $ \sqrt {42}$ respectively).</p> <p>Although $det(A)=det(A^T)$, Hadamard's inequality does not imply neither orthogonality of the rows of <em>A</em> nor that the absolute value of the determinant is maximum for the given norms of the row vectors ($ \sqrt{30}, 2\sqrt 6$ and $ \sqrt{14}$ respectively; their product is $ 12 \sqrt{70} \cong 100.4 $).</p>
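<p>Both claims (pairwise-orthogonal columns with non-orthogonal rows, and equality in Hadamard's inequality) can be checked mechanically for the matrix above. A small Python sketch:</p>

```python
from math import sqrt, isclose

A = [[1,  2,  5],
     [2,  2, -4],
     [3, -2,  1]]

def col(M, j): return [row[j] for row in M]
def dot(u, v): return sum(x * y for x, y in zip(u, v))

# The columns are pairwise orthogonal...
assert dot(col(A, 0), col(A, 1)) == 0
assert dot(col(A, 0), col(A, 2)) == 0
assert dot(col(A, 1), col(A, 2)) == 0
# ...but the rows are not:
assert dot(A[0], A[1]) != 0

def det3(M):
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# Hadamard's inequality holds with equality here: |det A| equals the
# product of the column norms, because the columns are orthogonal.
norms = [sqrt(dot(col(A, j), col(A, j))) for j in range(3)]
assert det3(A) == -84
assert isclose(norms[0] * norms[1] * norms[2], 84)
```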
linear-algebra
<blockquote> <p>Let <span class="math-container">$ \sigma(A)$</span> be the set of all eigenvalues of <span class="math-container">$A$</span>. Show that <span class="math-container">$ \sigma(A) = \sigma\left(A^T\right)$</span> where <span class="math-container">$A^T$</span> is the transpose matrix of <span class="math-container">$A$</span>.</p> </blockquote>
<p>The matrix <span class="math-container">$(A - \lambda I)^{T}$</span> is the same as the matrix <span class="math-container">$\left(A^{T} - \lambda I\right)$</span>, since the identity matrix is symmetric.</p> <p>Thus:</p> <p><span class="math-container">$$\det\left(A^{T} - \lambda I\right) = \det\left((A - \lambda I)^{T}\right) = \det (A - \lambda I)$$</span></p> <p>From this it is obvious that the eigenvalues are the same for both <span class="math-container">$A$</span> and <span class="math-container">$A^{T}$</span>.</p>
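<p>Since two polynomials of degree <span class="math-container">$n$</span> that agree at more than <span class="math-container">$n$</span> points are identical, the identity <span class="math-container">$\det\left(A^{T} - \lambda I\right) = \det(A - \lambda I)$</span> can be spot-checked numerically. A sketch for an arbitrarily chosen <span class="math-container">$3\times 3$</span> integer matrix:</p>

```python
def transpose(M):
    return [list(row) for row in zip(*M)]

def det3(M):
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def char_poly_at(M, lam):
    """det(M - lam*I) for a 3x3 matrix M."""
    shifted = [[M[r][c] - (lam if r == c else 0) for c in range(3)]
               for r in range(3)]
    return det3(shifted)

A = [[2, 1, 0],
     [0, 3, 4],
     [5, 0, 1]]

# det(A - lam*I) and det(A^T - lam*I) agree at 11 points, so they are
# equal as degree-3 polynomials, and A and A^T share their eigenvalues.
for lam in range(-5, 6):
    assert char_poly_at(A, lam) == char_poly_at(transpose(A), lam)
```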
<p>I'm going to work a little bit more generally.</p> <p>Let $V$ be a finite dimensional vector space over some field $K$, and let $\langle\cdot,\cdot\rangle$ be a <em>nondegenerate</em> bilinear form on $V$.</p> <p>We then have for every linear endomorphism $A$ of $V$, that there is a unique endomorphism $A^*$ of $V$ such that $$\langle Ax,y\rangle=\langle x,A^*y\rangle$$ for all $x$ and $y\in V$.</p> <p>The existence and uniqueness of such an $A^*$ requires some explanation, but I will take it for granted.</p> <blockquote> <p><strong>Proposition:</strong> Given an endomorphism $A$ of a finite dimensional vector space $V$ equipped with a nondegenerate bilinear form $\langle\cdot,\cdot\rangle$, the endomorphisms $A$ and $A^*$ have the same set of eigenvalues.</p> </blockquote> <p><em>Proof:</em> Let $\lambda$ be an eigenvalue of $A$. And let $v$ be an eigenvector of $A$ corresponding to $\lambda$ (in particular, $v$ is nonzero). Let $w$ be another arbitrary vector. We then have that: $$\langle v,\lambda w\rangle=\langle\lambda v,w\rangle=\langle Av,w\rangle=\langle v,A^*w\rangle$$ This implies that $\langle v,\lambda w-A^*w\rangle =0$ for all $w\in V$. Now either $\lambda$ is an eigenvalue of $A^*$ or not. If it isn't, the operator $\lambda I -A^*$ is an automorphism of $V$ since $\lambda I-A^*$ being singular is equivalent to $\lambda$ being an eigenvalue of $A^*$. In particular, this means that $\langle v, z\rangle = 0$ for all $z\in V$. But since $\langle\cdot,\cdot\rangle$ is nondegenerate, this implies that $v=0$. A contradiction. $\lambda$ must have been an eigenvalue of $A^*$ to begin with. Thus every eigenvalue of $A$ is an eigenvalue of $A^*$. The other inclusion can be derived similarly.</p> <p>How can we use this in your case? I believe you're working over a real vector space and considering the dot product as your bilinear form. Now consider an endomorphism $T$ of $\Bbb R^n$ which is given by $T(x)=Ax$ for some $n\times n$ matrix $A$. 
It just so happens that for all $y\in\Bbb R^n$ we have $T^*(y)=A^t y$. Since $T$ and $T^*$ have the same eigenvalues, so do $A$ and $A^t$.</p>
matrices
<p>For a lower triangular matrix, the inverse of itself should be easy to find because that's the idea of the LU decomposition, am I right? For many of the lower or upper triangular matrices, often I could just flip the signs to get its inverse. For eg: $$\begin{bmatrix} 1 &amp; 0 &amp; 0\\ 0 &amp; 1 &amp; 0\\ -1.5 &amp; 0 &amp; 1 \end{bmatrix}^{-1}= \begin{bmatrix} 1 &amp; 0 &amp; 0\\ 0 &amp; 1 &amp; 0\\ 1.5 &amp; 0 &amp; 1 \end{bmatrix}$$ I just flipped from -1.5 to 1.5 and I got the inverse.</p> <p>But this apparently doesn't work all the time. Say in this matrix: $$\begin{bmatrix} 1 &amp; 0 &amp; 0\\ -2 &amp; 1 &amp; 0\\ 3.5 &amp; -2.5 &amp; 1 \end{bmatrix}^{-1}\neq \begin{bmatrix} 1 &amp; 0 &amp; 0\\ 2 &amp; 1 &amp; 0\\ -3.5 &amp; 2.5 &amp; 1 \end{bmatrix}$$ By flipping the signs, the inverse is wrong. But if I go through the whole tedious step of gauss-jordan elimination, I would get its correct inverse like this: $\begin{bmatrix} 1 &amp; 0 &amp; 0\\ -2 &amp; 1 &amp; 0\\ 3.5 &amp; -2.5 &amp; 1 \end{bmatrix}^{-1}= \begin{bmatrix} 1 &amp; 0 &amp; 0\\ 2 &amp; 1 &amp; 0\\ 1.5 &amp; 2.5 &amp; 1 \end{bmatrix}$ And it looks like some entries could just flip its signs but not for others.</p> <p>Then this is kind of weird because I thought the whole idea of getting the lower and upper triangular matrices is to avoid the need to go through the tedious process of gauss-jordan elimination and can get the inverse quickly by flipping signs? Maybe I have missed something out here. How should I get an inverse of a lower or an upper matrix quickly?</p>
<p>Ziyuang's answer handles the cases, where <span class="math-container">$N^2=0$</span>, but it can be generalized as follows. A triangular <span class="math-container">$n\times n$</span> matrix <span class="math-container">$T$</span> with 1s on the diagonal can be written in the form <span class="math-container">$T=I+N$</span>. Here <span class="math-container">$N$</span> is the strictly triangular part (with zeros on the diagonal), and it always satisfies the relation <span class="math-container">$N^{n}=0$</span>. Therefore we can use the polynomial factorization <span class="math-container">$1-x^n=(1-x)(1+x+x^2+\cdots +x^{n-1})$</span> with <span class="math-container">$x=-N$</span> to get the matrix relation <span class="math-container">$$ (I+N)(I-N+N^2-N^3+\cdot+(-1)^{n-1}N^{n-1})=I + (-1)^{n-1}N^n=I $$</span> telling us that <span class="math-container">$(I+N)^{-1}=I+\sum_{k=1}^{n-1}(-1)^kN^k$</span>.</p> <p>Yet another way of looking at this is to notice that it also is an instance of a geometric series <span class="math-container">$1+q+q^2+q^3+\cdots =1/(1-q)$</span> with <span class="math-container">$q=-N$</span>. The series converges for the unusual reason that powers of <span class="math-container">$q$</span> are all zero from some point on. The same formula can be used to good effect elsewhere in algebra, too. For example, in a residue class ring like <span class="math-container">$\mathbf{Z}/2^n\mathbf{Z}$</span> all the even numbers are nilpotent, so computing the modular inverse of an odd number can be done with this formula. </p>
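<p>Here is a short Python sketch of this truncated geometric series, applied to the <span class="math-container">$3\times 3$</span> example from the question (for a unit lower-triangular <span class="math-container">$3\times 3$</span> matrix the series already stops at <span class="math-container">$N^2$</span>):</p>

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def unit_lower_inverse(T):
    """Invert a lower-triangular matrix with 1s on the diagonal via
    (I + N)^-1 = I - N + N^2 - ..., where N is the strictly lower part."""
    n = len(T)
    N = [[T[i][j] if i > j else 0.0 for j in range(n)] for i in range(n)]
    result = identity(n)
    term = identity(n)
    for k in range(1, n):          # N^n = 0, so the series stops here
        term = matmul(term, N)
        sign = -1 if k % 2 else 1
        result = [[result[i][j] + sign * term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

T = [[1.0,  0.0, 0.0],
     [-2.0, 1.0, 0.0],
     [3.5, -2.5, 1.0]]
Tinv = unit_lower_inverse(T)
# matches the inverse computed in the question:
expected = [[1.0, 0.0, 0.0],
            [2.0, 1.0, 0.0],
            [1.5, 2.5, 1.0]]
assert all(abs(Tinv[i][j] - expected[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```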
<p>In the case of a lower triangular matrix with arbitrary non-zero diagonal entries, you just need to factor it as $T = D(I+N)$, where $D$ is a diagonal matrix and $N$ is again a strictly lower triangular matrix; then $T^{-1}=(I+N)^{-1}D^{-1}$. Everything said about the inverse in the previous answer still applies.</p>
probability
<p>I recently <a href="https://twitter.com/willcole/status/999276319061041153" rel="noreferrer">posted a tweet</a> claiming I had encountered a real life <a href="https://en.wikipedia.org/wiki/Monty_Hall_problem" rel="noreferrer">Monty Hall dilemma</a>. Based on the resulting discussion, I'm not sure I have. <hr></p> <p><strong>The Scenario</strong> </p> <ul> <li><p>I have 3 tacos (A,B,C) where tacos A and C are filled with beans, and taco B is filled with steak.</p></li> <li><p>I have no foreknowledge of the filling of any tacos.</p></li> <li><p>My wife <em>only</em> knows that taco C is filled with beans.</p></li> <li><p>My wife and I both know that I want steak.</p></li> <li><p>After I pick taco A, my wife informs me taco C is filled with beans. </p></li> <li><p>I switch my pick from taco A to taco B, thinking the logic behind the Monty Hall problem is relevant to my choice. <hr></p></li> </ul> <p><strong>Edit for clarity</strong></p> <ul> <li><p>Timing: The contents of taco C were not revealed to me until after I had made my selection of taco A.</p></li> <li><p>My knowledge of what my wife knew: When she told me the contents of taco C, I knew that she had previously opened taco C. I also knew that she had no other knowledge of the contents of the other tacos.</p></li> </ul> <p><strong>Questions</strong></p> <ol> <li>Even though my wife does not know the fillings of all the tacos, does her revealing that taco C is definitively not the taco I want after I've made my initial selection satisfy the logic for me switching (from A to B) if I thought it would give me a 66.6% chance of getting the steak taco?</li> <li>If this is not a Monty Hall situation, is there any benefit in me switching?</li> </ol>
<p>No, this is not a Monty Hall problem. If your wife only knew the contents of #3, and was going to reveal it regardless, then the odds were always 50/50/0. The information never changed. It was just delayed until after your original choice. Essentially, you NEVER had the chance to pick #3, as she would have immediately told you it was wrong. (In this case, she is on your team, and essentially part of the player). #3 would be eliminated regardless: "No, not that one!"</p> <p>Imagine you had picked #3. Monty Hall never said, "You picked a goat. Want to switch?" </p> <p>If he did, the odds would immediately become 50/50, which is what we have here.</p> <p>Monty always reveals the worst half of the 2/3 you didn't select, leaving the player at 33/67/0.</p>
<p><strong>tl;dr: Switching in this case has no effect, unlike the Monty Hall problem, where switching doubles your odds.</strong></p> <p>The reason this is different is that Will's wife knew the content of one, and only one, taco, and that it was a bean one, and Will <em>knows</em> that the one taco his wife knew about was a bean one. (Monty is different because he knew <em>all</em> doors.)</p> <p>Here's why:</p> <p>Unlike the MH problem, which has one car and two goats that can be treated as identical, the Will's Wife Problem has one steak, one <em>known</em> bean, and one <em>unknown</em> bean, so the beans need to be considered differently.</p> <p>That's important because MH's reveal gave players a strong incentive to switch, but gave them NO new information by revealing a goat in an unpicked door: whatever the player picked, Monty could show a goat, so no new info is provided by the reveal. But that's not the case for Will's wife:</p> <p>Since she only knows the content of the <em>known</em> bean taco, her revealing that it's not one of the ones you picked actually changes what you know about your odds. Because she would have behaved differently if you'd picked the known bean one: she'd have said, "the one you picked is bean". </p> <p><em>Without</em> her info, you only had a 1/3 chance of having the steak, but by showing you that you <em>didn't</em> pick the only bean one she knew about, she tells you that you <em>already</em> have a 50% chance of having the steak. </p> <p>And since you <em>also</em> have a 50% chance of having the <em>unknown</em> bean taco, it's irrelevant if you switch or not.</p> <p>In the MH problem, the key fact is this: Since the reveal in no way changes what you know about your odds, there's only a 1/3 chance that you START with the car. 
And since you:</p> <ul> <li>Win by staying in all cases where you started with the car, and</li> <li>Win by switching in all cases where you <em>didn't</em> start with the car...</li> </ul> <p>In the MH problem, switching doubles your odds (from 1/3 to 2/3), but in this case, switching has no impact (since it's 50% either way).</p>
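<p>A simulation sketch (modelling the player's prior as a uniformly placed steak, and the wife as knowing only taco C) shows the two situations side by side: switching wins about 2/3 of the time against Monty, but only 1/2 of the time in the taco scenario:</p>

```python
import random

def monty_switch_wins(trials, rng):
    """Classic Monty Hall: the host knowingly opens a goat door you
    didn't pick, and you always switch."""
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        opened = next(d for d in range(3) if d != pick and d != car)
        switched = next(d for d in range(3) if d != pick and d != opened)
        wins += (switched == car)
    return wins / trials

def taco_switch_wins(trials, rng):
    """The wife knows only that taco 2 is beans; you picked taco 0 and she
    then reveals taco 2.  Keep only the runs consistent with that story."""
    wins = plays = 0
    for _ in range(trials):
        steak = rng.randrange(3)
        if steak == 2:          # inconsistent with what the wife knows
            continue
        plays += 1
        wins += (steak == 1)    # switching from 0 to 1 wins iff steak is there
    return wins / plays

rng = random.Random(0)
p_monty = monty_switch_wins(100_000, rng)
p_taco = taco_switch_wins(100_000, rng)
print(p_monty)   # close to 2/3
print(p_taco)    # close to 1/2
```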
logic
<p>Second order logic is a language, but, is it a logic?</p> <p>My understanding is that a logic (or "logical system") is an ordered pair; it is a formal system together with a semantics. However, the language of second-order logic is associated with a variety of inequivalent formal systems and a variety of semantics. So, is it a logic?</p> <p>I think not. Supposing not, what do sentences like the following (<a href="https://en.wikipedia.org/wiki/G%C3%B6del%27s_completeness_theorem#Completeness_in_other_logics">copied</a> from wikipedia) even mean?</p> <blockquote> <p>Second-order logic, for example, does not have a completeness theorem for its standard semantics (but does have the completeness property for Henkin semantics).</p> </blockquote>
<p>This is a very interesting philosophical question! There are philosophers on both sides of the issue. A prominent figure who spoke against SOL as logic was W.V.O. Quine. His argument mostly focuses on the point that SOL, which quantifies over sets explicitly, seems to just be "set theory in disguise", and hence <em>not</em> logic proper (see his <em>Philosophy of Logic</em> for more details). People have also put forth the argument you presented, viz. any putative logic must have a sound and complete deduction system in order to be a genuine logic, and since SOL doesn't have one, it follows that it can't be a logic. </p> <p>On the other side, a prominent defender of SOL as logic proper is George Boolos. See his "On Second-Order Logic" and his "To Be is to Be the Value of a Variable (Or to Be Some Values of Some Variables)". One argument put forward in favor of SOL from a practical standpoint is that it has a way of really shortening proofs (see "A Curious Inference"). Boolos also suggests that we could use monadic SOL to model talk of plurals, i.e. to model sentences like "Some critics only admire one another". Another great defender of SOL is Stewart Shapiro, whose book <em>Foundations Without Foundationalism</em> not only presents a wonderful technical treatment of SOL but also some good philosophical defenses of it. In particular, Shapiro argues that SOL is needed for everyday mathematical discourse. I would <em>highly</em> recommend this book as an introduction to the subject, both mathematically and philosophically.</p> <p>The status of completeness theorems in this debate is fascinating, but difficult to pin down. After all, many philosophers might just say "Look, so what if SOL is incomplete? That just means that the set of <em>validities</em> doesn't coincide with the set of <em>provable sentences</em>, i.e. there are some sentences which are logically true, but not provable. And while this is unfortunate, why shouldn't we expect this? 
Why shouldn't we just accept that logic didn't turn out the way we hoped?" On the other hand, many philosophers might say "Look, logic is about <em>making inferences</em>. If there are truths which don't come from some deduction or inference, then how could they be <em>truths of logic</em>?" </p> <p>I personally don't find the view that genuine logics must have complete proof systems convincing (or at least convincing enough), but I certainly do feel the pull to search for/work with logics with complete proof systems. Ultimately, it comes down to what you think "logic" and "logical consequence" are, which are highly contentious matters in the philosophical literature, and which have no widely accepted answers. </p>
<p>Second-order logic has a collection of deduction rules. From a collection of statements, we can apply the deduction rules to derive new statements. This is called "syntactic entailment", or more simply, "formal proof".</p> <p>Standard semantics talks about set-theoretic interpretations with the property that power types are interpreted as power sets. e.g. if $X$ is the interpretation of the domain of objects, then $\mathcal{P}(X)$ (or equivalently, $\{T,F\}^X$) is the interpretation of the domain of unary predicates on objects.</p> <p>Given a collection of statements, we can consider set-theoretic models of those statements; interpretations where those statements are true. Using <em>the laws of ZFC</em>, one can show that the interpretations of statements are also true in every model. This is called "semantic entailment".</p> <p>The incompleteness theorem is that syntactic entailment and semantic entailment (with standard semantics) do not coincide for second-order logic. (They do coincide for first-order logic!)</p> <hr> <p>Incidentally, <em>first-order logic</em> also has variations. e.g. on the syntactic side, one can generalize to <a href="http://en.wikipedia.org/wiki/Free_logic">"free logic"</a> (which, in some settings is what people mean when they say "first-order logic"), and people often consider various restricted fragments of it such as "regular logic" or "algebraic theories". On the semantic side, we can consider interpretations in terms of sheaves or objects in categories rather than in sets.</p>
probability
<blockquote> <p>Consider the following experiment. I roll a die repeatedly until the die returns 6, and I let the random variable $X$ count the number of times 3 appeared along the way. What is $E[X]$?</p> </blockquote> <p><strong>Thoughts:</strong> I expect to roll the die 6 times before 6 appears (this part is geometric), and on the preceding 5 rolls each roll has a $1/5$ chance of returning a 3. Treating this as binomial, I therefore expect to count 3 once, so $E[X]=1$.</p> <p><strong>Problem:</strong> I don't know how to model this problem mathematically. Hints would be appreciated.</p>
<p>We can restrict ourselves to dice throws with outcomes $3$ and $6$. Among these throws, both outcomes are equally likely. This means that the index $Y$ of the first $6$ is geometrically distributed with parameter $\frac12$, hence $\mathbb{E}(Y)=2$. The number of $3$s occurring before the first $6$ equals $Y-1$ and has expected value $1$.</p>
<p>There are infinitely many ways to solve this problem; here is another solution I like. </p> <p>Let $A = \{\text{first roll is }6\}$, $B = \{\text{first roll is }3\}$, $C = \{\text{first roll is neither }3\text{ nor }6\}$. Then $$ E[X] = E[X|A]P(A) + E[X|B] P(B) + E[X|C] P(C) = 0 + (E[X] + 1) \frac16 + E[X]\frac46, $$ whence $E[X] = 1$. </p>
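<p>Both answers are easy to sanity-check with a quick Monte Carlo simulation (the seed and number of trials below are arbitrary choices):</p>

```python
import random

def count_threes_before_six(rng):
    """One run of the experiment: roll a fair die until a 6 appears,
    returning how many 3s showed up along the way."""
    threes = 0
    while True:
        roll = rng.randint(1, 6)
        if roll == 6:
            return threes
        if roll == 3:
            threes += 1

rng = random.Random(12345)   # fixed seed for reproducibility
trials = 200_000
estimate = sum(count_threes_before_six(rng) for _ in range(trials)) / trials
print(estimate)              # hovers around E[X] = 1
```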
matrices
<p>I have matrices in my syllabus, but I don't know where they find their use. I even asked my teacher but she also has no answer. Can anyone please tell me where they are used? And please also give me an example of how they are used?</p>
<p>I work in the field of applied math, so I will give you the point of view of an applied mathematician.</p> <p>I do numerical PDEs. Basically, I take a differential equation (an equation whose solution is not a number, but a function, and that involves the function and its derivatives) and, instead of finding an analytical solution, I try to find an approximation of the value of the solution at some points (think of a grid of points). It goes a bit deeper than this, but that's not the point here. The point is that eventually I find myself having to solve a linear system of equations which usually is of huge size (order of millions). It is a pretty huge number of equations to solve, I would say.</p> <p>Where do matrices come into play? Well, as you know (or maybe not, I don't know) a linear system can be seen in matrix-vector form as</p> <p>$$\text{A}\underline{x}=\underline{b}$$ where $\underline{x}$ contains the unknowns, A the coefficients of the equations and $\underline{b}$ contains the values of the right hand sides of the equations. For instance for the system</p> <p>$$\begin{cases}2x_1+x_2=3\\4x_1-x_2=1\end{cases}$$ we have</p> <p>$$\text{A}=\left[ \begin{array}{cc} 2 &amp; 1\\ 4 &amp; -1 \end{array} \right],\qquad \underline{x}= \left[\begin{array}{c} x_1\\ x_2 \end{array} \right]\qquad \underline{b}= \left[\begin{array}{c} 3\\ 1 \end{array} \right]$$</p> <p>From what I've said so far, in this context matrices look just like a fancy and compact way to write down a system of equations, mere tables of numbers.</p> <p>However, solving this system fast takes more than a computer with a lot of RAM and/or a high clock rate (CPU). Of course, the more powerful the computer is, the faster you will get the solution. 
But sometimes, faster might still mean <strong>days</strong> (or more) if you tackle the problem in the wrong way, even if you are on a Blue Gene.</p> <p>So, to reduce the computational costs, you have to come up with a good algorithm, a smart idea. But in order to do so, you need to exploit some property or some structure of your linear system. These properties are encoded somehow in the coefficients of the matrix A. Therefore, studying matrices and their properties is of crucial importance in trying to improve the efficiency of linear solvers. Recognizing that the matrix enjoys a particular property might be crucial to develop a fast algorithm or even to prove that a solution exists, or that the solution has some nice property.</p> <p>For instance, consider the linear system</p> <p>$$\left[\begin{array}{cccc} 2 &amp; -1 &amp; 0 &amp; 0\\ -1 &amp; 2 &amp; -1 &amp; 0\\ 0 &amp; -1 &amp; 2 &amp; -1\\ 0 &amp; 0 &amp; -1 &amp; 2 \end{array} \right] \left[ \begin{array}{c} x_1\\ x_2\\ x_3\\ x_4 \end{array} \right]= \left[ \begin{array}{c} 1\\ 1\\ 1\\ 1 \end{array} \right]$$ which corresponds (in equation form) to</p> <p>$$\begin{cases} 2x_1-x_2=1\\ -x_1+2x_2-x_3=1\\ -x_2+2x_3-x_4=1\\ -x_3+2x_4=1 \end{cases}$$</p> <p>Just by taking a quick look at the matrix, I can claim that this system has a solution and, moreover, the solution is non-negative (meaning that all the components of the solution are non-negative). I'm pretty sure you wouldn't be able to draw this conclusion just looking at the system without trying to solve it. I can also claim that to solve this system you need only 25 operations (one operation being a single addition/subtraction/division/multiplication). 
If you construct a larger system with the same pattern (2 on the diagonal, -1 on the upper and lower diagonal) and put a right hand side with only positive entries, I can still claim that the solution exists, it's positive, and the number of operations needed to solve it is only $8n-7$, where $n$ is the size of the system.</p> <p>Moreover, people already pointed out other fields where matrices are important building blocks and play an important role. I hope this thread gave you an idea of why it is worth it to study matrices. =)</p>
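<p>The $O(n)$ claim can be made concrete with the classic tridiagonal ("Thomas") algorithm, which is presumably the kind of method the operation count above refers to (the exact count depends on bookkeeping, but the linear scaling is the point). A minimal sketch:</p>

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system A x = rhs in O(n) operations:
    sub  = sub-diagonal   (length n-1)
    diag = main diagonal  (length n)
    sup  = super-diagonal (length n-1)
    Forward elimination followed by back substitution."""
    n = len(diag)
    cp = [0.0] * n   # modified super-diagonal
    dp = [0.0] * n   # modified right-hand side
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - sub[i - 1] * cp[i - 1]
        cp[i] = sup[i] / denom if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i - 1] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# the 4x4 system from the answer: exact solution is (2, 3, 3, 2), all positive
x = thomas([-1, -1, -1], [2, 2, 2, 2], [-1, -1, -1], [1, 1, 1, 1])
print(x)
```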
<p>Matrices are a useful way to represent, manipulate and study linear maps between finite dimensional vector spaces (if you have chosen basis). <br /> Matrices can also represent quadratic forms (it's useful, for example, in analysis to study hessian matrices, which help us to study the behavior of critical points).<br /></p> <p>So, it's a useful tool of linear algebra.</p> <p>Moreover, linear algebra is a crucial tool in math.<br /> To convince yourself, there are a lot of linear problems you can study with little knowledge in math. For examples, system of linear equations, some error-correcting codes (linear codes), linear differential equations, linear recurrence sequences...<br /> I also think that linear algebra is a natural framework of quantum mechanics.</p>
probability
<p>So I know one can go from a joint density function $f(x,y)$ to marginal density functions, like $f_x(x)$ by integrating against the other variables as in $f_x(x) = \int f(x,y) dy$...but given $f_x(x)$ and $f_y(y)$ as densities for dependent random vars..how would one go about finding a joint density or distribution function?</p> <p>Thanks</p>
<p>For example, suppose the marginal densities for $X$ and $Y$ are both 1 on the interval $[0,1]$, 0 otherwise. One family of possibilities for the joint density is $f(x,y) = 1 + g(x) h(y)$ for $0 &lt; x &lt; 1$, $0 &lt; y &lt; 1$, 0 otherwise, for functions $g$ and $h$ such that $\int_0^1 g(x)\, dx = \int_0^1 h(y)\, dy = 0$, $-1 \le g(x) \le 1$ and $-1 \le h(y) \le 1$. And there are infinitely many other possibilities.</p>
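<p>For a concrete instance of this family, take $g(x)=2x-1$ and $h(y)=2y-1$ (one admissible choice satisfying the constraints above); a midpoint-rule integration confirms that the $x$-marginal is identically $1$:</p>

```python
def f(x, y):
    # joint density 1 + g(x) h(y) with g(x) = 2x - 1, h(y) = 2y - 1
    return 1 + (2 * x - 1) * (2 * y - 1)

def marginal_x(x, n=20_000):
    # midpoint rule for the integral of f(x, y) dy over [0, 1]
    h = 1.0 / n
    return sum(f(x, (k + 0.5) * h) * h for k in range(n))

print(marginal_x(0.3), marginal_x(0.9))   # both equal 1 up to rounding
```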
<p>There will be many different distributions with the same marginal distributions, so one needs to select a specific way to aggregate the marginal distributions into joint distributions. Assuming they are independent is essentially making one of these possible choices. The most common way to make the choice, is by working with a <a href="http://en.wikipedia.org/wiki/Copula_%28probability_theory%29">copula</a>. </p>
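<p>To make it concrete that the marginals underdetermine the joint distribution, here is a small simulation (a sketch; the sample size and seed are arbitrary) contrasting two extreme couplings of the same uniform marginals: the independence copula and the comonotone copula $M(u,v)=\min(u,v)$.</p>

```python
import random

random.seed(0)
n = 100_000
# two joint laws with identical U(0, 1) marginals:
indep = [(random.random(), random.random()) for _ in range(n)]  # independence copula
comono = [(u, u) for (u, _) in indep]                           # comonotone copula min(u, v)

mean_x = sum(x for x, _ in comono) / n                # X-marginal is unchanged
cov_indep = sum(x * y for x, y in indep) / n - 0.25   # ~0
cov_comono = sum(x * y for x, y in comono) / n - 0.25 # ~Var(U) = 1/12
print(mean_x, cov_indep, cov_comono)
```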
differentiation
<p>To my understanding, the Taylor series is a type of power series that provides an approximation of a function at some particular point $x=a$. But under what circumstances is this approximation perfect, and under what circumstances is it "off" even at infinity? </p> <p>I realize this is a little hazy, so I'll rephrase: By "perfect" I refer to how a regular limit doesn't ever actually reach something but instead provides a sort of error term that you can make as small as you want, so for all practical purposes we treat it as zero error. Whereas for something that is an imperfect approximation maybe that arbitrarily small error piece doesn't exist, or maybe the function is only correct for that particular point and nowhere else, etc.</p> <p>So maybe what I am asking is when the Taylor series provides an equivalent representation of the function over all $x$ in $f$'s domain, and when does it not? And when it doesn't, how do we even know?</p>
<h2>Limits are exact</h2> <p>You have a misunderstanding about limits! A limit, when it exists, is just a value. An <em>exact</em> value.</p> <p>It doesn't make sense to talk about the limit reaching some value, or there being some error. $\lim_{x \to 1} x^2$ is just a number, and that number is <em>exactly</em> one.</p> <p>What you are describing &mdash; these ideas about "reaching" a value with some "error" &mdash; are descriptions of the behavior of the expression $x^2$ as $x \to 1$. Among the features of this behavior is that $x^2$ is "reaching" one.</p> <p>By its very definition, the limit is the <em>exact</em> value that its expression is "reaching". $x^2$ may be "approximately" one, but $\lim_{x \to 1} x^2$ is exactly one.</p> <h2>Taylor polynomials</h2> <p>In this light, nearly everything you've said in your post is not about Taylor <em>series</em>, but instead about Taylor <em>polynomials</em>. When a Taylor series exists, the Taylor <em>polynomial</em> is given simply by truncating the series to finitely many terms. (Taylor polynomials can exist in situations where Taylor series don't)</p> <p>In general, the definition of the $n$-th order Taylor polynomial around a point $a$, for an $n$-times differentiable function, is the sum</p> <p>$$ \sum_{k=0}^n f^{(k)}(a) \frac{(x-a)^k}{k!} $$</p> <p>Taylor polynomials, generally, are not exactly equal to the original function. The only time that happens is when the original function is a polynomial of degree less than or equal to $n$.</p> <p>The sequence of Taylor polynomials, as $n \to \infty$, may converge to something. The Taylor <em>series</em> is <em>exactly</em> the value that the Taylor polynomials converge to.</p> <p>The error in the approximation of a function by a Taylor polynomial is something people study. One often speaks of the "remainder term" or the "Taylor remainder", which is precisely the error term. 
There are a number of theorems that put constraints on how big the error term can be.</p> <h2>Taylor series can have errors!</h2> <p>Despite all of the above, one of the big surprises of real analysis is that a function might not be equal to its Taylor series! There is a notorious example:</p> <p>$$ f(x) = \begin{cases} 0 &amp; x = 0 \\ \exp(-1/x^2) &amp; x \neq 0 \end{cases} $$</p> <p>you can prove that $f$ is infinitely differentiable everywhere. However, all of its derivatives have the property that $f^{(k)}(0) = 0$, so its Taylor series around zero is simply the zero function.</p> <p>However, we define</p> <blockquote> <p>A function $f$ is <strong>analytic</strong> at a point $a$ if there is an interval around $a$ on which $f$ is (exactly) equal to its Taylor series.</p> </blockquote> <p>"Most" functions mathematicians actually work with are analytic functions (e.g. all of the trigonometric functions are analytic on their domain), or analytic except for obvious exceptions (e.g. $|x|$ is not analytic at zero, but it is analytic everywhere else).</p>
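<p>A short numeric illustration of both halves of the story: for an analytic function like $\exp$, the Taylor polynomials converge to the function's value, while for the function above the Taylor series at zero is identically zero and therefore misses the function at every $x \neq 0$.</p>

```python
import math

def taylor_exp(x, n):
    """n-th order Taylor polynomial of exp at 0, evaluated at x."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

# exp is analytic: its Taylor polynomials converge to it
print(abs(taylor_exp(1.0, 30) - math.e))   # essentially machine precision

# the classic non-analytic function: every derivative at 0 vanishes,
# so its Taylor series at 0 is the zero function -- yet f(0.5) != 0
f = lambda x: math.exp(-1.0 / x ** 2) if x != 0 else 0.0
taylor_series_value = 0.0
print(f(0.5) - taylor_series_value)        # about 0.0183, a genuine error
```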
<p>This is the fundamental question behind remainder estimates for Taylor's Theorem. Typically (meaning for sufficiently differentiable functions, and assuming without loss of generality that we are expanding at $0$), we have estimates of the form $$ f(x) = \sum_{n = 0}^N f^{(n)}(0) \frac{x^n}{n!} + E_N(x)$$ where the error $E_N(x)$ is given explicitly by $$ E_N(x) = \int_0^x f^{(N+1)}(t) \frac{(x-t)^N}{N!} dt,$$ though this is frequently approximated by $$ |E_N(x)| \leq \max_{t \in [0,x]} |f^{(N+1)}(t)| \frac{x^{N+1}}{(N+1)!}.$$</p> <p>A Taylor series will converge to $f$ at $x$ if $E_N(x) \to 0$ as $N \to \infty$. If the derivatives are well-behaved, then this is relatively easy to understand. But if the derivatives are very hard to understand, then this question can be very hard to determine.</p> <p>There are examples of infinitely differentiable functions whose Taylor series don't converge in any neighborhood of the central expansion point, and there are examples of functions with pretty hard-to-understand derivatives whose Taylor series converge everywhere to that function. Asking for more is a bit nuanced for each individual function.</p>
differentiation
<p>I have been wondering whether the following limit is being used somehow, as a variation of the derivative:</p> <p>$$\lim_{h\to 0} \frac{f(x+h)-f(x-h)}{2h} .$$</p> <p><strong>Edit:</strong> I know that this limit is defined in some places where the derivative is not defined, but it gives us some useful information.</p> <p>The question <strong>is not</strong> whether this limit is similar to the derivative, but whether it is useful somehow.</p> <p>Thanks.</p>
<p>The "symmetric difference" form of the derivative is quite convenient for the purposes of <em>numerical</em> computation; to wit, note that the symmetric difference can be expanded in this way:</p> <p>$$D_h f(x)=\frac{f(x+h)-f(x-h)}{2h}=f^\prime(x)+\frac{f^{\prime\prime\prime}(x)}{3!}h^2+\frac{f^{(5)}(x)}{5!}h^4+\dots$$</p> <p>and one thing that should be noted here is that in this series expansion, only <em>even</em> powers of $h$ show up.</p> <p>Consider the corresponding expansion when $h$ is halved:</p> <p>$$D_{h/2} f(x)=\frac{f(x+h/2)-f(x-h/2)}{h}=f^\prime(x)+\frac{f^{\prime\prime\prime}(x)}{3!}\left(\frac{h}{2}\right)^2+\frac{f^{(5)}(x)}{5!}\left(\frac{h}{2}\right)^4+\dots$$</p> <p>One could take a particular linear combination of this half-$h$ expansion and the previous expansion in $h$ such that the term with $h^2$ zeroes out:</p> <p>$$4D_{h/2} f(x)-D_h f(x)=3f^\prime(x)-\frac{f^{(5)}(x)}{160}h^4+\dots$$</p> <p>and we have after a division by $3$:</p> <p>$$\frac{4D_{h/2} f(x)-D_h f(x)}{3}=f^\prime(x)-\frac{f^{(5)}(x)}{480}h^4+\dots$$</p> <p>Note that the surviving terms after $f^\prime(x)$ are (supposed to be) much smaller than either of the terms after $f^\prime(x)$ in the expansions for $D_h f(x)$ and $D_{h/2} f(x)$. Numerically speaking, one could obtain a slightly more accurate estimate of the derivative by evaluating the symmetric difference at a certain (well-chosen) step size $h$ and at half of the given $h$, and computing the linear combination $\dfrac{4D_{h/2} f(x)-D_h f(x)}{3}$. (This is akin to deriving Simpson's rule from the trapezoidal rule). The procedure generalizes, as one keeps taking appropriate linear combinations of a symmetric difference for some $h$ and the symmetric difference at half $h$ to zero out successive powers of $h^2$; this is the famous <em>Richardson extrapolation</em>.</p>
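<p>A minimal sketch of one extrapolation step, with $f=\sin$ at $x=1$ as a test function (the step size $h=0.1$ is an arbitrary choice):</p>

```python
import math

def sym_diff(f, x, h):
    """Symmetric difference quotient (f(x+h) - f(x-h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    """(4 D_{h/2} - D_h) / 3 cancels the h^2 term of the expansion."""
    return (4 * sym_diff(f, x, h / 2) - sym_diff(f, x, h)) / 3

exact = math.cos(1.0)
err_plain = abs(sym_diff(math.sin, 1.0, 0.1) - exact)
err_extrap = abs(richardson(math.sin, 1.0, 0.1) - exact)
print(err_plain, err_extrap)   # the extrapolated error is orders of magnitude smaller
```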
<p><strong>Lemma</strong>: Let $f$ be a convex function on an open interval $I$. For all $x \in I$, $$ g(x) = \lim_{h \to 0} \frac{f(x+h) - f(x-h)}{2h} $$ exists and $f(y) \geq f(x) + g(x) (y-x)$ for all $y \in I$.</p> <p>In particular, $g$ is a <a href="http://en.wikipedia.org/wiki/Subderivative">subderivative</a> of $f$. </p>
geometry
<p><em>This is a very speculative/soft question; please keep this in mind when reading it. Here "higher" means "greater than 3".</em></p> <p>What I am wondering about is what <em>new</em> geometrical phenomena there are in higher dimensions. When I say new I mean phenomena which are counterintuitive or not analogous to their lower dimensional counterparts. A good example could be <a href="http://mathworld.wolfram.com/HyperspherePacking.html" rel="noreferrer">hypersphere packing</a>.</p> <p>My main (and sad) impression is that almost all phenomena in higher dimensions could be thought of intuitively by dimensional analogy. See for example, <a href="http://eusebeia.dyndns.org/4d/vis/10-rot-1" rel="noreferrer">this link</a>:</p> <p><a href="https://i.sstatic.net/Tk7J4.gif" rel="noreferrer"><img src="https://i.sstatic.net/Tk7J4.gif" alt="Rotation of a 3-cube in the YW plane"></a></p> <p>What this implies (for me) is the boring consequence that there is no new conceptual richness in higher dimensional geometry beyond the fact that the numbers are larger (for example my field of study is string compactifications and though, at first sight, it could sound spectacular to use orientifolding which sets loci of fixed points which are O3 and O7 planes; the reasoning is pretty much the same as in lower dimensions...)</p> <p>However the question of higher dimensional geometry is very related (for me) to the idea of beauty and complexity: these projections to 2-D of higher dimensional objects totally amaze me (for example this orthonormal projection of a <a href="https://i.pinimg.com/originals/a3/ac/9f/a3ac9fdf2dd062ebf4e9fab843ddf1c4.jpg" rel="noreferrer">12-cube</a>) and makes me think there must be interesting higher dimensional phenomena...</p> <p>I would thank anyone who could give me examples of beautiful ideas implying 
“visualization” of higher dimensional geometry…</p>
<p>In high dimensions, almost all of the volume of a ball sits at its surface. More exactly, if $V_d(r)$ is the volume of the $d$-dimensional ball with radius $r$, then for any $\epsilon&gt;0$, no matter how small, you have $$\lim_{d\to\infty} \frac{V_d(1-\epsilon)}{V_d(1)} = 0$$ Algebraically that's obvious, but geometrically I consider it highly surprising.</p> <p><strong>Edit:</strong></p> <p>Another surprising fact: In 4D and above, you can have a flat torus, that is, a torus without any intrinsic curvature (like a cylinder in 3D). Even more: You can draw such a torus (not an image of it, the flat torus itself) on the surface of a hyperball (that is, a hypersphere). Indeed, the three-dimensional hypersphere (surface of the four-dimensional hyperball) can be almost completely partitioned into such tori, with two circles remaining in two completely orthogonal planes (thanks to anon in the comments for reminding me of those two leftover circles). Note that the circles could be considered degenerate tori, as the flat tori continuously transform into them (in much the same way as the circles of latitude on the 2-sphere transform into a point at the poles).</p>
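<p>Since $V_d(r)=c_d\,r^d$ for a dimension-dependent constant $c_d$, the ratio above is simply $(1-\epsilon)^d$, which a couple of lines make vivid:</p>

```python
# fraction of the unit ball's volume contained in the smaller ball of radius 1 - eps
eps = 0.01
inner_fraction = lambda d: (1 - eps) ** d

print(inner_fraction(3))      # ~0.97: in 3D, shaving 1% off the radius changes little
print(inner_fraction(1000))   # ~4e-5: in 1000D, almost all volume sits within 1% of the surface
```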
<p>A number of problems in discrete geometry (typically, involving arrangements of points or other objects in $\mathbb R^d$) change behavior as the number of dimensions grows past what we have intuition for.</p> <p>My favorite example is the "sausage catastrophe", because of the name. The problem here is: take $n$ unit balls in $\mathbb R^d$. How can we pack them together most compactly, minimizing the volume of the convex hull of their union? (To visualize this in $\mathbb R^3$, imagine that you're wrapping the $n$ balls in plastic wrap, creating a single object, and you want the object to be as small as possible.)</p> <p>There are two competing strategies here:</p> <ol> <li>Start with a dense sphere packing in $\mathbb R^d$, and pick out some roughly-circular piece of it.</li> <li>Arrange all the spheres in a line, so that the convex hull of their union forms the shape of a sausage.</li> </ol> <p>Which strategy is best? It depends on $d$, in kind of weird ways. For $d=2$, the first strategy (using the hexagonal circle packing, and taking a large hexagonal piece of it) is best for almost any number of circles. For $d=3$, the sausage strategy is the best known configuration for $n \le 56$ (though this is not proven) and the first strategy takes over for larger $n$ than that: the point where this switch happens is called the "sausage catastrophe".</p> <p>For $d=4$, the same behavior as in $d=3$ occurs, except we're even less certain when. We've managed to show that the sausage catastrophe occurs for some $n &lt; 375,769$. On the other hand, we're not even sure if the sausage is optimal for $n=5$.</p> <p>Finally, we know that there is <em>some</em> sufficiently large $d$ such that the sausage strategy is always the best strategy in $\mathbb R^d$, no matter how many balls there are. We think that value is $d=5$, but the best we've shown is that the sausage is always optimal for $d\ge 42$. 
There are many open questions about sausages.</p> <hr> <p>If you're thinking about the more general problem of packing spheres in $\mathbb R^d$ as densely as possible, the exciting stuff also happens in dimensions we can't visualize. A recent result says that the <a href="https://en.wikipedia.org/wiki/E8_lattice" rel="noreferrer">$E_8$ lattice</a> and the <a href="https://en.wikipedia.org/wiki/Leech_lattice" rel="noreferrer">Leech lattice</a> are the densest packing in $\mathbb R^8$ and $\mathbb R^{24}$ respectively, and these are much better than the best thing we know how to do in "adjacent" dimensions. In a sense, this is saying that there are $8$-dimensional and $24$-dimensional objects with no analog in $\mathbb R^d$ for arbitrary $d$: a perfect example of something that happens in many dimensions that can't be intuitively described by comparing it to ordinary $3$-dimensional space.</p> <hr> <p>Results like the <a href="https://en.wikipedia.org/wiki/Hales%E2%80%93Jewett_theorem" rel="noreferrer">Hales–Jewett theorem</a> are another source of "new behavior" in sufficiently high-dimensional space. The Hales–Jewett theorem says, roughly speaking, that for any $n$ there is a dimension $d$ such that $n$-in-a-row tic-tac-toe on an $n \times n \times \dots \times n$ board cannot be played to a draw. (For $n=3$, that dimension is $d=3$; for $n=4$, it's somewhere between $d=7$ and $d = 10^{11}$.) However, you could complain that this result is purely combinatorial; you're not doing so much visualizing of $d$-dimensional objects here.</p>
number-theory
<p>If a solution was found to the Riemann Hypothesis, would it have any effect on the security of things such as RSA protection? Would it make cracking large numbers easier?</p>
<p>If by 'solution' you mean confirmation or counterexample, then no. One could just assume the result, produce an algorithm whose validity requires the Riemann hypothesis, and use it to break RSA codes. Merely knowing if the Riemann Hypothesis holds or not doesn't help you construct any factorization method (although it can tell you a theoretical bound on how well a certain algorithm can run). It is possible, however, that in the process of resolving RH, we improve our understanding of related questions/techniques and use our improved knowledge to create algorithms that do have the potential to crack RSA codes.</p> <p>Most mathematicians don't want to see RH resolved just for a yes/no answer. It is more important that research into the problem produces new mathematics and deep insights -- the resolution of the problem is simply one goal to reach and a yardstick to measure our progress. The situation is similar to the development of algebraic number theory with the goal of understanding higher reciprocity laws. Along the way a whole new interesting subject opened up spawning many decades of interesting mathematics, and the original motivation no longer holds center stage.</p>
<p>No. Practical applications can simply assume the truth of the Riemann hypothesis; proving it would increase knowledge but not itself affect security.</p>
linear-algebra
<p>Let $f(x)$ be a polynomial of degree $n$ with coefficients in $\mathbb{Q}$. There are well-known ways to construct a $n \times n$ matrix $A$ with entries in $\mathbb{Q}$ whose characteristic polynomial is $f$. My question is: when is it possible to choose $A$ symmetric?</p> <p>An obvious necessary condition is that the roots of $f$ are all real, but it is not clear to me even in the case $n = 2$ that this is sufficient. In degree $2$ this comes down to determining whether or not every pair $(p,q)$ which satisfies $p^2 &gt; 4q$ (the condition that $x^2 + px + q$ has real roots) can be expressed in the form $$p = -(a + c)$$ $$q = ac - b^2$$ where $a$, $b$, and $c$ are rational. I have some partial results from just fumbling around and checking cases, but it seems clear that a more conceptual argument would be required to handle larger degrees.</p>
<p>I found an answer to your question in the following paper: Estes, Dennis R.(1-SCA); Guralnick, Robert M.(1-SCA) Minimal polynomials of integral symmetric matrices. Computational linear algebra in algebraic and related problems (Essen, 1992). Linear Algebra Appl. 192 (1993), 83–99. </p> <p>In the introduction, the authors mention:</p> <p>"Bender, in a series of papers, studied this problem for $\Bbb Q$, the rationals. He proved that any totally real monic polynomial over $\Bbb Q$ of odd degree can occur as the characteristic polynomial of a symmetric rational matrix. This fails already for polynomials of degree 2. The problem of determining precisely which polynomials can occur as characteristic polynomials of rational symmetric matrices is still not solved. The conjecture is that this is a local problem (i.e., if $f \in \Bbb Q[X]$ is monic of degree $n$, then $f$ is the characteristic polynomial of some element of $S_n(\Bbb Q)$ if and only if $f$ is the characteristic polynomial of some element of $S_n(\Bbb R)$ and $S_n(\Bbb Q_p)$ for all primes $p$."</p>
<p>I believe the following paper shows how, for any given polynomial, a certain symmetric matrix A can be determined (calculated) for which said polynomial is the characteristic polynomial of A (see the link below).</p> <p>Given the demonstration presented in the paper, I would not say that, given a certain polynomial, matrix A could "be chosen", since that would imply that the degrees of freedom allow such a choice.</p> <p>As the demonstration shows, for any given polynomial a certain matrix A can be calculated, which means that there always is a system of linear equations that can be solved (the determinant of the system of linear equations is != 0); that is, the number of degrees of freedom is always zero, so there is no way that matrix A could be "chosen", even though the solution for A could be guessed.</p> <p>Kind regards, GEN</p> <p><a href="http://ac.els-cdn.com/0024379590903235/1-s2.0-0024379590903235-main.pdf?_tid=bd2c75aa-42cb-11e6-aaf5-00000aacb362&amp;acdnat=1467735530_b38cd0adce0c4955dffb08b1a1bb8882" rel="nofollow">http://ac.els-cdn.com/0024379590903235/1-s2.0-0024379590903235-main.pdf?_tid=bd2c75aa-42cb-11e6-aaf5-00000aacb362&amp;acdnat=1467735530_b38cd0adce0c4955dffb08b1a1bb8882</a></p>
number-theory
<p>Consider the following decision problem: given two lists of positive integers $a_1, a_2, \dots, a_n$ and $b_1, b_2, \dots, b_m$ the task is to decide if $a_1^{a_2^{\cdot^{\cdot^{\cdot^{a_n}}}}} &lt; b_1^{b_2^{\cdot^{\cdot^{\cdot^{b_m}}}}}$.</p> <ul> <li>Is this problem in the <a href="http://en.wikipedia.org/wiki/P_%28complexity%29">class $P$</a>?</li> <li>If yes, then what is the algorithm solving it in polynomial time?</li> <li>Otherwise, what is the fastest algorithm that can solve it?</li> </ul> <p><em>Update:</em> </p> <ul> <li>I mean polynomial time with respect to the size of the input, i.e. total number of digits in all $a_i, b_i$.</li> <li>$p^{q^{r^s}}=p^{(q^{(r^s)})}$, <strong>not</strong> $((p^q)^r)^s$.</li> </ul>
<p>Recently I asked a <a href="https://mathematica.stackexchange.com/questions/24815/how-to-compare-power-towers-in-mathematica">very similar question</a> at Mathematica.SE. I assume you know it, because you participated in the discussion.</p> <p><a href="https://mathematica.stackexchange.com/users/81/leonid-shifrin">Leonid Shifrin</a> suggested an <a href="https://mathematica.stackexchange.com/a/24864/7309">algorithm that solves this problem</a> for the majority of cases, although there were cases when it gave an incorrect answer. But his approach seems correct and it looks like it is possible to fix those defects. Although it was not rigorously proved, his algorithm seems to work in polynomial time. It looks like it would be fair if he got the bounty for this question, but for some reason he didn't want to.</p> <p>So, this question is not yet settled completely and I am going to look for the complete and correct solution, and will start a new bounty for this question once the current one expires. I do not expect to get a bounty for this answer, but should you choose to award it, I will add it up to the amount of the new bounty so that it passes to whomever eventually solves this question.</p>
<p>For readability I'll write $[a_1,a_2,\ldots,a_n]$ for the tower $a_1^{a_2^{a_3^\cdots}}$.</p> <p>Let all of the $a_i,b_i$ be in the interval $[2,N]$ where $N=2^K$ (if any $a_i$ or $b_i$ is 1 we can truncate the tower at the previous level, and the inputs must be bounded to talk about complexity).</p> <p>Then consider two towers of the same height $$ T=[N,N,\ldots,N,x] \quad \mathrm{and} \quad S=[2,2,\dots,2,y] $$ i.e. T is the largest tower in our input range with $x$ at the top, and S is the smallest with $y$ at the top.</p> <p>With $N, x\ge 2$ and $y&gt;2Kx$ then $$ \begin{aligned} 2^y &amp; &gt; 2^{2Kx} \\ &amp; = N^{2x} \\ &amp; &gt; 2log(N)N^x &amp;\text{ as $x \gt 1 \ge \frac{1+log(log(N))}{log(N)}$} \\ &amp; = 2KN^x \end{aligned} $$ Now write $x'=N^x$ and $y'=2^y&gt;2Kx'$ then $$ [N,N,x]=[N,x']&lt;[2,y']=[2,2,y] $$ Hence by induction $T&lt;S$ when $y&gt;2Kx$.</p> <p>So we only need to calculate the exponents down from the top until one exceeds the other by a factor of $2K$, then that tower is greater no matter what values fill in the lower ranks.</p> <p>If the towers have different heights, wlog assume $n&gt;m$, then first we reduce $$ [a_1,a_2,\ldots,a_n] = [a_1,a_2,\ldots,a_{m-1},A] $$ where $A=[a_m,a_{m+1},\ldots,a_n]$. If we can determine that $A&gt;2Kb_m$ then the $a$ tower is larger.</p> <p>If the towers match on several of the highest exponents, then we can reduce the need for large computations with a shortcut. Assume $n=m$, that $a_j&gt; b_j$ for some $j&lt;m$ and $a_i=b_i$ for $j&lt;i\le m$. Then $$ [a_1,a_2,\ldots,a_m] = [a_1,a_2,\ldots,a_j,X] \\ [b_1,b_2,\ldots,b_m] = [b_1,b_2,\ldots,b_j,X] $$ and the $a$ tower is larger if $(a_j/b_j)^X&gt;2K$. So we don't need to compute $X$ fully if we can determine that it exceeds $\log(2K)/\log(a_j/b_j)$.</p> <p>These checks need to be combined with a numeric method like the one @ThomasAhle gave. 
They can solve the problem that method has with deep trees that match at the top, but can't handle $[4,35,15],[20,57,13]$ which are too big to compute but don't allow for one of these shortcuts. </p>
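<p>The key lemma above is easy to sanity-check numerically for small parameters (this is only a check of the bound, not the full comparison algorithm; the constants below are arbitrary small choices):</p>

```python
def tower(base, height, top):
    """Evaluate base^(base^(...^(base^top))) with `height` copies of base."""
    v = top
    for _ in range(height):
        v = base ** v
    return v

K = 3               # arbitrary small choice, so N = 2^K = 8
N = 2 ** K
x = 2
y = 2 * K * x + 1   # the smallest y satisfying y > 2Kx

# T = [N, ..., N, x] (largest tower with x on top) stays below
# S = [2, ..., 2, y] (smallest tower with y on top) at each height.
for height in (1, 2):
    assert tower(N, height, x) < tower(2, height, y)
```

<p>Height $3$ is already far too large to evaluate directly ($[2,2,2,13]$ has more than $10^{2465}$ digits), which is why the bound, rather than brute evaluation, has to do the work.</p>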
geometry
<p>The question is written like this:</p> <blockquote> <p>Is it possible to find an infinite set of points in the plane, not all on the same straight line, such that the distance between <strong>EVERY</strong> pair of points is rational?</p> </blockquote> <p>This would be so easy if these points could be on the same straight line, but I couldn't get any idea to solve the question above (not all points on the same straight line). I believe there must be some kind of connection between the points, but I couldn't figure it out.</p> <p>What I tried is a total mess. I tried to draw some triangles and to connect some points from one triangle to another, but in vain.</p> <p><strong>Note:</strong> I want to see a real example of such an infinite set of points in the plane that can be an answer for the question. A graph of these points would be helpful.</p>
<p>You can even find infinitely many such points on the unit circle: Let $\mathscr S$ be the set of all points on the unit circle such that $\tan \left(\frac {\theta}4\right)\in \mathbb Q$. If $(\cos(\alpha),\sin(\alpha))$ and $(\cos(\beta),\sin(\beta))$ are two points on the circle then a little geometry tells us that the distance between them is (the absolute value of) $$2 \sin \left(\frac {\alpha}2\right)\cos \left(\frac {\beta}2\right)-2 \sin \left(\frac {\beta}2\right)\cos \left(\frac {\alpha}2\right)$$ and, if the points are both in $\mathscr S$ then this is rational.</p> <p>Details: The distance formula is an immediate consequence of the fact that, if two points on the circle have an angle $\phi$ between them, then the distance between them is (the absolute value of) $2\sin \frac {\phi}2$. For the rationality note that $$z=\tan \frac {\phi}2 \implies \cos \phi= \frac {1-z^2}{1+z^2} \quad \&amp; \quad \sin \phi= \frac {2z}{1+z^2}$$</p> <p>Note: Of course $\mathscr S$ is dense on the circle. So far as I am aware, it is unknown whether you can find such a set which is dense on the entire plane.</p>
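<p>The construction is easy to verify with exact rational arithmetic. A small sketch (the particular parameter values are arbitrary): each rational $t=\tan(\theta/4)$ gives rational $\cos(\theta/2)$ and $\sin(\theta/2)$ by the half-angle formulas quoted above, and the chord formula then produces exactly rational distances:</p>

```python
from fractions import Fraction
from itertools import combinations

def point(t):
    """Point on the unit circle at angle theta with tan(theta/4) = t.

    Returns ((cos(theta/2), sin(theta/2)), (cos(theta), sin(theta)));
    every coordinate is an exact Fraction."""
    c2 = (1 - t * t) / (1 + t * t)   # cos(theta/2), half-angle formula
    s2 = 2 * t / (1 + t * t)         # sin(theta/2)
    return (c2, s2), (c2 * c2 - s2 * s2, 2 * s2 * c2)  # double angle

params = [Fraction(p) for p in ("1/2", "1/3", "2/5", "3/7")]
pts = [point(t) for t in params]

for ((ca, sa), A), ((cb, sb), B) in combinations(pts, 2):
    d = 2 * (sa * cb - sb * ca)      # signed chord length, a Fraction
    dist2 = (A[0] - B[0]) ** 2 + (A[1] - B[1]) ** 2
    assert d * d == dist2            # the chord formula is exact
```

<p>Every pairwise distance comes out as a `Fraction`, confirming rationality with no floating-point error.</p>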
<p>Yes, it's possible. For instance, you could start with $(0,1)$ and $(0,0)$, and then put points along the $x$-axis, noting that there are infinitely many different right triangles with rational sides and one leg equal to $1$. For instance, $(3/4,0)$ will have distance $5/4$ to $(0,1)$.</p> <p>This means that <em>most</em> of the points are on a single line (the $x$-axis), but one point, $(0,1)$, is not on that line.</p>
matrices
<p>I found out that there exist positive definite matrices that are non-symmetric, and I know that symmetric positive definite matrices have positive eigenvalues.</p> <p>Does this hold for non-symmetric matrices as well?</p>
<p>Let <span class="math-container">$A \in M_{n}(\mathbb{R})$</span> be any non-symmetric <span class="math-container">$n\times n$</span> matrix but "positive definite" in the sense that: </p> <p><span class="math-container">$$\forall x \in \mathbb{R}^n, x \ne 0 \implies x^T A x &gt; 0$$</span> The eigenvalues of <span class="math-container">$A$</span> need not be positive. For an example, the matrix in David's comment:</p> <p><span class="math-container">$$\begin{pmatrix}1&amp;1\\-1&amp;1\end{pmatrix}$$</span></p> <p>has eigenvalue <span class="math-container">$1 \pm i$</span>. However, the real part of any eigenvalue <span class="math-container">$\lambda$</span> of <span class="math-container">$A$</span> is always positive.</p> <p>Let <span class="math-container">$\lambda = \mu + i\nu\in\mathbb C $</span> where <span class="math-container">$\mu, \nu \in \mathbb{R}$</span> be an eigenvalue of <span class="math-container">$A$</span>. Let <span class="math-container">$z \in \mathbb{C}^n$</span> be a right eigenvector associated with <span class="math-container">$\lambda$</span>. Decompose <span class="math-container">$z$</span> as <span class="math-container">$x + iy$</span> where <span class="math-container">$x, y \in \mathbb{R}^n$</span>.</p> <p><span class="math-container">$$(A - \lambda) z = 0 \implies \left((A - \mu) - i\nu\right)(x + iy) = 0 \implies \begin{cases}(A-\mu) x + \nu y = 0\\(A - \mu) y - \nu x = 0\end{cases}$$</span> This implies</p> <p><span class="math-container">$$x^T(A-\mu)x + y^T(A-\mu)y = \nu (y^T x - x^T y) = 0$$</span></p> <p>and hence <span class="math-container">$$\mu = \frac{x^TA x + y^TAy}{x^Tx + y^Ty} &gt; 0$$</span></p> <p>In particular, this means any real eigenvalue <span class="math-container">$\lambda$</span> of <span class="math-container">$A$</span> is positive.</p>
<p>I am answering the first part of @nukeguy's comment, who asked:</p> <blockquote> <p>Is the converse true? If all of the eigenvalues of a matrix <span class="math-container">$𝐴$</span> have positive real parts, does this mean that <span class="math-container">$𝑥^𝑇𝐴𝑥&gt;0$</span> for any <span class="math-container">$𝑥≠0∈ℝ^𝑛$</span>? What if we assume <span class="math-container">$𝐴$</span> is diagonalizable?</p> </blockquote> <p>I have a counterexample, where <span class="math-container">$A$</span> has positive eigenvalues, but it is not positive definite: <span class="math-container">$ A= \begin{bmatrix} 7 &amp; -2 &amp; -4 \\ -17 &amp; 40 &amp; -19 \\ -21 &amp; -9 &amp; 31 \end{bmatrix} $</span>. Eigenvalues of this matrix are <span class="math-container">$1.2253$</span>, <span class="math-container">$27.4483$</span>, and <span class="math-container">$49.3263$</span>, but it indefinite because if <span class="math-container">$x_1 = \begin{bmatrix}-48 &amp; -10&amp; -37\end{bmatrix}$</span> and <span class="math-container">$x_2 = \begin{bmatrix}-48 &amp;10 &amp;-37\end{bmatrix}$</span>, then we have <span class="math-container">$𝑥_1𝐴𝑥_1^T = -1313$</span> and <span class="math-container">$𝑥_2𝐴𝑥_2^T = 37647.$</span></p>
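<p>The two quadratic-form evaluations are plain integer arithmetic, so the indefiniteness is easy to verify directly (the eigenvalue values quoted above would need a numerical solver, which is omitted here):</p>

```python
A = [[7, -2, -4],
     [-17, 40, -19],
     [-21, -9, 31]]

def quad_form(A, x):
    """Compute x A x^T for a list-of-lists matrix A and a row vector x."""
    Ax = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    return sum(x[i] * Ax[i] for i in range(3))

x1 = [-48, -10, -37]
x2 = [-48, 10, -37]
print(quad_form(A, x1), quad_form(A, x2))  # -1313 37647
```

<p>The two values have opposite signs, so $A$ is indefinite despite all its eigenvalues having positive real part.</p>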
combinatorics
<p>If I want to find how many possible ways there are to choose k out of n elements I know you can use the simple formula below:</p> <p>$$ \binom{n}{k} = \frac{n! }{ k!(n-k)! } .$$</p> <p>What if I want to go the other way around though?</p> <p>That is, I know I want to have $X$ possible combinations, and I want to find all the various pairs of $n$ and $k$ that will give me that number of combinations.</p> <p>For example, if the number of combinations I want is $3$, I want a formula/method to find that all the pairs that will result in that number of combinations are $(3,1)$ and $(3,2)$</p> <p>I know I could test all the possible pairs, but this would be impractical for large numbers.</p> <p>But perhaps there's no easier way of doing this then the brute force approach?</p>
<p>If $X$ is only as large as $10^7$ then this is straightforward to do with computer assistance. First note the elementary inequalities $$\frac{n^k}{k!} \ge {n \choose k} \ge \frac{(n-k)^k}{k!}$$</p> <p>which are close to tight when $k$ is small. If $X = {n \choose k}$, then it follows that $$n \ge \sqrt[k]{k! X} \ge n-k$$</p> <p>hence that $$\sqrt[k]{k! X} + k \ge n \ge \sqrt[k]{k! X}$$</p> <p>so for fixed $k$ you only have to check at most $k+1$ possible values of $n$, which is manageable when $k$ is small. You can speed up this process by factoring $X$ if you want and applying <a href="http://en.wikipedia.org/wiki/Lucas%27_theorem#Variations_and_generalizations">Kummer's theorem</a> (the first bullet point in that section of the article), but computing binomial coefficients for $k$ small is straightforward so this probably isn't necessary. </p> <p>For larger $k$, note that you can always assume WLOG that $n \ge 2k$ since ${n \choose k} = {n \choose n-k}$, hence you can assume that $$X = {n \choose k} \ge {2k \choose k} &gt; \frac{4^k}{2k+1}$$</p> <p>(see <a href="http://en.wikipedia.org/wiki/Proof_of_Bertrand%27s_postulate">Erdős' proof of Bertrand's postulate</a> for details on that last inequality). Consequently you only have to check logarithmically many values of $k$ (as a function of $X$). For example, if $X \le 10^7$ you only have to check up to $k = 14$.</p> <p>In total, applying the above algorithm you only have to check $O(\log(X)^2)$ pairs $(n, k)$, and each check requires at worst $O(\log(X))$ multiplications of numbers at most as large as $X$, together with at worst a comparison of two numbers of size $O(X)$. So the above algorithm takes polynomial time in $\log(X)$. 
</p> <p><strong>Edit:</strong> It should be totally feasible to just factor $X$ at the sizes you're talking about, but if you want to apply the Kummer's theorem part of the algorithm to larger $X$, you don't actually have to completely factor $X$; you can probably do the Kummer's theorem comparisons on the fly by computing the greatest power of $2$ that goes into $X$, then $3$, then $5$, etc. and storing these as necessary. As a second step, if neither $X$ nor the particular binomial coefficient ${n_0 \choose k_0}$ you're testing are divisible by a given small prime $p$, you can appeal to Lucas' theorem. Of course, you have to decide at some point when to stop testing small primes and just test for actual equality. </p>
<p>Here's an implementation in code of Qiaochu's answer. The algorithm, to recap, is:</p> <ul> <li><p>Input <span class="math-container">$X$</span>. (We want to find all <span class="math-container">$(n, k)$</span> such that <span class="math-container">$\binom{n}{k} = X$</span>.)</p></li> <li><p>For each <span class="math-container">$k \ge 1$</span> such that <span class="math-container">$4^k/(2k + 1) &lt; X$</span>,</p> <ul> <li><p>Let <span class="math-container">$m$</span> be the smallest number such that <span class="math-container">$m^k \ge k!X$</span>.</p></li> <li><p>For each <span class="math-container">$n$</span> from <span class="math-container">$\max(m, 2k)$</span> to <span class="math-container">$m + k$</span> (inclusive),</p> <ul> <li>If <span class="math-container">$\binom{n}{k} = X$</span>, yield <span class="math-container">$(n, k)$</span> and <span class="math-container">$(n, n-k)$</span>.</li> </ul></li> </ul></li> </ul> <p>It is written in Python (chose this language for readability and native big integers, not for speed). 
It is careful to use only integer arithmetic, to avoid any errors due to floating-point precision.</p> <p>The version below is optimized to avoid recomputing <span class="math-container">$\binom{n+1}{k}$</span> from scratch after having computed <span class="math-container">$\binom{n}{k}$</span>; this speeds it up so that for instance for <span class="math-container">$\binom{1234}{567}$</span> (a <span class="math-container">$369$</span>-digit number) it takes (on my laptop) 0.4 seconds instead of the 50 seconds taken by the unoptimized version in the <a href="https://math.stackexchange.com/revisions/2381576/1">first revision</a> of this answer.</p>

<pre><code>#!/usr/bin/env python
from __future__ import division

import math


def binom(n, k):
    """Returns n choose k, for nonnegative integer n and k"""
    assert k &gt;= 0
    assert n &gt;= 0
    assert k == int(k)
    assert n == int(n)
    k = min(k, n - k)
    ans = 1
    for i in range(k):
        ans *= n - i
        ans //= i + 1
    return ans


def first_over(k, c):
    """Binary search to find smallest value of n for which n^k &gt;= c"""
    n = 1
    while n ** k &lt; c:
        n *= 2
    # Invariant: lo**k &lt; c &lt;= hi**k
    lo = 1
    hi = n
    while hi - lo &gt; 1:
        mid = lo + (hi - lo) // 2
        if mid ** k &lt; c:
            lo = mid
        else:
            hi = mid
    assert hi ** k &gt;= c
    assert (hi - 1) ** k &lt; c
    return hi


def find_n_k(x):
    """Given x, yields all n and k such that binom(n, k) = x."""
    assert x == int(x)
    assert x &gt; 1
    k = 0
    while True:
        k += 1
        # https://math.stackexchange.com/a/103385/205
        if (2 * k + 1) * x &lt;= 4**k:
            break
        nmin = first_over(k, math.factorial(k) * x)
        nmax = nmin + k + 1
        nmin = max(nmin, 2 * k)
        choose = binom(nmin, k)
        for n in range(nmin, nmax):
            if choose == x:
                yield (n, k)
                if k &lt; n - k:
                    yield (n, n - k)
            choose *= (n + 1)
            choose //= (n + 1 - k)


if __name__ == '__main__':
    import sys
    if len(sys.argv) &lt; 2:
        print('Pass X in the command to see (n, k) such that (n choose k) = X.')
        sys.exit(1)
    x = int(sys.argv[1])
    if x == 0:
        print('(n, k) for any n and any k &gt; n, and (0, 0)')
        sys.exit(0)
    if x == 1:
        print('(n, 0) and (n, n) for any n, and (1, 1)')
        sys.exit(0)
    for (n, k) in find_n_k(x):
        print('%s %s' % (n, k))
</code></pre>

<p>Example runs:</p>

<pre><code>~$ ./mse_103377_binom.py 2
2 1
~$ ./mse_103377_binom.py 3
3 1
3 2
~$ ./mse_103377_binom.py 6
6 1
6 5
4 2
~$ ./mse_103377_binom.py 10
10 1
10 9
5 2
5 3
~$ ./mse_103377_binom.py 20
20 1
20 19
6 3
~$ ./mse_103377_binom.py 55
55 1
55 54
11 2
11 9
~$ ./mse_103377_binom.py 120
120 1
120 119
16 2
16 14
10 3
10 7
~$ ./mse_103377_binom.py 3003
3003 1
3003 3002
78 2
78 76
15 5
15 10
14 6
14 8
~$ ./mse_103377_binom.py 8966473191018617158916954970192684
8966473191018617158916954970192684 1
8966473191018617158916954970192684 8966473191018617158916954970192683
123 45
123 78
~$ ./mse_103377_binom.py 116682544286207712305570174244760883486876241791210077037133735047856714594324355733933738740204795317223528882568337264611289789138133946725471443924278812277695432803500115699090641248357468388106131543801393801287657125117557432072695477147403395443757359171876874010770591355653882772562301453205472707597435095925666815012707478996454360460481159339802667334477440
116682544286207712305570174244760883486876241791210077037133735047856714594324355733933738740204795317223528882568337264611289789138133946725471443924278812277695432803500115699090641248357468388106131543801393801287657125117557432072695477147403395443757359171876874010770591355653882772562301453205472707597435095925666815012707478996454360460481159339802667334477440 1
116682544286207712305570174244760883486876241791210077037133735047856714594324355733933738740204795317223528882568337264611289789138133946725471443924278812277695432803500115699090641248357468388106131543801393801287657125117557432072695477147403395443757359171876874010770591355653882772562301453205472707597435095925666815012707478996454360460481159339802667334477440 116682544286207712305570174244760883486876241791210077037133735047856714594324355733933738740204795317223528882568337264611289789138133946725471443924278812277695432803500115699090641248357468388106131543801393801287657125117557432072695477147403395443757359171876874010770591355653882772562301453205472707597435095925666815012707478996454360460481159339802667334477439
1234 567
1234 667
</code></pre>
probability
<p>What is the mean and variance of Squared Gaussian: $Y=X^2$ where: $X\sim\mathcal{N}(0,\sigma^2)$?</p> <p>It is interesting to note that Gaussian R.V here is zero-mean and non-central Chi-square Distribution doesn't work.</p> <p>Thanks.</p>
<p>We can avoid using the fact that $X^2\sim\sigma^2\chi_1^2$, where $\chi_1^2$ is the chi-squared distribution with $1$ degree of freedom, and calculate the expected value and the variance just using the definition. We have that $$ \operatorname E X^2=\operatorname{Var}X=\sigma^2 $$ since $\operatorname EX=0$ (see <a href="https://math.stackexchange.com/questions/768816/if-x-sim-n0-1-why-is-ex2-1/768824#768824">here</a>).</p> <p>Also, $$ \operatorname{Var}X^2=\operatorname EX^4-(\operatorname EX^2)^2. $$ The fourth moment $\operatorname EX^4$ is equal to $3\sigma^4$ (see <a href="https://math.stackexchange.com/questions/1917647/proving-ex4-3%CF%834">here</a>). Hence, $$ \operatorname{Var}X^2=3\sigma^4-\sigma^4=2\sigma^4. $$</p>
<p>Note that $X^2 \sim \sigma^2 \chi^2_1$ where $\chi^2_1$ is the Chi-squared distribution with 1 degree of freedom. Since $E[\chi^2_1] = 1, \text{Var}[\chi^2_1] = 2$ we have $E[X^2] = \sigma^2, \text{Var}[X^2] = 2 \sigma^4$.</p>
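<p>Both moments are easy to sanity-check by simulation. A rough Monte Carlo sketch (the sample size, seed, and tolerances are arbitrary choices):</p>

```python
import random

random.seed(0)
sigma = 1.5
n = 200_000

# draw X ~ N(0, sigma^2) and look at Y = X^2
ys = [random.gauss(0, sigma) ** 2 for _ in range(n)]
mean_y = sum(ys) / n
var_y = sum((y - mean_y) ** 2 for y in ys) / n

# E[X^2] = sigma^2 = 2.25 and Var[X^2] = 2*sigma^4 = 10.125
assert abs(mean_y - sigma ** 2) < 0.05
assert abs(var_y - 2 * sigma ** 4) < 0.5
```

<p>The sample mean and variance of $Y=X^2$ land close to $\sigma^2$ and $2\sigma^4$, as derived above.</p>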
linear-algebra
<p>I'm a student in an elementary linear algebra course. Without bashing on my professor, I must say that s/he is very poor at answering questions, often not addressing the question itself. Throughout the course, there have been multiple questions that have gone unanswered, but I must have the answer to this one question.</p> <p>"Why are some vectors written as row vectors and others as column vectors?"</p> <p>I understand that if I transpose one, it becomes the other. However, I'd like to understand the purpose behind writing a vector in a certain format.</p> <p>Taking examples from my lectures, I see that when I'm trying to prove linear independence of a group of vectors, the vectors are written as column vectors in a matrix, and the row reduced form is found.</p> <p>Other times, like trying to find the cross product or just solving a matrix, I see the vectors written as row vectors.</p> <p>My professor is very vague on the notations and explanations, and it bugs me as a person who needs to know the reason behind every small thing, why this variation occurs in the format. Any input is greatly appreciated.</p>
<p>In one sense, you can say that a vector is simply an object with certain properties, and it is neither a row of numbers nor a column of numbers. But in practice, we often want to use a list of <span class="math-container">$n$</span> numeric coordinates to describe an <span class="math-container">$n$</span>-dimensional vector, and we call this list of coordinates a vector. The general convention seems to be that the coordinates are listed in the format known as a <em>column</em> vector, which is (or at least, which acts like) an <span class="math-container">$n \times 1$</span> matrix.</p> <p>This has the nice property that if <span class="math-container">$v$</span> is a vector and <span class="math-container">$M$</span> is a matrix representing a linear transformation, the product <span class="math-container">$Mv$</span>, computed by the usual rules of matrix multiplication, is another vector (specifically, a column vector) representing the image of <span class="math-container">$v$</span> under that transformation.</p> <p>But because we write mostly in a horizontal direction, it is not always convenient to list the coordinates of a vector from left to right. If you're careful, you might write</p> <p><span class="math-container">$$ \langle x_1, x_2, \ldots, x_n \rangle^T $$</span></p> <p>meaning the <em>transpose</em> of the row vector <span class="math-container">$\langle x_1, x_2, \ldots, x_n \rangle$</span>; that is, we want the convenience of left-to-right notation but we make it clear that we actually mean a column vector (which is what you get when you transpose a row vector). If we're <em>not</em> being careful, however, we might just write <span class="math-container">$\langle x_1, x_2, \ldots, x_n \rangle$</span> as our &quot;vector&quot; and assume everyone will understand what we mean.</p> <p>Occasionally, we actually need the coordinates of a vector in row-vector format, in which case we can represent that by transposing a column vector. 
For example, if <span class="math-container">$u$</span> and <span class="math-container">$v$</span> are vectors (that is, column vectors), then the usual inner product of <span class="math-container">$u$</span> and <span class="math-container">$v$</span> can be written <span class="math-container">$u^T v$</span>, evaluated as the product of a <span class="math-container">$1\times n$</span> matrix with an <span class="math-container">$n \times 1$</span> matrix. Note that if <span class="math-container">$u$</span> is a (column) vector, then <span class="math-container">$u^T$</span> really is a row vector and can (and should) legitimately be written as <span class="math-container">$\langle u_1, u_2, \ldots, u_n \rangle$</span>.</p> <p>This all works out quite neatly and conveniently when people are careful and precise in how they write things. At a deeper and more abstract level, you can formalize these ideas as shown in another answer. (My answer here is relatively informal, intended merely to give a sense of why people think of the column vector as &quot;the&quot; representation of an abstract vector.)</p> <p>When people are <em>not</em> careful and precise, it may help to say to yourself sometimes that the transpose of a certain vector representation is <em>intended</em> in a certain context even though the person writing that representation neglected to indicate it.</p>
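<p>The bookkeeping is easy to see with plain lists, treating a column vector as an $n\times 1$ matrix (a toy illustration with made-up numbers, not a library recommendation):</p>

```python
def matmul(A, B):
    """Multiply an m x n matrix by an n x p matrix (both lists of rows)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

u = [[1], [2], [3]]            # a column vector: a 3x1 matrix
v = [[4], [5], [6]]            # another column vector
uT = [[row[0] for row in u]]   # its transpose: a 1x3 row vector

# u^T v is a 1x1 matrix whose single entry is the inner product
print(matmul(uT, v))  # [[32]]
```

<p>The $1\times n$ times $n\times 1$ product is exactly the inner product $u^Tv$ described above.</p>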
<p>For short: Column vectors live in say ${\mathbb{R}^n}$ and row vectors live in the dual of ${\mathbb{R}^n}$ which is denoted by ${\left( {{\mathbb{R}^n}} \right)^ * } \cong Hom({\mathbb{R}^n},\mathbb{R})$. Co-vectors are therefore linear mappings $\alpha :{\mathbb{R}^n} \to \mathbb{R}$. If one uses basis in ${\mathbb{R}^n}$ and basis in ${\left( {{\mathbb{R}^n}} \right)^ * }$, then for $v \in {\mathbb{R}^n}$ and $\alpha \in {\left( {{\mathbb{R}^n}} \right)^ * }$ with representations:</p> <p>$$\alpha = {\sum\limits_j {{\alpha _j} \cdot \left( {{e^j}} \right)} ^ * }$$ and $$v = \sum\limits_i {{v^i} \cdot {e_i}}$$ we get: $$\alpha (v) = \alpha (\sum\limits_i {{v^i} \cdot {e_i}} ) = \sum\limits_i {{v^i} \cdot \alpha ({e_i}} )$$ $$\sum\limits_i {{v^i}\alpha ({e_i})} = \sum\limits_i {{v^i}{{\sum\limits_j {{\alpha _j} \cdot \left( {{e^j}} \right)} }^ * }({e_i})} = \sum\limits_i {{{\sum\limits_j {{\alpha _j}{v^i} \cdot \left( {{e^j}} \right)} }^ * }({e_i})}$$ $$\sum\limits_i {\sum\limits_j {{\alpha _j}{v^i} \cdot \delta _i^j} } = \sum\limits_k {{\alpha _k}{v^k}} = \left( {{\alpha _1}, \cdots ,{\alpha _n}} \right) \cdot \left( {\begin{array}{*{20}{c}} {{v^1}} \\ \vdots \\ {{v^n}} \end{array}} \right)$$ Here $\alpha$ is a row vector and $v$ a column vector. Note that $${\left( {{e^j}} \right)^ * }({e_i}) = \delta _i^j = \left\{ {\begin{array}{*{20}{c}} 1&amp;{i = j} \\ 0&amp;{i \ne j} \end{array}} \right.$$ is the link between a pair of dual bases. Using Einstein-Index notation (as usual) we have simply: $$\alpha (v) = {\alpha _k}{v^k}$$ This is co- and contra variant notation. Same story for ${T_p}M$ and $T_p^ * M$ that is Tangent- and Co-Tangent space for manifolds taken at a point $p \in M$. But it's another story.</p>
probability
<p>I want to teach a short course in probability and I am looking for some counter-intuitive examples in probability. I am mainly interested in the problems whose results seem to be obviously false while they are not.</p> <p>I already found some things. For example these two videos:</p> <ul> <li><a href="http://www.youtube.com/watch?v=Sa9jLWKrX0c" rel="noreferrer">Penney's game</a></li> <li><a href="https://www.youtube.com/watch?v=ud_frfkt1t0" rel="noreferrer">How to win a guessing game</a></li> </ul> <p>In addition, I have found some weird examples of random walks. For example this amazing theorem:</p> <blockquote> <p>For a simple random walk, the mean number of visits to point <span class="math-container">$b$</span> before returning to the origin is equal to <span class="math-container">$1$</span> for every <span class="math-container">$b \neq 0$</span>.</p> </blockquote> <p>I have also found some advanced examples such as <a href="http://amstat.tandfonline.com/doi/abs/10.1080/00031305.1989.10475674" rel="noreferrer">Do longer games favor the stronger player</a>?</p> <p>Could you please do me a favor and share some other examples of such problems? It's very exciting to read yours...</p>
<p>The most famous counter-intuitive probability theory example is the <a href="https://brilliant.org/wiki/monty-hall-problem/">Monty Hall Problem</a></p> <ul> <li>In a game show, there are three doors behind which there are a car and two goats. However, which door conceals which is unknown to you, the player.</li> <li>Your aim is to select the door behind which the car is. So, you go and stand in front of a door of your choice.</li> <li>At this point, regardless of which door you selected, the game show host chooses and opens one of the remaining two doors. If you chose the door with the car, the host selects one of the two remaining doors at random (with equal probability) and opens that door. If you chose a door with a goat, the host selects and opens the other door with a goat.</li> <li>You are given the option of standing where you are and switching to the other closed door.</li> </ul> <p>Does switching to the other door increase your chances of winning? Or does it not matter?</p> <p>The answer is that it <em>does</em> matter whether or not you switch. 
This is initially counter-intuitive for someone seeing this problem for the first time.</p> <hr> <ul> <li>If a family has two children, at least one of which is a daughter, what is the probability that both of them are daughters?</li> <li>If a family has two children, the elder of which is a daughter, what is the probability that both of them are the daughters?</li> </ul> <p>A beginner in probability would expect the answers to both these questions to be the same, which they are not.</p> <p><a href="https://mathwithbaddrawings.com/2013/10/14/the-riddle-of-the-odorless-incense/">Math with Bad Drawings</a> explains this paradox with a great story as a part of a seven-post series in Probability Theory</p> <hr> <p><a href="https://en.wikipedia.org/wiki/Nontransitive_dice">Nontransitive Dice</a></p> <p>Let persons P, Q, R have three distinct dice.</p> <p>If it is the case that P is more likely to win over Q, and Q is more likely to win over R, is it the case that P is likely to win over R?</p> <p>The answer, strangely, is no. One such dice configuration is $(\left \{2,2,4,4,9,9 \right\},\left \{ 1,1,6,6,8,8\right \},\left \{ 3,3,5,5,7,7 \right \})$</p> <hr> <p><a href="https://brilliant.org/discussions/thread/rationality-revisited-the-sleeping-beauty-paradox/">Sleeping Beauty Paradox</a></p> <p>(This is related to philosophy/epistemology and is more related to subjective probability/beliefs than objective interpretations of it.)</p> <p>Today is Sunday. Sleeping Beauty drinks a powerful sleeping potion and falls asleep.</p> <p>Her attendant tosses a <a href="https://en.wikipedia.org/wiki/Fair_coin">fair coin</a> and records the result.</p> <ul> <li>The coin lands in <strong>Heads</strong>. Beauty is awakened only on <strong>Monday</strong> and interviewed. Her memory is erased and she is again put back to sleep.</li> <li>The coin lands in <strong>Tails</strong>. Beauty is awakened and interviewed on <strong>Monday</strong>. 
Her memory is erased and she's put back to sleep again. On <strong>Tuesday</strong>, she is once again awakened, interviewed and finally put back to sleep. </li> </ul> <p>In essence, the awakenings on Mondays and Tuesdays are indistinguishable to her.</p> <p>The most important question she's asked in the interviews is</p> <blockquote> <p>What is your credence (degree of belief) that the coin landed in heads?</p> </blockquote> <p>Given that Sleeping Beauty is epistemologically rational and is aware of all the rules of the experiment on Sunday, what should be her answer?</p> <p>This problem seems simple on the surface but there are both arguments for the answer $\frac{1}{2}$ and $\frac{1}{3}$ and there is no common consensus among modern epistemologists on this one.</p> <hr> <p><a href="https://brilliant.org/discussions/thread/rationality-revisited-the-ellsberg-paradox/">Ellsberg Paradox</a></p> <p>Consider the following situation:</p> <blockquote> <p>In an urn, you have 90 balls of 3 colors: red, blue and yellow. 30 balls are known to be red. All the other balls are either blue or yellow.</p> <p>There are two lotteries:</p> <ul> <li><strong>Lottery A:</strong> A random ball is chosen. You win a prize if the ball is red.</li> <li><strong>Lottery B:</strong> A random ball is chosen. You win a prize if the ball is blue.</li> </ul> </blockquote> <p>Question: In which lottery would you want to participate?</p> <blockquote> <ul> <li><strong>Lottery X:</strong> A random ball is chosen. You win a prize if the ball is either red or yellow.</li> <li><strong>Lottery Y:</strong> A random ball is chosen. You win a prize if the ball is either blue or yellow.</li> </ul> </blockquote> <p>Question: In which lottery would you want to participate?</p> <p>If you are an average person, you'd choose Lottery A over Lottery B and Lottery Y over Lottery X. </p> <p>However, it can be shown that there is no way to assign probabilities in a way that makes this look rational. 
One way to deal with this is to extend the concept of probability to that of imprecise probabilities.</p>
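<p>The Monty Hall claim at the top of this answer is simple to check empirically. A simulation sketch (function names and the trial count are my own choices; note the host's random tie-break when the player has picked the car does not affect the win rates):</p>

```python
import random

random.seed(1)

def play(switch):
    """One round of the game; returns True if the player wins the car."""
    car = random.randrange(3)
    pick = random.randrange(3)
    # Host opens a goat door that is neither the player's pick nor the car.
    opened = random.choice([d for d in range(3) if d != pick and d != car])
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == car

n = 100_000
stay = sum(play(False) for _ in range(n)) / n
switch = sum(play(True) for _ in range(n)) / n
print(stay, switch)  # roughly 1/3 vs 2/3
```

<p>Switching wins about $2/3$ of the time, staying about $1/3$: it does matter.</p>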
<h1>Birthday Problem</h1> <p>For me this was the first example of how counter-intuitive real-world probability problems are, owing to the inherent underestimation/overestimation involved in mental maps for permutations and combinations (which are an inverse multiplication problem in general), which form the basis for probability calculation. The question is:</p> <blockquote> <p>How many people should be in a room so that the probability of at least two people sharing the same birthday is at least as high as the probability of getting heads in a toss of an unbiased coin (i.e., $0.5$)?</p> </blockquote> <p>This is a good problem for students to hone their skills in estimating permutations and combinations, the basis for computing <em>a priori probability</em>.</p> <p>I still find the number of persons in the answer surreal and hard to believe! (The real answer is $23$.)</p> <p>Pupils should at this juncture be told about quick-and-dirty mental maps for permutation and combination calculations and should be encouraged to inculcate a habit of mental computation, which will help them form intuition about probability. It will also serve them well in taking on the other higher-level problems such as the Monty Hall problem or conditional probability problems like the following:</p> <blockquote> <p>$0.5\%$ of a total population of $10$ million is supposed to be affected by a strange disease. A test has been developed for that disease, and it has a truth ratio of $99\%$ (i.e., it's true $99\%$ of the time). A random person from the population is selected and is found to test positive for the disease. What is the real probability of that person suffering from the strange disease? The real answer here is approximately $33\%$.</p> </blockquote> <p>Here the strange disease can be replaced by any real-world problem (such as HIV infection, a successful trading/betting strategy, or the number of terrorists in a country), and this example can be used to give students a feel for why in such cases (HIV patients or so) there are bound to be many false positives (as, I believe, no real-world test for such cases is $99\%$ true) and why popular opinion is wrong in such cases most of the time.</p> <p>This should be the starting point for introducing some of the work of <strong>Daniel Kahneman</strong> and <strong>Amos Tversky</strong>, as no probability course in modern times can be complete without giving pupils a sense of how fragile one's intuitions and estimates are in estimating probabilities and uncertainties, and how to deal with them. $20\%$ of the course should be devoted to this aspect, and it can be one of the final real-world projects for students.</p>
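<p>Both numbers quoted in this answer ($23$ for the birthday problem and roughly $33\%$ for the disease test) can be reproduced with a few lines of arithmetic:</p>

```python
# Birthday problem: smallest n with P(at least one shared birthday) >= 1/2
p_distinct = 1.0
n = 0
while 1 - p_distinct < 0.5:
    n += 1
    p_distinct *= (365 - (n - 1)) / 365
print(n)  # 23

# Strange disease: P(disease | positive test) by Bayes' rule
prior, accuracy = 0.005, 0.99
posterior = (accuracy * prior) / (accuracy * prior + (1 - accuracy) * (1 - prior))
print(round(posterior, 4))  # 0.3322
```

<p>The second computation makes the false-positive effect concrete: even a $99\%$-accurate test on a $0.5\%$-prevalence condition yields a posterior of only about one in three.</p>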
logic
<p>Sorry, but I don't think I can know, since it's a definition. Please tell me. I don't think that <span class="math-container">$0=\emptyset\,$</span>, since I distinguish between empty set and the value <span class="math-container">$0$</span>. Do all sets, even the empty set, have infinite emptiness e.g. do all sets, including the empty set, contain infinitely many empty sets?</p>
<p>There is only one empty set. It is a subset of every set, including itself. Each set only includes it once as a subset, not an infinite number of times.</p>
<blockquote> <p>Let $A$ and $B$ be sets. If every element $a\in A$ is also an element of $B$, then $A\subseteq B$.</p> </blockquote> <p>Flip that around and you get</p> <blockquote> <p>If $A\not\subseteq B$, then there exists some element $x\in A$ such that $x\notin B$.</p> </blockquote> <p>If $A$ is the empty set, there are no $x$s in $A$, so in particular there are no $x$s in $A$ that are not in $B$. Thus $A\not\subseteq B$ can't be true. Furthermore, note that we haven't used any property of $B$ in the previous line, so this applies to every set $B$, including $B=\emptyset$.</p> <p>(From a wider standpoint, you can think of the empty set as the set for which $x\in \emptyset\implies P$ is true for every statement $P$. For example, every $x$ in the empty set is orange; also, every $x$ in the empty set is not orange. There is no contradiction in either of these statements because there are no $x$'s which could provide counterexamples.)</p>
probability
<p>I am trying to find a way to generate random points uniformly distributed on the surface of an ellipsoid.</p> <p>If it was a sphere there is a neat way of doing it: Generate three $N(0,1)$ variables $\{x_1,x_2,x_3\}$, calculate the distance from the origin</p> <p>$$d=\sqrt{x_1^2+x_2^2+x_3^2}$$</p> <p>and calculate the point </p> <p>$$\mathbf{y}=(x_1,x_2,x_3)/d.$$</p> <p>It can then be shown that the points $\mathbf{y}$ lie on the surface of the sphere and are uniformly distributed on the sphere surface, and the argument that proves it is just one word, "isotropy". No prefered direction.</p> <p>Suppose now we have an ellipsoid</p> <p>$$\frac{x_1^2}{a^2}+\frac{x_2^2}{b^2}+\frac{x_3^2}{c^2}=1$$</p> <p>How about generating three $N(0,1)$ variables as above, calculate</p> <p>$$d=\sqrt{\frac{x_1^2}{a^2}+\frac{x_2^2}{b^2}+\frac{x_3^2}{c^2}}$$</p> <p>and then using $\mathbf{y}=(x_1,x_2,x_3)/d$ as above. That way we get points guaranteed on the surface of the ellipsoid but will they be uniformly distributed? How can we check that?</p> <p>Any help greatly appreciated, thanks.</p> <p>PS I am looking for a solution without accepting/rejecting points, which is kind of trivial.</p> <p>EDIT:</p> <p>Switching to polar coordinates, the surface element is $dS=F(\theta,\phi)\ d\theta\ d\phi$ where $F$ is expressed as $$\frac{1}{4} \sqrt{r^2 \left(16 \sin ^2(\theta ) \left(a^2 \sin ^2(\phi )+b^2 \cos ^2(\phi )+c^2\right)+16 \cos ^2(\theta ) \left(a^2 \cos ^2(\phi )+b^2 \sin ^2(\phi )\right)-r^2 \left(a^2-b^2\right)^2 \sin ^2(2 \theta ) \sin ^2(2 \phi )\right)}$$</p>
<p>One way to proceed is to generate a point uniformly on the sphere, apply the mapping $f : (x,y,z) \mapsto (x'=ax,y'=by,z'=cz)$ and then correct the distortion created by the map by discarding the point randomly with some probability $p(x,y,z)$ (after discarding you restart the whole thing).</p> <p>When we apply $f$, a small area $dS$ around some point $P(x,y,z)$ will become a small area $dS'$ around $P'(x',y',z')$, and we need to compute the multiplicative factor $\mu_P = dS'/dS$.</p> <p>I need two tangent vectors around $P(x,y,z)$, so I will pick $v_1 = (dx = y, dy = -x, dz = 0)$ and $v_2 = (dx = z,dy = 0, dz=-x)$</p> <p>We have $dx' = adx, dy'=bdy, dz'=cdz$ ; $Tf(v_1) = (dx' = adx = ay = ay'/b, dy' = bdy = -bx = -bx'/a,dz' = 0)$, and similarly $Tf(v_2) = (dx' = az'/c,dy' = 0,dz' = -cx'/a)$</p> <p>(we can do a sanity check and compute $x'dx'/a^2+ y'dy'/b^2+z'dz'/c^2 = 0$ in both cases)</p> <p>Now, $dS = v_1 \wedge v_2 = (y e_x - xe_y) \wedge (ze_x-xe_z) = x(y e_z \wedge e_x + ze_x \wedge e_y + x e_y \wedge e_z)$ so $|| dS || = |x|\sqrt{x^2+y^2+z^2} = |x|$</p> <p>And $dS' = (Tf \wedge Tf)(dS) = ((ay'/b) e_x - (bx'/a) e_y) \wedge ((az'/c) e_x-(cx'/a) e_z) = (x'/a)((acy'/b) e_z \wedge e_x + (abz'/c) e_x \wedge e_y + (bcx'/a) e_y \wedge e_z)$</p> <p>And finally $\mu_{(x,y,z)} = ||dS'||/||dS|| = \sqrt{(acy)^2 + (abz)^2 + (bcx)^2}$.</p> <p>It's quick to check that when $(x,y,z)$ is on the sphere the extrema of this expression can only happen at one of the six "poles" ($(0,0,\pm 1), \ldots$). 
If we suppose $0 &lt; a &lt; b &lt; c$, its minimum is at $(0,0,\pm 1)$ (where the area is multiplied by $ab$) and the maximum is at $(\pm 1,0,0)$ (where the area is multiplied by $\mu_{\max} = bc$)</p> <p>The smaller the multiplication factor is, the more we have to remove points, so after choosing a point $(x,y,z)$ uniformly on the sphere and applying $f$, we have to keep the point $(x',y',z')$ with probability $\mu_{(x,y,z)}/\mu_{\max}$.</p> <p>Doing so should give you points uniformly distributed on the ellipsoid. </p>
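<p>The scheme above is straightforward to implement; here is a minimal sketch (assuming the ordering $a \le b \le c$, so that $\mu_{\max} = bc$; the function name is my own):</p>

```python
import numpy as np

def ellipsoid_point(a, b, c, rng=None):
    """One uniform point on (x/a)^2 + (y/b)^2 + (z/c)^2 = 1.

    Sketch of the accept/reject scheme above, assuming a <= b <= c
    so that mu_max = b*c.
    """
    if rng is None:
        rng = np.random.default_rng()
    mu_max = b * c
    while True:
        v = rng.normal(size=3)
        x, y, z = v / np.linalg.norm(v)              # uniform on the unit sphere
        mu = np.sqrt((a*c*y)**2 + (a*b*z)**2 + (b*c*x)**2)
        if rng.uniform() < mu / mu_max:              # keep with probability mu/mu_max
            return np.array([a * x, b * y, c * z])

p = ellipsoid_point(1.0, 2.0, 3.0)
print((p[0]/1.0)**2 + (p[1]/2.0)**2 + (p[2]/3.0)**2)  # ~1.0, i.e. on the surface
```

<p>Since the acceptance probability is at least $ab/bc = a/c$, the loop terminates quickly unless the ellipsoid is extremely elongated.</p>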
<p>Here's the code; this works in ANY dimension:</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

dim = 2
r = 1
a = 10
b = 4
A = np.array([[1/a**2, 0], [0, 1/b**2]])
L = np.linalg.cholesky(A).T
x = np.random.normal(0, 1, (200, dim))
z = np.linalg.norm(x, axis=1)
z = z.reshape(-1, 1).repeat(x.shape[1], axis=1)
y = x/z * r
y_new = np.linalg.inv(L) @ y.T
plt.figure(figsize=(15, 15))
plt.plot(y[:, 0], y[:, 1], linestyle=&quot;&quot;, marker='o', markersize=2)
plt.plot(y_new.T[:, 0], y_new.T[:, 1], linestyle=&quot;&quot;, marker='o', markersize=5)
plt.gca().set_aspect(1)
plt.grid()
</code></pre> <p><a href="https://i.sstatic.net/013FD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/013FD.png" alt="enter image description here" /></a></p> <p>And here's the theory: <a href="https://freakonometrics.hypotheses.org/files/2015/11/distribution_workshop.pdf" rel="nofollow noreferrer">https://freakonometrics.hypotheses.org/files/2015/11/distribution_workshop.pdf</a></p>
logic
<p>Then otherwise the sentence <em>"It is not possible for someone to find a counter-example"</em> would be a proof.</p> <p>I mean, are there some hypotheses that are false but the counter-example is somewhere we cannot find even if we have super computers.</p> <p>Sorry, if this is a silly question. </p> <p>Thanks a lot.</p>
<p>A standard example of this is <a href="http://en.wikipedia.org/wiki/Halting_problem">the halting problem</a>, which states essentially:</p> <blockquote> <p>There is no program which can always determine whether another program will eventually terminate. </p> </blockquote> <p>Thus there must be some program which does not terminate, but no proof that it does not terminate exists. Otherwise for any program, we could run the program and at the same time search for a proof that it does not terminate, and either the program would eventually terminate or we would find such a proof. </p> <p>To match the phrasing of your question, this means that the statement:</p> <blockquote> <p>If a program does not terminate, there is some proof of this fact.</p> </blockquote> <p>is false, but no counterexample can be found.</p>
<p>I actually like this one:</p> <p>There are uncountably many real numbers. However, given that any specification of a specific real number (be it by digits, by an algorithm, or even a description of the number in plain English) is ultimately given by a finite string of finitely many symbols, there are only countably many descriptions of real numbers.</p> <p>A straightforward formalization (but not the only possible, nor the most general one) of that idea is to model the descriptions as natural numbers (think e.g. of storing the description in a file, and then interpreting the file as a natural number), and then having a function from the natural numbers (that is, the descriptions) to subsets of the real numbers (namely the set of real numbers which fit the description). A description which uniquely describes a real number would, in this model, be a natural number which maps to a one-element subset of the real numbers; the single element of that subset is the number described. Since there are only countably many natural numbers (by definition), they can only map to at most countably many one-element subsets, whose union therefore only contains countably many real numbers. Since there are uncountably many real numbers, there must be uncountably many numbers not in this set.</p> <p>Therefore in this formalization, for any given mapping, almost every real number cannot be individually specified by any description. Therefore there exist uncountably many counterexamples to the claim "you can uniquely specify any real number".</p> <p>Of course I cannot give a counterexample, because to give a counterexample, I'd have to specify an individual real number violating the claim, but if I could specify it, it would not violate the claim and therefore not be a counterexample.</p> <p>Note that in the first version, I omitted that possible formalization.
As I learned from the comments and especially <a href="https://mathoverflow.net/a/44129/1682">this MathOverflow post</a> linked from them, in the original generality the argument is wrong.</p>
differentiation
<p>I happened to ponder about the differentiation of the following function: $$f(x)=x^{2x^{3x^{4x^{5x^{6x^{7x^{.{^{.^{.}}}}}}}}}}$$ Now, while I do know how to manipulate power towers to a certain extent, and know the general formula to differentiate $g(x)$ wrt $x$, where $$g(x)=f(x)^{f(x)^{f(x)^{f(x)^{f(x)^{f(x)^{f(x)^{.{^{.^{.}}}}}}}}}}$$ I'm still unable to figure out as to how I can adequately manipulate the function to differentiate it within its domain of convergence.</p> <hr> <p>General formula: $$g'(x)=\frac{g^2(x)f'(x)}{f(x)\left[1-g(x)\ln(f(x))\right]}$$</p>
<p>This is not a complete answer, but I thought it could be an approach for this kind of question, so I decided to post it. Let's define</p> <p><span class="math-container">$$f_n(x)=nx^{(n+1)x^{(n+2)x^{(n+3)x^{(n+4)x^{(n+5)x^{(n+6)x^{.{^{.^{.}}}}}}}}}}$$</span></p> <p>Your function can be found by <span class="math-container">$$f_1(x)=f(x)$$</span> and you are looking for <span class="math-container">$$\frac {\partial f_n(x)}{\partial x} \bigg|_{n=1}=f'(x)$$</span></p> <p>We can easily see a relation for <span class="math-container">$f_n(x)$</span> <span class="math-container">$$f_n(x)=nx^{f_{n+1}(x)}$$</span> <span class="math-container">$$\ln(f_n(x))=\ln(n) +f_{n+1}(x)\ln(x)$$</span></p> <p><span class="math-container">$$\frac {\partial \ln(f_n(x))}{\partial x}=\frac {\partial (f_{n+1}(x)\ln(x))}{\partial x}$$</span></p> <p><span class="math-container">$$\frac{\partial f_n(x)}{\partial x} = f_n(x) \frac{\partial f_{n+1}(x)}{\partial x}\ln(x)+\frac {f_{n+1}(x)f_n(x)}{x}$$</span></p> <p>Now let's put <span class="math-container">$n=1,2,3,\ldots$</span></p> <p><span class="math-container">$$\frac{\partial f_1(x)}{\partial x}= \frac{\partial f_{2}(x)}{\partial x}f_1(x)\ln(x)+\frac {f_{2}(x)f_1(x)}{x}$$</span></p> <p><span class="math-container">$$\frac{\partial f_2(x)}{\partial x} = \frac{\partial f_{3}(x)}{\partial x}f_2(x)\ln(x)+\frac {f_{3}(x)f_2(x)}{x}$$</span></p> <p><span class="math-container">$$\frac{\partial f_3(x)}{\partial x} = \frac{\partial f_{4}(x)}{\partial x}f_3(x)\ln(x)+\frac {f_{4}(x)f_3(x)}{x}$$</span></p> <p><span class="math-container">$$.$$</span> <span class="math-container">$$.$$</span> <span class="math-container">$$.$$</span></p> <p><span class="math-container">$$\frac{\partial f_1(x)}{\partial x} = U(x)+\frac {f_{1}(x)f_2(x)}{x}+\frac {f_{1}(x)f_2(x)f_3(x) \ln(x) }{x}+\frac {f_{1}(x)f_2(x)f_3(x) f_4(x) \ln^2(x)
}{x}+.... \tag{1}$$</span> <p>where <span class="math-container">$$U(x)=\lim\limits_{ n\to \infty } \frac{\partial f_n(x)}{\partial x} \ln^{n-1}(x)\prod_{k=1}^{n-1} f_k(x) $$</span></p> <p>Finally we can express the derivative as</p> <p><span class="math-container">$$f'(x)=\frac{\partial f_1(x)}{\partial x} = U(x)+ \sum_{k=1}^{\infty} \frac {\ln^{k-1}(x)f_1(x)\prod_{n=2}^{k+1} f_n(x)}{x}$$</span></p> <p>Note: I have not yet determined what <span class="math-container">$U(x)$</span> is. I expect that <span class="math-container">$U(x)$</span> will vanish, but I have not proved it yet. Maybe someone can help with some numerical values to check whether <span class="math-container">$U(x)=0$</span> or not. Thanks a lot for contributions and advice.</p> <p>An observation: I noticed a similar pattern between the general formula <span class="math-container">$g'(x)$</span> in the question and the <span class="math-container">$f'(x)$</span> that I wrote in (1). <span class="math-container">$$g(x)=h(x)^{h(x)^{h(x)^{h(x)^{h(x)^{h(x)^{h(x)^{.{^{.^{.}}}}}}}}}}$$</span> <span class="math-container">$$g'(x)=\frac{g^2(x)h'(x)}{h(x)\left[1-g(x)\ln(h(x))\right]}$$</span></p> <p>It can be rewritten as</p> <p><span class="math-container">$$g'(x)=\frac{g^2(x)h'(x)}{h(x)}(1+g(x)\ln(h(x))+g^2(x)\ln^2(h(x))+g^3(x)\ln^3(h(x))+....)$$</span></p> <p>If <span class="math-container">$h(x)=x$</span> then we can obtain</p> <p><span class="math-container">$$g'(x)=\frac{g^2(x)}{x\left[1-g(x)\ln(x)\right]}$$</span> <span class="math-container">$$g'(x)=\frac{g^2(x)}{x}(1+g(x)\ln(x)+g^2(x)\ln^2(x)+g^3(x)\ln^3(x)+....)$$</span> <span class="math-container">$$g'(x)=\frac{g^2(x)}{x}+\frac{g^3(x)}{x}\ln(x)+\frac{g^4(x)}{x}\ln^2(x)+....$$</span></p> <p>It has similarities with my formula for <span class="math-container">$f'(x)$</span> that I wrote above. This result supports my idea that <span class="math-container">$U(x)$</span> may vanish.</p>
<p><strong>Partial Answer / Observation</strong><br> So it is clear that this function can be defined for $f(0)$ and $f(1)$, and is definitely defined only within some portion of this domain... accordingly, I chose to evaluate what happens with $0.5$ as you increase the tower height. Numerically, it appears that two limits are approached, one for even heights and one for odd heights (this is the nature of power towers evaluated between $0$ and $1$ for every one I have ever calculated... there is probably a proof of this somewhere, at least for any sequence of heights that reaches some limit). Regardless, it appears that the values simply alternate between these two limits, and I would say that this should hold as you continue to increase the tower height. Now, this isn't a proof in any sense (even for the value $0.5$), as numerical analysis alone won't cut this, but I think it provides some interesting insight, and is the only thing that got me any sort of result after hours of looking into this. (Note that I am not referencing iterated functions with the superscript, but the height of the tower.) As is pointed out in the comments, this function is probably only defined at a finite number of points... One could probably find a way to show that a point such as $0.5$ diverges by analyzing the property that causes this dual limit (I am fairly certain it is due to the domain $[0,1]$), but I am not sure that such observations would be sufficient to prove this across the entire domain. $$f^1(0.5) \approx x = 0.5$$ $$f^2(0.5)\approx x^{2x} = 0.5$$ $$f^3(0.5)\approx x^{2x^{3x}} = 0.612...$$ $$f^4(0.5)\approx x^{2x^{3x^{4x}}} = 0.439...$$ $$f^5(0.5)\approx 0.679...$$ $$f^6(0.5)\approx 0.374...$$ $$f^9(0.5)\approx 0.804...$$ $$f^{10}(0.5)\approx 0.305...$$ $$f^{14}(0.5)\approx 0.3040559...$$ $$f^{18}(0.5)\approx 0.3040557...$$ $$f^{15}(0.5)\approx 0.81045144...$$ $$f^{19}(0.5)\approx 0.81045145968867...$$ $$f^{23}(0.5)\approx 0.81045145968869...$$</p>
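<p>These values are easy to reproduce; a small sketch that evaluates the finite tower $x^{2x^{3x^{\cdots^{nx}}}}$ from the top down (function and variable names here are my own):</p>

```python
def tower(x, height):
    """Evaluate the finite tower x^(2x^(3x^(...^(height*x)))) top-down."""
    t = height * x
    for k in range(height - 1, 0, -1):
        t = k * x ** t
    return t

for h in (3, 4, 14, 15, 18, 19):
    print(h, tower(0.5, h))
# Heights 14 and 18 agree to about 7 digits, as do 15 and 19:
# the even and odd subsequences approach two different limits.
```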
logic
<p>I can never figure out (because the English language is imprecise) which part of "if and only if" means which implication.</p> <p>($A$ if and only if $B$) = $(A \iff B)$, but is the following correct:</p> <p>($A$ only if $B$) = $(A \implies B)$</p> <p>($A$ if $B$) = $(A \impliedby B)$</p> <p>The trouble is, one never comes into contact with "$A$ if $B$" or "$A$ only if $B$" using those constructions in everyday common speech.</p>
<p>This example may be more clear, because apples ⊂ fruits is more obvious:</p> <p>"This is an apple if it is a fruit" is false.<br> "This is an apple only if it is a fruit" is true.<br> "This is a fruit if it is an apple" is true.<br> "This is a fruit only if it is an apple" is false. </p> <p>A is an apple => A is a fruit</p>
<p>The <a href="https://www.webpages.uidaho.edu/~morourke/404-phil/Summer-99/Handouts/Philosophical/Material-Biconditional.htm" rel="noreferrer">explanation in this link</a> clearly and briefly differentiates the meanings and the inference direction of "if" and "only if". In summary, <span class="math-container">$A \text{ if and only if } B$</span> is mathematically interpreted as follows:</p> <ul> <li>'<span class="math-container">$A \text{ if } B$</span>' : '<span class="math-container">$A \Leftarrow B$</span>'</li> <li>'<span class="math-container">$A \text{ only if } B$</span>' : '<span class="math-container">$\neg A \Leftarrow \neg B$</span>' which is the contrapositive (hence, logical equivalent) of <span class="math-container">$A \Rightarrow B$</span></li> </ul>
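<p>Since "$A$ only if $B$" is $A \Rightarrow B$ while "$A$ if $B$" is $B \Rightarrow A$, a quick truth table makes the asymmetry concrete (a small sketch):</p>

```python
def implies(p, q):
    """Material implication: p => q fails only when p is true and q is false."""
    return (not p) or q

# "A only if B" is A => B; "A if B" is B => A.
print("A     B     A only if B  A if B")
for A in (True, False):
    for B in (True, False):
        print(f"{A!s:<5} {B!s:<5} {implies(A, B)!s:<12} {implies(B, A)!s}")
```

<p>The two implication columns differ exactly on the rows where $A$ and $B$ disagree, which is why "if" and "only if" are not interchangeable.</p>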
logic
<p>I am learning about the difference between booleans and classical logics in Coq, and why logical propositions are sort of a superset of booleans:</p> <p><a href="https://stackoverflow.com/questions/31554453/why-are-logical-connectives-and-booleans-separate-in-coq/31568076#31568076">Why are logical connectives and booleans separate in Coq?</a></p> <p>The answer there explains the main reason for this distinction is because of the law of excluded middle, and how in intuitionistic logic, you can't prove this:</p> <pre><code>Definition excluded_middle := forall P : Prop, P \/ ~ P. </code></pre> <p>I have seen this stated in many different books and articles, but seem to have missed an explanation on why you can't prove it.</p> <p>In a way that a person with some programming/math experience can understand (not an expert in math), what is the reason why you can't prove this in intuitionistic logic?</p>
<p>If you could prove the law of the excluded middle, then it would be true in all systems satisfying intuitionistic axioms. So we just need to find some model of intuitionistic logic for which the law of the excluded middle fails.</p> <p>Let $X$ be a topological space. A proposition will consist of an open set, where $X$ is "true", and $\emptyset$ is "false". Union will be "or", intersection will be "and".</p> <p>We define $U\implies V$ to be the interior of $U^c \cup V$, and $\neg U = (U\implies \emptyset)$ is just the interior of $U^c$.</p> <p>The system I just described obeys the axioms of intuitionistic logic. But the law of the excluded middle says that $U\cup \neg U = X$, which fails for any open set whose complement is not open. So this is false for $X=\mathbb{R}$, for example, or even for the two-point <a href="https://en.wikipedia.org/wiki/Sierpi%C5%84ski_space">Sierpinski space</a>.</p> <p>In fact, intuitionistic logic can generally be interpreted in this geometric kind of way, though topological spaces are not general enough and we need things called elementary topoi to model intuitionistic logic.</p>
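<p>This failure can be checked mechanically for the Sierpinski space; a small sketch following the definitions above (open sets as Python <code>frozenset</code>s):</p>

```python
# Heyting algebra of open sets of the Sierpinski space {0, 1},
# whose open sets are {}, {1}, and {0, 1}.
X = frozenset({0, 1})
opens = [frozenset(), frozenset({1}), X]

def interior(s):
    """Largest open set contained in s."""
    return max((o for o in opens if o <= s), key=len)

def neg(u):
    """Intuitionistic negation: the interior of the complement."""
    return interior(X - u)

U = frozenset({1})
print(neg(U) == frozenset())  # True: the interior of {0} is empty
print(U | neg(U) == X)        # False -- excluded middle fails for this U
print(neg(neg(U)) == U)       # False -- double negation fails too
```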
<p>Not knowing how much mathematics experience you've had, it can be difficult to explain the difference in viewpoint. </p> <p>Intuitionism, and constructive logic in general, are very much inspired by the fact that we as mathematicians do not know the answers to all mathematical questions. Classical logic is based on the idea that all mathematical statements are either true or false, even if we do not know the truth value. Constructive logic does not focus so much on whether statements are "really" true or false, in that sense - it focuses on whether we <em>know</em> a statement is true, or <em>know</em> it is false. Other answers have described this as a focus on proof rather than truth. </p> <p>So, before someone working in constructive mathematics can assert a statement, she needs to have a concrete justification for the truth of the statement. Arguments such as proof by contradiction are not permitted. The connective "or", in particular, has a different meaning. To assert a statement of the form "<span class="math-container">$A$</span> or <span class="math-container">$B$</span>", a constructive mathematician must know <em>which one</em> of the two options holds.</p> <p>One kind of example of how excluded middle fails in this framework was called a "weak counterexample" by Brouwer. Here is one (I don't know if it is due to Brouwer). Consider the number <span class="math-container">$\pi$</span>. In fact, even in classical mathematics we do not currently know whether there are infinitely many occurrences of the digit <span class="math-container">$5$</span> in the decimal expansion of <span class="math-container">$\pi$</span>. Let <span class="math-container">$A$</span> be the statement "there are infinitely many occurrences of <span class="math-container">$5$</span> in the decimal expansion of <span class="math-container">$\pi$</span>".
Then someone working in constructive mathematics cannot assert "<span class="math-container">$A$</span> or not <span class="math-container">$A$</span>", because she does not know <em>which</em> of the two options holds. Of course "<span class="math-container">$A$</span> or not <span class="math-container">$A$</span>" is an instance of the law of the excluded middle, and is valid in classical mathematics. </p> <p>The key point is that the first difference between constructive and classical logic deals with what it takes to assert a particular sentence. Mathematical terms such as "or", "there exists", and "not" take on a different meaning under the higher burden of evidence that is required to assert statements in constructive mathematics. </p>
probability
<p>The PDF describes the probability of a random variable to take on a given value:</p> <p>$f(x)=P(X=x)$</p> <p>My question is whether this value can become greater than $1$?</p> <p>Quote from wikipedia:</p> <p>"Unlike a probability, a probability density function can take on values greater than one; for example, the uniform distribution on the interval $[0, \frac12]$ has probability density $f(x) = 2$ for $0 \leq x \leq \frac12$ and $f(x) = 0$ elsewhere."</p> <p>This wasn't clear to me, unfortunately. The question has been asked/answered here before, yet used the same example. Would anyone be able to explain it in a simple manner (using a real-life example, etc)?</p> <p>Original question:</p> <p>"$X$ is a continuous random variable with probability density function $f$. Answer with either True or False.</p> <ul> <li>$f(x)$ can never exceed $1$."</li> </ul> <p>Thank you!</p> <p>EDIT: Resolved.</p>
<p>Discrete and continuous random variables are not defined the same way. The human mind is used to dealing with discrete random variables (example: for a fair coin, $-1$ if the coin shows tails, $+1$ if it shows heads; we have $f(-1)=f(1)=\frac12$ and $f(x)=0$ elsewhere). As long as the probabilities of the results of a discrete random variable sum up to 1, it's ok, so each of them has to be at most 1.</p> <p>For a continuous random variable, the necessary condition is that $\int_{\mathbb{R}} f(x)dx=1$. Since an integral behaves differently than a sum, it's possible that $f(x)&gt;1$ on a small interval (but then the length of this interval cannot exceed 1).</p> <p>The definition of $\mathbb{P}(X=x)$ is not $\mathbb{P}(X=x)=f(x)$ but rather $\mathbb{P}(X=x)=\mathbb{P}(X\leq x)-\mathbb{P}(X&lt;x)=F(x)-F(x^-)$. For a discrete random variable, $F(x^-)\not = F(x)$, so $\mathbb{P}(X=x)&gt;0$. However, in the case of a continuous random variable, $F(x^-)=F(x)$ (by the definition of continuity), so $\mathbb{P}(X=x)=0$. This can be seen in the fact that the probability of choosing exactly $\frac12$ when picking a number between 0 and 1 is zero.</p> <p>In summary, for continuous random variables $\mathbb{P}(X=x)\not= f(x)$.</p>
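<p>A quick numerical check of the Uniform$(0, \frac12)$ example quoted in the question: the density exceeds 1 at individual points, yet it still integrates to 1 (a minimal sketch):</p>

```python
def f(x):
    """Density of the uniform distribution on [0, 0.5]."""
    return 2.0 if 0.0 <= x <= 0.5 else 0.0

dx = 1e-4
area = sum(f(i * dx) * dx for i in range(-10000, 10001))  # Riemann sum over [-1, 1]

print(f(0.25))         # 2.0 -- the density value exceeds 1 ...
print(round(area, 3))  # 1.0 -- ... yet the total probability is still 1
```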
<p>Your conception of <a href="https://en.wikipedia.org/wiki/Probability_density_function" rel="noreferrer">probability density function</a> is wrong.</p> <p>You are mixing it up with <a href="https://en.wikipedia.org/wiki/Probability_mass_function" rel="noreferrer">probability mass function</a>.</p> <p>If <span class="math-container">$f$</span> is a PDF then <span class="math-container">$f(x)$</span> is not a probability and does not have the restriction that it cannot exceed <span class="math-container">$1$</span>.</p>
linear-algebra
<p>Can <span class="math-container">$\det(A + B)$</span> be expressed in terms of <span class="math-container">$\det(A), \det(B), n$</span>, where <span class="math-container">$A,B$</span> are <span class="math-container">$n\times n$</span> matrices?</p> <p>I made the edit to allow <span class="math-container">$n$</span> to be factored in.</p>
<p>When <span class="math-container">$n=2$</span>, and supposing <span class="math-container">$A$</span> is invertible, you can easily show that</p> <p><span class="math-container">$\det(A+B)=\det A+\det B+\det A\,\cdot \mathrm{Tr}(A^{-1}B)$</span>.</p> <hr /> <p>Let me give a general method to find the determinant of the sum of two matrices <span class="math-container">$A,B$</span> with <span class="math-container">$A$</span> invertible and symmetric (The following result might also apply to the non-symmetric case. I might verify that later...). I am a physicist, so I will use the index notation, <span class="math-container">$A_{ij}$</span> and <span class="math-container">$B_{ij}$</span>, with <span class="math-container">$i,j=1,2,\cdots,n$</span>. Let <span class="math-container">$A^{ij}$</span> denote the inverse of <span class="math-container">$A_{ij}$</span> such that <span class="math-container">$A^{il}A_{lj}=\delta^i_j=A_{jl}A^{li}$</span>. We can use <span class="math-container">$A_{ij}$</span> to lower the indices, and its inverse to raise them. For example <span class="math-container">$A^{il}B_{lj}=B^i{}_j$</span>. Here and in the following, the Einstein summation rule is assumed.</p> <p>Let <span class="math-container">$\epsilon^{i_1\cdots i_n}$</span> be the totally antisymmetric tensor, with <span class="math-container">$\epsilon^{1\cdots n}=1$</span>. Define a new tensor <span class="math-container">$\tilde\epsilon^{i_1\cdots i_n}=\epsilon^{i_1\cdots i_n}/\sqrt{|\det A|}$</span>. We can use <span class="math-container">$A_{ij}$</span> to lower the indices of <span class="math-container">$\tilde\epsilon^{i_1\cdots i_n}$</span>, and define <span class="math-container">$\tilde\epsilon_{i_1\cdots i_n}=A_{i_1j_1}\cdots A_{i_nj_n}\tilde\epsilon^{j_1\cdots j_n}$</span>.
Then there is a useful property: <span class="math-container">$$ \tilde\epsilon_{i_1\cdots i_kl_{k+1}\cdots l_n}\tilde\epsilon^{j_1\cdots j_kl_{k+1}\cdots l_n}=(-1)^s k!(n-k)!\delta^{[j_1}_{i_1}\cdots\delta^{j_k]}_{i_k}, $$</span> where the square brackets <span class="math-container">$[]$</span> imply the antisymmetrization of the indices enclosed by them. <span class="math-container">$s$</span> is the number of negative elements of <span class="math-container">$A_{ij}$</span> after it has been diagonalized.</p> <p>So now the determinant of <span class="math-container">$A+B$</span> can be obtained in the following way <span class="math-container">$$ \det(A+B)=\frac{1}{n!}\epsilon^{i_1\cdots i_n}\epsilon^{j_1\cdots j_n}(A+B)_{i_1j_1}\cdots(A+B)_{i_nj_n} $$</span> <span class="math-container">$$ =\frac{(-1)^s\det A}{n!}\tilde\epsilon^{i_1\cdots i_n}\tilde\epsilon^{j_1\cdots j_n}\sum_{k=0}^n C_n^kA_{i_1j_1}\cdots A_{i_kj_k}B_{i_{k+1}j_{k+1}}\cdots B_{i_nj_n} $$</span> <span class="math-container">$$ =\frac{(-1)^s\det A}{n!}\sum_{k=0}^nC_n^k\tilde\epsilon^{i_1\cdots i_ki_{k+1}\cdots i_n}\tilde\epsilon^{j_1\cdots j_k}{}_{i_{k+1}\cdots i_n}B_{i_{k+1}j_{k+1}}\cdots B_{i_nj_n} $$</span> <span class="math-container">$$ =\frac{(-1)^s\det A}{n!}\sum_{k=0}^nC_n^k\tilde\epsilon^{i_1\cdots i_ki_{k+1}\cdots i_n}\tilde\epsilon_{j_1\cdots j_ki_{k+1}\cdots i_n}B_{i_{k+1}}{}^{j_{k+1}}\cdots B_{i_n}{}^{j_n} $$</span> <span class="math-container">$$ =\frac{\det A}{n!}\sum_{k=0}^nC_n^kk!(n-k)!B_{i_{k+1}}{}^{[i_{k+1}}\cdots B_{i_n}{}^{i_n]} $$</span> <span class="math-container">$$ =\det A\sum_{k=0}^nB_{i_{k+1}}{}^{[i_{k+1}}\cdots B_{i_n}{}^{i_n]} $$</span> <span class="math-container">$$ =\det A+\det A\sum_{k=1}^{n-1}B_{i_{k+1}}{}^{[i_{k+1}}\cdots B_{i_n}{}^{i_n]}+\det B. $$</span></p> <p>This reproduces the result for <span class="math-container">$n=2$</span>.
An interesting result for physicists is when <span class="math-container">$n=4$</span>,</p> <p><span class="math-container">\begin{split} \det(A+B)=&amp;\det A+\det A\cdot\text{Tr}(A^{-1}B)+\frac{\det A}{2}\{[\text{Tr}(A^{-1}B)]^2-\text{Tr}(BA^{-1}BA^{-1})\}\\ &amp;+\frac{\det A}{6}\{[\text{Tr}(BA^{-1})]^3-3\text{Tr}(BA^{-1})\text{Tr}(BA^{-1}BA^{-1})+2\text{Tr}(BA^{-1}BA^{-1}BA^{-1})\}\\ &amp;+\det B. \end{split}</span></p>
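<p>The $n=2$ identity at the top is easy to verify numerically (a quick sketch; any invertible $A$ will do, symmetric or not):</p>

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.normal(size=(2, 2)) + 3.0 * np.eye(2)  # a generic, safely invertible A
B = rng.normal(size=(2, 2))

# det(A + B) = det A + det B + det A * Tr(A^{-1} B)   (n = 2)
lhs = np.linalg.det(A + B)
rhs = (np.linalg.det(A) + np.linalg.det(B)
       + np.linalg.det(A) * np.trace(np.linalg.inv(A) @ B))
print(abs(lhs - rhs) < 1e-9)  # True
```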
<p>When $n\ge2$, the answer is no. To illustrate, consider $$ A=I_n,\quad B_1=\pmatrix{1&amp;1\\ 0&amp;0}\oplus0,\quad B_2=\pmatrix{1&amp;1\\ 1&amp;1}\oplus0. $$ If $\det(A+B)=f\left(\det(A),\det(B),n\right)$ for some function $f$, you should get $\det(A+B_1)=f(1,0,n)=\det(A+B_2)$. But in fact, $\det(A+B_1)=2\ne3=\det(A+B_2)$ over any field.</p>
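<p>The two determinants in this counterexample can be confirmed directly:</p>

```python
import numpy as np

A = np.eye(3)
B1 = np.array([[1., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
B2 = np.array([[1., 1., 0.], [1., 1., 0.], [0., 0., 0.]])

# det(A) is the same in both cases and det(B1) = det(B2) = 0,
# yet the sums have different determinants, so no formula
# det(A+B) = f(det A, det B, n) can exist.
print(round(np.linalg.det(A + B1)))  # 2
print(round(np.linalg.det(A + B2)))  # 3
```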
combinatorics
<p>It is known that if $f_n = \sum\limits_{i=0}^{n} g_i \binom{n}{i}$ for all $0 \le n \le m$, then $g_n = \sum_{i=0}^{n} (-1)^{i+n} f_i \binom{n}{i}$ for $0 \le n \le m$. This sort of inversion is called <em>binomial inversion</em>, for obvious reasons.</p> <p>Many nice elegant proofs exist (my favorite uses exponential generating functions of $f_n$ and $g_n$), and also many applications (such as proving that if a polynomial $f$ of degree $n$ assumes integer values on $0,1,\cdots,n$, then $f(i) \in \mathbb{Z}$ for all integers $i$).</p> <p>What I'm interested in is the following:</p> <ol> <li>A nice inclusion-exclusion proof - similar to interpreting Möbius inversion as inclusion-exclusion.</li> <li>If $f_0 = g_0 = 0$ and $i | f_i$ for $i&gt;1$, we get, by binomial inversion, that $i | g_i$ (reason: $i\binom{n}{i} = n\binom{n-1}{i-1}$). Is there a nice combinatorial interpretation of this phenomenon? Nice applications?</li> <li>Are there any more famous/cool inversions (I know of Möbius inversion, binomial inversion, and the discrete derivative inversion $a_i \to a_{i+1}-a_{i}$)?</li> </ol>
<p>These kinds of inverse relations are equivalent to orthogonal relations between sets of numbers.</p> <p>Suppose you have two triangular sets of numbers $a_{n,k}$ and $b_{n,k}$, each defined for $k = 0, 1, \ldots, n$, such that $$\sum_{k=m}^n b_{n,k} a_{k,m} = \delta_{nm}.$$ Then $a_{n,k}$ and $b_{n,k}$ are orthogonal, and they have the inverse property you are asking for in (3); i.e., if $f_n = \sum_{k=0}^n a_{n,k} g_k$ then $g_n = \sum_{k=0}^n b_{n,k} f_k$, and vice versa.</p> <p><em>Proof</em>: $$\sum_{k=0}^n b_{n,k} f_k = \sum_{k=0}^n b_{n,k} \sum_{m=0}^k a_{k,m} g_m = \sum_{m=0}^n \left(\sum_{k=m}^n b_{n,k} a_{k,m}\right) g_m = g_n.$$ </p> <p>Thus binomial inversion follows from the "<a href="https://math.stackexchange.com/questions/4175/beautiful-identity-sum-k-mn-1k-m-binomkm-binomnk-delta">beautiful identity</a>" $$\sum_{k=m}^n (-1)^{k+m} \binom{n}{k} \binom{k}{m} = \delta_{nm}.$$</p> <p>Since the orthogonal relation and the inverse relation are equivalent, perhaps the <a href="https://math.stackexchange.com/questions/4175/beautiful-identity-sum-k-mn-1k-m-binomkm-binomnk-delta/4187#4187">proof of this identity given by Aryabhata</a> or <a href="https://math.stackexchange.com/questions/4175/beautiful-identity-sum-k-mn-1k-m-binomkm-binomnk-delta/4189#4189">the proof by Yuval Filmus</a> can be considered a combinatorial proof of the inverse relation you describe for binomial coefficients.</p> <p><HR></p> <p><strong>Other examples</strong></p> <p>The <a href="http://en.wikipedia.org/wiki/Lah_number" rel="noreferrer">Lah numbers</a> $L(n,k)$ satisfy $$\sum_{k=m}^n (-1)^{k+m} L(n,k) L(k,m) = \delta_{nm},$$ and so, like the binomial coefficients, are (up to sign) self-orthogonal and have the inverse relation $$f_n = \sum_{k=0}^n L(n,k) g_k \Leftrightarrow g_n = \sum_{k=0}^n (-1)^{k+n} L(n,k) f_k.$$</p> <p>The two kinds of <a href="http://en.wikipedia.org/wiki/Stirling_number" rel="noreferrer">Stirling numbers</a>, $\left[ n \atop k \right]$ and $\left\{ n \atop 
k \right\}$, are orthogonal, satisfying $$\sum_{k=m}^n (-1)^{k+m} \left[ n \atop k \right] \left\{ k \atop m \right\} = \delta_{nm}$$ and $$\sum_{k=m}^n (-1)^{k+m} \left\{ n \atop k \right\} \left[ k \atop m \right] = \delta_{nm}.$$ Thus they satisfy the inverse relation $$f_n = \sum_{k=0}^n \left[ n \atop k \right] g_k \Leftrightarrow g_n = \sum_{k=0}^n (-1)^{k+n} \left\{ n \atop k \right\} f_k.$$</p> <p>John Riordan wrote a paper "Inverse Relations and Combinatorial Identities" (<em>American Mathematical Monthly</em> 71 (5), May 1964, pp. 485--498) and devoted two of the six chapters of his text <em>Combinatorial Identities</em> to these kinds of inverse relations. For example, he shows how inverse relations can be derived from <a href="http://en.wikipedia.org/wiki/Chebyshev_polynomials" rel="noreferrer">Chebyshev polynomials</a> (since they are orthogonal) and <a href="http://en.wikipedia.org/wiki/Legendre_polynomials" rel="noreferrer">Legendre polynomials</a> (since they are also orthogonal). See the article or the book for many more examples. </p> <p><HR></p> <p><strong>Additional comments and consequences</strong></p> <p>The proof using the orthogonal relation can also be applied with respect to the upper index to obtain inverse relations based on the upper index rather than the lower. Thus, for example, we also have (provided, of course, that the sums converge) $$f_n = \sum_{k=n}^\infty \binom{k}{n} g_k \Leftrightarrow g_n = \sum_{k=n}^\infty (-1)^{k+n} \binom{k}{n} f_k,$$ as well as the same kind of thing for the Lah numbers, Stirling numbers, and the other examples.</p> <p>In addition, these orthogonal relations mean that matrices consisting of orthogonal numbers are inverses. Thus, for example, if $A$ and $B$ are $n \times n$ matrices such that $A_{ij} = \binom{i}{j}$ and $B_{ij} = (-1)^{i+j} \binom{i}{j}$ then the orthogonal relationship implies that $AB = I$. 
This, of course, means that $BA = I$ as well, and so every orthogonal relationship goes both ways; i.e., $$\sum_{k=m}^n b_{n,k} a_{k,m} = \delta_{nm} \Leftrightarrow \sum_{k=m}^n a_{n,k} b_{k,m} = \delta_{nm}.$$ For more on inverse matrices consisting of combinatorial numbers, see <a href="https://math.stackexchange.com/questions/42018/stirling-numbers-and-inverse-matrices/49496#49496">my answer</a> to the question "Stirling numbers and inverse matrices."</p>
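<p>As a quick check, both the orthogonality identity and the binomial inverse pair it yields can be verified numerically. A small Python sketch (the test sequence <code>g</code> is arbitrary, chosen only for illustration):</p>

```python
from math import comb

def delta(n, m):
    return 1 if n == m else 0

N = 8

# Orthogonality: sum_{k=m}^{n} (-1)^(k+m) C(n,k) C(k,m) = delta(n,m)
for n in range(N):
    for m in range(n + 1):
        s = sum((-1) ** (k + m) * comb(n, k) * comb(k, m) for k in range(m, n + 1))
        assert s == delta(n, m)

# The inverse pair it implies:
# f_n = sum_k C(n,k) g_k   <=>   g_n = sum_k (-1)^(k+n) C(n,k) f_k
g = [3, 1, 4, 1, 5, 9, 2, 6]  # arbitrary test sequence
f = [sum(comb(n, k) * g[k] for k in range(n + 1)) for n in range(N)]
g_back = [sum((-1) ** (k + n) * comb(n, k) * f[k] for k in range(n + 1))
          for n in range(N)]
assert g_back == g
```

<p>The same loop, with <code>comb</code> replaced by Lah or Stirling numbers and the signs adjusted as above, checks the other inverse pairs.</p>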
<p>This identity, Möbius inversion, and the "fundamental theorem of discrete calculus" are all special cases of <a href="http://en.wikipedia.org/wiki/Incidence_algebra" rel="noreferrer">Möbius inversion on a poset</a>.</p> <ul> <li>In this identity the relevant poset is the poset of finite subsets of $\mathbb{N}$.</li> <li>In classical Möbius inversion it's the poset $\mathbb{N}$ under division.</li> <li>In discrete calculus it's the poset $\mathbb{Z}_{\ge 0}$ under the usual order.</li> </ul> <p>This beautiful theory was first described by Gian-Carlo Rota in <a href="http://www.maths.ed.ac.uk/~aar/papers/rota1.pdf" rel="noreferrer">On the Foundations of Combinatorial Theory I</a>, and elaborated on in many other papers. If you like exponential generating functions, you will love <a href="http://math.mit.edu/~rstan/pubs/pubfiles/10.pdf" rel="noreferrer">On the Foundations of Combinatorial Theory VI</a>. These papers were quite a revelation to me a few years ago and I hope you will enjoy them as well!</p> <p>A modern reference is Stanley's <a href="http://www-math.mit.edu/~rstan/ec/" rel="noreferrer">Enumerative Combinatorics</a> Vol. I, Chapter 3 (incidentally, Stanley was a student of Rota's). Here is the appropriate specialization of the general result.</p> <p><strong>Proposition:</strong> Let $g : \mathcal{P}_{\text{fin}}(\mathbb{N}) \to R$ be a function from the poset of finite subsets of $\mathbb{N}$ to a commutative ring $R$. Let</p> <p>$$f(T) = \sum_{S \subseteq T} g(S).$$</p> <p>Then</p> <p>$$g(T) = \sum_{S \subseteq T} (-1)^{|S| - |T|} f(S)$$</p> <p>where $|S|$ denotes the size of $S$. (The special case you state is for when $g$ only depends on $|S|$.)</p> <p><em>Proof.</em> This is actually a slightly more general form of inclusion-exclusion. 
It suffices to exchange the order of summation in</p> <p>$$\sum_{S \subseteq T} (-1)^{|S| - |T|} \sum_{R \subseteq S} g(R) = \sum_{R \subseteq T} g(R) \sum_{R \subseteq S \subseteq T} (-1)^{|S| - |T|}$$</p> <p>and observe that the resulting coefficient is equal to $(1 - 1)^{|T| - |R|}$, which is equal to $0$ unless $R = T$ in which case it is equal to $1$. Of course I can mumble my way through the standard inclusion-exclusion proof as well: start with $f(T)$, which has too many terms, so subtract the terms $f(T - \{ t \})$, but we've subtracted the other terms too many times, so add back the terms $f(T - \{ t_1, t_2 \})$... </p> <hr> <p>A fundamental observation about Möbius functions in general is that they are multiplicative under product of posets. The posets I've named above are all products of <a href="http://en.wikipedia.org/wiki/Chain_(order_theory)#Chains" rel="noreferrer">chains</a>, so one can quickly compute their Möbius functions simply by computing the Möbius function of a chain. </p>
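<p>The proposition can also be checked by brute force on a small ground set. A Python sketch (the function <code>g</code> below is an arbitrary assignment of values to subsets, chosen only for testing):</p>

```python
from itertools import combinations

def subsets(T):
    """All subsets of T, as frozensets."""
    T = tuple(T)
    return [frozenset(c) for r in range(len(T) + 1) for c in combinations(T, r)]

ground = frozenset({1, 2, 3, 4})

# An arbitrary g : subsets -> integers
g = {S: 7 * len(S) + sum(S) ** 2 + 1 for S in subsets(ground)}

# f(T) = sum over S subseteq T of g(S)
f = {T: sum(g[S] for S in subsets(T)) for T in subsets(ground)}

# Moebius inversion: g(T) = sum over S subseteq T of (-1)^(|T|-|S|) f(S)
g_back = {T: sum((-1) ** (len(T) - len(S)) * f[S] for S in subsets(T))
          for T in subsets(ground)}

assert g_back == g
```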
probability
<p>What's the difference between <em>probability density function</em> and <em>probability distribution function</em>?</p>
<p><strong>Distribution Function</strong></p> <ol> <li>The terms "probability distribution function" and "probability function" are ambiguous. They may refer to: <ul> <li>Probability density function (PDF)</li> <li>Cumulative distribution function (CDF)</li> <li>Probability mass function (PMF) (per Wikipedia)</li> </ul></li> <li>What is unambiguous: <ul> <li>Discrete case: Probability Mass Function (PMF)</li> <li>Continuous case: Probability Density Function (PDF)</li> <li>Both cases: Cumulative distribution function (CDF)</li> </ul></li> <li>The probability at a specific value <span class="math-container">$x$</span>, <span class="math-container">$P(X = x)$</span>, is obtained directly from: <ul> <li>the PMF in the discrete case</li> <li>no single function value in the continuous case: the PDF gives a density, not a probability (<span class="math-container">$P(X = x) = 0$</span> for a continuous variable; probabilities come from integrating the PDF)</li> </ul></li> <li>The probability of values less than <span class="math-container">$x$</span>, <span class="math-container">$P(X &lt; x)$</span>, or within a range from <span class="math-container">$a$</span> to <span class="math-container">$b$</span>, <span class="math-container">$P(a &lt; X &lt; b)$</span>, is obtained directly from: <ul> <li>the CDF, in both the discrete and continuous cases</li> </ul></li> <li>"Distribution function" usually refers to the CDF, also called the Cumulative Frequency Function (see <a href="http://mathworld.wolfram.com/DistributionFunction.html" rel="noreferrer">this</a>)</li> </ol> <p><strong>In terms of Acquisition and Plot Generation Method</strong></p> <ol> <li>Collected data appear discrete when: <ul> <li>The quantity measured is naturally discrete, such as the outcomes of dice rolls or counts of people.</li> <li>The measurement is digitized machine data, which has no intermediate values between quantized levels because of the sampling process.</li> <li>In the latter case, the higher the resolution, the closer the measurement is to an analog/continuous variable.</li> </ul></li> <li>To generate a PMF from discrete data: <ul> <li>Plot a histogram of the data over all the <span class="math-container">$x$</span>'s; the <span class="math-container">$y$</span>-axis is the frequency (count) at each <span class="math-container">$x$</span>.</li> <li>Scale the <span class="math-container">$y$</span>-axis by dividing by the total number of data points (the sample size) <span class="math-container">$\longrightarrow$</span> this is the PMF.</li> </ul></li> <li>To generate a PDF from discrete / continuous data: <ul> <li>Find a continuous equation that models the collected data, say the normal distribution.</li> <li>Estimate the parameters required by the equation from the collected data. For example, the parameters of the normal distribution are the mean and standard deviation; compute them from the data.</li> <li>Plot the equation with these parameters over continuous <span class="math-container">$x$</span> values <span class="math-container">$\longrightarrow$</span> this is the PDF.</li> </ul></li> <li>To generate a CDF: <ul> <li>In the discrete case, accumulate the PMF values at each discrete <span class="math-container">$x$</span> and all values below it, repeating for every <span class="math-container">$x$</span>. The resulting plot increases monotonically to <span class="math-container">$1$</span> at the last <span class="math-container">$x$</span> <span class="math-container">$\longrightarrow$</span> this is the discrete CDF.</li> <li>In the continuous case, integrate the PDF up to <span class="math-container">$x$</span>; the result is a continuous CDF.</li> </ul></li> </ol> <p><strong>Why PMF, PDF and CDF?</strong></p> <ol> <li>The PMF is preferred when <ul> <li>the probability at each individual <span class="math-container">$x$</span> value is of interest, which makes sense for discrete data - e.g., the probability of getting a certain number on a dice roll.</li> </ul></li> <li>The PDF is preferred when <ul> <li>we wish to model collected data with a continuous function, using a few parameters such as the mean to estimate the population distribution.</li> </ul></li> <li>The CDF is preferred when <ul> <li>the cumulative probability over a range is of interest.</li> <li>Especially for continuous data, the CDF is more meaningful than the PDF - e.g., the probability that a student's height is less than <span class="math-container">$170$</span> cm (CDF) is more informative than the density at exactly <span class="math-container">$170$</span> cm (PDF).</li> </ul></li> </ol>
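<p>The histogram-then-scale construction of a PMF and CDF described above takes only a few lines of Python (the dice-roll data below are invented for illustration):</p>

```python
from collections import Counter

# Simulated discrete data: 60 hypothetical dice rolls
rolls = [1, 3, 3, 6, 2, 5] * 10

# Step 1: histogram (frequency at every x)
freq = Counter(rolls)

# Step 2: scale by the data size -> PMF
n = len(rolls)
pmf = {x: c / n for x, c in sorted(freq.items())}
assert abs(sum(pmf.values()) - 1) < 1e-12

# CDF: accumulate the PMF values up to and including each x
cdf, running = {}, 0.0
for x in sorted(pmf):
    running += pmf[x]
    cdf[x] = running

# The CDF is monotone and reaches 1 at the last x
assert abs(cdf[max(cdf)] - 1) < 1e-12
```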
<p>The relation between the probability density function <span class="math-container">$f$</span> and the cumulative distribution function <span class="math-container">$F$</span> is...</p> <ul> <li><p>if <span class="math-container">$f$</span> is discrete: <span class="math-container">$$ F(k) = \sum_{i \le k} f(i) $$</span></p></li> <li><p>if <span class="math-container">$f$</span> is continuous: <span class="math-container">$$ F(x) = \int_{y \le x} f(y)\,dy $$</span></p></li> </ul>
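<p>Both relations are easy to check numerically; a Python sketch using a fair die for the discrete case and the standard normal for the continuous one (the trapezoidal rule is only a rough stand-in for the integral, and both distributions are chosen arbitrarily):</p>

```python
import math

# Discrete case: F(k) = sum of f(i) for i <= k, with f the PMF of a fair die
pmf = {i: 1 / 6 for i in range(1, 7)}
def F(k):
    return sum(p for i, p in pmf.items() if i <= k)
assert abs(F(4) - 4 / 6) < 1e-12

# Continuous case: F(x) = integral of f(y) dy for y <= x, standard normal PDF
def pdf(y):
    return math.exp(-y * y / 2) / math.sqrt(2 * math.pi)

def cdf_numeric(x, lo=-10.0, steps=20000):
    h = (x - lo) / steps  # trapezoidal rule on [lo, x]
    return h * (pdf(lo) / 2 + pdf(x) / 2 + sum(pdf(lo + i * h) for i in range(1, steps)))

# Compare with the closed form Phi(x) = (1 + erf(x / sqrt(2))) / 2
for x in (-1.0, 0.0, 1.5):
    exact = (1 + math.erf(x / math.sqrt(2))) / 2
    assert abs(cdf_numeric(x) - exact) < 1e-6
```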
logic
<p>Diaconescu's Theorem states that AC implies the Law of Excluded Middle. Essentially the proof goes by defining $A = \{x \in \{0,1\} : x = 0 \lor p\}$, and $B = \{x \in \{0,1\} : x = 1 \lor p\}$, for a given proposition $p$, and defining a choice function $f : \{A,B\} \rightarrow \{0,1\}$. We then show that no matter what values $f$ takes, either $p$ or $\lnot p$ holds. The full proof can be found <a href="https://proofwiki.org/wiki/Axiom_of_Choice_Implies_Law_of_Excluded_Middle" rel="noreferrer">here</a>.</p> <p>What confuses me is why the Axiom of Choice is needed, as we are only choosing from two sets, and finite choice is provable in standard ZF. In addition, I've heard that countable choice does not imply excluded middle, but clearly we are not choosing from more than countably many sets in this proof.</p>
<p>Choice from finitely many nonempty sets is indeed provable in standard ZF, but standard ZF is based on classical logic, which includes the law of the excluded middle. </p> <p>Also, in Diaconescu's proof, the set $\{A,B\}$ cannot be asserted to be a two-element set. It admits a surjective map from, say, $\{0,1\}$, the standard two-element set, but that map might not be bijective since $A$ might equal $B$. If we had classical logic (excluded middle) available, then we could say that $\{A,B\}$ has either two elements or just one element, and it's finite in either case. But without the law of the excluded middle, we can't say that. </p> <p>In intuitionistic set theory, there are several inequivalent notions of finiteness. The most popular (as far as I can tell) amounts to "surjective image of $\{0,1,\dots,n-1\}$ for some natural number $n$." With this definition, $\{A,B\}$ is finite, but you can't prove choice from finitely many inhabited sets. The second most popular notion of finite amounts to "bijective image of $\{0,1,\dots,n-1\}$ for some natural number $n$." With this definition, you can (unless I'm overlooking something) prove choice from finitely many inhabited sets (just as in classical ZF), but $\{A,B\}$ can't be proved to be finite.</p> <p>(Side comment: I wrote "inhabited" where you might have expected "nonempty". The reason is that "nonempty" taken literally means that the set is not empty, i.e., that it's not the case that the set has no elements. That's the double-negation of "it has an element". Intuitionistically, the double-negation of a statement is weaker than the statement itself. In the discussion of choice, I wanted the sets to actually have elements, not merely to not not have elements. "Inhabited" has become standard terminology for "having at least one element" in contexts where "nonempty" doesn't do the job.)</p>
<p>The background theory is an important issue. It is not quite right to say that the law of the excluded middle is provable from the axiom of choice. There are two important details: </p> <ol> <li><p>We are talking about provability in constructive set theories that include separation axioms for undecidable formulas.</p></li> <li><p>We are talking about the axiom of choice as expressed in set theory.</p></li> </ol> <p>There are other constructive systems, such as constructive type theories, where the relevant form of the axiom of choice does not imply the law of the excluded middle. The implication in Diaconescu's theorem is particular to constructive set theory.</p> <p>Separately, as described in the Stanford Encyclopedia article at <a href="http://plato.stanford.edu/entries/set-theory-constructive/index.html#ConChoPri" rel="noreferrer">http://plato.stanford.edu/entries/set-theory-constructive/index.html#ConChoPri</a> , the axiom of countable choice does not imply the law of excluded middle even in constructive set theory.</p> <p>If we try to apply the axiom of countable choice to the set in Diaconescu's theorem, nothing odd happens, because it is easy to write a choice function if we view $\{A, B\}$ as a sequence of two sets. The trick in Diaconescu's theorem is that if we apply choice to the family $\{A, B\}$, the choice function has to be extensional, and the theorem leverages the extensionality. Replacing the family with a sequence of two sets $A$ and $B$ makes it easier to write a formula for an extensional choice function.</p>
logic
<p>The competition ended 6 June 2014, 22:00 GMT. The winner is Bryan.</p> <p>Well done!</p> <hr> <p>When I was rereading the proof of the drinker's paradox (see <a href="https://math.stackexchange.com/questions/807092/proof-of-drinker-paradox">Proof of Drinker paradox</a>) I realised that $\exists x \forall y (D(x) \to D(y)) $ is also a theorem.</p> <p>I managed to prove it in 21 lines (see below), but I am not sure whether this is the shortest possible proof, and I would like to know: are there shorter versions?</p> <p>That is the reason for this <strong>competition</strong> (see also below).</p> <p><strong>Competition rules</strong></p> <ul> <li><p><strong>This is a small competition: I will give the person who comes up with the shortest proof (shorter than the proof below) a reward of 200 (unmarked) reputation points.</strong> (I can only add a bounty in a couple of days from now, but you may already start thinking about how to prove it, and you may already post your answer.)</p></li> <li><p>The length of a proof is measured by the number of numbered lines in the proof (see the proof below as an example: the first formula is at line number 2, and end-subproof lines are not counted).</p></li> <li><p>If there is more than one person with the shortest answer, the reward goes to the first poster (measured by the last substantial edit of the proof).</p></li> <li><p>The answer must be a complete Fitch-style natural deduction proof,<br> typed in an answer box as below. Proofs formatted in $\LaTeX$ are even better, but I just don't know how to do that.</p></li> <li><p>The proof system is the Fitch-style natural deduction system described in "Language, Proof and Logic" by Barwise et al. (LPL, <a href="http://ggweb.stanford.edu/lpl/" rel="noreferrer">http://ggweb.stanford.edu/lpl/</a>), except that the General Conditional proof rule is <strong>not</strong> allowed.
(I just don't like the rule, and I'm not sure whether using it would shorten the proof either.)</p></li> <li><p>Maybe I will also give an extra prize for the most beautiful answer, or the shortest proof in another style or method of natural deduction. It depends a bit on the answers I get; others may also set and give bounties as they see fit.</p></li> <li><p>You may give more than one answer, answers in more than one proof method etc., but do post them as separate answers, and be aware that only the proof that uses the Fitch method described above is competing, and other participants can peek at your answers.</p></li> <li><ul> <li>GOOD LUCK, and may the best one win.</li> </ul></li> </ul> <p>Proof system:</p> <p>For participants who don't have the LPL book, the only allowed inference rules are the rules I used in the proof:</p> <ul> <li>the $\bot$ (falsum, contradiction) introduction and elimination rules</li> <li>the $\lnot \lnot$ elimination rule</li> <li>the $\lnot$ , $\to$ , $\exists$ and $\forall$ introduction rules</li> </ul> <p>(also see the proof below for examples of how to use them.)</p> <p>The following rules are also allowable:</p> <ul> <li>the $\land$ , $\lor$ and $\leftrightarrow$ introduction and elimination rules</li> <li>the $\to$ , $\exists$ and $\forall$ elimination rules</li> <li>the reiteration rule</li> </ul> <p>(This is just to be complete; I don't think they are useful in this proof.)</p> <p>Notice:</p> <ul> <li><p>line 1 is empty (in the Fitch software that accompanies the book, line 1 is for premisses only, and there are no premisses in this proof)</p></li> <li><p>the end-subproof lines have no line number.
(and they don't count in the all-important number of lines)</p></li> <li><p>the General Conditional proof rule is <strong>not</strong> allowed</p></li> <li>there is no $\lnot$ elimination rule (only a double negation elimination rule)</li> </ul> <p>My proof (an example of how to use the rules, and of how your answer should be formatted):</p> <pre><code>1  | .
   |-----------------
2  | |____________ ~Ex Vy (D(x) -&gt; D(y))     New Subproof for ~ Introduction
3  | | |_________a                           variable for Universal Introduction
4  | | | |________ D(b)                      New Subproof for -&gt; Introduction
5  | | | | |______ ~D(a)                     New Subproof for ~ Introduction
6  | | | | | |___c                           variable for Universal Introduction
7  | | | | | | |__ D(a)                      New Subproof for -&gt; Introduction
8  | | | | | | |   _|_                       5,7 _|_ Introduction
9  | | | | | | |   D(c)                      8 _|_ Elimination
.. | | | | | |  &lt;------------------------- end subproof
10 | | | | | | D(a) -&gt; D(c)                  7-9 -&gt; Introduction
.. | | | | |  &lt;--------------------------- end subproof
11 | | | | | Vy(D(a) -&gt; D(y))                6-10 Universal Introduction
12 | | | | | Ex Vy (D(x) -&gt; D(y))            11 Existential Introduction
13 | | | | | _|_                             2,12 _|_ Introduction
.. | | | |  &lt;----------------------------- end subproof
14 | | | | ~~D(a)                            5-13 ~ Introduction
15 | | | | D(a)                              14 ~~ Elimination
.. | | |  &lt;------------------------------- end subproof
16 | | | D(b) -&gt; D(a)                        4-15 -&gt; Introduction
.. | |  &lt;--------------------------------- end subproof
17 | | Vy(D(b) -&gt; D(y))                      3-16 Universal Introduction
18 | | ExVy(D(x) -&gt; D(y))                    17 Existential Introduction
19 | | _|_                                   2,18 _|_ Introduction
.. |  &lt;----------------------------------- end subproof
20 | ~~Ex Vy (D(x) -&gt; D(y))                  2-19 ~ Introduction
21 | Ex Vy (D(x) -&gt; D(y))                    20 ~~ Elimination
</code></pre> <p><strong>Allowable question and other meta stuff</strong></p> <p>I asked on <a href="http://meta.math.stackexchange.com/questions/13855/are-small-competitions-allowed">http://meta.math.stackexchange.com/questions/13855/are-small-competitions-allowed</a> whether this is an allowable question. As of 28 May 2014 I have not been given any answer saying that such questions are not allowable or outside the scope of this forum, nor any other negative remark; I did not get any comment that this was an improper question, or about the circumstances under which such competitions are allowed.</p> <p>(The most negative comment was that it should be unmarked reputation points :) and that comment was later removed...)</p> <p>If you disagree with this, please add that as an answer to the question on the meta math.stackexchange site (or vote such an answer up).</p> <p>If, on the other hand, you do like competitions, also show your support on the meta site (by adding it as an answer, or voting such an answer up).</p> <p>PS Don't think all of this is simple logic ("I will make a shorter proof in no time"); the question is rather hard and difficult - but prove that I am wrong :)</p>
<p>Well, here's my solution even though I've used 4 'unallowed' rules. </p> <ol> <li>$\neg\exists x\forall y(Dx\implies Dy)\quad$ assumption for $\neg$ introduction</li> <li>$\forall x\neg\forall y(Dx\implies Dy)\quad$ DeMorgan's for quantifiers</li> <li>$\forall x\exists y \neg(Dx\implies Dy)\quad$ DeMorgan's for quantifiers</li> <li>$\forall x\exists y\neg(\neg Dx\vee Dy)\quad$ Conditional exchange</li> <li>$\forall x\exists y(\neg\neg Dx\wedge \neg Dy)\quad$ DeMorgan's</li> <li>$\forall x \exists y(Dx\wedge\neg Dy)\quad$ Double negation</li> <li>$\exists y(Da\wedge\neg Dy)\quad$ Universal instantiation</li> <li>$Da\wedge \neg Db\quad$ Existential instantiation (flag $b$)</li> <li>$\neg Db\quad$ Simplification</li> <li>$\exists y(Db\wedge \neg Dy)\quad$ Universal instantiation</li> <li>$Db\wedge \neg Dc\quad$ Existential instantiation (flag $c$)</li> <li>$Db\quad$ Simplification</li> <li>$\bot\quad$ Contradiction 9, 12</li> <li>$\neg\neg\exists x\forall y(Dx\implies Dy)\quad$ Proof by contradiction 1-13</li> <li>$\exists x\forall y(Dx\implies Dy)\quad$ Double negation</li> </ol> <p>I guess there are 16 lines in all if we include the empty premise line. I do have a couple comments though.</p> <p>First, I highly doubt this proof can be attained by the last applied rule being Existential Generalization. This statement is a logical truth precisely because we can change our $x$ depending on whether the domain has or has not a member such that $Dx$ holds. If $D$ holds for all members of the domain, any choice of $x$ will make the statement true. If $D$ does not hold for some member of the domain, choosing that as our $x$ will make the statement true. Saying that we can reach the conclusion by one final use of E.G. means that the member $b$ which appears in the precedent somehow handled both cases while the <em>only</em> thing we are supposed to know about $b$ is that it is a member of the domain. 
We still don't know anything of the domain.</p> <p>With that said, after I got a copy of your book and read the rules, it appears the only way we can get to the conclusion is by an application of double negation. And the only way to get there is a proof by contradiction. Thus I believe your lines 1, 2, 19, 20, and 21 belong to the minimal solution. So far, I haven't found anything simpler for the middle.</p>
<p>Although outside the scope of what is asked, it is perhaps amusing to note that a proof in a tableau system is markedly shorter, and writes itself (without even having to split the tree and consider cases):</p> <p>$$\neg\exists x\forall y(Dx\implies Dy)$$ $$\forall x\neg\forall y(Dx\implies Dy)$$ $$\neg\forall y(Da\implies Dy)$$ $$\exists y\neg(Da\implies Dy)$$ $$\neg(Da\implies Db)$$ $$\neg Db$$ $$\neg\forall y(Db\implies Dy)$$ $$\exists y\neg(Db\implies Dy)$$ $$\neg(Db\implies Dc)$$ $$Db$$ $$\bigstar$$</p>
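<p>Also outside the scope of the competition, one can confirm semantically that the sentence is valid: a brute-force Python sketch that evaluates $\exists x \forall y (D(x) \to D(y))$ in every classical interpretation over small nonempty domains (sizes 1 through 6, chosen arbitrarily; this checks truth in finite models, it is not a derivation in any of the proof systems above):</p>

```python
from itertools import product

# Ex Vy (D(x) -> D(y)) evaluated classically in every interpretation:
# a domain {0, ..., size-1} and an arbitrary subset D of it.
for size in range(1, 7):
    domain = range(size)
    for D in product([False, True], repeat=size):  # all predicates D
        holds = any(all((not D[x]) or D[y] for y in domain) for x in domain)
        assert holds  # the sentence is true in every such model
```

<p>The two cases of the informal argument are visible here: if <code>D</code> is all <code>True</code>, any witness <code>x</code> works; otherwise any <code>x</code> with <code>D[x]</code> false works vacuously.</p>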
logic
<p><span class="math-container">$x,y$</span> are perpendicular if and only if <span class="math-container">$x\cdot y=0$</span>. Now, <span class="math-container">$||x+y||^2=(x+y)\cdot (x+y)=(x\cdot x)+(x\cdot y)+(y\cdot x)+(y\cdot y)$</span>. The middle two terms are zero if and only if <span class="math-container">$x,y$</span> are perpendicular. So, <span class="math-container">$||x+y||^2=(x\cdot x)+(y\cdot y)=||x||^2+||y||^2$</span> if and only if <span class="math-container">$x,y$</span> are perpendicular. ( I copied <a href="https://math.stackexchange.com/questions/11509/how-to-prove-the-pythagoras-theorem-using-vectors">this</a>)</p> <p>I think this argument is circular because the property</p> <blockquote> <p><span class="math-container">$x\cdot y=0 $</span> implies <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are perpendicular</p> </blockquote> <p>comes from the Pythagorean theorem. </p> <p>Oh, it just came to mind that the property could be derived from the law of cosines. The law of cosines can be proved without the Pythagorean theorem, right, so the proof isn't circular?</p> <p><strong>Another question</strong>: If the property comes from the Pythagorean theorem or cosine law, then how does the dot product give a condition for orthogonality for higher dimensions?</p> <p><strong><em>Edit</em></strong>: The following quote by Poincaré helped me regarding the question:</p> <blockquote> <p>Mathematics is the art of giving the same name to different things.</p> </blockquote>
<p>I think the question mixes two quite different concepts together: <em>proof</em> and <em>motivation.</em></p> <p>The <em>motivation</em> for defining the inner product, orthogonality, and length of vectors in <span class="math-container">$\mathbb R^n$</span> in the "usual" way (that is, <span class="math-container">$\langle x,y\rangle = x_1y_1 + x_2y_2 + \cdots + x_ny_n$</span>) is presumably at least in part that by doing this we will be able to establish a property of <span class="math-container">$\mathbb R^n$</span> corresponding to the familiar Pythagorean Theorem from synthetic plane geometry. The motivation is, indeed, circular in that we get the Pythagorean Theorem as one of the results of something we set up because we wanted the Pythagorean Theorem.</p> <p>But that's how many axiomatic systems are born. Someone wants to be able to work with mathematical objects in a certain way, so they come up with axioms and definitions that provide mathematical objects they can work with the way they wanted to. I would be surprised to learn that the classical axioms of Euclidean geometry (from which the original Pythagorean Theorem derives) were <em>not</em> created for the reason that they produced the kind of geometry that Euclid's contemporaries wanted to work with.</p> <p><em>Proof,</em> on the other hand, consists of starting with a given set of axioms and definitions (with emphasis on the word "given," that is, they have no prior basis other than that we want to believe them), and showing that a certain result necessarily follows from those axioms and definitions without relying on any other facts that did not derive from those axioms and definitions. In the proof of the "Pythagorean Theorem" in <span class="math-container">$\mathbb R^n,$</span> after the point at which the axioms were given, did any step of the proof rely on anything other than the stated axioms and definitions?</p> <p>The answer to that question would depend on how the axioms were stated. 
If there is an axiom that says <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are orthogonal if <span class="math-container">$\langle x,y\rangle = 0,$</span> then this fact does not <em>logically</em> "come from" the Pythagorean Theorem; it comes from the axioms.</p>
<p>Let's try this on a different vector space. Here's a nice one: Let <span class="math-container">$\mathscr L = C([0,1])$</span> be the set of all real continuous functions defined on the interval <span class="math-container">$I = [0,1]$</span>. If <span class="math-container">$f, g \in \mathscr L$</span> and <span class="math-container">$a,b \in \Bbb R$</span>, then <span class="math-container">$h(x) := af(x) + bg(x)$</span> defines another continuous function on <span class="math-container">$I$</span>, so <span class="math-container">$\scr L$</span> is indeed a vector space over <span class="math-container">$\Bbb R$</span>.</p> <p>Now I arbitrarily define <span class="math-container">$f \cdot g := \int_I f(x)g(x)dx$</span>, and quickly note that this operation is commutative and <span class="math-container">$(af + bg)\cdot h = a(f\cdot h) + b(g \cdot h)$</span>, and that <span class="math-container">$f \cdot f \ge 0$</span> and <span class="math-container">$f\cdot f = 0$</span> if and only if <span class="math-container">$f$</span> is the constant function <span class="math-container">$0$</span>.</p> <p>Thus we see that <span class="math-container">$f\cdot g$</span> acts as a dot product on <span class="math-container">$\scr L$</span>, and so we can define <span class="math-container">$$\|f\| := \sqrt{f\cdot f}$$</span> and call <span class="math-container">$\|f - g\|$</span> the "distance from <span class="math-container">$f$</span> to <span class="math-container">$g$</span>".</p> <p>By the Cauchy-Schwarz inequality <span class="math-container">$$\left(\int fg\right)^2 \le \int f^2\int g^2$$</span> and therefore <span class="math-container">$$|f\cdot g| \le \|f\|\|g\|$$</span></p> <p>Therefore, we can arbitrarily define for non-zero <span class="math-container">$f, g$</span> that <span class="math-container">$$\theta = \cos^{-1}\left(\frac{f\cdot g}{\|f\|\|g\|}\right)$$</span> and call <span class="math-container">$\theta$</span> the "angle between <span class="math-container">$f$</span> and <span class="math-container">$g$</span>", and define that <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are "perpendicular" when <span class="math-container">$\theta = \pi/2$</span>. Equivalently, <span class="math-container">$f$</span> is perpendicular to <span class="math-container">$g$</span> exactly when <span class="math-container">$f \cdot g = 0$</span>.</p> <p>And now we see that a Pythagorean-like theorem holds for <span class="math-container">$\scr L$</span>: <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are perpendicular exactly when <span class="math-container">$\|f - g\|^2 = \|f\|^2 + \|g\|^2$</span>.</p> <hr> <p>The point of this exercise? That the vector Pythagorean theorem is something different from the familiar Pythagorean theorem of geometry. The vector space <span class="math-container">$\scr L$</span> is not a plane, or space, or even <span class="math-container">$n$</span> dimensional space for any <span class="math-container">$n$</span>. It is in fact an infinite-dimensional vector space. I did not rely on geometric intuition to develop this. At no point did the geometric Pythagorean theorem come into play.</p> <p>I did choose the definitions to follow a familiar pattern, but the point here is that I (or actually far more gifted mathematicians whose footsteps I'm aping) made those definitions by choice. They were not forced on me by the Pythagorean theorem, but rather were chosen by me exactly so that this vector Pythagorean theorem would be true.</p> <p>By making these definitions, I can now <em>start</em> applying those old geometric intuitions to this weird set of functions that beforehand was something too esoteric to handle.</p> <p>The vector Pythagorean theorem isn't a way to prove that old geometric result. 
It is a way to show that the old geometric result also has application in this entirely new and different arena of vector spaces.</p>
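<p>The construction above is easy to test numerically. A Python sketch using sin(2πx) and cos(2πx), which are orthogonal in this inner product on [0, 1] (the trapezoidal sum merely approximates the integral):</p>

```python
import math

def inner(f, g, steps=100000):
    # trapezoidal approximation of the integral of f(x) g(x) over [0, 1]
    h = 1.0 / steps
    total = (f(0) * g(0) + f(1) * g(1)) / 2
    total += sum(f(i * h) * g(i * h) for i in range(1, steps))
    return total * h

f = lambda x: math.sin(2 * math.pi * x)
g = lambda x: math.cos(2 * math.pi * x)

# f and g are "perpendicular": their inner product vanishes
assert abs(inner(f, g)) < 1e-8

norm2 = lambda u: inner(u, u)
diff = lambda x: f(x) - g(x)

# The Pythagorean-like theorem: ||f - g||^2 = ||f||^2 + ||g||^2
assert abs(norm2(diff) - (norm2(f) + norm2(g))) < 1e-8
```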
geometry
<p>Yesterday I was tutoring a student, and the following question arose (number 76):</p> <p><img src="https://i.sstatic.net/7nw6J.png" alt="enter image description here"></p> <p>My student believed the answer to be J: square. I reasoned with her that the information given only allows us to conclude that the <em>top and bottom sides are parallel</em>, and that the <em>bottom and right sides are congruent</em>. That's not enough to be "more" than a trapezoid, so it's a trapezoid. </p> <p>Now fast-forward to today. She is publicly humiliated in front of the class, and my reputation is called into question once the student claims to have been guided by a tutor. The teacher insists that the answer is J: square ("obviously"... no further proof was given).</p> <ol> <li><p>Who is right? Is there a chance that we're both right? </p></li> <li><p>How should I handle this? I told my student that I would email the teacher, but I'm not sure that's a good idea. </p></li> </ol>
<p>Clearly the figure is a trapezoid because you can construct an infinite number of quadrilaterals consistent with the given constraints so long as the vertical height $h$ obeys $0 &lt; h \leq 9$ inches. Only one of those infinitely many figures is a square.</p> <p>I would email the above statement to the teacher... but that's up to you.</p> <p>As for the "politics" or "pedagogy" of drawing a square, but giving conditions that admit non-square quadrilaterals... well, I'd take this as a learning opportunity. The solution teaches the students that of course any single drawing must be an example member of a solution set, but need not be <em>every</em> example of that set. In this case: a square is just a special case of a trapezoid.</p> <p>The solution goes further and reveals that the vertices (apparently) lie on a semi-circle... ("obvious" to a student). A good followup or "part b" question would be to prove this is the case.</p> <p><img src="https://i.sstatic.net/CItRo.png" alt="enter image description here"></p>
<p>Of course, you are right. Send an email to the teacher with a concrete example, given that (s)he seems to be geometrically challenged. For instance, you could attach the following pictures to the email, <strong>which are both drawn to scale</strong>. You should also let him/her know that you need $5$ parameters to fix a quadrilateral uniquely. With just $4$ pieces of information as given in the question, there exist infinitely many possible quadrilaterals, even though each of them has to be a trapezium, since the sum of adjacent angles being $180^{\circ}$ forces the pair of opposite sides to be parallel.</p> <p>The first one is an exaggerated example where the trapezium satisfies all conditions but is nowhere close to a square, even visually.<img src="https://i.sstatic.net/rWjZM.png" alt="enter image description here"></p> <p>The second one is an example where the trapezium visually looks like a square but is not a square.<img src="https://i.sstatic.net/BFzmF.png" alt="enter image description here"></p> <p>Not only should you email the teacher, but you should also direct him/her to this math.stackexchange thread.</p> <p>Good luck!</p> <hr> <p><strong>EDIT</strong></p> <p>Also, one might also try and explain to the teacher using the picture below that for the question only the first criterion is met, i.e., only one pair of opposite sides have been made parallel.</p> <p><img src="https://i.sstatic.net/aXBBA.png" alt="enter image description here"></p>
logic
<p>What is the difference between Gödel's completeness and incompleteness theorems?</p>
<p>First, note that, in spite of their names, one is not a negation of the other.</p> <p>The completeness theorem applies to any first order theory: If $T$ is such a theory, and $\phi$ is a sentence (in the same language) and any model of $T$ is a model of $\phi$, then there is a (first-order) <em>proof</em> of $\phi$ using the statements of $T$ as axioms. One sometimes says this as "anything true is provable."</p> <p>The incompleteness theorem is more technical. It says that if $T$ is a first-order theory that is:</p> <ol> <li>Recursively enumerable (i.e., there is a computer program that can list the axioms of $T$),</li> <li>Consistent, and </li> <li>Capable of interpreting some amount of Peano arithmetic (typically, one requires the fragment known as Robinson's Q), </li> </ol> <p>then $T$ is not complete, i.e., there is at least one sentence $\phi$ in the same language as $T$ such that there is a model of $T$ and $\phi$, and there is also a model of $T$ and $\lnot\phi$. Equivalently (by the completeness theorem), $T$ cannot prove $\phi$ and also $T$ cannot prove $\lnot\phi$.</p> <p>One usually says this as follows: If a theory is reasonable and at least modestly strong, then it is not complete.</p> <p>The second incompleteness theorem is more striking. If we actually require that $T$ interprets Peano Arithmetic, then in fact $T$ cannot prove its own consistency. So: There is no way of proving the consistency of a reasonably strong mathematical theory, unless we are willing to assume an even stronger setting to carry out the proof. Or: If a reasonably strong theory can prove its own consistency, then it is in fact inconsistent. (Note that any inconsistent theory proves anything, in particular, if its language allows us to formulate this statement, then it can prove that it is consistent). </p> <p>The requirement that $T$ is recursively enumerable is reasonable, I think. 
Formally, a theory is just a set of sentences, but we are mostly interested in theories that we can write down or, at least, for which we can recognize whether something is an axiom or not. </p> <p>The interpretability requirement is usually presented in a more restrictive form, for example, asking that $T$ is a theory about numbers, and it contains Peano Arithmetic. But the version I mentioned applies in more situations; for example, to set theory, which is not strictly speaking a theory about numbers, but can easily interpret number theory. The requirement of interpreting Peano Arithmetic is two-fold. First, we look at theories that allow us (by coding) to carry out at least some amount of common mathematical practice, and number theory is singled out as the usual way of doing that. More significantly, we want some amount of "coding" within the theory to be possible, so we can talk about sentences and proofs. Number theory allows us to do this easily, and this is why we can talk about "the theory is consistent", a statement about proofs, although our theory may really be about numbers and not about first order formulas.</p>
<p><em>I'll add some comments</em>...</p> <p>It is useful to state Gödel's Completeness Theorem in this form: </p> <blockquote> <p>if a wff <span class="math-container">$A$</span> of a first-order theory <span class="math-container">$T$</span> is logically implied by the axioms of <span class="math-container">$T$</span>, then it is provable in <span class="math-container">$T$</span>, where "<span class="math-container">$T$</span> logically implies <span class="math-container">$A$</span>" means that <span class="math-container">$A$</span> is true in every model of <span class="math-container">$T$</span>.</p> </blockquote> <p>The problem is that most first-order mathematical theories have more than one model; in particular, this happens for <span class="math-container">$\mathsf {PA}$</span> and related systems (to which Gödel's (First) Incompleteness Theorem applies).</p> <p>When we "see" (with insight) that the unprovable formula of Gödel's Incompleteness Theorem is true, we refer to our "natural reading" of it in the intended interpretation of <span class="math-container">$\mathsf {PA}$</span> (the structure consisting of the natural numbers with addition and multiplication). </p> <p>So, there exists some "unintended interpretation" that is also a model of <span class="math-container">$\mathsf {PA}$</span> in which the aforesaid formula is not true. This in turn implies that the unprovable formula isn't logically implied by the axioms of <span class="math-container">$\mathsf {PA}$</span>.</p>
number-theory
<p>This is an interesting problem my friend has been working on for a while now (I just saw it an hour ago, but could not come up with anything substantial besides some PMI attempts).</p> <p>Here's the full problem:</p> <p>Let $x_{1}, x_{2}, x_{3}, \cdots x_{y}$ be all the primes that are less than a given $n&gt;1$, $n \in \mathbb{N}$. </p> <p>Prove that $$x_{1}x_{2}x_{3}\cdots x_{y} &lt; 4^n$$</p> <p>Any ideas very much appreciated!</p>
<p>I think the following argument is from Erdős: The binomial coefficient $$ {n \choose {\lfloor n/2 \rfloor}} = \frac{n!}{{\lfloor n/2 \rfloor}!{\lceil n/2 \rceil}!}$$ is an integer. Any prime in the range ${\lceil n/2 \rceil}+1 \le p \le n$ will appear in the numerator but not in the denominator, as a consequence $n \choose {\lfloor n/2 \rfloor}$ is divisible by the product of all the primes in the range ${\lceil n/2 \rceil}+1 \le p \le n$. On the other hand if $n$ is even then it will be the central term in the binomial expansion of $(1+1)^n$ so $$ {n \choose {\lfloor n/2 \rfloor}} \le 2^n $$ and if $n$ is odd then $n \choose \lfloor n/2 \rfloor$ and $n \choose \lceil n/2 \rceil$ will be the two central terms of the binomial expansion of $(1+1)^n$, as they are equal we have $$ {n \choose {\lfloor n/2 \rfloor}} \le 2^{n-1} $$ we apply this recursively to $n$, $\lceil n/2 \rceil, \lceil n/4 \rceil, \dots$, but $$ { \lceil n/2 \rceil \choose \lfloor \lceil n/2 \rceil/2 \rfloor } = { \lceil n/2 \rceil \choose \lfloor n/4 \rfloor } \le 2^{\lfloor n/2 \rfloor } $$ using either of the two preceding inequalities depending on the parity of $\lceil n/2 \rceil$, by the same argument we get $${ \lceil n/4 \rceil \choose \lfloor \lceil n/4 \rceil/2 \rfloor } ={ \lceil n/4 \rceil \choose \lfloor n/8 \rfloor } \le 2^{\lfloor n/4 \rfloor }$$ and so on, so $$ \begin{align}{n \choose {\lfloor n/2 \rfloor}}{{\lceil n/2 \rceil} \choose {\lfloor n/4 \rfloor}}{{\lceil n/4 \rceil} \choose {\lfloor n/8 \rfloor}}\cdots &amp;\le 2^n\cdot 2^{{\lfloor n/2 \rfloor}} \cdot 2^{{\lfloor n/4 \rfloor}} \cdots \\\\ &amp;\le 2^{n + {\lfloor n/2 \rfloor} + {\lfloor n/4 \rfloor} + \cdots } \\\\ &amp;\le 2^{n(1 + 1/2 + 1/4 + \cdots) } = 2^{2n} = 4^n\end{align}$$ but the left hand side is divisible by all the primes up to $n$. So the product of any subset of these primes will also be bounded by $4^n$.</p>
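<p>As a sanity check (not part of the proof, and all names below are my own), the bound is easy to verify numerically for small $n$ with a short brute-force script:</p>

```python
def primes_below(n):
    """All primes p < n, by a simple sieve of Eratosthenes."""
    if n < 3:
        return []
    sieve = [True] * n
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n, p):
                sieve[m] = False
    return [p for p in range(n) if sieve[p]]

def bound_holds(n):
    """Product of all primes below n is strictly less than 4**n."""
    prod = 1
    for p in primes_below(n):
        prod *= p
    return prod < 4 ** n

# Check every n from 2 up to 500 (Python big integers keep this exact):
all_hold = all(bound_holds(n) for n in range(2, 500))
```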
<p>I just like to point out that this argument is in Hardy and Wright, An Introduction to the Theory of Numbers, with the slight differences that they avoid the use of the floor and ceiling functions, and finish off (quite nicely, in my opinion) with induction.</p> <p>I'll type it here, to save you looking it up.</p> <p>Theorem: $\theta(n) &lt; 2n \log 2$ for all $ n \ge 1,$ where $$\theta(x) = \log \prod_{p \le x} p.$$</p> <p>Let $M = { 2m+1 \choose m},$ an integer, which occurs twice in the expansion of $(1+1)^{2m+1}$ and so $2M &lt; 2^{2m+1}.$ Thus $M&lt; 2^{2m}.$</p> <p>If $m+1 &lt; p \le 2m+1,$ $p$ divides $M$ since it divides the numerator, but not the denominator, of $ { 2m+1 \choose m } = \frac{(2m+1)(2m)\ldots(m+2)}{m!}.$</p> <p>Hence</p> <p>$$\left( \prod_{m+1 &lt; p \le 2m+1} p \right) | M $$</p> <p>and</p> <p>$$ \theta(2m+1) - \theta(m+1) = \sum_{m+1 &lt; p \le 2m+1} \log p \le \log M &lt; 2m \log 2.$$</p> <p>The theorem is clear for $n=1$ and $n=2,$ so suppose that it is true for all $n \le N-1.$ If $N$ is even, we have</p> <p>$$ \theta(N)= \theta(N-1) &lt; 2(N-1) \log 2 &lt; 2N \log 2.$$</p> <p>If $N$ is odd, $N=2m+1 $ say, we have</p> <p>$$\begin{align} \theta(N)=\theta(2m+1) &amp;=\theta(2m+1)-\theta(m+1)+\theta(m+1) \\ &amp;&lt; 2m \log 2 +2(m+1) \log 2 \\ &amp;=2(2m+1) \log 2 = 2N \log 2, \end{align}$$</p> <p>since $m+1 &lt; N.$ Hence the theorem is true for $n=N$ and the result follows by induction.</p> <p>EDIT: It turns out that this proof was discovered by Erdős and another mathematician, Kalmar, independently and almost simultaneously, in 1939. See <a href="http://www.ias.ac.in/resonance/Mar1998/pdf/Mar1998Reflections.pdf" rel="noreferrer">Reflections, Ramanujan and I,</a> by Paul Erdős.</p>
game-theory
<p>Lets say $2$ players $A$ and $B$ make a bet, who can have more money at the end after playing the following game:</p> <p>a coin is flipped:</p> <p>with $51\%$ probability it lands tails, with $49\%$ probability it lands heads</p> <p>you win if it lands heads, where you get back your bet $\times 2$.</p> <p>e.g. you bet $\$1$ and it lands heads, then you get back $\$2$</p> <p>e.g. you bet $\$2$ and it lands tails, then you get back $\$0$</p> <p>here are the rules to the bet between A and B (the winner of the bet wins $\$100000$):</p> <ul> <li>you both start with $\$100$ (given to you for free, you're not allowed to cash this out nor the money you make from the coin game)</li> <li>each player may play the game as many times as they want and bet as much as they want for each time they play the game</li> <li>player A must go first (player A plays the casino games as many times as he wants then decides to stop, after that, A can't play the game anymore)</li> </ul> <p>obviously the optimal strategy for player $B$ involves playing until $B$ either goes bankrupt or has more money than $A$ (although it's not obvious what bet sizes to use).</p> <p>what would be the optimal strategy for $A$?</p>
<p>Once $A$ has finished playing, ending with an amount $a$, the strategy for $B$ is simple and well-known: Use bold play. That is, aim for a target sum of $a+\epsilon$ and bet what is needed to reach this goal exactly or bet all, whatever is less. As seen for example <a href="http://www.maa.org/joma/Volume8/Siegrist/Bold.xml" rel="nofollow">here</a>, the probability of $B$ reaching this target is maximized by this strategy and depends only on the initial proportion $\alpha:=\frac{100}{a+\epsilon}\in(0,1)$. (Of course, $B$ wins immediately if $a&lt;100$). While the function $p(\alpha)$ that returns $B$'s winning probability is fractal and depends on the dyadic expansion of the number $\alpha$, we can for simplicity (or a first approximate analysis) assume that $p(\alpha)=\alpha$: If the coin were fair, we would indeed have $p(\alpha)=\alpha$, and the coin is quite close to being fair. Also, we drop the $\epsilon$ as $B$ may choose it arbitrarily small. (This is the same as saying that $B$ wins in case of a tie).</p> <p>In view of this, what should $A$ do? If $A$ does not play at all, $B$ wins with probability $\approx 1$. If $A$ decides to bet $x$ once and then stop, $B$ wins if either $A$ loses ($p=0.51$) and $B$ wins immediately or if $A$ wins ($p=0.49$) and then $B$ wins (as seen above) with $p(\frac{100}{100+x})\approx \frac{100}{100+x}$. So if $A$ decides beforehand to play only once, she had better bet all she has and thus wins the grand prize with probability $\approx 0.49\cdot(1-p(\frac12))\approx \frac14$.</p> <p>Assume $A$ wins the first round and has $200$. What is the best decision now? Betting $x&lt;100$ will result in a winning probability of approximately $$0.49\cdot(1-\frac{100}{200+x})+0.51\cdot(1- \frac{100}{200-x}) $$ It looks like the best thing to do is to stop playing (with winning probability $\approx\frac12$ now).</p> <p>Alternatively, let us assume instead that $A$ employs bold play as well with a target sum $T&gt;100$. 
Then the probability of reaching the target is $\approx \frac{100}{T}$, so the total probability of $A$ winning is approximately $$ \frac{100}T\cdot(1-\frac{100}T)$$ and this is maximized precisely when $T=200$. This repeats what we suspect from above:</p> <blockquote> <p>The optimal strategy for $A$ is to play once and try to double, resulting in a winning probability $\approx \frac14$.</p> </blockquote> <p>Admittedly, the optimality of this strategy for $A$ is not rigorously shown, and in particular there may be some gains from exploiting the detailed shape of $B$'s winning probability function, but I am pretty sure this is a not-too-bad approximation.</p>
<p>The best answer provided by <a href="https://math.stackexchange.com/users/72669/marshall">marshall</a> / <a href="https://math.stackexchange.com/users/39174/hagen-von-eitzen">Hagen von Eitzen</a> in the comments on his answer:</p> <blockquote> <p>That's a smart answer there ("Bold play to target that can be reached with probability <span class="math-container">$\frac12$</span>") and even lets the deviation from linearity cancel itself out! And you may have misread the answer: With that value, <span class="math-container">$A$</span> wins with <span class="math-container">$\frac 14$</span> <strong>precisely</strong> (the <span class="math-container">$0.249999385$</span> were for bold play with target <span class="math-container">$196$</span> instead of <span class="math-container">$195.67803788$</span>). – <em>Hagen von Eitzen</em> Jul 10 '13 at 13:16 </p> </blockquote> <p>So, <span class="math-container">$A$</span> should not just play all in once, but use a bold play strategy toward a goal that is reachable with <span class="math-container">$p=50\%$</span>. If he loses, he has lost; if he wins, the best strategy for B is to also bold play up to that goal, which he can reach with <span class="math-container">$p=50\%$</span> (since A and B start with the same money). So, <strong>A will win with exactly p=25%</strong>, and may play many times.</p> <p>Note here <span class="math-container">$A$</span> may be forced to play fractional dollars.</p>
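<p>The bold-play recursion behind these numbers is easy to compute directly. Below is a small sketch (function and variable names are my own); the depth cap falls back to the fair-coin approximation $p(\alpha)=\alpha$ discussed in the other answer, and the computation reproduces A winning with probability $0.49 \cdot 0.51 = 0.2499$ when both players target double their stake:</p>

```python
from fractions import Fraction

def bold_win_prob(x, p, depth=64):
    """Chance of reaching the target (fortune 1, after rescaling by the
    target) from fortune x in [0, 1] under bold play, winning each toss
    with probability p. Exact for dyadic x; the depth cap guards against
    non-terminating binary expansions."""
    if x <= 0:
        return 0.0
    if x >= 1:
        return 1.0
    if depth == 0:
        return float(x)          # fair-coin approximation as a fallback
    if x < Fraction(1, 2):
        return p * bold_win_prob(2 * x, p, depth - 1)
    return p + (1 - p) * bold_win_prob(2 * x - 1, p, depth - 1)

p = 0.49
# A bold-plays from 100 toward 200, i.e. from x = 1/2: one all-in toss.
prob_A_doubles = bold_win_prob(Fraction(1, 2), p)
# A wins the bet when A doubles and B (also starting from x = 1/2) fails:
prob_A_wins = prob_A_doubles * (1 - bold_win_prob(Fraction(1, 2), p))
```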
linear-algebra
<p>I'm teaching a linear algebra course this term (using Lay's book) and would like some fun "challenge problems" to give the students. The problems that I am looking for should be easy to state and have a solution that </p> <ol> <li><p>Requires only knowledge that an average matrix algebra student would be expected to have (i.e. calculus and linear algebra, but not necessarily abstract algebra).</p></li> <li><p>Has a solution that requires cleverness, but not so much cleverness that only students with contest math backgrounds will be able to solve the problem.</p></li> </ol> <p>An example of the type of problem that I have in mind is:</p> <blockquote> <p>Fix an integer $n&gt;1$. Let $S$ be the set of all $n \times n$ matrices whose entries are only zero and one. Show that the average determinant of a matrix in $S$ is zero.</p> </blockquote>
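<p>As a quick sanity check of the example problem, a brute-force computation (all names below are my own) confirms that the determinants of $0$-$1$ matrices sum to zero for small $n$:</p>

```python
from itertools import product

def det(m):
    """Integer determinant by Laplace expansion along the first row
    (fine for tiny matrices)."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def det_sum(n):
    """Sum of determinants over all n-by-n zero-one matrices."""
    s = 0
    for flat in product((0, 1), repeat=n * n):
        s += det([list(flat[i * n:(i + 1) * n]) for i in range(n)])
    return s

sum2 = det_sum(2)   # over all 16 matrices
sum3 = det_sum(3)   # over all 512 matrices
```

A sum of zero over the whole (finite) set is the same as an average of zero.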
<p>One of my favourites is the odd-town puzzle.</p> <p>A town with $n$ inhabitants has $m$ clubs such that</p> <ul> <li>Each club has an odd number of members</li> <li>Any two clubs have an even number of common members (zero included)</li> </ul> <p>Show that $m \le n$.</p> <p>It becomes easy once you treat each club as a vector. The conditions imply that the vectors are linearly independent over $\mathbb{F}_{2}$.</p>
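<p>The bound $m \le n$ can also be confirmed by brute force for tiny towns; here is a small sketch (names are my own) that enumerates families of odd-size clubs with pairwise even intersections:</p>

```python
from itertools import combinations

def max_club_family(n):
    """Brute force the largest family of odd-size clubs in an n-person
    town such that all pairwise intersections are even."""
    odd_sets = [set(s)
                for r in range(1, n + 1, 2)
                for s in combinations(range(n), r)]
    best = 0
    for k in range(1, len(odd_sets) + 1):
        if any(all(len(a & b) % 2 == 0 for a, b in combinations(fam, 2))
               for fam in combinations(odd_sets, k)):
            best = k
        else:
            break   # no valid family of size k, so none larger either
    return best

# The bound m <= n is tight: the n singleton clubs achieve it.
bounds = [max_club_family(n) for n in (2, 3, 4)]
```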
<p>This was an old Putnam problem, but if your students have seen determinants, I don't think it'd be beyond them.</p> <p>Alice and Bob play the following game: they start with an empty 2008x2008 matrix (p.s. take a wild guess which year this was) and take turns writing numbers in each of the $2008^2$ positions. Once the matrix is filled, Alice wins if the determinant is nonzero and Bob wins if the determinant is zero. If Alice goes first, does either player have a winning strategy?</p>
linear-algebra
<p>I have two square matrices: <span class="math-container">$A$</span> and <span class="math-container">$B$</span>. <span class="math-container">$A^{-1}$</span> is known and I want to calculate <span class="math-container">$(A+B)^{-1}$</span>. Are there theorems that help with calculating the inverse of the sum of matrices? In general case <span class="math-container">$B^{-1}$</span> is not known, but if it is necessary then it can be assumed that <span class="math-container">$B^{-1}$</span> is also known.</p>
<p>In general, <span class="math-container">$A+B$</span> need not be invertible, even when <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are. But one might ask whether you can have a formula under the additional assumption that <span class="math-container">$A+B$</span> <em>is</em> invertible.</p> <p>As noted by Adrián Barquero, there is <a href="http://www.jstor.org/stable/2690437" rel="noreferrer">a paper by Ken Miller</a> published in the <em>Mathematics Magazine</em> in 1981 that addresses this.</p> <p>He proves the following:</p> <p><strong>Lemma.</strong> If <span class="math-container">$A$</span> and <span class="math-container">$A+B$</span> are invertible, and <span class="math-container">$B$</span> has rank <span class="math-container">$1$</span>, then let <span class="math-container">$g=\operatorname{trace}(BA^{-1})$</span>. Then <span class="math-container">$g\neq -1$</span> and <span class="math-container">$$(A+B)^{-1} = A^{-1} - \frac{1}{1+g}A^{-1}BA^{-1}.$$</span></p> <p>From this lemma, we can take a general <span class="math-container">$A+B$</span> that is invertible and write it as <span class="math-container">$A+B = A + B_1+B_2+\cdots+B_r$</span>, where <span class="math-container">$B_i$</span> each have rank <span class="math-container">$1$</span> and such that each <span class="math-container">$A+B_1+\cdots+B_k$</span> is invertible (such a decomposition always exists if <span class="math-container">$A+B$</span> is invertible and <span class="math-container">$\mathrm{rank}(B)=r$</span>). Then you get:</p> <p><strong>Theorem.</strong> Let <span class="math-container">$A$</span> and <span class="math-container">$A+B$</span> be nonsingular matrices, and let <span class="math-container">$B$</span> have rank <span class="math-container">$r\gt 0$</span>. 
Let <span class="math-container">$B=B_1+\cdots+B_r$</span>, where each <span class="math-container">$B_i$</span> has rank <span class="math-container">$1$</span>, and each <span class="math-container">$C_{k+1} = A+B_1+\cdots+B_k$</span> is nonsingular. Setting <span class="math-container">$C_1 = A$</span>, then <span class="math-container">$$C_{k+1}^{-1} = C_{k}^{-1} - g_kC_k^{-1}B_kC_k^{-1}$$</span> where <span class="math-container">$g_k = \frac{1}{1 + \operatorname{trace}(C_k^{-1}B_k)}$</span>. In particular, <span class="math-container">$$(A+B)^{-1} = C_r^{-1} - g_rC_r^{-1}B_rC_r^{-1}.$$</span></p> <p>(If the rank of <span class="math-container">$B$</span> is <span class="math-container">$0$</span>, then <span class="math-container">$B=0$</span>, so <span class="math-container">$(A+B)^{-1}=A^{-1}$</span>).</p>
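<p>The lemma is easy to check in exact arithmetic; here is a minimal sketch (the helper names are my own) for a $2\times 2$ example with a rank-one $B$:</p>

```python
from fractions import Fraction as F

def mul(X, Y):
    """2x2 matrix product."""
    return [[X[i][0] * Y[0][j] + X[i][1] * Y[1][j] for j in range(2)]
            for i in range(2)]

def inv(M):
    """2x2 inverse via the adjugate formula."""
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

A = [[F(2), F(1)], [F(1), F(2)]]
B = [[F(1), F(1)], [F(2), F(2)]]              # rank one: (1,2)^T (1,1)
Ainv = inv(A)
BAinv = mul(B, Ainv)
g = BAinv[0][0] + BAinv[1][1]                 # trace(B A^{-1})
corr = mul(mul(Ainv, B), Ainv)                # A^{-1} B A^{-1}
miller = [[Ainv[i][j] - corr[i][j] / (1 + g) for j in range(2)]
          for i in range(2)]                  # the lemma's formula
ApB = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
identity_check = mul(ApB, miller)             # should be the identity
```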
<p>It is shown in <a href="http://dspace.library.cornell.edu/bitstream/1813/32750/1/BU-647-M.version2.pdf" rel="noreferrer">On Deriving the Inverse of a Sum of Matrices</a> that </p> <p><span class="math-container">$(A+B)^{-1}=A^{-1}-A^{-1}B(A+B)^{-1}$</span>.</p> <p>This equation cannot be used to calculate <span class="math-container">$(A+B)^{-1}$</span>, but it is useful for perturbation analysis where <span class="math-container">$B$</span> is a perturbation of <span class="math-container">$A$</span>. There are several other variations of the above form (see equations (22)-(26) in this paper). </p> <p>This result is good because it only requires <span class="math-container">$A$</span> and <span class="math-container">$A+B$</span> to be nonsingular. As a comparison, the SMW identity or Ken Miller's paper (as mentioned in the other answers) requires some nonsingularity or rank conditions of <span class="math-container">$B$</span>.</p>
probability
<p>In Season 5 Episode 16 of Agents of Shield, one of the characters decides to prove she can't die by pouring three glasses of water and one of poison; she then randomly drinks three of the four cups. I was wondering how to compute the probability of her drinking the one with poison.</p> <p>I thought to label the four cups $\alpha, \beta, \gamma, \delta$ with events </p> <ul> <li>$A = \{\alpha \text{ is water}\}, \ a = \{\alpha \text{ is poison}\}$</li> <li>$B = \{\beta \text{ is water}\},\ b = \{\beta \text{ is poison}\}$</li> <li>$C = \{\gamma \text{ is water}\},\ c = \{\gamma \text{ is poison}\}$</li> <li>$D = \{\delta \text{ is water}\},\ d = \{\delta \text{ is poison}\}$</li> </ul> <p>If she were to drink in order, then I would calculate $P(a) = {1}/{4}$. Next $$P(b|A) = \frac{P(A|b)P(b)}{P(A)}$$ Next $P(c|A \cap B)$, which I'm not completely sure how to calculate.</p> <p>My doubt is that I shouldn't order the cups because that assumes $\delta$ is the poisoned cup. I am also unsure how I would calculate the conditional probabilities (I know about Bayes theorem, I mean more what numbers to put in the particular case). Thank you for you help.</p>
<p>The probability of not being poisoned is exactly the same as the following problem:</p> <p>You choose one cup and drink from the other three. What is the probability of choosing the poisoned cup (and not being poisoned)? That probability is 1/4.</p> <p>Therefore, the probability of being poisoned is 3/4.</p>
<p>NicNic8 has provided a nice intuitive answer to the question. </p> <p>Here are three alternative methods. In the first, we solve the problem directly by considering which cups are selected if she is poisoned. In the second, we solve the problem indirectly by considering the order in which the cups are selected if she is not poisoned. In the third, we add the probabilities that she was poisoned with the first cup, second cup, or third cup. </p> <p><strong>Method 1:</strong> We use the <a href="https://en.wikipedia.org/wiki/Hypergeometric_distribution" rel="noreferrer">hypergeometric distribution</a>. </p> <p>There are $\binom{4}{3}$ ways to select three of the four cups. Of these, the person selecting the cups is poisoned if she selects the poisoned cup and two of the three cups of water, which can be done in $\binom{1}{1}\binom{3}{2}$ ways. Hence, the probability that she is poisoned is $$\Pr(\text{poisoned}) = \frac{\binom{1}{1}\binom{3}{2}}{\binom{4}{3}} = \frac{1 \cdot 3}{4} = \frac{3}{4}$$ </p> <p><strong>Method 2:</strong> We subtract the probability that she is not poisoned from $1$. </p> <p>The probability that the first cup she selects is not poisoned is $3/4$ since three of the four cups do not contain poison. If the first cup she selects is not poisoned, the probability that the second cup she selects is not poisoned is $2/3$ since two of the three remaining cups do not contain poison. If both of the first two cups she selects are not poisoned, the probability that the third cup she selects is also not poisoned is $1/2$ since one of the two remaining cups is not poisoned. 
Hence, the probability that she is not poisoned if she drinks three of the four cups is $$\Pr(\text{not poisoned}) = \frac{3}{4} \cdot \frac{2}{3} \cdot \frac{1}{2} = \frac{1}{4}$$ Hence, the probability that she is poisoned is $$\Pr(\text{poisoned}) = 1 - \Pr(\text{not poisoned}) = 1 - \frac{1}{4} = \frac{3}{4}$$</p> <p><em>Addendum</em>: We can relate this method to the first method by using the hypergeometric distribution.</p> <p>She is not poisoned if she selects all three cups which do not contain poison when selecting three of the four cups. Hence, the probability that she is not poisoned is $$\Pr(\text{not poisoned}) = \frac{\dbinom{3}{3}}{\dbinom{4}{3}} = \frac{1}{4}$$ so the probability she is poisoned is $$\Pr(\text{poisoned}) = 1 - \frac{\dbinom{3}{3}}{\dbinom{4}{3}} = 1 - \frac{1}{4} = \frac{3}{4}$$</p> <p><strong>Method 3:</strong> We calculate the probability that the person is poisoned by adding the probabilities that she is poisoned with the first cup, the second cup, and the third cup.</p> <p>Let $P_k$ denote the event that she is poisoned with the $k$th cup.</p> <p>Since there are four cups, of which just one contains poison, the probability that she is poisoned with her first cup is $$\Pr(P_1) = \frac{1}{4}$$</p> <p>To be poisoned with the second cup, she must not have been poisoned with the first cup and then be poisoned with the second cup. The probability that she is not poisoned with the first cup is $\Pr(P_1^C) = 1 - 1/4 = 3/4$. If she is not poisoned with the first cup, there are three cups remaining of which one is poisoned, so the probability that she is poisoned with the second cup if she is not poisoned with the first is $\Pr(P_2 \mid P_1^C) = 1/3$. 
Hence, the probability that she is poisoned with the second cup is $$\Pr(P_2) = \Pr(P_2 \mid P_1^C)\Pr(P_1) = \frac{3}{4} \cdot \frac{1}{3} = \frac{1}{4}$$</p> <p>To be poisoned with the third cup, she must not have been poisoned with the first two cups and then be poisoned with the third cup. The probability that she is not poisoned with the first cup is $\Pr(P_1^C) = 3/4$. The probability that she is not poisoned with the second cup given that she was not poisoned with the first is $\Pr(P_2^C \mid P_1^C) = 1 - \Pr(P_2 \mid P_1^C) = 1 - 1/3 = 2/3$. If neither of the first two cups she drank was poisoned, two cups are left, one of which is poisoned, so the probability that the third cup she drinks is poisoned given that the first two were not is $\Pr(P_3 \mid P_1^C \cap P_2^C) = 1/2$. Hence, the probability that she is poisoned with the third cup is $$\Pr(P_3) = \Pr(P_3 \mid P_1^C \cap P_2^C)\Pr(P_2^C \mid P_1^C)\Pr(P_1^C) = \frac{1}{2} \cdot \frac{2}{3} \cdot \frac{3}{4} = \frac{1}{4}$$ Hence, the probability that she is poisoned is $$\Pr(\text{poisoned}) = \Pr(P_1) + \Pr(P_2) + \Pr(P_3) = \frac{1}{4} + \frac{1}{4} + \frac{1}{4} = \frac{3}{4}$$ </p>
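<p>All three methods agree with a direct enumeration of the four equally likely choices of three cups (variable names below are my own):</p>

```python
from itertools import combinations

cups = ("water", "water", "water", "poison")
outcomes = list(combinations(range(4), 3))   # the three cups she drinks
poisoned = sum(1 for chosen in outcomes
               if any(cups[i] == "poison" for i in chosen))
# 3 of the 4 equally likely choices include the poisoned cup:
prob_poisoned = poisoned / len(outcomes)
```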
linear-algebra
<p>Consider the class of rational functions that are the result of dividing one linear function by another:</p> <p>$$\frac{a + bx}{c + dx}$$</p> <p>One can easily compute that, for $\displaystyle x \neq -\frac{c}{d}$, $$\frac{\mathrm d}{\mathrm dx}\left(\frac{a + bx}{c + dx}\right) = \frac{bc - ad}{(c+dx)^2} \lessgtr 0 \text{ as } ad - bc \gtrless 0$$ Thus, we can easily check whether such a rational function is increasing or decreasing (on any connected interval in its domain) by checking the determinant of a corresponding matrix</p> <p>\begin{pmatrix}a &amp; b \\ c &amp; d\end{pmatrix}</p> <p>This made me wonder whether there is some known deeper principle that is behind this connection between linear algebra and rational functions (seemingly distant topics), or is this probably just a coincidence?</p>
<p>I'll put it more simply. If the determinant is zero, then the linear functions $a+bx$ and $c+dx$ are linearly dependent. For such a pair of functions this forces the ratio between them to be a constant. The zero determinant condition is thereby a natural boundary between increasing and decreasing functions.</p>
<p>What you are looking at is a <a href="https://en.wikipedia.org/wiki/M%C3%B6bius_transformation" rel="noreferrer">Möbius transformation</a>. The relationship between matrices and these functions is given in some detail in the Wikipedia article. Most of this is not anything that I know much about; perhaps another responder will give better details.</p> <p>What you can find is that the composition of two of these functions corresponds to matrix multiplication with the matrix defined as you have inferred from the determinant issue. </p> <p>These are also related to continued fraction arithmetic since a continued fraction is just a composition of these functions. A simple continued fraction is a number $a_0+\frac{1}{a_1 + \frac{1}{a_2 + \cdots}}$ and you can see almost directly that each level of the continued fraction is something like $t+\frac{1}{x} = \frac{tx+1}{x}$ where "x" is "the rest of the continued fraction." Each time we expand a bit more of the continued fraction, we engage in just this composition of functions as above. So Gosper used this relationship to perform term-at-a-time arithmetic of continued fractions. In practice this means representing a continued fraction as a matrix product. </p> <p>For instance, $1+\sqrt{2} = 2 + \frac{1}{2+\frac{1}{2 + \cdots}}$ so you could represent it as $$\prod^{\infty} \pmatrix{2 &amp; 1 \\ 1 &amp; 0}$$ And to find out what $\frac{3}{5}(1+\sqrt{2})$ is you could then calculate, to arbitrary precision, $$\pmatrix{3 &amp; 0 \\ 0 &amp; 5}\times \prod^{\infty} \pmatrix{2 &amp; 1 \\ 1 &amp; 0}$$</p>
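<p>To illustrate the matrix view of continued fractions numerically, here is a small sketch (the function name is my own) that evaluates convergents of $1+\sqrt{2}$ by multiplying the $2\times 2$ matrices above:</p>

```python
from fractions import Fraction

def cf_convergent(terms):
    """Evaluate a simple continued fraction [a0; a1, a2, ...] by
    multiplying the 2x2 matrices [[a, 1], [1, 0]]; the convergent is
    the ratio of the first column of the product."""
    p, q, r, s = 1, 0, 0, 1                  # start from the identity
    for a in terms:
        p, q, r, s = a * p + q, p, a * r + s, r
    return Fraction(p, r)

approx = cf_convergent([2] * 25)             # 1 + sqrt(2) = [2; 2, 2, ...]
err = abs(float(approx) - (1 + 2 ** 0.5))
scaled = Fraction(3, 5) * approx             # e.g. (3/5)(1 + sqrt(2))
```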
logic
<p>It's well known that classical logic admits vacuous truths, i.e. an implication is counted as true whenever its premise is false.</p> <p>What would be the problem with simply redefining this to be evaluated to false? Would we still be able to make systems work with this definition or would it lead to a problem somewhere? Why must it be the case that false -> false is true and false -> true is true?</p>
<p>Notice that $3=5$ is false, but if $3=5$ we can prove $8=8$, which is true.</p> <p>$$ 3=5$$ </p> <p>therefore $$ 5=3$$</p> <p>Adding the two equations, $$8=8$$</p> <p>We can also prove that $$ 8=10$$ which is false.</p> <p>$$ 3=5$$</p> <p>Adding $5$ to both sides, we get $$8=10$$</p> <p>The point is that if we assume a false premise, then we can prove whatever we like.</p> <p>That means " False $\implies$ False " is true. </p> <p>And " False $\implies$ True " is true. </p>
<p>Clearly we want $P\rightarrow P$ to be true, wouldn't you agree? </p> <p>I mean, if i say:</p> <blockquote> <p>If Pat is a bachelor, then Pat is a bachelor</p> </blockquote> <p>do you really dispute the truth of that claim, or claim that it depends on whether or not Pat really is a bachelor? The whole point of conditionals is that we can say '<em>if</em>', and thereby imagine a situation where something would be the case, whether it is actually the case or not. And guess what: <em>if</em> Pat would be a bachelor, then Pat would be a bachelor, even if Pat is not actually a bachelor.</p> <p>So, if $P$ is false, it better be the case that $false \rightarrow false = true$, for otherwise $P \rightarrow P$ would be false, which is just weird.</p> <p>Of course, we also want $true \rightarrow true = true$ by this same argument, for otherwise again we would have $P \rightarrow P$ being false.</p> <p>As far as $false \rightarrow true$ is concerned: given that we have that $true \rightarrow true =true$, $false \rightarrow false$, and ( I think you would certainly agree) $true \rightarrow false = false$, we better set $false \rightarrow true =true$, because otherwise the $\rightarrow$ would become commutative, i.e. We would have that $P \rightarrow Q$ is equivalent to $Q \rightarrow P$ ... which is highly undesired, since conditionals have a 'direction' to them that cannot be reversed automatically. Indeed, while I think you would agree with the truth of:</p> <blockquote> <p>'if Pat is a bachelor, then Pat is male'</p> </blockquote> <p>I doubt you would agree with:</p> <blockquote> <p>'if Pat is male, then Pat is a bachelor'</p> </blockquote> <p>EDIT</p> <p>Re-reading your question, and considering some of the ensuing discussions and comments, I wonder if the following might help:</p> <p>Suppose that we <em>know</em> some statement $P$ is false, i.e. We know that:</p> <p>$1. 
\neg P \quad Given$</p> <p>Then we can show that $P$ implies any $Q$, given the standard definition of logical implication:</p> <p>$2. P \quad Assumption$</p> <p>$3. P \lor Q \quad \lor \ Intro \ 2$</p> <p>$4. Q \quad Disjunctive \ Syllogism \ 1,3$</p> <p>And, using our typical rule for $\rightarrow \ Intro$, we can then also get:</p> <p>$5. P \rightarrow Q \quad \rightarrow \ Intro \ 2-4$</p> <p>And this of course works whether $Q$ is true or false.</p>
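<p>For completeness, here is the material conditional written out as code (names are my own), confirming the truth-table rows discussed above:</p>

```python
def implies(p, q):
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q

# The two "vacuous" rows both come out true:
row_FF = implies(False, False)
row_FT = implies(False, True)
# P -> P is a tautology precisely because of the false -> false row:
reflexive = all(implies(p, p) for p in (False, True))
# and the conditional is not commutative:
asymmetric = implies(False, True) != implies(True, False)
```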
probability
<p>Let $\{A_n\}$ and $\{B_n\}$ be two bases for an $N$-dimensional <a href="http://en.wikipedia.org/wiki/Hilbert_space">Hilbert space</a>. Does there exist a <a href="http://en.wikipedia.org/wiki/Unit_vector">unit vector</a> $V$ such that: </p> <p>$$(V\cdot A_j)\;(A_j\cdot V) = (V\cdot B_j)\;(B_j\cdot V) = 1/N\;\;\; \ \text{for all} \ 1\le j\le N?$$</p> <hr> <p><strong>Notes and application:</strong><br> That the $\{A_n\}$ and $\{B_n\}$ are <a href="http://en.wikipedia.org/wiki/Basis_%28linear_algebra%29">bases</a> means that<br> $$(A_j\cdot A_k) =\left\{\begin{array}{cl} 1&amp;\;\text{if }j=k,\\ 0&amp;\;\text{otherwise}.\end{array}\right.$$ </p> <p>In the physics notation, one might write $V\cdot A_j = \langle V\,|\,A_j\rangle$. In quantum mechanics, $P_{jk} = |\langle A_j|B_k\rangle|^2$ is the "transition probability" between the states $A_j$ and $B_k$. "Unbiased" means that there is no preference in the transition probabilities. A subject much studied in <a href="http://en.wikipedia.org/wiki/Quantum_information">quantum information theory</a> is <a href="http://en.wikipedia.org/wiki/Mutually_unbiased_bases">"mutually unbiased bases" or MUBs</a>. Two mutually unbiased bases satisfy<br> $|\langle A_j|B_k\rangle|^2 = 1/N\;\;$ for all $j,k$. 
</p> <p>If it is true that the vector $V$ always exists, then one can multiply the rows and columns of any unitary matrix by complex phases so as to obtain <a href="http://brannenworks.com/Gravity/qioumm_view.pdf">a unitary matrix where each row and column individually sums to one</a>.</p> <hr> <p>If true, then $U(n)$ can be written as follows:<br> $$U(n) = \exp(i\alpha) \begin{pmatrix}1&amp;0&amp;0&amp;0...\\0&amp;e^{i\beta_1}&amp;0&amp;0...\\0&amp;0&amp;e^{i\beta_2}&amp;0...\end{pmatrix} M \begin{pmatrix}1&amp;0&amp;0&amp;0...\\0&amp;e^{i\gamma_1}&amp;0&amp;0...\\0&amp;0&amp;e^{i\gamma_2}&amp;0...\end{pmatrix}$$ where the Greek letters give complex phases and where $M$ is a "magic" unitary matrix, that is, $M$ has all rows and columns individually sum to 1.</p> <p>And $M$ can be written as $M=\exp(im)$ where $m$ is Hermitian and has all rows and columns sum to 0. What's significant about this is that the $m$ form a Lie algebra. Thus unitary matrices can be thought of as complex phases, plus a Lie algebra. This is a new decomposition of unitary matrices.</p> <p>Since $m$ is Hermitian and has all rows and columns sum to 0, it is equivalent to an $(n-1)\times(n-1)$ Hermitian matrix with no restriction on the row and column sums. And this shows that $U(n)$ is equivalent to complex phases added to an object (the $M$ matrices) that is equivalent to $U(n-1)$. This gives a recursive definition of unitary matrices entirely in terms of complex phases.</p>
<p>I believe the answer to be yes, and it follows by some symplectic geometry of Lagrangian intersections. </p> <p>Let $U$ be the unitary matrix so that $B_j = U A_j$. Without loss of generality, we will also assume that $A_j = e_j$. This means that $B_j = U e_j$.</p> <p>We will identify $\mathbb C^N = \mathbb R^{2N}$.</p> <p>Then, the first condition on the vector $V$ is that: $$ |(V, e_j)|^2 = \frac{1}{N}, j=1, \dots, N $$ This is equivalent to saying that $V = \frac{1}{\sqrt{N}} \sum \mathrm e^{i \theta_j} e_j$, or, in other words, that $V$ lies in the Lagrangian torus in $\mathbb R^{2N}$ with the standard symplectic structure $\sum dx_j \wedge dy_j$, defined by $\{ |x_j|^2 + |y_j|^2 = \frac{1}{N} \}$.</p> <p>The second condition on $V$ is that $$ |(V, U e_j)|^2 = \frac{1}{N}. $$ Thus, $U^* V$ also should lie in the torus $L$. Thus, the vector $V$ exists if and only if $L \cap UL$ is non-empty.</p> <p>(Note the first condition gives automatically that $V$ is a unit vector.)</p> <p>Right now, I don't see how to take advantage of the linearity in this problem, so I will use an incredibly high powered theory (Floer theory). If I think of a better solution, I will update.</p> <p>Notice that the action of $U$ on $\mathbb C^N$ induces a map on $\mathbb CP^{N-1}$. Furthermore, if we write $U=\mathrm{e}^{iH}$ for a Hermitian $H$, then $U$ is the time-1 map of the Hamiltonian flow generated by the Hamiltonian $$h(v) = \frac{1}{2} \Re (v, Hv).$$ </p> <p>Finally, we note that $L$ projects to the Clifford torus $L&#39;$ in $\mathbb CP^{N-1}$. It is known for Floer theoretic reasons (not sure who first proved it... there are now many proofs in the literature) that the Clifford torus is not Hamiltonian displaceable, so there must always exist an intersection point. After normalizing, this lifts to an intersection point in $\mathbb C^N$, as desired.</p> <p>Note that the Floer homology argument is a very powerful tool. 
I suspect that a much simpler proof can be found, since this doesn't use the linear structure. </p> <hr> <p>EDIT: Apparently my use of the term "Clifford torus" is non-standard. Here is what I mean by it: Consider $\mathbb CP^{N-1}$ as the quotient of the unit sphere in $\mathbb C^{N}$ by the $S^1$ action by multiplication by a unit complex number (as we have defined here). In the unit sphere there is a real $N$ dimensional torus given by $|z_1| = |z_2| = \dots = |z_N| = \frac{1}{\sqrt{N}}$. The image of this $N$ dimensional torus by the quotient map is an $N-1$ dimensional torus in $\mathbb CP^{N-1}$. Equivalently, it is the torus given in homogeneous coordinates on $\mathbb CP^{N-1}$ by $[e^{i \theta_1}, e^{i \theta_2}, \dots, e^{i \theta_{N-1}}, 1]$. </p>
<p>This is not a full answer, but I don't intend to work on this in the next two weeks ;-), so I thought I'd put it up here and perhaps someone else can complete it.</p> <p>Writing $U_{jk}=\langle A_j\mid B_k\rangle$ (with $U$ unitary), we can formulate the problem like this: $V$ must have a component of length $1/\sqrt{N}$ along each $B_k$, so we can write it as</p> <p>$$V=\frac{1}{\sqrt{N}}\sum_k\mathrm{e}^{\mathrm{i}\phi_k}B_k\;.$$</p> <p>Then the condition that the projections onto the $A_j$ also all have length $1/\sqrt{N}$ becomes</p> <p>$$\sqrt{N}\langle A_j \mid V\rangle=\sum_k\langle A_j\mid\mathrm{e}^{\mathrm{i}\phi_k} B_k\rangle=\sum_k U_{jk}\mathrm{e}^{\mathrm{i}\phi_k}=\mathrm{e}^{\mathrm{i}\theta_j}\;.$$</p> <p>To manipulate this more easily, we can introduce diagonal matrices $\Phi$ and $\Theta$ with diagonal elements $\phi_k$ and $\theta_j$, respectively. Then the condition becomes</p> <p>$$U\mathrm{e}^{\mathrm{i}\Phi}\vec{1}=\mathrm{e}^{\mathrm{i}\Theta}\vec{1}\;,$$</p> <p>where $\vec{1}$ is the vector with all components $1$.</p> <p>Now all unitary matrices $U$ can be written as $U=\mathrm{e}^{\mathrm{i}H}$ with $H$ Hermitian, and conversely every such exponential is a unitary matrix. Thus, we can always find a vector $V$ for any unitary $U$ if and only if we can always find $V$ for any Hermitian $H$. In particular, we can consider the one-dimensional family of unitary matrices $\mathrm{e}^{\mathrm{i}\lambda H}$ with Hermitian matrix $H$ and real number $\lambda$, and our problem then becomes showing that for arbitrary $H$ we can find $V$ for all $\lambda$. In this way we can consider the path from the identity, where we know that $V$ exists, to an arbitrary unitary matrix $\mathrm{e}^{\mathrm{i}\lambda H}$ and reduce the problem to a differential equation along this path. 
Thus, letting $\Phi$ and $\Theta$ (but not $H$) depend on $\lambda$, we get</p> <p>$$\mathrm{e}^{\mathrm{i}\lambda H} \mathrm{e}^{\mathrm{i}\Phi(\lambda)}\vec{1}= \mathrm{e}^{\mathrm{i}\Theta(\lambda)}\vec{1}\;,$$</p> <p>and differentiating with respect to $\lambda$ yields</p> <p>$$H\mathrm{e}^{\mathrm{i}\lambda H} \mathrm{e}^{\mathrm{i}\Phi}\vec{1}+ \mathrm{e}^{\mathrm{i}\lambda H}\mathrm{e}^{\mathrm{i}\Phi}\Phi&#39;\vec{1} = \mathrm{e}^{\mathrm{i}\Theta}\Theta&#39;\vec{1}\;,$$</p> <p>$$\mathrm{e}^{-\mathrm{i}\Theta}H\mathrm{e}^{\mathrm{i}\lambda H} \mathrm{e}^{\mathrm{i}\Phi}\vec{1}+ \mathrm{e}^{-\mathrm{i}\Theta}\mathrm{e}^{\mathrm{i}\lambda H}\mathrm{e}^{\mathrm{i}\Phi}\Phi&#39;\vec{1} = \Theta&#39;\vec{1}\;,$$</p> <p>$$\mathrm{e}^{-\mathrm{i}\Theta}H \mathrm{e}^{\mathrm{i}\Theta}\vec{1}+ \mathrm{e}^{-\mathrm{i}\Theta}\mathrm{e}^{\mathrm{i}\lambda H}\mathrm{e}^{\mathrm{i}\Phi}\vec{\Phi}&#39; = \vec{\Theta}&#39;\;,$$</p> <p>where the prime denotes the derivative with respect to $\lambda$, and $\vec{\Phi}&#39;=\Phi&#39;\vec{1}$ and $\vec{\Theta}&#39;=\Theta&#39;\vec{1}$ are real vectors containing the derivatives of the $\phi_k$ and $\theta_j$, respectively.</p> <p>If we now take the perspective that we have reached a solution $\Phi$, $\Theta$ at a certain $\lambda$ and want to determine how $\Phi$ and $\Theta$ need to change with respect to $\lambda$ to maintain the condition along the path, then we can consider everything except for $\vec{\Phi}&#39;$ and $\vec{\Theta}&#39;$ as given, and we obtain a linear system of $N$ complex equations for the $2N$ real variables $\phi&#39;_k$ and $\theta&#39;_j$, which will have a unique solution in the general case. If we could somehow show that the system cannot become singular, it would follow that we have a well-defined and well-behaved system of first-order differential equations which for given initial conditions determines a unique solution for the $\phi_k$ and $\theta_j$ as a function of $\lambda$. 
Since we can start out with arbitrary angles $\phi_k=\theta_k$ at the identity, this would yield an $N$-dimensional family of solutions along each path. In case the system of linear equations can become singular, one might still be able to show that there is at least one member of this family for which it doesn't.</p> <p><strong>[Edit:]</strong> I just realized that that's actually a contradiction; there can't be an $N$-dimensional family of solutions along the path and yet unique derivatives $\phi&#39;_k$ and $\theta&#39;_j$, since the derivatives would differ depending on which family member one moves to. This is resolved by looking at the differentiated condition at the identity (i.e. $\lambda=0$), which we can write as</p> <p>$$\mathrm{e}^{-\mathrm{i}\Theta}H\mathrm{e}^{\mathrm{i}\Theta}\vec{1}=\vec{\Theta}&#39;-\vec{\Phi}&#39;\;.$$</p> <p>The right-hand side is real, and that gives a condition on $\vec{\Theta}$ (and hence on $\vec{\Phi}=\vec{\Theta}$) at the identity that must be fulfilled in order for $\vec{\Theta}$ to be a suitable starting point for solutions along the path $\mathrm{e}^{\mathrm{i}\lambda H}$. These are $N$ reality conditions for $N$ real parameters, so one might be able to show that this condition has a solution. As for uniqueness, the solution will not be unique if $H$ has a canonical basis vector ( $\vec{e}_j$ with $e_{jk}=\delta_{jk}$) as an eigenvector (meaning that $A$ and $B$ share a basis vector up to phase), and this also leads to corresponding underdetermination in the linear system for $\vec{\Phi}&#39;$ and $\vec{\Theta}&#39;$. Thus if one wanted to use this approach to show uniqueness of the solution (Sam has already shown existence in the meantime), the appropriate conjecture might be that the solution is unique up to an arbitrary phase for each canonical basis vector that is an eigenvector of $H$.</p>
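<p>The reformulation above is easy to verify numerically for a random pair of bases (a sketch of my own; the QR construction, the dimension and the seed are arbitrary choices): for any phases $\phi_k$, the vector $V=\frac{1}{\sqrt N}\sum_k \mathrm{e}^{\mathrm{i}\phi_k}B_k$ automatically satisfies $|\langle B_k|V\rangle|^2=1/N$, and $\sqrt N\,\langle A_j|V\rangle$ is exactly the $j$-th component of $U\mathrm{e}^{\mathrm{i}\Phi}\vec 1$, so the remaining condition is that every such component has modulus 1.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5

def random_basis(n):
    # QR of a complex Gaussian matrix yields a random orthonormal basis
    # (the columns of q).
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, _ = np.linalg.qr(z)
    return q

A = random_basis(N)
B = random_basis(N)
U = A.conj().T @ B                       # U[j, k] = <A_j | B_k>

phi = rng.uniform(0, 2 * np.pi, N)       # arbitrary phases
V = (B @ np.exp(1j * phi)) / np.sqrt(N)  # V = (1/sqrt(N)) sum_k e^{i phi_k} B_k

# |<B_k|V>|^2 = 1/N holds automatically for every choice of phases...
proj_B = np.abs(B.conj().T @ V) ** 2
assert np.allclose(proj_B, 1 / N)

# ...and sqrt(N) <A_j|V> is the j-th component of U e^{i Phi} 1, so the
# remaining condition is exactly |(U e^{i Phi} 1)_j| = 1 for all j.
lhs = np.sqrt(N) * (A.conj().T @ V)
rhs = U @ np.exp(1j * phi)
assert np.allclose(lhs, rhs)
```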
differentiation
<p>Could someone explain the following?</p> <p><span class="math-container">$$ \nabla_X \operatorname{tr}(AXB) = BA $$</span></p> <p>I understand that</p> <p><span class="math-container">$$ {\rm d} \operatorname{tr}(AXB) = \operatorname{tr}(BA \; {\rm d} X) $$</span></p> <p>but I don't quite understand how to move <span class="math-container">${\rm d} X$</span> out of the trace.</p>
<p>The notation is quite misleading (at least for me).</p> <p><em>Hint:</em></p> <p>Does it make sense that $$\frac{\partial}{\partial X_{mn}} \mathop{\rm tr} (A X B) = (B A)_{nm}?$$</p> <p><em>More information:</em> $$\frac{\partial}{\partial X_{mn}} \mathop{\rm tr} (A X B) = \frac{\partial}{\partial X_{mn}} \sum_{jkl} A_{jk} X_{kl} B_{lj} = \sum_{jkl} A_{jk} \delta_{km} \delta_{nl} B_{lj} = \sum_{j} A_{jm} B_{nj} =(B A)_{nm}. $$</p>
<p>Try expanding to linear order. This always eases the understanding:</p> <p><span class="math-container">$$\operatorname{tr}(A (X+dX)B)=A_{ij} (X_{jk}+dX_{jk})B_{ki}$$</span></p> <p>where Einstein's summation rule is used. Substracting <span class="math-container">$\operatorname{tr}(AXB)$</span> you get</p> <p><span class="math-container">$$\begin{align} d\operatorname{tr}(AXB)&amp;=\operatorname{tr}(A(X+dX)B)-\operatorname{tr}(AXB)\\&amp;=A_{ij} dX_{jk}B_{ki}=\underbrace{B_{ki}A_{ij}}_{=(BA)_{kj}} \; dX_{jk} \end{align}$$</span></p>
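<p>Both derivations can be checked with finite differences (my own sketch; the dimensions and seed are arbitrary). Note that collecting the entries $\partial f/\partial X_{mn}=(BA)_{nm}$ into a matrix of the same shape as $X$ gives $(BA)^T$; the formula $\nabla_X\operatorname{tr}(AXB)=BA$ in the question uses the opposite (transposed) layout convention, which is part of why the notation is misleading.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
p, m, n = 2, 3, 4
A = rng.normal(size=(p, m))
X = rng.normal(size=(m, n))
B = rng.normal(size=(n, p))          # A X B is p x p, so the trace exists

def f(M):
    return np.trace(A @ M @ B)

# Entrywise finite differences: G[i, j] ~ d f / d X[i, j].
eps = 1e-6
G = np.zeros_like(X)
for i in range(m):
    for j in range(n):
        E = np.zeros_like(X)
        E[i, j] = eps
        G[i, j] = (f(X + E) - f(X)) / eps

# d f / d X[i, j] = (B A)[j, i], i.e. G = (B A)^T:
print(np.max(np.abs(G - (B @ A).T)))   # essentially zero: f is linear in X
```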
geometry
<p>It’s easy to divide an equilateral triangle into <span class="math-container">$n^2$</span>, <span class="math-container">$2n^2$</span>, <span class="math-container">$3n^2$</span> or <span class="math-container">$6n^2$</span> equal triangles.</p> <p>But can you divide an equilateral triangle into 5 congruent parts? Recently M. Patrakeev <a href="https://www.jstor.org/stable/10.4169/amer.math.monthly.124.6.547" rel="nofollow noreferrer">found</a> an awesome way to do it — see the picture below (note that the parts are non-connected — but indeed are congruent, not merely having the same area). So an equilateral triangle can also be divided into <span class="math-container">$5n^2$</span> and <span class="math-container">$10n^2$</span> congruent parts.</p> <blockquote> <p><strong>Question.</strong> Are there any other ways to divide an equilateral triangle into congruent parts? (For example, can it be divided into 7 congruent parts?) Or in the opposite direction: can you prove that an equilateral triangle can’t be divided into <span class="math-container">$N$</span> congruent parts for some <span class="math-container">$N$</span>?</p> </blockquote> <p>                                            <img src="https://i.sstatic.net/6j9W7.png" width="300"></p> <p>(Naturally, I’ve tried to find something in the spirit of the example above for some time — but to no avail. Maybe someone can find an example using computer search?..)</p> <p>I’d prefer to use finite unions of polygons as ‘parts’ and different parts are allowed to have common boundary points. But if you have an example with more general ‘parts’ — that also would be interesting.</p>
<p>Recently Pavel Guzenko found a way to divide an equilateral triangle into 15 congruent parts (and also into 30 congruent parts).</p> <p><a href="https://i.sstatic.net/wxgmY.jpg" rel="noreferrer"><img src="https://i.sstatic.net/wxgmY.jpg" alt="dissection"></a></p>
<p>In a recent preprint <a href="https://arxiv.org/abs/1812.07014" rel="noreferrer">https://arxiv.org/abs/1812.07014</a> M.Beeson shows how to divide an equilateral triangle into <span class="math-container">$15×3^6=10935$</span> equal triangles (with sides 3, 5, 7 and one angle equal to <span class="math-container">$2\pi/3$</span>).</p> <p><a href="https://i.sstatic.net/fEdEO.jpg" rel="noreferrer"><img src="https://i.sstatic.net/fEdEO.jpg" alt="10935 triangles"></a></p>
linear-algebra
<p>What is an intuitive proof of the multivariable changing of variables formula (Jacobian) without using mapping and/or measure theory?</p> <p>I think that textbooks overcomplicate the proof.</p> <p>If possible, use linear algebra and calculus to solve it, since that would be the simplest for me to understand.</p>
<p>To do it for a particular number of variables is very easy to follow. Consider what you do when you integrate a function of x and y over some region. Basically, you chop up the region into boxes of area ${\rm d}x{~\rm d} y$, evaluate the function at a point in each box, multiply it by the area of the box. This can be notated a bit sloppily as:</p> <p>$$\sum_{b \in \text{Boxes}} f(x,y) \cdot \text{Area}(b)$$</p> <p>What you do when changing variables is to chop the region into boxes that are not rectangular, but instead chop it along lines that are defined by some function, call it $u(x,y)$, being constant. So say $u=x+y^2$, this would be all the parabolas $x+y^2=c$. You then do the same thing for another function, $v$, say $v=y+3$. Now in order to evaluate the expression above, you need to find "area of box" for the new boxes - it's not ${\rm d}x~{\rm d}y$ anymore.</p> <p>As the boxes are infinitesimal, the edges cannot be curved, so they must be parallelograms (adjacent lines of constant $u$ or constant $v$ are parallel.) The parallelograms are defined by two vectors - the vector resulting from a small change in $u$, and the one resulting from a small change in $v$. In component form, these vectors are ${\rm d}u\left\langle\frac{\partial x}{\partial u}, ~\frac{\partial y}{\partial u}\right\rangle $ and ${\rm d}v\left\langle\frac{\partial x}{\partial v}, ~\frac{\partial y}{\partial v}\right\rangle $. To see this, imagine moving a small distance ${\rm d}u$ along a line of constant $v$. What's the change in $x$ when you change $u$ but hold $v$ constant? The partial of $x$ with respect to $u$, times ${\rm d}u$. Same with the change in $y$. (Notice that this involves writing $x$ and $y$ as functions of $u$, $v$, rather than the other way round. 
The main condition of a change in variables is that both ways round are possible.)</p> <p>The area of a paralellogram bounded by $\langle x_0,~ y_0\rangle $ and $\langle x_1,~ y_1\rangle $ is $\vert y_0x_1-y_1x_0 \vert$, (or the abs value of the determinant of a 2 by 2 matrix formed by writing the two column vectors next to each other.)* So the area of each box is </p> <p>$$\left\vert\frac{\partial x}{\partial u}{\rm d}u\frac{\partial y}{\partial v}{\rm d}v - \frac{\partial y}{\partial u}{\rm d}u\frac{\partial x}{\partial v}dv\right\vert$$</p> <p>or</p> <p>$$\left\vert \frac{\partial x}{\partial u}\frac{\partial y}{\partial v} - \frac{\partial y}{\partial u}\frac{\partial x}{\partial v}\right\vert~{\rm d}u~{\rm d}v$$</p> <p>which you will recognise as being $\mathbf J~{\rm d}u~{\rm d}v$, where $\mathbf J$ is the Jacobian.</p> <p>So, to go back to our original expression</p> <p>$$\sum_{b \in \text{Boxes}} f(x,y) \cdot \text{Area}(b)$$</p> <p>becomes</p> <p>$$\sum_{b \in \text{Boxes}} f(u, v) \cdot \mathbf J \cdot {\rm d}u{\rm d}v$$</p> <p>where $f(u, v)$ is exactly equivalent to $f(x, y)$ because $u$ and $v$ can be written in terms of $x$ and $y$, and vice versa. As the number of boxes goes to infinity, this becomes an integral in the $uv$ plane.</p> <p>To generalize to $n$ variables, all you need is that the area/volume/equivalent of the $n$ dimensional box that you integrate over equals the absolute value of the determinant of an n by n matrix of partial derivatives. This is hard to prove, but easy to intuit.</p> <hr> <p>*to prove this, take two vectors of magnitudes $A$ and $B$, with angle $\theta$ between them. 
Then write them in a basis such that one of them points along a specific direction, e.g.:</p> <p>$$A\left\langle \frac{1}{\sqrt 2}, \frac{1}{\sqrt 2}\right\rangle \text{ and } B\left\langle \frac{1}{\sqrt 2}(\cos(\theta)+\sin(\theta)),~ \frac{1}{\sqrt 2} (\cos(\theta)-\sin(\theta))\right\rangle $$</p> <p>Now perform the operation described above and you get $$\begin{align} &amp; AB\cdot \frac12 \cdot (\cos(\theta) - \sin(\theta)) - AB \cdot \frac12 \cdot (\cos(\theta) + \sin(\theta)) \\ = &amp; \frac 12 AB(\cos(\theta)-\sin(\theta)-\cos(\theta)-\sin(\theta)) \\ = &amp; -AB\sin(\theta) \end{align}$$</p> <p>The absolute value of this, $AB\sin(\theta)$, is how you find the area of a parallelogram - the product of the lengths of the sides times the sine of the angle between them.</p>
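<p>As a concrete check of the formula above (my own illustration), here is the classic polar-coordinates example: with $x=r\cos\theta$, $y=r\sin\theta$ the Jacobian factor is $r$, so each box has area $r\,{\rm d}r\,{\rm d}\theta$, and a simple midpoint-rule sum over the $(r,\theta)$ grid reproduces the exact integral over the unit disk.</p>

```python
import numpy as np

# Integrate f(x, y) = exp(-(x^2 + y^2)) over the unit disk.
# Exact value (computable in closed form): pi * (1 - exp(-1)).
exact = np.pi * (1 - np.exp(-1))

# Polar coordinates: x = r cos(t), y = r sin(t), Jacobian factor r,
# so dA = r dr dt.  Midpoint rule on an (r, t) grid:
nr, nt = 400, 400
r = (np.arange(nr) + 0.5) / nr              # midpoints in (0, 1)
t = (np.arange(nt) + 0.5) * 2 * np.pi / nt  # midpoints in (0, 2 pi)
dr, dt = 1 / nr, 2 * np.pi / nt
R, T = np.meshgrid(r, t, indexing="ij")
f = np.exp(-R**2)
approx = np.sum(f * R * dr * dt)            # note the Jacobian factor R

print(approx, exact)   # the two agree; dropping the factor R would not
```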
<p>The multivariable change of variables formula is nicely intuitive, and it's not too hard to imagine how somebody might have derived the formula from scratch. However, it seems that proving the theorem rigorously is not as easy as one might hope.</p> <p>Here's my attempt at explaining the intuition -- how you would derive or discover the formula.</p> <p>The first thing to understand is that if <span class="math-container">$A$</span> is an <span class="math-container">$N \times N$</span> matrix with real entries and <span class="math-container">$S \subset \mathbb R^N$</span>, then <span class="math-container">$$ \tag{1} m(AS) = |\det A| \, m(S). $$</span> Here <span class="math-container">$m(S)$</span> is the area of <span class="math-container">$S$</span> (if <span class="math-container">$N=2$</span>) or the volume of <span class="math-container">$S$</span> (if <span class="math-container">$N=3$</span>) or more generally the Lebesgue measure of <span class="math-container">$S$</span>. Technically I should assume that <span class="math-container">$S$</span> is measurable. The above equation (1) is intuitively clear from the SVD of <span class="math-container">$A$</span>: <span class="math-container">\begin{equation} A = U \Sigma V^T \end{equation}</span> where <span class="math-container">$U$</span> and <span class="math-container">$V$</span> are orthogonal and <span class="math-container">$\Sigma$</span> is diagonal with nonnegative diagonal entries. Multiplying by <span class="math-container">$V^T$</span> doesn't change the measure of <span class="math-container">$S$</span>. Multiplying by <span class="math-container">$\Sigma$</span> scales along each axis, so the measure gets multiplied by <span class="math-container">$\det \Sigma = | \det A|$</span>. 
Multiplying by <span class="math-container">$U$</span> doesn't change the measure.</p> <p>Next suppose <span class="math-container">$\Omega$</span> and <span class="math-container">$\Theta$</span> are open subsets of <span class="math-container">$\mathbb R^N$</span> and suppose <span class="math-container">$g:\Omega \to \Theta$</span> is <span class="math-container">$1-1$</span> and onto. We should probably assume <span class="math-container">$g$</span> and <span class="math-container">$g^{-1}$</span> are <span class="math-container">$C^1$</span> just to be safe. (Since we're just seeking an intuitive derivation of the change of variables formula, we aren't obligated to worry too much about what assumptions we make on <span class="math-container">$g$</span>.) Suppose also that <span class="math-container">$f:\Theta \to \mathbb R$</span> is, say, continuous (or whatever conditions we need for the theorem to actually be true).</p> <p>Partition <span class="math-container">$\Theta$</span> into tiny subsets <span class="math-container">$\Theta_i$</span>. For each <span class="math-container">$i$</span>, let <span class="math-container">$u_i$</span> be a point in <span class="math-container">$\Theta_i$</span>. Then <span class="math-container">\begin{equation} \int_{\Theta} f(u) \, du \approx \sum_i f(u_i) m(\Theta_i). \end{equation}</span></p> <p>Now let <span class="math-container">$\Omega_i = g^{-1}(\Theta_i)$</span> and <span class="math-container">$x_i = g^{-1}(u_i)$</span> for each <span class="math-container">$i$</span>. The sets <span class="math-container">$\Omega_i$</span> are tiny and they partition <span class="math-container">$\Omega$</span>. 
Then <span class="math-container">\begin{align} \sum_i f(u_i) m(\Theta_i) &amp;= \sum_i f(g(x_i)) m(g(\Omega_i)) \\ &amp;\approx \sum_i f(g(x_i)) m(g(x_i) + Jg(x_i) (\Omega_i - x_i)) \\ &amp;= \sum_i f(g(x_i)) m(Jg(x_i) \Omega_i) \\ &amp;\approx \sum_i f(g(x_i)) |\det Jg(x_i)| m(\Omega_i) \\ &amp;\approx \int_{\Omega} f(g(x)) |\det Jg(x)| \, dx. \end{align}</span></p> <p>We have discovered that <span class="math-container">\begin{equation} \int_{g(\Omega)} f(u) \, du \approx \int_{\Omega} f(g(x)) |\det Jg(x)| \, dx. \end{equation}</span> By using even tinier subsets <span class="math-container">$\Theta_i$</span>, the approximation would be even better -- so we see by a limiting argument that we actually have equality.</p> <p>At a key step in the above argument, we used the approximation <span class="math-container">\begin{equation} g(x) \approx g(x_i) + Jg(x_i)(x - x_i) \end{equation}</span> which is a good approximation when <span class="math-container">$x$</span> is close to <span class="math-container">$x_i$</span></p>
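<p>Equation (1) is easy to verify directly in a small case (my own illustration): a $2\times 2$ matrix maps the unit square to a parallelogram, and the shoelace formula gives the image's area as exactly $|\det A|$.</p>

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])

# S = unit square with corners (0,0), (1,0), (1,1), (0,1); A maps it to a
# parallelogram whose corners are the images of those points.
S = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
AS = S @ A.T

def shoelace(p):
    # Polygon area from vertex coordinates (shoelace formula).
    x, y = p[:, 0], p[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

print(shoelace(AS), abs(np.linalg.det(A)))  # both equal |det A| = 5.5
```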
probability
<p>I have a very simple simulation program, the sequence is:</p> <ul> <li>Create an array of 400k elements</li> <li>Use a PRNG to pick an index, and mark the element (repeat 400k times)</li> <li>Count number of marked elements.</li> </ul> <p>An element may be picked more than once, but counted as only one "marked element".</p> <p>The PRNG is properly seeded. No matter how many times I run the simulation, I always end up getting around 63% (252k) marked elements.</p> <p>What is the math behind this? Or was there a fault in my PRNG?</p>
<p>No, your program is correct. The probability that a particular element is never picked is about $\frac{1}{e}$. This comes from the Poisson distribution, which is a very good approximation for large samples (400k is very large). So $1-\frac{1}{e}$ is the fraction of marked elements.</p>
<p>Let $X_k\in \{0, 1\}$ indicate if entry $k$ is <em>unmarked</em> (in which case $X_k=1$). Then the expected number of unmarked items $X$ in an array of $N$ is $$\mathbb{E}(X) = \mathbb{E}\left(\sum_{k=1}^N X_k\right) = \sum_{k=1}^N\mathbb{E}(X_k) = N \, \left(1-\frac{1}{N}\right)^N \approx N \, e^{-1}.$$</p> <p>The expected number of marked items is therefore approximately $N \, (1-e^{-1})$ or $N \cdot 0.63212\cdots$ which matches your observations quite well. To make the approximation more precise one can show that $$ N\,e^{-1} -\frac{1}{2e(1-1/N)}&lt; N\left(1-\frac{1}{N}\right)^N &lt; N\,e^{-1} -\frac{1}{2e}$$ for all $N\geq 2$.</p>
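<p>For completeness, the experiment described in the question takes only a few lines to reproduce (a minimal sketch; the seed is arbitrary), and the marked fraction indeed lands near $1-e^{-1}\approx 0.632$:</p>

```python
import random

N = 400_000
rng = random.Random(12345)
marked = [False] * N
for _ in range(N):
    marked[rng.randrange(N)] = True     # pick a random index, mark it

fraction = sum(marked) / N
print(fraction)                         # close to 1 - 1/e = 0.63212...
```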
probability
<p>If we have a probability space <span class="math-container">$(\Omega,\mathcal{F},P)$</span> and <span class="math-container">$\Omega$</span> is partitioned into pairwise disjoint subsets <span class="math-container">$A_{i}$</span>, with <span class="math-container">$i\in\mathbb{N}$</span>, then the <a href="https://en.wikipedia.org/wiki/Law_of_total_probability" rel="noreferrer">law of total probability</a> says that <span class="math-container">$P(B)=\sum_{i=1}^{n}P(B|A_{i})P(A_i{})$</span>. This law can be proved using the following two facts: <span class="math-container">\begin{align*} P(B|A_{i})&amp;=\frac{P(B\cap A_{i})}{P(A_{i})}\\ P\left(\bigcup_{i\in \mathbb{N}} S_{i}\right)&amp;=\sum_{i\in\mathbb{N}}P(S_{i}) \end{align*}</span> Where the <span class="math-container">$S_{i}$</span>'s are a pairwise disjoint and a <span class="math-container">$\textit{countable}$</span> family of events in <span class="math-container">$\mathcal{F}$</span>.</p> <p>However, if we want to apply the law of total probability on a continuous random variable <span class="math-container">$X$</span> with density <span class="math-container">$f$</span>, we have (<a href="https://en.wikipedia.org/wiki/Law_of_total_probability#Continuous_case" rel="noreferrer">like here</a>): <span class="math-container">$$P(A)=\int_{-\infty}^{\infty}P(A|X=x)f(x)dx$$</span> which is the law of total probabillity but with the summation replaced with an integral, and <span class="math-container">$P(A_{i})$</span> replaced with <span class="math-container">$f(x)dx$</span>. The problem is that we are conditioning on an <span class="math-container">$\textit{uncountable}$</span> family. Is there any proof of this statement (if true)?</p>
<p>Excellent question. The issue here is that you first have to define what <span class="math-container">$\mathbb{P}(A|X=x)$</span> means, as you're conditioning on the event <span class="math-container">$[X=x]$</span>, which has probability zero if <span class="math-container">$X$</span> is a continuous random variable. Can we still give <span class="math-container">$\mathbb{P}(A|X=x)$</span> a meaning? In the words of Kolmogorov,</p> <blockquote> <p>&quot;The concept of a conditional probability with regard to an isolated hypothesis whose probability equals 0 is inadmissible.&quot;</p> </blockquote> <p>The problem with conditioning on a single event of probability zero is that it can lead to paradoxes, such as the <a href="https://en.wikipedia.org/wiki/Borel%E2%80%93Kolmogorov_paradox" rel="noreferrer">Borel-Kolmogorov paradox</a>. However, if we don't just have an isolated hypothesis such as <span class="math-container">$[X=x]$</span>, but a whole partition of hypotheses <span class="math-container">$\{[X=x] ~|~ x \in \mathbb{R}\}$</span> with respect to which our notion of conditional probability is supposed to make sense, we can give a meaning to <span class="math-container">$\mathbb{P}(A|X=x)$</span> for almost every <span class="math-container">$x$</span>. Let's look at an important special case.</p> <hr /> <h2>Continuous random variables in Euclidean space</h2> <p>In many instances where we might want to apply the law of total probability for continuous random variables, we are actually interested in events of the form <span class="math-container">$A = [(X,Y) \in B]$</span> where <span class="math-container">$B$</span> is a Borel set and <span class="math-container">$X,Y$</span> are random variables taking values in <span class="math-container">$\mathbb{R}^d$</span> which are absolutely continuous with respect to Lebesgue measure. 
For simplicity, I will assume here that <span class="math-container">$X,Y$</span> take values in <span class="math-container">$\mathbb{R}$</span>, although the multivariate case is completely analogous. Choose a representative of <span class="math-container">$f_{X,Y}$</span>, the density of <span class="math-container">$(X,Y)$</span>, and a representative of <span class="math-container">$f_X$</span>, the density of <span class="math-container">$X$</span>, then the conditional density of <span class="math-container">$Y$</span> given <span class="math-container">$X$</span> is defined as <span class="math-container">$$ f_{Y|X}(x,y) = \frac{f_{X,Y}(x,y)}{f_{X}(x)}$$</span> at all points <span class="math-container">$(x,y)$</span> where <span class="math-container">$f(x) &gt; 0$</span>. We may then define for <span class="math-container">$A = [(X,Y) \in B]$</span> and <span class="math-container">$B_x := \{ y \in \mathbb{R} : (x,y) \in B\}$</span></p> <p><span class="math-container">$$\mathbb{P}(A | X = x) := \int_{B_x}^{} f_{Y|X}(x,y)~\mathrm{d}y, $$</span> at least at all points <span class="math-container">$x$</span> where <span class="math-container">$f(x) &gt; 0$</span>. Note that this definition depends on the choice of representatives we made for the densities <span class="math-container">$f_{X,Y}$</span> and <span class="math-container">$f_{X}$</span>, and we should keep this in mind when trying to interpret <span class="math-container">$P(A|X=x)$</span> pointwise. Whichever choice we made, the law of total probability holds as can be seen as follows:</p> <p><span class="math-container">\begin{align*} \mathbb{P}(A) &amp;= \mathbb{E}[1_{B}(X,Y)] = \int_{B} f_{X,Y}(x,y)~\mathrm{d}y~\mathrm{d}x = \int_{-\infty}^{\infty}\int_{B_x} f_{X,Y}(x,y)~\mathrm{d}y~\mathrm{d}x \\ &amp;= \int_{-\infty}^{\infty}f_{X}(x)\int_{B_x} f_{Y|X}(x,y)~\mathrm{d}y~\mathrm{d}x = \int_{-\infty}^{\infty}\mathbb{P}(A|X=x)~ f_X(x)~\mathrm{d}x. 
\end{align*}</span></p> <p>One can convince themselves that this construction gives us the properties we would expect if, for example, <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are independent, which should give us some confidence that this notion of conditional probability makes sense.</p> <hr /> <h2>Disintegrations</h2> <p>The more general name for the concept we dealt with in the previous paragraph is <a href="https://en.wikipedia.org/wiki/Disintegration_theorem#Statement_of_the_theorem" rel="noreferrer">disintegration</a>. In complete generality, disintegrations need not exist, however if the probability space <span class="math-container">$\Omega$</span> is a Radon space equipped with its Borel <span class="math-container">$\sigma$</span>-field, they do. It might seem off-putting that the topology of the probability space now comes into play, but I believe for most purposes it will not be a severe restriction to assume that the probability space is a (possibly infinite) product of the space <span class="math-container">$([0,1],\mathcal{B},\lambda)$</span>, that is, <span class="math-container">$[0,1]$</span> equipped with the Euclidean topology, Borel <span class="math-container">$\sigma$</span>-field and Lebesgue measure. A one-dimensional variable <span class="math-container">$X$</span> can then be understood as <span class="math-container">$X(\omega) = F^{-1}(\omega)$</span>, where <span class="math-container">$F^{-1}$</span> is the generalized inverse of the cumulative distribution function of <span class="math-container">$X$</span>. 
The <a href="https://en.wikipedia.org/wiki/Disintegration_theorem#Statement_of_the_theorem" rel="noreferrer">disintegration theorem</a> then gives us the existence of a family of measures <span class="math-container">$(\mu_x)_{x \in \mathbb{R}}$</span>, where <span class="math-container">$\mu_x$</span> is supported on the event <span class="math-container">$[X=x]$</span>, and the family <span class="math-container">$(\mu_x)_{x\in \mathbb{R}}$</span> is unique up to <span class="math-container">$\text{law}(X)$</span>-almost everywhere equivalence. Writing <span class="math-container">$\mu_x$</span> as <span class="math-container">$\mathbb{P}(\cdot|X=x)$</span>, in particular, for any Borel set <span class="math-container">$A \in \mathcal{B}$</span> we then again have</p> <p><span class="math-container">$$\mathbb{P}(A) = \int_{-\infty}^{\infty} \mathbb{P}(A|X=x)~f_X(x)~\mathrm{d}x.$$</span></p> <hr /> <p>Reference for Kolmogorov quote:</p> <p><em>Kolmogoroff, A.</em>, Grundbegriffe der Wahrscheinlichkeitsrechnung., Ergebnisse der Mathematik und ihrer Grenzgebiete 2, Nr. 3. Berlin: Julius Springer. IV + 62 S. (1933). <a href="https://zbmath.org/?q=an:59.1152.03" rel="noreferrer">ZBL59.1152.03</a>.</p>
<p>Think of it like this: Suppose you have a continuous random variable <span class="math-container">$X$</span> with pdf <span class="math-container">$f(x)$</span>. Then <span class="math-container">$P(A)=E(1_{A})=E[E(1_{A}|X)]=\int E(1_{A}|X=x)f(x)dx=\int P(A|X=x)f(x)dx$</span>.</p>
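<p>The identity <span class="math-container">$P(A)=\int P(A|X=x)f(x)dx$</span> can also be checked numerically. The following sketch (my own illustration, not from either answer) takes <span class="math-container">$X \sim N(0,1)$</span>, <span class="math-container">$Y = X + $</span> independent <span class="math-container">$N(0,1)$</span> noise, and the event <span class="math-container">$A = [Y &gt; 1]$</span>, and compares a direct Monte Carlo estimate of <span class="math-container">$\mathbb{P}(A)$</span> with the integral of <span class="math-container">$\mathbb{P}(A|X=x)f_X(x)$</span>:</p>

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(42)
n = 1_000_000
x = rng.standard_normal(n)          # X ~ N(0, 1), density f_X
y = x + rng.standard_normal(n)      # Y = X + independent N(0, 1) noise

# The event A = [Y > 1], estimated directly from the sample ...
p_direct = np.mean(y > 1)

# ... and via P(A) = integral of P(A | X = x) f_X(x) dx, using
# P(Y > 1 | X = x) = P(noise > 1 - x) = 1 - Phi(1 - x)
Phi = lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0)))
xs = np.linspace(-8.0, 8.0, 4001)
dx = xs[1] - xs[0]
f_X = np.exp(-xs**2 / 2) / np.sqrt(2 * np.pi)
cond = np.array([1.0 - Phi(1.0 - t) for t in xs])
p_integral = np.sum(cond * f_X) * dx

print(p_direct, p_integral)  # both close to 1 - Phi(1/sqrt(2))
```

<p>Both estimates agree to three decimal places, which is exactly what the disintegration identity predicts.</p>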
logic
<p>For some reason, be it some bad habit or something else, I can not understand why the statement "p only if q" would translate into p implies q. For instance, I have the statement "Samir will attend the party only if Kanti will be there." The way I interpret this is, "It is true that Samir will attend the party only if it is true that Kanti will be at the party;" which, in my mind, becomes "If Kanti will be at the party, then Samir will be there." </p> <p>Can someone convince me of the right way?</p> <p>EDIT:</p> <p>I have read them carefully, and probably have done so for over a year. I understand what sufficient conditions and necessary conditions are. I understand the conditional relationship in almost all of its forms, except the form "q only if p." <strong>What I do not understand is, why is p the necessary condition and q the sufficient condition.</strong> I am <strong>not</strong> asking, what are the sufficient and necessary conditions, rather, I am asking <strong>why.</strong></p>
<p>Think about it: "$p$ only if $q$" means that $q$ is a <strong>necessary condition</strong> for $p$. It means that $p$ can occur <strong>only when</strong> $q$ has occurred. This means that whenever we have $p$, it must also be that we have $q$, as $p$ can happen only if we have $q$: that is to say, that $p$ <strong>cannot happen</strong> if we <strong>do not</strong> have $q$. </p> <p>The critical line is <em>whenever we have $p$, we must also have $q$</em>: this allows us to say that $p \Rightarrow q$, or $p$ implies $q$.</p> <p>To use this on your example: we have the statement "Samir will attend the party only if Kanti attends the party." So if Samir attends the party, then Kanti must be at the party, because Samir will attend the party <strong>only if</strong> Kanti attends the party.</p> <p>EDIT: It is a common mistake to read <em>only if</em> as a stronger form of <em>if</em>. It is important to emphasize that <em>$q$ if $p$</em> means that $p$ is a <strong>sufficient condition</strong> for $q$, and that <em>$q$ only if $p$</em> means that $p$ is a <strong>necessary condition</strong> for $q$.</p> <p>Furthermore, we can supply more intuition on this fact: Consider $q$ <em>only if</em> $p$. It means that $q$ can occur only when $p$ has occurred: so if we don't have $p$, we can't have $q$, because $p$ is necessary for $q$. We note that <em>if we don't have $p$, then we can't have $q$</em> is a logical statement in itself: $\lnot p \Rightarrow \lnot q$. We know that all logical statements of this form are equivalent to their contrapositives. Take the contrapositive of $\lnot p \Rightarrow \lnot q$: it is $\lnot \lnot q \Rightarrow \lnot \lnot p$, which is equivalent to $q \Rightarrow p$.</p>
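<p>The equivalences this answer relies on — "$p$ only if $q$", the material conditional $p \Rightarrow q$, and the contrapositive $\lnot q \Rightarrow \lnot p$ — can be checked mechanically over all truth assignments. A small illustrative sketch (not from the answer itself):</p>

```python
from itertools import product

def implies(a, b):
    # material conditional: false only when a is true and b is false
    return (not a) or b

for p, q in product([False, True], repeat=2):
    only_if = implies(p, q)                 # "p only if q"
    contrapositive = implies(not q, not p)  # "no q rules out p"
    assert only_if == contrapositive
    # the single falsifying row: p holds but q does not
    assert only_if == (not (p and not q))
```

<p>The loop passes for all four rows, confirming that "$p$ only if $q$" and $\lnot q \Rightarrow \lnot p$ are the same truth function.</p>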
<p>I don't think there's really anything to <em>understand</em> here. One simply has to learn as a fact that in mathematics jargon the words "only if" invariably encode that particular meaning. It is not really forced by the everyday meanings of "only" and "if" in isolation; it's just how it is.</p> <p>By this I mean that the mathematical meaning is certainly a <em>possible</em> meaning of the English phrase "only if", the mathematical meaning is not the <em>only</em> possible way "only if" can be used in everyday English, and it just needs to be memorized as a fact that the meaning in mathematics is less flexible than in ordinary conversation.</p> <p>To see that the mathematical meaning is at least <em>possible</em> for ordinary language, consider the sentence</p> <blockquote> <p>John smokes only on Saturdays.</p> </blockquote> <p>From this we can conclude that if we see John puffing on a cigarette, then today must be a Saturday. We <em>cannot</em>, out of ordinary common sense, conclude that if we look at the calendar and it says today is Saturday, then John must <em>currently</em> be lighting up -- because the claim doesn't say that John smokes <em>continuously</em> for the entire Saturday, or even every Saturday.</p> <p>Now, if we can agree that there's no essential difference between "if" and "when" in this context, this might as well be phrased as</p> <blockquote> <p>John is smoking now <em>only if</em> today is a Saturday.</p> </blockquote> <p>which (according to the above analysis) ought to mean, mathematically, $$ \mathit{smokes}(\mathit{John}) \implies \mathit{today}=\mathit{Saturday} $$</p>
probability
<p>I was watching the movie <span class="math-container">$21$</span> yesterday, and in the first 15 minutes or so the main character is in a classroom, being asked a &quot;trick&quot; question (in the sense that the teacher believes that he'll get the wrong answer) which revolves around theoretical probability.</p> <p>The question goes a little something like this (I'm paraphrasing, but the numbers are all exact):</p> <p>You're on a game show, and you're given three doors. Behind one of the doors is a brand new car, behind the other two are donkeys. With each door you have a <span class="math-container">$1/3$</span> chance of winning. Which door would you pick?</p> <p>The character picks A, as the odds are all equally in his favor.</p> <p>The teacher then opens door C, revealing a donkey to be behind there, and asks him if he would like to change his choice. At this point he also explains that most people change their choices out of fear; paranoia; emotion and such.</p> <p>The character does change his answer to B, but because (according to the movie), the odds are now in favor of door B with a <span class="math-container">$1/3$</span> chance of winning if door A is picked and <span class="math-container">$2/3$</span> if door B is picked.</p> <p>What I don't understand is how removing the final door increases the odds of winning if door B is picked only. Surely the split should be 50/50 now, as removal of the final door tells you nothing about the first two?</p> <p>I assume that I'm wrong; as I'd really like to think that they wouldn't make a movie that's so mathematically incorrect, but I just can't seem to understand why this is the case.</p> <p>So, if anyone could tell me whether I'm right; or if not explain why, I would be extremely grateful.</p>
<p>This problem, known as the Monty Hall problem, is famous for being so bizarre and counter-intuitive. It is in fact best to switch doors, and this is not hard to prove either. In my opinion, the reason it seems so bizarre the first time one (including me) encounters it is that humans are simply bad at thinking about probability. What follows is essentially how I have justified switching doors to myself over the years.</p> <p>At the start of the game, you are asked to pick a single door. There is a $1/3$ chance that you have picked correctly, and a $2/3$ chance that you are wrong. This does not change when one of the two doors you did not pick is opened. The second time, you are really choosing between whether your first guess was right (which has probability $1/3$) or wrong (probability $2/3$). Clearly it is more likely that your first guess was wrong, so you switch doors.</p> <p>This didn't sit well with me when I first heard it. To me, it seemed that the situation of picking between two doors has a certain kind of <em>symmetry</em>: things are either behind one door or the other, with equal probability. Since this is not the case here, I was led to ask: where does the asymmetry come from? What causes one door to be more likely to hold the prize than the other? The key is that the host <em>knows</em> which door has the prize, and opens a door that he knows does not have the prize behind it.</p> <p>To clarify this, say you choose door $A$, and are then asked to choose between doors $A$ and $B$ (no doors have been opened yet). There is no advantage to switching in this situation. Say you are asked to choose between $A$ and $C$; again, there is no advantage in switching. However, what if you are asked to choose between a) the prize behind door $A$ and b) the better of the two prizes behind doors $B$ and $C$? Clearly, in this case it is to your advantage to switch. But this is exactly the same problem as the one you've been confronted with! Why? 
Precisely because the host <em>always</em> opens (hence gets rid of) the door that you did not pick which has the worse prize behind it. This is what I mean when I say that the asymmetry in the situation comes from the knowledge of the host.</p>
<p>To understand why your odds increase by changing door, let us take an extreme example first. Say there are $10000$ doors. Behind one of them is a car and behind the rest are donkeys. Now, the odds of choosing the car are $1\over10000$ and the odds of choosing a donkey are $9999\over10000$. Say you pick a random door, which we call X for now. According to the rules of the game, the game show host now opens all the doors except your door and one other, never revealing the car. You now have the option to switch. Since the probability of not choosing the car initially was $9999\over10000$, it is very likely you didn't choose the car. So assuming that door X hides a donkey, switching gets you the car. This means that as long as you pick a donkey on your first try, switching will always get you the car. </p> <p>If we return to the original problem where there are only 3 doors, we see that the exact same logic applies. The probability that you choose a donkey on your first try is $2\over3$, while choosing the car is $1\over3$. If you choose a donkey on your first try and switch you will get the car, and if you choose the car on your first try and switch, you will get a donkey. Thus, the probability that you will get the car if you switch is $2\over3$ (which is more than the initial $1\over3$).</p>
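<p>Both answers can be backed by simulation. The following sketch (hypothetical code, not from either answer) plays the game many times and compares the stay and switch policies:</p>

```python
import random

def monty_hall(switch, trials=100_000, rng=random):
    """Simulate the game; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(3)
        pick = rng.randrange(3)
        # the host opens a door that is neither the contestant's pick
        # nor the prize door -- this is where his knowledge enters
        opened = next(d for d in range(3) if d != pick and d != prize)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials

random.seed(0)
stay, swap = monty_hall(switch=False), monty_hall(switch=True)
print(stay, swap)  # staying wins about 1/3 of the time, switching about 2/3
```

<p>Running it reproduces the movie's claim: the switcher wins roughly twice as often as the stayer.</p>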
logic
<p>I stumbled across article titled <a href="http://math.andrej.com/2010/03/29/proof-of-negation-and-proof-by-contradiction/" rel="noreferrer">"Proof of negation and proof by contradiction"</a> in which the author differentiates proof by contradiction and proof by negation and denounces an abuse of language that is "bad for mental hygiene". I get that it is probably a hyperbole but I am genuinely curious about what's so horrible in using those two interchangeably and I struggle to see any difference at all.</p> <ul> <li><strong>to prove $\neg\phi$ assume $\phi$ and derive absurdity</strong> (<em>proof by negation</em>)</li> </ul> <p>and </p> <ul> <li><strong>to prove $\phi$ suppose $\neg\phi$ and derive absurdity</strong> (<em>proof by contradiction</em>) </li> </ul> <p>More specifically the author claims that:</p> <blockquote> <p>The difference in placement of negations is not easily appreciated by classical mathematicians because their brains automagically cancel out double negations, just like good students automatically cancel out double negation signs.</p> </blockquote> <p>Could you provide an example where the "difference in placement of negations" can be appreciated and make a difference?</p> <p>The author later use two cases: the irrationality of $\sqrt{2}$ and the statement "a continuous map [0,1) on $\mathbb{R}$ is bounded" but I can't see the difference. If I massage the semantics of the proof a little bit I obtain two valid proofs as well using negation/contradiction.</p> <blockquote> <p>Can we turn this proof into one that does not use contradiction (but still uses Bolzano-Weierstrass)? </p> </blockquote> <p>Why would we want to do that if both proof methods are equivalent?</p> <p>I feel that the crux of the article is the following sentence:</p> <blockquote> <p>A classical mathematician will quickly remark that we can get either of the two principles from the other by plugging in ¬ϕ and cancelling the double negation in ¬¬ϕ to get back to ϕ. 
Yes indeed, but the cancellation of double negation is precisely the reasoning principle we are trying to get. These really are different.</p> </blockquote> <p>I have done some research and it seems that $\neg\neg\phi$ is the issue here. To quote <a href="http://en.wikipedia.org/wiki/Double_negation" rel="noreferrer">Wikipedia\Double_Negation</a> on that:</p> <blockquote> <p>this principle is considered to be a law of thought in classical logic,<a href="http://en.wikipedia.org/wiki/Double_negation" rel="noreferrer">2</a> but it is disallowed by intuitionistic logic</p> </blockquote> <p>I should probably mention that my maths background is pretty limited so far, as I am finishing my first year in college. This is bugging me and I would really appreciate it if someone could explain it to me. Layman's terms would be great, but feel free to dive deeper as well. That will be homework for the summer (and interesting for more advanced readers)!</p> <p><strong>Are proofs by contradiction and proofs of negation equivalent? If not, in which situations do the differences matter and what makes them different?</strong></p>
<p>Proof of negation and proof by contradiction are equivalent in classical logic. However they are not equivalent in constructive logic.</p> <p>One would usually define <span class="math-container">$\neg \phi$</span> as <span class="math-container">$\phi \rightarrow \perp$</span>, where <span class="math-container">$\perp$</span> stands for contradiction / absurdity / falsum. Then the proof of negation is nothing more than an instance of &quot;implication introduction&quot;:</p> <p>If <span class="math-container">$B$</span> follows from <span class="math-container">$A$</span>, then <span class="math-container">$A\rightarrow B$</span>. So in particular: If <span class="math-container">$\perp$</span> follows from <span class="math-container">$\phi$</span>, then <span class="math-container">$\phi \rightarrow \perp$</span> (<span class="math-container">$\neg \phi$</span>).</p> <p>The following rule is of course just a special case:</p> <p>If <span class="math-container">$\perp$</span> follows from <span class="math-container">$\neg \phi$</span>, then <span class="math-container">$\neg \neg \phi$</span>.</p> <p>But the rule <span class="math-container">$\neg \neg \phi \rightarrow \phi$</span> is not valid in constructive logic in general. It is equivalent to the law of excluded middle (<span class="math-container">$\phi\vee \neg \phi$</span>). If you add this rule to your logic, you get classical logic.</p>
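<p>The distinction can be made concrete in a proof assistant. A Lean 4 sketch: the first two proofs are pure implication introduction and go through constructively, while the third cannot be written without invoking a classical principle.</p>

```lean
-- Proof of negation: from a derivation of absurdity out of φ we get ¬φ.
-- This is just implication introduction, valid constructively.
example (φ : Prop) (h : φ → False) : ¬φ := h

-- The special case with ¬φ in place of φ yields only ¬¬φ, still constructively.
example (φ : Prop) (h : ¬φ → False) : ¬¬φ := h

-- Proof by contradiction: discharging the double negation to recover φ
-- genuinely needs a classical axiom.
example (φ : Prop) (h : ¬¬φ) : φ := Classical.byContradiction h
```

<p>Note that the third proof term mentions <code>Classical</code> explicitly; in a purely constructive development it simply does not typecheck without that axiom.</p>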
<p>Intuitionistic refusal of <em>double negation</em> is particularly relevant in the context of &quot;existence proofs&quot;; see :</p> <ul> <li>Sara Negri &amp; Jan von Plato, <a href="https://books.google.it/books?id=ZvACGkn9138C&amp;pg=PA26" rel="noreferrer">Structural Proof Theory</a> (2001), page 26 :</li> </ul> <blockquote> <p>Classical logic contains the principle of indirect proof: If <span class="math-container">$¬A$</span> leads to a contradiction, <span class="math-container">$A$</span> can be inferred. Axiomatically expressed, this principle is contained in the law of double negation, <span class="math-container">$¬¬A → A$</span>. The law of excluded middle, <span class="math-container">$A \lor ¬A$</span>, is a somewhat stronger way of expressing the same principle.</p> <p>Under the constructive interpretation, the law of excluded middle is not an empty &quot;tautology,&quot; but expresses the decidability of proposition <span class="math-container">$A$</span>. Similarly, a direct proof of an existential proposition <span class="math-container">$∃xA$</span> consists of a proof of <span class="math-container">$A$</span> for some [&quot;witness&quot;] <span class="math-container">$a$</span>. Classically, we can prove existence indirectly by assuming that there is no <span class="math-container">$x$</span> such that <span class="math-container">$A$</span>, then deriving a contradiction, and concluding that such an <span class="math-container">$x$</span> exists. Here the classical law of double negation is used for deriving <span class="math-container">$∃xA$</span> from <span class="math-container">$¬¬∃xA$</span>.</p> </blockquote> <p>Thus, it is correct to say that, from a <em>constructivist</em> point of view, <em>tertium non datur</em> [i.e. 
<em>excluded middle</em>] does not apply <strong>in general</strong>.</p> <p>Its application to existence proofs implies that the existence of a <em>witness</em> of <span class="math-container">$A$</span> is <em>undecided/unproven</em> until we are able to &quot;show it&quot;.</p>
logic
<p>Why is this true?</p> <p><span class="math-container">$\exists x\,\big(P(x) \rightarrow \forall y\:P(y)\big)$</span></p>
<p>Since this may be homework, I do not want to provide the full formal proof, but I will share the informal justification. Classical first-order logic typically makes the assumption of existential import (i.e., that the domain of discourse is non-empty). In classical logic, the principle of excluded middle holds, i.e., that for any $\phi$, either $\phi$ or $\lnot\phi$ holds. Since I first encountered this kind of sentence where $P(x)$ was interpreted as "$x$ is a bird," I will use that in the following argument. Finally, recall that a material conditional $\phi \to \psi$ is true if and only if either $\phi$ is false or $\psi$ is true.</p> <p>By excluded middle, it is either true that everything is a bird, or that not everything is a bird. Let us consider these cases:</p> <ul> <li>If everything is a bird, then pick an arbitrary individual $x$, and note that the conditional “if $x$ is a bird, then everything is a bird,” is true, since the consequent is true. Therefore, if everything is a bird, then there is something such that if it is a bird, then everything is a bird.</li> <li>If it is not the case that everything is a bird, then there must be some $x$ which is not a bird. Then consider the conditional “if $x$ is a bird, then everything is a bird.” It is true because its antecedent, “$x$ is a bird,” is false. 
Therefore, if it is not the case that everything is a bird, then there is something (a non-bird, in fact) such that if it is a bird, then everything is a bird.</li> </ul> <p>Since it holds in each of the exhaustive cases that there is something such that if it is a bird, then everything is a bird, we conclude that there is, in fact, something such that if it is a bird, then everything is a bird.</p> <h2>Alternatives</h2> <p>Since questions about the domain came up in the comments, it seems worthwhile to consider the three preconditions to this argument: existential import (the domain is non-empty); excluded middle ($\phi \lor \lnot\phi$); and the material conditional ($(\phi \to \psi) \equiv (\lnot\phi \lor \psi)$). Each of these can be changed in a way that can affect the argument. This might not be the place to examine <em>how</em> each of these affects the argument, but we can at least give pointers to resources about the alternatives.</p> <ul> <li>Existential import asserts that the universe of discourse is non-empty. <a href="http://plato.stanford.edu/entries/logic-free/">Free logics</a> relax this constraint. If the universe of discourse were empty, it would seem that $\exists x.(P(x) \to \forall y.P(y))$ should be vacuously false.</li> <li><a href="http://en.wikipedia.org/wiki/Intuitionistic_logic">Intuitionistic logics</a> do not presume the excluded middle, in general. The argument above started with a claim of the form “either $\phi$ or $\lnot\phi$.”</li> <li>There are plenty of <a href="http://en.wikipedia.org/wiki/Material_conditional#Philosophical_problems_with_material_conditional">philosophical difficulties with the material conditional</a>, especially as used to represent “if … then …” sentences in natural language. 
If we took the conditional to be a counterfactual, for instance, and so were considering the sentence “there is something such that if it were a bird (even if it is not <em>actually</em> a bird), then everything would be a bird,” it seems like it should no longer be provable.</li> </ul>
<p>Hint: The only way for $A\implies B$ to be false is for $A$ to be true and $B$ to be false.</p> <p>I don't think this is actually true unless you know your domain isn't empty. If your domain is empty, then $\forall y: P(y)$ is true "vacuously," but $\exists x: Q$ is not true for any $Q$.</p>
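<p>Over any finite domain the sentence (and the empty-domain caveat above) can be verified by brute force, reading the arrow as a material conditional. A small sketch, not from either answer:</p>

```python
from itertools import product

def drinker(domain, P):
    # ∃x (P(x) → ∀y P(y)), with → read as the material conditional:
    # the witness x either falsifies P(x) or everything satisfies P
    return any((not P[x]) or all(P[y] for y in domain) for x in domain)

domain = [0, 1, 2]
# true for every one of the 2^3 predicates on a 3-element domain
assert all(drinker(domain, dict(zip(domain, vals)))
           for vals in product([False, True], repeat=len(domain)))

# but false on the empty domain: the existential needs a witness
assert not drinker([], {})
```

<p>The exhaustive check mirrors the case split in the accepted answer: either every element satisfies <span class="math-container">$P$</span>, or some non-<span class="math-container">$P$</span> element serves as the witness.</p>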
logic
<p>I remember once hearing offhandedly that in set builder notation, there was a difference between using a colon versus a vertical line, e.g. $\{x: x \in A\}$ as opposed to $\{x\mid x \in A\}$. I've tried searching for the distinction, but have come up empty-handed.</p>
<p>There is no difference that I've ever heard of. I do strongly prefer "$\vert$" to "$\colon$", though, because I'm often interested in sets of maps, and e.g. $$\{f \mid f\colon \mathbb{R}\rightarrow\mathbb{C}\text{ with $f(6)=24$}\}$$ is easier to read than $$\{f: f\colon \mathbb{R}\rightarrow\mathbb{C}\text{ with $f(6)=24$}\}$$.</p> <p>EDIT: Note that as Mike Pierce's answer shows, sometimes "$:$" is clearer. At the end of the day, <em>use whichever notation is most clear for your context</em>. </p>
<p>There is no difference. The <em>bar</em> is just often easier to read than the <em>colon</em> (like in the example in Noah Schweber's answer). However in analysis and probability, the <em>bar</em> is used in other notation. In analysis it is used for absolute value (or distance or norms) and in probability it is used in conditional statements (the probability of $A$ given $B$ is $\operatorname{P}(A \mid B)$). So looking at <em>bar</em> versus <em>colon</em> in sets with these notations $$ \{x \in X \mid ||x|-|y_0||&lt;\varepsilon\} \quad\text{vs}\quad \{x \in X : ||x|-|y_0||&lt;\varepsilon\} $$ $$ \{A \subset X \mid \operatorname{P}(B \mid A) &gt; 0.42\} \quad\text{vs}\quad \{A \subset X : \operatorname{P}(B \mid A) &gt; 0.42\} $$ it can be better to use the <em>colon</em> just to avoid overloading the <em>bar</em>.</p>
linear-algebra
<p>How do we prove that</p> <p><span class="math-container">$\operatorname{rank}(A) = \operatorname{rank}(AA^T) = \operatorname{rank}(A^TA)$</span> ?</p> <p>Is it always true?</p>
<p>This is only true for real matrices. For instance $\begin{bmatrix} 1 &amp; i \\ 0 &amp;0 \end{bmatrix}\begin{bmatrix} 1 &amp; 0 \\ i &amp;0 \end{bmatrix}$ has rank zero. For complex matrices, you'll need to take the conjugate transpose.</p>
<p>Here is a common proof.</p> <p>All matrices in this note are real. Think of a vector <span class="math-container">$X$</span> as an <span class="math-container">$m\!\times\!1$</span> matrix. Let <span class="math-container">$A$</span> be an <span class="math-container">$m\!\times\!n$</span> matrix.</p> <p>We will prove that <span class="math-container">$A A^T X = 0$</span> if and only if <span class="math-container">$A^T X = 0$</span>.</p> <p>It is clear that <span class="math-container">$A^T X = 0$</span> implies <span class="math-container">$AA^T X = 0$</span>.</p> <p>Assume that <span class="math-container">$AA^T X = 0$</span> and set <span class="math-container">$Y = A^T\!X$</span>. Then <span class="math-container">$X^T\!A\, Y = 0$</span>, and thus <span class="math-container">$(A^T\!X)^T Y = 0$</span>. That is <span class="math-container">$Y^T Y = 0$</span>. Setting <span class="math-container">$Y = [y_1 \cdots y_n]^\top$</span> we obtain <span class="math-container">$0 = Y^T Y = y_1^2 + \cdots + y_n^2$</span>. Since the entries of <span class="math-container">$Y$</span> are real, for all <span class="math-container">$k \in \{1,\ldots,n\}$</span> we have <span class="math-container">$y_k^2 \geq 0$</span>. Therefore, <span class="math-container">$Y^T Y = 0$</span> yields <span class="math-container">$y_k = 0$</span> for all <span class="math-container">$k \in \{1,\ldots,n\}$</span>. Thus <span class="math-container">$Y = A^T X = 0$</span>.</p> <p>We just proved that the <span class="math-container">$m\!\times\!m$</span> matrix <span class="math-container">$AA^T$</span> and the <span class="math-container">$n\!\times\!m$</span> matrix <span class="math-container">$A^T$</span> have the same null space. Consequently, they have the same nullity. The nullity-rank theorem states that <span class="math-container">$$ \operatorname{Nul} AA^T + \operatorname{Rank} AA^T = m = \operatorname{Nul} A^T + \operatorname{Rank} A^T. 
$$</span></p> <p>Hence <span class="math-container">$\operatorname{Rank} AA^T = \operatorname{Rank} A^T$</span>.</p>
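<p>A quick numerical check of both the real-matrix identity and the complex counterexample from the other answer (a sketch using NumPy's rank routine):</p>

```python
import numpy as np

rank = np.linalg.matrix_rank
rng = np.random.default_rng(0)

# a real 5x7 matrix of rank (generically exactly) 3
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 7))
assert rank(A) == rank(A @ A.T) == rank(A.T @ A) == 3

# the complex counterexample: B B^T collapses to the zero matrix
B = np.array([[1, 1j], [0, 0]])
assert rank(B) == 1 and rank(B @ B.T) == 0

# using the conjugate transpose restores the identity
assert rank(B @ B.conj().T) == rank(B) == 1
```

<p>The last two assertions show exactly where the real-only proof breaks: over <span class="math-container">$\mathbb{C}$</span>, <span class="math-container">$Y^T Y = 0$</span> no longer forces <span class="math-container">$Y = 0$</span>, but <span class="math-container">$Y^* Y = 0$</span> does.</p>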
game-theory
<p>Apparently, I'm not understanding this simple concept. What are the differences between the two? Can a person have multiple pure strategies that change throughout the game?</p>
<p>A pure strategy determines <em>all</em> your moves during the game (and should therefore specify your moves for all possible other players' moves).</p> <p>A mixed strategy is a probability distribution over all possible pure strategies (some of which may get zero weight). After a player has determined a mixed strategy at the beginning of the game, using a randomising device, that player may pick one of those pure strategies and then stick to it.</p> <p>Also see <a href="http://en.wikipedia.org/wiki/Strategy_%28game_theory%29#Pure_and_mixed_strategies" rel="noreferrer">Wikipedia</a>.</p>
<p>I want to add an observation that might make mixed strategies more intuitive. This observation is based on [1].</p> <p>Suppose you are player $1$, the other players are $\{2,3,4, \dots, n\}$, and they are playing strategies $p_2, p_3, \dots$ </p> <p>Suppose that you have a choice between strategies $p_1$ or $p_1'$, such that each one of these actions gives you an equal (average) utility, and they are both "best responses" to your opponent's strategies. How do you choose between them? One approach is to not choose either, but to choose a randomized mixture of both of them: you select $x \in [0,1]$, and with probability $x$ you play $p_1$, else you play $p_1'$. </p> <p>By randomly choosing between your two actions, you can confuse your opponents, as they won't know which you'll do. This confusion might give you an advantage.</p> <p>In these cases, a mixed strategy can be rational. </p> <p>Of course, other players will also mix their strategies simultaneously. </p> <p>Note that if all other players are playing pure (i.e. non-random) strategies, then you should too. This is because if you "know" how other players play, then you should just play your exact optimal response; there is no advantage to randomizing in this case. </p> <p>[1] <a href="http://oyc.yale.edu/sites/default/files/mixed_strategies_handout_0.pdf" rel="nofollow noreferrer">http://oyc.yale.edu/sites/default/files/mixed_strategies_handout_0.pdf</a></p>
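<p>The "randomising device" from the accepted answer can be sketched in a few lines: a mixed strategy is just a probability distribution over pure strategies, sampled once and then followed. (Illustrative code; the 50/50 mix for matching pennies is the textbook equilibrium, not something derived here.)</p>

```python
import random

pure_strategies = ["Heads", "Tails"]  # the two pure strategies in matching pennies

def play_mixed(weights, rng=random):
    # a mixed strategy: randomise over the pure strategies once,
    # then commit to whichever pure strategy was drawn
    return rng.choices(pure_strategies, weights=weights)[0]

random.seed(1)
# the mixed Nash equilibrium of matching pennies puts weight 1/2 on each
plays = [play_mixed([0.5, 0.5]) for _ in range(10_000)]
frac_heads = plays.count("Heads") / len(plays)
print(frac_heads)  # close to 0.5, so opponents cannot predict the next move
```

<p>The empirical frequency hovers near $1/2$, which is exactly the unpredictability that makes the mixture a best response in matching pennies.</p>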