tag | question_body | accepted_answer | second_answer
---|---|---|---|
probability | <p>I have been looking at the <a href="http://en.wikipedia.org/wiki/Birthday_problem">birthday problem</a> and I am trying to figure out the probability that 3 people share a birthday in a room of 30 people (instead of 2).</p>
<p>I thought I understood the problem, but I guess not, since I have no idea how to do it with 3.</p>
| <p>The birthday problem with 2 people is quite easy because finding the probability of the complementary event "all birthdays distinct" is straightforward. For 3 people, the complementary event includes "all birthdays distinct", "one pair and the rest distinct", "two pairs and the rest distinct", etc. To find the exact value is pretty complicated. </p>
<p>The Poisson approximation is pretty good, though. Imagine checking every triple and calling it a "success" if all three have the same birthday. The total number of successes is approximately Poisson with mean value ${30 \choose 3}/365^2$. Here ${30 \choose 3}$ is the number of triples, and $1/365^2$ is the chance that any particular triple is a success.
The probability of getting at least one success is obtained from the Poisson distribution:
$$ P(\mbox{ at least one triple birthday with 30 people})\approx 1-\exp(-{30 \choose 3}/365^2)=.0300. $$ </p>
<p>You can modify this formula for other values, changing either 30 or 3. For instance,
$$ P(\mbox{ at least one triple birthday with 100 people})\approx 1-\exp(-{100 \choose 3}/365^2)=.7029,$$
$$ P(\mbox{ at least one double birthday with 25 people })\approx 1-\exp(-{25 \choose 2}/365)=.5604.$$</p>
<p>Poisson approximation is very useful in probability, not only for birthday problems! </p>
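<p>A minimal Python sketch of the approximation above (the generalization to a $k$-fold shared birthday, with mean ${n \choose k}/365^{k-1}$, and the helper name are illustrative assumptions, not part of the answer):</p>

```python
from math import comb, exp

def poisson_approx(n, k, days=365):
    """P(at least one k-fold shared birthday among n people), via the
    Poisson approximation: number of successes ~ Poisson with mean
    C(n, k) / days^(k-1)."""
    mean = comb(n, k) / days ** (k - 1)
    return 1 - exp(-mean)

print(round(poisson_approx(30, 3), 4))   # triple birthday, 30 people
print(round(poisson_approx(100, 3), 4))  # triple birthday, 100 people
print(round(poisson_approx(25, 2), 4))   # double birthday, 25 people
```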
| <p>An exact formula can be found in Anirban DasGupta, <a href="http://www.math.ucdavis.edu/~tracy/courses/math135A/UsefullCourseMaterial/birthday.pdf">
The matching, birthday and the strong birthday problem: a contemporary review</a>, Journal of Statistical Planning and Inference 130 (2005), 377-389. This paper claims that if $W$ is the number of triplets of people having the same birthday, $m$ is the number of days in the year, and $n$ is the number of people, then</p>
<p>$$ P(W \ge 1) = 1 - \sum_{i=0}^{\lfloor n/2 \rfloor} {m! n! \over i! (n-2i)! (m-n+i)! 2^i m^n} $$</p>
<p>No derivation or source is given; I think the idea is that the term corresponding to $i$ is the probability that there are $i$ birthdays shared by 2 people each and $n-2i$ birthdays with one person each.</p>
<p>In particular, if $m = 365, n = 30$ this formula gives $0.0285$, not far from Byron's approximation.</p>
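<p>A sketch of the sum in exact rational arithmetic (the helper name is an illustrative assumption; the binomial coefficients are just the factorial quotient $\frac{m!}{i!\,(n-2i)!\,(m-n+i)!}$ regrouped):</p>

```python
from fractions import Fraction
from math import comb, factorial

def p_at_least_one_triple(m, n):
    """P(W >= 1): some birthday is shared by 3 or more of the n people,
    with m days in the year.  The sum runs over i, the number of
    birthdays shared by exactly 2 people (everyone else distinct)."""
    no_triple = Fraction(0)
    for i in range(n // 2 + 1):
        # choose i days for the pairs and n - 2i days for the singletons,
        # then distribute the n people over those days
        ways = comb(m, i) * comb(m - i, n - 2 * i) * Fraction(factorial(n), 2 ** i)
        no_triple += ways / Fraction(m) ** n
    return 1 - no_triple

print(float(p_at_least_one_triple(365, 30)))
```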
|
geometry | <p>Is there a proof that the ratio of a circle's diameter and the circumference is the same for all circles, that doesn't involve some kind of limiting process, e.g. a direct geometrical proof?</p>
| <p>Limits are not involved in the problem of proving that $\pi(C)$ is independent of the circle $C$.</p>
<p>In geometrical definitions of $\pi$, to a circle $C$ is associated a sequence of finite polygonal objects and thus a sequence of numbers (or lengths, or areas, or ratios of those) $\pi_k(C)$. This sequence is thought of as a set of approximations converging to $\pi$, but that doesn't concern us here; what is important is that the sequence is <em>independent of the circle C</em>. Any further aspects of the sequence such as its limit or the rate of convergence will also be the same for any two circles.</p>
<p>(edit: an example of a "geometrical definition" of a sequence of approximants $\pi_k(C)$ is: perimeter of a regular $k$-sided polygon inscribed in circle C, divided by the diameter of C. Also, the use of words like <em>limit</em> and <em>approximation</em> above does not reflect any assumption that the sequences have limits or that an environment involving limits has been set up. We are demonstrating that if $\pi(C)$ is defined using some construction on the sequence, then whether that construction involves limits or not, it must produce the same answer for any two circles.)</p>
<p>The proof that $\pi_k(C_1) = \pi_k(C_2)$ of course would just apply the similarity of polygons and the behavior of length and area with respect to changes of scale. This argument does not assume a limit-based theory of length and area, because the theory of length and area for polygons in Euclidean geometry only requires dissections and rigid motions ("cut-and-paste equivalence" or <em>equidecomposability</em>). Any polygonal arc or region can be standardized to an interval or square by a finite number of (area and length preserving) cut-and-paste dissections. Numerical calculations involving the $\pi_k$, such as ratios of particular lengths or areas, can be understood either as applying to equidecomposability classes of polygons, or to the standardizations. In both interpretations, due to the similitude, the results will be the same for $C_1$ and $C_2$.</p>
<p>(You might think that this is proving a different conclusion, that the equidecomposability version of $\pi$ for the two circles is equal, and not the numerical equality of $\pi$ within a theory that has real numbers as lengths and areas for arbitrary curved figures. However, any real number-based theory, including elementary calculus, Jordan measure, and Lebesgue measure, is set up with a minimum requirement of compatibility with the geometric operations of dissection and rigid motion, so once equidecomposability is known, numerical equality will also follow.) </p>
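<p>A small numerical illustration (not part of the proof) of the independence claim, using the example definition of $\pi_k(C)$ above: the perimeter of an inscribed regular $k$-gon divided by the diameter depends only on $k$, since the radius cancels.</p>

```python
import math

def pi_k(k, radius):
    """Perimeter of a regular k-gon inscribed in a circle of the given
    radius, divided by the diameter.  Each side subtends an angle of
    2*pi/k at the centre, so its length is 2*radius*sin(pi/k)."""
    perimeter = k * 2 * radius * math.sin(math.pi / k)
    return perimeter / (2 * radius)

# the ratio is the same for circles of different sizes
for k in (6, 96, 10**6):
    print(k, pi_k(k, 1.0), pi_k(k, 2.5))
```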
| <p>Intuitively, all circles are similar and therefore doubling the diameter also doubles the circumference. The same applies to ratios other than 2.</p>
<p>To make this rigorous, we have to consider what we mean by “the length of the circumference.” The usual rigorous definition uses integration and therefore relies on the notion of limits. I guess that any rigorous definition of the length of a curve ultimately requires the notion of limits.</p>
<p>Edit: Rephrased a little to make the connection between the two paragraphs clearer.</p>
|
game-theory | <p>I have a fairly simple question. How many legal states of chess exists? "Legal" as in allowed by the rules and "state" as an unique configuration of the pieces.</p>
<p>I'm <em>not</em> asking for the number of possible chess games. I'm asking for the number of possible <em>legal states</em>, or chess board configurations, the rules of chess allows.</p>
<p>For example, a white pawn can't be on [A-H]1 and <strike>a king can't be in check by two different pieces at the same time</strike>. Such states are obviously not allowed by the rules and illegal.</p>
| <p>You should clarify whether you wish to differentiate between positions based on en passant, castling, and which side is to move. Francis Labelle and others have used the term "chess position" to indicate a board state including the above information, and "chess diagram" to indicate a board state not including the above information, i.e. just what pieces are on the board and where. In neither case is information for drawing rules like the 50-move rule or the triple repetition rule included.</p>
<p>The best upper bound found for the number of chess positions is 7728772977965919677164873487685453137329736522, or about $7.7 * 10^{45}$, based on a complicated program by John Tromp; according to him, better documentation is required in order for the program to be considered verifiable. He also has a much simpler program that gives an upper bound of $4.5 * 10^{46}$.</p>
<p>For chess diagrams, Tromp's simpler program gives an upper bound of about $2.2 * 10^{46}$; he does not say what bound is obtained by the complicated program, but it is probably a little less than half (since side to move doubles the bound, whereas castling and en passant add relatively little), so likely about $3.8 * 10^{45}$. More information is available at <a href="https://tromp.github.io/chess/chess.html" rel="noreferrer">Tromp's website</a>.</p>
<p>The best bound published in a journal was obtained by Shirish Chinchalkar in "An Upper Bound for the Number of Reachable Positions". I do not have access to this paper, but according to Tromp it is about $10^{46.25}$. Although it refers to "positions", it could very easily be a bound for the number of diagrams.</p>
<p>As for lower bounds, they are much more difficult, since a given position could be illegal for very subtle reasons. Wikipedia has claimed that the number of positions is "between $10^{43}$ and $10^{47}$", but I think it is unlikely that the lower bound has been proven.</p>
<p>I would guess that the actual number of diagrams is between $10^{44}$ and $10^{45}$, but this is purely speculation.</p>
| <p>Since you want all the states "that the rules allow," it is very unlikely that you can get an exact answer with current computers, but perhaps you can get upper and lower bounds. One hand-wavy argument for why this should be is that in order to count states, you would almost certainly have to enumerate them (or enumerate equivalence classes of them for which you have a counting formula). But in computer-based chess, enumerating possible future states (and understanding which state can lead to which) is how virtually all computer chess players work, and yet they cannot enumerate everything to the point of finding a winning strategy from the initial position (or proving that no winning strategy exists). Even supercomputers don't come close.</p>
|
combinatorics | <p>Is there a closed form of the following infinite product?
<span class="math-container">$$\prod_{n=1}^\infty\sqrt[2^n]{\frac{\Gamma(2^n+\frac{1}{2})}{\Gamma(2^n)}}$$</span></p>
| <p>The beautiful idea of Raymond Manzoni can actually be made rigorous. Consider a <strong>finite</strong> product $\prod_{n=1}^{L}$ and take its logarithm. After using duplication formula for the gamma function and telescoping, it simplifies to the following:
$$\sum_{n=1}^{L}\frac{1}{2^n}\ln\frac{\Gamma(2^n+\frac12)}{\Gamma(2^n)}=\left(1-2^{-L}\right)\ln\left(2\sqrt{\pi}\right)-2L\ln2+2\cdot\frac{1}{2^{L+1}}\ln\Gamma(2^{L+1}).$$
This is an exact relation, valid for any $L$. Now it suffices to use Stirling,
$$\frac{1}{N}\ln\Gamma(N)=\ln N-1+O\left(\frac{\ln N}{N}\right)\qquad \mathrm{as}\; N\rightarrow\infty$$
to get
$$\sum_{n=1}^{\infty}\frac{1}{2^n}\ln\frac{\Gamma(2^n+\frac12)}{\Gamma(2^n)}=\ln\left(2\sqrt{\pi}\right)+2\left(\ln 2-1\right)=\ln\frac{8\sqrt{\pi}}{e^2}.$$
So the answer is indeed $\displaystyle\frac{8\sqrt{\pi}}{e^2}$.</p>
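<p>A quick numerical check of the partial products (using <code>lgamma</code> to avoid overflow; the cutoff $L=20$ is an arbitrary choice):</p>

```python
import math

def log_partial_product(L):
    """log of the finite product prod_{n=1}^{L} (Gamma(2^n + 1/2) / Gamma(2^n))^(1/2^n),
    computed via lgamma so that the huge Gamma values never overflow."""
    return sum((math.lgamma(2**n + 0.5) - math.lgamma(2**n)) / 2**n
               for n in range(1, L + 1))

closed_form = 8 * math.sqrt(math.pi) / math.e**2
print(math.exp(log_partial_product(20)), closed_form)
```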
| <p>I got $\quad \displaystyle 8\sqrt{\pi}\,e^{-2}$</p>
<p>Now let's search a proof...</p>
<p>I'll use the classical 'duplication formula' for $\Gamma$ :
$$\Gamma(z)\Gamma\left(z+\frac 12\right)=\sqrt{2\pi}\ 2^{1/2-2z}\,\Gamma(2z) $$
so that we have :
$$\frac{\Gamma(2^n)\Gamma\left(2^n+\frac 12\right)}{2^{-2^{n+1}}\,\Gamma\left(2^{n+1}\right)}=2\sqrt{\pi}\ $$
and I thought of using a 'kind of multiplicative telescoping', but it seems that considering telescoping of the logarithms, as nicely done by O.L., is less confusing!</p>
|
probability | <p>Intuitively, what's the difference between 2 following terms on the right hand side of the law of total variance?</p>
<p><span class="math-container">$$\text{Var}(Y) = \Bbb E\left[\text{Var}\left(Y|X\right)\right] + \text{Var}\left(\Bbb E[Y|X]\right)$$</span></p>
| <p>This law is assuming that you are "breaking up" the sample space for $Y$ based on the values of some other random variable $X$.</p>
<p>In this context, both $Var(Y|X)$ and $E[Y|X]$ are <em>random variables</em>: each is a function of $X$, so a realization is obtained by first drawing $X$ from its unconditional distribution and then computing the conditional variance or mean of $Y$ given $X=x$.</p>
<p>The first term says that we want the expected variance of $Y$ as we average over all values of $X$. HOWEVER, remember that the $Var[Y|X=x]$ is taken <em>with respect to the conditional mean</em> $E[Y|X=x]$. Therefore, this does not take into account the movement of the mean itself, just the variation about each, possibly varying, mean.</p>
<p>This is where the second term comes in: It does not care about the variability <em>about</em> $E[Y|X=x]$, just the variability of $E[Y|X]$ itself.</p>
<p>If we treat each $X=x$ as a separate "treatment", then the first term is measuring the average <em>within sample</em> variance, while the second is measuring the <em>between sample</em> variance.</p>
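<p>The within/between decomposition can be verified exactly on a small discrete example (the joint pmf below is an arbitrary illustration):</p>

```python
# an arbitrary joint pmf of (X, Y) on a small support
pmf = {(0, 0): 0.2, (0, 1): 0.3, (1, 1): 0.1, (1, 3): 0.4}

def mean_and_var(weighted):
    """Mean and variance of a list of (weight, value) pairs."""
    total = sum(w for w, _ in weighted)
    m = sum(w * v for w, v in weighted) / total
    return m, sum(w * (v - m) ** 2 for w, v in weighted) / total

_, var_y = mean_and_var([(p, y) for (_, y), p in pmf.items()])

xs = sorted({x for x, _ in pmf})
p_x = {x: sum(p for (x2, _), p in pmf.items() if x2 == x) for x in xs}
cond = {x: mean_and_var([(p, y) for (x2, y), p in pmf.items() if x2 == x]) for x in xs}

e_var = sum(p_x[x] * cond[x][1] for x in xs)                 # E[Var(Y|X)]: within
_, var_e = mean_and_var([(p_x[x], cond[x][0]) for x in xs])  # Var(E[Y|X]): between

print(var_y, e_var + var_e)
```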
| <p>From my experience, people learning about that theorem for the first time often have trouble understanding why the second term, <em>i.e.</em> <span class="math-container">$\mathrm{Var}[\mathrm{E}(Y|X)]$</span>, is necessary. Since the question asks for the intuition, I think a visual explanation that can also act as a mnemonic device could be a useful addition to the already existing answers.</p>
<p>Let's assume <span class="math-container">$P(x,y)$</span> is given by a 2D Gaussian that is aligned with the axes:</p>
<p><a href="https://i.sstatic.net/5PvKam.png" rel="noreferrer"><img src="https://i.sstatic.net/5PvKam.png" alt="enter image description here"></a> </p>
<p>Now, for each fixed value <span class="math-container">$X=X_i$</span>, we get a distribution <span class="math-container">$P(Y|X=X_i)$</span>, as in the figure below:</p>
<p><a href="https://i.sstatic.net/3A5yw.png" rel="noreferrer"><img src="https://i.sstatic.net/3A5yw.png" alt="enter image description here"></a></p>
<p>Since all of those 1D distributions have the same expectation <span class="math-container">$E(Y)$</span>, it intuitively makes sense that <span class="math-container">$\mathrm{Var}[Y]$</span> should be the average of all their individual variances, <em>i.e.</em>, we have <span class="math-container">$\mathrm{Var}(Y)=\mathrm{E}[\mathrm{Var}[Y|X]]$</span>, which is the first term in the theorem as written in the question.</p>
<p>Now, let's see what happens if we rotate the 2D Gaussian so that it is no longer aligned with the axes:</p>
<p><a href="https://i.sstatic.net/KPokO.png" rel="noreferrer"><img src="https://i.sstatic.net/KPokO.png" alt="enter image description here"></a></p>
<p>We see that in this case, <span class="math-container">$\mathrm{Var}[Y]$</span> doesn't only depend on the individual variances of the <span class="math-container">$P(Y|X=X_i)$</span> distributions, but that it also depends on how spread out the distributions themselves are along the <span class="math-container">$Y$</span> axis. For example, the further the mean of <span class="math-container">$P(Y|X=X_n)$</span> is from the mean of <span class="math-container">$P(Y|X=X_1)$</span>, the larger the overall interval spanned by all the values of <span class="math-container">$Y$</span> will be. As a result, it is no longer sufficient to only consider <span class="math-container">$\mathrm{Var}(Y)=\mathrm{E}[\mathrm{Var}[Y|X]]$</span>, and we need to account for the variability of the means of the <span class="math-container">$P(Y|X=X_i)$</span> distributions. This is the intuition behind the second term, <em>i.e.</em> <span class="math-container">$\mathrm{Var}[\mathrm{E}(Y|X)]$</span>.</p>
|
logic | <p>I have read <a href="https://math.stackexchange.com/questions/3270/what-does-only-mean">this</a> question. I am now stuck with the difference between "<em>if and only if</em>" and "<em>only if</em>". Please help me out.</p>
<p>Thanks</p>
| <p>Let's assume A and B are two statements. Then to say "A only if B" means that A can only ever be true when B is true. That is, B is necessary for A to be true. To say "A if and only if B" means that A is true if B is true, and B is true if A is true. That is, A is necessary and sufficient for B. Succinctly,</p>
<p>$A \text{ only if } B$ is the logic statement $A \Rightarrow B$. </p>
<p>$A \text{ iff } B$ is the statement $(A \Rightarrow B) \land (B \Rightarrow A)$.</p>
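<p>The distinction can be checked with a small truth table (the function names are illustrative):</p>

```python
from itertools import product

def only_if(a, b):
    """'A only if B', i.e. A => B."""
    return (not a) or b

def iff(a, b):
    """'A if and only if B', i.e. (A => B) and (B => A)."""
    return only_if(a, b) and only_if(b, a)

for a, b in product([False, True], repeat=2):
    print(a, b, only_if(a, b), iff(a, b))
```

<p>The two differ exactly when $B$ holds but $A$ does not: "only if" is satisfied there, while "if and only if" is not.</p>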
| <p>I will find a million dollars inside this locker only if I know the combination.</p>
<p>But that doesn't mean I will find a million dollars there <em>if</em> I know the combination. After all, there might be only a half million in there.</p>
|
combinatorics | <p>The <a href="http://en.wikipedia.org/wiki/Catalan_number" rel="nofollow noreferrer">Catalan numbers</a> have a reputation for turning up everywhere, but the occurrence described below, in the analysis of an (incorrect) algorithm, is still mysterious to me, and I'm curious to find an explanation.</p>
<hr />
<p>For situations where a quadratic-time sorting algorithm is fast enough, I usually use the following:</p>
<pre><code>//Given array a[1], ... a[n]
for i = 1 to n:
for j = i+1 to n:
if a[i] > a[j]:
swap(a[i],a[j])
</code></pre>
<p>It looks like bubble sort, but is closer to selection sort. It is easy to see why it works: in each iteration of the outer loop, <code>a[i]</code> is set to be the smallest element of <code>a[i…n]</code>.</p>
<p>In a programming contest many years ago, one of the problems essentially boiled down to sorting:</p>
<blockquote>
<p>Given a list of distinct values <span class="math-container">$W_1, W_2, \dots, W_n$</span>, find the indices when it is sorted in ascending order. In other words, find the permutation <span class="math-container">$(S_1, S_2, \dots, S_n)$</span> for which <span class="math-container">$W_{S_1} < W_{S_2} < \dots < W_{S_n}$</span>.</p>
</blockquote>
<p>This is simply a matter of operating on the indices rather than on the array directly, so the correct code would be:</p>
<pre><code>//Given arrays S[1], ..., S[n] (initially S[i]=i ∀i) and W[1], ..., W[n]
for i = 1 to n:
for j = i+1 to n:
if W[S[i]] > W[S[j]]:
swap(S[i],S[j])
</code></pre>
<p>But in the heat of the contest, I instead coded a program that did, incorrectly:</p>
<pre><code>for i = 1 to n:
for j = i+1 to n:
if W[i] > W[j]:
swap(S[i],S[j])
</code></pre>
<p>I realised the mistake after the contest ended, and later while awaiting the results, with desperate optimism I tried to figure out the odds that for some inputs, my program would accidentally give the right answer anyway. Specifically, I counted the number of permutations of an arbitrary list <span class="math-container">$W_1, \dots, W_n$</span> with distinct values (since only their <em>order</em> matters, not their actual values) for which the incorrect algorithm above gives the correct answer, for each n:</p>
<pre><code>n Number of "lucky" permutations
0 1
1 1
2 2
3 5
4 14
5 42
6 132
7 429
8 1430
9 4862
10 16796
11 58786
12 208012
</code></pre>
<p>These are the Catalan numbers! But why? I've tried to prove this occasionally in my free time, but never succeeded.</p>
<hr />
<p>What I've tried:</p>
<ul>
<li><p>The (pseudo)algorithm can be represented in more formal notation as the product of all inversions in a permutation. That is, we want to prove that the number of permutations <span class="math-container">$\sigma \in S_n$</span> such that <span class="math-container">$$\prod_{i=1}^{n}\prod_{\substack{j \in \left\{i+1,i+2,\ldots,n\right\}; \\ \sigma_i > \sigma_j}}(i,j) = \sigma^{-1}$$</span> (with the convention that multiplication is done left to right) is <span class="math-container">$C_n$</span>. This change of notation does not make the problem any simpler.</p>
</li>
<li><p>I briefly skimmed through <a href="http://www-math.mit.edu/%7Erstan/ec/" rel="nofollow noreferrer">Stanley's famous list of Catalan problems</a>, but this does not seem to be (directly) in the list. :-)</p>
</li>
<li><p>Some computer experimentation suggests that the lucky permutations are those that avoid the <a href="http://en.wikipedia.org/wiki/Permutation_pattern" rel="nofollow noreferrer">pattern</a> 312, the number of which is apparently the Catalan numbers. But I have no idea how to prove this, and it may not be the best approach...</p>
</li>
</ul>
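<p>For reference, the experiment can be reproduced with a short Python sketch of the incorrect algorithm (0-indexed; the function names are illustrative):</p>

```python
from itertools import permutations
from math import comb

def buggy_sort_indices(w):
    """The incorrect program: compares w[i] > w[j] but swaps s[i], s[j]."""
    n = len(w)
    s = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if w[i] > w[j]:
                s[i], s[j] = s[j], s[i]
    return s

def is_lucky(w):
    # lucky: the buggy output coincides with the true sorting permutation
    return buggy_sort_indices(w) == sorted(range(len(w)), key=lambda i: w[i])

def avoids_312(w):
    n = len(w)
    return not any(w[i] > w[k] > w[j]
                   for i in range(n) for j in range(i + 1, n) for k in range(j + 1, n))

for n in range(1, 8):
    lucky = [p for p in permutations(range(n)) if is_lucky(p)]
    catalan = comb(2 * n, n) // (n + 1)
    print(n, len(lucky), catalan, all(avoids_312(p) for p in lucky))
```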
| <p>Your suspicions are correct. Let's show that a permutation is lucky iff it avoids the pattern 312.</p>
<p>For an injection <span class="math-container">$W$</span> from <span class="math-container">$\{1,\ldots,k\}$</span> to <span class="math-container">$\{n-k+1,\ldots,n\}$</span>, let <span class="math-container">$N(W)$</span> denote the result of removing <span class="math-container">$W(1)$</span> and increasing all elements below <span class="math-container">$W(1)$</span> by <span class="math-container">$1$</span>. For example, <span class="math-container">$N(32514) = 3524$</span>.</p>
<p><b>Lemma 1.</b>
If <span class="math-container">$W$</span> avoids <span class="math-container">$312$</span> then so does <span class="math-container">$N(W)$</span>.</p>
<p><b>Proof.</b>
Clear since the relative order of elements in <span class="math-container">$N(W)$</span> is the same as the corresponding elements in <span class="math-container">$W$</span>.</p>
<p><b>Lemma 2.</b>
Suppose <span class="math-container">$W$</span> avoids <span class="math-container">$312$</span>. After running one round of the algorithm, <span class="math-container">$S(1)$</span> contains the index of the minimal element in <span class="math-container">$W$</span>, and <span class="math-container">$W \circ S = N(W)$</span>.</p>
<p><b>Proof.</b>
The lemma is clear if <span class="math-container">$W(1)$</span> is the minimal element. Otherwise, since <span class="math-container">$W$</span> avoids <span class="math-container">$312$</span>, all elements below <span class="math-container">$W(1)$</span> form a decreasing sequence <span class="math-container">$W(1) = W(i_1) > \cdots > W(i_k)$</span>. The algorithm puts the minimal one <span class="math-container">$W(i_k)$</span> at <span class="math-container">$S(1)$</span>, and puts <span class="math-container">$W(i_t)$</span> at position <span class="math-container">$i_{t+1}$</span>.</p>
<p><b>Theorem 1.</b>
If <span class="math-container">$W$</span> avoids <span class="math-container">$312$</span> then <span class="math-container">$W$</span> is lucky.</p>
<p><b>Proof.</b>
Apply Lemma 2 repeatedly. Lemma 1 ensures that the injection always avoids <span class="math-container">$312$</span>.</p>
<p>For the other direction, we need to be slightly more careful.</p>
<p><b>Lemma 3.</b>
If <span class="math-container">$W$</span> contains a pattern <span class="math-container">$312$</span> in which <span class="math-container">$3$</span> doesn't correspond to <span class="math-container">$W(1)$</span> then <span class="math-container">$N(W)$</span> contains a pattern <span class="math-container">$312$</span>.</p>
<p><b>Proof.</b>
The pattern survives in <span class="math-container">$N(W)$</span> since all relative positions are maintained.</p>
<p><b>Lemma 4.</b>
If <span class="math-container">$W$</span> doesn't contain a pattern <span class="math-container">$312$</span> in which <span class="math-container">$3$</span> corresponds to <span class="math-container">$W(1)$</span> and <span class="math-container">$1$</span> corresponds to the minimum of <span class="math-container">$W$</span> then after running one round of the algorithm, <span class="math-container">$S(1)$</span> contains the index of the minimal element, and <span class="math-container">$W \circ S = N(W)$</span>.</p>
<p><b>Proof.</b>
Follows directly from the proof of Lemma 2.</p>
<p>Thus we should expect trouble if there are <span class="math-container">$i<j$</span> such that <span class="math-container">$W(1) > W(j) > W(i)$</span>. However, if <span class="math-container">$W(i)$</span> is not the minimal element, the trouble won't be immediate.</p>
<p>List the elements which are smaller than <span class="math-container">$W(1)$</span> as <span class="math-container">$W(t_1),\ldots,W(t_k)$</span>, and suppose that <span class="math-container">$W(t_r) < W(t_{r+1}) > \cdots > W(t_k)$</span>. One round of the algorithm puts <span class="math-container">$t_r$</span> at the place of <span class="math-container">$t_{r+1}$</span>. The following rounds maintain the relative order of the elements in positions <span class="math-container">$t_{r+1},\ldots,t_k$</span>, and so in the final result, the position which should have contained <span class="math-container">$t_{r+1}$</span> will contain <span class="math-container">$t_r$</span>.</p>
<p>Example: <span class="math-container">$W = 632541$</span>. The final result is <span class="math-container">$S = 652134$</span>, which corresponds to the permutation <span class="math-container">$143625$</span>. We can see that <span class="math-container">$S(1)$</span> is correct since <span class="math-container">$W$</span> satisfies the conditions of Lemma 4. We have <span class="math-container">$t_r = 3$</span> and <span class="math-container">$W(t_r) = 2, W(t_{r+1}) = 5$</span>. We see that indeed <span class="math-container">$W(S(5)) = 2$</span> instead of <span class="math-container">$5$</span>.</p>
<p><b>Theorem 2.</b>
If <span class="math-container">$W$</span> contains <span class="math-container">$312$</span> then <span class="math-container">$W$</span> is unlucky.</p>
<p><b>Proof.</b>
Along the lines of the discussion above.</p>
| <p>I think that your program is doing something very similar to a simple stack sort. I found this paper online <a href="http://www.combinatorics.org/Volume_9/PDF/v9i2a1.pdf" rel="nofollow">http://www.combinatorics.org/Volume_9/PDF/v9i2a1.pdf</a> which explains both the basic stack sort and some variations. In the case of the basic stack sort, Donald Knuth has proved that the number of sortable permutations of length n is $C_n$ (this is proposition 1.3 in the linked paper).</p>
<p>Your method does not replicate stack sort exactly, but I think it will output a cyclically shifted version of what the stack sort algorithm will produce. I'll edit my answer if I figure out some more details.</p>
|
linear-algebra | <p>I can follow the definition of the transpose algebraically, i.e. as a reflection of a matrix across its diagonal, or in terms of dual spaces, but I lack any sort of geometric understanding of the transpose, or even symmetric matrices.</p>
<p>For example, if I have a linear transformation, say on the plane, my intuition is to visualize it as some linear distortion of the plane via scaling and rotation. I do not know how this distortion compares to the distortion that results from applying the transpose, or what one can say if the linear transformation is symmetric. Geometrically, why might we expect orthogonal matrices to be combinations of rotations and reflections?</p>
| <p>To answer your second question first: an orthogonal matrix $O$ satisfies $O^TO=I$, so $\det(O^TO)=(\det O)^2=1$, and hence $\det O = \pm 1$. The determinant of a matrix tells you by what factor the (signed) volume of a parallelepiped is multiplied when you apply the matrix to its edges; therefore hitting a volume in $\mathbb{R}^n$ with an orthogonal matrix either leaves the volume unchanged (so it is a rotation) or multiplies it by $-1$ (so it is a reflection).</p>
<p>To answer your first question: the action of a matrix $A$ can be neatly expressed via its singular value decomposition, $A=U\Lambda V^T$, where $U$, $V$ are orthogonal matrices and $\Lambda$ is a matrix with non-negative values along the diagonal (nb. this makes sense even if $A$ is not square!) The values on the diagonal of $\Lambda$ are called the singular values of $A$, and if $A$ is square and symmetric they will be the absolute values of the eigenvalues.</p>
<p>The way to think about this is that the action of $A$ is first to rotate/reflect to a new basis, then scale along the directions of your new (intermediate) basis, before a final rotation/reflection.</p>
<p>With this in mind, notice that $A^T=V\Lambda^T U^T$, so the action of $A^T$ is to perform the inverse of the final rotation, then scale the new shape along the canonical unit directions, and then apply the inverse of the original rotation.</p>
<p>Furthermore, when $A$ is symmetric, $A=A^T\implies V\Lambda^T U^T = U\Lambda V^T \implies U = V $, therefore the action of a symmetric matrix can be regarded as a rotation to a new basis, then scaling in this new basis, and finally rotating back to the first basis. </p>
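<p>For a symmetric $2\times 2$ matrix, this "rotate, scale, rotate back" picture can be checked directly, since the eigendecomposition has a closed form (a pure-Python sketch; the matrix is an arbitrary example):</p>

```python
import math

def rot(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(a):
    return [[a[j][i] for j in range(2)] for i in range(2)]

A = [[2.0, 1.0], [1.0, 3.0]]          # an arbitrary symmetric matrix
a, b, c = A[0][0], A[0][1], A[1][1]

theta = 0.5 * math.atan2(2 * b, a - c)            # rotation to the eigenbasis
mean, rad = (a + c) / 2, math.hypot((a - c) / 2, b)
scale = [[mean + rad, 0.0], [0.0, mean - rad]]    # scaling along the eigenbasis

R = rot(theta)
B = mul(mul(R, scale), transpose(R))  # rotate, scale, rotate back
print(B)  # reproduces A up to rounding
```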
| <p>yoyo has succinctly described my intuition for orthogonal transformations in the comments: from <a href="http://en.wikipedia.org/wiki/Polarization_identity">polarization</a> you know that you can recover the inner product from the norm and vice versa, so knowing that a linear transformation preserves the inner product ($\langle x, y \rangle = \langle Ax, Ay \rangle$) is equivalent to knowing that it preserves the norm, hence the orthogonal transformations are precisely the linear <a href="http://en.wikipedia.org/wiki/Isometry">isometries</a>. </p>
<p>I'm a little puzzled by your comment about rotations and reflections because for me a rotation is, <em>by definition</em>, an orthogonal transformation of determinant $1$. (I say this not because I like to dogmatically stick to definitions over intuition but because this definition is elegant, succinct, and agrees with my intuition.) So what intuitive definition of a rotation are you working with here?</p>
<p>As for the transpose and symmetric matrices in general, my intuition here is not geometric. First, here is a comment which may or may not help you. If $A$ is, say, a stochastic matrix describing the transitions in some <a href="http://en.wikipedia.org/wiki/Markov_chain">Markov chain</a>, then $A^T$ is the matrix describing what happens if you run all of those transitions backwards. Note that this is not at all the same thing as inverting the matrix in general. </p>
<p>A slightly less naive comment is that the transpose is a special case of a structure called a <a href="http://en.wikipedia.org/wiki/Dagger_category">dagger category</a>, which is a category in which every morphism $f : A \to B$ has a dagger $f^{\dagger} : B \to A$ (here the adjoint). The example we're dealing with here is implicitly the dagger category of Hilbert spaces, which is relevant to quantum mechanics, but there's another dagger category relevant to a different part of physics: the $3$-<a href="http://en.wikipedia.org/wiki/Cobordism">cobordism category</a> describes how space can change with time in relativity, and here the dagger corresponds to just flipping a cobordism upside-down. (Note the similarity to the Markov chain example.) Since relativity and quantum mechanics are both supposed to describe the time evolution of physical systems, it's natural to ask for ways to relate the two dagger categories I just described, and this is (roughly) part of <a href="http://en.wikipedia.org/wiki/Topological_quantum_field_theory">topological quantum field theory</a>.</p>
<p>The punchline is that for me, "adjoint" is intuitively "time reversal." (Unfortunately, what this has to do with self-adjoint operators as observables in quantum mechanics I'm not sure.)</p>
|
logic | <p>How do you in general prove that a function is well-defined?</p>
<p>$$f:X\to Y:x\mapsto f(x)$$</p>
<p>I learned that I need to prove that every point has exactly one image. Does that mean that I need to prove the following two things:</p>
<ol>
<li>Every element in the domain maps to an element in the codomain:<br>
$$x\in X \implies f(x)\in Y$$</li>
<li>The same element in the domain maps to the same element in the codomain:
$$x=y\implies f(x)=f(y)$$</li>
</ol>
<hr>
<p>At the moment I'm trying to prove this function is well-defined: $$f:(\Bbb Z/12\mathbb Z)^∗→(\Bbb Z/4\Bbb Z)^∗:[x]_{12}↦[x]_4 ,$$ but I'm more interested in the general procedure.</p>
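<p>For this particular map, both conditions can be checked mechanically; condition 2 holds because $4$ divides $12$ (a Python sketch):</p>

```python
from math import gcd

def units(m):
    """Representatives of (Z/mZ)*: residues coprime to m."""
    return [x for x in range(m) if gcd(x, m) == 1]

# 1. f lands in the codomain: every unit mod 12 reduces to a unit mod 4
assert all(gcd(x % 4, 4) == 1 for x in units(12))

# 2. f is independent of the chosen representative: since 4 | 12,
#    x = y (mod 12) implies x = y (mod 4)
assert all((x + 12 * k) % 4 == x % 4 for x in units(12) for k in range(-5, 6))

print(units(12), [x % 4 for x in units(12)])
```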
| <p>When we write $f\colon X\to Y$ we say three things:</p>
<ol>
<li>$f\subseteq X\times Y$.</li>
<li>The domain of $f$ is $X$.</li>
<li>Whenever $\langle x,y_1\rangle,\langle x,y_2\rangle\in f$ then $y_1=y_2$. In this case whenever $\langle x,y\rangle\in f$ we denote $y$ by $f(x)$.</li>
</ol>
<p>So to say that something is well-defined is to say that all three things are true. If we know <em>some</em> of these we only need to verify the rest, for example if we know that $f$ has the third property (so it is a function) we need to verify its domain is $X$ and the range is a subset of $Y$. If we know those things we need to verify the third condition.</p>
<p>But, and that's important, if we do not know that $f$ satisfies the third condition we cannot write $f(x)$ because that term assumes that there is a unique definition for that element of $Y$.</p>
| <p>Okay, I'm trying to answer my own question here. This is how a function is defined in "<a href="http://books.google.nl/books/about/Reading_Writing_and_Proving.html?id=AhVCXPE5yukC&redir_esc=y" rel="noreferrer">Reading, Writing, and Proving: A Closer Look at Mathematics</a>".</p>
<blockquote>
<p>Recall that a relation $f$ from $X$ to $Y$ is a subset of $X\times Y$,
and therefore the elements of $f$ are ordered pairs $(x,y)$.</p>
<p>A function $f:X\to Y$ is a relation $f$ from $X$ to $Y$ satisfying:<br>
i). $\forall x\in X ,\exists y\in Y :(x,y)\in f $<br>
ii). $\forall x\in X,\forall y_1,y_2 \in Y : (x,y_1),(x,y_2)\in f\implies y_1=y_2$</p>
<p>A function is often called a map or a mapping. The set $X$ is
called the domain and denoted by $\text{dom}(f)$, and the set $Y$ is
called the codomain and denoted by $\text{cod}(f)$. <em>When we know what
these two sets are and the two conditions are satisfied, we say that
$f$ is a well defined function.</em></p>
<p>Condition i) makes sure that each element in $X$ is related to some
element of $Y$, while condition ii) makes sure that no element in $X$ is
related to more than one element of $Y$. Note that it may be the case
that an element of $Y$ has no element in $X$ to which it is related;
or an element of $Y$ could be related to more than one element of $X$.</p>
</blockquote>
<p>Therefore, as Asaf Karagila mentioned, if you want to prove that $f$ is a well-defined function, and the domain $X$ and codomain $Y$ are given, then you need to show that: </p>
<ol>
<li><p>$f$ is a relation from $X$ to $Y$<br>
$f\subseteq X\times Y$ </p></li>
<li><p>The domain of $f$ is $X$, every element in $X$ is related to some element of $Y$ $\forall x\in X ,\exists y\in Y :(x,y)\in f $</p></li>
<li><p>No element of $X$ is related to more than one element of $Y$
$\forall x\in X,\forall y_1,y_2 \in Y : (x,y_1),(x,y_2)\in f\implies y_1=y_2$</p></li>
</ol>
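<p>As a sanity check (my own sketch, not part of the original answers), conditions 2 and 3 for the map $[x]_{12}\mapsto[x]_4$ from the question can be verified by brute force over integer representatives:</p>

```python
from math import gcd

# Units modulo 12 and modulo 4 (the domain and codomain of the map).
units_12 = [x for x in range(12) if gcd(x, 12) == 1]   # [1, 5, 7, 11]
units_4 = [x for x in range(4) if gcd(x, 4) == 1]      # [1, 3]

# The image of every unit mod 12 is a unit mod 4
# (so the relation really lands inside the codomain).
assert all(x % 4 in units_4 for x in units_12)

# Well-definedness: representatives of the same class mod 12 map to the
# same class mod 4, i.e. x ≡ y (mod 12) implies x ≡ y (mod 4),
# which holds because 4 divides 12.
assert all(
    (x - y) % 4 == 0
    for x in range(120) for y in range(120)
    if (x - y) % 12 == 0 and gcd(x, 12) == 1
)
print("f is well-defined on (Z/12Z)* -> (Z/4Z)*")
```

Of course the brute force only confirms what the divisibility argument proves in general: since $12\mid x-y$ implies $4\mid x-y$, the value $[x]_4$ does not depend on the chosen representative.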
|
linear-algebra | <p>I am trying to write a program that will perform <a href="http://en.wikipedia.org/wiki/Optical_character_recognition" rel="noreferrer">OCR</a> on a mobile phone, and I recently encountered this article : </p>
<hr>
<p><img src="https://i.sstatic.net/2jenb.png" alt="article excerpt"></p>
<hr>
<p>Can someone explain this to me ? </p>
| <p><a href="https://math.stackexchange.com/a/92176/4423">J.M. has given a very good answer</a> explaining singular values and how they're used in low rank approximations of images. However, a few pictures always go a long way in appreciating and understanding these concepts. Here is an example from one of my presentations from I don't know when, but is exactly what you need. </p>
<p>Consider the following grayscale image of a hummingbird (left). The resulting image is a $648\times 600$ image of MATLAB <code>double</code>s, which takes $648\times 600\times 8=3110400$ bytes. </p>
<p><img src="https://i.sstatic.net/twFL0m.png" alt="enter image description here">
<img src="https://i.sstatic.net/LzRaNm.png" alt="enter image description here"></p>
<p>Now taking an SVD of the above image gives us $600$ singular values that, when plotted, look like the curve on the right. Note that the $y$-axis is in decibels (i.e., $10\log_{10}(s.v.)$). </p>
<p>You can clearly see that after about the first $20-25$ singular values, it falls off and the bulk of it is so low, that any information it contains is negligible (and most likely noise). So the question is, <em>why store all this information if it is useless?</em></p>
<p>Let's look at what information is actually contained in the different singular values. The figure on the left below shows the image recreated from the first 10 singular values ($l=10$ in J.M.'s answer). We see that the <em>essence</em> of the picture is basically captured in just 10 singular values out of a total of 600. Increasing this to the first 50 singular values shows that the picture is almost exactly reproduced (to the human eye). </p>
<p>So if you were to save just the first 50 singular values and the associated left/right singular vectors, you'd need to store only $(648\times 50 + 50 + 50\times 600)\times 8=499600$ bytes, which is only about 16% of the original! (I'm sure you could've gotten a good representation with about 30, but I chose 50 arbitrarily for some reason back then, and we'll go with that.) </p>
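<p>A minimal NumPy sketch of the same experiment (on a synthetic low-rank-plus-noise matrix standing in for the hummingbird image, which is not included here; matrix sizes and the storage formula are taken from the text above):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the image: a rank-20 matrix plus weak noise.
m, n, true_rank = 648, 600, 20
A = rng.standard_normal((m, true_rank)) @ rng.standard_normal((true_rank, n))
A += 0.01 * rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

def low_rank(k):
    """Reconstruction from the k largest singular values."""
    return (U[:, :k] * s[:k]) @ Vt[:k]

# The relative error drops sharply once k reaches the true rank.
for k in (5, 20, 50):
    err = np.linalg.norm(A - low_rank(k)) / np.linalg.norm(A)
    print(f"k={k:3d}  relative error {err:.2e}")

# Storage comparison from the answer: full matrix vs. 50-term SVD.
full_bytes = m * n * 8
svd_bytes = (m * 50 + 50 + 50 * n) * 8
print(svd_bytes / full_bytes)  # about 0.16, i.e. 16% of the original
```

The error figures depend on the synthetic matrix, but the storage ratio reproduces the $499600/3110400\approx16\%$ computed above.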
<p><img src="https://i.sstatic.net/bMUWbm.png" alt="enter image description here">
<img src="https://i.sstatic.net/1BJZ5m.png" alt="enter image description here"></p>
<p>So what exactly do the smaller singular values contain? Looking at the next 100 singular values (figure on the left), we actually see some fine structure, especially the fine details around the feathers, etc., which are generally indistinguishable to the naked eye. It's probably very hard to see from the figure below, but you certainly can in <a href="https://i.sstatic.net/eibGG.png" rel="noreferrer">this larger image</a>. </p>
<p><img src="https://i.sstatic.net/eibGGm.png" alt="enter image description here">
<img src="https://i.sstatic.net/hPktum.png" alt="enter image description here"></p>
<p>The smallest 300 singular values (figure on the right) are complete junk and convey no information. These are most likely due to sensor noise from the camera's CMOS. </p>
| <p>It is rather unfortunate that they used "eigenvalues" for the description here (although it's correct); it is better to look at principal component analysis from the viewpoint of singular value decomposition (SVD).</p>
<p>Recall that any $m\times n$ matrix $\mathbf A$ possesses the singular value decomposition $\mathbf A=\mathbf U\mathbf \Sigma\mathbf V^\top$, where $\mathbf U$ and $\mathbf V$ are orthogonal matrices, and $\mathbf \Sigma$ is a diagonal matrix whose entries $\sigma_k$ are called <em>singular values</em>. Things are usually set up such that the singular values are arranged in nonincreasing order: $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_{\min(m,n)}$.</p>
<p>By analogy with the eigendecomposition (eigenvalues and eigenvectors), the $k$-th column of $\mathbf U$, $\mathbf u_k$, corresponding to the $k$-th singular value is called the <em>left singular vector</em>, and the $k$-th column of $\mathbf V$, $\mathbf v_k$, is the <em>right singular vector</em>. With this in mind, you can treat $\mathbf A$ as a sum of outer products of vectors, with the singular values as "weights":</p>
<p>$$\mathbf A=\sum_{k=1}^{\min(m,n)} \sigma_k \mathbf u_k\mathbf v_k^\top$$</p>
<p>Okay, at this point, some people might be grumbling "blablabla, linear algebra nonsense". The key here is that if you treat images as matrices (gray levels, or RGB color values sometimes), and then subject those matrices to SVD, it will turn out that some of the singular values $\sigma_k$ are <em>tiny</em>. The key to "approximating" these images then, is to treat those tiny $\sigma_k$ as zero, resulting in what is called a "low-rank approximation":</p>
<p>$$\mathbf A\approx\sum_{k=1}^\ell \sigma_k \mathbf u_k\mathbf v_k^\top,\qquad \ell \ll \min(m,n)$$</p>
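<p>Both displayed formulas are easy to check numerically; here is a small NumPy sketch (my own, not from the original answer) on a random matrix:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# A equals the sum of rank-one outer products sigma_k * u_k v_k^T.
recon = sum(s[k] * np.outer(U[:, k], Vt[k]) for k in range(len(s)))
assert np.allclose(A, recon)

# Truncating the sum at l < min(m, n) gives the low-rank approximation;
# its spectral-norm error is exactly the first discarded singular value.
l = 2
A_l = sum(s[k] * np.outer(U[:, k], Vt[k]) for k in range(l))
assert np.isclose(np.linalg.norm(A - A_l, 2), s[l])
print("outer-product expansion verified")
```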
<p>The matrices corresponding to images can be big, so it helps if you don't have to keep all those singular values and singular vectors around. (The "two", "ten", and "thirty" images mean what they say: $n=256$, and you have chosen $\ell$ to be $2$, $10$, and $30$ respectively.) The criterion of when to zero out a singular value depends on the application, of course.</p>
<p>I guess that was a bit long. Maybe I should have linked you to <a href="http://www.mathworks.com/moler/eigs.pdf" rel="noreferrer">Cleve Moler's book</a> instead for this (see page 21 onwards, in particular).</p>
<hr>
<p><strong>Edit 12/21/2011:</strong></p>
<p>I've decided to write a short <em>Mathematica</em> notebook demonstrating low-rank approximations to images, using the famous <a href="http://www.cs.cmu.edu/~chuck/lennapg/" rel="noreferrer">Lenna test image</a>. The notebook can be obtained from me upon request.</p>
<p>In brief, the $512\times 512$ example image can be obtained through <code>ImageData[ColorConvert[ExampleData[{"TestImage", "Lena"}], "Grayscale"]]</code>. A plot of $\log_{10} \sigma_k$ looks like this:</p>
<p><img src="https://i.sstatic.net/XD4wd.png" alt="log singular values of Lenna"></p>
<p>Here is a comparison of the original Lenna image with a few low-rank approximations:</p>
<p><img src="https://i.sstatic.net/FAC0i.png" alt="Lenna and low-rank approximations"></p>
<p>At least to my eye, taking $120$ out of $512$ singular values (only a bit more than $\frac15$ of the singular values) makes for a pretty good approximation.</p>
<p>Probably the only caveat of SVD is that it is a rather slow algorithm. For images that are quite large, taking the SVD might take a long time. At least one only deals with a single matrix for grayscale pictures; for RGB color pictures, one must take the SVD of three matrices, one for each color component.</p>
|
game-theory | <p>Two players each get <span class="math-container">$n=10$</span> marbles. On each turn, one player (the player whose turn it is) hides some of his own marbles in his fist, and the other player must guess whether the hidden amount is odd or even; that other player (i.e., the player whose turn it is not) must also bet an amount of his own marbles. If the guess is right, the player whose turn it is gives (as much as possible) the bet amount of marbles to his opponent. If the guess is wrong, the player whose turn it is takes the bet amount of marbles from his opponent. The turn then alternates to the other player. The game stops when one player has all <span class="math-container">$2n=20$</span> marbles.</p>
<p>The losing player gets killed (in the series, that is).</p>
<p>Which strategies for both players (perhaps one strategy for the one who gets first turn, and one strategy for the other player) give maximal probabilities to win (and to not get killed).</p>
<p>We must assume both players know the starting conditions and are perfect mathematicians and logicians.</p>
<p>On a side note: is this an old or new game? If it is a known old game, can anyone tell where its 'official' rules (and, maybe, solution(s)) are documented?</p>
<p><strong>remark</strong></p>
<p>(series details coming so <em>warning</em>: spoiler ahead)</p>
<p>There is a YT video where the rules are explained to be: if the guesser guesses wrong, the guesser must give the amount 'h' that was hidden by the hider, not the amount 'b' that the guesser bet.</p>
<p><a href="https://www.youtube.com/watch?v=GX4AkD_vdhw" rel="nofollow noreferrer">https://www.youtube.com/watch?v=GX4AkD_vdhw</a></p>
<p>The video also gives the simple solution for that variant.</p>
<p>The presenter also mentions that the rules are not clear, but most people, he says, believe 'h'. Some comments claim otherwise.</p>
<p>Fair enough.</p>
<p>From the examples in the series it is not entirely clear to me either what the rule is.</p>
<p>There is only one example where the guesser guesses wrong.</p>
<p>In that example, the guesser bets b=2, the hider hides h=3, and the guesser guesses odd. One can see the guesser give 2 (the amount b bet), and next it is shown that the guesser has only one marble left. The guesser stops playing. If the rule were to give 3 (the amount h hidden), the guesser would immediately lose, and if the guards were paying attention, the guesser would have been killed. If the rule were to give 2 (the amount b bet), then the guesser, upon becoming hider, would also lose. But the current guesser is a cheater, so in both cases (having to give b=2 or h=3) it suits him to hold on to that last marble.</p>
<p>Note that the cheater is portrayed as clever and his opponent as dumb. And the opponent did not know the game.</p>
<p>However, if the rule were that he had to give h(=3), then the guesser would be cheating hard by not sticking to the rules, and would be lucky to still be alive. After all, the outcome of each game in the series is said to be fatal, but fair. But if the rule were that he had to give b(=2), then the guesser would still be cheating hard by stopping the game, and even by manipulating his opponent's marbles. A moderately clever guard paying attention would notice the game was <strong>not</strong> played fairly, and a somewhat more clever guard paying attention would even know the guesser lost in <strong>any</strong> case.</p>
| <p>Essentially, this is like one player choosing "odd" or "even" freely, and the other player trying to guess. <em>That</em> game is straightforward: both players should choose "odd" and "even" with equal probability, and then it becomes a martingale so the amount you choose to bet is irrelevant: you win with probability <span class="math-container">$1/2$</span>.</p>
<p>Your game has an added complication: if it is your turn to hide marbles, and you only have one left, you can only hide an odd number and you lose.</p>
<p>However, all this means is that no-one will ever bet <span class="math-container">$k-1$</span> marbles if they have <span class="math-container">$k$</span> left: it would be a better strategy to bet all <span class="math-container">$k$</span>, since this is better if you guess right, and no worse if you guess wrong (you certainly lose either way). Note that it is not possible for your opponent to have exactly <span class="math-container">$k-1$</span> marbles if you have <span class="math-container">$k$</span>, on parity grounds. However, since we've established that no-one would ever bet <span class="math-container">$k-1$</span> marbles when they have <span class="math-container">$k$</span>, it follows that no-one will ever have exactly one marble left at their turn to hide, so the game becomes a martingale again.</p>
<p>tl;dr the best strategy is "always pick even/odd with equal probability, and if you have <span class="math-container">$k$</span> bet any number of marbles you like as long as 1) you don't bet more marbles than your opponent has left, and 2) you don't bet exactly <span class="math-container">$k-1$</span>". Then each player has equal probability of winning.</p>
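<p>A quick Monte Carlo check of this answer (my own sketch; the bet rule "never bet $k-1$ when you hold $k$, and never more than your opponent has" is taken from the text above): with uniform random odd/even choices the marble count is a fair random walk, so the first player wins about half the time.</p>

```python
import random

rng = random.Random(42)

def play(n=10):
    """One game. Both players guess/hide odd-even uniformly at random and
    bet a uniform legal amount: at most min(own, opponent's marbles),
    and never exactly (own - 1). Returns True if player A wins all 2n."""
    a = n                       # player A's marbles; B holds 2*n - a
    guesser_is_a = True
    while 0 < a < 2 * n:
        k = a if guesser_is_a else 2 * n - a          # guesser's marbles
        legal = [b for b in range(1, min(k, 2 * n - k) + 1) if b != k - 1]
        b = rng.choice(legal)
        right = rng.random() < 0.5   # hidden parity matches the guess w.p. 1/2
        delta = b if right else -b   # guesser gains the bet if right
        a += delta if guesser_is_a else -delta
        guesser_is_a = not guesser_is_a
    return a == 2 * n

trials = 10000
p = sum(play() for _ in range(trials)) / trials
print(p)  # close to 1/2: the game is a martingale
```

Note that the guesser never ends up forced to bet $k-1$, so the walk only stops at the absorbing states $0$ and $2n$, in agreement with the martingale argument.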
| <p>This game is an example of a <a href="https://encyclopediaofmath.org/wiki/Recursive_game" rel="nofollow noreferrer">recursive game</a>, whose properties were first studied by <a href="https://en.wikipedia.org/wiki/Hugh_Everett_III" rel="nofollow noreferrer">Hugh Everett</a> in <a href="https://www.degruyter.com/document/doi/10.1515/9781400882151-004/html" rel="nofollow noreferrer">a paper</a> of <span class="math-container">$1958$</span>, titled <em>Recursive games</em>. A more modern (<span class="math-container">$2011$</span>) exposition of recursive games, along with the closely related concept of stochastic games of <a href="https://en.wikipedia.org/wiki/Lloyd_Shapley" rel="nofollow noreferrer">Lloyd Shapley</a> and Dean Gillette, is given in <a href="https://dl.acm.org/doi/pdf/10.1145/1993636.1993665" rel="nofollow noreferrer">this paper</a> (which I have not read).</p>
<p>This game has <span class="math-container">$38$</span> of what Everett called "game elements", which are just the different situations the players can find themselves in throughout the game. Calling the two players Alf and Beth, the game elements can be conveniently labelled <span class="math-container">$\ A_1, A_2, \dots, A_{19}\ $</span> and <span class="math-container">$\ B_1, B_2, \dots, B_{19}\ $</span>. The subscript of each game element is the number of marbles that Alf currently holds (so <span class="math-container">$20$</span> minus that number will be the number of marbles that Beth then holds). In game elements <span class="math-container">$\ A_i\ $</span>, Alf is the better and guesser, while Beth is the hider, whereas in <span class="math-container">$\ B_i\ $</span> it is the other way round. For notational convenience I'll add four more game elements <span class="math-container">$\ A_0, A_{20}, B_0\ $</span> and <span class="math-container">$\ B_{20}\ $</span>, representing the various won and lost situations as follows:
<span class="math-container">\begin{align}
A_0&:\hspace{0.5em}\text{Alf lost because Beth bet as many marbles as he had and}\\
&\hspace{1.5em}\text{guessed right.}\\
A_{20}&:\hspace{0.5em}\text{Alf won because Beth bet all her marbles and guessed}\\ &\hspace{1.5em}\text{wrong.}\\
B_0&:\hspace{0.5em}\text{Alf lost because he bet all his marbles and guessed}\\
&\hspace{1.5em}\text{wrong.}\\
B_{20}&:\hspace{0.5em}\text{Alf won because he bet as many marbles as Beth had and}\\
&\hspace{1.5em}\text{guessed right.}
\end{align}</span></p>
<p>In <span class="math-container">$\ A_i\ (i\ne0,20)\ $</span>, Alf has <span class="math-container">$\ 2i\ $</span> (pure) strategies <span class="math-container">$\ (g, b)\in\{\text{odd},\text{even}\}\,\times$$\{1,2,\dots,i\}\ $</span>. If <span class="math-container">$\ i\ne19\ $</span>, Beth has just two pure strategies, <span class="math-container">$\ h\in$$\{\text{odd},\text{even}\}\ $</span>, while in <span class="math-container">$\ A_{19}\ $</span> she has no choice but to take <span class="math-container">$\ h=\text{odd}\ $</span>. In <span class="math-container">$\ B_i\ $</span>, the player's roles are reversed, Beth has <span class="math-container">$\ 2(20-i)\ $</span> pure strategies <span class="math-container">$\ (g, b)\in$$\{\text{odd},\text{even}\}\,\times$$\{1,2,\dots,20-i\}\ $</span>, and it is in <span class="math-container">$\ B_1\ $</span> where Alf has no choice but to take <span class="math-container">$\ h=\text{odd}\ $</span>. In theory, the players could make their choices of strategies depend on all the past history of the game, but one of the results of Everett's investigation is that the players don't have to do that to play optimally.</p>
<p>If Alf chooses the strategy <span class="math-container">$\ (g,b)\ $</span> in <span class="math-container">$\ A_i\ (i\ne0,20)\ $</span>, and Beth chooses the strategy <span class="math-container">$\ h\ $</span>, then the outcome is a new game element <span class="math-container">$\ B_{\min(i+b,20)}\ $</span> when <span class="math-container">$\ g=h\ $</span>, or <span class="math-container">$\ B_{i-b}\ $</span> when <span class="math-container">$\ g\ne h\ $</span>. Likewise, If Beth chooses the strategy <span class="math-container">$\ (g,b)\ $</span> in <span class="math-container">$\ B_i\ (i\ne0,20)\ $</span>, and Alf chooses the strategy <span class="math-container">$\ h\ $</span>, then the outcome is a new game element <span class="math-container">$\ A_{\max(i-b,0)}\ $</span> when <span class="math-container">$\ g=h\ $</span>, or <span class="math-container">$\ A_{i+b}\ $</span> when <span class="math-container">$\ g\ne h\ $</span>. The game starts in either the game element <span class="math-container">$\ A_{10}\ $</span> or <span class="math-container">$\ B_{10}\ $</span>, and terminates whenever the outcome of a game element is either <span class="math-container">$\ A_0\ $</span> or <span class="math-container">$\ B_0\ $</span>, when Beth wins and receives payoff <span class="math-container">$\ {+}1\ $</span>, and Alf loses and receives payoff <span class="math-container">$\ {-}1\ $</span>, or <span class="math-container">$\ A_{20}\ $</span> or <span class="math-container">$\ B_{20}\ $</span>, in which case Alf wins and receives payoff <span class="math-container">$\ {+}1\ $</span>, and Beth loses and receives payoff <span class="math-container">$\ {-}1\ $</span>.</p>
<p>It's possible for the players to choose strategies for which the game will never terminate. This will occur, for instance, if the game starts in <span class="math-container">$\ A_{10}\ $</span>, Alf always chooses strategy <span class="math-container">$\ (\text{odd},1)\ $</span> in that game element, and the strategy odd in <span class="math-container">$\ B_9\ $</span>, and Beth always chooses the strategy even in <span class="math-container">$\ A_{10}\ $</span> and strategy <span class="math-container">$\ (\text{even},1)\ $</span> in <span class="math-container">$\ B_9\ $</span>: by the transition rules above, the play then cycles forever between <span class="math-container">$\ A_{10}\ $</span> and <span class="math-container">$\ B_9\ $</span>. If the game never terminates, then both players receive payoff <span class="math-container">$\ 0$</span>.</p>
<p>Because there are only a finite number of strategies in each game element, another of Everett's results guarantees that there exist optimal stationary mixed strategies for both players. "Stationary" here means that the strategy chosen in each game element is independent of the past history of the game.</p>
<p>Finally, a third result of Everett's tells us that if
<span class="math-container">$$
v:\big\{A_0,A_1,\dots,A_{20},B_0,B_1,\dots,B_{20}\big\}\rightarrow\mathbb{R}
$$</span>
is a function such that
<span class="math-container">\begin{align}
v\big(A_0\big)&=v\big(B_0\big)=v\big(B_1\big)=-1\ ,\\
v\big(A_{20}\big)&=v\big(B_{20}\big)=v\big(A_{19}\big)=1\ ,\\
\end{align}</span>
for each <span class="math-container">$\ i\in\{1,2,\dots,18\}\ ,\ v(A_i) $</span> is the value of the matrix game with payoff matrix
<span class="math-container">$$
\pmatrix{v\big(B_{i-1}\big)&v\big(B_{i+1}\big)\\
v\big(B_{i-2}\big)&v\big(B_{i+2}\big)\\
\vdots&\vdots\\
v\big(B_0\big)&v\big(B_{\min(20,2i)}\big)\\
v\big(B_{i+1}\big)&v\big(B_{i-1}\big)\\
v\big(B_{i+2}\big)&v\big(B_{i-2}\big)\\
\vdots&\vdots\\
v\big(B_{\min(20,2i)}\big)&v\big(B_0\big)}\ ,
$$</span>
and for each <span class="math-container">$\ i\in\{2,\dots,19\}\ ,\ v\big(B_i\big)\ $</span> is the value of the matrix game with payoff matrix
<span class="math-container">$$
\pmatrix{v\big(A_{i+1}\big)&v\big(A_{i+2}\big)&\dots&v\big(A_{20}\big)&v\big(A_{i-1}\big)&v\big(A_{i-2}\big)&\dots&v\big(A_{\max(0,20-2i)}\big)\\
v\big(A_{i-1}\big)&v\big(A_{i-2}\big)&\dots&v\big(A_{\max(0,20-2i)}\big)&v\big(A_{i+1}\big)&v\big(A_{i+2}\big)&\dots&v\big(A_{20}\big)
}\ ,
$$</span>
then the optimal strategies in the game elements <span class="math-container">$\ A_i\ $</span> and <span class="math-container">$\ B_i\ $</span>, respectively, are to choose the optimal (mixed) strategies in the above matrix games. Alf's optimal strategy in <span class="math-container">$\ A_{19}\ $</span>, and Beth's optimal strategy in <span class="math-container">$\ B_1\ $</span>, is <span class="math-container">$\ (\text{odd},1)\ $</span>.</p>
<p>In fact,
<span class="math-container">$$
v(A_i)=\cases{\frac{i}{10}-1&for $\ i\ne19 $\\
1&for $\ i=19 $}\\
v(B_i)=\cases{\frac{i}{10}-1&for $\ i\ne1 $\\
-1&for $\ i=1 $}\\
$$</span>
is the (unique) function that satisfies these conditions, and solving the matrix games above when this function is used confirms that the strategies given in <a href="https://math.stackexchange.com/questions/4272419/best-strategies-for-squid-game-episode-6-game-4-marble-game-players/4273916#4273916">Especially Lime's answer</a> are optimal.</p>
<p>If <span class="math-container">$\ i\ne1,19\ $</span>, then the probability that Alf will win from game element <span class="math-container">$\ A_i\ $</span> or <span class="math-container">$\ B_i\ $</span> is <span class="math-container">$\ \frac{i}{20}\ $</span> and the probability that Beth will win is <span class="math-container">$\ 1-\frac{i}{20}\ $</span>. The probability is <span class="math-container">$1$</span> that Beth will win from game element <span class="math-container">$\ B_1\ $</span> and that Alf will win from game element <span class="math-container">$\ A_{19}\ $</span>.</p>
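<p>A small consistency check (my own, using the simplifying assumption that the guesser always bets one marble and each guess is right with probability $\frac12$): the win probabilities $\frac{i}{20}$ quoted above are exactly the classical gambler's-ruin values, i.e. the unique solution of $h(i)=\frac{h(i-1)+h(i+1)}{2}$ with $h(0)=0$ and $h(20)=1$:</p>

```python
# Verify that h(i) = i/20 solves the fair-walk absorption equations.
# The generic game values above are then v(A_i) = v(B_i) = 2*h(i) - 1 = i/10 - 1.
h = [i / 20 for i in range(21)]
assert h[0] == 0.0 and h[20] == 1.0
assert all(abs(h[i] - (h[i - 1] + h[i + 1]) / 2) < 1e-12 for i in range(1, 20))

v = [2 * h[i] - 1 for i in range(21)]
print(v[10])  # 0.0 -- the game starting at ten marbles each is fair
```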
<p><strong>Update:</strong> I mentioned in a comment below that I originally excluded strategies where a player bets more marbles than his or her opponents holds, because such a strategy is non-optimal and I thought including them would unnecessarily complicate the notation. I have since realised that these strategies can be included without overly complicating the notation, and a few days ago I modified the answer to allow them as possibile choices for the players.</p>
<p>Also, the original payoff matrices only included half the better-guesser's strategies, given by the size of the bet. I've now modified them to include all the better-guesser's strategies. The <span class="math-container">$\ k^\text{th}\ $</span> row of the payoff matrix for <span class="math-container">$\ A_i\ $</span> represents Alf's strategy of betting <span class="math-container">$\ k\ $</span> marbles and guessing odd if <span class="math-container">$\ k\le i\ $</span>, or the strategy of betting <span class="math-container">$\ k-i\ $</span> marbles and guessing even if <span class="math-container">$\ k>i\ $</span>. The <span class="math-container">$\ k^\text{th}\ $</span> column of the payoff matrix for <span class="math-container">$\ B_i\ $</span> similarly represents the same strategies for Beth.</p>
|
logic | <p>Let's say that I prove statement $A$ by showing that the negation of $A$ leads to a contradiction. </p>
<p>My question is this: How does one go from "so there's a contradiction if we don't have $A$" to concluding that "we have $A$"?</p>
<p>That, to me, seems the exact opposite of logical. It sounds like we say "so, I'll have a really big problem if this thing isn't true, so out of convenience, I am just going to act like it's true". </p>
| <p>Proof by contradiction, as you stated, is the rule$\def\imp{\Rightarrow}$ "$\neg A \imp \bot \vdash A$" for any statement $A$, which in English is "If you can derive the statement that $\neg A$ implies a contradiction, then you can derive $A$". As pointed out by others, this is not a valid rule in intuitionistic logic. But I shall now show you why you probably have no choice but to agree with the rule (under certain mild conditions).</p>
<p>You see, given any statement $A$, the law of excluded middle says that "$A \lor \neg A$" is true, which in English is "Either $A$ or $\neg A$". Now is there any reason for this law to hold? If you desire that everything you can derive comes with direct evidence of some sort (such as various constructive logics), then it might not hold, because sometimes we have neither evidence for nor against a statement. However, if you believe that the statements you can make have meaning in the real world, then the law obviously holds because the real world either satisfies a statement or its negation, regardless of whether you can figure out which one.</p>
<p>The same reasoning also shows that a contradiction can never be true, because the real world never satisfies both a statement and its negation at the same time, simply by the meaning of negation. This gives the principle of explosion, which I will come to later.</p>
<p>Now given the law of excluded middle consider the following reasoning. If from $\neg A$ I can derive a contradiction, then $\neg A$ must be impossible, since my other rules are truth-preserving (starting from true statements they derive only true statements). Here we have used the property that a contradiction can never be true. Since $\neg A$ is impossible, and by law of excluded middle we know that either $A$ or $\neg A$ must be true, we have no other choice but to conclude that $A$ must be true.</p>
<p>This explains why proof by contradiction is valid, as long as you accept that for every statement $A$, exactly one of "$A$" and "$\neg A$" is true. The fact that we use logic to reason about the world we live in is precisely why almost all logicians accept classical logic. This is why I said "mild conditions" in my first paragraph.</p>
<p>Back to the principle of explosion, which is the rule "$\bot \vdash A$" for any statement $A$. At first glance, this may seem even more unintuitive than the proof by contradiction rule. But on the contrary, people use it without even realizing. For example, if you do not believe that I can levitate, you might say "If you can levitate, I will eat my hat!" Why? Because you know that if the condition is false, then whether the conclusion is true or false is completely irrelevant. They are implicitly assuming the rule that "$\bot \imp A$" is always true, which is equivalent to the principle of explosion.</p>
<p>We can hence show by a formal deduction that the law of excluded middle and the principle of explosion together give the ability to do proofs by contradiction:</p>
<p>[Suppose from "$\neg A$" you can derive "Contradiction".]</p>
<p>  $A \lor \neg A$. [law of excluded middle]</p>
<p>  If $A$:</p>
<p>    $A$.</p>
<p>  If $\neg A$:</p>
<p>    Contradiction.</p>
<p>    Thus $A$. [principle of explosion]</p>
<p>  Therefore $A$. [disjunction elimination]</p>
<p>Another possible way to obtain the proof by contradiction rule is if you accept double negation elimination, that is "$\neg \neg A \vdash A$" for any statement $A$. This can be justified by exactly the same reasoning as before, because if "$A$" is true then "$\neg A$" is false and hence "$\neg \neg A$" is true, and similarly if "$A$" is false so is "$\neg \neg A$". Below is a formal deduction showing that contradiction elimination and double negation elimination together give the ability to do proofs by contradiction:</p>
<p>[Suppose from "$\neg A$" you can derive "Contradiction".]</p>
<p>  If $\neg A$:</p>
<p>    Contradiction.</p>
<p>  Therefore $\neg \neg A$. [contradiction elimination / negation introduction]</p>
<p>  Thus $A$. [double negation elimination]</p>
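<p>For what it's worth, the first deduction above can be mirrored in Lean 4 (a sketch of my own using the core library's <code>Classical.em</code>; this is not part of the original answer):</p>

```lean
-- Proof by contradiction from excluded middle + explosion,
-- following the informal deduction step by step.
theorem by_contradiction (A : Prop) (h : ¬A → False) : A :=
  match Classical.em A with          -- law of excluded middle: A ∨ ¬A
  | Or.inl ha  => ha                 -- case A: done
  | Or.inr hna => (h hna).elim       -- case ¬A: contradiction, then explosion

-- The core library packages the same rule as Classical.byContradiction:
example (A : Prop) (h : ¬A → False) : A := Classical.byContradiction h
```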
| <p>A contradiction isn't a “problem”. A contradiction is an impossibility. This isn't a matter of saying “Gee, if I have fewer than 20 dollars in the bank I won't be able to go out to dinner and I want to so badly, I'll just assume I have more than 20 dollars.” This is a matter of walking into the bank and saying "I'd like to withdraw 20 dollars" and having a trapdoor under you collapse and a 300 lb security guard jumping on your spleen shouting in your ear “You don't <em>have</em> it!!! You don't have it!!” </p>
<p>You can't just say “Oh, I got a contradiction when I assumed I had 20 dollars... But that doesn't mean I don't have 20 dollars.”</p>
<p>It means <em>precisely</em> that. It is <em>impossible</em> for you to have 20. So you must conclude you <em>don't</em> have 20 dollars.</p>
<p>If you get a contradiction, it just isn't possible for A to be false. </p>
<p>A contradiction, by its definition is an impossibility. So if you assume A isn't true and you get a contradiction. You have <em>proven</em> that it is <em>impossible</em> for A not to be true. If it is <em>impossible</em> for something not to be true what other options are there? </p>
|
number-theory | <p>Let $w = \sqrt[3]{1}+\sqrt[3]{2}+\sqrt[3]{4}$.<br/>
How to prove that there are no triples $(a,b,c)$, such that</p>
<ul>
<li>$a,b,c \in \mathbb{Q}$;</li>
<li>$a \leqslant b \leqslant c$;</li>
<li>$(a,b,c)\ne (1,2,4)$;</li>
<li>$w = \sqrt[3]{a}+\sqrt[3]{b}+\sqrt[3]{c}$.</li>
</ul>
<p>Or maybe there exists one?</p>
| <p>Everything will follow from the iteration of the following lemma (whose proof is explained
at the very end of this answer) :</p>
<p><b> FUNDAMENTAL LEMMA. </b> Let ${\mathbb K}$ be a field, and let
$a$ be a non-cube in ${\mathbb K}$. Then all the “new” cubes in ${\mathbb K}(\sqrt[3]a)$
are made up of $a$ and “old” cubes : they can be written as $w^3,aw^3$ or $a^2(w^3)$ for some $w\in {\mathbb K}$.</p>
<p><b> COROLLARY 1. </b> Let ${\mathbb K}$ be a subfield of $\mathbb R$, and let $a_1,a_2, \ldots, a_n$ be a sequence of real numbers such that for any $i$, $a_i$ is a non-cube
in ${\mathbb K}(\sqrt[3]{a_1},\sqrt[3]{a_2}, \ldots ,\sqrt[3]{a_{i-1}})$. Then all the cubes in ${\mathbb K}(\sqrt[3]{a_1},\sqrt[3]{a_2}, \ldots ,\sqrt[3]{a_{n}})$ can be written as $a_1^{i_1}a_2^{i_2} \ldots a_n^{i_n}(w^3)$ where $w\in {\mathbb K}$ and each
$i_k$ is $0,1$ or $2$.</p>
<p><b> COROLLARY 2. </b> Let $r_1,r_2, \ldots ,r_n$ be positive rational numbers<br>
such that there is no-nontrivial relationship between them of the form
$r_1^{i_1}r_2^{i_2} \ldots r_n^{i_n}=w^3$, where each
$i_k$ is $0,1$ or $2$, and $w$ is rational (non-trivial means that not all the $i_k$ are zero). Then the $3^n$ algebraic numbers $s_1^{i_1}s_2^{i_2} \ldots s_n^{i_n}$ (where
$s_i=\sqrt[3]{r_i}$) are linearly independent over the rationals.</p>
<p>Now, suppose that $\sqrt[3]{a}+\sqrt[3]{b}+\sqrt[3]{c}=\sqrt[3]{1}+\sqrt[3]{2}+\sqrt[3]{4}$. The degree
of the extension ${\mathbb Q}(\sqrt[3]{2},\sqrt[3]{4},\sqrt[3]{a},\sqrt[3]{b},\sqrt[3]{c})$
(call it $d$) is a power of three. If it were strictly greater than $3$, then $d\geq 9$
and by corollary 2, any non-trivial relation between cube roots would need at least nine
terms, which is too much for the six we've got. So $d=3$, which means that $a,b,c$ must all be, up to sign, (possibly negative) powers of $2$. So there are essentially three cases :</p>
<p>$$
\begin{eqnarray}
\sqrt[3]{2^i}+\sqrt[3]{2^j}+\sqrt[3]{2^k}&=& \sqrt[3]{1}+\sqrt[3]{2}+\sqrt[3]{4} \\
\sqrt[3]{2^i}+\sqrt[3]{2^j}-\sqrt[3]{2^k}&=& \sqrt[3]{1}+\sqrt[3]{2}+\sqrt[3]{4} \\
\sqrt[3]{2^i}-\sqrt[3]{2^j}-\sqrt[3]{2^k}&=& \sqrt[3]{1}+\sqrt[3]{2}+\sqrt[3]{4} \\
\end{eqnarray}
$$
where $i,j,k$ are integers. It is now easy to see that $i,j,k$ must form a complete
system of residues modulo $3$, and to check that $(1,2,4)$ is indeed the only solution
to the original problem.</p>
<p><b> PROOF OF FUNDAMENTAL LEMMA </b>. Let ${\mathbb L}={\mathbb K}(\sqrt[3]a)$. We have
$[{\mathbb L}:{\mathbb K}]=3$. Let $c$ be a cube in $\mathbb L$. Then, there are
$x_1,x_2,x_3 \in {\mathbb K}$ such that $c=(x_1+x_2\sqrt[3]{a}+x_3\sqrt[3]{a^2})^3$. Expanding, we find $c=C_0+3C_1\sqrt[3]a+3C_2\sqrt[3]{a^2}$, with </p>
<p>$$
\begin{eqnarray}
C_0&=&\bigg(x_1^3 + 6ax_1x_2x_3 + (ax_2^3 + a^2x_3^3)\bigg), \\
C_1&=&\bigg(x_1^2x_2 + ax_1x_3^2 + ax_2^2x_3\bigg), \\
C_2&=&\bigg(x_1^2x_3 + x_1x_2^2 + ax_2x_3^2\bigg)
\end{eqnarray}
$$ </p>
<p>So $C_1$ and $C_2$ must both be $0$, and hence</p>
<p>$$
ax_2x_3(x_2^3-ax_3^3)=(x_2^2+x_1x_3)C_1-(ax_3^2+x_1x_2)C_2=0
$$</p>
<p>Since $a$ is not a cube in $\mathbb K$, we must have $x_3=0$ or $x_2=0$.
If $x_3=0$, $C_2$ simplifies to $x_1x_2^2$, and if $x_2=0$, $C_2$ simplifies
to $x_1^2x_3$. So at least two of $x_1,x_2,x_3$ are zero, which means that $x_1+x_2\sqrt[3]{a}+x_3\sqrt[3]{a^2}$ is a multiple (by some element of $\mathbb K$)
of $1,\sqrt[3]{a}$, or $\sqrt[3]{a^2}$, as wished.</p>
| <p>Denote $t=\sqrt[3]2,w=e^{\frac{2\pi i}{3}}=-\frac{1}{2}+\frac{\sqrt3}{2}i,A=\sqrt[3]a,B=\sqrt[3]b,C=\sqrt[3]c.$</p>
<p>If $A,B,C$ are all $\notin \mathbb{Q},$ rewrite the equation as
$$A+B+C-t-t^2=1.\tag1$$</p>
<blockquote>
<p><strong>Lemma:</strong> Every number $g(\theta)$ of the field $K(\theta)$ is likewise an algebraic number over $k$ of degree at most $n$. The relative conjugates of a number $a=g(\theta)$ are the distinct numbers among the numbers $g(\theta_i)\ (i=1,2,\dots, n)$. Each conjugate to $a$ appears equally often among the $g(\theta_i).$ </p>
</blockquote>
<p>You can find this lemma in Erich Hecke, <em>Lectures on the theory of algebraic numbers</em>, page 61.</p>
<p>Denote $K=\mathbb{Q}(A,B,C,t)=\mathbb{Q}(\theta)$ for some algebraic number $\theta.$ Assume $[K:\mathbb{Q}]=n=3^r,$ where $r$ is an integer and $1\leq r \leq4.$</p>
<p>Now the conjugates to $A$ with respect to $\mathbb{Q}$ are $A,Aw,Aw^2,$ so do $B,C,t.$ If $A=g_1(\theta),$ and $\theta_i\ (i=1,2,\dots, n)$ are the conjugates to $\theta$ with respect to $\mathbb{Q},$ then $g_1(\theta_i) (i=1,2,\dots, n)$ are the conjugates to $A$ with respect to $\mathbb{Q},$ and each of $A,Aw,Aw^2$ appears equally often among the $g_1(\theta_i),$ namely $3^{r-1}$ times.</p>
<p>Assume that $A+B+C-t-t^2=g_1(\theta)+g_2(\theta)+g_3(\theta)-g_4(\theta)-g_4(\theta)^2=G(\theta).$ If $\theta$ goes over $\theta_i\ (i=1,2,\dots, n)$ then $g_1(\theta)$ goes over $A,Aw,Aw^2,$ and each of them appears $3^{r-1}$ times. So do $B,C,t.$</p>
<p>Since $A,B,C,t$ are all $\notin \mathbb{Q},$ we have $$\sum_{i=1}^{n}g_1(\theta_i)=\frac{n}{3}\sum_{i=0}^{2}Aw^i=0.$$
So do $B,C,t.$ Hence we get $$\sum_{i=1}^{n}G(\theta_i)=0.$$
But $$\sum_{i=1}^{n}G(\theta_i)=\sum_{i=1}^{n}1=n,$$
a contradiction. Hence it is impossible that $A,B,C$ are all $\notin \mathbb{Q}.$ WLOG we assume that $A \in \mathbb{Q}.$ If $B,C$ are both $\notin \mathbb{Q},$ then $$\sum_{i=1}^{n}G(\theta_i)=nA=n,\ \text{ so } A=1.$$
Now rewrite $(1)$ as $$B+C-t-t^2=0.$$</p>
<p>We get $$B/t+C/t-t/t-t^2/t=0.$$
$$\sqrt[3]{\frac{b}{2}}+\sqrt[3]{\frac{c}{2}}-t=1.$$
For the same reason we get $\dfrac{b}{2}=1$ or $\dfrac{c}{2}=1.$ Now we are done.</p>
<p>This method generalizes easily to other cases. For example, we can prove that $\sqrt[3]{3},\sqrt[5]{23},\sqrt[11]{24}$ are linearly independent over $\mathbb{Q}.$</p>
|
linear-algebra | <p>How do I find a vector perpendicular to a vector like this: $$3\mathbf{i}+4\mathbf{j}-2\mathbf{k}?$$
Could anyone explain this to me, please?</p>
<p>I have a solution to this when I have $3\mathbf{i}+4\mathbf{j}$, but could not solve if I have $3$ components...</p>
<p>When I googled, I saw the direct solution but did not find a process or method to follow. Kindly let me know the way to do it. Thanks.</p>
| <p>There exist infinitely many vectors in 3 dimensions that are perpendicular to a fixed one.
They should only satisfy the following formula:
$$(3\mathbf{i}+4\mathbf{j}-2\mathbf{k}) \cdot v=0$$</p>
<p>To find all of them, just choose two linearly independent perpendicular vectors, like $v_1=(4\mathbf{i}-3\mathbf{j})$ and $v_2=(2\mathbf{i}+3\mathbf{k})$; then any linear combination of them is also perpendicular to the original vector: $$v=((4a+2b)\mathbf{i}-3a\mathbf{j}+3b\mathbf{k}) \hspace{10 mm} a,b \in \mathbb{R}$$</p>
| <p>Take the cross product with any vector that is not parallel to the given one. You will get one such perpendicular vector.</p>
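Both approaches are easy to check numerically; here is a small sketch (NumPy, with an arbitrarily chosen second vector):

```python
import numpy as np

v = np.array([3.0, 4.0, -2.0])

# Cross product with any vector not parallel to v gives a perpendicular vector.
w = np.cross(v, np.array([1.0, 0.0, 0.0]))

print(w)             # one perpendicular vector, here (0, -2, -4)
print(np.dot(v, w))  # 0.0, confirming perpendicularity
```

Note that a parallel choice for the second factor would give the zero vector, which is why the second vector must not be parallel to $v$.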
|
differentiation | <p>In a scientific paper, I've seen the following</p>
<p><span class="math-container">$$\frac{\delta K^{-1}}{\delta p} = -K^{-1}\frac{\delta K}{\delta p}K^{-1}$$</span></p>
<p>where <span class="math-container">$K$</span> is a <span class="math-container">$n \times n$</span> matrix that depends on <span class="math-container">$p$</span>. In my calculations I would have done the following</p>
<p><span class="math-container">$$\frac{\delta K^{-1}}{\delta p} = -K^{-2}\frac{\delta K}{\delta p}=-K^{-T}K^{-1}\frac{\delta K}{\delta p}$$</span></p>
<p>Is my calculation wrong?</p>
<p>Note: I think <span class="math-container">$K$</span> is symmetric.</p>
| <p>The major trouble in matrix calculus is that the things are no longer commuting, but one tends to use formulae from the scalar function calculus like $(x(t)^{-1})'=-x(t)^{-2}x'(t)$ replacing $x$ with the matrix $K$. <strong>One has to be more careful here and pay attention to the order</strong>. The easiest way to get the derivative of the inverse is to derivate the identity $I=KK^{-1}$ <em>respecting the order</em>
$$
\underbrace{(I)'}_{=0}=(KK^{-1})'=K'K^{-1}+K(K^{-1})'.
$$
Solving this equation with respect to $(K^{-1})'$ (again paying attention to the order (!)) will give
$$
K(K^{-1})'=-K'K^{-1}\qquad\Rightarrow\qquad (K^{-1})'=-K^{-1}K'K^{-1}.
$$</p>
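As a numerical sanity check of this formula (a sketch with a made-up family $K(p)=A+pB$; the matrices $A$, $B$ are arbitrary, chosen only so that $K(p)$ is invertible near the test point):

```python
import numpy as np

# Hypothetical one-parameter family K(p) = A + p*B.
A = np.array([[4.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 5.0]])
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 2.0]])

K = lambda p: A + p * B
dK = B  # exact derivative dK/dp for this family

p, h = 0.7, 1e-6
Kinv = np.linalg.inv(K(p))

# Central finite difference of p -> K(p)^{-1}
fd = (np.linalg.inv(K(p + h)) - np.linalg.inv(K(p - h))) / (2 * h)

# The formula derived above: (K^{-1})' = -K^{-1} K' K^{-1}
formula = -Kinv @ dK @ Kinv

print(np.max(np.abs(fd - formula)))  # tiny (finite-difference accuracy)
```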
| <p>Yes, your calculation is wrong, note that $K$ may not commute with $\frac{\partial K}{\partial p}$, hence you must apply the chain rule correctly. The derivative of $\def\inv{\mathrm{inv}}\inv \colon \def\G{\mathord{\rm GL}}\G_n \to \G_n$ is <strong>not</strong> given by $\inv'(A)B = -A^2B$, but by $\inv'(A)B = -A^{-1}BA^{-1}$. To see that, note that for small enough $B$ we have
\begin{align*}
\inv(A + B) &= (A + B)^{-1}\\
&= (\def\I{\mathord{\rm Id}}\I + A^{-1}B)^{-1}A^{-1}\\
&= \sum_k (-1)^k (A^{-1}B)^kA^{-1}\\
&= A^{-1} - A^{-1}BA^{-1} + o(\|B\|)
\end{align*}
Hence, $\inv'(A)B= -A^{-1}BA^{-1}$, and therefore, by the chain rule
$$ \partial_p (\inv \circ K) = \inv'\circ K\bigl(\partial_p K) = -K^{-1}(\partial_p K) K^{-1} $$ </p>
|
linear-algebra | <p>Why is the determinant as a function from <span class="math-container">$M_n(\mathbb{R})$</span> to <span class="math-container">$\mathbb{R}$</span> continuous? Can anyone explain precisely and rigorously? So far, I know the explanation which comes from the facts that</p>
<ul>
<li>polynomials are continuous,</li>
<li>sum and product of continuous functions are continuous.</li>
</ul>
<p>Also I have the confusion regarding the metric on <span class="math-container">$M_n(\mathbb{R})$</span>.</p>
| <p><span class="math-container">$M_n(\mathbb R)$</span> is just <span class="math-container">$\mathbb R^{n^2}$</span> with the Euclidean metric.</p>
<p>The determinant is continuous because it is a polynomial in the coordinates:
<span class="math-container">$$
\det(X) = \det ([x_{i,j}])= \sum_\sigma \text{sgn}(\sigma) \prod_{i=1}^{n} x_{\sigma(i),i},
$$</span>
where <span class="math-container">$\sigma:\{1,\dots,n\}\to\{1,\dots,n\}$</span> is a permutation.</p>
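Since this formula is a finite sum of products of the entries, it can be coded directly; a small sketch (the helper names are mine, not standard) comparing it against a library determinant:

```python
import numpy as np
from itertools import permutations

def sign(perm):
    """Sign of a permutation given as a tuple, via its inversion count."""
    inv = sum(perm[i] > perm[j]
              for i in range(len(perm)) for j in range(i + 1, len(perm)))
    return -1 if inv % 2 else 1

def det_leibniz(X):
    """Determinant as the sum over permutations, as in the formula above."""
    n = X.shape[0]
    return sum(sign(p) * np.prod([X[p[i], i] for i in range(n)])
               for p in permutations(range(n)))

X = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(det_leibniz(X), np.linalg.det(X))  # both 8.0
```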
| <p>Recall that the determinant can be computed by a sum of determinants of minors, that is "sub"-matrices of smaller dimension.</p>
<p>Now we can prove by induction that $\det$ is continuous:</p>
<ul>
<li>For $n=1$, $A\in M_1(\mathbb R)$ is simply a scalar we have that $\det A=A$, and surely the identity function is continuous.</li>
<li><p>Suppose that for $n$ we have that $\det$ is continuous on $M_n(\mathbb R)$, and let $A\in M_{n+1}(\mathbb R)$. We know that $\det A$ can be calculated as the alternating sum of the entries of the first row, each multiplied by the determinant of the appropriate minor. </p>
<p>So $\det A$ is written as a sum and scalar multiplication of $\det$ on a smaller dimension. From the induction hypothesis these are continuous and therefore $\det$ is continuous on $n+1\times n+1$ matrices.</p></li>
</ul>
|
logic | <p>I am studying entailment in classical first-order logic.</p>
<p>The Truth Table we have been presented with for the statement $(p \Rightarrow q)\;$ (a.k.a. '$p$ implies $q$') is:
$$\begin{array}{|c|c|c|}
\hline
p&q&p\Rightarrow q\\ \hline
T&T&T\\
T&F&F\\
F&T&T\\
F&F&T\\\hline
\end{array}$$</p>
<p>I 'get' lines 1, 2, and 3, but I do not understand line 4. </p>
<p>Why is the statement $(p \Rightarrow q)$ True if both p and q are False?</p>
<p>We have also been told that $(p \Rightarrow q)$ is logically equivalent to $(~p || q)$ (that is $\lnot p \lor q$).</p>
<p>Stemming from my lack of understanding of line 4 of the Truth Table, I do not understand why this equivalence is accurate.</p>
<hr>
<blockquote>
<p><strong>Administrative note.</strong> You may experience being directed here even though your question was actually about line 3 of the truth table instead. In that case, see the companion question <a href="https://math.stackexchange.com/questions/70736/in-classical-logic-why-is-p-rightarrow-q-true-if-p-is-false-and-q-is-tr">In classical logic, why is $(p\Rightarrow q)$ True if $p$ is False and $q$ is True?</a> And even if your original worry was about line 4, it might be useful to skim the other question anyway; many of the answers to either question attempt to explain <em>both</em> lines.</p>
</blockquote>
| <p>Here is an example. Mathematicians claim that this is true: </p>
<p><strong>If $x$ is a rational number, then $x^2$ is a rational number</strong> </p>
<p>But let's consider some cases. Let $P$ be "$x$ is a rational number". Let $Q$ be "$x^2$ is a rational number".<br>
When $x=3/2$ we have $P, Q$ both true, and $P \rightarrow Q$ of the form $T \rightarrow T$ is also true.<br>
When $x=\pi$ we have $P,Q$ both false, and $P \rightarrow Q$ of the form $F \rightarrow F$ is true.<br>
When $x=\sqrt{2}$ we have $P$ false and $Q$ true, so $P \rightarrow Q$ of the form $F \rightarrow T$ is again true. </p>
<p>But the assertion in bold I made above means that we never <em>ever</em> get the case $T \rightarrow F$, no matter what number we put in for $x$.</p>
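The whole truth table — including line 4 — and the equivalence with $\lnot p \lor q$ mentioned in the question can also be checked mechanically by enumerating all four cases:

```python
# The four rows of the table for p => q, written as a dictionary.
implies_table = {(True, True): True, (True, False): False,
                 (False, True): True, (False, False): True}

# Check row by row that p => q coincides with (not p) or q.
for p in (True, False):
    for q in (True, False):
        assert implies_table[(p, q)] == ((not p) or q)

print("p => q agrees with (not p) or q on all four rows")
```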
| <p>Here are two explanations from the books on my shelf followed by my attempt. The first one is probably the easiest justification to agree with. The second one provides a different way to think about it.</p>
<p>From Robert Stoll's “Set Theory and Logic” page 165:</p>
<blockquote>
<p>To understand the 4th line, consider the statement <span class="math-container">$(P \land Q) \to P$</span>. We expect this to be true regardless of the choice of <span class="math-container">$P$</span> and <span class="math-container">$Q$</span>. But if <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> are both false, then <span class="math-container">$P \land Q$</span> is false, and we are led to the conclusion that if both antecedent and consequent are false, a conditional is true.</p>
</blockquote>
<p>From Herbert Enderton's “A Mathematical Introduction to Logic” page 21:</p>
<blockquote>
<p>For example, we might translate the English sentence, ”If you're telling the truth then I'm a monkey's uncle,” by the formula <span class="math-container">$(V \to M)$</span>. We assign this formula the value <span class="math-container">$T$</span> whenever you are fibbing. In assigning the value <span class="math-container">$T$</span>, we are certainly not assigning any causal connection between your veracity and any simian features of my nephews or nieces. The sentence in question is a <em>conditional</em> statement. It makes an assertion about my relatives provided a certain <em>condition</em> — that you are telling the truth — is met. Whenever that condition fails, the statement is vacuously true.</p>
<p>Very roughly, we can think of a conditional formula <span class="math-container">$(p \to q)$</span> as expressing a <em>promise</em> that if a certain condition is met (viz., that <span class="math-container">$p$</span> is true), then <span class="math-container">$q$</span> is true. If the condition <span class="math-container">$p$</span> turns out not to be met, then the promise stands unbroken, regardless of <span class="math-container">$q$</span>.</p>
</blockquote>
<p>That's why it's said to be “vacuously true”. That <span class="math-container">$(p \to q)$</span> is True when both <span class="math-container">$p$</span>, <span class="math-container">$q$</span> are False is different from saying the conclusion <span class="math-container">$q$</span> is True (which would be a contradiction). Rather, this is more like saying “we cannot show <span class="math-container">$(p \to q)$</span> to be false here”, and Not False is True.</p>
|
geometry | <p>Today we learned about Pythagoras' theorem. Sadly, I can't understand the logic behind it.</p>
<p><em>$A^{2} + B^{2} = C^{2}$</em></p>
<p><img src="https://i.sstatic.net/RTMeE.png" alt="enter image description here"></p>
<p>$C^{2} = (5 \text{ cm})^2 + (7 \text{ cm})^2$ <br>
$C^{2} = 25 \text{ cm}^2 + 49 \text{ cm}^2$ <br>
$C^{2} = 74 \text{ cm}^2$<br><br>
${x} = +\sqrt{74} \text{ cm}$<br></p>
<p>Why does the area of a square with a side of $5$ cm + the area of a square with a side of $7$ cm always equal to the missing side's length squared?</p>
<p>I asked my teacher but she's clueless and said Pythagoras' theorem had nothing to do with squares.</p>
<p>However, I know it does because this formula has to somehow make sense. Otherwise, it wouldn't exist.</p>
| <p>Ponder this image and you will see why your intuition about what is going on is correct:</p>
<p><img src="https://i.sstatic.net/POhH1.png" alt="enter image description here"></p>
| <p>It is actually not important that the figures you put on the edges of your right triangle are squares. They can be any shape, since the area of a figure scales with the square of its side. So you can put pentagons, stars, or a camel's shape... In the following picture the area of the large pentagon is the sum of the areas of the two smaller ones:</p>
<p><img src="https://i.sstatic.net/ZTtcc.png" alt="I'm am not able to rescale Camels with geogebra... so here are pentagons"></p>
<p>But you can also put a triangle similar to the original one, and you can put it <em>inside</em> instead of <em>outside</em>. So here comes the proof of Pythagoras' Theorem.</p>
<p><img src="https://i.sstatic.net/6LOE5.png" alt="enter image description here"></p>
<p>Let $ABC$ be your triangle with a right angle in $C$. Let $H$ be the projection of $C$ onto $AB$. You can easily notice that $ABC$, $ACH$ and $CBH$ are similar. And they are the triangles constructed inside the edges of $ABC$. Clearly the area of $ABC$ is the sum of the areas of $ACH$ and $CBH$, so the theorem is proven.</p>
<p>In my opinion this is the proof which gives the essence of the theorem.</p>
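The area bookkeeping in this proof can be verified numerically; a short sketch using the triangle from the question (legs 5 and 7, right angle at $C$):

```python
import numpy as np

# Right angle at C; legs 5 and 7, as in the question.
C = np.array([0.0, 0.0])
A = np.array([5.0, 0.0])
B = np.array([0.0, 7.0])

def area(P, Q, R):
    """Unsigned triangle area via the shoelace formula."""
    return abs((Q[0] - P[0]) * (R[1] - P[1])
               - (Q[1] - P[1]) * (R[0] - P[0])) / 2

# Foot H of the altitude from C onto the hypotenuse AB.
d = B - A
H = A + d * (np.dot(C - A, d) / np.dot(d, d))

# The two sub-triangles (each similar to ABC) tile the big one:
print(area(A, C, H) + area(C, B, H), area(A, B, C))  # both 17.5

# Since areas of similar figures scale with the square of the hypotenuse,
# this identity is exactly (AC^2 + CB^2)/AB^2 = 1, i.e. a^2 + b^2 = c^2:
print((np.dot(A - C, A - C) + np.dot(C - B, C - B)) / np.dot(d, d))  # 1.0
```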
|
probability | <p>Two people have to spend exactly 15 consecutive minutes in a bar on a given day, between 12:00 and 13:00. Assuming uniform arrival times, what is the probability they will meet?</p>
<p>I am mainly interested to see how people would model this formally. I came up with the answer 50% (wrong!) based on the assumptions that:</p>
<ul>
<li>independent uniform arrival</li>
<li>they will meet iff they actually overlap by some $\epsilon > 0$ </li>
<li>we can measure time continuously</li>
</ul>
<p>but my methods felt a little ad hoc to me, and I would like to learn to make it more formal.</p>
<p>Also I'm curious whether people think the problem is formulated unambiguously. I added the assumption of independent arrival myself for instance, because I think without such an assumption the problem is not well defined.</p>
| <p>This is a great question to answer graphically. First note that the two can't arrive after 12:45, since they have to spend at least 15 minutes in the bar. Second, note that they meet if their arrival times differ by less than 15 minutes.</p>
<p>If we plot the arrival time of person 1 on the x axis, and person 2 on the y axis, then they meet if the point representing their arrival times is between the two blue lines in the figure below. So we just need to calculate that area, relative to the area of the whole box.</p>
<p>The whole box clearly has area</p>
<p>$$45 \times 45 = 2025$$</p>
<p>The two triangles together have area</p>
<p>$$2\times \tfrac{1}{2} \times 30 \times 30 = 900$$</p>
<p>Therefore the probability of the friends meeting is</p>
<p>$$(2025-900)/2025 = 1125/2025 = 5/9$$</p>
<p><img src="https://i.sstatic.net/Yw8sL.png" alt="Graph"></p>
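A Monte Carlo simulation (a sketch; the sample size and seed are arbitrary) agrees with this geometric answer:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Arrival times uniform on [0, 45] minutes past 12:00
# (each person must arrive by 12:45 to stay 15 minutes).
x = rng.uniform(0, 45, n)
y = rng.uniform(0, 45, n)

# They meet iff their arrival times differ by less than 15 minutes.
p_meet = np.mean(np.abs(x - y) < 15)
print(p_meet, 5 / 9)  # both about 0.5556
```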
| <p><img src="https://i.sstatic.net/zaxIv.png" alt="solution"></p>
<p>$$ \text{chance they meet} = \frac{\text{green area}}{\text{green area + blue area}} = \frac{45^2 - 30^2}{45^2} $$</p>
|
geometry | <p>I am trying to understand the notion of an orientable manifold.<br>
Let M be a smooth n-manifold. We say that M is orientable if and only if there exists an atlas $A = \{(U_{\alpha}, \phi_{\alpha})\}$ such that $\textrm{det}(J(\phi_{\alpha} \circ \phi_{\beta}^{-1}))> 0$ (where defined). My question is:<br>
Using this definition of orientation, how can one prove that the Möbius strip is not orientable? </p>
<p>Thank you!</p>
| <p>If you had an orientation, you'd be able to define at each point $p$ a unit vector $n_p$ normal to the strip at $p$, in a way that the map $p\mapsto n_p$ is continuous. Moreover, this map is completely determined once you fix the value of $n_p$ for some specific $p$. (You have two possibilities, this uses a tangent plane at $p$, which is definable using a $(U_\alpha,\phi_\alpha)$ that covers $p$.)</p>
<p>The point is that the positivity condition you wrote gives you that the normal at any $p'$ is independent of the specific $(U_{\alpha'},\phi_{\alpha'})$ you may choose to use, and path connectedness gives you the uniqueness of the map. Now you simply check that if you follow a loop around the strip, the value of $n_p$ changes sign when you return to $p$, which of course is a contradiction.</p>
<p>(This is just a formalization of the intuitive argument.)</p>
| <p>Let $M:=\{(x,y)|x\in\mathbb R, -1<y<1\}$ be an infinite strip and choose an $L>0$. The equivalence relation $(x+L,-y)\sim(x,y)$ defines a Möbius strip $\hat M$. Let $\pi: M \to \hat M$ be the projection map. The Möbius strip $\hat M$ inherits the differentiable structure from ${\mathbb R}^2$. We have to prove that $\hat M$ does not admit an atlas of the described kind which is compatible with the differentiable structure on $\hat M$. Assume that there is such an atlas $(U_\alpha,\phi_\alpha)_{\alpha\in I}$. We then define a function $\sigma:{\mathbb R}\to\{-1,1\}$ as follows: For given $x\in{\mathbb R}$ the point $\pi(x,0)$ is in $\hat M$, so there is an $\alpha\in I$ with $\pi(x,0)\in U_\alpha$. The map $f:=\phi_\alpha^{-1}\circ\pi$ is a diffeomorphism in a neighbourhood $V$ of $(x,0)$. Put $\sigma(x):=\mathrm{sgn}\thinspace J_f(x,0)$, where $J_f$ denotes the Jacobian of $f$. One easily checks that $\sigma(\cdot)$ is well defined and is locally constant, whence it is constant on ${\mathbb R}$. On the other hand we have $f(x+L,y)\equiv f(x,-y)$ in $V$ which implies $\sigma(L)=-\sigma(0)$ -- a contradiction.</p>
|
matrices | <p>I happened to stumble upon the following matrix:
$$ A = \begin{bmatrix}
a & 1 \\
0 & a
\end{bmatrix}
$$</p>
<p>And after trying a bunch of different examples, I noticed the following remarkable pattern. If $P$ is a polynomial, then:
$$ P(A)=\begin{bmatrix}
P(a) & P'(a) \\
0 & P(a)
\end{bmatrix}$$</p>
<p>Where $P'(a)$ is the derivative evaluated at $a$.</p>
<p>Futhermore, I tried extending this to other matrix functions, for example the matrix exponential, and wolfram alpha tells me:
$$ \exp(A)=\begin{bmatrix}
e^a & e^a \\
0 & e^a
\end{bmatrix}$$
and this does in fact follow the pattern since the derivative of $e^x$ is itself!</p>
<p>Furthermore, I decided to look at the function $P(x)=\frac{1}{x}$. If we interpret the reciprocal of a matrix to be its inverse, then we get:
$$ P(A)=\begin{bmatrix}
\frac{1}{a} & -\frac{1}{a^2} \\
0 & \frac{1}{a}
\end{bmatrix}$$
And since $f'(a)=-\frac{1}{a^2}$, the pattern still holds!</p>
<p>After trying a couple more examples, it seems that this pattern holds whenever $P$ is any rational function.</p>
<p>I have two questions:</p>
<ol>
<li><p>Why is this happening?</p></li>
<li><p>Are there any other known matrix functions (which can also be applied to real numbers) for which this property holds?</p></li>
</ol>
| <p>If $$ A = \begin{bmatrix}
a & 1 \\
0 & a
\end{bmatrix}
$$
then by induction you can prove that
$$ A^n = \begin{bmatrix}
a^n & n a^{n-1} \\
0 & a^n
\end{bmatrix} \tag 1
$$
for $n \ge 1 $. If $f$ can be developed into a power series
$$
f(z) = \sum_{n=0}^\infty c_n z^n
$$
then
$$
f'(z) = \sum_{n=1}^\infty n c_n z^{n-1}
$$
and it follows that
$$
f(A) = \sum_{n=0}^\infty c_n A^n = c_0 I + \sum_{n=1}^\infty c_n
\begin{bmatrix}
a^n & n a^{n-1} \\
0 & a^n
\end{bmatrix} = \begin{bmatrix}
f(a) & f'(a) \\
0 & f(a)
\end{bmatrix} \tag 2
$$
From $(1)$ and
$$
A^{-1} = \begin{bmatrix}
a^{-1} & -a^{-2} \\
0 & a^{-1}
\end{bmatrix}
$$
one gets
$$
A^{-n} = \begin{bmatrix}
a^{-1} & -a^{-2} \\
0 & a^{-1}
\end{bmatrix}^n =
(-a^{-2})^{n} \begin{bmatrix}
-a & 1 \\
0 & -a
\end{bmatrix}^n \\ =
(-1)^n a^{-2n} \begin{bmatrix}
(-a)^n & n (-a)^{n-1} \\
0 & (-a)^n
\end{bmatrix} =
\begin{bmatrix}
a^{-n} & -n a^{-n-1} \\
0 & a^{-n}
\end{bmatrix}
$$
which means that $(1)$ holds for negative exponents as well.
As a consequence, $(2)$ can be generalized to functions
admitting a Laurent series representation:
$$
f(z) = \sum_{n=-\infty}^\infty c_n z^n
$$</p>
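Identity $(2)$ is easy to verify numerically for a concrete polynomial; a small sketch with an arbitrarily chosen $a$ and $P(x)=2x^3-5x+1$:

```python
import numpy as np

a = 1.7
A = np.array([[a, 1.0],
              [0.0, a]])

# P(x) = 2x^3 - 5x + 1, so P'(x) = 6x^2 - 5.
P  = lambda x: 2 * x**3 - 5 * x + 1
dP = lambda x: 6 * x**2 - 5

# Evaluate the polynomial at the matrix A directly.
I = np.eye(2)
PA = 2 * (A @ A @ A) - 5 * A + 1 * I

# Compare with the pattern [[P(a), P'(a)], [0, P(a)]].
expected = np.array([[P(a), dP(a)],
                     [0.0,  P(a)]])
print(np.max(np.abs(PA - expected)))  # essentially 0
```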
| <p>It's a general statement: if <span class="math-container">$J$</span> is a Jordan block and <span class="math-container">$f$</span> is a sufficiently differentiable function, then
<span class="math-container">\begin{equation}
f(J)=\left(\begin{array}{ccccc}
f(\lambda_{0}) & \frac{f'(\lambda_{0})}{1!} & \frac{f''(\lambda_{0})}{2!} & \ldots & \frac{f^{(n-1)}(\lambda_{0})}{(n-1)!}\\
0 & f(\lambda_{0}) & \frac{f'(\lambda_{0})}{1!} & & \vdots\\
0 & 0 & f(\lambda_{0}) & \ddots & \frac{f''(\lambda_{0})}{2!}\\
\vdots & \vdots & \vdots & \ddots & \frac{f'(\lambda_{0})}{1!}\\
0 & 0 & 0 & \ldots & f(\lambda_{0})
\end{array}\right)
\end{equation}</span>
where
<span class="math-container">\begin{equation}
J=\left(\begin{array}{ccccc}
\lambda_{0} & 1 & 0 & 0\\
0 & \lambda_{0} & 1& 0\\
0 & 0 & \ddots & 1\\
0 & 0 & 0 & \lambda_{0}
\end{array}\right)
\end{equation}</span>
This statement can be demonstrated in various ways (none of them short), but it's a quite known formula. I think you can find it in various books, like in Horn and Johnson's <em>Matrix Analysis</em>.</p>
|
probability | <p>How do we go about deriving the values of mean and variance of a Gaussian Random Variable $X$ given its probability density function ?</p>
| <p><strong>UPDATE 21-03-2017</strong><br />
A much faster way is to differentiate both sides of</p>
<p><span class="math-container">$$\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\}dx=1$$</span>
with respect to the two parameters <span class="math-container">$\mu$</span> and <span class="math-container">$\sigma^2$</span> (RHS will then be zero).</p>
<hr />
<p>The Gaussian pdf is defined as
<span class="math-container">$$f_X(x) =\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\}$$</span></p>
<p>where <span class="math-container">$\mu$</span> and <span class="math-container">$\sigma$</span> are two parameters, with <span class="math-container">$\sigma >0$</span>.
By definition of the mean we have
<span class="math-container">$$E(X) = \int_{-\infty}^{\infty}x\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\}dx$$</span>
which using integral properties can be written as</p>
<p><span class="math-container">$$E(X) = \int_{-\infty}^{\infty}(x+\mu)\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{x^2}{2\sigma^2}\right\}dx$$</span></p>
<p><span class="math-container">$$=\int_{-\infty}^{\infty}x\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{x^2}{2\sigma^2}\right\}dx \;+\; \int_{-\infty}^{\infty}\mu\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{x^2}{2\sigma^2}\right\}dx \qquad [1]$$</span></p>
<p>The first integral, call it <span class="math-container">$I_1$</span>, equals zero, because we integrate an odd function (the product of an odd and an even function is odd) over an interval centered at zero. To be pedantic, using additivity we have</p>
<p><span class="math-container">$$I_1 = \int_{-\infty}^0x\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{x^2}{2\sigma^2}\right\}dx + \int_{0}^{\infty}x\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{x^2}{2\sigma^2}\right\}dx$$</span>
Swapping the integration limits in the first we have</p>
<p><span class="math-container">$$I_1 = -\int_{0}^{-\infty}x\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{x^2}{2\sigma^2}\right\}dx + \int_{0}^{\infty}x\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{x^2}{2\sigma^2}\right\}dx$$</span></p>
<p>and using again integral properties we have</p>
<p><span class="math-container">$$I_1 = \int_{0}^{\infty}(-x)\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{(-x)^2}{2\sigma^2}\right\}dx + \int_{0}^{\infty}x\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{x^2}{2\sigma^2}\right\}dx$$</span></p>
<p><span class="math-container">$$\Rightarrow I_1 = -\int_{0}^{\infty}x\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{x^2}{2\sigma^2}\right\}dx + \int_{0}^{\infty}x\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{x^2}{2\sigma^2}\right\}dx = 0\qquad [2]$$</span></p>
<p>So we have that</p>
<p><span class="math-container">$$E(X) = \int_{-\infty}^{\infty}\mu\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{x^2}{2\sigma^2}\right\}dx $$</span></p>
<p>Substituting <span class="math-container">$x \mapsto \sigma \sqrt2\,x$</span> (the factor <span class="math-container">$\sigma \sqrt2$</span> comes from <span class="math-container">$dx$</span>) we obtain</p>
<p><span class="math-container">$$E(X) = \int_{-\infty}^{\infty}\mu\frac{1}{\sqrt{\pi}}e^{-x^2} dx = \mu\frac{2}{\sqrt{\pi}}\int_{0}^{\infty}e^{-x^2} dx$$</span></p>
<p>...the last term because the integrand is an even function.</p>
<p>Now <span class="math-container">$$\frac{2}{\sqrt{\pi}}\int_{0}^{\infty}e^{-x^2} dx = \lim_{t\rightarrow \infty}\frac{2}{\sqrt{\pi}}\int_{0}^{t}e^{-x^2} dx = \lim_{t\rightarrow \infty} \text{erf}(t) = 1$$</span></p>
<p>where "erf" is the error function. So we end up with
<span class="math-container">$$E(X) = \mu$$</span>
i.e. that the parameter <span class="math-container">$\mu$</span> is the mean of the distribution.</p>
<p><strong>VARIANCE</strong><br />
We have</p>
<p><span class="math-container">$$\text {Var}(X) = \int_{-\infty}^{\infty}(x-\mu)^2\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\}dx$$</span></p>
<p>Applying the same tricks as before we have</p>
<p><span class="math-container">$$\int_{-\infty}^{\infty}(x-\mu)^2\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\}dx = \int_{-\infty}^{\infty}x^2\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{x^2}{2\sigma^2}\right\}dx $$</span></p>
<p><span class="math-container">$$=\sigma \sqrt2\int_{-\infty}^{\infty}(\sigma \sqrt2x)^2\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{(\sigma \sqrt2x)^2}{2\sigma^2}\right\}dx = \sigma^2\frac{4}{\sqrt{\pi}}\int_{0}^{\infty}x^2e^{-x^2}dx$$</span></p>
<p>Define <span class="math-container">$t=x^2\Rightarrow x= \sqrt t$</span> and <span class="math-container">$dt = 2xdx = 2\sqrt tdx \Rightarrow dx = (2\sqrt t)^{-1}dt$</span>. Substituting</p>
<p><span class="math-container">$$V(X) = \sigma^2\frac{4}{\sqrt{\pi}}\int_{0}^{\infty}(\sqrt t)^2(2\sqrt t)^{-1}e^{-t}dt =
\sigma^2\frac{4}{\sqrt{\pi}}\frac 12 \int_{0}^{\infty}t^{\frac 32 -1}e^{-t}dt= \sigma^2\frac{4}{\sqrt{\pi}}\frac 12 \Gamma\left(\frac 32\right)$$</span></p>
<p><span class="math-container">$$\Rightarrow V(X) = \sigma^2\frac{4}{\sqrt{\pi}}\frac 12 \frac {\sqrt \pi}{2} = \sigma^2$$</span></p>
<p>where <span class="math-container">$\Gamma()$</span> is the Gamma function. So the parameter <span class="math-container">$\sigma$</span> is the square-root of the variance, i.e. the standard deviation.</p>
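Both conclusions can be confirmed by direct numerical quadrature (a sketch; the parameter values, grid width, and grid size are arbitrary):

```python
import numpy as np

mu, sigma = 2.0, 1.5

# Simple Riemann-sum quadrature over a wide grid (the tails are negligible).
x = np.linspace(mu - 12 * sigma, mu + 12 * sigma, 200_001)
dx = x[1] - x[0]
pdf = np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

mass = np.sum(pdf) * dx                 # total probability, should be 1
mean = np.sum(x * pdf) * dx             # should be mu
var  = np.sum((x - mu)**2 * pdf) * dx   # should be sigma^2

print(mass, mean, var)  # approximately 1, 2.0, 2.25
```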
| <p>If <span class="math-container">$x\mapsto f(x)$</span> is the density function of a random variable <span class="math-container">$X$</span> with expected value <span class="math-container">$0$</span> and variance <span class="math-container">$1$</span>, then <span class="math-container">$x\mapsto \frac1\sigma f\left(\frac{x-\mu}{\sigma}\right)$</span> is the density function of <span class="math-container">$\mu+\sigma X$</span>, and thus of a random variable with expected value <span class="math-container">$\mu$</span> and variance <span class="math-container">$\sigma^2$</span>.</p>
<p>That can be shown by thinking about the substitution <span class="math-container">$u = \dfrac{x-\mu}{\sigma}$</span> and <span class="math-container">$du=\dfrac{dx}\sigma$</span>.</p>
<p>Therefore the problem reduces to this: How do we show that
<span class="math-container">$$
x\mapsto \frac{1}{\sqrt{2\pi}}\exp\left(\frac{-x^2}{2}\right)
$$</span>
is the density of a distribution with expectation <span class="math-container">$0$</span> and variance <span class="math-container">$1$</span>?</p>
<p>If you know that the expectation of a distribution with density <span class="math-container">$f$</span> is <span class="math-container">$\int_{-\infty}^\infty xf(x)\,dx$</span>, then you know that you need to find
<span class="math-container">$$
\int_{-\infty}^\infty x \frac{1}{\sqrt{2\pi}}\exp\left(\frac{-x^2}{2}\right)\,dx.
$$</span></p>
<p>Since this is the integral of an odd function over an interval that is symmetric about <span class="math-container">$0$</span>, it must be equal to <span class="math-container">$0$</span> unless the positive and negative parts both diverge to <span class="math-container">$\infty$</span>. The positive part is a constant times
<span class="math-container">$$
\int_0^\infty x \exp\left(\frac{-x^2}{2}\right)\,dx = \int_{-\infty}^0 \exp(u)\,du = 1,
$$</span>
where <span class="math-container">$u=-x^2/2$</span> so <span class="math-container">$du=-x\,dx$</span> (the substitution reverses the limits of integration). Clearly this is finite, and the negative part can be treated the same way.</p>
<p>The variance is
<span class="math-container">\begin{align}
& \int_{-\infty}^\infty x^2 \frac{1}{\sqrt{2\pi}}\exp\left(\frac{-x^2}{2}\right)\,dx \\[8pt]
= {} & 2 \int_0^\infty x^2 \frac{1}{\sqrt{2\pi}}\exp\left(\frac{-x^2}{2}\right)\,dx \\[8pt]
= {} & \frac{2}{\sqrt{2\pi}} \int_0^\infty \Big(x\Big)\Big(x\exp\left(\frac{-x^2}{2}\right)\,dx\Big) \\[8pt]
= {} & \frac{2}{\sqrt{2\pi}} \int x\,dv \qquad \text{where } v=-\exp\left(\frac{-x^2}{2}\right) \\[8pt]
= {} & \frac{2}{\sqrt{2\pi}} \left( xv-\int v\,dx \right) \\[8pt]
= {} & \frac{2}{\sqrt{2\pi}} \left(\left[-x\exp\left(\frac{-x^2}{2}\right)\right]_0^\infty
-\int_0^\infty -\exp\left(\frac{-x^2}{2}\right) \,dx \right) \\[8pt]
= {} & \frac{2}{\sqrt{2\pi}} \int_0^\infty \exp\left(\frac{-x^2}{2}\right) \,dx.
\end{align}</span></p>
<p>You know that this is equal to <span class="math-container">$1$</span> if you know how the normalizing constant in the density function was found.</p>
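<p>Both integrals are also easy to check numerically. The following is a rough midpoint-rule sketch (the truncation at ±10 and the step size are arbitrary choices; the tails beyond ±10 contribute a negligible amount):</p>

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Midpoint rule on [-10, 10]; the tails beyond +-10 are negligible.
n = 20000
h = 20.0 / n
xs = [-10 + h * (k + 0.5) for k in range(n)]

total = h * sum(phi(x) for x in xs)        # normalization, ~1
mean = h * sum(x * phi(x) for x in xs)     # expected value, ~0
var = h * sum(x * x * phi(x) for x in xs)  # variance, ~1

print(total, mean, var)
```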
|
logic | <p>I keep facing <a href="https://en.wikipedia.org/wiki/K%C5%91nig%27s_lemma" rel="nofollow noreferrer">König's lemma</a> "Every finitely branching infinite tree over <span class="math-container">$\mathbb{N}$</span> has infinite branch". Why it is not taken "obvious" but needs a careful proof?</p>
<p>It seems somewhat obvious, but I guess I overlook something.</p>
| <p>If you mean take König's Lemma as almost an axiom of set theory, then you are on sketchy ground. While it is an intuitively obvious statement, note that at first glance so, too, is the following statement:</p>
<blockquote>
<p>If <span class="math-container">$T$</span> is a tree of height <span class="math-container">$\omega_1$</span> in which all levels are countable, then there is a cofinal branch through <span class="math-container">$T$</span>.</p>
</blockquote>
<p>Unfortunately, the above is <em>false</em>, as the existence of <a href="https://en.wikipedia.org/wiki/Aronszajn_tree" rel="nofollow noreferrer">Aronszajn trees</a> immediately implies.</p>
<p>Lest you think that the heart of the difference between <span class="math-container">$\omega$</span> and <span class="math-container">$\omega_1$</span> is that the former is closed under the (cardinal) exponential function <span class="math-container">$\lambda \mapsto 2^\lambda$</span> while the latter is not, note that even if we assume that <span class="math-container">$\kappa$</span> is an <a href="https://en.wikipedia.org/wiki/Inaccessible_cardinal" rel="nofollow noreferrer">inaccessible cardinal</a>, the statement</p>
<blockquote>
<p>If <span class="math-container">$T$</span> is a tree of height <span class="math-container">$\kappa$</span> in which all levels have cardinality <span class="math-container">$< \kappa$</span>, then there is a cofinal branch through <span class="math-container">$T$</span></p>
</blockquote>
<p>is only true if, in addition, <span class="math-container">$\kappa$</span> is <a href="https://en.wikipedia.org/wiki/Weakly_compact_cardinal" rel="nofollow noreferrer">weakly compact</a>.</p>
<p>We should be very careful when we ascribe the adjective <em>obvious</em>. We can prove König's Lemma carefully from the usual axioms, and so there is no harm in requiring that we do so.</p>
<p>Note, however, that a <a href="https://en.wikipedia.org/wiki/Reverse_mathematics#Weak_K.C3.B6nig.27s_lemma_WKL0" rel="nofollow noreferrer">weak form of König's Lemma</a> <em>is</em> taken as a basic axiom in certain important subsystems of second-order arithmetic, and many mathematically interesting theorems have been shown to be equivalent to this axiom over even weaker subsystems.</p>
| <p>Two of the answers indicate how naive analogues of the result fail at larger cardinals. One test of obviousness of a result is how malleable it is, and these failures indicate that there are subtle obstructions.</p>
<p>An additional issue that the previous answers have not addressed is that even if a finite branching infinite tree is very easy to describe, this does not mean that any of its branches are. For an example of this issue see <a href="https://math.stackexchange.com/a/307335/462">this answer</a>, and for more refined versions, see the section on "computability aspects" in the <a href="https://en.wikipedia.org/wiki/K%C3%B6nig%27s_lemma#Computability_aspects" rel="nofollow noreferrer">Wikipedia entry</a> for König's lemma.</p>
<p>Informally, we tend to say that a result stating the existence of an object is obvious, if the object can be easily computed from the data. In the present case, we can make this informal claim precise in the context of computability theory, and we have precise results showing that it fails. The harder it is to exhibit a branch, the less convincing our claim of obviousness. (And we can go further and combine both tests, seeing how much harder it is to exhibit a branch when we relax the assumption that the tree is finite branching: There are computable trees over <span class="math-container">$\mathbb N$</span> with branches but without <a href="https://en.wikipedia.org/wiki/Hyperarithmetical_theory" rel="nofollow noreferrer">hyperarithmetical</a> branches.)</p>
<p>Yet a third test of obviousness lies in the ("direct") consequences of a principle, though this test perhaps carries less weight than the others. König's lemma for trees over <span class="math-container">$\mathbb N$</span> is equivalent over the weak theory <span class="math-container">$\mathsf{RCA}_0$</span> to the infinite Ramsey theorem for triples: Fix a finite set of colors, and assign a color to each <span class="math-container">$3$</span>-sized set of natural numbers. Then there is an infinite subset of <span class="math-container">$\mathbb N$</span> all of whose <span class="math-container">$3$</span>-sized subsets have the same color. (For <span class="math-container">$\mathsf{RCA}_0$</span> and reverse mathematics, see <a href="https://en.wikipedia.org/wiki/Reverse_mathematics" rel="nofollow noreferrer">here</a>.) Again, it is "computationally demanding" to exhibit such an infinite homogeneous set, even in cases where the coloring is very easy to describe. For a direct proof of the equivalence of these principles, see <a href="https://mathoverflow.net/questions/128618/deriving-konigs-lemma-directly-from-infinite-ramseys-theorem-for-triples">here</a>. Ramsey's theorem for triples trivially implies the corresponding result for pairs. In his book <strong>Linear orderings</strong>, Joseph Rosenstein presents the following nice example of a simple coloring of the <span class="math-container">$2$</span>-sized sets without simple infinite homogeneous sets: Linearly order the <span class="math-container">$2$</span>-sized sets of natural numbers by saying that <span class="math-container">$\{a,b\}<\{c,d\}$</span> iff <span class="math-container">$a+b<c+d$</span> or (<span class="math-container">$a+b=c+d$</span> but <span class="math-container">$\min\{a,b\}<\min\{c,d\}$</span>). This order begins <span class="math-container">$$\{0,1\}<\{0,2\}<\{0,3\}<\{1,2\}<\{0,4\}<\{1,3\}<\{0,5\}<\dots$$</span>
Now color red those pairs <span class="math-container">$\{a,b\}$</span> that appear in even places in this enumeration, and color blue those that appear in odd places.</p>
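<p>The beginning of Rosenstein's enumeration is easy to reproduce (a small sketch; the cutoff of the pair entries at 20 is an arbitrary choice):</p>

```python
from itertools import combinations

# Enumerate 2-sized subsets of {0, ..., 19} in Rosenstein's linear order:
# {a,b} < {c,d}  iff  a+b < c+d, or (a+b = c+d and min(a,b) < min(c,d)).
pairs = sorted(combinations(range(20), 2), key=lambda p: (p[0] + p[1], min(p)))

print(pairs[:7])  # the order begins {0,1}, {0,2}, {0,3}, {1,2}, {0,4}, {1,3}, {0,5}

# Color pairs by the parity of their position in the enumeration.
color = {p: pos % 2 for pos, p in enumerate(pairs)}
```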
|
linear-algebra | <p>I have two square matrices: <span class="math-container">$A$</span> and <span class="math-container">$B$</span>. <span class="math-container">$A^{-1}$</span> is known and I want to calculate <span class="math-container">$(A+B)^{-1}$</span>. Are there theorems that help with calculating the inverse of the sum of matrices? In general case <span class="math-container">$B^{-1}$</span> is not known, but if it is necessary then it can be assumed that <span class="math-container">$B^{-1}$</span> is also known.</p>
| <p>In general, <span class="math-container">$A+B$</span> need not be invertible, even when <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are. But one might ask whether you can have a formula under the additional assumption that <span class="math-container">$A+B$</span> <em>is</em> invertible.</p>
<p>As noted by Adrián Barquero, there is <a href="http://www.jstor.org/stable/2690437" rel="noreferrer">a paper by Ken Miller</a> published in the <em>Mathematics Magazine</em> in 1981 that addresses this.</p>
<p>He proves the following:</p>
<p><strong>Lemma.</strong> If <span class="math-container">$A$</span> and <span class="math-container">$A+B$</span> are invertible, and <span class="math-container">$B$</span> has rank <span class="math-container">$1$</span>, then let <span class="math-container">$g=\operatorname{trace}(BA^{-1})$</span>. Then <span class="math-container">$g\neq -1$</span> and
<span class="math-container">$$(A+B)^{-1} = A^{-1} - \frac{1}{1+g}A^{-1}BA^{-1}.$$</span></p>
<p>From this lemma, we can take a general <span class="math-container">$A+B$</span> that is invertible and write it as <span class="math-container">$A+B = A + B_1+B_2+\cdots+B_r$</span>, where <span class="math-container">$B_i$</span> each have rank <span class="math-container">$1$</span> and such that each <span class="math-container">$A+B_1+\cdots+B_k$</span> is invertible (such a decomposition always exists if <span class="math-container">$A+B$</span> is invertible and <span class="math-container">$\mathrm{rank}(B)=r$</span>). Then you get:</p>
<p><strong>Theorem.</strong> Let <span class="math-container">$A$</span> and <span class="math-container">$A+B$</span> be nonsingular matrices, and let <span class="math-container">$B$</span> have rank <span class="math-container">$r\gt 0$</span>. Let <span class="math-container">$B=B_1+\cdots+B_r$</span>, where each <span class="math-container">$B_i$</span> has rank <span class="math-container">$1$</span>, and each <span class="math-container">$C_{k+1} = A+B_1+\cdots+B_k$</span> is nonsingular. Setting <span class="math-container">$C_1 = A$</span>, then
<span class="math-container">$$C_{k+1}^{-1} = C_{k}^{-1} - g_kC_k^{-1}B_kC_k^{-1}$$</span>
where <span class="math-container">$g_k = \frac{1}{1 + \operatorname{trace}(C_k^{-1}B_k)}$</span>. In particular,
<span class="math-container">$$(A+B)^{-1} = C_r^{-1} - g_rC_r^{-1}B_rC_r^{-1}.$$</span></p>
<p>(If the rank of <span class="math-container">$B$</span> is <span class="math-container">$0$</span>, then <span class="math-container">$B=0$</span>, so <span class="math-container">$(A+B)^{-1}=A^{-1}$</span>).</p>
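<p>Miller's rank-1 lemma can be sanity-checked numerically. Below is a small self-contained sketch for 2×2 matrices (the particular matrices A and rank-one B are arbitrary choices, not taken from the paper):</p>

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[ X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det,  X[0][0] / det]]

def trace(X):
    return X[0][0] + X[1][1]

A = [[2.0, 1.0], [1.0, 3.0]]
B = [[3.0, 1.0], [6.0, 2.0]]  # rank 1: the rows are proportional

Ainv = inv2(A)
g = trace(matmul(B, Ainv))
assert g != -1

# Lemma: (A+B)^{-1} = A^{-1} - 1/(1+g) * A^{-1} B A^{-1}
corr = matmul(Ainv, matmul(B, Ainv))
lhs = inv2([[A[i][j] + B[i][j] for j in range(2)] for i in range(2)])
rhs = [[Ainv[i][j] - corr[i][j] / (1 + g) for j in range(2)] for i in range(2)]

assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```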
| <p>It is shown in <a href="http://dspace.library.cornell.edu/bitstream/1813/32750/1/BU-647-M.version2.pdf" rel="noreferrer">On Deriving the Inverse of a Sum of Matrices</a> that </p>
<p><span class="math-container">$(A+B)^{-1}=A^{-1}-A^{-1}B(A+B)^{-1}$</span>.</p>
<p>This equation cannot be used to calculate <span class="math-container">$(A+B)^{-1}$</span>, but it is useful for perturbation analysis where <span class="math-container">$B$</span> is a perturbation of <span class="math-container">$A$</span>. There are several other variations of the above form (see equations (22)-(26) in this paper). </p>
<p>This result is good because it only requires <span class="math-container">$A$</span> and <span class="math-container">$A+B$</span> to be nonsingular. As a comparison, the SMW identity or Ken Miller's paper (as mentioned in the other answers) requires some nonsingularity or rank conditions of <span class="math-container">$B$</span>.</p>
|
number-theory | <p>Is it possible to find all integers $m>0$ and $n>0$ such that $n+1\mid m^2+1$ and $m+1\,|\,n^2+1$ ?</p>
<p>I succeed to prove there is an infinite number of solutions, but I cannot progress anymore.</p>
<p>Thanks !</p>
| <p>Some further results along the lines of thought of @individ:</p>
<p>Suppose $p$ and $q$ are solutions to the Pell equation:
$$d\cdot p^2+q^2=1$$
Then,
\begin{align}
m &= a\cdot p^2+b\cdot pq +c\cdot q^2\\
n &= a\cdot p^2-b\cdot pq +c\cdot q^2
\end{align}
are solutions if $(a,b,c,d)$ are: (these are the only sets that I found using the computer)
\begin{align}
(10,4,-2,-15)\\
(39,12,-3,-65)\\
\end{align}
Sadly, the solutions are negative.</p>
<p>Here are some examples:
\begin{align}
(m,n) &= (-6,-38) &(a,b,c,d,p,q)&=(10,4,-2,-15,1,4)\\
(m,n) &= (-290,-2274) &(a,b,c,d,p,q)&=(10,4,-2,-15,8,31)\\
(m,n) &= (-15171,-64707) &(a,b,c,d,p,q)&=(39,12,-3,-65,16,129)\\
(m,n) &= (-1009692291,-4306907523) &(a,b,c,d,p,q)&=(39,12,-3,-65,4128,33281)\\
(m,n) &= (-67207138138563,-286676378361411) &(a,b,c,d,p,q)&=(39,12,-3,-65,1065008,8586369)\\
\end{align}
P.S. I am also very curious how @individ thought of this parametrization. </p>
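<p>The listed pairs can be checked with a few lines of code (a quick sketch; it verifies both the divisibility conditions of the question and the quadratic parametrization above):</p>

```python
def is_solution(m, n):
    """Check n+1 | m^2+1 and m+1 | n^2+1."""
    return (m * m + 1) % (n + 1) == 0 and (n * n + 1) % (m + 1) == 0

def param(a, b, c, p, q):
    """m, n from the quadratic parametrization above."""
    m = a * p * p + b * p * q + c * q * q
    n = a * p * p - b * p * q + c * q * q
    return m, n

examples = [
    ((10, 4, -2, 1, 4), (-6, -38)),
    ((10, 4, -2, 8, 31), (-290, -2274)),
    ((39, 12, -3, 16, 129), (-15171, -64707)),
]

for (a, b, c, p, q), (m, n) in examples:
    assert param(a, b, c, p, q) == (m, n)
    assert is_solution(m, n)
```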
| <p>It is convenient to introduce <span class="math-container">$x:=m+1$</span> and <span class="math-container">$y:=n+1$</span>. Then
<span class="math-container">$$
\begin{cases}
x\ |\ y^2 - 2y + 2,\\
y\ |\ x^2 - 2x + 2.
\end{cases}
$$</span>
It follows that <span class="math-container">$\gcd(x,y)\mid 2$</span>, and thus there are two cases to consider:</p>
<hr />
<p><strong>Case <span class="math-container">$\gcd(x,y)=1$</span>.</strong> We have
<span class="math-container">$$xy\ |\ x^2 + y^2 - 2x - 2y + 2.$$</span></p>
<p>This is solved via <a href="https://en.wikipedia.org/wiki/Vieta_jumping" rel="nofollow noreferrer">Vieta jumping</a>. Namely, one can show that <span class="math-container">$\frac{x^2 + y^2 - 2x - 2y + 2}{xy}\in\{ 0, -4, 8 \}$</span>. The value <span class="math-container">$0$</span> corresponds to an isolated solution <span class="math-container">$x=y=1$</span>, while each of the other two produces an infinite series of solutions, where <span class="math-container">$x,y$</span> represent consecutive terms of the following linear recurrence sequences:
<span class="math-container">$$1,\ -1,\ 5,\ -17,\ 65,\ \dots \qquad(s_k=-4s_{k-1}-s_{k-2}+2)$$</span>
and
<span class="math-container">$$-1, -1, -5, -37, -289,\ \dots, \qquad(s_k=8s_{k-1}-s_{k-2}+2).$$</span></p>
<p>However, there are no entirely positive solutions here.</p>
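<p>Both recurrence sequences and the claimed values of the ratio can be verified directly (a quick sketch; the number of terms generated is an arbitrary choice):</p>

```python
def ratio(x, y):
    """(x^2 + y^2 - 2x - 2y + 2) / (xy), checked to be an integer."""
    num = x * x + y * y - 2 * x - 2 * y + 2
    assert num % (x * y) == 0
    return num // (x * y)

def seq(s0, s1, k, terms):
    """s_j = k*s_{j-1} - s_{j-2} + 2, starting from s0, s1."""
    s = [s0, s1]
    while len(s) < terms:
        s.append(k * s[-1] - s[-2] + 2)
    return s

a = seq(1, -1, -4, 6)   # 1, -1, 5, -17, 65, ...
b = seq(-1, -1, 8, 6)   # -1, -1, -5, -37, -289, ...

assert a[:5] == [1, -1, 5, -17, 65]
assert b[:5] == [-1, -1, -5, -37, -289]
assert all(ratio(x, y) == -4 for x, y in zip(a, a[1:]))
assert all(ratio(x, y) == 8 for x, y in zip(b, b[1:]))
```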
<hr />
<p><strong>Case <span class="math-container">$\gcd(x,y)=2$</span>.</strong> Letting <span class="math-container">$x:=2u$</span> and <span class="math-container">$y:=2v$</span> with <span class="math-container">$\gcd(u,v)=1$</span>, we similarly get
<span class="math-container">$$
\begin{cases}
u\ |\ 2v^2 - 2v + 1,\\
v\ |\ 2u^2 - 2u + 1.
\end{cases}
$$</span>
Unfortunately, Vieta jumping is not applicable here. Still, if we fix
<span class="math-container">$$k:=\frac{2u^2 + 2v^2 - 2u - 2v + 1}{uv},$$</span>
then the problem reduces to the following Pell-Fermat equation:
<span class="math-container">$$((k^2-16)v + 2k+8)^2 - (k^2-16)(4u - kv - 2)^2 = 8k(k+4).$$</span></p>
<p><strong>Example.</strong> Value <span class="math-container">$k=9$</span> gives
<span class="math-container">$$z^2 - 65t^2 = 936.$$</span>
with <span class="math-container">$z:=65v + 26$</span> and <span class="math-container">$t:=4u - 9v - 2$</span>. It has two series of integer solutions in <span class="math-container">$z,t$</span>:
<span class="math-container">$$\begin{bmatrix} z_\ell\\ t_\ell\end{bmatrix} = \begin{bmatrix} 129 & -1040\\ -16 & 129\end{bmatrix}^\ell \begin{bmatrix} z_0\\ t_0\end{bmatrix}$$</span>
with initial values <span class="math-container">$(z_0,t_0) \in \{(39,-3),\ (-1911,237)\}$</span>.</p>
<p>Not every value of <span class="math-container">$(z_\ell, t_\ell)$</span> corresponds to integer <span class="math-container">$u,v$</span>. Since the corresponding matrix has determinant 260, we need to consider the sequences <span class="math-container">$(z_\ell, t_\ell)$</span> modulo 260. It can be verified that only the first sequence produces integer <span class="math-container">$u,v$</span>, and only for odd <span class="math-container">$\ell$</span>, that is
<span class="math-container">\begin{split}
\begin{bmatrix} v_s\\ u_s\end{bmatrix} &= \begin{bmatrix} 65 & 0\\ -9 & 4\end{bmatrix}^{-1}\left(\begin{bmatrix} 129 & -1040\\ -16 & 129\end{bmatrix}^{2s+1} \begin{bmatrix} 39\\ -3\end{bmatrix} + \begin{bmatrix} -26\\ 2\end{bmatrix}\right)\\
&=\begin{bmatrix} 70433 & -16512\\ 16512 & -3871\end{bmatrix}^s \begin{bmatrix} 627/5 \\ 147/5\end{bmatrix} - \begin{bmatrix} 2/5 \\ 2/5\end{bmatrix}
\end{split}</span>
or in a recurrence form:
<span class="math-container">\begin{cases}
v_s = 70433\cdot v_{s-1} -16512\cdot u_{s-1} + 21568,\\
u_s = 16512\cdot v_{s-1} -3871\cdot u_{s-1} + 5056,
\end{cases}</span>
with the initial value <span class="math-container">$(v_0,u_0) = (125, 29)$</span>. The next couple of values is <span class="math-container">$(8346845, 1956797)$</span> and <span class="math-container">$(555582723389, 130248348509)$</span>, and it can be seen that the sequence grows quite fast.</p>
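<p>A few lines suffice to confirm that the recurrence really produces solutions of the original problem via m = 2u - 1 and n = 2v - 1 (a quick sketch; the number of iterations is an arbitrary choice):</p>

```python
v, u = 125, 29
for _ in range(4):
    m, n = 2 * u - 1, 2 * v - 1
    # the conditions of the original question
    assert (m * m + 1) % (n + 1) == 0
    assert (n * n + 1) % (m + 1) == 0
    v, u = (70433 * v - 16512 * u + 21568,
            16512 * v - 3871 * u + 5056)
print(m, n)  # last verified pair
```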
<p><strong>UPDATE.</strong> Series of positive solutions exist for
<span class="math-container">$$k\in \{9, 13, 85, 97, 145, 153, 265, 289, 369, \dots\}.$$</span>
Most likely, this set is infinite, but I do not know how to prove this.</p>
|
probability | <p><span class="math-container">$\newcommand{\F}{\mathcal{F}} \newcommand{\powset}[1]{\mathcal{P}(#1)}$</span>
I am reading lecture notes which contradict my understanding of random variables. Suppose we have a probability space <span class="math-container">$(\Omega, \mathcal{F}, Pr)$</span>, where </p>
<ul>
<li><p><span class="math-container">$\Omega$</span> is the set of outcomes</p></li>
<li><p><span class="math-container">$\F \subseteq \powset{\Omega}$</span> is the collection of events, a <span class="math-container">$\sigma$</span>-algebra</p></li>
<li><p><span class="math-container">$\Pr:\Omega\to[0,1]$</span> is the mapping outcomes to their probabilities.</p></li>
</ul>
<p>If we take the standard definition of a random variable <span class="math-container">$X$</span>, it is actually a function from the sample space to real values, i.e. <span class="math-container">$X:\Omega \to \mathbb{R}$</span>.</p>
<p>What now confuses me is the precise definition of the term <em>support</em>. </p>
<p><a href="https://en.wikipedia.org/wiki/Support_%28mathematics%29" rel="noreferrer">According to Wikipedia</a>:</p>
<blockquote>
<p>the support of a function is the set of points where the function is
not zero valued.</p>
</blockquote>
<p>Now, applying this definition to our random variable <span class="math-container">$X$</span>, these <a href="http://www.math.fsu.edu/~paris/Pexam/3-Random%20Variables.pdf" rel="noreferrer">lectures notes</a> say:</p>
<blockquote>
<p>Random Variables – A random variable is a real valued function defined
on the sample space of an experiment. Associated with each random
variable is a probability density function (pdf) for the random
variable. The sample space is also called the support of a random
variable.</p>
</blockquote>
<p>I am not entirely convinced with the line <em>the sample space is also called the support of a random variable</em>. </p>
<p>Why would <span class="math-container">$\Omega$</span> be the support of <span class="math-container">$X$</span>? What if the random variable <span class="math-container">$X$</span> so happened to map some element <span class="math-container">$\omega \in \Omega$</span> to the real number <span class="math-container">$0$</span>, then that element would not be in the support?</p>
<p>What is even more confusing is, when we talk about support, do we mean that of <span class="math-container">$X$</span> or that of the distribution function <span class="math-container">$\Pr$</span>? </p>
<p><a href="https://math.stackexchange.com/questions/416035/support-vs-range-of-a-random-variable">This answer says</a> that:</p>
<blockquote>
<p>It is more accurate to speak of the support of the distribution than
that of the support of the random variable.</p>
</blockquote>
<p>Do we interpret the <em>support</em> to be</p>
<ul>
<li>the set of outcomes in <span class="math-container">$\Omega$</span> which have a non-zero probability, </li>
<li>the set of values that <span class="math-container">$X$</span> can take with non-zero probability?</li>
</ul>
<p>I think being precise is important, although my literature does not seem very rigorous.</p>
| <blockquote>
<p>I am not entirely convinced with the line <em>the sample space is also called the support of a random variable</em> </p>
</blockquote>
<p>That looks quite wrong to me.</p>
<blockquote>
<p>What is even more confusing is, when we talk about support, do we mean that of $X$ or that of the distribution function $Pr$?</p>
</blockquote>
<p>In rather informal terms, the "support" of a random variable $X$ is defined as the support (in the function sense) of the density function $f_X(x)$. </p>
<p>I say, in rather informal terms, because the density function is a quite intuitive and practical concept for dealing with probabilities, but no so much when speaking of probability in general and formal terms. For one thing, it's not a proper function for "discrete distributions" (again, a practical but loose concept). </p>
<p>In more formal/strict terms, the comment of Stefan fits the bill.</p>
<pre><code>Do we interpret the support to be
- the set of outcomes in Ω which have a non-zero probability,
- the set of values that X can take with non-zero probability?
</code></pre>
<p>Neither, actually. Consider a random variable that has a uniform density in $[0,1]$, with $\Omega = \mathbb{R}$.
Then the support is the full interval $[0,1]$ - which is a subset of $\Omega$. But, then, of course, say $x=1/2$ belongs to the support. But the probability that $X$ takes this value is zero.</p>
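<p>The uniform example can be made concrete via its CDF (a small sketch; <code>cdf</code> below is the standard CDF of the uniform distribution on <span class="math-container">$[0,1]$</span>):</p>

```python
def cdf(x):
    """CDF of the uniform distribution on [0, 1]."""
    return min(max(x, 0.0), 1.0)

def prob_interval(a, b):
    """P(a < X <= b)."""
    return cdf(b) - cdf(a)

# The single point 1/2 has probability zero...
assert prob_interval(0.5, 0.5) == 0.0

# ...yet every ball around it has positive probability, so 1/2 is in the support.
assert all(prob_interval(0.5 - r, 0.5 + r) > 0 for r in (0.1, 0.01, 1e-9))

# A point outside [0, 1] has a ball of probability zero around it.
assert prob_interval(2.0 - 0.5, 2.0 + 0.5) == 0.0
```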
| <h1>TL;DR</h1>
<p>The support of a random variable <span class="math-container">$X$</span> can be defined as the smallest closed set <span class="math-container">$R_X \in \mathcal{B}$</span> such that its probability is 1, as Did pointed out in their comment. An alternative definition is the one given by Stefan Hansen in his comment: the set of points in <span class="math-container">$\mathbb{R}$</span> around which any ball (i.e. open interval in 1-D) with nonzero radius has a nonzero probability. (See the section "Support of a random variable" below for a proof of the equivalence of these definitions.)</p>
<p>Intuitively, if any neighbourhood around a point, no matter how small, has a nonzero probability, then that point is in the support, and vice-versa.</p>
<hr />
<br>
<p>I'll start from the beginning to make sure we're using the same definitions.</p>
<h1>Preliminary definitions</h1>
<h2>Probability space</h2>
<p><span class="math-container">$\newcommand{\A}{\mathcal{A}} \newcommand{\powset}[1]{\mathcal{P}(#1)} \newcommand{\R}{\mathbb{R}} \newcommand{\deq}{\stackrel{\scriptsize def}{=}} \newcommand{\N}{\mathbb{N}}$</span>
Let <span class="math-container">$(\Omega, \A, \Pr)$</span> be a probability space, defined as follows:</p>
<ul>
<li><p><span class="math-container">$\Omega$</span> is the set of <strong>outcomes</strong></p>
</li>
<li><p><span class="math-container">$\A \subseteq \powset{\Omega} $</span> is the collection of <strong>events</strong>, a <a href="https://en.wikipedia.org/wiki/Sigma-algebra#Definition" rel="nofollow noreferrer"><span class="math-container">$\sigma$</span>-algebra</a></p>
</li>
<li><p><span class="math-container">$\Pr\colon\ \mathbf{\A}\to[0,1]$</span> is the <strong>mapping of events to their probabilities</strong>.
It has to satisfy some properties:</p>
<ul>
<li><span class="math-container">$\Pr(\Omega) = 1$</span> (we know <span class="math-container">$\Omega \in \A$</span> since <span class="math-container">$\A$</span> is a <span class="math-container">$\sigma$</span>-algebra of <span class="math-container">$\Omega$</span>)</li>
<li>has to be <a href="https://en.wikipedia.org/wiki/Sigma_additivity#%CF%83-additive_set_functions" rel="nofollow noreferrer">countably additive</a></li>
</ul>
</li>
</ul>
<br>
<h2>Random variable</h2>
<p>A random variable <span class="math-container">$X$</span> is defined as a map <span class="math-container">$X\colon\; \Omega \to \R$</span> such that, for any <span class="math-container">$x\in\R$</span>, the set <span class="math-container">$\{\omega \in \Omega \mid X(\omega) \le x\}$</span> is an element of <span class="math-container">$\A$</span>, ergo, an element of <span class="math-container">$\Pr$</span>'s domain to which a probability can be assigned.</p>
<p>We can think of <span class="math-container">$X$</span> as a "realisation" of <span class="math-container">$\Omega$</span>, in that it assigns a real number to each outcome in <span class="math-container">$\Omega$</span>. Intuitively, this condition means that we are assigning numbers to outcomes in an order such that the set of outcomes whose assigned number is less than a certain threshold (think of cutting the real number line at the threshold and forming the set of outcomes whose number falls
on or to the left of that) is always one of the events in <span class="math-container">$\A$</span>, meaning we can assign it a probability.</p>
<p>This is necessary in order to define the following concepts.</p>
<h3>Cumulative Distribution Function of a random variable</h3>
<p>The probability distribution function (or <strong>cumulative distribution function</strong>) of a random variable <span class="math-container">$X$</span> is defined as the map
<span class="math-container">$$
\begin{align}
F_X \colon \quad \R \ &\to\ [0, 1] \\
x\ &\mapsto\ \Pr(X \le x) \deq \Pr(X^{-1}(I_x))
\end{align}
$$</span></p>
<p>where <span class="math-container">$I_x \deq (-\infty, x]$</span>. (NB: <span class="math-container">$X^{-1}$</span> denotes preimage, not inverse; <span class="math-container">$X$</span> might well be non-injective.)</p>
<p>For notational clarity, define the following:</p>
<ul>
<li><span class="math-container">$\Omega_{\le x} \deq X^{-1}((-\infty, x]) = X^{-1}(I_x)$</span></li>
<li><span class="math-container">$\Omega_{> x} \deq X^{-1}((x, +\infty)) = X^{-1}(\overline{I_x}) = \overline{\Omega_{\le x}}$</span> where <span class="math-container">$\overline{\phantom{\Omega}}$</span> denotes set complement (in <span class="math-container">$\R$</span> or <span class="math-container">$\Omega$</span>, depending on the context)</li>
<li><span class="math-container">$\Omega_{< x} \deq X^{-1}((-\infty, x)) = \displaystyle\bigcup_{n\in\N} X^{-1} \left(I_{x-\frac{1}{n}}\right)$</span></li>
<li><span class="math-container">$\Omega_{=x} \deq X^{-1}(x) = \Omega_{\le x} \setminus \Omega_{< x}$</span></li>
</ul>
<p>we know all of these are still in <span class="math-container">$\A$</span> since <span class="math-container">$\A$</span> is a <span class="math-container">$\sigma$</span>-algebra.</p>
<p>We can see that</p>
<ul>
<li><span class="math-container">$\Pr(X > x) \deq \Pr(\Omega_{>x}) = \Pr(\overline{\Omega_{\le x}}) = 1 - \Pr(\Omega_{\le x}) = 1 - F_X(x)$</span></li>
</ul>
<br>
<ul>
<li><span class="math-container">$\Pr(X < x) \deq \Pr(\Omega_{<x}) = \Pr\left(\displaystyle\bigcup_{n\in\N} X^{-1} \left(I_{x-\frac{1}{n}}\right)\right)$</span> <span class="math-container">$= \lim\limits_{n \to \infty} \Pr(X \le x - \frac{1}{n}) = \lim\limits_{n \to \infty} F_X(x - \frac{1}{n}) = \lim\limits_{t \to x^-} F_X(t) \deq F_X(x^-)$</span>
<br><br>
since <span class="math-container">$X^{-1} \left(I_{x-\frac{1}{n}}\right) \subseteq X^{-1} \left(I_{x-\frac{1}{n+1}}\right)$</span> for all <span class="math-container">$n\in\N$</span>.</li>
</ul>
<br>
<ul>
<li><span class="math-container">$\Pr(X = x) \deq \Pr(\Omega_{=x}) = \Pr(\Omega_{\le x} \setminus \Omega_{<x})= \Pr(\Omega_{\le x}) - \Pr(\Omega_{<x}) = F_X(x) - F_X(x^-)$</span></li>
</ul>
<p>and so forth.</p>
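<p>These identities are easy to illustrate on a small finite probability space (a sketch; the outcomes, the measure and the map <span class="math-container">$X$</span> below are arbitrary choices):</p>

```python
from fractions import Fraction

# A finite probability space: outcomes with their probabilities.
pr = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}

# A random variable X: Omega -> R (values kept rational for exactness).
X = {"a": 0, "b": 1, "c": 1}

def F(x):
    """F_X(x) = Pr(X <= x)."""
    return sum(p for w, p in pr.items() if X[w] <= x)

def F_minus(x):
    """F_X(x^-) = Pr(X < x)."""
    return sum(p for w, p in pr.items() if X[w] < x)

# Pr(X = x) = F_X(x) - F_X(x^-)
assert F(1) - F_minus(1) == Fraction(1, 2)  # Pr(X = 1) = 1/4 + 1/4
assert F(0) - F_minus(0) == Fraction(1, 2)  # Pr(X = 0) = 1/2
assert F(2) - F_minus(2) == 0               # 2 is taken by no outcome
```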
<p>Note that the limit that defines <span class="math-container">$F(x^-)$</span> always exists because <span class="math-container">$F_X$</span> is nondecreasing (since if <span class="math-container">$x< y$</span>, then <span class="math-container">$\Omega_{\le x} \subseteq \Omega_{\le y}$</span> and <span class="math-container">$\Pr$</span> is <span class="math-container">$\sigma$</span>-additive) and bounded above (by <span class="math-container">$1$</span>), so the monotone convergence theorem guarantees that the images by <span class="math-container">$F_X$</span> by of <em>any</em> nondecreasing sequence approaching <span class="math-container">$x$</span> from the left will also converge, and thus the continuous limit <span class="math-container">$\lim_{t \to x^-} F_X(t)$</span> exists.</p>
<br>
<h2>Probability measure on <span class="math-container">$\R$</span> by <span class="math-container">$X$</span></h2>
<p>The mapping defined by <span class="math-container">$X$</span> is sufficient to uniquely define a probability measure on <span class="math-container">$\R$</span>; that is, a map
<span class="math-container">$$
\begin{align}
P_X \colon \quad \mathcal{B} \subset \powset{\R} \ &\to \ [0, 1]\\
A \ &\mapsto \ \Pr(X \in A) \deq \Pr(X^{-1}(A))
\end{align}
$$</span>
that assigns to any set <span class="math-container">$A \in \mathcal{B}$</span> the probability of the corresponding event in <span class="math-container">$\A$</span>.</p>
<p>Here <span class="math-container">$\mathcal{B}$</span> is the <a href="https://en.wikipedia.org/wiki/Borel_set" rel="nofollow noreferrer">Borel <span class="math-container">$\sigma$</span>-algebra</a> in <span class="math-container">$\R$</span>, which is, loosely speaking, the smallest <span class="math-container">$\sigma$</span>-algebra containing all of the semi-intervals <span class="math-container">$(-\infty, x]$</span>. The reason why <span class="math-container">$P_X$</span> is defined only on those sets is because in our definition we only required <span class="math-container">$X^{-1}(A) \in \A$</span> to be true for the semi-intervals of the form <span class="math-container">$A = (-\infty, x]$</span>; thus <span class="math-container">$X^{-1}(A)$</span> is an element of <span class="math-container">$\A$</span> only when <span class="math-container">$A$</span> is "generated" by those semi-intervals, their complements, and countable unions/intersections thereof (according to the rules of a <span class="math-container">$\sigma$</span>-algebra).</p>
<br>
<hr />
<br>
<h1>Support of a random variable</h1>
<h2>Formal definition</h2>
<p>Formally, the <strong>support</strong> of <span class="math-container">$X$</span> can be defined as the smallest closed set <span class="math-container">$R_X \in \mathcal{B}$</span> such that <span class="math-container">$P_X(R_X) = 1$</span>, as Did pointed out in their comment.</p>
<p>An alternative but equivalent definition is the one given by Stefan Hansen in his comment:</p>
<blockquote>
<p>The support of a random variable <span class="math-container">$X$</span> is the set <span class="math-container">$\{x\in \R \mid P_X(B(x,r))>0, \text{ for all } r>0\}$</span> where <span class="math-container">$B(x,r)$</span> denotes the ball with center at <span class="math-container">$x$</span> and radius <span class="math-container">$r$</span>. In particular, the support is a subset of <span class="math-container">$\R$</span>.</p>
</blockquote>
<p>(This can be generalized as-is to random variables with values in <span class="math-container">$\R^n$</span>, but I'll stick to <span class="math-container">$\R$</span> as that's how I defined random variables.)
The equivalence can be proven as follows:</p>
<blockquote>
<p><strong>Proof</strong> <br>
Let <span class="math-container">$R_X$</span> be the smallest closed set in <span class="math-container">$\mathcal{B}$</span> such that <span class="math-container">$P_X(R_X) = 1$</span>.
Since <span class="math-container">$\R \setminus {R_X}$</span> is open, for every <span class="math-container">$x$</span> outside <span class="math-container">$R_X$</span> there exists a radius <span class="math-container">$r\in\R_{>0}$</span> such that the open ball <span class="math-container">$B(x, r)$</span> lies completely outside of <span class="math-container">$R_X$</span>.
That, in turn, implies that <span class="math-container">$P_X(B(x, r)) = 0$</span>—otherwise, if this were strictly positive, <span class="math-container">$P_X(R_X \cup B(x, r)) = P_X(R_X) + P_X(B(x, r)) > P_X(R_X) = 1$</span>, a contradiction.</p>
<p>Conversely, let <span class="math-container">$x \in \R$</span> and suppose <span class="math-container">$P_X(B(x, r)) = 0$</span> for some <span class="math-container">$r\in\R_{>0}$</span>. Then <span class="math-container">$B(x, r)$</span> lies completely outside <span class="math-container">$R_X$</span>, and, in particular, <span class="math-container">$x$</span> is not in <span class="math-container">$R_X$</span>. Otherwise <span class="math-container">$R_X \setminus B(x, r)$</span> would be a closed set smaller than <span class="math-container">$R_X$</span> satisfying <span class="math-container">$P_X(R_X \setminus B(x, r)) = 1$</span>.</p>
<p>This proves <span class="math-container">$\R \setminus R_X = \{x\in\R \mid \exists r \in \R_{>0}\quad P_X(B(x, r)) = 0\}$</span>.
Negating the predicate, one gets <span class="math-container">$R_X = \{x\in\R \mid \forall r \in \R_{>0}\quad P_X(B(x, r)) > 0\}$</span>.</p>
</blockquote>
<p>But more often, different definitions are given.</p>
<br>
<h2>Alternative definition for discrete random variables</h2>
<p>A discrete random variable can be defined as a random variable <span class="math-container">$X$</span> such that <span class="math-container">$X(\Omega)$</span> is countable (either finite or countably infinite). Then, for a discrete random variable the support can be defined as</p>
<p><span class="math-container">$$R_X \deq \{x\in\R \mid \Pr(X = x) > 0\}\,.$$</span></p>
<p>Note that <span class="math-container">$R_X \subseteq X(\Omega)$</span> and thus <span class="math-container">$R_X$</span> is countable. We can prove this by proving its contrapositive:</p>
<blockquote>
<p>Suppose <span class="math-container">$x \in \R$</span> and <span class="math-container">$x \notin X(\Omega)$</span>. We can distinguish three cases: either <span class="math-container">$x < y$</span> <span class="math-container">$\forall y \in X(\Omega)$</span>, or <span class="math-container">$x > y$</span> <span class="math-container">$\forall y \in X(\Omega)$</span>, or neither.</p>
<p>Suppose <span class="math-container">$x < y$</span> <span class="math-container">$\forall y \in X(\Omega)$</span>. Then <span class="math-container">$\Pr(X = x) \le \Pr(X \le x) = \Pr(X^{-1}(I_x)) = \Pr(\emptyset) = 0$</span>, since <span class="math-container">$\forall \omega\in\Omega\ X(\omega) > x$</span>. Ergo, <span class="math-container">$x\notin R_X$</span>.</p>
<p>The case in which <span class="math-container">$x > y$</span> <span class="math-container">$\forall y \in X(\Omega)$</span> is analogous.</p>
<p>Suppose now <span class="math-container">$\exists y_1, y_2 \in X(\Omega)$</span> such that <span class="math-container">$y_1 < x < y_2$</span>. Let <span class="math-container">$L = \{y\in X(\Omega) \mid y < x\}$</span>, which is nonempty and bounded above by <span class="math-container">$x$</span>, so <span class="math-container">$\sup L$</span> exists. Since <span class="math-container">$x \notin X(\Omega)$</span>, every value of <span class="math-container">$X$</span> below <span class="math-container">$x$</span> lies in <span class="math-container">$L$</span>, hence no value of <span class="math-container">$X$</span> falls in <span class="math-container">$(\sup L, x]$</span>, and <span class="math-container">$F_X$</span> is constant on <span class="math-container">$[\sup L, x]$</span>. Therefore <span class="math-container">$F_X(x^-) = F_X(x)$</span>, and <span class="math-container">$\Pr(X=x) = F_X(x) - F_X(x^-) = 0$</span>.</p>
</blockquote>
<br>
<h2>Alternative definition for continuous random variables</h2>
<p>Notice that for absolutely continuous random variables (that is, random variables that admit a density <span class="math-container">$f_X$</span>; their distribution function is then in particular continuous on all of <span class="math-container">$\R$</span>), <span class="math-container">$\Pr(X = x) = 0$</span> for all <span class="math-container">$x\in \R$</span>—since, by continuity, <span class="math-container">$F_X(x^-) = F_X(x)$</span>. But that doesn't mean that the outcomes in <span class="math-container">$X^{-1}(\{x\})$</span> are "impossible", informally speaking. Thus, in this case, the support is defined as</p>
<p><span class="math-container">$$ R_X = \mathrm{closure}\left(\{x \in \R \mid f_X(x) > 0\}\right)\,,$$</span></p>
<p>which intuitively can be justified as being the set of points around which we can make an arbitrarily small interval on which the integral of the PDF is strictly positive.</p>
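<p>As a quick numerical illustration of both definitions (a sketch of my own, not part of the original answer; the pmf and the density below are made-up examples):</p>

```python
import numpy as np

# Discrete case: the support is just {x : Pr(X = x) > 0}.
# Hypothetical pmf of a die that never shows 6:
pmf = {1: 0.2, 2: 0.2, 3: 0.2, 4: 0.2, 5: 0.2, 6: 0.0}
discrete_support = {x for x, p in pmf.items() if p > 0}

# Continuous case: approximate {x : f_X(x) > 0} on a grid for the
# Uniform(0, 1) density; the closure of this set is [0, 1].
f = lambda t: 1.0 * ((0 < t) & (t < 1))
grid = np.linspace(-1.0, 2.0, 3001)
positive = grid[f(grid) > 0]

print(sorted(discrete_support), positive.min(), positive.max())
```

<p>The grid endpoints only approximate the closure; finer grids push them arbitrarily close to <span class="math-container">$0$</span> and <span class="math-container">$1$</span>.</p>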
|
linear-algebra | <p>I am sure the answer to this is (kind of) well known. I've searched the web and the site for a proof and found nothing, and if this is a duplicate, I'm sorry.</p>
<p>The following question was given in a contest I took part. I had an approach but it didn't solve the problem.</p>
<blockquote>
<p>Consider $V$ <strong>a</strong> linear subspace of the real vector space $\mathcal{M}_n(\Bbb{R})$ ($n\times n$ real entries matrices) such that $V$ contains only singular matrices (i.e matrices with determinant equal to $0$). What is the maximal dimension of $V$?</p>
</blockquote>
<p>A quick guess would be $n^2-n$ since if we consider $W$ the set of $n\times n$ real matrices with last line equal to $0$ then this space has dimension $n^2-n$ and it is a linear space of singular matrices.</p>
<p>Now the only thing there is to prove is that if $V$ is a subspace of $\mathcal{M}_n(\Bbb{R})$ of dimension $k > n^2-n$ then $V$ contains a non-singular matrix. The official proof was unsatisfactory for me, because it was a combinatorial one, and seemed to have few things in common with linear algebra. I was hoping for a pure linear algebra proof.</p>
<p>My approach was to search for a permutation matrix in $V$, but I used some 'false theorem' in between, which I am ashamed to post here. </p>
| <p>We can show more generally that if $\mathcal M$ is a linear subspace of $\mathcal M_n(\mathbb R)$ such that all its elements have rank less than or equal to $p$, where $1\leq p<n$, then the dimension of $\mathcal M$ is less than or equal to $np$. To see that, consider the subspace $\mathcal E:=\left\{\begin{pmatrix}0&B\\^tB&A\end{pmatrix}, A\in\mathcal M_{n-p}(\mathbb R),B\in\mathcal M_{p,n-p}(\mathbb R) \right\}$. Its dimension is $p(n-p)+(n-p)^2=(n-p)(p+n-p)=n(n-p)$. Let $\mathcal M$ be a linear subspace of $\mathcal M_n(\mathbb R)$ such that $\displaystyle\max_{M\in\mathcal M}\operatorname{rank}(M)=p$. We can assume that this space contains the matrix $J:=\begin{pmatrix}I_p&0\\0&0\end{pmatrix}\in\mathcal M_n(\mathbb R)$. Indeed, if $M_0\in\mathcal M$ is such that $\operatorname{rank}M_0=p$, we can find $P,Q\in\mathcal M_n(\mathbb R)$ invertible matrices such that $J=PM_0Q$, and the map $\varphi\colon \mathcal M\to\varphi(\mathcal M)$ defined by $\varphi(M)=PMQ$ is a rank-preserving bijective linear map.</p>
<p>If we take $M\in\mathcal M\cap \mathcal E$, then we can show, considering $M+\lambda J\in\mathcal M$, that $M=0$. Therefore, since
$$\dim (\mathcal M+\mathcal E)=\dim(\mathcal M)+\dim(\mathcal E)\leq \dim(\mathcal M_n(\mathbb R))=n^2, $$<br>
we have
$$\dim (\mathcal M)\leq n^2-n(n-p)=np.$$</p>
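<p>A quick numerical aside (my own addition, with an arbitrary choice of <span class="math-container">$n$</span> and <span class="math-container">$p$</span>): the bound <span class="math-container">$np$</span> is attained by the subspace of matrices whose last <span class="math-container">$n-p$</span> rows vanish, which has <span class="math-container">$np$</span> free entries and contains only matrices of rank at most <span class="math-container">$p$</span>.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 3
subspace_dim = n * p   # free entries in the first p rows

# Every matrix whose last n - p rows are zero has rank at most p.
for _ in range(100):
    M = np.zeros((n, n))
    M[:p, :] = rng.standard_normal((p, n))
    assert np.linalg.matrix_rank(M) <= p

print("dimension of the example subspace:", subspace_dim)  # 15
```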
| <p>If you consider a subspace <span class="math-container">$\mathcal{M}$</span> of <strong>symmetric</strong> singular matrices of rank at most <span class="math-container">$p$</span>, <a href="https://math.stackexchange.com/a/66936/445105">Davide's argument</a> can be reused to prove <span class="math-container">$\text{dim}(\mathcal{M})\le \frac{p(p+1)}2$</span>.</p>
<h3>Proof</h3>
<p>Consider</p>
<p><span class="math-container">$$
\mathcal E:=\left\{\begin{pmatrix}0&B\\B^T&A\end{pmatrix}, A\in \text{Sym}_{n-p}(\mathbb R),B\in\mathcal M_{p,n-p}(\mathbb R) \right\}.
$$</span>
which is a subspace of the symmetric <span class="math-container">$n\times n$</span> matrices <span class="math-container">$\text{Sym}_n(\mathbb{R})$</span> with dimension
<span class="math-container">$$
\text{dim}(\mathcal E) = p(n-p) + \frac{(n-p)(n-p+1)}2
= \frac{n-p}2\bigl(2p + (n-p+1)\bigr)
= \frac{(n-p)(n+p+1)}2
$$</span>
With similar arguments, <span class="math-container">$\mathcal{M}\cap \mathcal{E} = \{0\}$</span>, which results in
<span class="math-container">$$
\text{dim}(\mathcal E) + \text{dim}(\mathcal{M})
\le \dim \text{Sym}_n(\mathbb R)
= \frac{n(n+1)}2
$$</span>
And thus
<span class="math-container">$$
\text{dim}(\mathcal{M})
\le \frac{n(n+1)}2 - \frac{(n-p)(n+p+1)}2
= \frac{n^2 + n}2 - \frac{(n^2 - p^2) + (n-p)}2
= \frac{p^2+p}2
$$</span></p>
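<p>As a numerical aside (my own sketch, with an arbitrary choice of <span class="math-container">$n$</span> and <span class="math-container">$p$</span>): the bound <span class="math-container">$\frac{p(p+1)}2$</span> is attained by the symmetric matrices supported in the top-left <span class="math-container">$p\times p$</span> block.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 6, 3
example_dim = p * (p + 1) // 2   # free entries of a symmetric p x p block

for _ in range(100):
    B = rng.standard_normal((p, p))
    S = np.zeros((n, n))
    S[:p, :p] = B + B.T          # symmetric, supported in the p x p corner
    assert np.allclose(S, S.T)
    assert np.linalg.matrix_rank(S) <= p

print("dimension of the example subspace:", example_dim)  # 6
```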
|
logic | <p>I'm not quite sure that I really understand WHY I need to use <strong>implication</strong> for <strong>universal quantification</strong>, and <strong>conjunction</strong> for <strong>existential quantification</strong>.</p>
<p>Let $F$ be the domain of fruits and</p>
<p>$$A(x) : \text{is an apple}$$</p>
<p>$$D(x) : \text{is delicious}$$</p>
<p>Let's say:
$$\forall{x} \in F, A(x) \implies D(x)$$
Is <strong>correct</strong> and means <em>all apples are delicous</em>. </p>
<p>Whereas,
$$\forall{x} \in F, A(x) \land D(x)$$
is <strong>incorrect</strong> because this would be saying that <em>all fruits are apples and delicious</em> which is wrong.</p>
<p>But when it comes to the <strong>existential quantifier</strong>:
$$\exists{x} \in F, A(x) \land D(x)$$
Is <strong>correct</strong> and means <em>there is some apple that is delicious</em>.</p>
<p>Also,
$$\exists{x} \in F, A(x) \implies D(x)$$
Is <strong>incorrect</strong>, but I cannot tell why. To me it says <em>there is some fruit that if it is an apple, it is delicious</em>.</p>
<p>I cannot tell the difference in this case, and why the second case is incorrect?</p>
| <blockquote>
<p>To me it says <em>there is some fruit that if it is an apple, it is delicious</em>.</p>
</blockquote>
<p>This is absolutely correct. There exists a fruit such that <em>if</em> it is an apple, then it is delicious. Let $x$ be such a fruit. We have two cases for what $x$ may be here:</p>
<ul>
<li>$x$ is an apple. Then $x$ is delicious. This is the $x$ you are searching for.</li>
<li>$x$ is not an apple. Now the statement "if $x$ is an apple, then $x$ is delicious" automatically holds true. Since $x$ is not an apple, the conclusion doesn't matter. The statement is vacuously true.</li>
</ul>
<p>So the statement $\exists{x} \in F, A(x) \implies D(x)$ fails to capture precisely your desired values of $x$, i.e., apples which are delicious, because it also includes other fruits.</p>
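<p>The vacuous-truth phenomenon can be checked by brute force over a tiny two-fruit domain (an illustrative sketch of my own, not part of the original answer):</p>

```python
from itertools import product

domain = [0, 1]
diverging = 0
for bits in product([False, True], repeat=4):
    A = {0: bits[0], 1: bits[1]}   # A(x): "x is an apple"
    D = {0: bits[2], 1: bits[3]}   # D(x): "x is delicious"
    exists_and = any(A[x] and D[x] for x in domain)
    exists_imp = any((not A[x]) or D[x] for x in domain)   # A(x) -> D(x)
    if exists_imp and not exists_and:
        diverging += 1
        # the implication version was only satisfied vacuously, by a non-apple
        assert any(not A[x] for x in domain)

print(diverging, "of 16 assignments satisfy only the implication version")
```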
| <blockquote>
<p>I'm not quite sure that I really understand WHY I need to use implication for universal quantification, and conjunction for existential quantification.</p>
</blockquote>
<p>Modifying your analysis a bit, let $A$ be the set of apples, and $D$ the set of delicious things.</p>
<p>$\forall x: [x\in A \implies x\in D]$ means all apples are delicious. Often written $\forall x\in A :x\in D$</p>
<p>$\forall x: [x\in A \land x\in D]$ means everything is a delicious apple.</p>
<p>$\exists x:[x\in A \land x\in D]$ means there exists at least one delicious apple. Often written $\exists x\in A:x\in D$, or equivalently $\exists x\in D: x\in A$</p>
<p>What does $\exists x:[x\in A \implies x\in D]$ mean? It is equivalent to $\exists x:[x\notin A \lor x\in D]$.</p>
<p>For a given $x$, any of the following possibilities will satisfy this condition:</p>
<ol>
<li><p>$x\in A \land x\in D$, i.e. there exists at least one delicious apple (as above) </p></li>
<li><p>$x\notin A\land x\in D$, i.e. there exists at least one non-apple that is delicious</p></li>
<li><p>$x\notin A\land x\notin D$, i.e. there exists at least one non-apple that is not delicious</p></li>
</ol>
<p>So, the implication allows for more possibilities than the conjunction. In particular, the implication allows for the possibility that there are no apples. The conjunction does not.</p>
<p>Furthermore, $\exists x: [x\in A \implies x\in D]$ is a set theoretic variation of the so-called Drinker's Paradox. Here's where it gets crazy! For <em>any</em> set $A$ and <em>any</em> proposition $P$, we can prove using ordinary set theory that $$\exists x: [x\in A \implies P]$$</p>
<p>You could even prove, for example, that $$\exists x: [x\in A \implies x\notin A]$$
So, to avoid confusion, you would probably want to avoid such constructs in mathematics. For a formal development, see <a href="https://dcproof.wordpress.com/" rel="noreferrer">The Drinker's Paradox: A Tale of Three Paradoxes</a> at my blog.</p>
|
logic | <p>In classical propositional logic, <strong>or</strong> (aka. <em>disjunction</em>) is a Boolean function $\vee:\{0,1\}^2\longrightarrow\{0,1\}$ defined as $\vee:=\{ (00,0), (01,1), (10,1),(11,1) \}$, and <strong>and</strong> (aka. <em>conjunction</em>) is a Boolean function $\wedge: \{0,1\}^2\longrightarrow\{0,1\}$ defined as $\{(00,0),(01,0),(10,0),(11,1)\}$.</p>
<p>There's also <a href="https://en.wikipedia.org/wiki/Rule_of_inference" rel="noreferrer">rules of inference</a> for these functions, such as $\vee$-introduction, $\wedge$-introduction, $\vee$-elimination, and $\wedge$-elimination, as well as other machinery from logic.</p>
<hr>
<p>So, <strong>or</strong> and <strong>and</strong> are not the same object. But when using them in math reasoning, sometimes the 2 can be used interchangeably.</p>
<p><strong>Example.</strong> Let $X$ be a topological space, let $A\subseteq X$, let $\text{Lim }A$ be the set of <a href="https://en.wikipedia.org/wiki/Limit_point" rel="noreferrer">limit points</a> of $A$. The <em>closure</em> of $A$ is defined as $A\cup\text{Lim }A$. But the set $A\cup \text{Lim }A$ is defined as the set $\{x\in X\ |\ x\in A\ \vee\ x\in \text{Lim }A\}$, namely, the set of all $x\in X$ where $x\in A$ <strong>or</strong> $x\in \text{Lim }A$. Equivalently, the closure of $A$ is the set $A$ <em>and</em> its limit points.</p>
<p>(This is the first example that came to mind, but hopefully you get the idea.)</p>
<p>I know I'm mixing formal logic with natural (human!) language to the point of frivolity, but I'm wondering if there's a non-mundane reason for this equivalence. Perhaps <strong>or</strong> and <strong>and</strong> are "dual", in some sense? Or perhaps the equivalence arises just because natural language is really weird?</p>
<p>(One could disregard this question by saying "the meaning of <strong>or</strong> in logic is not the same as in natural language" (usually natural language uses <em>exclusive-or</em>), just like the meaning of <strong>if</strong> in logic is not the same as in natural language (usually natural language uses <em>if-and-only-if</em>), but I get the feeling something else is going on.)</p>
| <p>Consider: "All fruits and vegetables are nutritious"</p>
<p>Rather than:</p>
<p>$$\forall x ((F(x) \land V(x)) \rightarrow N(x)) \text{ Wrong!}$$</p>
<p>it translates as:</p>
<p>$$\forall x ((F(x) \lor V(x)) \rightarrow N(x))$$</p>
<p>But it <em>also</em> translates as (and is indeed equivalent to):</p>
<p>$$\forall x (F(x)\rightarrow N(x))\land \forall x (V(x) \rightarrow N(x))$$</p>
<p>So now we see that "Fruits and vegetables are nutritious" is really just shorthand for "fruits are nutritious and vegetables are nutritious"</p>
<p>Applied to your case:</p>
<p>"the closure of A is the set A and its limit points."</p>
<p>can be translated as:</p>
<p>$$\forall x ((x \in A \lor x \in \text{Lim} A) \rightarrow x \in Closure(A))$$</p>
<p>or as:</p>
<p>$$\forall x (x \in A \rightarrow x \in Closure(A)) \land \forall x (x \in \text{Lim} A \rightarrow x \in Closure(A))$$</p>
<p>In other words, the confusion is because of the difference between the <em>disjunction of conditions</em> and <em>conjunction of conditionals</em>.</p>
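<p>The equivalence between the <em>disjunction of conditions</em> and the <em>conjunction of conditionals</em> can be confirmed by exhausting all interpretations over a small domain (a mechanical check of my own, not part of the original answer):</p>

```python
from itertools import product

domain = range(2)

def agree(bits):
    F = {0: bits[0], 1: bits[1]}
    V = {0: bits[2], 1: bits[3]}
    N = {0: bits[4], 1: bits[5]}
    lhs = all((not (F[x] or V[x])) or N[x] for x in domain)
    rhs = (all((not F[x]) or N[x] for x in domain) and
           all((not V[x]) or N[x] for x in domain))
    return lhs == rhs

all_agree = all(agree(bits) for bits in product([False, True], repeat=6))
print("formulas agree on all 64 interpretations:", all_agree)  # True
```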
| <p>This is a vague question, of course, but I want to try to give some
perspective on it based on some thinking I’ve done lately.</p>
<p>I think this is a matter of implicitly using active / passive, which behave dually in a somewhat precise mathematical manner. First, closely observe how
we use active and passive regarding the relation of containment:</p>
<p>Let $X$ be a given set, and let $A$, $B$ and $T$ be subsets of $X$. Then we say</p>
<ul>
<li>$T$ contains $A$ <em>and</em> $B$, when we mean $T \supseteq A ∪ B$, and</li>
<li>$T$ is contained by $A$ <em>and</em> $B$, when we mean $T ⊆ A ∩ B$.</li>
</ul>
<p>This is because we actually mean to avoid repeating some clauses
by using the conjunction “and”, in order to shorten
sentences. The statements above are (most often) linguistically perceived
to be equivalent to respectively</p>
<ul>
<li>$T$ contains $A$ <em>and</em> $T$ contains $B$.</li>
<li>$T$ is contained by $A$ <em>and</em> $T$ is contained by $B$.</li>
</ul>
<p>So, the conjunction “<em>and</em>” implicitly serves as a conjunction of sentences or propositions.</p>
<p>Note: I don’t mean to define mathematical parlance here, I try to
observe it.</p>
<p>Now, if we say that “<em>the closure of $A$ is $A$ and its limits points</em>”,
I suspect we immediately understand this to say “<em>the closure of
$A$ is the set $T$ containing (exactly) $A$ and all limit points of $A$</em>”. By the above observation, that is the set $T$ such that $T \supseteq A ∪ B$ (when $B$ denotes the limit points of $A$) and $T$ contains nothing more (so $T = A ∪ B$).</p>
<p>However, if we use “<em>and</em>” in other contexts as “<em>$T$ is in $A$ and $B$</em>”, we mean “$T$ is contained in $A$ and $B$”, so $T ⊆ A ∩ B$.</p>
<hr>
<p>Mathematically, this can be interpreted e.g. <a href="https://en.wikipedia.org/wiki/Lattice_(order)" rel="nofollow noreferrer">lattice</a>-theoretically: Let $L = (L, ≤, ∨, ∧)$ be a lattice, $∨$ denoting the join (maximum) of two elements, $∧$ denoting the meet (minimum) of two elements. Then there’s a dual lattice $L^{\mathrm{op}} = (L, ≥, ∧, ∨)$ and $L$ and $L^{\mathrm{op}}$ are antitonely isomorphic as lattices.</p>
<p>In the dual lattice $L^\mathrm{op}$, let’s write for all $x, y ∈ L$</p>
<ul>
<li>“$y ≤'x$” if $y ≥ x$ in $L$,</li>
<li>“$x ∨' y$” for $x ∧ y$ in $L$,</li>
<li>“$x ∧' y$” for $x ∨ y$ in $L$.</li>
</ul>
<p>So $L^\mathrm{op}$ has order $≤'$, join $∨'$ and meet $∧'$.</p>
<p>If we introduce parlance for the relation, the antitone isomorphism is reflected in our language. For elements $x, y ∈ L$, let’s say $x$ <em>kills</em> $y$ whenever $x ≤ y$ in $L$. (I like <em>killing</em> here because it's so violently vivid.) As humans, we now also <em>automatically</em> say $y$ <em>is killed by</em> $x$ for $x ≤ y$, that is for $y ≤' x$.</p>
<p>Next, observe that for $a, b, t ∈ L$
$$t ≤ a ∧ b \Longleftrightarrow t ≤ a~\text{and}~t ≤ b.$$
Using our parlance:
$$\text{$t$ kills $a ∧ b$} \Longleftrightarrow \text{$t$ kills $a$ and $b$}$$
This justifies $a ∧ b$ to be called “<em>$a$ and $b$</em>”.</p>
<p>And the analogous is true for $L^{\mathrm{op}}$ (where we have to add primes “$'$” to all symbols and replace “<em>kills</em>” by “<em>is killed by</em>”). Therefore, for $a, b, t ∈ L$, we have
\begin{align*}
\text{$t$ is killed by $a$ and $b$} &⇔ t ≤' a ∧' b \\
&⇔t ≥ a ∨ b \\
&⇔\text{$t$ is killed by $a ∨ b$},
\end{align*}
also justifying $a ∨ b$ to be called “<em>$a$ and $b$</em>”.</p>
<p>The difference lies in which perspective we take on an implicit relation – from above or from below. And it depends on whether we use (implicit) verbs describing the relation in active or passive.</p>
<p>In our case, $L$ is the powerset of some space $X$ with $\supseteq$ as order and <em>killing</em> is <em>containing</em>.</p>
<p>This also explains why such a confusion doesn't happen when lingustically using “<em>or</em>”: We never say “<em>$A$ or $B$</em>” as in “<em>the closure of $A$ is $A$ or its limit points</em>”. This is because $t ≤ a ∨ b ⇔ \text{$t ≤ a$ or $t ≤ b$}$ does not hold for general lattices (think of disjoint union), so there’s no justification for saying “<em>$a$ or $b$</em>” for $a ∨ b$; and there need not be any other element fulfilling this universal property: There is (in general) no set “containing exactly $A$ or its limit points”.</p>
<p>(And in a sense, this generalises to categories with limits and colimits.)</p>
|
number-theory | <p>Given a <a href="https://en.m.wikipedia.org/wiki/Complex_number" rel="noreferrer">complex number</a> <span class="math-container">$\begin{aligned}\frac{z}{n}=x+iy\end{aligned}$</span> and a <a href="https://en.m.wikipedia.org/wiki/Gamma_function" rel="noreferrer">gamma function</a> <span class="math-container">$\Gamma(z)$</span> with <span class="math-container">$x\gt0$</span>, it is conjectured that the following <a href="https://en.m.wikipedia.org/wiki/Continued_fraction" rel="noreferrer">continued fraction</a> for <a href="https://en.m.wikipedia.org/wiki/Trigonometric_functions#Sine.2C_cosine_and_tangent" rel="noreferrer"><span class="math-container">$\displaystyle\tan\left(\frac{z\pi}{4z+2n}\right)$</span></a> is true</p>
<p><span class="math-container">$$\begin{split}\displaystyle\tan\left(\frac{z\pi}{4z+2n}\right)&=\frac{\displaystyle\Gamma\left(\frac{z+n}{4z+2n}\right)\Gamma\left(\frac{3z+n}{4z+2n}\right)}{\displaystyle\Gamma\left(\frac{z}{4z+2n}\right)\Gamma\left(\frac{3z+2n}{4z+2n}\right)}\\&=\cfrac{2z}{2z+n+\cfrac{(n)(4z+n)} {3(2z+n)+\cfrac{(2z+2n)(6z+2n)}{5(2z+n)+\cfrac{(4z+3n)(8z+3n)}{7(2z+n)+\ddots}}}}\end{split}$$</span></p>
<p><strong><em>Corollaries</em></strong>:</p>
<p>By taking the limit(which follows after <a href="https://en.m.wikipedia.org/wiki/Abel%27s_theorem" rel="noreferrer">abel's theorem</a>)
<span class="math-container">$$
\begin{aligned}\lim_{z\to0}\frac{\displaystyle\tan\left(\frac{z\pi}{4z+2}\right)}{2z}=\frac{\pi}{4}\end{aligned},
$$</span>
we recover the well known <a href="https://en.m.wikipedia.org/wiki/Pi" rel="noreferrer">continued fraction for <span class="math-container">$\pi$</span></a></p>
<p><span class="math-container">$$\begin{aligned}\cfrac{4}{1+\cfrac{1^2}{3+\cfrac{2^2}{5+\cfrac{3^2}{7+\ddots}}}}=\pi\end{aligned}$$</span></p>
<p>If we let <span class="math-container">$z=1$</span> and <span class="math-container">$n=2$</span>,then we have <a href="https://en.m.wikipedia.org/wiki/Square_root_of_2" rel="noreferrer">the square root of <span class="math-container">$2$</span></a> <span class="math-container">$$\begin{aligned}{1+\cfrac{1}{2+\cfrac{1} {2+\cfrac{1}{2+\cfrac{1}{2+\ddots}}}}}=\sqrt{2}\end{aligned}$$</span></p>
<p><strong><em>Q</em></strong>: How do we prove rigorously that the conjectured continued fraction is true and converges for all complex numbers <span class="math-container">$z$</span> with <span class="math-container">$x\gt0$</span>?</p>
<p><strong><em>Update</em></strong>:I initially defined the continued fraction <span class="math-container">$\displaystyle\tan\left(\frac{z\pi}{4z+2}\right)$</span> for only natural numbers,but as a matter of fact it holds for all complex numbers <span class="math-container">$z$</span> with real part greater than zero.Moreover,this continued fraction is a special case of the general continued fraction found in this <a href="https://math.stackexchange.com/questions/1739431/conjectured-general-continued-fraction-for-the-quotient-of-gamma-functions">post</a>.</p>
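<p>Before attempting a proof, the conjecture is easy to test numerically for real <span class="math-container">$z>0$</span> (a sanity check of my own; <code>depth</code> is an arbitrary truncation level of the continued fraction):</p>

```python
import math

def cf_tan(z, n, depth=100):
    # Bottom-up evaluation of the conjectured continued fraction, with
    # partial numerators a_k = ((2k-2)z + kn)((2k+2)z + kn) and partial
    # denominators b_k = (2k+1)(2z+n).
    t = (2 * depth + 1) * (2 * z + n)
    for k in range(depth, 0, -1):
        a_k = ((2 * k - 2) * z + k * n) * ((2 * k + 2) * z + k * n)
        t = (2 * k - 1) * (2 * z + n) + a_k / t
    return 2 * z / t

for z, n in [(1.0, 2.0), (0.5, 1.0), (3.0, 5.0)]:
    d = 4 * z + 2 * n
    tangent = math.tan(math.pi * z / d)
    gamma_ratio = (math.gamma((z + n) / d) * math.gamma((3 * z + n) / d)
                   / (math.gamma(z / d) * math.gamma((3 * z + 2 * n) / d)))
    assert abs(tangent - gamma_ratio) < 1e-12
    assert abs(tangent - cf_tan(z, n)) < 1e-9

print("tan, gamma ratio, and continued fraction agree")
```

<p>For <span class="math-container">$z=1$</span>, <span class="math-container">$n=2$</span> this reproduces <span class="math-container">$\tan(\pi/8)=\sqrt2-1$</span>, consistent with the <span class="math-container">$\sqrt2$</span> corollary above.</p>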
| <p>The proposed continued fraction
<span class="math-container">\begin{equation}
\displaystyle\tan\left(\frac{z\pi}{4z+2n}\right)=\cfrac{2z}{2z+n+\cfrac{(n)(4z+n)} {3(2z+n)+\cfrac{(2z+2n)(6z+2n)}{5(2z+n)+\cfrac{(4z+3n)(8z+3n)}{7(2z+n)+\ddots}}}}
\end{equation}</span>
can be written as
<span class="math-container">\begin{equation}
\displaystyle\tan\left(\frac{z\pi}{4z+2n}\right)=\cfrac{2z/\left( 2z+n \right)}{1+\cfrac{(n)/\left( 2z+n \right)\cdot(4z+n)/\left( 2z+n \right)} {3+\cfrac{(2z+2n)/\left( 2z+n \right)\cdot(6z+2n)/\left( 2z+n \right)}{5+\cfrac{(4z+3n)/\left( 2z+n \right)\cdot(8z+3n)/\left( 2z+n \right)}{7+\ddots}}}}
\end{equation}</span>
Denoting <span class="math-container">$u=\cfrac{z}{4z+2n}$</span>, the factors of the numerators are
<span class="math-container">\begin{equation}
\frac{n}{2z+n}=1-4u\,;\quad\frac{4z+n}{2z+n}=1+4u\,;\quad\frac{2z+2n}{2z+n}=2-4u\,;\quad\frac{6z+2n}{2z+n}=2+4u\,;\cdots
\end{equation}</span>
Then, the fraction can be simplified as
<span class="math-container">\begin{equation}
\displaystyle\tan\left(\pi u\right)=\cfrac{4u}{1+\cfrac{\cfrac{1-16u^2}{1\cdot3}} {1+\cfrac{\cfrac{4-16u^2}{3\cdot5}}{1+\cfrac{\cfrac{9-16u^2}{5\cdot7}}{1+\ddots}}}}
\end{equation}</span>
It is thus a special case of the continued fraction found in <a href="https://math.stackexchange.com/a/1798142/348034">this answer</a>:
<span class="math-container">\begin{equation}
\tan\left(\alpha\tan^{-1}z\right)=\cfrac{\alpha z}{1+\cfrac{\frac{(1^2-\alpha^2)z^2}{1\cdot 3}} {1+\cfrac{\frac{(2^2-\alpha^2)z^2}{3\cdot 5}}{1+\cfrac{\frac{(3^2-\alpha^2)z^2}{5\cdot 7}}{1+\ddots}}}}
\end{equation}</span>
here <span class="math-container">$z=1$</span> and <span class="math-container">$\alpha=4u$</span>. The brilliant proof is based on a continued fraction due to Nörlund.</p>
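<p>The chain of equivalence transformations can be double-checked numerically (my own sketch; both truncations use the same arbitrary depth, so they compute the same convergent):</p>

```python
import math

def original_cf(z, n, depth=100):
    # 2z / (2z+n + n(4z+n)/(3(2z+n) + ...)), evaluated bottom-up
    t = (2 * depth + 1) * (2 * z + n)
    for k in range(depth, 0, -1):
        a_k = ((2 * k - 2) * z + k * n) * ((2 * k + 2) * z + k * n)
        t = (2 * k - 1) * (2 * z + n) + a_k / t
    return 2 * z / t

def simplified_cf(u, depth=100):
    # 4u / (1 + ((1-16u^2)/(1*3)) / (1 + ((4-16u^2)/(3*5)) / (1 + ...)))
    t = 1.0
    for k in range(depth, 0, -1):
        t = 1.0 + (k * k - 16 * u * u) / ((2 * k - 1) * (2 * k + 1)) / t
    return 4 * u / t

z, n = 2.0, 3.0
u = z / (4 * z + 2 * n)
assert abs(original_cf(z, n) - simplified_cf(u)) < 1e-10
assert abs(simplified_cf(u) - math.tan(math.pi * u)) < 1e-9
print("both forms agree with tan(pi*u) at u =", u)
```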
| <p>The ratio
<span class="math-container">$$\tan\dfrac{\pi z}{4z+2n}
= \dfrac{\Gamma\left(\dfrac{z+n}{4z+2n}\right)\Gamma\left(\dfrac{3z+n}{4z+2n}\right)}{\Gamma\left(\dfrac{z}{4z+2n}\right)\Gamma\left(\dfrac{3z+2n}{4z+2n}\right)}\hspace{100mu}\tag1$$</span>
can be obtained, applying "real" identity</p>
<blockquote>
<p><span class="math-container">$$\sin\pi x = \dfrac\pi{\Gamma(x)\Gamma(1-x)}\hspace{100mu}\tag2$$</span></p>
</blockquote>
<p>to the expression
<span class="math-container">$$\tan\dfrac\pi2\dfrac z{2z+n}
= \dfrac{\sin\pi\dfrac z{4z+2n}}{\sin\pi\dfrac{z+n}{4z+2n}},$$</span>
so it looks nice and quite correct.</p>
<p>Continued fraction can be obtained, using <a href="https://www.wolframalpha.com/input/?i=continued%20fraction%20tan%20x" rel="noreferrer">known continued fraction of the tangent</a> function in the form of
<span class="math-container">$$\tan \dfrac{\pi x}4 = \cfrac x{1+\operatorname{
\Large K}\hspace{-27mu}\phantom{\Big|}_{k=1}^{\large ^{\,\infty}}\cfrac{(2k-1)^2-x^2}2}\hspace{100mu}\tag3$$</span>
with
<span class="math-container">$$x=\dfrac{2z}{2z+n}.$$</span></p>
|
matrices | <p>Suppose $A=uv^T$ where $u$ and $v$ are non-zero column vectors in ${\mathbb R}^n$, $n\geq 3$. $\lambda=0$ is an eigenvalue of $A$ since $A$ is not of full rank. $\lambda=v^Tu$ is also an eigenvalue of $A$ since
$$Au = (uv^T)u=u(v^Tu)=(v^Tu)u.$$
Here is my question:</p>
<blockquote>
<p>Are there any other eigenvalues of $A$?</p>
</blockquote>
<p>Added:</p>
<p>Thanks to Didier's comment and anon's answer, $A$ can not have other eigenvalues than $0$ and $v^Tu$. I would like to update the question:</p>
<blockquote>
<p>Can $A$ be diagonalizable?</p>
</blockquote>
| <p>We're assuming $v\ne 0$. The orthogonal complement of the linear subspace generated by $v$ (i.e. the set of all vectors orthogonal to $v$) is therefore $(n-1)$-dimensional. Let $\phi_1,\dots,\phi_{n-1}$ be a basis for this space. Then they are linearly independent and $uv^T \phi_i = (v\cdot\phi_i)u=0 $. Thus the the eigenvalue $0$ has multiplicity $n-1$, and there are no other eigenvalues besides it and $v\cdot u$.</p>
| <p>As to your last question, when is $A$ diagonalizable?</p>
<p>If $v^Tu\neq 0$, then from anon's answer you know the algebraic multiplicity of $\lambda=0$ is at least $n-1$, and from your previous work you know $\lambda=v^Tu\neq 0$ is an eigenvalue; together, that gives you at least $n$ eigenvalues (counting multiplicity); since the geometric and algebraic multiplicities of $\lambda=0$ are equal, and the other eigenvalue has algebraic multiplicity $1$, it follows that $A$ is diagonalizable in this case.</p>
<p>If $v^Tu=0$, on the other hand, then the above argument does not hold. But if $\mathbf{x}$ is nonzero, then you have $A\mathbf{x} = (uv^T)\mathbf{x} = u(v^T\mathbf{x}) = (v\cdot \mathbf{x})u$; if this is a multiple of $\mathbf{x}$, $(v\cdot\mathbf{x})u = \mu\mathbf{x}$, then either $\mu=0$, in which case $v\cdot\mathbf{x}=0$, so $\mathbf{x}$ is in the orthogonal complement of $v$; or else $\mu\neq 0$, in which case $v\cdot \mathbf{x} = v\cdot\left(\frac{v\cdot\mathbf{x}}{\mu}\right)u = \left(\frac{v\cdot\mathbf{x}}{\mu}\right)(v\cdot u) = 0$, and again $\mathbf{x}$ lies in the orthogonal complement of $v$; that is, the only eigenvectors lie in the orthogonal complement of $v$, and the only eigenvalue is $0$. This means the eigenspace is of dimension $n-1$, and therefore the geometric multiplicity of $0$ is strictly smaller than its algebraic multiplicity, so $A$ is not diagonalizable.</p>
<p>In summary, $A$ is diagonalizable if and only if $v^Tu\neq 0$, if and only if $u$ is not orthogonal to $v$. </p>
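<p>A quick numerical confirmation (my own, using an arbitrary concrete pair <span class="math-container">$u$</span>, <span class="math-container">$v$</span>):</p>

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0, 4.0, 5.0]).reshape(-1, 1)
v = np.ones((5, 1))
A = u @ v.T                       # the rank-one matrix u v^T

eigenvalues = np.sort(np.abs(np.linalg.eigvals(A)))
vTu = (v.T @ u).item()            # v^T u = 1+2+3+4+5 = 15 here
# n - 1 = 4 eigenvalues equal to 0 and a single one equal to v^T u
assert np.all(eigenvalues[:-1] < 1e-10)
assert np.isclose(eigenvalues[-1], vTu)

print("nonzero eigenvalue:", eigenvalues[-1])   # ~ 15 = v^T u = tr(A)
```

<p>Here <span class="math-container">$v^Tu = 15 \neq 0$</span>, so <span class="math-container">$A$</span> is diagonalizable; choosing <span class="math-container">$v$</span> orthogonal to <span class="math-container">$u$</span> instead makes every eigenvalue <span class="math-container">$0$</span> while <span class="math-container">$A\neq 0$</span>, which is the non-diagonalizable case.</p>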
|
differentiation | <blockquote>
<p>I just encountered this function $f(x,y)=\min(x,y)$. I wonder what the partial derivatives of it look like.</p>
</blockquote>
| <p>$$
f(x, y) = \min(x,y) = \begin{cases}
x & \text{if } x \le y \\
y & \text{if } x \gt y
\end{cases}
$$</p>
<p>The function isn't differentiable along $y = x$, but the partial derivatives are straightforward otherwise.</p>
<p>$$
\frac{\partial f(x, y)}{\partial x} = \begin{cases}
1 & \text{if } x \lt y \\
0 & \text{if } x \gt y
\end{cases}
$$</p>
<p>$$
\frac{\partial f(x, y)}{\partial y} = \begin{cases}
0 & \text{if } x \lt y \\
1 & \text{if } x \gt y
\end{cases}
$$</p>
<p>Here is a plot of the function to help you see the derivatives and why it's not differentiable along $y = x$:</p>
<p><img src="https://i.sstatic.net/j2Yja.png" alt="plot"></p>
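<p>The case formulas are easy to confirm with finite differences (a small check of my own; the step <code>h</code> is arbitrary):</p>

```python
def f(x, y):
    return min(x, y)

h = 1e-6

# Away from the line y = x, central differences match the formulas above.
fx_lower = (f(1.0 + h, 2.0) - f(1.0 - h, 2.0)) / (2 * h)   # x < y: ~1
fy_lower = (f(1.0, 2.0 + h) - f(1.0, 2.0 - h)) / (2 * h)   # x < y: ~0
fx_upper = (f(2.0 + h, 1.0) - f(2.0 - h, 1.0)) / (2 * h)   # x > y: ~0
fy_upper = (f(2.0, 1.0 + h) - f(2.0, 1.0 - h)) / (2 * h)   # x > y: ~1

# On the line, the two one-sided x-derivatives disagree, so the partial
# derivative with respect to x does not exist there.
right = (f(1.0 + h, 1.0) - f(1.0, 1.0)) / h   # ~0
left = (f(1.0, 1.0) - f(1.0 - h, 1.0)) / h    # ~1

print(fx_lower, fy_lower, fx_upper, fy_upper, right, left)
```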
| <p>If $(a,b)$ is below the line $x=y$, then the function has value $y$ on a neighborhood of $(a,b)$, so the partial derivatives are
$$\begin{align*}
\left.\frac{\partial f}{\partial x}\right|_{(x,y)=(a,b)}&=0\\
\left.\frac{\partial f}{\partial y}\right|_{(x,y)=(a,b)} &= 1.
\end{align*}$$
Symmetrically, if $(a,b)$ is "above" the line $x=y$, then the function has value $x$ on a neighborhood of $(a,b)$, so the partial derivatives are:
$$\begin{align*}
\left.\frac{\partial f}{\partial x}\right|_{(x,y)=(a,b)}&=1\\
\left.\frac{\partial f}{\partial y}\right|_{(x,y)=(a,b)} &= 0.
\end{align*}$$</p>
<p>If $(a,b)$ is on the line $x=y$, then the function has value $y$ as we approach along a constant $y$ direction from the right, and value $x$ if we approach along a constant $y$ direction on the left. So the partial with respect to $x$ is $1$ from the left and $0$ from the right, hence does not exist at $(a,b)$. Similarly for $y$.</p>
<p>So the function is differentiable <em>away</em> from the line $x=y$, with values as given above.</p>
|
combinatorics | <p>I was hoping to find a more "mathematical" proof, instead of proving logically $\displaystyle \sum_{k = 0}^n {n \choose k}^2= {2n \choose n}$.</p>
<p>I already know the logical Proof:</p>
<p>$${n \choose k}^2 = {n \choose k}{ n \choose n-k}$$</p>
<p>Hence summation can be expressed as:</p>
<p>$$\binom{n}{0}\binom{n}{n} + \binom{n}{1}\binom{n}{n-1} + \cdots + \binom{n}{n}\binom{n}{0}$$</p>
<p>One can think of it as choosing $n$ people from a group of $2n$
(imagine dividing a group of $2n$ into $2$ groups of $n$ people each: I can get $k$ people from group $1$ and another $n-k$ people from group $2$, and we do this for $k = 0$ to $n$).</p>
| <p>The combinatorial explanation is straightforward. There's also a roundabout approach through what are called "generating functions." The binomial theorem tells us that</p>
<p>$$(1+x)^n(x+1)^n=\left(\sum_{a=0}^n\binom{n}{a}x^a\right)\left(\sum_{b=0}^n\binom{n}{b}x^{n-b}\right)=\sum_{c=0}^{2n}\left(\sum_{a+n-b=c}\binom{n}{a}\binom{n}{b}\right)x^c$$</p>
<p>The $x^n$ coefficient of the above occurs with $c=n$, wherein the coefficient is</p>
<p>$$\sum_{a+n-b=n}\binom{n}{a}\binom{n}{b}=\sum_{a=0}^n\binom{n}{a}^2.$$</p>
<p>However, the $x^n$ coefficient of $(1+x)^n(x+1)^n=(1+x)^{2n}$, again by the binomial theorem, is</p>
<p>$$\binom{2n}{n}. $$</p>
<p>Equating the two gives the result.</p>
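<p>The identity is also cheap to verify directly for small <span class="math-container">$n$</span> (an add-on check, not part of the original answer):</p>

```python
from math import comb

# sum_k C(n, k)^2 == C(2n, n) for n = 0..19
for n in range(20):
    assert sum(comb(n, k) ** 2 for k in range(n + 1)) == comb(2 * n, n)

print(sum(comb(10, k) ** 2 for k in range(11)), "==", comb(20, 10))
```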
| <p>Since $\dbinom n k= \dbinom n {n-k}$, the identity
$$
\sum_{k=0}^n \binom n k ^2 = \binom {2n} n
$$
is the same as
$$
\sum_{k=0}^n \binom n k \binom n {n-k} = \binom {2n} n.
$$
So say a committee consists of $n$ Democrats and $n$ Republicans, and one will choose a subcommittee of $n$ members. One may choose $k$ Democrats and $n-k$ Republicans in $\dbinom n k \cdot \dbinom n {n-k}$ ways. The number of Democrats is in the set $\{0,1,2,\ldots,n\}$, thus ranging from all Republicans to all Democrats. The sum then gives the total number of ways to choose $n$ out of $2n$.</p>
|
logic | <p>I would like to know if someone can explain in a somehow down to earth (almost logic free) way why is it true that in an axiom system where there is some statement $P$ such that $P$ and its negation $\lnot P$ are true, then every statement in the system is true?</p>
<p>I'm not sure if this can be done, but basically since I don't know any formal logic at all, I'm interested in seeing if at least the argument can be conveyed in an intuitive way, or if the idea can be explained without talking about first or second order logic and using symbols like $\top$, $\bot$, and $\vdash$.</p>
<p>This <a href="https://math.stackexchange.com/questions/5564/why-an-inconsistent-formal-system-can-prove-everything">previous question</a> is like the formal version (which I don't understand) so maybe my question can be thought of as a version for dummies of that question. </p>
<p>Thanks a lot in advance.</p>
| <p>I'll try to say this all in plain English:</p>
<p>Let's say we decide to accept the following two facts: (1) "I am a fish", and (2) "I am not a fish". Just keep those in mind.</p>
<p>Now let's pick any old statement, say: (3) "You can fly". Now let's prove that the statement is true!</p>
<p>Alright, we've already accepted that (1) "I am a fish". Of course, any time I have a true statement <em>P</em>, I can make a new true statement by making the statement "<em>P</em> or <em>Q</em> is true." Because to check if an 'or' statement is true, I only need to check that one of them is true. (If I tell you "My name is Dylan OR I can spit fire," you don't need to wait around with a fire extinguisher to tell if that statement is true. It's true because the first part of it is true).</p>
<p>So by this logic, the statement (4) "I am a fish or you can fly" must be true (since the first part is true.)</p>
<p>OK, but now let's say, in general, I have some 'or' statement "<em>P</em> or <em>Q</em>" and I know for a fact that the whole statement is true. If I also know that <em>P</em> is <em>false</em> then I can conclude that <em>Q</em> is true. Right? Because an 'or' statement is true if and only if <em>at least one</em> of the statements inside it is true, so if I rule out one of them the other one <em>must</em> be true. (So if I always tell the truth and I tell you that you have a billion dollars in your bank account OR I just ate a sandwich, you can check your bank account and quickly conclude that I just ate lunch... unless you're very wealthy.)</p>
<p>Alright, so far so good. We <em>know</em> the statement "I am a fish or you can fly" is definitely true. But wait, we <em>also</em> know that the statement "I am a fish" is <em>false</em> (remember, it's one of the things we assumed in the very beginning!). So that means, by what we just talked about, that the statement "You can fly" must be true.</p>
<p>So voilà! Using the magic of a contradictory system, we've proven you can fly!</p>
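<p>The argument above is exactly disjunction introduction followed by disjunction elimination, and it can be written out formally. Here is the same derivation as a short Lean 4 sketch (my translation, not part of the original answer):</p>

```lean
-- From P together with ¬P, any Q ("you can fly") follows.
example (P Q : Prop) (hP : P) (hnP : ¬P) : Q :=
  -- Step 1: from P, conclude "P or Q" (Or introduction).
  have hPQ : P ∨ Q := Or.inl hP
  -- Step 2: P is ruled out by ¬P, so the other disjunct Q must hold.
  hPQ.elim (fun hp => absurd hp hnP) id
```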
| <p>It is somewhat misleading to say "every statement is true" about an inconsistent theory. This might be a point of confusion in this question, and in general the difference between "truth" and "provability" causes many other confusions, so we have to be careful to distinguish them.</p>
<p>"Truth" is a property that a statement has in a particular model. In other words a particular statement is either true or false in a particular model, assuming the statement is written in a formal language compatible with the model.</p>
<p>An inconsistent theory has no models at all. If you have no models, there's no model in which any particular statement can be true. It is correct to say that every statement in the language of the theory is <em>provable</em> from an inconsistent theory, and that every statement in the language of the theory is <em>semantically entailed</em> by the theory. But it's an abuse of language to say that every statement is "true" in the theory: true in what model? </p>
<p>Sometimes, when people are writing an informal proof, they say things like "assume $A$ is true" or "assume $B$ is false". But these are just figures of speech; the actual proof system usually has other ways of dealing with hypotheses than to mark them as "true" and "false". Alternatively, you can view those sayings as abbreviated forms of "assume $A$ is true in some fixed, unspecified model", "assume $B$ is false in our fixed, unspecified model". That interpretation of the informal proof is fine for any consistent theory. But that interpretation is more difficult for inconsistent theories. Because there are no models, it will be a counterfactual statement. </p>
|
geometry | <p>My son is in 2nd grade. His math teacher gave the class a quiz, and one question was this:</p>
<blockquote>
<p>If a triangle has 3 sides, and a rectangle has 4 sides,
how many sides does a circle have?</p>
</blockquote>
<p>My first reaction was "0" or "undefined". But my son wrote "$\infty$" which I think is a reasonable answer. However, it was marked wrong with the comment, "the answer is 1".</p>
<p>Is there an accepted correct answer in geometry?</p>
<p><strong>edit:</strong> I ran into this teacher recently and mentioned this quiz problem. She said she thought my son had written "8." She didn't know that a sideways "8" means infinity.</p>
| <p>The answer depends on the definition of the word "side." I think this is a terrible question (edit: <em>to put on a quiz</em>) and is the kind of thing that will make children hate math. "Side" is a term that should really be reserved for polygons. </p>
| <p>My third-grade son came home a few weeks ago with similar homework questions:</p>
<blockquote>
<p>How many faces, edges and vertices do the following
have?</p>
<ul>
<li>cube</li>
<li>cylinder</li>
<li>cone</li>
<li>sphere</li>
</ul>
</blockquote>
<p>Like most mathematicians, my first reaction was that for
the latter objects the question would need a precise
definition of face, edge and vertex, and isn't really
sensible without such definitions.</p>
<p>But after talking about the problem with numerous people, conducting a kind of social/mathematical experiment, I observed something intriguing. What I observed was that
none of my non-mathematical friends and acquaintances had
any problem with using an intuitive geometric concept here,
and they all agreed completely that the answers should be</p>
<ul>
<li>cube: 6 faces, 12 edges, 8 vertices</li>
<li>cylinder: 3 faces, 2 edges, 0 vertices</li>
<li>cone: 2 faces, 1 edge, 1 vertex</li>
<li>sphere: 1 face, 0 edges, 0 vertices</li>
</ul>
<p>Indeed, these were also the answers desired by my
son's teacher (who is a truly outstanding teacher). Meanwhile, all of my mathematical
colleagues hemmed and hawed about how we can't really
answer, and what does "face" mean in this context anyway,
and so on; most of them wanted ultimately to say that a
sphere has infinitely many faces and infinitely many
vertices and so on. For the homework, my son wrote an explanation giving the answers above, but also explaining that there was a sense in which some of the answers were infinite, depending on what was meant.</p>
<p>At a party this past weekend full of
mathematicians and philosophers, it was a fun game to first
ask a mathematician the question, who invariably made various objections and refusals and said it made no sense and so on, and then the
non-mathematical spouse would forthrightly give a completely clear
account. There were many friendly disputes about it that evening.</p>
<p>So it seems, evidently, that our extensive mathematical training has
interfered with our ability to grasp easily what children and
non-mathematicians find to be a clear and distinct
geometrical concept.</p>
<p>(My actual view, however, is that it is our training that has taught us that the concepts are not so clear and distinct, as witnessed by numerous borderline and counterexample cases in the historical struggle to find the right definitions for the $V-E+F$ and other theorems.) </p>
|
game-theory | <p>This is actually a pretty interesting combinatorics problem, as there are several different branches in the mission tree that split off and join together, and of course there are prerequisites for each mission. Essentially what I'm wondering is: how many possible ways are there to get from Nines And AK's to Reuniting The Families? I attached a screenshot of that part of the mission tree for reference. <a href="https://i.sstatic.net/knBXF.jpg" rel="noreferrer">LOS SANTOS MISSION TREE</a></p>
<p>NOTE: For those who have never played the game, ALL missions in the ENTIRE tree must be completed before Reuniting The Families. Each line coming from the bottom of a box leads to the mission(s) that are unlocked after that one, and each line from the top of a box leads to the mission(s) that must be completed before it.</p>
| <p>This is <em>almost</em> an easy problem with a nice solution.</p>
<p>If "Burning Desire" weren't a prerequisite for "Doberman", then the graph of missions would be a <a href="https://en.wikipedia.org/wiki/Series-parallel_graph" rel="nofollow noreferrer"><em>series-parallel graph</em></a>. For these, the problem of counting the number of orders is very easy. Suppose you have series-parallel graphs <span class="math-container">$G_1, G_2$</span> with <span class="math-container">$n_1+2, n_2+2$</span> vertices and we've computed that there's <span class="math-container">$\tau_1, \tau_2$</span> ways to order them respectively. Then:</p>
<ul>
<li>If we join <span class="math-container">$G_1$</span> and <span class="math-container">$G_2$</span> in series, then all vertices of <span class="math-container">$G_1$</span> need to come before all vertices of <span class="math-container">$G_2$</span>, so there are <span class="math-container">$\tau_1 \cdot \tau_2$</span> orders.</li>
<li>If we join <span class="math-container">$G_1$</span> and <span class="math-container">$G_2$</span> in parallel, then we can interleave the vertices of <span class="math-container">$G_1$</span> and <span class="math-container">$G_2$</span> in any of <span class="math-container">$\frac{(n_1+n_2)!}{n_1!\,n_2!}$</span> ways, for <span class="math-container">$\frac{(n_1+n_2)!}{n_1!\,n_2!} \cdot \tau_1 \cdot \tau_2$</span> orders.</li>
</ul>
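<p>These two composition rules translate directly into code. Below is a minimal sketch of my own (not from the answer) that represents each independent piece as a pair (number of vertices to order, number of valid orders), ignoring the shared endpoint vertices:</p>

```python
from math import comb

def series(g1, g2):
    """All of g1 must come before all of g2."""
    (n1, t1), (n2, t2) = g1, g2
    return (n1 + n2, t1 * t2)

def parallel(g1, g2):
    """Freely interleave the orders of g1 and g2."""
    (n1, t1), (n2, t2) = g1, g2
    return (n1 + n2, comb(n1 + n2, n1) * t1 * t2)

def chain(n):
    """A path of n vertices has exactly one valid order."""
    return (n, 1)

# Two independent chains of lengths 2 and 3 interleave in C(5, 2) = 10 ways:
print(parallel(chain(2), chain(3)))  # (5, 10)
```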
<p>Just a few of these steps would have given us the total number of orders of the Los Santos missions, which is a quick process you could almost do by hand (but probably want a calculator to evaluate the final answer): it is
<span class="math-container">$$\binom{19}{9} \binom{4}{2} \binom{9}{3} \binom{3}{1} \binom{8}{3} = 7\,821\,830\,016.$$</span> Each of these binomials corresponds to joining two graphs in parallel, where we interleave two independent branches. For instance, the <span class="math-container">$\binom93$</span> corresponds to the number of ways to interleave the 3 leftmost missions (Running Dog, Wrong Side of the Tracks, and Just Business) with the 6 other missions on the left side of the diagram.</p>
<p>Also, if we just want a quick estimate of the final answer, then the above over-estimates it by about a factor of <span class="math-container">$2$</span>: roughly (but not exactly) half the time, you'll be counting orders that put Doberman before Burning Desire, which are not valid. </p>
<hr>
<p>Unfortunately, we can't get the exact answer this way, and must resort to brute force. The following Mathematica code implements one such strategy:</p>
<ul>
<li>Find the list of missions that can be done without prerequisites: <code>Pick[VertexList[graph], VertexInDegree[graph], 0]</code></li>
<li>For each of those missions, <code>s</code>, recursively compute the number of ways to do the rest of the missions if you do <code>s</code> first: <code>topCount[VertexDelete[graph, s]]</code>.</li>
</ul>
<p>Here's the code.</p>
<pre><code>(* Base case: the empty graph has exactly one (empty) order. *)
topCount[graph_ /; VertexCount[graph] == 0] = 1;

(* Recursive case, memoized: sum over every currently available mission s. *)
topCount[graph_] := topCount[graph] =
  Total[Table[
    topCount[VertexDelete[graph, s]],
    {s, Pick[VertexList[graph], VertexInDegree[graph], 0]}
  ]];
</code></pre>
<p>Applying this Mathematica function to this graph gives a final answer of <span class="math-container">$4\,303\,018\,512$</span>.</p>
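<p>For readers without Mathematica, the same delete-a-source recursion can be sketched in Python with <code>functools.lru_cache</code>. Since the full mission graph isn't reproduced here, the demo below (mine, not the answer's) runs on a small example DAG instead:</p>

```python
from functools import lru_cache

def count_orders(vertices, edges):
    """Count orderings of `vertices` in which every edge (u, v) has u before v."""
    edges = tuple(edges)

    @lru_cache(maxsize=None)
    def go(remaining):
        if not remaining:
            return 1  # exactly one way to order nothing
        total = 0
        for v in remaining:
            # v is playable if none of its prerequisites are still pending
            if all(u not in remaining for (u, w) in edges if w == v):
                total += go(remaining - {v})
        return total

    return go(frozenset(vertices))

# Diamond DAG: a before b and c, both before d -> the two orders abcd and acbd
print(count_orders("abcd", [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]))  # 2
```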
| <p>While it’s true that the calculation would have been easier if “Burning Desire” weren’t a prerequisite for “Doberman”, that doesn’t mean we have to resort to brute force. We can correct for this in a few steps.</p>
<p>As Misha Lavrov noted, the number of orders without this constraint would have been</p>
<p><span class="math-container">$$
\binom42\binom96\binom31\binom83\binom{19}9=7821830016\;.
$$</span></p>
<p>From this we have to subtract the number of orders in which “Doberman” precedes “Burning Desire”. We can visualize this in the graph by removing the line from “Burning Desire” to “Doberman” and instead adding a line from “Doberman” to “Burning Desire”. The resulting graph is again almost a series-parallel graph, but again there’s one prerequisite that messes things up – in this case it’s the constraint that “Madd Dogg’s Rhymes” comes before “Burning Desire”.</p>
<p>So we use the same approach again and first count without satisfying that constraint; the result is</p>
<p><span class="math-container">$$
\binom74\binom31\binom51\binom{10}3\binom{19}{11}=4761666000\;.
$$</span></p>
<p>So we have to subtract that from the first result; but then we’ve subtracted too much because this includes orders in which “Burning Desire” comes before “Madd Dogg’s Rhymes”. So we have to remove the line from “Madd Dogg’s Rhymes” to “Burning Desire” and instead draw one from “Burning Desire” to “Madd Dogg’s Rhymes”. Now it’s the line from “Life’s a Beach” to “Madd Dogg’s Rhymes” that gets in the way. Removing that would leave “Life’s a Beach” hanging in the air, so we connect it to “Reuniting the Families” instead. The resulting count is</p>
<p><span class="math-container">$$
\binom41\binom41\binom61\binom81\binom{13}3\binom{19}5=2554066944\;.
$$</span></p>
<p>Now we have to draw a line from “Madd Dogg’s Rhymes” to “Life’s a Beach”. Now the line from “OG LOC” to “Life’s a Beach” is the problem, so we remove it, which leads to a count of</p>
<p><span class="math-container">$$
\binom31\binom51\binom71\binom91\binom{14}3\binom{19}4=1333266480\;.
$$</span></p>
<p>Now we draw a line from “Life’s a Beach” to “OG LOC”, and this time the resulting graph is series-parallel – it merely has a redundant line from “Nines and AK's” to “OG LOC” that we can remove without having to compensate for it. The resulting contribution is</p>
<p><span class="math-container">$$
\binom72\binom91\binom{11}1\binom{13}1\binom{18}3=22054032\;.
$$</span></p>
<p>Thus, altogether we count</p>
<p><span class="math-container">$$
7821830016-(4761666000-(2554066944-(1333266480-22054032)))=4303018512
$$</span></p>
<p>different orders, in agreement with Misha Lavrov’s count. </p>
|
matrices | <p>The <a href="http://en.wikipedia.org/wiki/Trace_%28linear_algebra%29">trace</a> is the sum of the elements on the diagonal of a matrix. Is there a similar operation for the sum of <em>all</em> the elements in a matrix?</p>
| <p>I don't know if it has a nice name or notation, but for the matrix $\mathbf A$ you could consider the quadratic form $\mathbf e^\top\mathbf A\mathbf e$, where $\mathbf e$ is the column vector whose entries are all $1$'s.</p>
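<p>In code the identity is immediate; a small NumPy illustration (mine, not part of the original answer):</p>

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
e = np.ones(2)  # the all-ones vector (1-D here; NumPy handles the transposes)

grand_sum = e @ A @ e  # the quadratic form e^T A e
print(grand_sum, A.sum())  # 10.0 10
```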
| <p>The term "grand sum" is commonly used, if only informally, to represent the
sum of all elements. </p>
<p>By the way, the grand sum is a very important quantity in the contexts of
Markovian transition matrices and other probabilistic applications of linear
algebra. </p>
<p>Regards,
Scott</p>
|
differentiation | <p><strong>The question</strong>: Can we define differentiable functions between (some class of) sets, "without <span class="math-container">$\Bbb R$</span>"<sup>*</sup> so that it</p>
<ol>
<li>Reduces to the traditional definition when desired?</li>
<li>Has the same use in at least some of the higher contexts where we would use the present differentiable manifolds?</li>
</ol>
<hr>
<p><strong>Motivation/Context:</strong></p>
<p>I was a little bit disappointed when I learned to differentiate on manifolds. Here's how it went.</p>
<p>A younger me was studying metric spaces as the first unit in a topology course when a shiny new generalization of continuity was presented to me. I was thrilled because I could now consider continuity in a whole new sense, on function spaces, finite sets, even the familiar reals with a different metric, etc. Still, I felt like I hadn't escaped the reals (why I wanted to I can't say, but I digress) since I was still measuring distance (and therefore continuity, in my mind) with a real number: <span class="math-container">$$d: X\to\huge\Bbb R$$</span></p>
<p>If the reader for some reason shares or at least empathizes with (I honestly can't explain this fixation of mine) the desire to have definitions not appeal to other sets/structures<sup>*</sup>, then they will understand my excitement in discovering an even more general definition, the standard one on arbitrary topological spaces. I was home free; the new definition was totally untied to the ever-convenient real numbers, and of course I could recover my first (calculus) definition, provided I topologized <span class="math-container">$\Bbb R^n$</span> adequately (and it seemed very reasonable to topologize <span class="math-container">$\Bbb R^n$</span> with Pythagoras' theorem, after all).</p>
<p>Time passed and I kept studying, and through other courses (some simultaneous to topology, some posterior) a new sort of itch began to develop, this time with <em>differentiable</em> functions. On the one hand, I had definitions (types of convergence, compact sets, orientable surfaces, etc.) and theorems (Stone-Weierstrass, Arzelá-Ascoli, Brouwer fixed point, etc.) completely understandable through my new-found topology. On the other hand, the definition of a derivative was still the same as ever, I could not see it nor the subsequent theorems "from high above" as with topological arguments.</p>
<p>But then a new hope (happy may 4th) came with a then distant but closely approaching subject, differential geometry. The prospect of "escaping" once again from the terrestrial concepts seemed very promising, so I decided to look ahead and open up a few books to see if I could finally look down on my old derivative from up top in the conceptual clouds. My expectation was that, just like topology had first to define a generalized "closeness structure" i.e. lay the grounds on which general continuous functions could be defined via open sets, I would now encounter the analogous "differentiable structure" (I had no idea what this should entail but I didn't either for topology so why not imagine it). And so it went: "oh, so you just... and then you take it to <span class="math-container">$\Bbb R^n$</span>... and you use <em>the same definition of differentiable</em>".</p>
<p>Why is this so? How come we're able to abstract continuity into definitions within the same set, but for differentiability, we have to "pass through" the reals? I realize that this really has to do with why we have to generalize in the first place, so what happens is that the respective generalizations have usefulness in the new contexts, hence the second point in my question statement.</p>
<p>Why I imagine this is plausible, a priori, is because there's a historical standard: start with the low-level definitions <span class="math-container">$\rightarrow$</span> uncover some properties <span class="math-container">$\rightarrow$</span> realize these are all you wanted anyhow, and redefine as that which possesses the properties. Certainly, derivatives have properties that can be just as well stated for slightly more general sets! (e.g. linearity, but of course this is far from enough). But then, we'll all agree that there's even been a lust for conducting the above process, <em>everywhere</em> possible, so maybe there are very strong obstructions indeed, which inhibit its being carried out in this case. If so, I should ask what these obstructions are, or how I should begin identifying them.</p>
<p>Thank you for reading this far if you have, I hope someone can give some insight (or just a reference would be great!).</p>
<p><sup>* If I'm being honest, before asking this I should really answer the question of what on earth I mean, precisely, by "a structure that doesn't appeal to another". First of all, I might come across a new definition that apparently doesn't use <span class="math-container">$\Bbb R$</span>, but is "isomorphic" to having done so (easy example: calling <span class="math-container">$\Bbb R$</span> a different name). Furthermore, I'm always inevitably appealing to (even naïve) set theory, the natural numbers, etc. without any fuss. So, if my qualms are to have a logical meaning, there should be a quantifiable difference in appealing to <span class="math-container">$\Bbb R$</span> vs. appealing to set theory and other preexisting constructs. If the respondent can remark on this, super-kudos (and if they can but the answer would be long and on the whole unrelated, say this and I'll post another question). </sup></p>
| <p>There "is" a way, since in algebraic geometry, we do not work over the real numbers in general, yet we use techniques inspired from differentiation all the time. It is not the way preferred by most differential geometry textbooks who stick to charts and differentiable structures, but it still works. </p>
<p>A ringed space $(X,\mathcal O_X)$ is a topological space together with a sheaf of rings $\mathcal O_X$ on it. A locally ringed space is a ringed space such that the stalks $\mathcal O_{X,p}$ are local rings for each $p \in X$. If $(X,\mathcal O_X)$ is a locally ringed space, we can define the <strong>cotangent space</strong> at $p$ via $\mathfrak m_{X,p}/\mathfrak m_{X,p}^2$ where $\mathfrak m_{X,p}$ is the unique maximal ideal of $\mathcal O_{X,p}$. This is a vector space over the field $k(p) \overset{def}= \mathcal O_{X,p}/\mathfrak m_{X,p}$, so we can define the tangent space as the dual $k(p)$-vector space to $\mathfrak m_{X,p}/\mathfrak m_{X,p}^2$, or in other words, the set of all linear maps $\mathfrak m_{X,p}/\mathfrak m_{X,p}^2 \to k(p)$. </p>
<p>The idea is that if you have a linear map $\mathfrak m_{X,p}/\mathfrak m_{X,p}^2 \to k(p)$, and you are given a "function" $f \in \mathfrak m_{X,p}$, the value of the linear map at $f$ should give you the "direction"al derivative of $f$. </p>
<p>Of course, this level of abstraction removes any reference to coordinate patches and such, so it is hard to see what's going on. To remove all the algebro-geometric nonsense and get a particular example, take a manifold $M$ (which is a topological space) and consider the sheaf $\mathcal O_M$ of $C^{\infty}$-functions on $M$, that is, if $U \subseteq M$ is an open set, $\mathcal O_M(U)$ is the set of smooth functions $U \to \mathbb R$. Then $\mathcal O_{M,p}$ consists of all germs of functions at $p$, and $\mathfrak m_{M,p}$ is the maximal ideal of those germs whose value at $p$ is zero. The ideal $\mathfrak m_{M,p}^2$ is the ideal of all finite sums of products of two functions in $\mathfrak m_{M,p}$, and in particular such functions always vanish with multiplicity $\ge 2$ (this is the product rule for differentiation). It is usually shown that the dual of $\mathfrak m_{M,p}/\mathfrak m_{M,p}^2$ corresponds to the space of all derivations at $p$ ; note that in this case, we have $k(p) \simeq \mathbb R$, so that the linear maps $\mathfrak m_{M,p}/\mathfrak m_{M,p}^2 \to k(p)$ actually take values in the real numbers. </p>
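<p>As a concrete sanity check (my own, not in the original answer): take $M = \mathbb R$ and $p = 0$. By Hadamard's lemma, every smooth germ vanishing at $0$ factors through $x$:</p>

```latex
f(0) = 0 \;\Longrightarrow\; f(x) = x\,g(x), \quad g \text{ smooth},\ g(0) = f'(0),
\qquad\text{so}\qquad
\mathfrak m_{M,0} = (x), \qquad
\mathfrak m_{M,0}/\mathfrak m_{M,0}^2 = \mathbb R \cdot [x].
```

<p>The dual pairing sends a germ class $[f]$ to $f'(0)$, recovering the usual one-dimensional tangent space of $\mathbb R$ at $0$.</p>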
<p>Of course, the sheaf $\mathcal O_M$ needs to be defined, and this is usually done at some point using coordinate patches ; you do not get away from dealing with charts when doing differential geometry. Sure, at some point you stop using them if you deal with coordinate-free approaches, but they lie somewhere in the treatment of the theory. My point is that the ideas of differentiation do generalize, and this is just a quick glance of how it does ; algebraic geometry takes "differentiation" to a whole new level. </p>
<p>As continuity has been weakened and played with in many different ways (weak/weak-star convergence in functional analysis, semi-upper-lower-continuity in optimization, etc.) and differentiation too (Fréchet, Gâteaux, directional derivatives, semi-directional derivatives, Dini derivatives), you should always remember one thing: yes, generalizations are useful, but you should never forget <em>why</em> you wanted to generalize in the first place. It can be either because you want a clearer point of view on a certain class of problems that you cannot solve, or because you want to build stronger tools; but generalizing for the sake of generalizing usually leads to being confused and losing intuition, which is not what you want. To this day I am still scared of the use of Dini derivatives...</p>
<p>Hope that helps,</p>
| <p>It seems some of the frustration is with having to define manifolds from the inside out, by pasting together coordinate charts. It's possible to axiomatize a notion of "smooth object" in such a way that we can work with smooth functions on analogues of manifolds, function spaces, products, quotients, intersections, infinitesimally small spaces, and anything else you (or, at least, I) can imagine, and thus define spaces from the outside in, never using coordinates or charts in the definitions. This theory is called synthetic differential geometry (SDG.) </p>
<p>SDG gives a very different definition of smooth function than one uses classically, because one works in a logical framework in which it's impossible to even <em>define</em> non-smooth functions. So, synthetically, a smooth function is...any function between spaces in a model of SDG! This doesn't mean much until you know about what it takes to be a model of SDG. What SDG axiomatizes is what an object $R$ has to do to be able to act like a smooth line to do some differential geometry: so it has some elements which square to zero on which every function is linear (which leads to the definition of derivative,) a much bigger collection of nilpotent elements on which every function is defined by its Taylor series, every function on it has an antiderivative, etc...Then the other smooth objects all have their maps determined, at least locally, by their relationship to this "line" and its infinitesimally small subspaces.</p>
<p>So perhaps this sounds a bit too much like basing manifolds on $\mathbb{R}$ to you. I can assure you that $R$ can be <em>wildly</em> more exotic than $\mathbb{R}$, if that helps; and again objects in SDG are not by any means constructed locally out of finite products of $R$. A more compelling objection is probably that you've never heard of this, which is because, as you can already tell, it's very different from the geometry you know; and to actually develop the foundations requires large amounts of category theory that most geometers don't want to learn. However, the good news is that it's possible to do all of classical differential geometry within this framework, so that one can, at least in principle, think in synthetic terms and then translate proofs into language more familiar to the community.</p>
|
number-theory | <p>In 1847 <a href="http://en.wikipedia.org/wiki/Gabriel_Lam%C3%A9">Lamé</a> announced that he had proven Fermat's Last Theorem. This "proof" was based on the unique factorization in $\mathbb{Z}[e^{2\pi i/p}]$. However, <a href="http://en.wikipedia.org/wiki/Ernst_Kummer">Kummer</a> proved that when $p=23$ we do not have unique factorization, and in fact Kummer had proven this 3 years earlier, in 1844.</p>
<p>My question is: how can you prove that $\mathbb{Z}[\zeta_p]$ does not have unique factorization when $p = 23$, but does for $p < 23$?</p>
| <p>Two remarks.</p>
<ol>
<li><p>Class field theory can be eliminated from Paul's answer: if the degree $(L:K)$ of an extension of number fields is coprime to the class number $h_K$ of $K$, then $h_K \mid h_L$.
The proof follows by observing that the transfer of ideal classes $j: Cl(K) \longrightarrow Cl(L)$ composed with the relative norm is just raising to the $(L:K)$-th power in the class group of $K$. Since the relative degree in the example at hand is $11$, this remark applies here.</p></li>
<li><p>Showing that the class number is $1$ for $p < 23$ is difficult. Kummer proved that the ring of integers inside the 5th roots of unity is Euclidean, but this method becomes impossible to use for $p > 7$. Kummer showed that the class number is the product of two factors $h^-$ and $h^+$, and gave a simple formula for $h^-$. Using this result it is easy to show that $h^- = 1$ for $p < 23$. Showing that the factor $h^+$ is trivial is much more difficult. You can get an idea of the difficulty by consulting Schoof's calculations in the appendix of the 2nd edition of Washington's book.</p></li>
</ol>
| <p>Surely not reiterating history, but giving a semi-conceptual reason for the non-PID-ness of the 23rd cyclotomic field: the field $k=\mathbb Q(\sqrt{-23})$ has class number 3, as follows not-too-labor-intensely from the class-number formula for complex-quadratic fields. Now we grant ourselves something a bit serious, but 100 years old, about the "Hilbert classfield": the maximal unramified abelian extension of a number field has Galois group isomorphic to the ideal class group. Thus, there is an unramified cubic abelian extension $H$ of $k$. Since the 23rd cyclotomic field $F$ is of degree 11 over $k$ (e.g., by Gauss sum considerations), that cyclotomic field does not contain $H$. Thus, the compositum $HF$ is unramified (by multiplicativity of ramification indices) cubic over $F$. Again by the Hilbert-classfield result, this implies that the class number of $F$ is divisible by $3$.</p>
|
game-theory | <p>What are some examples of mathematical games that can take an unbounded amount of time (i.e. there are starting positions such that for any number $n$, there is a line of play lasting more than $n$ moves) but are finite (every line of play eventually ends)? Also, it would be nice if the game were recreational in nature (by this I basically mean it's nontrivial and could conceivably be enjoyed by humans, at least in theory). One answer per game please, but edit variants into the same answer.</p>
<p>This is a big-list question, so many answers would be appreciated.</p>
| <p>One example is <a href="http://en.wikipedia.org/wiki/Sylver_coinage">Sylver Coinage</a>, played like so:</p>
<p>Players alternate naming positive integers ($1, 2, 3, 4,\dots$). The rule is that no number may be expressible as a sum (with repetitions allowed) of previously named numbers. For example, say $\{4, 8, 5, 7\}$ was previously said ($8$ would have had to be said before $4$, since $4+4=8$). Then $6$ could be said, but $14$ could not, for it is equal to $5+5+4$ (or $7+7$). The player who says $1$ loses! (If you prefer the normal play convention, outlaw the number $1$, and then the last player who can move wins!)</p>
<p>This game is unbounded (consider an opening along the lines of $n, n-1, n-2, \dots, n/2$), but, by a theorem of arithmetic (a set of positive integers with greatest common divisor $1$ leaves only finitely many numbers unrepresentable as sums of its elements), every play is finite. (If you have trouble even seeing how this game could end, note that once $2$ and $3$ are said, every even number and every odd number greater than $3$ is illegal.)</p>
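<p>The legality check, "is $n$ a sum (with repetition) of numbers already said?", is a small dynamic program. A sketch of my own, not from the answer:</p>

```python
def representable(n, said):
    """True if n can be written as a sum (repetitions allowed) of numbers in `said`."""
    reach = [True] + [False] * n  # reach[m]: can we hit the total m?
    for m in range(1, n + 1):
        reach[m] = any(s <= m and reach[m - s] for s in said)
    return reach[n]

said = [4, 8, 5, 7]
print(representable(6, said), representable(14, said))  # False True  (6 is legal, 14 is not)
# Once 2 and 3 are said, everything except 1 is representable:
print(all(representable(m, [2, 3]) for m in range(2, 50)))  # True
```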
<p>Does anyone know any variants (say, based on mathematical structures other than the positive integers? Ordinals, maybe?)</p>
| <p>One example is the ring game, which is defined <a href="https://mathoverflow.net/questions/93276/a-game-on-noetherian-rings">here</a> - in essence, the game (or a slight variant thereon) can be described as:</p>
<blockquote>
<p>Start with a Noetherian ring $R$. At each turn, replace $R$ with a proper quotient thereof. If a player can make no legal moves (i.e. $R$ is a field, and thus has no proper quotients), that player loses.</p>
</blockquote>
<p>One can notice that if we play this on $\mathbb Z$, then this is equivalent to the game $*\omega$, since the first move takes it to $\mathbb Z/n\mathbb Z$ which is clearly equivalent to $*k$ where $k$ is the number of (not necessarily distinct) prime factors of $n$. However, as evidenced by <a href="https://math.stackexchange.com/questions/158924/the-ring-game-on-kx-y-z">certain unanswered questions</a>, the structure of the game can get fairly complicated when we consider more complicated rings. One could generalize to replace "ring" with any algebraic structure they desire - like one could play on groups with no infinite ascending chains of normal subgroups.</p>
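<p>If, as noted above, the position $\mathbb Z/n\mathbb Z$ is equivalent to $*k$ with $k$ the number of prime factors of $n$ counted with multiplicity, then the heap size is easy to compute. A small helper of my own, for illustration:</p>

```python
def big_omega(n):
    """Omega(n): number of prime factors of n, counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:  # whatever is left over is itself prime
        count += 1
    return count

# Z/12Z corresponds to *3 since 12 = 2 * 2 * 3; Z/7Z to *1; Z/360Z to *6.
print(big_omega(12), big_omega(7), big_omega(360))  # 3 1 6
```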
|
number-theory | <p>Let us calculate the remainder after division of $27$ by $10$.</p>
<p>$27 \equiv 7 \pmod{10}$</p>
<p>We have $7$. So let's calculate the remainder after division of $27$ by $7$.</p>
<p>$ 27 \equiv 6 \pmod{7}$</p>
<p>Ok, so let us continue with $6$ as the divisor...</p>
<p>$ 27 \equiv 3 \pmod{6}$</p>
<p>Getting closer...</p>
<p>$ 27 \equiv 0 \pmod{3}$</p>
<p>Good! We have finally reached $0$ which means we are unable to continue the procedure. </p>
<p>Let's make a function that counts the modulo operations we need to perform until we finally arrive at $0$.</p>
<p>So we find some remainder $r_{1}$ after division of <em>some</em> $a$ by <em>some</em> $b$, then we find the remainder $r_{2}$ after division of $a$ by $r_{1}$, and we repeat the procedure until we find an index $i$ such that $r_{i} = 0$.</p>
<p>Therefore, let $$ M(a, b) = i-1$$</p>
<p>for $a, b \in \mathbb{N}, b \neq 0 $
(I like to call it the "modulity of $a$ by $b$", hence <strong>M</strong>)</p>
<p>For our example: $M(27, 10) = 3$.</p>
<p>Notice that $M(a, b) = 0 \Leftrightarrow b|a $ (this is why $i-1$ feels nicer to me than just $i$)</p>
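The iteration above is short enough to sketch directly; here is a minimal Python version of $M(a,b)$ (the original plots were presumably generated with equivalent code in another language):

```python
def modulity(a, b):
    """M(a, b): starting from r1 = a mod b, iterate r -> a mod r and
    return the number of nonzero remainders produced, i.e. i - 1."""
    steps = 0
    r = a % b
    while r != 0:
        steps += 1
        r = a % r  # each remainder is strictly smaller, so this terminates
    return steps
```

For the worked example, `modulity(27, 10)` returns `3`, and `modulity(27, 3)` returns `0`, matching $M(a,b)=0 \Leftrightarrow b\mid a$.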
<p>Recall what happens if we put a white pixel at each $(x, y)$ such that $y|x$:
<a href="https://i.sstatic.net/jMnYy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jMnYy.png" alt="Symphony of numbers"></a>
This is also the plot of $M(x, y) = 0$.
(the image is reflected over x and y axes for aesthetic reasons. $(0, 0)$ is exactly in the center)</p>
<p>What we see here is the common <em>divisor plot</em> that's already been studied extensively by prime number researchers.</p>
<p>Now here's where things start getting interesting:</p>
<p>What if we put a pixel at each $(x, y)$ such that $M(x, y) = 1$?
<a href="https://i.sstatic.net/Zzq4a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Zzq4a.png" alt="enter image description here"></a>
Looks almost like the <em>divisor plot</em>... but take a closer look at the rays. It's like copies of the divisor plot are growing on each of the original lines!</p>
<p>How about $M(x, y) = 2$?
<a href="https://i.sstatic.net/Y5wfn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y5wfn.png" alt="enter image description here"></a>
Copies are growing on the copies!
Note that I do not overlay any of the images; I just follow this single equation.</p>
<p>Now here is my favorite.
Let us determine luminosity ($0 - 255$) of a pixel at $(x, y)$ by the following equation:
$$255 \over{ M(x,y) + 1 }$$
(it is therefore full white whenever $y$ divides $x$, half-white if $M(x, y) = 1$, and so on)
<a href="https://i.sstatic.net/NcbXl.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NcbXl.jpg" alt="enter image description here"></a></p>
<p>The full resolution version is around 35 MB, so I couldn't upload it here (I totally recommend viewing it at 1:1):
<a href="https://drive.google.com/file/d/0B_gBQSJQBKcjakVSZG1KUVVoTmM/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/0B_gBQSJQBKcjakVSZG1KUVVoTmM/view?usp=sharing</a></p>
<p>What strikes me the most is that some black stripes appear in the gray area and they most often represent prime number locations.</p>
<h2>Trivia</h2>
<ul>
<li><p>The above plot with and without prime numbers marked with red stripes:</p>
<p><a href="https://i.sstatic.net/FT3JA.jpg" rel="nofollow noreferrer">https://i.sstatic.net/FT3JA.jpg</a></p>
<p><a href="https://i.sstatic.net/kSrNB.png" rel="nofollow noreferrer">https://i.sstatic.net/kSrNB.png</a></p></li>
<li><p>The above plot considering only prime $x$:</p>
<p><a href="https://i.sstatic.net/JWawM.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JWawM.jpg" alt="http://i.imgur.com/3iNqL3F.jpg"></a></p>
<p>Formula: $255 \over{ M(p_{x},y) }$ (note that I do not add $1$ to the denominator, because then it would be full white only at $y$ equal to $1$ or the prime itself. Therefore, the pixel is fully white when $p_{x} \bmod y = 1$.)</p>
<p>Full 1:1 resolution: <a href="https://drive.google.com/file/d/0B_gBQSJQBKcjTWMzc3ZHWmxERjA/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/0B_gBQSJQBKcjTWMzc3ZHWmxERjA/view?usp=sharing</a></p>
<p>Interestingly, these modulities form a divisor plot of their own. </p></li>
<li><p>Notice that for $M(a, b) = i-1$, the last nonzero remainder $r_{i-1}$ is either $1$ or a divisor of $a$ (one which is neither $1$ nor $a$).</p>

<p>I put a white pixel at each $(x, y)$ such that, for $M(x, y) = i - 1$, we have $r_{i-1}\neq 1 \wedge r_{i-1} | x$ (the iteration before last yields a remainder that divides $x$ and is not $1$, the uninteresting case).</p>
<p><a href="https://i.sstatic.net/9Ml5s.png" rel="nofollow noreferrer">https://i.sstatic.net/9Ml5s.png</a></p>
<p>It is worth noting that the growth of $M(a, b)$ is rather slow, and so if we could discover a rule for picking a suitable $b$ that most often leads to encountering a proper factor of $a$, we would have a primality test that works really fast (it would be $O(M(a, b))$, because we would just need to calculate this $r_{i-1}$).</p></li>
<li><p>Think of $M'(a, b)$ as the function that, at each step, computes $M(a, b)$ instead of $a$ mod $b$, iterating until a zero is found.
These two are plots of $M'''(x, y)$, with and without primes marked:</p>
<p><a href="https://i.sstatic.net/yfOyY.png" rel="nofollow noreferrer">https://i.sstatic.net/yfOyY.png</a></p>
<p><a href="https://i.sstatic.net/dagbd.png" rel="nofollow noreferrer">https://i.sstatic.net/dagbd.png</a></p></li>
<li><p>Plot of $M(x, 11)$, enlarged 5 times vertically:</p>
<p><a href="https://i.sstatic.net/ykb3B.png" rel="nofollow noreferrer">https://i.sstatic.net/ykb3B.png</a></p>
<p>I can't spot any periodicity in the first 1920 values, even though the modulus is just 11.</p>
<p>For comparison, plot of $x$ mod $11$ (1:1 scale):</p>
<p><a href="https://i.sstatic.net/viKAU.png" rel="nofollow noreferrer">https://i.sstatic.net/viKAU.png</a></p></li>
<li><p>As has been pointed out in the comments, subsequent iterations of $M(a, b)$ look very much like the <a href="https://en.wikipedia.org/wiki/Euclidean_algorithm" rel="nofollow noreferrer">Euclidean algorithm</a> for finding greatest common divisors by repeated modulo. A strikingly similar picture can be obtained if at each $(x, y)$ we plot the number of steps of $gcd(x, y)$: </p>
<p><a href="https://i.sstatic.net/E4OmH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E4OmH.png" alt="enter image description here"></a></p>
<p>I've also found a similar picture <a href="https://en.wikipedia.org/wiki/Euclidean_algorithm#Algorithmic_efficiency" rel="nofollow noreferrer">on Wikipedia</a>:
<a href="https://i.sstatic.net/d4LoE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d4LoE.png" alt="enter image description here"></a></p>
<p>This is basically the plot of algorithmic efficiency of $gcd$.</p>
<p>Somebody even drew a <a href="https://math.stackexchange.com/a/311527/94156">density plot here on stackexchange</a>.</p>
<p><strong>The primes, however, are not so clearly visible in GCD plots</strong>. Overall, they seem more orderly and <strong>stripes do not align vertically</strong> like they do when we use $M(a, b)$ instead.</p>
<p>Here's a convenient comparative animation between the <em>GCD timer</em> (complexity plot) and my modulity function ($M(x, y)$). Best viewed at 1:1 zoom. $M(x, y)$ <strong>appears to be different in nature from Euclid's GCD algorithm.</strong></p>
<p><a href="https://i.sstatic.net/FsQFp.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FsQFp.gif" alt="enter image description here"></a></p></li>
</ul>
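The side-by-side comparison in the animation can be reproduced by counting steps of both procedures; a sketch, where one "step" means one division (a convention I am assuming):

```python
def euclid_steps(a, b):
    """Division steps of Euclid's algorithm: both arguments change each step."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps

def modulity(a, b):
    """M(a, b): here the dividend a stays fixed between divisions."""
    steps, r = 0, a % b
    while r:
        steps, r = steps + 1, a % r
    return steps

# The two counts disagree already at (27, 10): Euclid replaces the dividend
# at every step, while M keeps dividing the same a.
```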
<h2>Questions</h2>
<ul>
<li>Where is $M(a, b)$ used in mathematics? </li>
<li>Is it already named somehow?</li>
<li>How could one estimate the growth of $M(a, b)$ in relation to both $a$ and $b$, or with just $a$ increasing?</li>
<li>What interesting properties could $M(a, b)$ possibly have and could it be of any significance to number theory?</li>
</ul>
| <p>$\newcommand{\Natural}{\mathbb{N}}$
$\newcommand{\Integer}{\mathbb{Z}}$
$\newcommand{\Rational}{\mathbb{Q}}$
$\newcommand{\Real}{\mathbb{R}}$
$\newcommand{\abs}[1]{\left\vert#1\right\vert}$
$\newcommand{\paren}[1]{\left(#1\right)}$
$\newcommand{\brac}[1]{\left[#1\right]}$
$\newcommand{\set}[1]{\left\{#1\right\}}$
$\newcommand{\seq}[1]{\left<#1\right>}$
$\newcommand{\floor}[1]{\left\lfloor#1\right\rfloor}$
$\DeclareMathOperator{\GCD}{GCD}$
$\DeclareMathOperator{\TL}{TL}$</p>
<p>Here are some rediscovered (but fairly old) connections between the analysis
of Euclid's GCD algorithm, the Farey series dissection of the continuum,
and continued fractions. Some of these topics are treated in chs. 3 and 10 of
(1) by Hardy and Wright. A long time ago the author of this response asked this question in the newsgroup sci.math, and this is a collected summary, with some new findings, following the main responder's analysis, that of Gerry Myerson. Additional contributions and thanks to Dave L. Renfro, James Waldby, Paris Pamfilos, Robert Israel, Herman Rubin and Joe Riel. References may be a little mangled.</p>
<p><strong>Introduction</strong></p>
<p>When studying the asymptotic density distribution of $\phi(n)/n$ or of $\phi(n)$ versus $n$, both graphs display certain <em>trend lines</em> around which many positive integers accumulate. On the $\phi(n)/n$ graph they are horizontal, while on the $\phi(n)$ versus $n$ graph they have varying slopes $r$, with $0\le r \le 1$. This density distribution has been studied extensively. Schoenberg, for example in [9, pp. 193-194] showed that $\phi(n)/n$ has a continuous distribution function $D(r)$ (also in [4, p.96]). Later in [10, p.237] he proved that under fairly general conditions $D(r)$ exists for a multiplicative function, leading to necessary and sufficient conditions for the existence and continuity of such a $D$ for an additive arithmetical function. Erdos showed ([3, p. 96]) that $D(r)$ for $\phi(n)/n$ is purely singular, hence trend lines $rx$ exist for almost any $r\in [0,1]$. Weingartner in [11, p. 2680] and Erdos in [2, p. 527] derived explicit bounds for the number of integers such that $\phi(n)/n\le r$. Here we first briefly try to explain those trend lines and then we present a theorem which suggests that they follow certain fractal patterns related to the Farey series $\mathfrak{F}_n$ of order $n$, which appear in a graph that times the asymptotic performance of the Euclidean GCD algorithm. Because the timing of the Euclidean GCD algorithm is involved, this theorem can ultimately be used to speed up factorization by trial-and-error searches. Additionally, these trend lines are also connected with a certain function which plays a role in tetration.</p>
<p><strong>Notation</strong></p>
<p>To avoid complicated notation, we always notate primes with $p$, $q$ and $p_i$, $q_i$, with $i\in I=\set{1,2,\ldots,t}$, $t\in\Natural$ an appropriate finite index set of naturals. Naturals will be notated with $m$, $n$, $k$, etc. Reals will be notated with $x$, $y$, $r$, $a$, $\epsilon$, etc. $\floor{x}$ denotes the familiar floor function. Functions will be notated with $f$, $g$, etc. The Greatest Common Divisor of $m$ and $n$ will be denoted with $\GCD(m,n)$. When we talk about the $\GCD$ algorithm, we explicitly mean the <em>Euclidean</em> algorithm for $\GCD(m,n)$.</p>
<p><strong>The Trend Lines in the Asymptotic Distribution of $\phi(n)$</strong></p>
<p>For an introduction to Euler's $\phi$ function and for some of its basic properties the reader may consult [5, p. 52], [6, p. 20] and [1, p. 25]. Briefly, $\phi(n)$ counts the number of positive integers $k$ with $k\le n$ and $\GCD(n,k)=1$. For calculations with $\phi$, the author used the Maple package (8), although for $n\in\Natural$ one can use the well-known identity,</p>
<p>$$
\begin{equation}\label{eq31}
\phi(n)=n\cdot \prod_{p|n}\paren{1-\frac{1}{p}}
\end{equation}
$$</p>
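The identity translates directly into code; a minimal Python sketch (the author's calculations used Maple, which is not reproduced here), factoring $n$ by trial division:

```python
def phi(n):
    """Euler's totient via phi(n) = n * prod over p | n of (1 - 1/p)."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p          # multiply result by (1 - 1/p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:                              # one prime factor > sqrt(m) may remain
        result -= result // m
    return result
```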
<p>The graph of $\phi(n)$ as a function of $n\in\set{1,\ldots,10000}$, showing some of the trend lines, is shown next in Fig. 1.</p>
<p><a href="https://i.sstatic.net/kbmma.jpg" rel="noreferrer"><img src="https://i.sstatic.net/kbmma.jpg" alt="fig1"></a></p>
<p>$\phi(n)$ as a function of $n$, for $n\in\set{1,\ldots,10000}$</p>
<p>Trying to fit the trend lines in figure 1 experimentally, it looks as though the lines might be related to specific functions.</p>
<p>For example the uppermost line looks like the line $f(x)\sim x-1$, since $\phi(p)=p-1$ for $p$ prime. The second major trend looks like the line $f(x)\sim x/2$ although this is far from certain. The next major trends look like they are $f(x)\sim x/3$ and $f(x)\sim 2x/3$.</p>
<p>Although the uppermost line contains primes, it also contains other numbers, such as for example $n=pq$ with $p$ and $q$ both large primes. In fact, if $n$ is any number all of whose prime factors are large then $\phi(n)$ will be relatively close to $n$. Let's see what happens exactly.</p>
<p><strong>Theorem 3.1</strong>: The non-prime trend lines on the graph of $\phi(n)$ versus $n$ follow the functions $f(r,s,x)=rx/s$, with $r=\prod_{p|n}(p-1)/(q-1)$, $s=\prod_{p|n}p/q$, where $q$ is the largest prime $q|n$.</p>
<p><strong>Proof:</strong> We first give some examples. For $n=2^kq$, $q>2$, $\phi(n)=n(1-1/2)(1-1/q)=(n/2)(q-1)/q$. For large $q$, $t=(q-1)/q\sim 1$, hence also for large $n$, $\phi(n)\sim n/2$, consequently these numbers follow approximately the trend $f(1,2,x)=x/2$.</p>
<p>For $n=3^kq$, $q>3$, $\phi(n)=n(1-1/3)(1-1/q)=(2n/3)(q-1)/q$. For large $q$, again $t=(q-1)/q\sim 1$, hence also for large $n$, $\phi(n)\sim 2 n /3$, consequently these numbers follow the trend $f(2,3,x)=2x/3$.</p>
<p>Generalizing, for $n=p^kq$, $q>p$, $\phi(n)=n(1-1/p)(1-1/q)=(p-1)/p(q-1)/q$. For large $q$, again $t=(q-1)/q\sim 1$, hence also for large $n$, $\phi(n)\sim (p-1) n /p$, consequently these numbers follow the trend $f(p-1,p,x)=(p-1)x/p$.</p>
<p>For $n=2^k3^lq$, $q>3$, $\phi(n)=n(1-1/2)(1-1/3)(1-1/q)=(n/2)(2/3)(q-1)/q=(n/3)(q-1)/q$, hence again for large $n$, $\phi(n) \sim n/3$, consequently these numbers follow the trend $f(1,3,x)=x/3$.</p>
<p>For $n=2^k5^lq$, $q>5$, $\phi(n)=n(1-1/2)(1-1/5)(1-1/q)=(2n/5)(q-1)/q$, hence the trend is $f(2,5,x)=2x/5$.</p>
<p>For $n=3^k5^lq$, the trend will be $f(8,15,x)=8x/15$.</p>
<p>For $n=2^k3^l5^mq$, the trend will be $f(4,15,x)=4x/15$.</p>
<p>Generalizing, for $n=\prod p_i^{k_i}q$, $q>p_i$ the trend will be:</p>
<p>$$
\begin{equation}\label{eq32}
f\paren{\prod_i (p_i-1),\prod_i p_i,x}=
\prod_i \paren{1-\frac{1}{p_i}}x
\end{equation}
$$</p>
<p>and the theorem follows.</p>
<p>In figure 2 we present the graph of $\phi(n)$ along with some trend lines $\TL$:</p>
<p>$$
\begin{equation*}
\begin{split}
\TL&=\set{x-1,x/2,2x/3,4x/5}\\
&\cup \set{x/3,6x/7,2x/5}\\
&\cup \set{3x/7,8x/15,4x/7,4x/15,24x/35}\\
&\cup\set{2x/7,12x/35,16x/35,8x/35}
\end{split}
\end{equation*}
$$</p>
<p><a href="https://i.sstatic.net/DbwY3.jpg" rel="noreferrer"><img src="https://i.sstatic.net/DbwY3.jpg" alt="fig2"></a></p>
<p>$\phi(n)$ combined with the trend lines $f_k(x)\in \TL$, $k\in\set{1,\ldots,16}$</p>
<p>The trend lines correspond to $n$ having the following factorizations:</p>
<p>$$
\begin{equation*}
\begin{split}
F &\in\set{q,2^kq,3^kq,5^kq}\\
&\cup\set{2^{k_1}3^{k_2}q,7^kq,2^{k_1}5^{k_2}q}\\
&\cup\set{2^{k_1}7^{k_2}q,3^{k_1}5^{k_2}q,3^{k_1}7^{k_2}q,2^{k_1}3^{k_2}5^{k_3}q,5^{k_1}7^{k_2}q}\\
&\cup\set{2^{k_1}3^{k_2}7^{k_3}q,2^{k_1}5^{k_2}7^{k_3}q,3^{k_1}5^{k_2}7^{k_3}q,2^{k_1}3^{k_2}5^{k_3}7^{k_4}q}
\end{split}
\end{equation*}
$$</p>
<p>We now proceed to investigate the nature of these trend lines. The <em>Farey series</em> $\mathfrak{F}_n$ of order $n\ge 2$ ([5, p.23]), is the ascending series of irreducible fractions between 0 and 1 whose denominators do not exceed $n$. Thus, $h/k\in \mathfrak{F}_n$, if $0\le h \le k\le n$, $\GCD(h,k)=1$. Individual terms of a specific Farey series of order $n\ge 2$ are indexed by $m\ge 1$, with the first term being 0 and the last 1. Maple code for creating the Farey series of order $n$ is given in the Appendix.</p>
<p><strong>Theorem 3.2</strong>: The Farey series $\mathfrak{F}_n$ of order $n$ satisfies the following identities:</p>
<p>$$
\begin{equation*}
\begin{split}
\abs{\mathfrak{F}_n}&=\abs{\mathfrak{F}_{n-1}}+\phi(n)\\
\abs{\mathfrak{F}_n}&=1+\sum_{m=1}^n\phi(m)
\end{split}
\end{equation*}
$$</p>
<p><strong>Proof</strong>: By induction on $n$. $\mathfrak{F}_2=\set{0,1/2,1}$, hence $\abs{\mathfrak{F}_2}=3$, since there are 3 irreducible fractions of order $n=2$. Note that the irreducible fractions of order $n$ are necessarily equal to the irreducible fractions of order $n-1$ plus $\abs{\set{k/n\colon k\le n,\GCD(k,n)=1}}=\phi(n)$, and the first identity follows. The second identity follows as an immediate consequence of the first identity and induction, and the theorem follows.</p>
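Theorem 3.2 is easy to check numerically; a brute-force Python sketch (the Maple code referred to in the Appendix is not reproduced here):

```python
from fractions import Fraction
from math import gcd

def phi(n):
    # brute-force totient, adequate for small n
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def farey(n):
    """The Farey series F_n: irreducible fractions h/k with 0 <= h <= k <= n,
    in ascending order (the set comprehension deduplicates reducible forms)."""
    return sorted({Fraction(h, k) for k in range(1, n + 1) for h in range(k + 1)})
```

For instance, $\abs{\mathfrak{F}_5}=11$ and the identity $\abs{\mathfrak{F}_n}=1+\sum_{m=1}^n\phi(m)$ holds for every small $n$ one cares to test.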
<p>In [5, p.23], we find the following theorem:</p>
<p><strong>Theorem 3.3</strong>: If $0<\xi<1$ is any real number, and $n$ a positive integer, then there is an irreducible fraction $h/k$ such that
$0<k\le n$ and $\abs{\xi-\frac{h}{k}}\le \frac{1}{k(n+1)}$.</p>
<p>We can now reformulate Theorem 3.1, which follows as a consequence of Theorem 3.3.</p>
<p><strong>Corollary 3.4</strong>: The trend lines on the graph of $\phi(n)$ versus $n$ follow the functions $g(n,m,x)=\mathfrak{F}_{n,m}\cdot x$.</p>
<p><strong>Proof</strong>: Note that for large $n=p^k$, $\phi(n)/n\to 1$. For large $n=\prod_i p_i^{k_i}$, $\phi(n)/n\to 1/\zeta(1)=0$. Putting $\xi= \phi(n)/n$, Theorem 3.3 guarantees the existence of an irreducible fraction $h/k$, and some $n$, such that $\phi(n)/n$ is close to a member $h/k$ of $\mathfrak{F}_n$, and the result follows.</p>
<p>The trend lines on the graph of $\phi(n)$ versus $n$ are completely (and uniquely) characterized by either description. For example, consider the factorizations $2^k3^l5^mq$, with $q>5$ and $k,l,m\ge0$. Then if $n=2^kq$, $\phi(n)/n\sim 1/2$, if $n=3^lq$, $\phi(n)/n\sim 2/3$, if $n=5^mq$, $\phi(n)/n \sim 4/5$, if $n=2^k3^lq$, $\phi(n)/n\sim 1/3$, if $n=2^k5^mq$, $\phi(n)/n\sim 2/5$, if $n=3^l5^mq$, $\phi(n)/n\sim 8/15\sim 2/3$, if $n=2^k3^l5^mq$, $\phi(n)/n\sim 4/15\sim 1/3$, all of which are close to members of $\mathfrak{F}_{5}=\set{0,1/5,1/4,1/3,2/5,1/2,3/5,2/3,3/4,4/5,1}$.</p>
<p>In figure 3 we present the graph of $\phi(n)$ along with $g(10,m,x)$.</p>
<p><a href="https://i.sstatic.net/tIxaR.jpg" rel="noreferrer"><img src="https://i.sstatic.net/tIxaR.jpg" alt="fig3"></a></p>
<p>$\phi(n)$ combined with the functions $g(10,m,x)$</p>
<p><strong>The Asymptotic $\epsilon$-Density of the Trend Lines of $\phi(n)$</strong></p>
<p>We will need the following counting theorem.</p>
<p><strong>Theorem 4.1</strong>: If $i\in\Natural$, $L=\set{a_1,a_2,\ldots,a_i}$ a set of distinct numbers $a_i\in\Natural$ and $N\in\Natural$, then the number of numbers $n\le N$ of the form $n=\prod_i a_i^{k_i}$ for some $k_i\in\Natural$, is given by $S(L,N)$, where,</p>
<p>$$
\begin{equation*}
S(L,N)=
\begin{cases}
\floor{\log_{a_{\abs{L}}}(N)} &\text{, if $\abs{L}=1$}\\
\sum\limits_{k=1}^{\floor{\log_{a_{\abs{L}}}(N)}}S\paren{L\setminus\set{a_{\abs{L}}},\floor{\frac{N}{a_{\abs{L}}^k}}}&\text{, otherwise}
\end{cases}
\end{equation*}
$$</p>
<p><strong>Proof</strong>: We use induction on $i=\abs{L}$. When $i=1$, the number of numbers $n\le N$ of the form $n=a^k$ is exactly $\floor{\log_a(N)}$. Now assume that the expression $S(L,N)$ gives the number of $n\le N$, with $n=\prod_i a_i^{k_i}$, when $i=\abs{L}>1$. We are interested in counting the number of $m\le N$, with $m=\prod_i a_i^{k_i}a_{i+1}^{k_{i+1}}$. If we divide $N$ by any power $a_{i+1}^k$, we get numbers of the form $n$, which we have already counted. The highest power of $a_{i+1}$ we can divide $N$ by, is $a_{i+1}^k$, with $k=\floor{\log_{a_{i+1}}(N)}$, hence the total number of such $m$ is exactly,</p>
<p>$$
\sum_{k=1}^{\floor{\log_{a_{i+1}}(N)}}S(L\setminus\set{a_{i+1}},\floor{N/a_{i+1}^k})=S(L\cup \set{a_{i+1}},N)
$$</p>
<p>so the expression is valid for $i+1=\abs{L\cup \set{a_{i+1}}}$ and the theorem follows.</p>
<p>In the Appendix we provide two Maple procedures which can count the number of such $n$. The first using brute force, the second using Theorem 4.1. The results are identical.</p>
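The two Maple procedures are not reproduced here, but the same pair of computations can be sketched in Python. `count_smooth` follows the recursion of Theorem 4.1 with integer loops in place of floating-point logarithms (an implementation choice of mine, to avoid rounding error), and the brute-force version checks it:

```python
def count_smooth(primes, N):
    """S(L, N): count n <= N of the form prod a_i^{k_i} with every k_i >= 1."""
    *rest, a = primes
    total, power = 0, a
    if not rest:
        while power <= N:          # base case |L| = 1: count a, a^2, ... <= N
            total, power = total + 1, power * a
        return total
    while power <= N:              # recurse on N // a^k for k = 1, 2, ...
        total += count_smooth(rest, N // power)
        power *= a
    return total

def count_smooth_brute(primes, N):
    """Direct check: n must be divisible by every listed prime and nothing else."""
    count = 0
    for n in range(2, N + 1):
        m = n
        for p in primes:
            if m % p:
                break
            while m % p == 0:
                m //= p
        else:
            count += m == 1
    return count
```

Both agree with the counts quoted later in the text, e.g. $S(\set{3},10000)=8$ and $S(\set{2,3,7},10000)=43$.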
<p>We define now the asymptotic $\epsilon$-density of a line $f(x)=rx$, with $0<r<1$.</p>
<p><strong>Definition 4.2</strong>: Given $r\in\Real$, $0<r<1$ and $N\in\Natural$, then the asymptotic $\epsilon$-density of the line $f(x)=rx$ at $n$ is,</p>
<p>$$
\begin{equation*}
D_{\epsilon}(N,f(x))=\frac{\abs{\set{n\le N\colon \abs{1-\frac{\phi(n)}{f(n)}}\le\epsilon}}}{N}
\end{equation*}
$$</p>
<p>Briefly, $D_{\epsilon}(N,f(x))$ counts the distribution of positive integers inside a strip of width $2\epsilon$ centered around the line $f(x)$, from 1 to $N$. If one wishes, one could alternatively interpret $\epsilon$-density as the difference $\abs{D(r_1)-D(r_2)}$, where $D(r)$ is Schoenberg's finite distribution function on $\phi(n)$ versus $n$ and $\epsilon=\abs{r_1-r_2}/2$. Following Definition 4.2, we now have,</p>
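Definition 4.2 translates directly into code; a slow but straightforward Python sketch (the totient is computed by brute force, and the function name `density` is mine):

```python
from math import gcd

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def density(N, r, eps):
    """D_eps(N, f) for the line f(x) = r * x: the fraction of n <= N whose
    totient lies within the relative strip |1 - phi(n)/(r*n)| <= eps."""
    hits = sum(1 for n in range(1, N + 1) if abs(1 - phi(n) / (r * n)) <= eps)
    return hits / N
```

Two sanity checks: with $r=1$ and $\epsilon=1$ every $n$ qualifies (since $0<\phi(n)/n\le 1$), while with $\epsilon=0$ only $n=1$ does.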
<p><strong>Theorem 4.3</strong>: The trend lines of $\phi$ are the lines of maximal $\epsilon$-density.</p>
<p><strong>Proof</strong>: Let $n\le N$ and $\epsilon> 0$ be given. Then, by the Fundamental theorem of Arithmetic (FTA) $n$ has a unique factorization $n=\prod_i p_i^{k_i}q^k$, for some $p_i$ and $q$, with $q>p_i$. Consider the set $K_{f(x)}=\set{n\le N\colon \abs{1-\phi(n)/f(n)}\le\epsilon}$. If $1/q\le\epsilon$ then, $\abs{1-\phi(n)/f(\prod_i (p_i-1),\prod_i p_i,n)}=\abs{1-(1-1/q)}=1/q\le\epsilon$, therefore $n$ belongs to $K_{f(r,s,x)}$, with $r=\prod_i (p_i-1)$ and $s=\prod_i p_i$. If $1/q>\epsilon$ then $n$ belongs to $K_{f(r(q-1),sq,x)}$. Hence, for each $n\le N$, $\phi(n)$ falls $\epsilon$-close to or on a unique trend line $f(r,s,x)$ for appropriate $r$ and $s$ and the theorem follows.</p>
<p>Theorem 4.3 can be reformulated in terms of Farey series $\mathfrak{F}_n$. It is easy to see that the functions $g(n,m,x)$ are <em>exactly</em> the lines of maximal $\epsilon$-density. This follows from the proof of theorem 3.3 ([5, p.31]): Because $\phi(n)/n$ always falls in an interval bounded by two successive fractions of $\mathfrak{F}_n$, say $h/k$ and $h'/k'$, it follows that $\phi(n)/n$ will always fall in one of the intervals</p>
<p>$$
\begin{equation*}
\paren{\frac{h}{k},\frac{h+h'}{k+k'}}, \paren{\frac{h+h'}{k+k'},\frac{h'}{k'}},
\end{equation*}
$$</p>
<p>Hence, $\phi(n)/n$ falls $\epsilon$-close to either $g(n,m,x)$, or $g(n,m+1,x)$, for sufficiently large $n$.</p>
<p>In figure 4 we present the 0.01-density counts of the trends $f_k(x)\in \TL$ for the sample space $\set{1,\ldots,10000}$.</p>
<p><a href="https://i.sstatic.net/zmil4.jpg" rel="noreferrer"><img src="https://i.sstatic.net/zmil4.jpg" alt="fig4"></a></p>
<p>$D_{0.01}(10000,f_k(x))$ for $f_k(x)\in \TL$, $k\in\set{1,\ldots,16}$</p>
<p>$\sum_{k=1}^{16}D_{0.01}(10000,f_k(x))\sim 0.5793$, so for $\epsilon=0.01$ approximately half the sample space falls onto the trend lines $\TL$. $D_{0.01}(10000,f_1(x))\sim 0.1205$, while the Prime Number Theorem (PNT) gives $P(n\ \text{prime})=1/\log(10000)\sim 0.10857$.</p>
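The PNT figure quoted above is easy to reproduce; a Python sketch with a simple sieve ($\pi(10000)=1229$, so the empirical prime density on this sample space is $0.1229$):

```python
from math import log

def prime_count(N):
    """pi(N) via a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (N + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(N ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, N + 1, p)))
    return sum(sieve)

pnt_estimate = 1 / log(10000)   # the PNT heuristic quoted in the text
```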
<p>We define now the asymptotic 0-density of a line $f(x)=rx$, with $0<r<1$.</p>
<p><strong>Definition 4.4</strong>: Given $r\in\Real$, $0<r<1$ and $N\in\Natural$, then the 0-density of the line $f(x)=rx$ is,</p>
<p>$$
\begin{equation*}
D_0\paren{N,f(x)}=\lim\limits_{\epsilon\to 0}D_{\epsilon}\paren{N,f(x)}
\end{equation*}
$$</p>
<p>The reader is welcome to try to generate different graphs for different densities (including 0-densities) using the Maple code in the Appendix. The 0-densities for $N=10000$ are shown in figure 5.</p>
<p><a href="https://i.sstatic.net/Fxy8q.jpg" rel="noreferrer"><img src="https://i.sstatic.net/Fxy8q.jpg" alt="fig5"></a></p>
<p>$D_0(10000,f_k(x))$ for $f_k(x)\in \TL$, $k\in\set{1,\ldots,16}$</p>
<p>We observe that the 0-densities of the trend lines of $m$ and $n$ look like they are roughly inversely proportional to the products $\prod_i p_i$ when $m$ and $n$ have the same number of prime divisors, although this appears to be false for at least one pair of trend lines (bins 3 and 12 on figure 5):</p>
<p>$2\cdot 3\cdot 7>3$, while $\abs{\set{n\le 10000\colon n= 2^k3^l7^m}}=S(\set{2,3,7},10000)=43\gt\abs{\set{n\le 10000\colon n=3^k}}=S(\set{3},10000)=8$</p>
<p>The trend line density is a rough indicator of the probability $n$ has one of the mentioned factorizations in $F$. The calculated densities of figures 4 and 5 of course concern only the sample space $\set{1,\ldots,N}$, with $N=10000$ and the primes we are working with, $\set{2,3,5,7}$. If $N$ (or the lower bound, 1) or the set of primes changes, these probabilities will have to be recalculated experimentally.</p>
<p>Then we have,</p>
<p><strong>Theorem 4.5</strong>: Given $N\in\Natural$, $r=\prod_i (p_i-1)$, $s=\prod_i p_i$, and $L=\set{p_1,p_2,\ldots,p_i}$, then</p>
<p>$$
\begin{equation*}
D_0\paren{N,f(r,s,x)}=\frac{S(L,N)}{N}
\end{equation*}
$$</p>
<p><strong>Proof</strong>: The $\epsilon$-density of the trend line $f(r,s,x)$ is $\abs{K}/N$, with $K$ being $\set{n\le N\colon \abs{1-\phi(n)/f(r,s,n)}\le\epsilon}$. As $\epsilon\to 0$, $K$ will contain exactly only those $n$ having the factorization $n=\prod_i p_i^{k_i}$ and the theorem follows by applying Theorem 4.1 with $a_i=p_i$.</p>
<p><strong>Remark</strong>: Note that the existence of Schoenberg's continuous distribution function $D(r)$ together with theorem 4.5 automatically guarantee the following:</p>
<p><strong>Corollary 4.6</strong>: Given $r$, $s$ and $L$ as in theorem 4.5 then</p>
<p>$$
\begin{equation*}
\lim_{N\to\infty}\lim_{\epsilon\to 0}D_{\epsilon}\paren{N,f(r,s,x)}=\lim_{N\to\infty}D_0\paren{N,f(r,s,x)}=\lim_{N\to\infty}\frac{S(L,N)}{N}<\infty
\end{equation*}
$$</p>
<p><strong>The Timing of the Euclidean GCD Algorithm</strong></p>
<p>The Euclidean GCD algorithm has been analyzed extensively (see <a href="https://i.sstatic.net/oSW2t.jpg" rel="noreferrer">6</a> for example). For two numbers with $m$ and $n$ digits respectively, it is known to be $O((m+n)^2)$ in the worst case if one uses the crude algorithm. This can be shortened to $O((m+n)\cdot \log(m+n)\cdot \log(\log(m+n)))$, and if one uses the procedure which gets the smallest absolute remainder, trivially the length of the series is logarithmic in $m+n$. So the worst time, using the crude algorithm, is $O((m+n)^2\cdot \log(m+n))$, with the corresponding bound for the asymptotically better cases. It has been proved by Gabriel Lamé that the worst case occurs when $m$ and $n$ are successive Fibonacci numbers.</p>
<p>Using the Maple code on the Appendix, in figure 6 we show the timing performance graph of the Euclidean GCD algorithm as a function of how many steps it takes to terminate for integers $m$ and $n$, relative to the <em>maximum</em> number of steps. Darker lines correspond to faster calculations. The time performance of $\GCD(m,n)$ is exactly equal to the time performance of $\GCD(n,m)$, hence the graph of figure 6 is symmetric with respect to the line $m=n$.</p>
<p><a href="https://i.sstatic.net/Ciyv9.jpg" rel="noreferrer"><img src="https://i.sstatic.net/Ciyv9.jpg" alt="fig6"></a></p>
<p>Time of $\GCD(m,n)$ for $(m,n)\in\set{1,\ldots,200}\times\set{1,\ldots,200}$</p>
<p><strong>A Probabilistic Theorem</strong></p>
<p>If we denote by $\mathfrak{A}$ the class of all $\GCD$ algorithms, then for $1\le m,n\le N\in\Natural$, we define the function $S[G,N]\colon \mathfrak{A}\times\Natural\to\Natural$ to be the <em>number of steps</em> of the Euclidean algorithm for $\GCD(m,n)$. If $H$ denotes the <em>density</em> of the hues on figure 6, ranging from black (few steps) to white (many steps), then figure 6 suggests,</p>
<p>$$
\begin{equation}\label{eq61}
S\brac{\GCD(m,n),N}\sim H\brac{f(n,m,x)}\sim g(n,m,x)
\end{equation}
$$</p>
<p>Keeping in mind that $S\brac{\GCD(m,n),N}=S\brac{\GCD(n,m),N}$ and interpreting grey-scale hue $H$ as (black-pixel) $\epsilon$-density (a probability) on figure 6, the relation above suggests,</p>
<p><strong>Theorem 6.1</strong>: Given $\epsilon>0$, $N\in\Natural$, if $m\le N$ and $\min\set{\phi(m)\colon m\le N}\le n\le N$, $\phi$'s trend lines of highest $\epsilon$-density (as in figure 1) correspond to the lines of fastest $\GCD(m,n)$ calculations (as in figure 6), or:</p>
<p>$$
\begin{equation*}
S\brac{\GCD(n,m),N}\sim D_{\epsilon}(N,f(n,m,x))\sim g(n,m,x)
\end{equation*}
$$</p>
<p><strong>Proof</strong>: First, we present figures 1 and 6 superimposed using Photoshop as figure 7. Next we note that on the sample space $\set{1,\ldots,N}$, both figures 1 and 6 share a common dominant feature: The emergence of trend lines $g(n,m,x)$. As established by Theorem 4.3, on figure 1 these lines are the lines of highest asymptotic $\epsilon$-density, given by $D_{\epsilon}(N,f(n,m,x))$. On the other hand, on figure 6 note that $n=\phi(m)$ by superposition of the two figures, hence using the fundamental identity for $\phi$, $n=m\prod_i (p_i-1)/p_i\Rightarrow n/m\sim f(\prod_i (p_i-1),\prod_i p_i,x)\sim f(n,m,x)\sim g(n,m,x)$ therefore $n/m\sim g(n,m,x)$. The trend lines $g(n,m,x)$ are already established as the regions of highest $\epsilon$-density, because their locations are close to irreducible fractions $n/m$ (for which $\GCD(m,n)=1$), which are fractions which minimize $S\brac{\GCD(n,m),N}$, therefore $S\brac{\GCD(n,m),N}$ is maximized away from these trend lines and minimized close to them, and the theorem follows.</p>
<p>To demonstrate Theorem 6.1, we present an example. The $\epsilon$-densities of the trend lines of $\phi$ on figure 4 for the space $\set{1,2,\ldots,N}$, $N=10000$ and for the primes we used, $\set{2,3,5,7}$ are related to the speed of the GCD algorithm in our space. For example, the highest 0.01-density trend line in our space is the line corresponding to the factorization $m=2^kq$. For prime $q>2$, $\phi(m)\sim m/2$. From figure 6, $\phi(m)=n$, hence $m/2=n$. Thus the fastest GCD calculations in our space with these four primes will occur when $n=m/2$. This is validated on figure 6. The next highest 0.01-density trend lines correspond to the factorizations $m=3^kq$, $m=2^k3^lq$ and $m=5^kq$. In these cases, for $q>5$, $\phi(m)\sim m/3$, $\phi(m)\sim 2m/3$ and $\phi(m)\sim 4m/5$ respectively. From figure 6 again, $\phi(m)=n$, hence the next fastest GCD calculations in our space will occur when $n=m/3$, $n=2m/3$ and $n=4m/5$. This is also validated on figure 6. The process continues in a similar spirit, until our $0.01$-density plot is exhausted for our space and the primes we are working with.</p>
<p>When we are working with all primes $p_i\le N$, Theorem 6.1 suggests that the fastest GCD calculations will occur when $m=\prod_i p_i^{k_i}q$, which correspond to the cases $\phi(m)=n\Rightarrow n=m\prod_i(1-1/p_i)\Rightarrow n=m\prod_i (p_i-1)/\prod_i p_i$. These lines will eventually fill all the black line positions on figure 6 above the line $n=\min\set{\phi(m)\colon m\le N}$, according to the grey hue gradation on that figure.</p>
<p>If one maps the vertical axis $[0,N]$ of figure 6 onto the interval $[0,1]$ and then the latter onto a circle of unit circumference, one gets a <em>Farey dissection of the continuum</em>, as in [5, p.29]. Hence, the vertical axis of figure 6 represents an alternate form of such a dissection. This dissection of figure 6 is a <em>rough map</em> of the nature of factorization of $n$. Specifically, the asymptotic distribution of $\phi(n)/n$ in $[0,1]$, indicates (in descending order) roughly whether $n$ is a power of a (large) prime ($\phi(n)/n\sim 1$, top), a product of specific prime powers according to a corresponding Farey series ($\phi(n)/n\sim \mathfrak{F}_n$), or a product of many (large) prime powers ($\phi(n)/n\sim 0$, bottom).</p>
<p><a href="https://i.sstatic.net/73MC8.jpg" rel="noreferrer"><img src="https://i.sstatic.net/73MC8.jpg" alt="fig8"></a></p>
<p>The trend lines of $\phi$'s asymptotic density correspond to the fastest GCD calculations, or, the totient is the discrete Fourier transform of the gcd, evaluated at 1 (<a href="https://en.wikipedia.org/wiki/Euler%27s_totient_function#Fourier_transform" rel="noreferrer">GCDFFT</a>).</p>
<p><strong>Practical Considerations of Theorem 6.1</strong></p>
<p>What is the <em>practical</em> use (if any) of theorem 6.1? The first obvious use is that one can make a fairly accurate probabilistic statement about the speed of $\GCD(m,n)$ for specific $m$ and $n$, by "inspecting" the $\epsilon$-density of the line $rx$, where $r=m/n$ (or $1/r=n/m$). To demonstrate this, we use an example with two (relatively large) random numbers. Let:</p>
<p>$m=63417416233261881847163666172162647118471531$, and</p>
<p>$n=84173615117261521938172635162731711117360011$.</p>
<p>Their ratio is approximately equal to $r=m/n\sim 0.7534120538$, so it suffices to determine a measure of the $\epsilon$-density of the line $rx$ on the graph of figure 6. To locate the line $rx$ on the graph, we use Maple to construct a rectangle whose sides are exactly at a ratio of $r$. This rectangle is shown superimposed with figure 6, on figure 8. The $\epsilon$-density of the line $rx$ is fairly high (because it falls close to a trend line of $\phi$), which suggests that the timing of $\GCD(m,n)$ for those specific $m$ and $n$ will likely be "relatively fast", compared to the worst case of $\GCD(m,n)$ for $m$ and $n$ in that range ($0.1-0.9\cdot 10^{44}$). Note that for $k\ge 1$, we have $S[\GCD(m,n),N]=S[\GCD(km,kn),N]$, so we can determine the approximate speed of $\GCD(m,n)$, by reducing $m$ and $n$ suitably. To an accuracy of 10 decimal places for example, we can be certain that $S[\GCD(m,n),N]\sim S[\GCD(7534120538,10^{10}),N]$, since $7534120538/10^{10}\sim m/n$.</p>
<p>The real practical use of this theorem however, lies not so much in determining the <em>actual</em> speed of a specific GCD calculation (that's pretty much impossible by the probabilistic nature of the theorem), rather, in <em>avoiding</em> trying to factorize a large number $m$, when the ratio $r=p/m$ for various known primes $p$ determines lines $rx$ of relatively <em>low</em> $\epsilon$-density on figure 6. The latter can effectively be used to speed up factorization searches by trial and error, acting as an additional sieve which avoids such <em>timely unfavorable</em> primes and picks primes for which $\GCD(m,p)$ runs to completion relatively fast.</p>
<p><strong>Remark</strong>: Note that such timely unfavorable primes can <em>still</em> be factors of $m$. The usefulness of such a heuristic filter, lies in that it doesn't <em>pick</em> them first, while searching, leaving them for last.</p>
<p><a href="https://i.sstatic.net/oSW2t.jpg" rel="noreferrer"><img src="https://i.sstatic.net/oSW2t.jpg" alt="fig8"></a></p>
<p>Speed of $\GCD(m,n)$, with the given $m$ and $n$ of section 7.</p>
<hr>
<p><strong>Addendum #3</strong> (for your last comment, re: similarity of the two algorithms)</p>
<p>Yes, you are right, because the algorithm you describe by your "modulity" function is not the one I thought you were using. The explanation is the same as the one I've given you before. Let me summarize: the GCD algorithm counter works as follows:</p>
<blockquote>
<p>GCD := proc (m, n) local T, M, N, c;
M := m/igcd(m, n); N := n/igcd(m, n); c := 0;
while 0 < N do T := N; N := <code>mod</code>(M, N); M := T; c := c+1 end do;
c end proc</p>
</blockquote>
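<p>For readers without Maple, here is a minimal Python equivalent of this step counter (the function name is mine):</p>

```python
from math import gcd

def gcd_steps(m, n):
    # Count Euclidean-algorithm iterations; dividing out gcd(m, n) first
    # makes the count scale-invariant: gcd_steps(k*m, k*n) == gcd_steps(m, n).
    g = gcd(m, n)
    M, N = m // g, n // g
    c = 0
    while N > 0:
        M, N = N, M % N
        c += 1
    return c
```

<p>Consecutive Fibonacci numbers give the worst case, e.g. <code>gcd_steps(13, 8)</code> returns 5, and scale invariance gives <code>gcd_steps(26, 16) == 5</code>.</p>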
<p>Result, nice and smooth (modulo 1):</p>
<p><a href="https://i.sstatic.net/EBUda.jpg" rel="noreferrer"><img src="https://i.sstatic.net/EBUda.jpg" alt="enter image description here"></a></p>
<p>From your comment description, you seem to be asking about:</p>
<blockquote>
<p>Mod := proc (m, n) local a, b, c, r;
a := m/igcd(m, n); b := n/igcd(m, n); c := 0; r := b;
while 1 < r do r := <code>mod</code>(a, r); c := c+1 end do;
c end proc</p>
</blockquote>
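<p>And a Python sketch of the "modulity" counter above (again, my naming), which keeps reducing the fixed $a$ modulo the shrinking remainder $r$ rather than swapping:</p>

```python
from math import gcd

def modulity_steps(m, n):
    # Repeatedly reduce a modulo r (r starts at b and strictly shrinks),
    # counting steps. Unlike the GCD counter, a itself is never updated.
    g = gcd(m, n)
    a, b = m // g, n // g
    c, r = 0, b
    while r > 1:
        r = a % r
        c += 1
    return c
```

<p>For instance <code>modulity_steps(13, 8)</code> returns 3, while the GCD counter takes 5 steps on the same pair, which is exactly the non-equivalence discussed above.</p>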
<p>Result, nice, but not smooth:</p>
<p><a href="https://i.sstatic.net/4ouJX.jpg" rel="noreferrer"><img src="https://i.sstatic.net/4ouJX.jpg" alt="enter image description here"></a></p>
<p>And that's to be expected, as I said in my Addendum #2. Your "modulity" algorithm, is NOT equivalent to the GCD timer, since you are reducing modulo $r$, always. There are exactly $\phi(a)$ integers less than $a$ and relatively prime to $a$, so you are getting an additional dissection of the horizontal continuum, as per $\phi(a)$, for $1\le a\le 200$.</p>
<hr>
<p><strong>References</strong></p>
<ol>
<li>T.M. Apostol, Introduction to analytic number theory, Springer-Verlag, New York, Heidelberg, Berlin, 1976.</li>
<li>P. Erdos, Some remarks about additive and multiplicative functions, Bull. Amer. Math. Soc. <strong>52</strong>(1946), 527-537.</li>
<li>_, Problems and results on Diophantine Approximations (II), Compositio Math. <strong>16</strong> (1964), 52-65.</li>
<li>_, On the distribution of numbers of the form $\sigma(n)/n$ and on some related questions, Pacific Journal of Mathematics, <strong>52(1)</strong> 1974, 59-65.</li>
<li>G.H. Hardy and E.M. Wright, An introduction to the theory of numbers, Clarendon Press, Oxford, 1979.</li>
<li>K. Ireland and M. Rosen, A classical introduction to modern number theory, Springer-Verlag, New York, Heidelberg, Berlin, 1982.</li>
<li>D. Knuth, The art of computer programming, volume 2: Seminumerical algorithms, Addison-Wesley, 1997.</li>
<li>D. Redfern, The Maple Handbook, Springer-Verlag, New York, 1996.</li>
<li>I.J. Schoenberg, Über die asymptotische Verteilung reeller Zahlen mod 1, Math. Z. <strong>28</strong> (1928), 171-199.</li>
<li>_, On asymptotic distributions of arithmetical functions, Trans. Amer. Math. Soc. <strong>39</strong> (1936), 315-330.</li>
<li>A. Weingartner, The distribution functions of $\sigma(n)/n$ and $n/\phi(n)$, Proceedings of the American Mathematical Society, <strong>135(9)</strong> (2007), 2677-2681.</li>
</ol>
| <p>Some additional notes, which I cannot add to my previous answer, because apparently I am close to a 30K character limit and MathJax complains.</p>
<hr>
<p><strong>Addendum</strong></p>
<p>The fundamental pattern which emerges in $\phi(n)$ then, is that of the Farey series dissection of the continuum. This pattern is naturally related to <a href="https://en.wikipedia.org/wiki/Euclid%27s_orchard" rel="nofollow noreferrer">Euclid's Orchard</a>.</p>
<p>Euclid's Orchard is basically a listing of the Farey sequence of (all irreducible) fractions $p_k/q_k$ for the unit interval, with height equal to $1/q_k$, at level $k$:</p>
<p><a href="https://i.sstatic.net/DV0DB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DV0DB.png" alt="enter image description here"></a></p>
<p>Euclid's Orchard on [0,1].</p>
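<p>The Farey fractions underlying the Orchard are cheap to enumerate with the standard next-term recurrence (the same one used by the Maple procedure <code>F</code> in the Appendix below). A Python sketch, for $n\ge 2$:</p>

```python
def farey(n):
    # Ascending Farey sequence F_n: all irreducible p/q in [0, 1] with q <= n,
    # generated by the next-term recurrence k = floor((n + b) / d).
    # Assumes n >= 2.
    a, b, c, d = 0, 1, 1, n
    seq = [(0, 1)]
    while c < n:
        k = (n + b) // d
        a, b, c, d = c, d, k * c - a, k * d - b
        seq.append((a, b))
    return seq

print(farey(3))  # [(0, 1), (1, 3), (1, 2), (2, 3), (1, 1)]
```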
<p>In turn, Euclid's Orchard is related to <a href="https://en.wikipedia.org/wiki/Thomae%27s_function" rel="nofollow noreferrer">Thomae's Function</a> and to <a href="https://en.wikipedia.org/wiki/Nowhere_continuous_function" rel="nofollow noreferrer">Dirichlet's Function</a>:</p>
<p><a href="https://i.sstatic.net/yoE5F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yoE5F.png" alt="enter image description here"></a></p>
<p>Thomae's Function on [0,1].</p>
<p>The emergence of this pattern can be seen easier in a combined plot, that of the GCD timer and Euclid's Orchard on the unit interval:</p>
<p><a href="https://i.sstatic.net/tQqQm.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tQqQm.jpg" alt="enter image description here"></a></p>
<p>Farey series dissection of the continuum of [0,1].</p>
<p>Euclid's Orchard is a fractal. It is the most "elementary" fractal in a sense, since it characterizes the structure of the unit interval, which is essential in understanding the continuum of the real line.</p>
<p>Follow some animated gifs which show zoom-in at specific numbers:</p>
<p><a href="https://i.sstatic.net/NocVZ.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NocVZ.gif" alt="enter image description here"></a></p>
<p>Zoom-in at $\sqrt{2}-1$.</p>
<p><a href="https://i.sstatic.net/ZUe0d.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZUe0d.gif" alt="enter image description here"></a></p>
<p>Zoom-in at $\phi-1$.</p>
<p>The point of convergence of the zoom is marked by a red indicator.</p>
<p>White vertical lines which show up during the zoom-in, are (a finite number of open covers of) irrationals. They are the irrational limits of the convergents of the corresponding continued fractions which are formed by considering any particular tower-top path that descends down to the irrational limit.</p>
<p>In other words, a zoom-in such as those shown, displays some specific continued fraction decompositions for the (irrational on these two) target of the zoom.</p>
<p>The corresponding continued fraction decomposition (and its associated convergents) are given by the tops of the highest towers, which descend to the limits.</p>
<hr>
<p><strong>Addendum #2</strong> (for your last comment to my previous answer)</p>
<p>For the difference between the two kinds of graphs you are getting - because I am fairly certain you are still confused - what you are doing produces two different kinds of graphs. If you set $M(x,y)$ to its natural value, you are forcing a smooth graph like the GCD timer. If you start modifying $M(x,y)$ or setting it to other values ($M(x,y)=k$, or calculating it as $M(x,p^k)$), you will begin reproducing vertical artifacts which are characteristic of $\phi$. That happens because, as you correctly observe, in doing so you start dissecting the horizontal continuum as well (in the above case according to $p^k$, etc.). In this case, the appropriate figure which reveals the vertical cuts would be like the following:</p>
<p><a href="https://i.sstatic.net/RPv1h.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RPv1h.jpg" alt="enter image description here"></a></p>
<hr>
<p><strong>Appendix:</strong></p>
<p>Some Maple procedures for numerical verification of some of the theorems and for the generation of some of the figures.</p>
<p>generate fig.1:</p>
<blockquote>
<p>with(numtheory): with(plots): N:=10000;
liste:=[seq([n,phi(n)],n=1..N)]:
#Generate fig.1
p:=plot(liste, style=point, symbol=POINT, color=BLACK): display(p);</p>
</blockquote>
<p>Generate fig.2:</p>
<blockquote>
<p>q:=plot({x-1, x/2, x/3, 2*x/3, 2*x/5, 4*x/5, 4*x/15, 8*x/15, 2*x/7, 3*x/7,
4*x/7, 6*x/7, 8*x/35, 12*x/35, 16*x/35, 24*x/35}, x=0..N, color=grey):
display({p,q}); #p as in example 1.</p>
</blockquote>
<p>Generate fig.3:</p>
<blockquote>
<p>F:=proc(n) #Farey series
local a,a1,b,b1,c,c1,d,d1,k,L;
a:=0; b:=1; c:=1; d:=n; L:=[0];
while (c < n) do
k:=floor((n+b)/d);
a1:=c; b1:=d; c1:=k*c-a; d1:=k*d-b;
a:=a1; b:=b1; c:=c1; d:=d1;
L:=[op(L),a/b];
od: L; end:
n:=10;
for m from 1 to nops(F(n)) do f:=(m,x)->F(n)[m]*x; od:
q:={}; with(plots):
for m from 1 to nops(F(n)) do
qn:=plot(f(m,x), x=0..10000, color=grey); q:=q union {qn};
od:
display(p,q);</p>
</blockquote>
<p>Implements Theorem 4.1:</p>
<blockquote>
<p>S:=proc(L,N) local LS,k,ub;
LS:=nops(L); #find how many arguments
if LS=1 then floor(log[L[LS]](N))
else ub:=floor(log[L[LS]](N));
add(S(L[1..LS-1],floor(N/L[LS]^k)),k=1..ub);
fi; end:</p>
</blockquote>
<p>Brute force approach for Theorem 4.1:</p>
<blockquote>
<p>search3:=proc(N,a1,a2,a3,s) local cp,k1,k2,k3;
cp:=0;
for k1 from 1 to s do for k2 from 1 to s do for k3 from 1 to s do
if a1^k1*a2^k2*a3^k3 <= N then cp:=cp+1; fi;
od; od; od;
cp; end:</p>
</blockquote>
<p>Verify Theorem 4.1:</p>
<blockquote>
<p>L:=[5,6,10]; N:=1738412;
S(L,N);
37
s:=50; #maximum exponent for brute force search
search3(N,5,6,10,s);
37 #identical</p>
</blockquote>
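<p>The same verification is easy to reproduce outside Maple. A Python sketch (my naming; an integer logarithm avoids floating-point edge cases):</p>

```python
def ilog(base, N):
    # Largest k >= 0 with base**k <= N, i.e. floor(log_base(N)) for N >= 1
    k, p = 0, base
    while p <= N:
        k += 1
        p *= base
    return k

def S(L, N):
    # Count tuples (k_1, ..., k_r), all k_i >= 1,
    # with L[0]**k_1 * ... * L[-1]**k_r <= N
    if len(L) == 1:
        return ilog(L[0], N)
    return sum(S(L[:-1], N // L[-1] ** k) for k in range(1, ilog(L[-1], N) + 1))

print(S([5, 6, 10], 1738412))  # 37, matching the brute-force search above
```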
<p>Times GCD algorithm:</p>
<blockquote>
<p>reduce:=proc(m,n) local T,M,N,c;
M:=m/gcd(m,n); #GCD(km,kn)=GCD(m,n)
N:=n/gcd(m,n);
c:=0;
while M>1 do
T:=M; M:=N; N:=T; #flip
M:=M mod N; #reduce
c:=c+1;
od; c; end:</p>
</blockquote>
<p>Generate fig.6:</p>
<blockquote>
<p>mmax:=200; nmax:=200; rt:=array(1..mmax,1..nmax);
for m from 1 to mmax do for n from 1 to nmax do
rt[m,n]:=reduce(n,m); # assign GCD steps to array
od; od;
n:='n'; m:='m';
rz:=(m,n)->rt[m,n]; # convert GCD steps to function
p:=plot3d(rz(m,n),
m=1..mmax, n=1..nmax,
grid=[mmax,nmax],
axes=NORMAL,
orientation=[-90,0],
shading=ZGREYSCALE,
style=PATCHNOGRID,
scaling=CONSTRAINED): display(p);</p>
</blockquote>
<hr>
|
number-theory | <p>Can you check if my proof is right?</p>
<p>Theorem. $\forall x\geq8, x$ can be represented by $5a + 3b$ where $a,b \in \mathbb{N}$.</p>
<p>Base case(s): $x=8 = 3\cdot1 + 5\cdot1 \quad \checkmark\\
x=9 = 3\cdot3 + 5\cdot0 \quad \checkmark\\
x=10 = 3\cdot0 + 5\cdot2 \quad \checkmark$</p>
<p>Inductive step:</p>
<p>$n \in \mathbb{N}\\a_1 = 8, a_n = a_1 + (x-1)\cdot3\\
b_1 = 9, b_n = b_1 + (x-1)\cdot3 = a_1 +1 + (x-1) \cdot 3\\
c_1 = 10, c_n = c_1 + (x-1)\cdot3 = b_1 + 1 + (x-1) \cdot 3 = a_1 + 2 + (x-1) \cdot 3\\
\\
S = \{x\in\mathbb{N}: x \in a_{x} \lor x \in b_{x} \lor x \in c_{x}\}$</p>
<p>Basis stays true, because $8,9,10 \in S$</p>
<p>Lets assume that $x \in S$. That means $x \in a_{n} \lor x \in b_{n} \lor x \in c_{n}$.</p>
<p>If $x \in a_n$ then $x+1 \in b_x$,</p>
<p>If $x \in b_x$ then $x+1 \in c_x$,</p>
<p>If $x \in c_x$ then $x+1 \in a_x$.</p>
<p>I can't prove that but it's obvious. What do you think about this?</p>
| <p><strong>Proof by induction.</strong><br>
For the base case $n=8$ we have $8=5+3$. <br>
Suppose that the statement holds for $k$ where $k\geq 8$. We show that it holds for $k+1$.</p>
<p>There are two cases.</p>
<p>1) $k$ has a $5$ as a summand in its representation.</p>
<p>2) $k$ has no $5$ as a summand in its representation.</p>
<p><strong>For case 1</strong>, we delete "that $5$" in the sum representation of $k$ and replace it by two "$3$"s! This proves the statement for $k+1$.</p>
<p><strong>For case 2</strong>, since $k\gt 8$, then $k$ has at least three "$3$"s in its sum representation. We remove these three $3$'s and replace them by two fives! We obtain a sum representation for $k+1$. This completes the proof.</p>
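<p>The two swap rules give a constructive procedure. A Python sketch of the induction (my naming; returns the counts of 3s and 5s):</p>

```python
def rep_3_5(x):
    # Return (a, b) with 3*a + 5*b == x for x >= 8, following the proof:
    # start from 8 = 3 + 5; to go from k to k+1, trade one 5 for two 3s
    # (case 1) or, if no 5 is present, trade three 3s for two 5s (case 2).
    a, b = 1, 1  # 8 = 3*1 + 5*1
    for _ in range(x - 8):
        if b >= 1:
            a, b = a + 2, b - 1
        else:
            a, b = a - 3, b + 2
    return a, b
```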
| <p>I would avoid induction and use the elementary Euclidean division algorithm (Eda).</p>
<p>Let $n\geq8$ be an integer. Then, by the Eda, there exist integers $q$ and $r$ such that $r\in\{0,1,2\}$ and $$n=3q+r.$$</p>
<ul>
<li><p>If $r=0$, we are done since $n=3q$.</p></li>
<li><p>If $r=1$, then $q\geq3$ (because $n\geq8$). Hence $n=3(q-3)+10$.</p></li>
<li><p>If $r=2$, then $q\geq2$ (because $n\geq8$). Hence $n=3(q-1)+5$.</p></li>
</ul>
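<p>The three cases translate directly into code. A Python sketch (my naming; returns the number of 3s and 5s):</p>

```python
def rep_3_5_direct(n):
    # n = 3q + r with r in {0, 1, 2}; rewrite per the three cases above.
    q, r = divmod(n, 3)
    if r == 0:
        return q, 0        # n = 3q
    if r == 1:
        return q - 3, 2    # n = 3(q - 3) + 10, valid since q >= 3
    return q - 1, 1        # n = 3(q - 1) + 5, valid since q >= 2
```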
|
combinatorics | <p>Can anyone recommend a good introductory textbook on analytic combinatorics for self studying? It should be for a first year graduate student.</p>
| <p>One book I'd highly recommend is <strong>Peter J. Cameron's</strong></p>
<blockquote>
<p><a href="http://www.maths.qmw.ac.uk/~pjc/comb/"><strong><em>Combinatorics: Topics, Techniques, Algorithms</em></strong></a></p>
</blockquote>
<p>The first link above is to site for the book, which includes multiple resources, including links, solutions to problems (good for self-study, etc.), additional exercises and projects. I used it in an early graduate "Special Topics" class on Combinatorics. (It's geared to be "basic" for grads, advanced for undergrads. Indeed, the text is designed in such a way as to provide ample material for a two-semester sequence, for self-study, and as a reference for future needs.) I have since self-studied the text on my own, to cover material which simply could not be covered in a single semester. </p>
<p>The text is loaded, and expansive in its coverage: dealing not only with combinatorial "content", but also combinatorial techniques, proofs, as well as algorithms for those with computational interest in combinatorics etc...and Cameron himself suggests which chapters to study for different aims.</p>
<p>You can peruse/preview the book at Amazon: <a href="http://rads.stackoverflow.com/amzn/click/0521457610"><strong><em>Take a look inside</em></strong></a>, and also through the top link.</p>
<p><strong>Table of Contents</strong></p>
<p>Preface<br>
1. What is combinatorics?<br>
2. On numbers and counting<br>
3. Subsets, partitions, permutations<br>
4. Recurrence relations and generating functions<br>
5. The principle of inclusion and exclusion <br>
6. Latin squares and SDRs<br>
7. Extremal set theory<br>
8. Steiner triple systems<br>
9. Finite geometry<br>
10. Ramsey's theorem<br>
11. Graphs<br>
12. Posets, lattices and matroids<br>
13. More on partitions and permutations<br>
14. Automorphism groups and permutation groups<br>
15. Enumeration under group action<br>
16. Designs<br>
17. Error-correcting codes<br>
18. Graph colourings<br>
19. The infinite<br>
20. Where to from here?<br>
Answers to selected exercises<br>
Bibliography<br>
Index.<br></p>
<p>Also, for a sample of Cameron's writing style, here's a pdf format of his <a href="http://www.maths.qmul.ac.uk/~pjc/notes/comb.pdf"><strong><em>Combinatorics Notes</em></strong></a> (also available at Cameron's home page, which is accessible from the book's site to which I've included a link at the top this post). </p>
<hr>
<p>One other text that might suit your needs is <a href="http://rads.stackoverflow.com/amzn/click/0486425363"><strong><em>Introduction to Combinatorial analysis</em></strong></a> by John Riordan, considered by many to be a "classic*.</p>
| <p><img src="https://i.sstatic.net/VfO7z.jpg" alt="enter image description here"></p>
<p><a href="http://rads.stackoverflow.com/amzn/click/0817642889" rel="noreferrer">A Path to Combinatorics for Undergraduates: Counting Strategies [Paperback]
Titu Andreescu (Author), Zuming Feng (Author)</a></p>
<p>Authors take a problem and start solving it. You start following solution and then all of a sudden they say <strong>"Not so fast my friend.. not so fast..."</strong> pointing out how that logic was faulty and then give the actual solution. This gives me kicks.</p>
|
number-theory | <p>Sequences that avoid arithmetic progressions have been studied, e.g., "Sequences Containing No 3-Term Arithmetic Progressions," Janusz Dybizbański, 2012, <a href="http://www.combinatorics.org/ojs/index.php/eljc/article/view/v19i2p15" rel="noreferrer">journal link</a>.</p>
<p>I started to explore sequences that avoid both arithmetic and geometric progressions,
i.e., avoid <span class="math-container">$x, x+c, x+2c$</span> and avoid <span class="math-container">$y, c y, c^2 y$</span> anywhere in the sequence
(not necessarily consecutively).
Starting with <span class="math-container">$(1,2)$</span>, one cannot extend with <span class="math-container">$3$</span> because <span class="math-container">$(1,2,3)$</span> forms an
arithemtical progression, and one cannot extend with <span class="math-container">$4$</span> because <span class="math-container">$(1,2,4)$</span> is a geometric
progression. But <span class="math-container">$(1,2,5)$</span> is fine.</p>
<p>Continuing in the same manner leads to the following "greedy" sequence:
<span class="math-container">$$1, 2, 5, 6, 12, 13, 15, 16, 32, 33, 35, 39, 40, 42, 56, 81, 84, 85, 88,$$</span>
<span class="math-container">$$90, 93, 94, 108, 109, 113, 115, 116, 159, 189, 207, 208, 222, \ldots$$</span></p>
<p>This sequence is not in the OEIS.
Here are a few questions:</p>
<blockquote>
<p><strong>Q1</strong>. What is its growth rate? </p>
</blockquote>
<p><br />
<img src="https://i.sstatic.net/wQ9Ai.jpg" alt="Avoiding3Terms">
<br /></p>
<blockquote>
<p><strong>Q2</strong>. Does <span class="math-container">$\sum_{i=1}^\infty 1/s_i$</span> converge? (Where <span class="math-container">$s_i$</span> is the <span class="math-container">$i$</span>-th term of the above
sequence.)</p>
<p><strong>Q3</strong>. If it does, does it converge to <em>e</em>?
<em>Update</em>: No. The sum appears to be approximately <span class="math-container">$2.73 > e$</span>, as per
@MichaelStocker and @Turambar.</p>
</blockquote>
<p>That is wild numerical speculation. The first 457 terms (the extent
of the graph above) sum to 2.70261.
<hr />
<strong>Addendum</strong>. <em>11Jul2014</em>. Starting with <span class="math-container">$(0,1)$</span> rather than <span class="math-container">$(1,2)$</span> renders
a direct hit on OEIS <a href="https://oeis.org/A225571" rel="noreferrer">A225571</a>.</p>
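<p>For reference, the greedy construction is easy to reproduce. Since terms are added in increasing order, a new candidate $c$ can only be the largest element of a forbidden triple, so it suffices to test every pair $a<b$ already in the sequence against $c=2b-a$ (arithmetic) and $b^2=ac$ (geometric). A Python sketch (note that the $b^2=ac$ test also rejects rational-ratio geometric triples; it agrees with the terms listed above):</p>

```python
def greedy_no_3ap_3gp(n_terms):
    # Greedily extend (1, 2), skipping any c that would complete a 3-term
    # arithmetic progression (a, b, c) with c == 2*b - a, or a geometric
    # progression with b*b == a*c, for some pair a < b already present.
    seq = [1, 2]
    c = 2
    while len(seq) < n_terms:
        c += 1
        ok = True
        for i, a in enumerate(seq):
            for b in seq[i + 1:]:
                if 2 * b - a == c or b * b == a * c:
                    ok = False
                    break
            if not ok:
                break
        if ok:
            seq.append(c)
    return seq

print(greedy_no_3ap_3gp(9))  # [1, 2, 5, 6, 12, 13, 15, 16, 32]
```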
| <p><span class="math-container">$\color{brown}{\textbf{HINT}}$</span></p>
<p>Denote the target sequence <span class="math-container">$\{F_3(n)\}$</span> and let us try to estimate the probability <span class="math-container">$P(N)$</span> that a natural number belongs to <span class="math-container">$\{F_3\}.$</span></p>
<p>Suppose
<span class="math-container">$$F_3(1)=1,\quad F_3(2)=2,\quad P(1)=P(2)=P(5) = P(6) = 1,\\ P(3)=P(4)=P(7)=P(8)=0,\tag1$$</span>
<span class="math-container">$$V(N)=\sum\limits_{i=1}^{N}P(i),\quad F_3(N) = \left[V^{-1}(N)\right].\tag2$$</span></p>
<p>Let <span class="math-container">$P_a(N)$</span> is the probability that N does not belong to arithmetic progression and <span class="math-container">$P_g(N)$</span> is the similar probability for geometric progressions.</p>
<p>Suppose
<span class="math-container">$$P(N) = P_a(N)P_g(N).\tag3$$</span></p>
<p><span class="math-container">$\color{brown}{\textbf{Arithmetic Probability estimation.}}$</span></p>
<p>Suppose
<span class="math-container">$$P_a(N)=\prod\limits_{k=1}^{[N/2]}P_a(N,k),\tag4$$</span>
where <span class="math-container">$P_a(N,k)$</span> is the probability that the arithmetic progression <span class="math-container">$\{N-2k,\,N-k,\, N\}$</span> does not occur in the sequence.
Suppose
<span class="math-container">$$P_a(N,k) = \big(1-P(N-2k)\big)\big((1-P(N-k)\big).\tag5$$</span></p>
<p><span class="math-container">$\color{brown}{\textbf{Geometric Probability estimation.}}$</span></p>
<p>Suppose
<span class="math-container">$$P_g(N)=\prod\limits_{k=1}^{\left[\sqrt N\right]}P_g(N,k),\tag6$$</span>
where <span class="math-container">$P_g(N,k)$</span> is the probability that the geometric progression <span class="math-container">$\left\{\dfrac{N}{k^2}, \dfrac Nk, N\right\}$</span> with denominator <span class="math-container">$k$</span> does not occur in the sequence.</p>
<p>Taking in account that the geometric progression can exist only if <span class="math-container">$k^2\,| \ N,$</span> suppose
<span class="math-container">$$P_g(N,k) = \left(1-\dfrac1{k^2}P\left(\dfrac{N}{k^2}\right)\right)\left(1-P\left(\dfrac{N}{k}\right)\right).\tag7$$</span></p>
<p><span class="math-container">$\color{brown}{\textbf{Common model.}}$</span></p>
<p>Common model can be simplified to next one,
<span class="math-container">\begin{cases}
P(N) = 1-\prod\limits_{k=1}^{[N/2-1]}\Big(1-\big(1-P(N-2k)\big)\big(1-P(N-k)\big)\Big)\\
\times\prod\limits_{k=1}^{\left[\sqrt N\right]}\left(1-\left(1-\dfrac1{k^2}P\left(\dfrac{N}{k^2}\right)\right)\left(1-P\left(\dfrac{N}{k}\right)\right)\right)\\[4pt]
P(1)=P(2)=P(5) = P(6) = 1,\quad P(3)=P(4)=P(7)=P(8)=0\\
V(N)=\sum\limits_{i=1}^{N}P(i),\quad F_3(n) = \left[V^{-1}(n)\right].\tag8
\end{cases}</span></p>
<p>Looking the solution in the form of
<span class="math-container">$$\left\{\begin{align}
&P(N)=P_v(N),\quad\text{where}\quad v=\left[\dfrac{N-1 \mod 4}2\right],\\[4pt]
&P_0(N)=
\begin{cases}
1,\quad\text{if}\quad N<9\\
cN^{-s},\quad\text{otherwise}
\end{cases}\\[4pt]
&P_1(N)=
\begin{cases}
0,\quad\text{if}\quad N<9\\
dN^{-t},\quad\text{otherwise}
\end{cases}
\end{align}\right.\tag9$$</span></p>
<p>then
<span class="math-container">$$\begin{cases}
&P_0(N) =1-&\prod\limits_{k=1}^{[N/4-1]}\Big(1-\big(1-P_0(N-4k)\big)\big(1-P_1(N-2k)\big)\Big)\\
&&\times\Big(1-\big(1-P_1(N-4k-2)\big)\big(1-P_0(N-2k-1)\big)\Big)\\
&&\times\prod\limits_{k=1}^{\left[\sqrt N/2\right]-1}
\left(1-\left(1-\dfrac1{(2k-1)^2}P_0\left(\dfrac{N}{(2k-1)^2}\right)\right)
\left(1-P_1\left(\dfrac{N}{2k-1}\right)\right)\right)\\[4pt]
&&\times\left(1-\left(1-\dfrac1{4k^2}P_1\left(\dfrac{N}{4k^2}\right)\right)
\left(1-P_0\left(\dfrac{N}{2k}\right)\right)\right)\\[4pt]
&P_1(N) = 1- &\prod\limits_{k=1}^{[N/4-1]}\Big(1-\big(1-P_1(N-4k)\big)\big(1-P_0(N-2k)\big)\Big)\\
&&\Big(1-\big(1-P_0(N-4k-2)\big)\big(1-P_1(N-2k-1)\big)\Big)\\
&&\times\prod\limits_{k=1}^{\left[\sqrt N/2\right]-1}
\left(1-\left(1-\dfrac1{(2k-1)^2}P_1\left(\dfrac{N}{(2k-1)^2}\right)\right)
\left(1-P_0\left(\dfrac{N}{2k-1}\right)\right)\right)\\[4pt]
&&\times\left(1-\left(1-\dfrac1{4k^2}P_0\left(\dfrac{N}{4k^2}\right)\right)
\left(1-P_1\left(\dfrac{N}{2k}\right)\right)\right).
\end{cases}\tag{10}$$</span></p>
<p>Taking in account <span class="math-container">$(9),$</span> can be written
<span class="math-container">$$\begin{cases}
&P_0(N) =1-&\prod\limits_{k=[N/4-2]}^{[N/4-1]}\,c(N-2k-1)^{-s}\prod\limits_{k=1}^{[N/4-3]}\Big(1-\big(1-c(N-4k)^{-s}\big)\big(1-d(N-2k)^{-t}\big)\Big)\\
&&\times\Big(1-\big(1-d(N-4k-2)^{-t}\big)\big(1-c(N-2k-1)^{-s}\big)\Big)\\
&&\times\prod\limits_{k=1}^{\left[\sqrt N/2\right]-1}
\left(1-\left(1-\dfrac1{(2k-1)^2}c\left(\dfrac{N}{(2k-1)^2}\right)^{-s}\right)
\left(1-d\left(\dfrac{N}{2k-1}\right)^{-t}\right)\right)\\[4pt]
&&\times\left(1-\left(1-\dfrac1{4k^2}d\left(\dfrac{N}{4k^2}\right)^{-t}\right)
\left(1-c\left(\dfrac{N}{2k}\right)^{-s}\right)\right)\\[4pt]
&P_1(N) = 1- &\prod\limits_{k=[N/4-2]}^{[N/4-1]}\,c(N-2k)^{-s}\prod\limits_{k=1}^{[N/4-3]}\Big(1-\big(1-d(N-4k)^{-t}\big)\big(1-(N-2k)^{-s}\big)\Big)\\
&&\Big(1-\big(1-(N-4k-2)^{-s}\big)\big(1-d(N-2k-1)^{-t}\big)\Big)\\
&&\times\prod\limits_{k=1}^{\left[\sqrt N/2\right]-1}
\left(1-\left(1-\dfrac1{(2k-1)^2}d\left(\dfrac{N}{(2k-1)^2}\right)^{-t}\right)
\left(1-c\left(\dfrac{N}{2k-1}\right)^{-s}\right)\right)\\[4pt]
&&\times\left(1-\left(1-\dfrac1{4k^2}c\left(\dfrac{N}{4k^2}\right)^{-s}\right)
\left(1-d\left(\dfrac{N}{2k}\right)^{-t}\right)\right).
\end{cases}\tag{11}$$</span></p>
<p>Model <span class="math-container">$(11)$</span> should be checked theoretically and practically, but it gives the approach to the required estimations.</p>
<p>The next steps are estimation of parameters <span class="math-container">$$c,d,s,t$$</span> and using of obtained model.</p>
| <p>Since both @awwalker and @mathworker21 mentioned Erdős' conjecture, and because
a paper discussing this conjecture was just published, I thought I would mention it:</p>
<blockquote>
<p><strong>Erdős Conjecture</strong> (1940s or 1950s). If <span class="math-container">$A \subset \mathbb{N}$</span>
satisfies <span class="math-container">$\sum_{n \in A} \frac{1}{n}= \infty$</span>, then
<span class="math-container">$A$</span> contains arbitrarily long arithmetic progressions.</p>
</blockquote>
<p>In </p>
<ul>
<li>Grochow, Joshua. "New applications of the polynomial method: The cap
set conjecture and beyond." <em>Bulletin of the American Mathematical
Society</em>, Vol.56, No.1, Jan. 2019,</li>
</ul>
<p>he says:</p>
<blockquote>
<p>"It remains open even to prove that a set <span class="math-container">$A$</span> satisfying the hypothesis contains <span class="math-container">$3$</span>-term arithmetic progressions."</p>
</blockquote>
|
geometry | <p>Does anyone know if there is a name for the curve which is a helix, which itself has a helical axis? I tried to draw what I mean:</p>
<p><a href="https://i.sstatic.net/4vCSd.png"><img src="https://i.sstatic.net/4vCSd.png" alt="enter image description here"></a></p>
| <p>According to <a href="http://www.sciencedirect.com/science/article/pii/S0378437100005720"><strong>this journal article</strong></a>, we may call it a <em>doubly-twisted helix</em> or generally a <em>multiply-twisted helix</em>.</p>
<p><a href="https://i.sstatic.net/VBsXQ.jpg"><img src="https://i.sstatic.net/VBsXQ.jpg" alt="enter image description here"></a></p>
| <p>For a light bulb the wire is called a "coiled coil filament" in <a href="https://en.wikipedia.org/wiki/Incandescent_light_bulb#Coiled_coil_filament">this</a> Wikipedia article.</p>
<p><a href="https://i.sstatic.net/CrTqp.jpg"><img src="https://i.sstatic.net/CrTqp.jpg" alt="coiled coil filament"></a></p>
<p>The German word is "Doppelwendel" (roughly translated: double screw).</p>
|
probability | <p><a href="https://math.nyu.edu/faculty/tabak/publications/M107495.pdf" rel="noreferrer">Kuang and Tabak (2017)</a> mentions that:</p>
<blockquote>
<p>"closed-form solutions of the multidimensional optimal transport problems are relatively rare, a number of numerical algorithms have been proposed."</p>
</blockquote>
<p>I'm wondering if there are some resources (lecture notes, papers, etc.) that collect/contain known solutions to optimal transport and/or Wasserstein distance between two distributions in dimensions greater than 1. For example, let <span class="math-container">$ \mathcal{N_1}(\mu_1, \Sigma_1) $</span> and <span class="math-container">$ \mathcal{N_2}(\mu_2, \Sigma_2) $</span> denote two Gaussian distributions with different means and covariances matrices. Then the optimal transport map between them is:</p>
<p><span class="math-container">$$ x \longrightarrow \mu_2 + A( x - \mu_1 ) $$</span> where <span class="math-container">$ A = \Sigma_1^{- 1/2} (\Sigma_1^{1/2} \Sigma_2 \Sigma_1^{1/2})^{1/2} \Sigma_1^{- 1/2}$</span>. And so the squared 2-Wasserstein distance is</p>
<p><span class="math-container">$$ W_2^2 ( \mathcal{N_1}(\mu_1, \Sigma_1), \mathcal{N_2}(\mu_2, \Sigma_2) ) = || \mu_1 - \mu_2 ||^2_2 + \mathrm{Tr}( \Sigma_1 + \Sigma_2 - 2( \Sigma_1^{1/2} \Sigma_2 \Sigma_1^{1/2} )^{1/2} ) $$</span> where <span class="math-container">$\mathrm{Tr}$</span> is the trace operator.</p>
<p>It will be nice to know more worked out examples of optimal transport, such as uniform distributions between different geometric objects, e.g. concentric and overlapping balls, between rectangles, etc.</p>
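<p>For what it's worth, the Gaussian closed form above is straightforward to evaluate numerically. A NumPy sketch (the helper <code>_sqrtm_psd</code> and the function names are mine):</p>

```python
import numpy as np

def _sqrtm_psd(M):
    # Symmetric PSD matrix square root via eigendecomposition
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def gaussian_w2_squared(mu1, S1, mu2, S2):
    # ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1^{1/2} S2 S1^{1/2})^{1/2})
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    S1, S2 = np.asarray(S1, float), np.asarray(S2, float)
    r = _sqrtm_psd(S1)
    cross = _sqrtm_psd(r @ S2 @ r)
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(S1 + S2 - 2.0 * cross))
```

<p>In one dimension this reduces to $(\mu_1-\mu_2)^2+(\sigma_1-\sigma_2)^2$; for instance <code>gaussian_w2_squared([0], [[4]], [3], [[1]])</code> returns $9+(2-1)^2=10$.</p>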
| <p>Although a bit old, this is indeed a good question. Here is my bit on the matter:</p>
<ol>
<li><p>Regarding Gaussian Mixture Models, see
<em>A Wasserstein-type distance in the space of Gaussian Mixture Models</em>, <a href="https://arxiv.org/pdf/1907.05254.pdf" rel="nofollow noreferrer">Delon and Desolneux (2020)</a>.</p>
</li>
<li><p>Using the 2-Wasserstein metric, <a href="https://papers.nips.cc/paper/7149-learning-from-uncertain-curves-the-2-wasserstein-metric-for-gaussian-processes.pdf" rel="nofollow noreferrer">Mallasto and Feragen (2017)</a> (<em>Learning from uncertain curves: The 2-Wasserstein metric for Gaussian processes</em>) geometrize the space of Gaussian processes with <span class="math-container">$L_2$</span> mean and covariance functions over compact index spaces.</p>
</li>
<li><p>Wasserstein space of elliptical distributions are characterized by <a href="https://arxiv.org/pdf/1805.07594.pdf" rel="nofollow noreferrer">Muzellec and Cuturi (2019)</a> (<em>Generalizing Point Embeddings using the Wasserstein Space of Elliptical Distributions</em>). Authors show that for elliptical probability distributions, Wasserstein distance can be computed via a simple Riemannian descent procedure. (<strong>Not closed form</strong>)</p>
</li>
<li><p>Tree metrics as ground metrics yield negative definite OT metrics that can be computed in a closed form. Sliced-Wasserstein distance is then a particular (special) case (the tree is a chain): <a href="https://proceedings.neurips.cc/paper_files/paper/2019/file/2d36b5821f8affc6868b59dfc9af6c9f-Paper.pdf" rel="nofollow noreferrer">Le et al. (2019)</a> (<em>Tree-Sliced Variants of Wasserstein Distances</em>).</p>
</li>
<li><p><em>Sinkhorn distances/divergences</em> (<a href="https://arxiv.org/pdf/1306.0895.pdf" rel="nofollow noreferrer">Cuturi (2013)</a>) are now treated as new forms of distances (e.g. not approximations to <span class="math-container">$\mathcal{W}_2^2$</span>) (<a href="https://arxiv.org/pdf/1810.02733.pdf" rel="nofollow noreferrer">Genevay et al. (2019)</a>). Recently, this entropy regularized optimal transport distance is found to admit a closed form for Gaussian measures: <a href="https://arxiv.org/pdf/2006.02572.pdf" rel="nofollow noreferrer">Janati et al. (2020)</a>. This fascinating finding also extends to the unbalanced case.</p>
</li>
<li><p>Variations of Wasserstein distance such as Gromov-Wasserstein distance have been studied for Gaussian distributions. For those cases, <a href="https://arxiv.org/pdf/2104.07970.pdf" rel="nofollow noreferrer">Salmona et al. (2019)</a> have found a simple linear method for finding the Gromov-Monge map.</p>
</li>
<li><p><a href="https://arxiv.org/pdf/2202.05722.pdf" rel="nofollow noreferrer">Bunne et al. (2023)</a> has shown that the dynamic formulation of optimal transport, also known as the Schrödinger bridge (SB) problem, admits a closed form solution for Gaussian measures. In other words, they provide closed-form expressions for SBs between Gaussian measures. This is exciting due to the connections to the recently popularized diffusion models.</p>
</li>
<li><p><a href="https://arxiv.org/pdf/2303.11844.pdf" rel="nofollow noreferrer">Chizat (2023)</a> has introduced the <span class="math-container">$(\lambda, \tau)$</span>-barycenter as the unique probability measure that minimizes the sum of entropic optimal transport costs. He has shown that for isotropic Gaussians, there is a closed-form solution.</p>
</li>
<li><p><a href="https://arxiv.org/pdf/2206.08780.pdf" rel="nofollow noreferrer">Bonet et al. (2023)</a> have proposed a closed form solution for calculating the Wasserstein distance between an arbitrary distribution and a uniform distribution, both situated on a circular domain.</p>
</li>
<li><p><a href="https://arxiv.org/pdf/2304.02402.pdf" rel="nofollow noreferrer">Beraha and Pegoraro (2023)</a> derive an (almost) closed form expression for optimal transportation of measures supported on the unit-circle.</p>
</li>
<li><p>Generalized Sobolev Transport (GST) for probability measures on a graph also admits a closed form solution: <a href="https://arxiv.org/pdf/2402.04516.pdf" rel="nofollow noreferrer">Mahey et al. (2024)</a></p>
</li>
<li><p>In another work, <a href="https://arxiv.org/pdf/2307.01770.pdf" rel="nofollow noreferrer">Mahey et al. (2024)</a> provide a new closed form of the Wasserstein distance in the particular case where either one of the input distributions is supported on a line.</p>
</li>
</ol>
<p>I would be happy to keep this list up to date and evolving.</p>
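<p>For concreteness, the most classical entry behind such lists is the 2-Wasserstein distance between Gaussians, <span class="math-container">$\mathcal W_2^2(\mathcal N(m_1,\Sigma_1),\mathcal N(m_2,\Sigma_2)) = \|m_1-m_2\|^2 + \operatorname{tr}\big(\Sigma_1+\Sigma_2-2(\Sigma_1^{1/2}\Sigma_2\Sigma_1^{1/2})^{1/2}\big)$</span>. A minimal NumPy sketch (function names are mine, not from any of the cited papers):</p>

```python
import numpy as np

def psd_sqrt(S):
    """Symmetric square root of a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def gaussian_w2_squared(m1, S1, m2, S2):
    """Squared 2-Wasserstein (Bures) distance between N(m1, S1) and N(m2, S2):
    |m1 - m2|^2 + tr(S1 + S2 - 2 (S1^{1/2} S2 S1^{1/2})^{1/2})."""
    r1 = psd_sqrt(S1)
    cross = psd_sqrt(r1 @ S2 @ r1)
    return float(np.sum((np.asarray(m1) - np.asarray(m2)) ** 2)
                 + np.trace(S1 + S2 - 2.0 * cross))
```

<p>For <span class="math-container">$\Sigma_1=\Sigma_2$</span> the covariance terms cancel and the distance reduces to the Euclidean distance between the means.</p>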
| <p>Optimal transport (OT) problems admit closed-form analytical solutions in a very few notable cases, e.g. in 1D or between Gaussians. Below I cite articles providing analytical solutions for the <strong>1-dimensional case</strong> only (does 1D mean univariate?)</p>
<p><strong>Formula 3</strong> in the following gives a closed-form analytical solution for Wasserstein distance in the case of 1-D probability distributions, but a source for the formula isn't given and I wonder how to convert it to a discretized linear programming model:</p>
<ol>
<li>Kolouri et al (2019) "Generalized Sliced Wasserstein Distances" <a href="https://arxiv.org/pdf/1902.00434.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1902.00434.pdf</a></li>
</ol>
<p><strong>Formula 9</strong> in the following also gives a closed-form solution:</p>
<ol start="2">
<li>Kolouri et al (2019) "Sliced-Wasserstein Auto-encoders" <a href="https://openreview.net/pdf?id=H1xaJn05FQ" rel="nofollow noreferrer">https://openreview.net/pdf?id=H1xaJn05FQ</a></li>
</ol>
<p><strong>Formula 7</strong> in the article below does as well:</p>
<ol start="3">
<li>Kolouri et al (2017) "Optimal Mass Transport: Signal-processing and machine learning applications" <a href="https://www.math.ucdavis.edu/%7Esaito/data/acha.read.s19/kolouri-etal_optimal-mass-transport.pdf" rel="nofollow noreferrer">https://www.math.ucdavis.edu/~saito/data/acha.read.s19/kolouri-etal_optimal-mass-transport.pdf</a></li>
</ol>
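<p>For what it's worth, the 1-D closed form in these papers is easy to sanity-check numerically: for two equal-size empirical samples the quantile functions are step functions, and <span class="math-container">$W_1 = \int_0^1 |F^{-1}(t)-G^{-1}(t)|\,dt$</span> collapses to the mean absolute difference of the sorted samples (a sketch; the test distributions are arbitrary):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=2000)
y = rng.normal(0.5, 1.5, size=2000)

# Quantile form of the 1-D closed-form solution: for equal-size samples the
# integral over quantile functions reduces to a sorted-sample average.
w1 = np.mean(np.abs(np.sort(x) - np.sort(y)))
```

<p>If SciPy is available, <code>scipy.stats.wasserstein_distance(x, y)</code> returns the same quantity and can serve as a cross-check.</p>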
|
geometry | <p>Having a circle with the centre $(x_c, y_c)$ with the radius $r$ how to know whether a point $(x_p, y_p)$ is inside the circle?</p>
| <p>The distance between $\langle x_c,y_c\rangle$ and $\langle x_p,y_p\rangle$ is given by the Pythagorean theorem as $$d=\sqrt{(x_p-x_c)^2+(y_p-y_c)^2}\;.$$ The point $\langle x_p,y_p\rangle$ is inside the circle if $d<r$, on the circle if $d=r$, and outside the circle if $d>r$. You can save yourself a little work by comparing $d^2$ with $r^2$ instead: the point is inside the circle if $d^2<r^2$, on the circle if $d^2=r^2$, and outside the circle if $d^2>r^2$. Thus, you want to compare the number $(x_p-x_c)^2+(y_p-y_c)^2$ with $r^2$.</p>
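<p>A direct translation of the squared-distance comparison into code (a minimal sketch; the function name is mine):</p>

```python
def point_in_circle(xp, yp, xc, yc, r):
    """Return True iff (xp, yp) lies strictly inside the circle of radius r
    centred at (xc, yc). Comparing squared quantities avoids the sqrt."""
    return (xp - xc) ** 2 + (yp - yc) ** 2 < r ** 2
```

<p>Use <code>&lt;=</code> instead of <code>&lt;</code> if points on the circle itself should count as inside.</p>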
| <p>The point is inside the circle if the distance from it to the center is less than <span class="math-container">$r$</span>. Symbolically, this is
<span class="math-container">$$\sqrt{|x_p-x_c|^2+|y_p-y_c|^2}< r.$$</span></p>
|
number-theory | <blockquote>
<p>I need to find a way of proving that the square roots of a finite set
of different primes are linearly independent over the field of
rationals. </p>
</blockquote>
<p>I've tried to solve the problem using elementary algebra
and also using the theory of field extensions, without success. Proving
linear independence of the square roots of two primes is easy, but beyond
that my problems arise. I would be very thankful for an answer to this question.</p>
| <p>Below is a simple proof from one of my old sci.math posts, followed by reviews of related papers.</p>
<p><strong>Theorem</strong> <span class="math-container">$ $</span> Let <span class="math-container">$\rm\,Q\,$</span> be a field with <span class="math-container">$2 \ne 0,\,$</span> and <span class="math-container">$\rm\, L = Q(S)\,$</span> be an extension of <span class="math-container">$\rm\,Q\,$</span> generated by <span class="math-container">$\rm\,n\,$</span> square roots <span class="math-container">$\rm\,S \!=\! \{ \sqrt{a}, \sqrt{b},\ldots \}$</span> of <span class="math-container">$\rm\ a,b,\,\ldots \in Q.\,$</span> If all nonempty subsets of <span class="math-container">$\rm S $</span> have product <span class="math-container">$\rm\not\in\! Q\,$</span> then each successive
adjunction <span class="math-container">$\rm\ Q(\sqrt{a}),\ Q(\sqrt{a},\sqrt{b}),\,\ldots$</span> doubles degree over <span class="math-container">$\rm Q,\,$</span> so, in total, <span class="math-container">$\rm\, [L:Q] = 2^n.\,$</span> Thus the <span class="math-container">$\rm 2^n$</span> subproducts of the product of <span class="math-container">$\rm\,S\, $</span> are a basis of <span class="math-container">$\rm\,L\,$</span> over <span class="math-container">$\rm\,Q.$</span></p>
<p><strong>Proof</strong> <span class="math-container">$\ $</span> By induction on the tower height <span class="math-container">$\rm\,n =$</span> number of root adjunctions. The Lemma below implies <span class="math-container">$\rm\ [1, \sqrt{a}\,]\ [1, \sqrt{b}\,] = [1, \sqrt{a}, \sqrt{b}, \sqrt{ab}\,]\ $</span> is a <span class="math-container">$\rm\,Q$</span>-vector space basis of <span class="math-container">$\rm\, Q(\sqrt{a}, \sqrt{b})\,$</span> iff <span class="math-container">$\:\!1\:\!$</span> is the only basis element in <span class="math-container">$\rm\,Q.\,$</span>
We lift it to <span class="math-container">$\rm\, n > 2\,$</span> i.e. <span class="math-container">$\, [1, \sqrt{a_1}\,]\ [1, \sqrt{a_2}\,]\cdots [1, \sqrt{a_n}\,]\,$</span> with <span class="math-container">$2^n$</span> elts.</p>
<p><span class="math-container">$\rm n = 1\!:\ L = Q(\sqrt{a})\ $</span> so <span class="math-container">$\rm\,[L:Q] = 2,\,$</span> since <span class="math-container">$\rm\,\sqrt{a}\not\in Q\,$</span> by hypothesis.</p>
<p><span class="math-container">$\rm n > 1\!:\ L = K(\sqrt{a},\sqrt{b}),\,\ K\ $</span> of height <span class="math-container">$\rm\,n\!-\!2.\,$</span> By induction <span class="math-container">$\rm\,[K:Q] = 2^{n-2} $</span> so we need only show: <span class="math-container">$\rm\ [L:K] = 4,\,$</span> since then <span class="math-container">$\rm\,[L:Q] = [L:K]\ [K:Q] = 4\cdot 2^{n-2}\! = 2^n.\,$</span> The lemma below shows <span class="math-container">$\rm\,[L:K] = 4\,$</span> if <span class="math-container">$\rm\ r = \sqrt{a},\ \sqrt{b},\ \sqrt{a\,b}\ $</span> all <span class="math-container">$\rm\not\in K,\,$</span>
which is true since, by induction applied to <span class="math-container">$\rm\,K(r)\,$</span> of height <span class="math-container">$\rm\,n\!-\!1,\,$</span> we get <span class="math-container">$\rm\,[K(r):K] = 2\,$</span> <span class="math-container">$\Rightarrow$</span> <span class="math-container">$\rm\,r\not\in K.\ \ $</span> <span class="math-container">$\bf\small QED$</span></p>
<hr />
<p><strong>Lemma</strong> <span class="math-container">$\rm\ \ [K(\sqrt{a},\sqrt{b}) : K] = 4\ $</span> if <span class="math-container">$\rm\ \sqrt{a},\ \sqrt{b},\ \sqrt{a\,b}\ $</span> all <span class="math-container">$\rm\not\in K\,$</span> and <span class="math-container">$\rm\, 2 \ne 0\,$</span> in <span class="math-container">$\rm\,K.$</span></p>
<p><strong>Proof</strong> <span class="math-container">$\ \ $</span> Let <span class="math-container">$\rm\ L = K(\sqrt{b}).\,$</span> <span class="math-container">$\rm\, [L:K] = 2\,$</span> by <span class="math-container">$\rm\,\sqrt{b} \not\in K,\,$</span> so it suffices to show <span class="math-container">$\rm\, [L(\sqrt{a}):L] = 2.\,$</span> This fails only if <span class="math-container">$\rm\,\sqrt{a} \in L = K(\sqrt{b})$</span> <span class="math-container">$\,\Rightarrow\,$</span> <span class="math-container">$\rm \sqrt{a}\ =\ r + s\ \sqrt{b}\ $</span> for <span class="math-container">$\rm\ r,s\in K,\,$</span> which is false, because squaring yields <span class="math-container">$\rm\,\color{#c00}{(1)}:\ \ a\ =\ r^2 + b\ s^2 + 2\,r\,s\ \sqrt{b},\, $</span> which is contrary to the hypotheses as follows:</p>
<p><span class="math-container">$\rm\qquad\qquad rs \ne 0\ \ \Rightarrow\ \ \sqrt{b}\ \in\ K\ \ $</span> by solving <span class="math-container">$\color{#c00}{(1)}$</span> for <span class="math-container">$\rm\sqrt{b},\,$</span> using <span class="math-container">$\rm\,2 \ne 0$</span></p>
<p><span class="math-container">$\rm\qquad\qquad\ s = 0\ \ \Rightarrow\ \ \ \sqrt{a}\ \in\ K\ \ $</span> via <span class="math-container">$\rm\ \sqrt{a}\ =\ r + s\ \sqrt{b}\ =\ r \in K$</span></p>
<p><span class="math-container">$\rm\qquad\qquad\ r = 0\ \ \Rightarrow\ \ \sqrt{a\,b}\in K\ \ $</span> via <span class="math-container">$\rm\ \sqrt{a}\ =\ s\ \sqrt{b},\, \ $</span>times <span class="math-container">$\rm\,\sqrt{b}.\qquad$</span> <span class="math-container">$\bf\small QED$</span></p>
<p>In the classical case <span class="math-container">$\rm\:Q\:$</span> is the field of rationals and the square roots have radicands being distinct primes. Here it is quite familiar that a product of any nonempty subset of them is irrational <a href="https://math.stackexchange.com/a/1104334/242">since,</a> over a UFD,
a product of coprime elements is a square iff each factor is a square
(mod unit multiples). Hence the classical case satisfies the theorem's hypotheses.</p>
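<p>For a quick computational check of the classical case: the theorem predicts <span class="math-container">$\rm\,[\mathbb Q(\sqrt2,\sqrt3,\sqrt5):\mathbb Q] = 2^3 = 8,\,$</span> and since <span class="math-container">$\sqrt2+\sqrt3+\sqrt5$</span> is a primitive element of this field, its minimal polynomial over <span class="math-container">$\mathbb Q$</span> must have degree <span class="math-container">$8$</span>. A sketch with SymPy (assuming SymPy is available):</p>

```python
from sympy import Symbol, minimal_polynomial, sqrt

x = Symbol('x')
# Minimal polynomial of a primitive element of Q(sqrt2, sqrt3, sqrt5) over Q;
# its degree equals the field extension degree predicted by the theorem.
p = minimal_polynomial(sqrt(2) + sqrt(3) + sqrt(5), x)
degree = p.as_poly(x).degree()   # expected: 8 = 2^3
```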
<p>Elementary proofs like that above are often credited to Besicovitch
(see below). But I have not seen his paper so I cannot say for sure
whether or not Besicovitch's proof is essentially the same as above.
Finally, see the papers reviewed below for some stronger results.</p>
<hr />
<p>2,33f 10.0X<br />
Besicovitch, A. S.<br />
On the linear independence of fractional powers of integers.<br />
J. London Math. Soc. 15 (1940). 3-6.</p>
<p>Let <span class="math-container">$\ a_i = b_i\ p_i,\ i=1,\ldots s\:,\:$</span> where the <span class="math-container">$p_i$</span> are <span class="math-container">$s$</span> different primes and
the <span class="math-container">$b_i$</span> positive integers not divisible by any of them. The author proves
by an inductive argument that, if <span class="math-container">$x_j$</span> are positive real roots of
<span class="math-container">$x^{n_j} - a_j = 0,\ j=1,...,s ,$</span> and <span class="math-container">$P(x_1,...,x_s)$</span> is a polynomial with
rational coefficients and of degree not greater than <span class="math-container">$n_j - 1$</span> with respect
to <span class="math-container">$x_j,$</span> then <span class="math-container">$P(x_1,...,x_s)$</span> can vanish only if all its coefficients vanish. <span class="math-container">$\quad$</span> Reviewed by W. Feller.</p>
<hr />
<p>15,404e 10.0X<br />
Mordell, L. J.<br />
On the linear independence of algebraic numbers.<br />
Pacific J. Math. 3 (1953). 625-630.</p>
<p>Let <span class="math-container">$K$</span> be an algebraic number field and <span class="math-container">$x_1,\ldots,x_s$</span> roots of the equations
<span class="math-container">$\ x_i^{n_i} = a_i\ (i=1,2,...,s)$</span> and suppose that (1) <span class="math-container">$K$</span> and all <span class="math-container">$x_i$</span> are real, or
(2) <span class="math-container">$K$</span> includes all the <span class="math-container">$n_i$</span> th roots of unity, i.e. <span class="math-container">$ K(x_i)$</span> is a Kummer field.
The following theorem is proved. A polynomial <span class="math-container">$P(x_1,...,x_s)$</span> with coefficients
in <span class="math-container">$K$</span> and of degrees in <span class="math-container">$x_i$</span>, less than <span class="math-container">$n_i$</span> for <span class="math-container">$i=1,2,\ldots s$</span>, can vanish only if
all its coefficients vanish, provided that the algebraic number field <span class="math-container">$K$</span> is such
that there exists no relation of the form <span class="math-container">$\ x_1^{m_1}\ x_2^{m_2}\:\cdots\: x_s^{m_s} = a$</span>, where <span class="math-container">$a$</span> is a number in <span class="math-container">$K$</span> unless <span class="math-container">$\ m_i \equiv 0 \mod n_i\ (i=1,2,...,s)$</span>. When <span class="math-container">$K$</span> is of the second type, the theorem was proved earlier by Hasse [Klassenkörpertheorie,
Marburg, 1933, pp. 187--195] by help of Galois groups. When <span class="math-container">$K$</span> is of the first
type and <span class="math-container">$K$</span> also the rational number field and the <span class="math-container">$a_i$</span> integers, the theorem was proved by Besicovitch in an elementary way. The author here uses a proof analogous to that used by Besicovitch [J. London Math. Soc. 15b, 3--6 (1940) these Rev. 2, 33]. <span class="math-container">$\quad$</span> Reviewed by H. Bergstrom.</p>
<hr />
<p>46 #1760 12A99<br />
Siegel, Carl Ludwig<br />
Algebraische Abhaengigkeit von Wurzeln. (German)<br />
Acta Arith. 21 (1972), 59-64.</p>
<p>Two nonzero real numbers are said to be equivalent with respect to a real
field <span class="math-container">$R$</span> if their ratio belongs to <span class="math-container">$R$</span>. Each real number <span class="math-container">$r \ne 0$</span> determines
a class <span class="math-container">$[r]$</span> under this equivalence relation, and these classes form a
multiplicative abelian group <span class="math-container">$G$</span> with identity element <span class="math-container">$[1]$</span>. If <span class="math-container">$r_1,\dots,r_h$</span>
are nonzero real numbers such that <span class="math-container">$r_i^{n_i}\in R$</span> for some positive integers <span class="math-container">$n_i\
(i=1,...,h)$</span>, denote by <span class="math-container">$G(r_1,...,r_h) = G_h$</span> the subgroup of <span class="math-container">$G$</span> generated by
<span class="math-container">$[r_1],\dots,[r_h]$</span> and by <span class="math-container">$R(r_1,...,r_h) = R_h$</span> the algebraic extension field of
<span class="math-container">$R = R_0$</span> obtained by the adjunction of <span class="math-container">$r_1,...,r_h$</span>. The central problem
considered in this paper is to determine the degree and find a basis of <span class="math-container">$R_h$</span>
over <span class="math-container">$R$</span>. Special cases of this problem have been considered earlier by A. S.
Besicovitch [J. London Math. Soc. 15 (1940), 3-6; MR 2, 33] and by L. J.
Mordell [Pacific J. Math. 3 (1953), 625-630; MR 15, 404]. The principal
result of this paper is the following theorem: the degree of <span class="math-container">$R_h$</span> with respect
to <span class="math-container">$R_{h-1}$</span> is equal to the index <span class="math-container">$j$</span> of <span class="math-container">$G_{h-1}$</span> in <span class="math-container">$G_h$</span>, and the powers <span class="math-container">$r_i^t\
(t=0,1,...,j-1)$</span> form a basis of <span class="math-container">$R_h$</span> over <span class="math-container">$R_{h-1}$</span>. Several interesting
applications and examples of this result are discussed. <span class="math-container">$\quad$</span> Reviewed by H. S. Butts</p>
| <p>Assume that there was some linear dependence relation of the form</p>
<p>$$ \sum_{k=1}^n c_k \sqrt{p_k} + c_0 = 0 $$</p>
<p>where $ c_k \in \mathbb{Q} $ and the $ p_k $ are distinct prime numbers. Let $ L $ be the smallest extension of $ \mathbb{Q} $ containing all of the $ \sqrt{p_k} $. We argue using the field trace $ T = T_{L/\mathbb{Q}} $. First, note that if $ d \in \mathbb{N} $ is not a perfect square, we have that $ T(\sqrt{d}) = 0 $. This is because $ L/\mathbb{Q} $ is Galois, and $ \sqrt{d} $ cannot be a fixed point of the action of the Galois group as it is not rational. This means that half of the Galois group maps it to its other conjugate $ -\sqrt{d} $, and therefore the sum of all conjugates cancels out. Furthermore, note that we have $ T(q) = 0 $ iff $ q = 0 $ for rational $ q $.</p>
<p>Taking traces on both sides we immediately find that $ c_0 = 0 $. Let $ 1 \leq j \leq n $ and multiply both sides by $ \sqrt{p_j} $ to get</p>
<p>$$ c_j p_j + \sum_{1 \leq k \leq n, k\neq j} c_k \sqrt{p_k p_j} = 0$$</p>
<p>Now, taking traces annihilates the second term entirely and we are left with $ T(c_j p_j) = 0 $, which implies $ c_j = 0 $. Since $ j $ was arbitrary, we conclude that all coefficients are zero, proving linear independence.</p>
|
differentiation | <p>In my complex analysis class the professor said that in complex analysis, if a function is differentiable once, it can be differentiated an infinite number of times. In real analysis there are cases where a function can be differentiated twice, but not 3 times.</p>
<p>Does anyone have an idea what he had in mind? I mean a specific example where a function can be differentiated two times but not three?</p>
<p><strong>EDIT.</strong> Thank you for the answers!
But if we replace $x\to z$ and treat it as a complex function, why do we not run into the same problem? Why, according to my professor, is it still differentiable at $0$?</p>
| <blockquote>
<p><em>What function cannot be differentiated $3$ times?</em></p>
</blockquote>
<p>Take an integrable discontinuous function (such as the sign function), and integrate it three times. Its first integral is the absolute value function, which is continuous, as are all further integrals. Three integrations of the sign function give $|x|^3/6$, whose second derivative is $|x|$; since $|x|$ is not differentiable at $0$, the result can be differentiated twice but not three times.</p>
| <p>$f(x) =\begin{cases}
x^3, & \text{if $x\ge 0$} \\
-x^3, & \text{if $x \lt 0$} \\
\end{cases}$</p>
<p>The first and second derivatives equal $0$ when $x=0$, but the third derivative does not exist there: $f''(x)=6|x|$, and its difference quotients at $0$ tend to $6$ from the right and $-6$ from the left.</p>
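<p>A quick numeric illustration of that last point (a sketch; <code>f2</code> below is the hand-computed second derivative $f''(x)=6|x|$):</p>

```python
def f2(x):
    # second derivative of f: (x**3)'' = 6x for x >= 0, (-x**3)'' = -6x for x < 0,
    # i.e. f''(x) = 6|x|
    return 6 * abs(x)

h = 1e-6
right = (f2(h) - f2(0)) / h       # one-sided difference quotient from the right
left = (f2(-h) - f2(0)) / (-h)    # ... and from the left
# right is approximately 6 and left approximately -6, so f'''(0) does not exist.
```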
|
linear-algebra | <p>I’m learning multivariate analysis, and I studied linear algebra for two semesters when I was a freshman.</p>
<p>Eigenvalues and eigenvectors are easy to calculate and the concept is not difficult to understand. I found that there are many applications of eigenvalues and eigenvectors in multivariate analysis. For example:</p>
<blockquote>
<p>In principal components, the proportion of total population variance due to the <span class="math-container">$k$</span>th principal component equals <span class="math-container">$$\frac{\lambda_k}{\lambda_1+\lambda_2+\ldots+\lambda_p}$$</span></p>
</blockquote>
<p>I think that multiplying an eigenvector by its eigenvalue has the same geometric effect as multiplying it by the matrix.</p>
<p>I think my former understanding may be too naive so that I cannot find the link between eigenvalue and its application in principal components and others.</p>
<p>I know how to derive almost every step from the assumption to the result mathematically. I’d like to know how to <em><strong>intuitively</strong></em> or <em><strong>geometrically</strong></em> understand eigenvalues and eigenvectors in the context of <strong>multivariate analysis</strong> (linear algebra is also fine).</p>
<p>Thank you!</p>
| <p>Personally, I feel that intuition isn't something which is easily explained. Intuition in mathematics is synonymous with experience and you gain intuition by working numerous examples. With my disclaimer out of the way, let me try to present a very informal way of looking at eigenvalues and eigenvectors.</p>
<p>First, let us forget about principal component analysis for a little bit and ask ourselves exactly what eigenvectors and eigenvalues are. A typical introduction to spectral theory presents eigenvectors as vectors which are fixed in direction under a given linear transformation. The scaling factor of these eigenvectors is then called the eigenvalue. Under such a definition, I imagine that many students regard this as a minor curiosity, convince themselves that it must be a useful concept and then move on. It is not immediately clear, at least to me, why this should serve as such a central subject in linear algebra.</p>
<p>Eigenpairs are a lot like the roots of a polynomial. It is difficult to describe why the concept of a root is useful, not because there are few applications but because there are too many. If you tell me all the roots of a polynomial, then mentally I have an image of how the polynomial must look. For example, all monic cubics with three real roots look more or less the same. So one of the most central facts about the roots of a polynomial is that they <em>ground</em> the polynomial. A root literally <em>roots</em> the polynomial, limiting its shape.</p>
<p>Eigenvectors are much the same. If you have a line or plane which is invariant then there is only so much you can do to the surrounding space without breaking the limitations. So in a sense eigenvectors are not important because they themselves are fixed but rather they limit the behavior of the linear transformation. Each eigenvector is like a skewer which helps to hold the linear transformation into place.</p>
<p>Very (very, very) roughly then, the eigenvalues of a linear mapping are a measure of the distortion induced by the transformation, and the eigenvectors tell you how the distortion is oriented. It is precisely this rough picture which makes PCA very useful. </p>
<p>Suppose you have a set of data which is distributed as an ellipsoid oriented in $3$-space. If this ellipsoid is very flat in some direction, then in a sense we can recover much of the information that we want even if we ignore the thickness of the ellipsoid. This is what PCA aims to do. The eigenvectors tell you how the ellipsoid is oriented and the eigenvalues tell you where the ellipsoid is distorted (where it's flat). If you choose to ignore the "thickness" of the ellipsoid then you are effectively compressing the eigenvector in that direction; you are projecting the ellipsoid onto the most informative direction to look at. To quote wiki:</p>
<blockquote>
<p>PCA can supply the user with a lower-dimensional picture, a "shadow" of this object when viewed from its (in some sense) most informative viewpoint</p>
</blockquote>
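<p>To connect this picture back to the formula in the question, here is a small NumPy sketch (synthetic data, arbitrary scales) that computes the eigenvalues of a sample covariance matrix and the proportion of variance each principal component explains:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
# anisotropic point cloud: standard deviations 3, 1 and 0.2 along the axes
X = rng.normal(size=(500, 3)) * np.array([3.0, 1.0, 0.2])

C = np.cov(X, rowvar=False)            # sample covariance matrix
eigvals = np.linalg.eigvalsh(C)[::-1]  # eigenvalues, sorted descending
explained = eigvals / eigvals.sum()    # proportion of variance per component
```

<p>The dominant eigenvalue captures almost all of the variance here, matching the "flat ellipsoid" picture above: dropping the smallest-eigenvalue direction loses very little information.</p>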
| <p>First let us think what a square matrix does to a vector. Consider a matrix <span class="math-container">$A \in \mathbb{R}^{n \times n}$</span>. Let us see what the matrix <span class="math-container">$A$</span> acting on a vector <span class="math-container">$x$</span> does to this vector. By action, we mean multiplication i.e. we get a new vector <span class="math-container">$y = Ax$</span>.</p>
<p>The matrix acting on a vector <span class="math-container">$x$</span> does two things to the vector <span class="math-container">$x$</span>.</p>
<ol>
<li>It scales the vector.</li>
<li>It rotates the vector.</li>
</ol>
<p>However, for any matrix <span class="math-container">$A$</span>, there are some <em>favored vectors/directions</em>. When the matrix acts on these favored vectors, the action essentially results in just scaling the vector. There is no rotation. These favored vectors are precisely the eigenvectors and the amount by which each of these favored vectors stretches or compresses is the eigenvalue.</p>
<p>So why are these eigenvectors and eigenvalues important? Consider the eigenvector corresponding to the maximum (absolute) eigenvalue. If we take a vector along this eigenvector, then the action of the matrix is maximum. <strong>No other vector when acted on by this matrix will get stretched as much as this eigenvector</strong>.</p>
<p>Hence, if a vector were to lie "close" to this eigen direction, then the "effect" of action by this matrix will be "large" i.e. the action by this matrix results in "large" response for this vector. The effect of the action by this matrix is high for large (absolute) eigenvalues and less for small (absolute) eigenvalues. Hence, the directions/vectors along which this action is high are called the principal directions or principal eigenvectors. The corresponding eigenvalues are called the principal values.</p>
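<p>Both points are easy to see numerically for a small symmetric matrix (a sketch; the matrix and test vector are arbitrary):</p>

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # symmetric; eigenvalues are 3 and 1

eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
v_max = eigvecs[:, -1]                 # eigenvector of the largest eigenvalue

# Acting on an eigenvector only scales it -- no rotation:
scaled = A @ v_max                     # equals eigvals[-1] * v_max

# A unit vector is never stretched by more than the largest eigenvalue:
u = np.array([0.6, -0.8])              # arbitrary unit vector
stretch = np.linalg.norm(A @ u)        # at most eigvals[-1] = 3
```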
|
logic | <p>In <a href="https://math.stackexchange.com/a/440219/1543">this answer</a>, user18921 wrote that the $\iff$ operation is associative, in the sense that</p>
<ul>
<li>$(A\iff B)\iff C$</li>
<li>$A\iff (B\iff C)$ </li>
</ul>
<p>are equivalent statements. </p>
<p>One can brute-force a proof fairly easily (just write down the truth table). But is there a more intuitive reason why this associativity must hold? </p>
| <p>I would say that I personally don't feel that this equivalence is that intuitive. (Maybe it is because I have been thinking too constructively these days.) In classical logic, we have the isomorphism with the Boolean algebra, and that makes it possible to think of $A \leftrightarrow B$ as $A \oplus B \oplus 1$ where $\oplus$ is the exclusive OR. This makes quite a short proof of the associativity of the biconditional. To me, this is the most intuitive way I can think of to justify the associativity.</p>
<p>But as J Marcos mentioned, the equivalence is no longer true in intuitionistic logic. This may serve as an explanation why the equivalence is not supposed to be as <em>intuitive</em>.</p>
<p>(It is straightforward to find a counterexample using the topological model. I will just work out the tedious details for you. Assume $\mathbb R$ is the topological space of interest with the usual topology. Define the valuation $[\![\cdot]\!]$ by $[\![A]\!] = (-1, 1)$, $[\![B]\!] = (0, \infty)$ and $[\![C]\!] = (-\infty, -1) \cup (0, 1)$. It follows that
\begin{align*}
[\![A \leftrightarrow B]\!] & =
[\![A \rightarrow B]\!] \cap [\![B \rightarrow A]\!]
= \text{int}([\![A]\!]^c \cup [\![B]\!]) \cap \text{int}([\![B]\!]^c \cup [\![A]\!]) \\
& = (-\infty, -1) \cup (0, 1) = [\![C]\!] \\
[\![(A \leftrightarrow B) \leftrightarrow C]\!]
& = [\![(A \leftrightarrow B) \to C]\!] \cap [\![C \to (A \leftrightarrow B)]\!]\\
& = \text{int}([\![A \leftrightarrow B]\!]^c \cup [\![C]\!]) \cap
\text{int}([\![C]\!]^c \cup [\![A \leftrightarrow B]\!]) = \mathbb R \\
[\![B \leftrightarrow C]\!] & =
[\![B \rightarrow C]\!] \cap [\![C \rightarrow B]\!] = \text{int}([\![B]\!]^c \cup [\![C]\!]) \cap \text{int}([\![C]\!]^c \cup [\![B]\!]) \\
& = (-1, 0) \cup (0, 1) = A - \{0\}\\
[\![A \leftrightarrow (B \leftrightarrow C)]\!]
& = [\![(B \leftrightarrow C) \to A]\!] \cap [\![A \to (B \leftrightarrow C)]\!] \\
& = \text{int}([\![B \leftrightarrow C]\!]^c \cup [\![A]\!]) \cap
\text{int}([\![A]\!]^c \cup [\![B \leftrightarrow C]\!]) \\
& = \mathbb R - \{0\} \ne \mathbb R
\end{align*}
where int is the interior and $^c$ is the complement.)</p>
| <p>This might be one of the situations where a more general (and therefore stronger) result is more intuitive (depending on how your intuition works). The result I have in mind is that, if you combine any list $\mathcal L$ of formulas (not just three like the $A,B,C$ in the question) by $\iff$, then no matter how you put the parentheses to make it unambiguous, the resulting formula is true if and only if an even number of the formulas in the list $\mathcal L$ are false. In other words, iterated $\iff$ amounts to a parity connective, regardless of the placement of parentheses.</p>
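<p>Both characterisations are easy to confirm mechanically. In Python, <code>==</code> on booleans is exactly the biconditional, so a brute-force truth table checks associativity and the even-number-of-falses description at once:</p>

```python
from itertools import product

# Check every assignment of truth values to A, B, C.
for a, b, c in product([True, False], repeat=3):
    left = (a == b) == c                        # (A <-> B) <-> C
    right = a == (b == c)                       # A <-> (B <-> C)
    parity = [a, b, c].count(False) % 2 == 0    # even number of falses?
    assert left == right == parity
```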
|
logic | <p>Given that the consistency of a system can be proven outside of the given formal system, Gödel says,</p>
<blockquote>
<p>It must be noted that proposition XI... represents no contradiction to the formalistic viewpoint of Hilbert....</p>
</blockquote>
<p>Why do others disagree?</p>
<p>Who cares whether a system cannot prove its own consistency? Why would we expect such a thing?</p>
<p>Is it possible that a theory without multiplication, but with some other axiom or axioms (i.e., weaker in one sense but stronger in another) could prove P consistent and itself consistent?</p>
| <p>You asked in a comment:</p>
<blockquote>
<p>Who cares if a theory can prove itself consistent, if it can be proven using some other method?</p>
</blockquote>
<p>Let us first observe that inconsistent theories are entirely worthless. You want to know if $X$ is true, and then you prove that it is a consequence of theory $T$, so you now go around believing $X$. If you later find that $T$ is inconsistent, then $T$ cannot lend any support whatever to $X$. You might as well be flipping coins to decide what theorems are true. $T$ is worthless; you will have to find some other reason to believe $X$.</p>
<p>So you would like to find some theory $T$ for which you have good reason to believe $T$ is consistent. One plausible-seeming method is to choose axioms that are as simple as possible and which seem intuitively true. But the crisis of set theory around 1905 shows that this method <em>does not always work</em>. The comprehension axiom of naïve set theory is as simple and intuitively plausible as anything, and it is false. The discovery that it was false precipitated the formalist programs of Hilbert and Whitehead & Russell, and was the direct motivation for Gödel's work.</p>
<p>So you might like a proof that theory $T$ is consistent. Unfortunately, Gödel's theorem shows that $T$ cannot do it and neither can any theory strictly weaker than $T$. So if you want to show that $T$ is not completely worthless, you need a theory, say $T'$, that is stronger in at least one aspect. So suppose you show that $T$ is consistent, using $T'$.</p>
<p>If $T'$ is chosen to be strictly stronger than $T$, so that $T'$ shows as much as $T$ does and also shows that $T$ is consistent, then now you are in a very stupid position. If you weren't sure that $T$ was consistent, $T'$ cannot help you, because if $T$ is inconsistent, then so is $T'$. So you didn't trust $T$, but you have even <em>less</em> reason to trust $T'$. Suppose Shady Joe offers to sell you the Empire State Building for $5. You are not sure, and suspect that Joe is trying to scam you. Joe insists that he is honest, and suggests his Uncle Louie as a character witness. So you go up to the state prison where Uncle Louie is serving a fifteen-year sentence for fraud, and, from his prison cell, Louie assures you that Shady Joe is offering you a fair deal.</p>
<p>But this is exactly the position we are in. We would like to prove things about numbers, say the twin prime conjecture. We have the Peano axioms, which we believe capture how numbers work. But how do we know that from these axioms there is not some proof, perhaps a very long one, that $4923 = 4925$? This would not show that $4923$ <em>actually</em> equals $4925$; it would only show that PA was broken. We would have to throw away PA and use something else instead, as we did with naïve set theory. If we knew that PA was consistent, we would know that there was no such proof. And there is a proof of the consistency of PA using ZF set theory! But this is like asking Uncle Louie if Shady Joe is telling the truth, because if PA is inconsistent, then so is ZFC; if PA was inconsistent, then ZFC would prove that PA was consistent <em>anyway</em>.</p>
<p><a href="http://plato.stanford.edu/entries/hilbert-program/" rel="noreferrer">Hilbert's program</a> aimed at "solving" the foundational issues (from Cantor and Frege, to Russell and Brouwer) by proving the <em>consistency</em> of the "main" mathematical theories, like arithmetic, real analysis and set theory.</p>
<p>In order to prove that, the program developed concepts and tools of metamathematics, i.e. the discipline based on mathematical logic that is able to study mathematical theories as mathematical objects themselves.</p>
<p>This metamathematical activity has to be "performed" into a "secure" mathematical theory: the "elementary" part of arithmetic.</p>
<p>This theory must supply the tools sufficient for the consistency proof of other theories.</p>
<p>Of course, this "gound-level" mathematical theory must be itself consistent.</p>
<p>Unfortunately, <a href="http://plato.stanford.edu/entries/goedel-incompleteness/" rel="noreferrer">Gödel's Second Incompleteness Theorem</a> shows that a formalized arithmetical theory having "enough resources" (this concept is made precise by the theorem) to be suitable for the metamatheatical aims is not able to prove its own consistency, and neither the consistency of "more powerful" theories like real analysis and set theory.</p>
<hr />
<p>Regarding Gödel's comment after Th.XI :</p>
<blockquote>
<p>I wish to note expressly that Theorem XI <em>does not contradict Hilbert's formalistic viewpoint</em> [emphasis added]. For this viewpoint presupposes only the existence of a consistency proof in which nothing but finitary means of proof is used, and it is conceivable that there exist finitary proofs that <em>cannot</em> be expressed in the formalism of <span class="math-container">$P$</span>.</p>
</blockquote>
<p>And this was exactly what happened with <a href="http://en.wikipedia.org/wiki/Gentzen%27s_consistency_proof" rel="noreferrer">Gentzen's consistency proof</a>.</p>
<p>Some comments are useful here :</p>
<ul>
<li><p>Hilbert's concept of <em>finitary</em> is not precise, but it is hard to escape the expectation that the "finitary part" of arithmetic must include <span class="math-container">$\mathsf{PA}$</span>; thus Gödel's theorem applies.</p>
</li>
<li><p>Gentzen's proof is hardly "finitistic".</p>
</li>
<li><p>the original aim of Hilbert's program was to "convince" intuitionistic mathematicians that axiomatized set theory was "secure"; if so, any "ostensive" consistency proof based on a model of, e.g., arithmetic, built up inside set theory was clearly useless.</p>
</li>
</ul>
<hr />
<p><em>Added</em></p>
<p>Regarding your added question, you can see (in general) :</p>
<ul>
<li>Peter Smith, <a href="https://books.google.it/books?id=-SBpYKebkJMC&printsec=frontcover" rel="noreferrer">An Introduction to Gödel's Theorems</a> (2nd ed 2013).</li>
</ul>
<p>In particular, you can see the overview of <a href="http://en.wikipedia.org/wiki/Presburger_arithmetic" rel="noreferrer">Presburger's arithmetic</a> for a (very weak) first-order theory of the natural numbers with addition, <strong>without</strong> multiplication.</p>
<p>This theory is <em>complete</em> and decidable; thus, it "eludes" Gödel's First Theorem.</p>
<p>Unfortunately, without multiplication it does not have enough resources to implement the arithmetization of syntax; thus, it is unable to "perform" the basic metamathematical tasks and so it cannot prove relevant metamathematical properties, like the <em>consistency</em> of a theory (nor its own).</p>
<p><a href="http://en.wikipedia.org/wiki/Gentzen%27s_consistency_proof" rel="noreferrer">Gentzen's consistency proof</a> in some way "eludes" Gödel's Second Theorem:</p>
<blockquote>
<p>Gentzen showed that the consistency of first-order arithmetic is provable, over the base theory of <em>primitive recursive arithmetic</em> with the additional principle of quantifier-free transfinite induction up to the ordinal <span class="math-container">$ε_0$</span>.</p>
<p>Gentzen's proof also highlights one commonly missed aspect of Gödel's second incompleteness theorem. It is sometimes claimed that the consistency of a theory can only be proved in a stronger theory. The theory obtained by adding quantifier-free transfinite induction to primitive recursive arithmetic proves the consistency of first-order arithmetic but is not stronger than first-order arithmetic.</p>
<p>For example, it does not prove ordinary mathematical induction for all formulae, while first-order arithmetic does (it has this as an axiom schema). The resulting theory is not weaker than first-order arithmetic either, since it can prove a number-theoretical fact - the consistency of first-order arithmetic - that first-order arithmetic cannot. The two theories are simply incomparable.</p>
</blockquote>
|
linear-algebra | <p>Most questions usually just relate to what these can be used for; that's fairly obvious to me since I've been programming 3D games/simulations for a while, but I've never really understood their inner workings... I could get the cross product equation as a determinant of a carefully-constructed matrix, </p>
<p>but what I want to ask is... How did the dot and cross product come to be? When were they "invented"? Some detailed proofs? Did someone say: "Hey, wouldn't it be nice if we could construct a way to calculate a vector that is perpendicular to two given operands?"</p>
<p>Basically, how/why do they work?</p>
<p>I would appreciate explanations, links to other explanations, other web resources... I've been searching the Internet lately for explanations, but most of them are on how to use it and nothing that really gives substance to it.</p>
| <p>A little bit more of the 'how and why': the dot product comes about as a natural answer to the question: 'what functions do we have that take two vectors and produce a number?' Keep in mind that we have a natural additive function (vector or componentwise addition) that takes two vectors and produces another vector, and another natural multiplicative function (scalar multiplication) that takes a vector and a number and produces a vector. (We might also want another function that takes two vectors and produces another vector, something more multiplicative than additive — but hold that thought!) For now we'll call this function $D$, and specifically use the notation $D({\bf v},{\bf w})$ for it as a function of the two vectors ${\bf v}$ and ${\bf w}$.</p>
<p>So what kind of properties would we want this hypothetical function to have? Well, it seems natural to start by not distinguishing the two things it's operating on; let's make $D$ symmetric, with $D({\bf v},{\bf w})=D({\bf w},{\bf v})$. Since we have convenient addition and multiplication functions it would be nice if it 'played nice' with them. Specifically, we'd love it to respect our addition for each variable, so that $D({\bf v}_1+{\bf v}_2,{\bf w}) = D({\bf v}_1,{\bf w})+D({\bf v}_2,{\bf w})$ and $D({\bf v},{\bf w}_1+{\bf w}_2) = D({\bf v},{\bf w}_1)+D({\bf v},{\bf w}_2)$; and we'd like it to commute with scalar multiplication similarly, so that $D(a{\bf v}, {\bf w}) = aD({\bf v}, {\bf w})$ and $D({\bf v}, a{\bf w}) = aD({\bf v}, {\bf w})$ — these two conditions together are called <em>linearity</em> (more accurately, 'bilinearity': it's linear in each of its arguments). What's more, we may have some 'natural' basis for our vectors (for instance, 'North/East/up', at least locally), but we'd rather it weren't tied to any particular basis; $D({\bf v},{\bf w})$ shouldn't depend on what basis ${\bf v}$ and ${\bf w}$ are expressed in (it should be <em>rotationally invariant</em>). Furthermore, since any multiple of our function $D$ will satisfy the same equations as $D$ itself, we may as well choose a normalization of $D$. Since $D(a{\bf v},a{\bf v}) = aD({\bf v},a{\bf v}) = a^2D({\bf v},{\bf v})$ it seems that $D$ should have dimensions of (length$^2$), so let's go ahead and set $D({\bf v},{\bf v})$ equal to the squared length of ${\bf v}$, $|{\bf v}|^2$ (or equivalently, set $D({\bf v},{\bf v})$ to $1$ for any unit vector ${\bf v}$; since we chose $D$ to be basis-invariant, any unit vector is as good as any other).</p>
<p>But these properties are enough to define the dot product! Since $$\begin{align}
|{\bf v}+{\bf w}|^2 &= D({\bf v}+{\bf w},{\bf v}+{\bf w}) \\
&= D({\bf v}+{\bf w},{\bf v})+D({\bf v}+{\bf w},{\bf w}) \\
&= D({\bf v},{\bf v})+D({\bf w},{\bf v})+D({\bf v},{\bf w})+D({\bf w},{\bf w})\\
&= D({\bf v},{\bf v})+2D({\bf v},{\bf w})+D({\bf w},{\bf w}) \\
&= |{\bf v}|^2+|{\bf w}|^2+2D({\bf v},{\bf w})
\end{align}$$
then we can simply set $D({\bf v},{\bf w}) = {1\over2} \bigl(|{\bf v}+{\bf w}|^2-|{\bf v}|^2-|{\bf w}|^2\bigr)$. A little arithmetic should convince you that this gives the usual formula for the dot product.</p>
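<p>A quick numerical sanity check (my own sketch, not part of the original answer): the polarization identity above really does recover the usual componentwise dot product from squared lengths alone.</p>

```python
def dot_from_lengths(v, w):
    # Polarization identity: D(v, w) = (|v+w|^2 - |v|^2 - |w|^2) / 2,
    # i.e. the dot product recovered purely from squared lengths.
    sq = lambda u: sum(x * x for x in u)
    s = [a + b for a, b in zip(v, w)]
    return (sq(s) - sq(v) - sq(w)) / 2

v, w = (1.0, 2.0, 3.0), (4.0, -5.0, 6.0)
print(dot_from_lengths(v, w))              # 12.0
print(sum(a * b for a, b in zip(v, w)))    # 12.0, the componentwise formula
```

<p>Both lines print the same value, as the "little arithmetic" in the answer promises.</p>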
<p>While the specific properties for the cross product aren't precisely the same, the core concept is: it's the only function that satisfies a fairly natural set of conditions. But there's one broad catch with the cross-product — two, actually, though they're related. One is that the fact that the cross product takes two vectors and produces a third is an artifact of $3$-dimensional space; in general the operation that the cross-product represents (orthogonality) can be formalized in $n$ dimensions either as a function from $(n-1)$ vectors to a single result or as a function from $2$ vectors that produces a <em>2-form</em>, essentially a $n(n-1)/2$-dimensional object; coincidentally when $n=3$ this means that the cross-product has the 'vector$\times$vector$\rightarrow$vector' nature that we were looking for. (Note that in $2$ dimensions the natural 'orthogonality' operation is essentially a function from one vector to one vector — it takes the vector $(x,y)$ to the vector $(y,-x)$!) The other catch is lurking in the description of the cross product as a 2-form; it turns out that this isn't <em>quite</em> the same thing as a vector! Instead it's essentially a <em>covector</em> - that is, a linear function from vectors to numbers (note that if you 'curry' the dot-product function $D$ above and consider the function $D_{\bf w}$ such that $D_{\bf w}({\bf v}) = D({\bf v},{\bf w})$, then the resulting object $D_{\bf w}$ is a covector). For most purposes we can treat covectors as just vectors, but not uniformly; the most important consequence of this is one that computer graphics developers have long been familiar with: normals don't transform the same way vectors do! 
In other words, if we have ${\bf u} = {\bf v}\times{\bf w}$, then for a transform $Q$ it's not (necessarily) the case that the cross product of transformed vectors $(Q{\bf v})\times(Q{\bf w})$ is the transformed result $Q{\bf u}$; instead it's the result ${\bf u}$ transformed by the so-called <em>adjoint</em> of $Q$ (roughly, the inverse of $Q$, with a few caveats). For more background on the details of this, I'd suggest looking into exterior algebra, geometric algebra, and in general the theory of linear forms.</p>
<p><strong>ADDED:</strong> Having spent some more time thinking about this over lunch, I think the most natural approach to understanding where the cross product 'comes from' is through the so-called <em>volume form</em>: a function $V({\bf u}, {\bf v}, {\bf w})$ from three vectors to a number that returns the (signed) volume of the rhomboid spanned by ${\bf u}$, ${\bf v}$, and ${\bf w}$. (This is also the determinant of the matrix with ${\bf u}$, ${\bf v}$, and ${\bf w}$ as its columns, but that's a whole different story...) Specifically, there are two key facts:</p>
<ol>
<li>Given a basis and given some linear function $f({\bf v})$ from vectors to numbers (remember that linear means that $f({\bf v}+{\bf w}) = f({\bf v})+f({\bf w})$ and $f(a{\bf v}) = af({\bf v})$), we can write down a vector ${\bf u}$ such that $f()$ is the same as the covector $D_{\bf u}$ (that is, we have $f({\bf v}) = D({\bf u}, {\bf v})$ for all ${\bf v}$). To see this, let the basis be $(\vec{e}_{\bf x}, \vec{e}_{\bf y}, \vec{e}_{\bf z})$; now let $u_{\bf x} = f(\vec{e}_{\bf x})$, and similarly for $u_{\bf y}$ and $u_{\bf z}$, and define ${\bf u} = (u_{\bf x},u_{\bf y},u_{\bf z})$ (in the basis we were provided). Obviously $f()$ and $D_{\bf u}$ agree on the three basis vectors, and so by linearity (remember, we explicitly said that $f$ was linear, and $D_{\bf u}$ is linear because the dot product is) they agree everywhere.</li>
<li>The volume form $V({\bf u}, {\bf v}, {\bf w})$ is linear in all its arguments - that is, $V({\bf s}+{\bf t}, {\bf v}, {\bf w}) = V({\bf s}, {\bf v}, {\bf w})+V({\bf t}, {\bf v}, {\bf w})$. It's obvious that the form is 'basis-invariant' — it exists regardless of what particular basis is used to write its vector arguments — and fairly obvious that it satisfies the scalar-multiplication property that $V(a{\bf u}, {\bf v}, {\bf w}) = aV({\bf u}, {\bf v}, {\bf w})$ (note that this is why we had to define it as a signed volume - $a$ could be negative!). The linearity under addition is a little bit trickier to see; it's probably easiest to think of the analogous area form $A({\bf v}, {\bf w})$ in two dimensions: imagine stacking the parallelograms spanned by $({\bf u}, {\bf w})$ and $({\bf v}, {\bf w})$ on top of each other to form a sort of chevron, and then moving the triangle formed by ${\bf u}$, ${\bf v}$ and ${\bf u}+{\bf v}$ from one side of the chevron to the other to get the parallelogram $({\bf u}+{\bf v}, {\bf w})$ with the same area. The same concept works in three dimensions by stacking rhomboids, but the fact that the two 'chunks' are the same shape is trickier to see. This linearity, incidentally, explains why the form changes signs when you swap arguments (that is, why $V({\bf u}, {\bf v}, {\bf w}) = -V({\bf v}, {\bf u}, {\bf w})$) : from the definition $V({\bf u}, {\bf u}, {\bf w}) = 0$ for any ${\bf u}$ (it represents the volume of a degenerate 2-dimensional rhomboid spanned by ${\bf u}$ and ${\bf w}$), and using linearity to break down $0 = V({\bf u}+{\bf v}, {\bf u}+{\bf v}, {\bf w})$ shows that $V({\bf u}, {\bf v}, {\bf w}) + V({\bf v}, {\bf u}, {\bf w}) = 0$.</li>
</ol>
<p>Now, the fact that the volume form $V({\bf u}, {\bf v}, {\bf w})$ is linear means that we can do the same sort of 'currying' that we talked about above and, for any two vectors ${\bf v}$ and ${\bf w}$, consider the function $C_{\bf vw}$ from vectors ${\bf u}$ to numbers defined by $C_{\bf vw}({\bf u}) = V({\bf u}, {\bf v}, {\bf w})$. Since this is a linear function (because $V$ is linear, by point 2), we know that we have some vector ${\bf c}$ such that $C_{\bf vw} = D_{\bf c}$ (by point 1). And finally, we <em>define</em> the cross product of the two vectors ${\bf v}$ and ${\bf w}$ as this 'vector' ${\bf c}$. This explains why the cross product is linear in both of its arguments (because the volume form $V$ was linear in all three of its arguments) and it explains why ${\bf u}\times{\bf v} = -{\bf v}\times{\bf u}$ (because $V$ changes sign on swapping two parameters). It also explains why the cross product isn't <em>exactly</em> a vector: instead it's really the linear function $C_{\bf vw}$ disguising itself as a vector (by the one-to-one correspondence through $D_{\bf c}$). I hope this helps explain things better!</p>
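<p>To make the "currying" step concrete, here is a short Python sketch (my own illustration, taking the volume form to be the $3\times 3$ determinant): reading off the components of $C_{\bf vw}$ against the standard basis reproduces the familiar cross-product formula.</p>

```python
def V(u, v, w):
    # Signed volume of the rhomboid spanned by u, v, w (the 3x3 determinant).
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
          - u[1] * (v[0] * w[2] - v[2] * w[0])
          + u[2] * (v[0] * w[1] - v[1] * w[0]))

def cross(v, w):
    # "Un-curry" C_vw: read off its components against the standard basis,
    # c_k = V(e_k, v, w), so that V(u, v, w) = D(u, cross(v, w)) for all u.
    e = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
    return tuple(V(ek, v, w) for ek in e)

v, w = (1, 2, 3), (4, 5, 6)
print(cross(v, w))   # (-3, 6, -3), matching the usual determinant formula
```

<p>By construction, $V({\bf u},{\bf v},{\bf w})$ equals the dot product of ${\bf u}$ with <code>cross(v, w)</code> for every ${\bf u}$, which is exactly the defining property of ${\bf c}$ in the answer.</p>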
| <p>It seems remarkable that no one remembers the true origin of the vector and dot products. The answer is actually hidden in the <strong>i</strong>, <strong>j</strong>, <strong>k</strong> notation for vectors. This is the same notation used for quaternions. It happens that Josiah Willard Gibbs did not like the quaternion formalism, which was mostly used in physics at his time, and as a result he came up with <strong>vector analysis</strong>. Recall that quaternions have these properties: <span class="math-container">$\mathbf i^2=\mathbf j^2=\mathbf k^2=\mathbf i \mathbf j \mathbf k=-1$</span> and what follows from these identities <span class="math-container">$\mathbf i \mathbf j =\mathbf k$</span>, <span class="math-container">$\mathbf j \mathbf k = \mathbf i$</span>, <span class="math-container">$\mathbf k \mathbf i = \mathbf j$</span> (for example, to get <span class="math-container">$\mathbf i \mathbf j =\mathbf k$</span> use <span class="math-container">$\mathbf i \mathbf j \mathbf k=-1$</span> and multiply by <span class="math-container">$\mathbf k$</span> from the <strong>right</strong>. It is important to keep track from which side you multiply, because quaternion multiplication is not <strong>commutative</strong> (for example if you take <span class="math-container">$\mathbf i \mathbf j =\mathbf k$</span> and multiply both sides by <span class="math-container">$\mathbf j$</span> from the <strong>right</strong> you will get <span class="math-container">$\mathbf i \mathbf j^2 =- \mathbf i = \mathbf k \mathbf j= - \mathbf j \mathbf k$</span>)).</p>
<p>Now take two "vectors" <span class="math-container">$\mathbf u=u_1 \mathbf i + u_2 \mathbf j + u_3 \mathbf k$</span> and <span class="math-container">$\mathbf v=v_1 \mathbf i + v_2 \mathbf j + v_3 \mathbf k$</span> and multiply them as quaternions. What you will discover is that the answer breaks into a real (scalar) part and an imaginary (vector) part. The real part (with the minus sign) will be the scalar (dot) product and the imaginary part will be the vector (cross) product.</p>
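<p>This is easy to verify by direct computation. A minimal Python sketch (my own, not from the original answer) multiplies two pure quaternions and separates the product into its scalar and vector parts:</p>

```python
def qmul(p, q):
    # Hamilton product of quaternions (w, x, y, z), using i^2=j^2=k^2=ijk=-1.
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

u, v = (0, 1, 2, 3), (0, 4, 5, 6)   # pure quaternions u1 i + u2 j + u3 k
w = qmul(u, v)
print(w[0])    # -32 = -(u . v)
print(w[1:])   # (-3, 6, -3) = u x v
```

<p>So the scalar part of the product is $-\,\mathbf u\cdot\mathbf v$ and the vector part is $\mathbf u\times\mathbf v$, exactly as the answer describes.</p>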
|
combinatorics | <p>why is $\sum\limits_{k=1}^{n} k^m$ a polynomial with degree $m+1$ in $n$?</p>
<p>I know this is well-known. But how to prove it rigorously? Even mathematical induction does not seem so straight-forward.</p>
<p>Thanks.</p>
| <p>Let $V$ be the space of all polynomials $f : \mathbb{N}_{\ge 0} \to F$ (where $F$ is a field of characteristic zero). Define the <em>forward difference operator</em> $\Delta f(n) = f(n+1) - f(n)$. It is not hard to see that the forward difference of a polynomial of degree $d$ is a polynomial of degree $d-1$, hence defines a linear operator $V_d \to V_{d-1}$ where $V_d$ is the space of polynomials of degree at most $d$. Note that $\dim V_d = d+1$. </p>
<p>We want to think of $\Delta$ as a discrete analogue of the derivative, so it is natural to define the corresponding discrete analogue of the integral $(\int f)(n) = \sum_{k=0}^{n-1} f(k)$. But of course we need to prove that this actually sends polynomials to polynomials. Since $(\int \Delta f)(n) = f(n) - f(0)$ (the "fundamental theorem of discrete calculus"), it suffices to show that the forward difference is surjective as a linear operator $V_d \to V_{d-1}$.</p>
<p>But by the "fundamental theorem," the image of the integral is precisely the subspace of $V_d$ of polynomials such that $f(0) = 0$, so the forward difference and integral define an isomorphism between $V_{d-1}$ and this subspace. </p>
<p>More explicitly, you can observe that $\Delta$ is upper triangular in the standard basis, work by induction, or use the <strong>Newton basis</strong> $1, n, {n \choose 2}, {n \choose 3}, ...$ for the space of polynomials. In this basis we have $\Delta {n \choose k} = {n \choose k-1}$, and now the result is <em>really</em> obvious.</p>
<p>The method of finite differences provides a fairly clean way to derive a formula for $\sum n^m$ for fixed $m$. In fact, for any polynomial $f(n)$ we have the "discrete Taylor formula"</p>
<p>$$f(n) = \sum_{k \ge 0} \Delta^k f(0) {n \choose k}$$</p>
<p>and it's easy to compute the numbers $\Delta^k f(0)$ using a finite difference table and then to replace ${n \choose k}$ by ${n \choose k+1}$. I wrote a blog post that explains this, but it's getting harder to find; I also explained it in my <a href="https://math.berkeley.edu/~qchu/TopicsInGF.pdf" rel="noreferrer">notes on generating functions</a>.</p>
| <p>You can set up a recursive formula for $\sum_{k=0}^n k^m $ by noting that</p>
<p>$$(n+1)^{m+1} = \sum_{k=0}^n (k+1)^{m+1}- \sum_{k=0}^n k^{m+1}$$
$$ = { m+1 \choose 1} \sum_{k=0}^n k^m
+ { m+1 \choose 2} \sum_{k=0}^n k^{m-1} + \cdots $$</p>
<p>by expanding the first summation on the RHS by the binomial theorem. Then shift all the other summations except $\sum_{k=0}^n k^m $ to the LHS.</p>
<p>So we get</p>
<p>$${ m+1 \choose 1} \sum_{k=0}^n k^m = (n+1)^{m+1} - { m+1 \choose 2} \sum_{k=0}^n k^{m-1}
- { m+1 \choose 3} \sum_{k=0}^n k^{m-2} - \cdots $$</p>
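<p>Solving this recursion bottom-up from $\sum_{k=0}^n k^0 = n+1$ evaluates $\sum_{k=0}^n k^m$ exactly. A quick Python sketch of the recursion (my own, using exact rational arithmetic):</p>

```python
from fractions import Fraction
from math import comb

def S(m, n):
    # sum_{k=0}^{n} k^m via the telescoping recursion:
    # (m+1) S_m(n) = (n+1)^{m+1} - sum_{j=2}^{m+1} C(m+1, j) S_{m+1-j}(n)
    if m == 0:
        return Fraction(n + 1)
    total = Fraction((n + 1) ** (m + 1))
    for j in range(2, m + 2):
        total -= comb(m + 1, j) * S(m + 1 - j, n)
    return total / (m + 1)

print(S(2, 10))   # 385 = 1 + 4 + ... + 100
```

<p>Using <code>Fraction</code> keeps the division by $m+1$ exact, so the result is always the integer value of the power sum.</p>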
|
logic | <p>I remember hearing several times the advice that, we should avoid using a proof by contradiction, if it is simple to convert to a direct proof or a proof by contrapositive. Could you explain the reason? Do logicians think that proofs by contradiction are somewhat weaker than direct proofs?</p>
<p>Is there any reason that one would still continue looking for a direct proof of some theorem, although a proof by contradiction has already been found? I don't mean improvements in terms of elegance or exposition, I am asking about logical reasons. For example, in the case of the "axiom of choice", there is obviously reason to look for a proof that does not use the axiom of choice. Is there a similar case for proofs by contradiction?</p>
| <p>To <a href="https://mathoverflow.net/q/12342">this MathOverflow question</a>, I posted the following <a href="https://mathoverflow.net/a/12400">answer</a> (and there are several other interesting answers there):</p>
<ul>
<li><em>With good reason</em>, we mathematicians prefer a direct proof of an implication over a proof by contradiction, when such a proof is available. (all else being equal)</li>
</ul>
<p>What is the reason? The reason is the <em>fecundity</em> of the proof, meaning our ability to use the proof to make further mathematical conclusions. When we prove an implication (p implies q) directly, we assume p, and then make some intermediary conclusions r<sub>1</sub>, r<sub>2</sub>, before finally deducing q. Thus, our proof not only establishes that p implies q, but also, that p implies r<sub>1</sub> and r<sub>2</sub> and so on. Our proof has provided us with additional knowledge about the context of p, about what else must hold in any mathematical world where p holds. So we come to a fuller understanding of what is going on in the p worlds.</p>
<p>Similarly, when we prove the contrapositive (¬q implies ¬p) directly, we assume ¬q, make intermediary conclusions r<sub>1</sub>, r<sub>2</sub>, and then finally conclude ¬p. Thus, we have also established not only that ¬q implies ¬p, but also, that it implies r<sub>1</sub> and r<sub>2</sub> and so on. Thus, the proof tells us about what else must be true in worlds where q fails. Equivalently, since these additional implications can be stated as (¬r<sub>1</sub> implies q), we learn about many different hypotheses that all imply q. </p>
<p>These kind of conclusions can increase the value of the proof, since we learn not only that (p implies q), but also we learn an entire context about what it is like in a mathematial situation where p holds (or where q fails, or about diverse situations leading to q). </p>
<p>With reductio, in contrast, a proof of (p implies q) by contradiction seems to carry little of this extra value. We assume p and ¬q, and argue r<sub>1</sub>, r<sub>2</sub>, and so on, before arriving at a contradiction. The statements r<sub>1</sub> and r<sub>2</sub> are all deduced under the contradictory hypothesis that p and ¬q, which ultimately does not hold in any mathematical situation. The proof has provided extra knowledge about a nonexistent, contradictory land. (Useless!) So these intermediary statements do not seem to provide us with any greater knowledge about the p worlds or the q worlds, beyond the brute statement that (p implies q) alone.</p>
<p>I believe that this is the reason that sometimes, when a mathematician completes a proof by contradiction, things can still seem unsettled beyond the brute implication, with less context and knowledge about what is going on than would be the case with a direct proof.</p>
<p>For an example of a proof where we are led to false expectations in a proof by contradiction, consider Euclid's theorem that there are infinitely many primes. In a common proof by contradiction, one assumes that p<sub>1</sub>, ..., p<sub>n</sub> are <em>all</em> the primes. It follows, since none of them divides the product-plus-one p<sub>1</sub>...p<sub>n</sub>+1, that this product-plus-one is also prime. This contradicts that the list was exhaustive. Now, many beginners falsely expect after this argument that whenever p<sub>1</sub>, ..., p<sub>n</sub> are prime, then the product-plus-one is also prime. But of course, this isn't true, and this would be a misplaced instance of attempting to extract greater information from the proof, misplaced because this is a proof by contradiction, and that conclusion relied on the assumption that p<sub>1</sub>, ..., p<sub>n</sub> were <em>all</em> the primes. If one organizes the proof, however, as a direct argument showing that whenever p<sub>1</sub>, ..., p<sub>n</sub> are prime, then there is yet another prime not on the list, then one is led to the true conclusion, that p<sub>1</sub>...p<sub>n</sub>+1 has merely a prime divisor not on the original list. (And Michael Hardy mentions that indeed Euclid had made the direct argument.)</p>
| <p>Most logicians consider proofs by contradiction to be equally valid, however some people are <a href="http://en.wikipedia.org/wiki/Constructivism_%28mathematics%29">constructivists/intuitionists</a> and don't consider them valid. </p>
<p>(<strong>Edit:</strong> This is not strictly true, as explained in comments. Only certain proofs by contradiction are problematic from the constructivist point of view, namely those that prove "A" by assuming "not A" and getting a contradiction. In my experience, this is usually exactly the situation that people have in mind when saying "proof by contradiction.")</p>
<p>One possible reason that the constructivist point of view makes a certain amount of sense is that statements like the continuum hypothesis are independent of the axioms, so it's a bit weird to claim that it's either true or false, in a certain sense it's neither.</p>
<p>Nonetheless constructivism is a relatively uncommon position among mathematicians/logicians. However, it's not considered totally nutty or beyond the pale. Fortunately, in practice most proofs by contradiction can be translated into constructivist terms, and actual constructivists are rather adept at doing so. So the rest of us mostly don't bother worrying about this issue, figuring it's the constructivists' problem.</p>
|
linear-algebra | <p>I have two $2\times 2$ matrices, $A$ and $B$, with the same determinant. I want to know if they are similar or not.</p>
<p>I solved this by using a matrix called $S$:
$$\left(\begin{array}{cc}
a& b\\
c& d
\end{array}\right)$$
and its inverse in terms of $a$, $b$, $c$, and $d$, then showing that there was no solution to $A = SBS^{-1}$. That worked fine, but what will I do if I have $3\times 3$ or $9\times 9$ matrices? I can't possibly make system that complex and solve it. How can I know if any two matrices represent the "same" linear transformation with different bases?</p>
<p>That is, how can I find $S$ that change of basis matrix?</p>
<p>I tried making $A$ and $B$ into linear transformations... but without the bases for the linear transformations I had no way of comparing them.</p>
<p>(I have read that similar matrices will have the same eigenvalues... and the same "trace" --but my class has not studied these yet. Also, it may be the case that some matrices with the same trace and eigenvalues are not similar so this will not solve my problem.)</p>
<p>I have one idea. Maybe if I look at the reduced col. and row echelon forms that will tell me something about the basis for the linear transformation? I'm not really certain how this would work though? Please help.</p>
| <p>There is something called "canonical forms" for a matrix; they are special forms for a matrix that can be obtained intrinsically from the matrix, and which will allow you to easily compare two matrices of the same size to see if they are similar or not. They are indeed based on eigenvalues and eigenvectors.</p>
<p>At this point, without the necessary machinery having been covered, the answer is that it is difficult to know if the two matrices are similar or not. The simplest test you can make is to see whether their <a href="http://en.wikipedia.org/wiki/Characteristic_polynomial">characteristic polynomials</a> are the same. This is <em>necessary</em>, but <strong>not sufficient</strong> for similarity (it is related to having the same eigenvalues).</p>
<p>Once you have learned about canonical forms, one can use either the <a href="http://en.wikipedia.org/wiki/Jordan_canonical_form">Jordan canonical form</a> (if the characteristic polynomial splits) or the <a href="http://en.wikipedia.org/wiki/Frobenius_normal_form">rational canonical form</a> (if the characteristic polynomial does not split) to compare the two matrices. They will be similar if and only if their rational forms are equal (up to some easily spotted differences; exactly analogous to the fact that two diagonal matrices are the same if they have the same diagonal entries, though the entries don't have to appear in the same order in both matrices).</p>
<p>The reduced row echelon form and the reduced column echelon form will not help you, because any two invertible matrices have the same forms (the identity), but need not have the same determinant (so they will not be similar). </p>
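<p>The characteristic-polynomial test is easy to carry out by machine. A Python sketch (my own illustration; the Faddeev–LeVerrier recursion is one standard way to get the coefficients exactly) — the example also shows why the test is necessary but <em>not</em> sufficient:</p>

```python
from fractions import Fraction

def charpoly(A):
    # Coefficients [1, c1, ..., cn] of det(xI - A) via Faddeev-LeVerrier:
    # M_k = A (M_{k-1} + c_{k-1} I),  c_k = -tr(M_k) / k.
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    M = [[Fraction(0)] * n for _ in range(n)]
    coeffs = [Fraction(1)]
    for k in range(1, n + 1):
        for i in range(n):
            M[i][i] += coeffs[-1]          # M + c_{k-1} I
        M = [[sum(A[i][t] * M[t][j] for t in range(n)) for j in range(n)]
             for i in range(n)]            # left-multiply by A
        coeffs.append(-sum(M[i][i] for i in range(n)) / k)
    return coeffs

A = [[1, 1], [0, 1]]                       # a shear
B = [[1, 0], [0, 1]]                       # the identity
print(charpoly(A) == charpoly(B))          # True: both have (x-1)^2
```

<p>Here the shear and the identity share the characteristic polynomial $(x-1)^2$, yet they are not similar (every $SBS^{-1}$ equals $B$), which is exactly the "necessary but not sufficient" caveat above.</p>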
| <p>My lecturer, Dr. Miryam Rossett, provided the following in her supplementary notes to her linear 1 course ( with a few small additions of my own ):</p>
<ol>
<li>Show that the matrices represent the same linear transformation according to different bases. This is generally hard to do.</li>
<li>If one is diagonalizable and the other not, then they are not similar.</li>
<li>Examine the properties of similar matrices. Do they have the same rank, the same trace, the same determinant, the same eigenvalues, the same characteristic polynomial. If any of these are different then the matrices are not similar.</li>
<li>Check the geometric multiplicity of each eigenvalue. If the matrices are similar they must match. Another way of looking at this is that for each <span class="math-container">$\lambda_i$</span>, <span class="math-container">$\dim \ker(\lambda_i I-A_k)$</span> must be equal for each matrix. This also implies that for each <span class="math-container">$\lambda_i$</span>, <span class="math-container">$\dim \text{im}(\lambda_i I-A_k)$</span> must be equal since <span class="math-container">$\dim \text{im}+\dim \ker=\dim V$</span></li>
<li>Assuming they're both diagonalizable, if they both have the same eigenvalues then they're similar because similarity is transitive. They're diagonalizable if the geometric multiplicities of the eigenvalues add up to <span class="math-container">$\dim V$</span>, or if all the eigenvalues are distinct, or if they have <span class="math-container">$\dim V$</span> linearly independent eigenvectors.</li>
</ol>
<p>Numbers 3 and 4 are necessary but not sufficient for similarity.</p>
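<p>Item 4 is often the quickest way to separate matrices that pass the trace/determinant/eigenvalue tests. A Python sketch (my own, using exact Gaussian elimination over the rationals) computing $\dim \ker(\lambda I - A)$:</p>

```python
from fractions import Fraction

def rank(M):
    # Rank via exact Gaussian elimination over the rationals.
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        pivot = M[r][col]
        M[r] = [x / pivot for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                M[i] = [a - M[i][col] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def geom_mult(A, lam):
    # dim ker(lam*I - A) = n - rank(lam*I - A)
    n = len(A)
    return n - rank([[(lam if i == j else 0) - A[i][j] for j in range(n)]
                     for i in range(n)])

A = [[1, 1], [0, 1]]                 # shear: char poly (x-1)^2
B = [[1, 0], [0, 1]]                 # identity: same char poly
print(geom_mult(A, 1), geom_mult(B, 1))   # 1 2 -> not similar
```

<p>The shear and the identity have the same characteristic polynomial, but their geometric multiplicities at $\lambda = 1$ differ, so by item 4 they cannot be similar.</p>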
|
geometry | <p>This is the parallel question to <a href="https://math.stackexchange.com/questions/44684/what-is-the-smallest-number-of-45-circ-60-circ-75-circ-triangles-that-a-squ">this other post with many answers already</a>, in the sense that the <span class="math-container">$(42^\circ,60^\circ,78^\circ)$</span>-similar triangles form the only non-trivial rational-angle tiling of the equilateral triangle (and the regular hexagon), modulo a real conjugation of the coordinate field (a subfield of <span class="math-container">$\mathbf{Q}(\zeta_{60})$</span>) which transforms between <span class="math-container">$(42^\circ,60^\circ,78^\circ)$</span>-similar triangles and <span class="math-container">$(6^\circ,60^\circ,114^\circ)$</span>-similar triangles. (Reference: M. Laczkovich's <em>Tilings of triangles</em>.)</p>
<p>My attempt has been the following:</p>
<p>Since <span class="math-container">$\sin(42^\circ)$</span> and <span class="math-container">$\sin(78^\circ)$</span> have nested radicals, I tried to get rid of them by restricting my basic tiling units to only the <span class="math-container">$60^\circ$</span>-angled isosceles trapezoids and parallelograms that are a single row of the triangular tiles. They have shorter-base-to-leg ratios of the form
<span class="math-container">$$m\cdot\frac{9-3\sqrt{5}}{2}+n\cdot\frac{11-3\sqrt{5}}{2}\quad\left(m,n\ge 0\right)$$</span>
which are automatically algebraic integers. Any potential tiling of the equilateral triangle from these quadrilateral units corresponds to some integer polynomial relation of the above algebraics, whose polynomial degree correlates with the number of quadrilateral pieces in the tiling.</p>
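<p>As a sanity check of the "automatically algebraic integers" claim, one can verify that the two basic ratios satisfy monic integer polynomials. A short Python sketch with exact arithmetic in $\mathbf{Q}(\sqrt{5})$ (my own; the polynomials $x^2-9x+9$ and $x^2-11x+19$ are computed by hand, not taken from the sources above):</p>

```python
from fractions import Fraction

class Q5:
    # Exact arithmetic in Q(sqrt(5)): the value represented is a + b*sqrt(5).
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)
    def __add__(self, o): return Q5(self.a + o.a, self.b + o.b)
    def __sub__(self, o): return Q5(self.a - o.a, self.b - o.b)
    def __mul__(self, o):
        return Q5(self.a * o.a + 5 * self.b * o.b,
                  self.a * o.b + self.b * o.a)
    def __eq__(self, o): return self.a == o.a and self.b == o.b

# x = (9 - 3*sqrt(5))/2 satisfies x^2 - 9x + 9 = 0 (monic, integer
# coefficients), hence is an algebraic integer; likewise
# y = (11 - 3*sqrt(5))/2 satisfies y^2 - 11y + 19 = 0.
x = Q5(Fraction(9, 2), Fraction(-3, 2))
y = Q5(Fraction(11, 2), Fraction(-3, 2))
print(x * x - Q5(9) * x + Q5(9) == Q5(0))    # True
print(y * y - Q5(11) * y + Q5(19) == Q5(0))  # True
```

<p>Since sums and products of algebraic integers are again algebraic integers, every non-negative integer combination of these two ratios is one as well.</p>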
<p>Unfortunately all the above algebraics have large norms, so a blind search for the desired polynomial is out of the question, and I had to reduce the pieces' proportions again to the rationals. I was able to find a <span class="math-container">$60^\circ$</span>-angled isosceles trapezoid with shorter-base-to-leg ratio of <span class="math-container">$10$</span> using <span class="math-container">$79$</span> tiles, and a <span class="math-container">$60^\circ$</span>-angled parallelogram with neighboring sides' ratio of <span class="math-container">$11$</span> using <span class="math-container">$80$</span> tiles. Thus a few more tiles produce a <span class="math-container">$60^\circ$</span>-angled rhombus, and another few more tiles produce a <span class="math-container">$60^\circ$</span>-angled isosceles trapezoid with shorter-base-to-leg ratio <span class="math-container">$1$</span>, three of which tile an equilateral triangle, using a total of <span class="math-container">$121\,170$</span> triangular tiles. While I was at it, I found <a href="https://math.stackexchange.com/questions/2215781/can-a-row-of-five-equilateral-triangles-tile-a-big-equilateral-triangle">this less related post</a> that might reduce my number of tiles to a bit below a hundred thousand.</p>
<p>Meanwhile, I also did a quick computer search through some conceptually simple configurations that attempt to tile the equilateral triangle using less than about <span class="math-container">$50$</span> tiles, and I found nothing at all.</p>
<p>I get this feeling that about a hundred thousand tiles is not the optimal amount for such a tiling, so I'm asking to see if people have better ideas. I'm unable to provide a cash incentive as the parallel post did, but anyone who tries this puzzle will surely have my gratitude.</p>
<hr />
<p>Edit suggested by RavenclawPrefect:</p>
<p>To get to the quadrilateral tiling units that I used, the first thing is to de-nest the radicals as I mentioned above. As <span class="math-container">$\mathbf{Q}(\zeta_{60})$</span> is Galois over <span class="math-container">$\mathbf{Q}(\sqrt{3})$</span> (the base field here should not be <span class="math-container">$\mathbf{Q}$</span> but instead the coordinate field of the equilateral triangle), if we can geometrically construct any length <span class="math-container">$\ell$</span> (or technically, ratio <span class="math-container">$\ell$</span>), such that when we perform the same geometric construction but with all the <span class="math-container">$42^\circ$</span> angles and <span class="math-container">$78^\circ$</span> angles swapped with each other, we still provably construct the same <span class="math-container">$\ell$</span>, then it must hold that <span class="math-container">$\ell\in\mathbf{Q}(\sqrt{5})$</span>, so that <span class="math-container">$\ell$</span> doesn't contain any nested radicals.</p>
<p>There were a couple of ideas on what <span class="math-container">$\ell$</span> should specifically be, most of them parallel ideas that can all be found in the parallel question for the square. I settled on the above <span class="math-container">$\mathbf{Q}(\sqrt{5})$</span>-quadrilaterals (the ones that are a single row of triangular tiles) because they had the smallest numerator norms among others. As a non-example, there was a double-decker idea using <span class="math-container">$9$</span> tiles that resulted in a trapezoid with ratio a rational multiple of <span class="math-container">$889-321\sqrt{5}$</span>, yuck. There was also some non-triviality in which way the triangles should be oriented when being put into a single row, but some more calculation showed that the above <span class="math-container">$(m,n)$</span> forms are all we really get. More precisely, a trapezoid also can't have <span class="math-container">$m=0$</span>, and a parallelogram also can't have <span class="math-container">$n=0$</span>.</p>
<p>After all that work, the rest has really been a matter of trial-and-error. Among all the <span class="math-container">$(m,n)$</span> forms, I picked a parallelogram with the smallest norm, which is an <span class="math-container">$(m,n)=(0,1)$</span> parallelogram with <span class="math-container">$4$</span> tiles, and rotated it so that it becomes a <span class="math-container">$\frac{11+3\sqrt{5}}{38}$</span>-parallelogram. Then <span class="math-container">$19$</span> of those make a <span class="math-container">$\frac{11+3\sqrt{5}}{2}$</span>-parallelogram with <span class="math-container">$76$</span> tiles, and obviously I combined it with a <span class="math-container">$(1,0)$</span>-trapezoid and a <span class="math-container">$(0,1)$</span>-parallelogram to get to the rational quadrilaterals.</p>
<p>So the process was more like "I frankly don't know what else to do" rather than "I see potential simplifications but I don't know the optimum". It's also why I'm seeking for completely new ideas (see above) that aren't found in the parallel question about the square.</p>
<p>RavenclawPrefect also asked a well-motivated question for if the same tiling could be performed but with congruent tiles. M. Laczkovich proved this is impossible in a subsequent paper <em>Tilings of Convex Polygons with Congruent Triangles</em>.</p>
| <p>I'm posting a new answer to this question, because the techniques I'm using differ substantially from the previous answer and it was already getting quite long. (Much of this answer was written prior to Anders' excellent answer, so it retreads some ground there.)</p>
<p><em>Edit:</em> This answer in turn has been updated many, many times with improved constructions - feel free to peruse the edit history to see my earlier attempts.</p>
<p>To start with, I'd like to better flesh out the constructions outlined in the OP, as I found looking at these diagrams helpful. Define a parallelogram of ratio <span class="math-container">$r$</span> as one with sides <span class="math-container">$1,r,1,r$</span> in cyclic order, and a trapezoid of ratio <span class="math-container">$r$</span> as one with sides <span class="math-container">$1,r,1,r+1$</span> in cyclic order. (I'll implicitly assume that everything has <span class="math-container">$60^\circ$</span> and <span class="math-container">$120^\circ$</span> angles and that all trapezoids are isosceles unless otherwise stated.)</p>
<p>Here is an isosceles trapezoid of ratio <span class="math-container">$\frac{9-3\sqrt{5}}2$</span> made from three <span class="math-container">$\color{blue}{42}-\color{green}{60}-\color{red}{78}$</span> triangles:</p>
<p><a href="https://i.sstatic.net/AEK8I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AEK8I.png" alt="enter image description here" /></a></p>
<p>Here is a parallelogram of a ratio <span class="math-container">$1$</span> larger (so with the same base) made from four such triangles:</p>
<p><a href="https://i.sstatic.net/El8CB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/El8CB.png" alt="enter image description here" /></a></p>
<p>(Note that it is <em>not</em> given by adding a triangle to the previous construction! The bottom three points are in the same location, though.)</p>
<p>As Edward H observes, we can actually extend either of the two shapes above by inserting a non-<span class="math-container">$60$</span>-degree parallelogram in between an edge where only red and blue angles meet; this lets us spend <span class="math-container">$2$</span> more triangles to create trapezoids and parallelograms of ratios <span class="math-container">$\frac{9-3\sqrt{5}}{2}$</span> more.</p>
<p>Now, some observations:</p>
<ul>
<li><p>A parallelogram of ratio <span class="math-container">$r$</span> is also a parallelogram of ratio <span class="math-container">$1/r$</span>: just turn it on its side!</p>
</li>
<li><p>Given two parallelograms of ratios <span class="math-container">$r,s$</span>, we can put them together to get a parallelogram of ratio <span class="math-container">$r+s$</span>.</p>
</li>
<li><p>Given a trapezoid of ratio <span class="math-container">$r$</span> and a parallelogram of ratio <span class="math-container">$s$</span>, we can put them together to get a trapezoid of ratio <span class="math-container">$r+s$</span>.</p>
</li>
<li><p>Given two trapezoids of ratios <span class="math-container">$r,s$</span>, we can turn one of them upside down and then put them together to get a parallelogram of ratio <span class="math-container">$r+s+1$</span> (because the top side is one unit shorter than the bottom side).</p>
</li>
<li><p>Given two trapezoids of ratios <span class="math-container">$r,s$</span>, we can put one on top of another to get a trapezoid of ratio <span class="math-container">$rs/(r+s+1)$</span>.</p>
</li>
</ul>
<p>This gives us an obvious path forward: start with our two basic trapezoid and parallelogram solutions (plus their extensions), then combine them in the above ways looking for small tilings of nice rational-ratio trapezoids and parallelograms until we find a set we can nicely fill an equilateral triangle with.</p>
<p>I wrote up some code to perform exact computations with elements of <span class="math-container">$\mathbb{Q}[\sqrt{5}]$</span>, and started storing all of the trapezoids and parallelograms one can make with up to around <span class="math-container">$90$</span> triangles, but bounding the size of the rational numbers involved to prevent the search space from getting too out of hand. (If I have a parallelogram of ratio <span class="math-container">$1173/292-46\sqrt{5}/377$</span>, I'm probably not going to end up needing it.)</p>
<p>This alone doesn't turn up very many rational-ratio shapes, so I ran a second script that checked amongst all of the shapes generated in the previous iteration for those whose irrational parts were the negatives of each other, and combined them into new, rational-ratio shapes.</p>
<p>The results of this search included many interesting constructions, including Anders Kaseorg's 72-triangle solution for a parallelogram of unit ratio, but for our purposes we can focus on three of them: an <span class="math-container">$89$</span>-tile trapezoid of ratio <span class="math-container">$3$</span>, a <span class="math-container">$97$</span>-tile trapezoid of ratio <span class="math-container">$2$</span>, and a <span class="math-container">$113$</span>-tile trapezoid of ratio <span class="math-container">$3/2$</span>, shown below from top to bottom:</p>
<p><a href="https://i.sstatic.net/tfdf1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tfdf1.png" alt="enter image description here" /></a></p>
<p>If we place the ratio-2 trapezoid on top of the ratio-3 trapezoid, we obtain a trapezoid of ratio <span class="math-container">$1$</span> using <span class="math-container">$186$</span> triangles:</p>
<p><a href="https://i.sstatic.net/YgHSY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YgHSY.png" alt="enter image description here" /></a></p>
<p>Triplicating this would let us obtain an equilateral triangle with <span class="math-container">$558$</span> triangles, but we can do better using the ratio-<span class="math-container">$3/2$</span> trapezoid. Observe that three trapezoids form an equilateral triangle if and only if their ratios multiply to <span class="math-container">$1$</span>. So, by stacking a ratio-<span class="math-container">$2$</span> trapezoid on a ratio-<span class="math-container">$3/2$</span> trapezoid, we obtain a ratio <span class="math-container">$2/3$</span> trapezoid (by the <span class="math-container">$rs/(r+s+1)$</span> rule above), which combined with another ratio-<span class="math-container">$3/2$</span> trapezoid and the ratio-<span class="math-container">$1$</span> trapezoid forms an equilateral triangle.</p>
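<p>The ratio arithmetic behind this construction can be double-checked with exact rational arithmetic; a quick sketch of my own (not the search code mentioned earlier), using the <span class="math-container">$rs/(r+s+1)$</span> stacking rule from the list above:</p>

```python
from fractions import Fraction as F

def stack(r, s):
    # Stacking a trapezoid of ratio r on top of one of ratio s gives
    # a trapezoid of ratio rs/(r+s+1).
    return r * s / (r + s + 1)

# The three trapezoids found by the search have ratios 3, 2, and 3/2.
r1 = stack(F(2), F(3))      # ratio-2 on ratio-3   -> ratio 1
r2 = stack(F(2), F(3, 2))   # ratio-2 on ratio-3/2 -> ratio 2/3
r3 = F(3, 2)                # the ratio-3/2 trapezoid, used as-is

# Three trapezoids tile an equilateral triangle iff their ratios multiply to 1.
assert (r1, r2) == (1, F(2, 3)) and r1 * r2 * r3 == 1
```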
<p>Here is an image of the full (asymmetric!) construction, with all <span class="math-container">$(89+97)+(97+113)+113=\textbf{509}$</span> triangles in one piece:</p>
<p><a href="https://i.sstatic.net/Dcxsn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dcxsn.png" alt="enter image description here" /></a></p>
| <p>From the OP, I'm using the fact that we can use <span class="math-container">$79$</span> triangles to tile a trapezoid with side lengths <span class="math-container">$11,1,10,1$</span> and angles of <span class="math-container">$60$</span> and <span class="math-container">$120$</span> degrees, as well as the parallelogram with side lengths <span class="math-container">$1$</span> and <span class="math-container">$11$</span> with <span class="math-container">$80$</span> triangles. This means that we can tile a "diamond" (the union of two edge-connected equilateral triangles) using <span class="math-container">$11\cdot80=880$</span> triangles.</p>
<p>We can then fit all these pieces onto a triangular grid: the trapezoid takes up <span class="math-container">$21$</span> triangles, the skinny parallelogram <span class="math-container">$22$</span>, and the diamond-shaped region just <span class="math-container">$2$</span> (but at a great cost). Of course, any of them can be scaled up by some integer factor and still lie on the grid.</p>
<p>Using some code I wrote to solve tiling problems plus some manual modifications, I have found the following packing of an isosceles trapezoid with base-to-leg ratio <span class="math-container">$1$</span> (in this case, scaled up on the triangular grid by a factor of <span class="math-container">$12$</span> in each dimension):</p>
<p><a href="https://i.sstatic.net/gUI0Y.png" rel="noreferrer"><img src="https://i.sstatic.net/gUI0Y.png" alt="enter image description here" /></a></p>
<p>It uses <span class="math-container">$12$</span> trapezoids and <span class="math-container">$19$</span> diamonds (the latter of varying sizes). Thus, tiling an equilateral triangle with three copies of this shape will use <span class="math-container">$3\cdot(12\cdot79+19\cdot880)=\textbf{53004}$</span> tiles.</p>
<p><strong>Edit</strong> by <em>nickgard</em>:<br />
A smaller tiling of the same trapezoid using <span class="math-container">$10$</span> long trapezoids and <span class="math-container">$12$</span> diamonds.<br />
<span class="math-container">$3\cdot(10\cdot79+12\cdot880)=\textbf{34050}$</span> tiles.</p>
<p><a href="https://i.sstatic.net/5u0VO.png" rel="noreferrer"><img src="https://i.sstatic.net/5u0VO.png" alt="enter image description here" /></a></p>
<p><em>(End of edit)</em></p>
<p><strong>EDIT (RavenclawPrefect):</strong> I've found some improved ways to tile parallelograms, which can be used along with nickgard's solution to reduce the number further.</p>
<p>Here is a tiling of a <span class="math-container">$1\times 2$</span> parallelogram with seven <span class="math-container">$1\times 11$</span> parallelograms (contrast with the <span class="math-container">$22$</span> it would take by joining two rhombi together):</p>
<p><a href="https://i.sstatic.net/CMiDz.png" rel="noreferrer"><img src="https://i.sstatic.net/CMiDz.png" alt="enter image description here" /></a></p>
<p>In general, one can tile a <span class="math-container">$1\times n$</span> parallelogram for <span class="math-container">$n=1,\ldots,9$</span> with <span class="math-container">$11,7,6,6,6,6,6,6,7$</span> skinny parallelograms; these values arise from taking a tiling of an <span class="math-container">$11\times n$</span> rectangle by squares (see <a href="https://oeis.org/A219158" rel="noreferrer">A219158</a> on OEIS) and applying an appropriate affine transformation.</p>
<p>For the <span class="math-container">$1\times 7$</span>, using <span class="math-container">$6$</span> skinny parallelograms gives us <span class="math-container">$6\cdot 80$</span>, but we can also use <span class="math-container">$6$</span> trapezoids as described in Edward H's comment on this answer for <span class="math-container">$6\cdot 79$</span> tiles, which offers a slight improvement.</p>
<p>Using these more efficient packings, I can fill in the "staircase" shape in nickgard's answer as follows:</p>
<p><a href="https://i.sstatic.net/KlJFd.png" rel="noreferrer"><img src="https://i.sstatic.net/KlJFd.png" alt="enter image description here" /></a></p>
<p>This uses a total of <span class="math-container">$4874$</span> tiles in the staircase, <span class="math-container">$4874+10\cdot79 = 5664$</span> in the trapezoid, and <span class="math-container">$\textbf{16992}$</span> in the triangle.</p>
<p><strong>Edit 2 (RavenclawPrefect):</strong> After lots of fiddling around with decomposing the "staircase" shape into nice axis-aligned parallelograms, I realized that I could just apply an affine transformation, turning the whole staircase into a very tall polyomino of size <span class="math-container">${10\choose 2}\cdot 11=495$</span> with "steps" of height <span class="math-container">$11$</span>, and try to tile the resulting thing with squares directly.</p>
<p>This resulted in a substantial improvement, giving a tiling with <span class="math-container">$46$</span> squares (hence, <span class="math-container">$1\times 11$</span> parallelograms once transformed back); the resulting image would not embed well due to its height, but I have uploaded it to imgur <a href="https://i.sstatic.net/VVn90.jpg" rel="noreferrer">here</a>. <em>Update:</em> I have slightly improved this tiling to a <span class="math-container">$45$</span>-square solution, seen <a href="https://i.sstatic.net/o6yJu.jpg" rel="noreferrer">here</a>.</p>
<p>This results in <span class="math-container">$3\cdot(45\cdot80+10\cdot79)=\textbf{13170}$</span> tiles.</p>
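<p>The tile-count bookkeeping above is easy to double-check (my own arithmetic, assuming the counts quoted in this answer: 79 tiles per long trapezoid, 80 per skinny parallelogram, 880 per diamond, and 4874 in the staircase):</p>

```python
# Tile counts quoted in this answer, rebuilt from their factorizations.
counts = {
    "original packing": 3 * (12 * 79 + 19 * 880),
    "nickgard's trapezoid": 3 * (10 * 79 + 12 * 880),
    "staircase refill": 3 * (4874 + 10 * 79),
    "45-square staircase": 3 * (45 * 80 + 10 * 79),
}
assert list(counts.values()) == [53004, 34050, 16992, 13170]
```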
<p>Ways this might be improved:</p>
<ul>
<li><p>Trying for a better packing of this <span class="math-container">$495$</span>-omino by squares - my search was not exhaustive, and I think there's at least a <span class="math-container">$30\%$</span> chance it can be tiled more efficiently.</p>
</li>
<li><p>Finding a better packing of some trapezoid or equilateral triangle with these same methods - I certainly haven't optimized things as much as I could.</p>
</li>
<li><p>Finding a more efficient "base" packing of either of the seed shapes used in this tiling, or generating new relatively simple polyiamonds that can be efficiently tiled with <span class="math-container">$42-60-78$</span> triangles.</p>
</li>
</ul>
|
probability | <p>I have a bit of trouble distinguishing the following concepts:</p>
<ul>
<li><em>probability measure</em></li>
<li><em>probability function</em> (with special cases <em>probability mass function</em> and <em>probability density function</em>)</li>
<li><em>probability distribution</em></li>
</ul>
<p>Are some of these interchangeable? Which of these are defined with respect to probability spaces and which with respect to random variables?</p>
| <p>The difference between the terms "probability measure" and "probability distribution" is in some ways more of a difference in <em>connotation</em> of the terms rather than a difference between the things that the terms refer to. It's more about the way the terms are used.</p>
<p>A probability distribution or a probability measure is a function assigning probabilities to measurable subsets of some set.</p>
<p>When the term "probability distribution" is used, the set is often <span class="math-container">$\mathbb R$</span> or <span class="math-container">$\mathbb R^n$</span> or <span class="math-container">$\{0,1,2,3,\ldots\}$</span> or some other very familiar set, and the actual values of members of that set are of interest. For example, one may speak of the temperature on December 15th in Chicago over the aeons, or the income of a randomly chosen member of the population, or the particular partition of the set of animals captured and tagged, where two animals are in the same part in the partition if they are of the same species.</p>
<p>When the term "probability measure" is used, often nobody cares just what the set <span class="math-container">$\Omega$</span> is, to whose subsets probabilities are assigned, and nobody cares about the nature of the members or which member is randomly chosen on any particular occasion. But one may care about the values of some function <span class="math-container">$X$</span> whose domain is <span class="math-container">$\Omega$</span>, and about the resulting probability distribution of <span class="math-container">$X$</span>.</p>
<p>"Probability mass function", on the other hand, is precisely defined. A probability mass function <span class="math-container">$f$</span> assigns a probability to each subset containing just <em>one</em> point of some specified set <span class="math-container">$S$</span>, and we always have <span class="math-container">$\sum_{s\in S} f(s)=1$</span>. The resulting probability distribution on <span class="math-container">$S$</span> is a <em>discrete</em> distribution. Discrete distributions are precisely those that can be defined in this way by a probability mass function.</p>
<p>"Probability density function" is also precisely defined. A probability density function <span class="math-container">$f$</span> on a set <span class="math-container">$S$</span> is a function that specifies probabilities assigned to measurable subsets <span class="math-container">$A$</span> of <span class="math-container">$S$</span> as follows:
<span class="math-container">$$
\Pr(A) = \int_A f\,d\mu
$$</span>
where <span class="math-container">$\mu$</span> is a "measure", a function assigning non-negative numbers to measurable subsets of <span class="math-container">$S$</span> in a way that is "additive" (i.e. <span class="math-container">$\mu\left(A_1\cup A_2\cup A_3\cup\cdots\right) = \mu(A_1)+\mu(A_2)+\mu(A_3)+\cdots$</span> if every two <span class="math-container">$A_i,A_j$</span> are mutually exclusive). The measure <span class="math-container">$\mu$</span> need not be a probability measure; for example, one could have <span class="math-container">$\mu(S)=\infty\ne 1$</span>. For example, the function
<span class="math-container">$$
f(x) = \begin{cases} e^{-x} & \text{if }x>0, \\ 0 & \text{if }x<0, \end{cases}
$$</span>
is a probability density on <span class="math-container">$\mathbb R$</span>, where the underlying measure is one for which the measure of every interval <span class="math-container">$(a,b)$</span> is its length <span class="math-container">$b-a$</span>.</p>
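<p>As a quick numerical sanity check on this last example (my own addition, not part of the original answer), a Riemann sum confirms that this density integrates to <span class="math-container">$1$</span> against Lebesgue measure:</p>

```python
import math

# Left-endpoint Riemann sum of f(x) = e^(-x) over (0, 50] with step dx;
# the tail beyond 50 contributes only e^(-50), which is negligible.
dx = 1e-4
total = sum(math.exp(-i * dx) for i in range(1, 500001)) * dx
assert abs(total - 1) < 1e-3
```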
| <p>This is my 2 cents, though I'm not an expert:</p>
<p>"Probability Measure" is used in a more precise, mathematically theoretical context. Kolmogorov in 1933 laid down some mathematical constructs to help better understand and handle probabilities from a mathematically rigorous point of view. In a nutshell - he defined a "Probability Space" which consists of a set of events, a (<span class="math-container">$\sigma$</span>)-algebra/field on that set (<span class="math-container">$\approx$</span> all the different ways you can subset that original set), and a measure which maps these subsets to a number that measures them. This became the standard way of understanding probability. This framework is important because once you start thinking about probability the way mathematicians do, you encounter all kinds of edge cases and problems - which the framework can help you define or avoid.</p>
<p>So, I would say that people who use "Probability Measure" are either involved with deep probability issues, or are simply more math oriented by their education.</p>
<p>Note that a "Probability Space" precedes a "Random Variable" (also known as a "Measurable Function") - which is defined to be a function from the original space to a measurable space, often real-valued. I'm not sure, but I think the main point here is that this allows us to use more "number-oriented" math than "space-oriented" math. We map the "space" into numbers, and now we can work more easily with it. (There's nothing to prevent us from starting with a "number space", e.g., <span class="math-container">$\mathbb R$</span>, and defining the identity mapping as the Random Variable; but a lot of events are not intrinsically numbers - think of Heads or Tails, and the mapping of them into the numbers 0 or 1.)</p>
<p>Once we are in the realm of numbers (real line <span class="math-container">$\mathbb R$</span>), we can define <strong>Probability Functions</strong> to help us characterize the behavior of these fantastic probability beasts. The main function is called the "Cumulative Distribution Function" (CDF) - it exists for all valid probability spaces and for all valid random variables, and it completely defines the behavior of the beast (unlike, say, the mean of a random variable, or the variance: you can have different probability beasts with the same mean or the same variance, and even both). It keeps track of how the probability measure is distributed across the real line.</p>
<p>If the random variable mapping is continuous, you will also have a Probability Density Function (PDF); if it's discrete, you will have a Probability Mass Function (PMF). If it's mixed, it's complicated.</p>
<p>I think "Probability Distribution" might mean either of these things, but most often it will be used in a less mathematically precise way, as it's sort of an umbrella term - it can refer to the distribution of measure on the original space, or the distribution of measure on the real line, characterized by the CDF or PDF/PMF.</p>
<p>Usually, if there's no need to go deep into the math, people will stay on the level of "probability function" or "probability distribution". Though some will venture to the realms of "probability measure" without real justification except the need to be absolutely mathematically precise.</p>
|
number-theory | <p>The background of this question is this: Fermat proved that the equation,
$$x^4+y^4 = z^2$$</p>
<p>has no solution in the positive integers. If we consider the near-miss,
$$x^4+y^4-1 = z^2$$</p>
<p>then this has plenty (in fact, an infinity, as it can be solved by a Pell equation). But J. Cullen, by exhaustive search, found that the other near-miss,
$$x^4+y^4+1 = z^2$$</p>
<p>has none with $0 < x,y < 10^6$.</p>
<p>Does the third equation really have none at all, or are the solutions just enormous?</p>
| <p>A related fun fact. Assuming a <a href="http://arxiv.org/abs/0901.2093" rel="nofollow">conjecture of Tyszka</a>, one "only" has to search up to $10^{38.532}$ to determine if the equation has finitely many solutions.</p>
<p><em>(<strong>Edit</strong>: Tyszka's conjecture is false, see <a href="http://arxiv.org/abs/1309.2682" rel="nofollow">http://arxiv.org/abs/1309.2682</a> for details.)</em> </p>
<p>Express the equation as a restricted form system:</p>
<blockquote>
<p>$x_1=1,$<br>
$x_3=x_2*x_2,$<br>
$x_4=x_3*x_3$ (here $x_2$ is the $x$ in the original equation),<br>
$x_6=x_5*x_5,$<br>
$x_7=x_6*x_6$ (here $x_5$ is the $y$),<br>
$x_9=x_8*x_8$ (here $x_8$ is the $z$),<br>
$x_{10}=x_4+x_7,$<br>
$x_{10}=x_9+x_1$ (this pulls it all together).</p>
</blockquote>
<p>This requires 10 variables, so is a subset of Tyszka's system $E_{10}$. By Tyszka's conjecture, if this system has finitely many solutions in the integers then every solution has every variable assigned a value with absolute value at most $2^{2^{n-1}}=2^{512}$. Hence $|x|^4,|y|^4,|z|^2 \le 2^{512}$ so $|x|,|y|\le 2^{128}\lt 10^{38.532}$.</p>
<p>Note that there is a big gap between $10^6$ and the bound given by Tyszka's conjecture. Even if the conjecture <del>does hold</del> <em>had held</em>, there may be infinitely many solutions, but the smallest one may have $x,y>10^{38.53}$.</p>
<p>The message here is just that searching the interval from $0$ to $10^6$ isn't enough.</p>
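<p>For reference, the search itself is easy to reproduce at a small scale; a sketch of my own, with a bound far below Cullen's $10^6$:</p>

```python
from math import isqrt

# Miniature version of J. Cullen's exhaustive search for x^4 + y^4 + 1 = z^2.
BOUND = 500
solutions = []
for x in range(1, BOUND):
    x4 = x ** 4
    for y in range(x, BOUND):
        n = x4 + y ** 4 + 1
        r = isqrt(n)
        if r * r == n:
            solutions.append((x, y, r))

print(solutions)  # prints []: no solutions with 0 < x <= y < 500
```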
| <p>I tried the heuristic from the Hardy-Littlewood circle method for this equation. The heuristic suggests that the number of solutions within the range $\max\{\vert x\vert,\vert y\vert,\vert z\vert\}<N$ should be something like</p>
<p>$$\sigma_{\infty}\prod_{p}\sigma_p,$$</p>
<p>where $\sigma_{\infty}$ and $\sigma_{p}$ are the real density and the local densities, respectively.</p>
<p>$$\sigma_{\infty}=\lim_{\epsilon\rightarrow 0}\frac{1}{2\epsilon}\vert\{(x,y,z)\mid\max\{\vert x\vert,\vert y\vert,\vert z\vert\}<N,\vert x^4+y^4+1-z^2\vert<\epsilon\}\vert$$</p>
<p>$$\sigma_p=\lim_{n\rightarrow \infty}\vert\{(x,y,z)\mid x^4+y^4+1\equiv z^2\mod p^n\}\vert/p^{2n}$$</p>
<p>These densities have explicit formulas (if I calculated them correctly):</p>
<p>if $p\equiv 3 \mod 4$,
$$\sigma_p=1-1/p.$$</p>
<p>if $p\equiv 1 \mod 4$,
$$\sigma_p=1+\frac{1+6(-1)^{(p-1)/4}}{p}-\frac{2a}{p^2},$$</p>
<p>where $p=a^2+b^2$, $a\equiv 3\mod 4$.</p>
<p>$\sigma_2=1$ and $$\sigma_{\infty}\approx\frac{B(1/4,1/4)}{2}\log 2N\approx 3.70815\log 2N,$$ where $B$ is the Euler beta function.</p>
<p>The infinite product over prime numbers $p$ is not absolutely convergent, but it is indeed convergent. </p>
<p>$$\prod_{p}\sigma_p\approx 0.0193327.$$</p>
<p>$(\pm x,\pm y,\pm z)$ and $(\pm y,\pm x,\pm z)$ are all solutions of the Diophantine equation, so the number of essentially different solutions should be $$\frac{B(1/4,1/4)}{32}\prod_{p}\sigma_p\cdot\log 2N\approx 0.0044805\log 2N.$$ So, if this heuristic works, the first solution may occur near $N\sim\exp(1/0.0044805)/2\approx 4\times 10^{96}$, which means $x,y\sim 10^{48}$. It is possible that the solutions are enormous, but I am not sure if there are other obstructions to the equation.</p>
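<p>These constants can be sanity-checked numerically. The sketch below is my own code; since the product is only conditionally convergent, truncating it at the primes below $10^5$ (an arbitrary cutoff) gives only a rough approximation of $\prod_p\sigma_p$ and of the coefficient $\frac{B(1/4,1/4)}{32}\prod_p\sigma_p$:</p>

```python
import math

def primes(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, math.isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(2, n + 1) if sieve[p]]

def sigma(p):
    # local density at p, per the formulas above
    if p == 2:
        return 1.0
    if p % 4 == 3:
        return 1.0 - 1.0 / p
    # p = 1 (mod 4): write p = a^2 + b^2 with a odd, then fix the sign
    # of a so that a = 3 (mod 4)
    a = next(c for c in range(1, math.isqrt(p) + 1, 2)
             if math.isqrt(p - c * c) ** 2 == p - c * c)
    if a % 4 != 3:
        a = -a
    return 1.0 + (1 + 6 * (-1) ** ((p - 1) // 4)) / p - 2.0 * a / p ** 2

prod = 1.0
for p in primes(10 ** 5):
    prod *= sigma(p)

# B(1/4,1/4) = Gamma(1/4)^2 / Gamma(1/2)
coeff = math.gamma(0.25) ** 2 / math.gamma(0.5) / 32 * prod
print(prod, coeff)  # compare with the quoted 0.0193327 and 0.0044805
```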
<hr>
<p><strong>Edit:</strong> I guess things are different for Euler's quartic $$x^4+y^4+z^4=w^4.$$
The real density for this surface is still about $c\log N$, but the infinite product over primes diverges to $\infty$. The infinite product grows like $(\log N)^r$, where $r$ is some real number. So I think the integral points on Euler's quartic are quite "dense" compared to $x^4+y^4+1=z^2$.</p>
|
probability | <p>The formula for computing a k-combination with repetitions from n elements is:
$$\binom{n + k - 1}{k} = \binom{n + k - 1}{n - 1}$$</p>
<p>I would like if someone can give me a simple basic proof that a beginner can understand easily.</p>
| <p>This problem goes by many names - stars and bars, balls and urns - it's basically a question of how to distribute <span class="math-container">$n$</span> objects (call them "balls") into <span class="math-container">$k$</span> categories (call them "urns"). We can think of it as follows.</p>
<p>Take <span class="math-container">$n$</span> balls and <span class="math-container">$k-1$</span> dividers. If a ball falls between two dividers, it goes into the corresponding urn. If there's nothing between two dividers, then there's nothing in the corresponding urn. Let's look at this with a concrete example.</p>
<p>I want to distribute <span class="math-container">$5$</span> balls into <span class="math-container">$3$</span> urns. As before, take <span class="math-container">$5$</span> balls and <span class="math-container">$2$</span> dividers. Visually:</p>
<p>|ooo|oo</p>
<p>In this order, we'd have nothing in the first urn, three in the second urn and two balls in the third urn. The question then is how many ways can we arrange these 5 balls and two dividers? Clearly: <span class="math-container">$$\dfrac{(5+3-1)!}{5!(3-1)!} = \displaystyle {7 \choose 2} = {7 \choose 5}.$$</span></p>
<p>We have that there are <span class="math-container">$\dfrac{(n+(k-1))!}{(k-1)!\,n!}$</span> ways to arrange the <span class="math-container">$n$</span> balls and <span class="math-container">$k-1$</span> dividers (since the balls aren't distinct from each other and the dividers aren't distinct from each other). Notice that this is equal to <span class="math-container">$$\displaystyle {n+k-1 \choose k-1} = {n + k - 1 \choose n}.$$</span></p>
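<p>For small parameters this count is easy to confirm by direct enumeration (a quick check of my own, not part of the original argument):</p>

```python
from itertools import combinations_with_replacement
from math import comb

# 5 balls into 3 urns = 5-multisets from 3 types: C(7,5) = C(7,2) = 21 ways.
n_urns, n_balls = 3, 5
multisets = list(combinations_with_replacement(range(n_urns), n_balls))
assert len(multisets) == comb(n_urns + n_balls - 1, n_balls) == 21
```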
| <p>OK, suppose I draw (with replacement) $k$ items from the $n$, and mark them down on a scoresheet that looks like this, by putting an X in the appropriate column each time I draw an item.</p>
<p><img src="https://i.sstatic.net/5s0uS.png" alt="scoresheet"></p>
<p>The result will be $k$ Xs, separated by ($n-1$) vertical bars. Counting the Xs and the vertical bars together is $k+n-1$ items. And what I've drawn is entirely determined by which of those $k+n-1$ items are the Xs. </p>
<p>This is clearly $\binom{k+n-1}{k}$.</p>
<p><img src="https://i.sstatic.net/j0IWh.png" alt="scoresheet filled out"></p>
|
logic | <p>I'm reading some materials on mathematical logic. I wonder how we can "prove" metalogical properties (soundness, completeness, etc.)? As at this point, the proof system has not been verified yet. Isn't this a chicken-and-egg question (we may then have meta-metalogic)? What types of proofs are considered valid at this stage?</p>
| <p>Short answer: yes, it is essentially a chicken-and-egg problem, or perhaps a hermeneutical circle or a spiral. The common interpretation of the incompleteness theorems is that this circularity cannot be avoided. </p>
<p>You can use normal mathematical reasoning for the metatheory - which is not formalized in the first place - or you can choose some formal theory. In the latter case, you can choose a stronger metatheory, like ZFC, or a weaker one, like PRA. </p>
<p>In principle, you would then be able to choose a meta-metatheory, a meta-meta-metatheory, etc., but in practice essentially all the interesting issues arrive at the theory/metatheory level, and the higher levels just repeat these issues rather than giving new issues. </p>
| <p>This is actually not a disagreement with Carl Mummert's answer, but another way of thinking about it. No, it isn't circular. But you don't prove what you might imagine you prove.</p>
<p>All actual proofs in mathematics are in what we call "mathematical English" -- we <em>believe</em> this is all completely rock solid, but essentially it is in the form of sentences and paragraphs and so on. Sometimes (rarely) you even diagram those sentences and break them down into something that looks more like formulas, and follow rigid rules (see Russell's "game on paper" interpretation of math), but ultimately the idea is to be "completely convinced" that a statement is true.</p>
<p>The informality of the above is completely necessary and unavoidable!</p>
<p>When we "prove theorems about theorems" or any other meta-mathematical idea, what we really do is design (in our informal mathematical English) a model where we represent statements as numbers, and so on, so that we can say "this number represents a sentence" and "this number represents a proof of this statement from this set of statements" and so on, we're actually making a statement about numbers.</p>
<p>You <em>convince yourself</em> that this statement about the natural numbers is really the same as the metamathematical statement you care about. This can't be formalized either, but should be seen as "obvious" once explained properly. Then you can prove this statement about numbers in the normal mathematical sense.</p>
|
differentiation | <p>Consider a function <span class="math-container">$g: \mathbb{R^3} \to \mathbb{R}$</span> defined by <span class="math-container">$g(x,y,z) = z^2 -x^3 + 2z + 3y^3$</span> </p>
<p>Find the gradient of <span class="math-container">$g$</span> at the point <span class="math-container">$(2,1,-1)$</span>.</p>
<p>Is the gradient <span class="math-container">$\nabla g(2,1,-1)$</span> given by a vector, that is, <span class="math-container">$\nabla g(2,1,-1) = -12i + 9j$</span>? If so, then what does the directional derivative mean?</p>
| <p>The gradient is a vector; it points in the direction of steepest ascent.</p>
<p>The directional derivative is a number; it is the rate of change when your point in $\Bbb R^3$ moves in that direction. (You can imagine "reducing" your function to a function of a single variable, say $t$, by "slicing" the graph in that direction; the directional derivative is then just the 1-D derivative of that "sliced" function.)</p>
| <p>Be careful that directional derivative of a function is a scalar while gradient is a vector.</p>
<p>The only difference between <em>derivative</em> and <em>directional derivative</em> is the definition of those terms. Remember:</p>
<ul>
<li>Directional derivative is the instantaneous rate of change (which is a scalar) of $f(x,y)$ in the direction of the unit vector $u$.</li>
<li>Derivative is the rate of change of $f(x,y)$, which can be thought of the slope of the function at a point $(x_0,y_0)$.</li>
</ul>
|
number-theory | <p>The polynomial $n^2+n+41$ famously takes prime values for all $0\le n\lt 40$. I have read that this is closely related to the fact that 163 is a Heegner number, although I don't understand the argument, except that the discriminant of $n^2+n+41$ is $-163$. The next smaller Heegner number is 67, and indeed $n^2+n+17$ shows the same behavior, taking prime values for $0\le n\lt 16$. </p>
<p>But 163 is the largest Heegner number, which suggests that this is the last such coincidence, and that there might not be any $k>41$ such that $n^2+n+k$ takes on an unbroken sequence of prime values.</p>
<p>Is this indeed the case? If not, why not?</p>
<p>Secondarily, is there a $k>41$, perhaps extremely large, such that $n^2+n+k$ takes on prime values for $0<n<j$ for some $j>39$?</p>
| <p>See <a href="http://en.wikipedia.org/wiki/Heegner_number#Consecutive_primes" rel="noreferrer">http://en.wikipedia.org/wiki/Heegner_number#Consecutive_primes</a> </p>
<p>Here is <a href="http://gdz.sub.uni-goettingen.de/dms/load/img/?PID=GDZPPN002167719&physid=PHYS_0159" rel="noreferrer">a longer piece Rabinowitz published on the topic below</a></p>
<p>Rabinowitz showed, in 1913, that $x^2 + x + p$ represents the maximal number of consecutive primes if and only if $x^2 + x y + p y^2$ is the only (equivalence class of) positive binary quadratic form of its discriminant. This condition is called "class number one." For negative discriminants the set of such discriminants is finite; these are the Heegner numbers. </p>
<p>Note that if we take $x^2 + x + ab$ so that the constant term is composite, we get composite outcome both for $x=a$ and $x=b,$ so the thing quits early. In the terms of Rabinowitz, we would also have the form $a x^2 + x y + b y^2,$ which would be a distinct "class" in the same discriminant, thus violating class number one. So it all fits together. That is, it is "reduced" if $0 < a \leq b,$ and distinct from the principal form if also $a > 1.$</p>
<p>For binary forms, I particularly like Buell, here is his page about class number one: <a href="http://books.google.com/books?id=qMa8-FDOR8YC&pg=PA81&lpg=PA81&dq=class%20number%20one%20binary%20quadratic%20forms&source=bl&ots=0I1wN7aIMg&sig=J-VYw4fNijIKFFF02oHkmaBSsb4&hl=en&sa=X&ei=LQsHUdWMCJDSigKPiYGwBA&ved=0CEsQ6AEwBA#v=onepage&q=class%20number%20one%20binary%20quadratic%20forms&f=false" rel="noreferrer">BUELL</a>. He does everything in terms of binary forms, but he also gives the relationship with quadratic fields. Furthermore, he allows both positive and indefinite forms, finally he allows odd $b$ in $a x^2 + b x y + c y^2.$ Note that I have often answered MSE questions about ideals in quadratic fields with these techniques, which are, quite simply, easy. Plus he shows the right way to do Pell's equation and variants as algorithms, which people often seem to misunderstand. Here I mean Lagrange's method mostly.</p>
<p>EEDDIITT: I decided to figure out the easier direction of the Rabinowitz result in my own language. So, we begin with the principal form, $\langle 1,1,p \rangle$ with some prime $p \geq 3. $ It follows that $2p-4 \geq p-1$ and
$$ p-2 \geq \frac{p-1}{2}. $$
Now, suppose we have another, distinct, reduced form of the same discriminant,
$$ \langle a,b,c \rangle. $$ There is no loss of generality in demanding $b > 0.$ So reduced means $$ 1 \leq b \leq a \leq c $$ with $b$ odd, both $a,c \geq 2,$ and
$$ 4ac - b^2 = 4 p - 1. $$ As $b^2 \leq ac,$ we find $3 b^2 \leq 4p-1 $ and, as $p$ is positive,
$ b \leq p.$ </p>
<p>Now, define $$ b = 2 \beta + 1 $$ or $ \beta = \frac{b-1}{2}. $ From earlier we have
$$ \beta \leq p-2. $$
However, from $4ac - b^2 = 4p-1$ we get $4ac+1 = b^2 + 4 p,$ then
$ 4ac + 1 = 4 \beta^2 + 4 \beta + 1 + 4 p, $ then $4ac = 4 \beta^2 + 4 \beta + 4 p, $ then
$$ ac = \beta^2 + \beta + p, $$
with $0 \leq \beta \leq p-2.$ That is, the presence of a second equivalence class of forms has forced $x^2 + x + p$ to represent a composite number ($ac$) with a small value $ x = \beta \leq p-2,$ thereby interrupting the supposed sequence of primes. $\bigcirc \bigcirc \bigcirc \bigcirc \bigcirc$ </p>
<p>EEDDIITTEEDDIITT: I should point out that the discriminants in question must actually be field discriminants, certain things must be squarefree. If I start with $x^2 + x + 7,$ the related $x^2 + x y + 7 y^2$ of discriminant $-27$ is of class number one, but there is the imprimitive form $ \langle 3,3,3 \rangle $ with the same discriminant. Then, as above, we see that $x^2 + x + 7 = 9$ with $x=1 =(3-1)/2.$</p>
<p>[ Rabinowitz, G. “Eindeutigkeit der Zerlegung in Primzahlfaktoren in quadratischen Zahlkörpern.” <em>Proc. Fifth Internat. Congress Math.</em> (Cambridge) <strong>1</strong>, 418-421, 1913. ]</p>
<p>Edit October 30, 2013. Somebody asked at deleted question
<a href="https://math.stackexchange.com/questions/546368/smallest-integer-n-s-t-fn-n2-n-41-is-composite">Smallest positive integer $n$ s.t. f(n) = $n^2 + n + 41$ is composite?</a>
about the other direction in Rabinowitz (1913), and a little fiddling revealed that I also know how to do that. We have a positive principal form $$ \langle 1,1,p \rangle $$ where $$ - \Delta = 4 p - 1 $$ is also prime. Otherwise there is a second genus unless $ - \Delta = 27 $ or $ - \Delta = 343 $ or similar prime power, which is a problem I am going to ignore; our discriminant is minus a prime, $ \Delta = 1- 4 p . $ </p>
<p>We are interested in the possibility that $n^2 + n + p$ is composite for $0 \leq n \leq p-2.$ If so, let $q$ be the smallest prime that divides such a composite number. We have $n^2 + n + p \equiv 0 \pmod q.$ This means $(2n+1)^2 \equiv \Delta \pmod q,$ so we know $\Delta$ is a quadratic residue. Next $n^2 + n + p \leq (p-1)^2 + 1.$ So, the smallest prime factor is smaller than $p-1,$ and $q < p,$ so $q < - \Delta.$</p>
<p>Let's see, the two roots of $n^2 + n + p \pmod q$ add up to $q-1.$ One may confirm that if $m = q-1-n,$ then $m^2 + m + p \equiv 0 \pmod q.$ However, we cannot have $n = (q-1)/2$ with $n^2 + n + p \pmod q,$ because that implies $\Delta \equiv 0 \pmod q,$ and we were careful to make $- \Delta$ prime, with $q < - \Delta.$ Therefore there are two distinct values of $n \pmod q,$ the two values add to $q-1.$ As a result, there is a value of $n$ with $n^2 + n + p \equiv 0 \pmod q$ and $n < \frac{q-1}{2}.$</p>
<p>Using the change of variable matrix
$$
\left(
\begin{array}{cc}
1 & n \\
0 & 1
\end{array}\right)
$$
written on the right, we find that
$$ \langle 1,1,p \rangle \sim \langle 1,2n+1,qs \rangle $$
where $n^2 + n + p = q s,$ with $2n+1 < q$ and $q \leq s.$ As a result, the new form
$$ \langle q,2n+1,s \rangle $$
is of the same discriminant but is already reduced, and is therefore not equivalent to the principal form. Thus, the class number is larger than one. </p>
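<p>A brute-force sketch in Python (not part of the argument above, just a sanity check) recovers the correspondence numerically: among odd primes $p < 50$, the ones for which $n^2 + n + p$ is prime for all $0 \leq n \leq p-2$ are exactly those with $4p-1$ a Heegner number.</p>

```python
def is_prime(m):
    # Simple trial division; fine for the small numbers involved here.
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def full_prime_run(p):
    # True iff n^2 + n + p is prime for every 0 <= n <= p - 2.
    return all(is_prime(n * n + n + p) for n in range(p - 1))

winners = [p for p in range(3, 50) if is_prime(p) and full_prime_run(p)]
print(winners)                       # [3, 5, 11, 17, 41]
print([4 * p - 1 for p in winners])  # [11, 19, 43, 67, 163], all Heegner numbers
```

<p>Note that $p = 7$ fails (at $n = 1$ we get $9$), matching the imprimitive-form caveat for discriminant $-27$ discussed above.</p>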
| <p>To answer the second part of your question: The requirement that $n^2+n+k$ be prime for $0\lt n\lt40$ defines a <a href="http://en.wikipedia.org/wiki/Prime_k-tuple#Prime_constellations">prime constellation</a>. The <a href="http://en.wikipedia.org/wiki/First_Hardy%E2%80%93Littlewood_conjecture#First_Hardy.E2.80.93Littlewood_conjecture">first Hardy–Littlewood conjecture</a>, which is widely believed to be likely to hold, states that the asymptotic density of prime constellations is correctly predicted by a probabilistic model based on independently uniformly distributed residues with respect to all primes. Here's a Sage session that calculates the coefficient for your prime constellation:</p>
<pre><code>sage: P = Primes ();
sage: coefficient = 1;
sage: for i in range (0,10000) :
... p = P.unrank (i);
... if (p < 39^2 + 39) :
... admissible = list (true for j in range (0,p));
... for n in range (1,40) :
... admissible [(n^2+n) % p] = false;
... count = 0;
... for j in range (0,p) :
... if (admissible [j]) :
... count = count + 1;
... else :
... count = p - 39;
... coefficient = coefficient * (count / p) / (1 - 1 / p)^39;
...
sage: coefficient.n ()
2.22848364649549e27
</code></pre>
<p>The change upon doubling the cutoff $10000$ is in the third digit, so the number of such prime constellations up to $N$ is expected to be asymptotically given approximately by $2\cdot10^{27}N/\log^{39}N$. This is $1$ for $N\approx4\cdot10^{54}$, so although there are expected to be infinitely many, you'd probably have quite a bit of searching to do to find one. There are none except the one with $k=41$ up to $k=1000000$:</p>
<pre><code>sage: P = Primes ();
sage: for k in range (0,1000000) :
... success = true;
... for n in range (1,40) :
... if not (n^2 + n + k) in P :
... success = false;
... break;
... if success :
... print k;
41
</code></pre>
|
logic | <p>I just read this whole article:
<a href="http://web.maths.unsw.edu.au/~norman/papers/SetTheory.pdf" rel="noreferrer">http://web.maths.unsw.edu.au/~norman/papers/SetTheory.pdf</a><br>
which is also discussed over here:
<a href="https://math.stackexchange.com/questions/356264/infinite-sets-dont-exist/">Infinite sets don't exist!?</a></p>
<p>However, the paragraph which I found most interesting is not really discussed there. I think this paragraph illustrates where most (read: close to all) mathematicians fundementally disagree with Professor NJ Wildberger. I must admit that I'm a first year student mathematics, and I really don't know close to enough to take sides here. Could somebody explain me here why his arguments are/aren't correct?</p>
<p><strong>These edits are made after the answer from Asaf Karagila.</strong><br>
<strong>Edit</strong> $\;$ I've shortened the quote a bit, I hope this question can be reopened ! The full paragraph can be read at the link above.<br>
<strong>Edit</strong> $\;$ I've listed the quotes from his article, I find most intresting: </p>
<ul>
<li><em>The job [of a pure mathematician] is to investigate the mathematical reality of the world in which we live.</em></li>
<li><em>To Euclid, an Axiom was a fact that was sufficiently obvious to not require a proof.</em></li>
</ul>
<p>And from a discussion with the author on the internet:</p>
<p><em>You are sharing with us the common modern assumption that mathematics is built up from
"axioms". It is not a position that Newton, Euler or Gauss would have had a lot of sympathy with, in my opinion. In this course we will slowly come to appreciate that clear and careful definitions are a much preferable beginning to the study of mathematics.</em></p>
<p>Which leads me to the following question: Is it true that with modern mathematics it is becoming less important for an axiom to be self-evident? It sounds to me that ancient mathematics was much more closely related to physics then it is today. Is this true ?</p>
<blockquote>
<h1>Does mathematics require axioms?</h1>
<p>Mathematics does not require "Axioms". The job of a pure mathematician
is not to build some elaborate castle in the sky, and to proclaim that
it stands up on the strength of some arbitrarily chosen assumptions.
The job is to <em>investigate the mathematical reality of the world in
which we live</em>. For this, no assumptions are necessary. Careful
observation is necessary, clear definitions are necessary, and correct
use of language and logic are necessary. But at no point does one need
to start invoking the existence of objects or procedures that we
cannot see, specify, or implement.</p>
<p>People use the term "Axiom" when
often they really mean <em>definition</em>. Thus the "axioms" of group theory
are in fact just definitions. We say exactly what we mean by a group,
that's all. There are no assumptions anywhere. At no point do we or
should we say, "Now that we have defined an abstract group, let's
assume they exist".</p>
<p>Euclid may have called certain of his initial statements Axioms, but
he had something else in mind. Euclid had a lot of geometrical facts
which he wanted to organize as best as he could into a logical
framework. Many decisions had to be made as to a convenient order of
presentation. He rightfully decided that simpler and more basic facts
should appear before complicated and difficult ones. So he contrived
to organize things in a linear way, with most Propositions following
from previous ones by logical reasoning alone, with the exception of
<em>certain initial statements</em> that were taken to be self-evident. To
Euclid, <em>an Axiom was a fact that was sufficiently obvious to not
require a proof</em>. This is a quite different meaning to the use of the
term today. Those formalists who claim that they are following in
Euclid's illustrious footsteps by casting mathematics as a game played
with symbols which are not given meaning are misrepresenting the
situation.</p>
<p>And yes, all right, the Continuum hypothesis doesn't really need to be
true or false, but is allowed to hover in some no-man's land, falling
one way or the other depending on <em>what you believe</em>. Cohen's proof of
the independence of the Continuum hypothesis from the "Axioms" should
have been the long overdue wake-up call. </p>
<p>Whenever discussions about the foundations of mathematics arise, we
pay lip service to the "Axioms" of Zermelo-Fraenkel, but do we ever
use them? Hardly ever. With the notable exception of the "Axiom of
Choice", I bet that fewer than 5% of mathematicians have ever employed
even one of these "Axioms" explicitly in their published work. The
average mathematician probably can't even remember the "Axioms". I
think I am typical-in two weeks time I'll have retired them to their
usual spot in some distant ballpark of my memory, mostly beyond
recall.</p>
<p>In practise, working mathematicians are quite aware of the lurking
contradictions with "infinite set theory". We have learnt to keep the
demons at bay, not by relying on "Axioms" but rather by developing
conventions and intuition that allow us to seemingly avoid the most
obvious traps. Whenever it smells like there may be an "infinite set"
around that is problematic, we quickly use the term "class". For
example: A topology is an "equivalence class of atlases". Of course
most of us could not spell out exactly what does and what does not
constitute a "class", and we learn to not bring up such questions in
company.</p>
</blockquote>
| <blockquote>
<p>Is it true that with modern mathematics it is becoming less important for an axiom to be self-evident?</p>
</blockquote>
<p>Yes and no.</p>
<h2>Yes</h2>
<p>in the sense that we now realize that all proofs, in the end, come down to the axioms and logical deduction rules that were assumed in writing the proof. For every statement, there are systems in which the statement is provable, including specifically the systems that assume the statement as an axiom. Thus no statement is "unprovable" in the broadest sense - it can only be unprovable relative to a specific set of axioms.</p>
<p>When we look at things in complete generality, in this way, there is no reason to think that the "axioms" for every system will be self-evident. There has been a parallel shift in the study of logic away from the traditional viewpoint that there should be a single "correct" logic, towards the modern viewpoint that there are multiple logics which, though incompatible, are each of interest in certain situations.</p>
<h2>No</h2>
<p>in the sense that mathematicians spend their time where it interests them, and few people are interested in studying systems which they feel have implausible or meaningless axioms. Thus some motivation is needed to interest others. The fact that an axiom seems self-evident is one form that motivation can take.</p>
<p>In the case of ZFC, there is a well-known argument that purports to show how the axioms are, in fact, self evident (with the exception of the axiom of replacement), by showing that the axioms all hold in a pre-formal conception of the cumulative hierarchy. This argument is presented, for example, in the article by Shoenfield in the <em>Handbook of Mathematical Logic</em>.</p>
<p>Another in-depth analysis of the state of axiomatics in contemporary foundations of mathematics is "<a href="http://www.jstor.org/stable/420965" rel="noreferrer">Does Mathematics Need New Axioms?</a>" by Solomon Feferman, Harvey M. Friedman, Penelope Maddy and John R. Steel, <em>Bulletin of Symbolic Logic</em>, 2000.</p>
| <p><em>Disclaimer: I didn't read the entire <strong>original</strong> quote in details, the question had since been edited and the quote was shortened. My answer is based on the title, the introduction, and a few paragraphs from the [original] quote.</em></p>
<p>Mathematics, <em>modern mathematics</em> focuses a lot of resources on rigor. After several millenniums where mathematics was based on intuition, and that got <em>some</em> results, we reached a point where rigor was needed.</p>
<p>Once rigor is needed one cannot just "do things". One has to obey a particular set of rules which define what constitutes a legitimate proof. True, we don't write every proof in a fully rigorous way, and we do make mistakes from time to time due to neglecting the details.</p>
<p>However we need a rigid framework which tells us what is rigor. Axioms are the direct result of this framework, because axioms are really just assumptions that we are not going to argue with (for the time being anyway). It's a word which we use to distinguish some assumptions from other assumptions, and thus giving them some status of "assumptions we do not wish to change very often".</p>
<p>I should add two points, as well.</p>
<ol>
<li><p>I am not living in a mathematical world. The last I checked I had arms and legs, and not mathematical objects. I ate dinner and not some derived functor. And I am using a computer to write this answer. All these things are not mathematical objects, these are physical objects. </p>
<p>Seeing how I am not living in the mathematical world, but rather in the physical world, I see no need whatsoever to insist that mathematics will describe the world I am in. I prefer to talk about mathematics in a framework where I have rules which help me decide whether or not something is a reasonable deduction or not.</p>
<p>Of course, if I were to discuss how many keyboards I have on my desk, or how many speakers are attached to my computer right now -- then of course I wouldn't have any problem in dropping rigor. But unfortunately a lot of the things in modern mathematics deal with infinite and very general objects. These objects defy all intuition and when not working rigorously mistakes pop up more often then they should, as history taught us.</p>
<p>So one has to decide: either do mathematics about the objects on my desk, or in my kitchen cabinets; or stick to rigor and axioms. I think that the latter is a better choice.</p></li>
<li><p>I spoke with more than one Ph.D. student in computer science that did their M.Sc. in mathematics (and some folks that only study a part of their undergrad in mathematics, and the rest in computer science), and everyone agreed on one thing: computer science lacks the definition of proof and rigor, and it gets really difficult to follow some results.</p>
<p>For example, one of them told me he listened to a series of lectures by someone who has a world renowned expertise in a particular topic, and that person made a horrible mistake in the proof of a most trivial lemma. Of course the lemma was correct (and that friend of mine sat to write a proof down), but can we really allow negligence like that? In computer science a lot of the results are later applied into code and put into tests. Of course that doesn't prove their correctness, but it gives a "good enough" feel to it.</p>
<p>How are we, in mathematics, supposed to test our proofs about intangible objects? When we write an inductive argument. How are we even supposed to begin testing it? Here is an example: <strong>all the decimal expansions of integers are shorter than $2000^{1000}$ decimal digits.</strong> I defy someone to write an integer which is larger than $10^{2000^{1000}}$ explicitly. It can't be done in the physical world! Does that mean this preposterous claim is correct? No, it does not. Why? Because our intuition about integers tells us that they are infinite, and that all of them have decimal expansions. It would be absurd to assume otherwise.</p></li>
</ol>
<p>It is important to realize that axioms are not just the axioms of logic and $\sf ZFC$. Axioms are all around us. These are the definitions of mathematical objects. We have axioms of a topological space, and axioms for a category and axioms of groups, semigroups and cohomologies.</p>
<p>To ignore that fact is to bury your head in the sand and insist that axioms are only for logicians and set theorists.</p>
|
differentiation | <p>I'm studying about EM-algorithm and on one point in my reference the author is taking a derivative of a function with respect to a matrix. Could someone explain how does one take the derivative of a function with respect to a matrix...I don't understand the idea. For example, lets say we have a multidimensional Gaussian function:</p>
<p>$$f(\textbf{x}, \Sigma, \boldsymbol \mu) = \frac{1}{\sqrt{(2\pi)^k |\Sigma|}}\exp\left( -\frac{1}{2}(\textbf{x}-\boldsymbol \mu)^T\Sigma^{-1}(\textbf{x}-\boldsymbol \mu)\right),$$</p>
<p>where $\textbf{x} = (x_1, ..., x_n)$, $\;\;x_i \in \mathbb R$, $\;\;\boldsymbol \mu = (\mu_1, ..., \mu_n)$, $\;\;\mu_i \in \mathbb R$ and $\Sigma$ is the $n\times n$ covariance matrix. </p>
<p>How would one calculate $\displaystyle \frac{\partial f}{\partial \Sigma}$? What about $\displaystyle \frac{\partial f}{\partial \boldsymbol \mu}$ or $\displaystyle \frac{\partial f}{\partial \textbf{x}}$ (Aren't these two actually just special cases of the first one)?</p>
<p>Thnx for any help. If you're wondering where I got this question in my mind, I got it from reading this reference: (page 14)</p>
<p><a href="http://ptgmedia.pearsoncmg.com/images/0131478249/samplechapter/0131478249_ch03.pdf" rel="noreferrer">http://ptgmedia.pearsoncmg.com/images/0131478249/samplechapter/0131478249_ch03.pdf</a></p>
<p>UPDATE: </p>
<p>I added the particular part from my reference here if someone is interested :) I highlighted the parts where I got confused, namely the part where the author takes the derivative with respect to a matrix (the sigma in the picture is also a covariance matrix. The author is estimating the optimal parameters for Gaussian mixture model, by using the EM-algorithm):</p>
<p>$Q(\theta|\theta_n)\equiv E_Z\{\log p(Z,X|\theta)|X,\theta_n\}$</p>
<p><img src="https://i.sstatic.net/cnlY0.png" alt="enter image description here"></p>
| <p>It's not the derivative with respect to a matrix really. It's the derivative of $f$ with respect to each element of a matrix and the result is a matrix.</p>
<p>Although the calculations are different, it is the same idea as a Jacobian matrix. Each entry is a derivative with respect to a different variable.</p>
<p>Same goes with $\frac{\partial f}{\partial \mu}$, it is a vector made of derivatives with respect to each element in $\mu$.</p>
<p>You could think of them as $$\bigg[\frac{\partial f}{\partial \Sigma}\bigg]_{i,j} = \frac{\partial f}{\partial \sigma^2_{i,j}} \qquad \text{and}\qquad \bigg[\frac{\partial f}{\partial \mu}\bigg]_i = \frac{\partial f}{\partial \mu_i}$$
where $\sigma^2_{i,j}$ is the $(i,j)$th covariance in $\Sigma$ and $\mu_i$ is the $i$th element of the mean vector $\mu$.</p>
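<p>To tie this back to the concrete question: since $\partial g/\partial x = -3x^2$, $\partial g/\partial y = 9y^2$ and $\partial g/\partial z = 2z+2$, the third component vanishes at $z=-1$, so $\nabla g(2,1,-1) = (-12,\, 9,\, 0)$, which is consistent with $-12i+9j$. A small numerical sketch (plain Python, central finite differences; the helper name is mine) confirms it:</p>

```python
def g(x, y, z):
    return z**2 - x**3 + 2*z + 3*y**3

def grad_numeric(f, p, h=1e-6):
    # Approximate each partial derivative by a central difference.
    out = []
    for i in range(len(p)):
        fwd = list(p); fwd[i] += h
        bwd = list(p); bwd[i] -= h
        out.append((f(*fwd) - f(*bwd)) / (2 * h))
    return out

print(grad_numeric(g, [2.0, 1.0, -1.0]))  # approximately [-12.0, 9.0, 0.0]
```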
| <p>You can view this in the same way you would view a function of any vector. A matrix is just a vector in a normed space where the norm can be represented in any number of ways. One possible norm would be the root-mean-square of the coefficients; another would be the sum of the absolute values of the matrix coefficients. Another is as the norm of the matrix as a linear operator on a vector space with its own norm.</p>
<p>What is significant is that the invertible matrices are an open set; so a derivative can make sense. What you have to do is find a way to approximate
$$ f(x,\Sigma + \Delta\Sigma,\mu)-f(x,\Sigma,\mu)$$ as a linear function of $\Delta\Sigma$. I would use a power series to find a linear approximation. For example,
$$ (\Sigma+\Delta\Sigma)^{-1}=\Sigma^{-1}(I+(\Delta\Sigma) \Sigma^{-1})^{-1} =\Sigma^{-1}
\sum_{n=0}^{\infty}(-1)^{n}\{ (\Delta\Sigma)\Sigma^{-1}\}^{n} \approx \Sigma^{-1}(I-(\Delta\Sigma)\Sigma^{-1})$$
Such a series converges for $\|\Delta\Sigma\|$ small enough (using whatever norm you choose.) And, in the language of derivatives,
$$ (\frac{d}{d\Sigma} \Sigma^{-1})\Delta\Sigma = -\Sigma^{-1}(\Delta\Sigma)\Sigma^{-1} $$
Remember, that the derivative is a linear operator on $\Delta\Sigma$; if you squint you can almost see the classical term $\frac{d}{dx}x^{-1} =-x^{-2}$. Chain rules for derivatives apply. So that's how you can handle the exponential composed with matrix inversion.</p>
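<p>A quick numerical check of this linearization (assuming NumPy is available): the remainder after subtracting the first-order term $-\Sigma^{-1}(\Delta\Sigma)\Sigma^{-1}$ from the exact change in $\Sigma^{-1}$ should be second order in $\Delta\Sigma$, hence tiny relative to the first-order term itself.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Sigma = A @ A.T + 4 * np.eye(4)          # symmetric positive definite
dS = 1e-5 * rng.standard_normal((4, 4))  # small perturbation

Sinv = np.linalg.inv(Sigma)
exact = np.linalg.inv(Sigma + dS) - Sinv
linear = -Sinv @ dS @ Sinv               # first-order term from the series

# Relative size of the remainder; prints a small number, second order in dS.
print(np.linalg.norm(exact - linear) / np.linalg.norm(linear))
```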
|
differentiation | <p>There is a famous example of a function that has no derivative: the Weierstrass function:</p>
<p><a href="https://i.sstatic.net/puevr.png" rel="noreferrer"><img src="https://i.sstatic.net/puevr.png" alt="enter image description here" /></a></p>
<p>But just by looking at this equation - I can't seem to understand why exactly the Weierstrass Function does not have a derivative?</p>
<p>I tried looking at a few articles online (e.g. <a href="https://www.quora.com/Why-isnt-the-Weierstrass-function-differentiable" rel="noreferrer">https://www.quora.com/Why-isnt-the-Weierstrass-function-differentiable</a>), but I still can't seem to understand what prevents this function from having a derivative?</p>
<p>For example, if you expand the summation term for some very large (finite) value of <span class="math-container">$n$</span>:
<span class="math-container">$$
f(x) = a \cos(b\pi x) + a^2\cos(b^2\pi x) + a^3\cos(b^3\pi x) + ... + a^{100}\cos(b^{100}\pi x)
$$</span>
What is preventing us from taking the derivative of <span class="math-container">$f(x)$</span>? Is the Weierstrass function non-differentiable only because it has "infinite terms" - and no function with infinite terms can be differentiated?</p>
<p>For a finite value of <span class="math-container">$n$</span>, is the Weierstrass function differentiable?</p>
<p>Thank you!</p>
| <p>Nothing is preventing us from taking the derivative of any finite partial sum of this series. This is a trigonometric polynomial and it has derivatives of all orders.</p>
<p>However, this infinite sum represents the pointwise <strong>limit</strong> of such trigonometric polynomials. A pointwise limit of differentiable functions has no obligation to be differentiable.</p>
<p>On the other hand, the sheer fact that this function is an infinite sum doesn’t automatically imply that it’s nowhere differentiable. An infinite power series or infinite trigonometric series may be differentiable everywhere, differentiable nowhere, or differentiable in some places and not others.</p>
<p>To check if a function is differentiable at a point <span class="math-container">$x_0$</span>, you must determine if the limit <span class="math-container">$\lim_{h\to 0}(f(x_0+h)-f(x_0))/h$</span> exists. If it doesn’t, the function isn’t differentiable at <span class="math-container">$x_0$</span>.</p>
<p>There are various theorems which help us bypass the need for doing this directly. For example, we can show that the composition of differentiable functions is differentiable, and that various elementary functions are differentiable everywhere, which lets us conclude that things like <span class="math-container">$\cos(17x^2)-e^x$</span> are differentiable everywhere.</p>
<p>One of these shortcuts applies when you have infinite sums with <strong>uniformly convergent</strong> derivatives. This very important result allows us to conclude that many functions defined by infinite series do indeed have derivatives, and those derivatives are what you’d expect. But this doesn’t apply here, since this sum does <strong>not</strong> have uniformly convergent derivatives.</p>
<p>And once again, this fact alone isn’t enough to show that the function is nowhere differentiable. To show that, you have to roll up your sleeves and work out inequalities that apply everywhere and prevent the limit above from existing. This is done carefully <a href="https://math.berkeley.edu/%7Ebrent/files/104_weierstrass.pdf" rel="noreferrer">here</a>, for example.</p>
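<p>To make the "finite partial sums are smooth, the limit is not" point concrete, here is a quick numerical sketch in Python. The parameters $a=1/2$, $b=13$ are my own choice (they satisfy Weierstrass's condition $ab > 1 + 3\pi/2$): the difference quotients of the $n$-th partial sum at $x_0=0$, probed at the matching scale $h = b^{-n}$, grow without bound instead of settling toward a limit.</p>

```python
import math

def weierstrass_partial(x, n_terms, a=0.5, b=13):
    """Partial sum W_n(x) = sum_{k=0}^{n_terms} a^k * cos(b^k * pi * x).

    Each partial sum is a finite trigonometric polynomial, hence smooth;
    non-differentiability only appears in the limit n_terms -> infinity.
    """
    return sum(a**k * math.cos(b**k * math.pi * x) for k in range(n_terms + 1))

# Difference quotient of the n-th partial sum at x = 0, probed at scale h = b^{-n}.
# If the limit defining W'(0) existed, these would settle down; instead their
# magnitude grows roughly like (a*b)^n = 6.5^n.
quotients = []
for n in range(2, 9):
    h = 1.0 / 13**n
    dq = (weierstrass_partial(h, n) - weierstrass_partial(0.0, n)) / h
    quotients.append(abs(dq))
print(quotients)
```

<p>Of course this is only suggestive, not a proof — the linked notes do the real work with inequalities.</p>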
| <p>In order to calculate the derivative of a function at a given point (e.g. <span class="math-container">$x=0$</span>), you just need to <a href="https://math.stackexchange.com/a/2833438/386794">keep zooming</a> on this point until the curve gets smooth:</p>
<p><img src="https://download.ericduminil.com/weierstrass_zoom.gif" alt="https://download.ericduminil.com/weierstrass_zoom.gif" /></p>
<p>Oh, wait...</p>
|
combinatorics | <p>A triangular grid has $N$ vertices, labeled from 1 to $N$. Two vertices $i$ and $j$ are adjacent if and only if $|i-j|=1$ or $|i-j|=2$. See the figure below for the case $N = 7$.</p>
<p><img src="https://i.sstatic.net/q7Glg.jpg" alt="Triangular grid with 7 vertices"></p>
<p>How many trails are there from $1$ to $N$ in this graph? A trail is allowed to visit a vertex more than once, but it cannot travel along the same edge twice.</p>
<p>I wrote a program to count the trails, and I obtained the following results for $1 \le N \le 17$.</p>
<p>$$1, 1, 2, 4, 9, 23, 62, 174, 497, 1433, 4150, 12044, 34989, 101695, 295642, 859566, 2499277$$</p>
<p>This sequence is not in the <a href="http://oeis.org" rel="nofollow noreferrer">OEIS</a>, but <a href="http://oeis.org/ol.html" rel="nofollow noreferrer">Superseeker</a> reports that the sequence satisfies the fourth-order linear recurrence</p>
<p>$$2 a(N) + 3 a(N + 1) - a(N + 2) - 3 a(N + 3) + a(N + 4) = 0.$$</p>
<p>Question: Can anyone prove that this equation holds for all $N$?</p>
| <p>Regard the same graph, but add an edge from $n-1$ to $n$ with weight $x$ (that is, a path passing through this edge contributes $x$ instead of 1).</p>
<p>The enumeration is clearly a linear polynomial in $x$, call it $a(n,x)=c_nx+d_n$ (and we are interested in $a(n,0)=d_n$).</p>
<p>By regarding the three possible edges for the last step, we find $a(1,x)=1$, $a(2,x)=1+x$ and</p>
<p>$$a(n,x)=a(n-2,1+2x)+a(n-1,x)+x\,a(n-1,1)$$</p>
<p>(If the last step passes through the ordinary edge from $n-1$ to $n$, you want a trail from 1 to $n-1$, but there is the ordinary edge from $n-2$ to $n-1$ and a parallel connection via $n$ that passes through the $x$ edge and is thus equivalent to a single edge of weight $x$, so we get $a(n-1,x)$.</p>
<p>If the last step passes through the $x$-weighted edge this gives a factor $x$, and you want a trail from $1$ to $n-1$ and now the parallel connection has weight 1 which gives $x\,a(n-1,1)$.</p>
<p>If the last step passes through the edge $n-2$ to $n$, then we search a trail to $n-2$ and now the parallel connection has the ordinary possibility $n-3$ to $n-2$ and two $x$-weighted possibilities $n-3$ to $n-1$ to $n$ to $n-1$ to $n-2$, in total this gives weight $2x+1$ and thus $a(n-2,2x+1)$.)</p>
<p>Now, plug in the linear polynomial and compare coefficients to get two linear recurrences for $c_n$ and $d_n$. </p>
<p>\begin{align}
c_n&=2c_{n-2}+2c_{n-1}+d_{n-1}\\
d_n&=c_{n-2}+d_{n-2}+d_{n-1}
\end{align}</p>
<p>Express $c_n$ with the second one, eliminate it from the first and you find the recurrence for $d_n$.</p>
<p>(Note that $c_n$ and $a(n,x)$ are solutions of the same recurrence.)</p>
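<p>For what it's worth, here is a quick Python check that these recurrences reproduce the counts in the question. The base cases $c_1 = 0$, $d_1 = 1$ and $c_2 = d_2 = 1$ are read off from $a(1,x)=1$ and $a(2,x)=1+x$:</p>

```python
# a(n, x) = c[n]*x + d[n]; the answer to the original problem is d[n].
# Base cases read off from a(1, x) = 1 and a(2, x) = 1 + x.
N = 17
c = [None, 0, 1]   # index 0 unused
d = [None, 1, 1]
for n in range(3, N + 1):
    c.append(2 * c[n - 2] + 2 * c[n - 1] + d[n - 1])
    d.append(c[n - 2] + d[n - 2] + d[n - 1])

print(d[1:])   # 1, 1, 2, 4, 9, 23, 62, 174, ...
```

<p>The output matches all 17 terms listed in the question, and one can check directly that $d_n$ satisfies Superseeker's fourth-order recurrence.</p>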
| <p>This is not a new answer, just an attempt to slightly demystify user9325's very elegant answer to make it easier to understand and apply to other problems. Of course this is based on what I myself find easier to understand; others may prefer user9325's original formulation.</p>
<p>The crucial insight, in my view, is not the use of a variable weight and a polynomial (which serve as convenient bookkeeping devices), but that the problem becomes more tractable if we generalize it. This becomes apparent when we try a similar approach without this generalization: We might try to decompose $a(n)$ into two contributions corresponding to the two edges from $n-2$ and $n-1$ by which we can get to $n$, and in each case account for the new possibilities arising from the new vertices and edges. The contribution from $n-1$ is straightforward, but the contribution from $n-2$ causes a problem: We can now travel between $n-3$ and $n-2$ either directly or via $n-1$, and we can't just add a factor of $2$ to take this into account because there are trails using both of these possibilities. This is where the idea of an edge parallel to the final edge arises: Even though we're only interested in the final result without a parallel edge, the recurrence leads to parallel edges, so we need to include that possibility. We can do this without edge weights or polynomials by just counting the number $b(n)$ of trails that use the parallel edge separately from the number $a(n)$ of trails that don't. (I'm not saying we should; the polynomial, like a generating function, is an elegant and useful way to keep track of things; I'm just trying to emphasize that the polynomial isn't an essential part of the central idea of generalizing the original problem.)</p>
<p>Counting the number $a(n)$ of trails that don't use the parallel edge, we have a contribution $a(n-1)$ from trails ending with the normal edge from $n-1$, and a contribution $a(n-2)+b(n-2)$ from trails ending with the edge from $n-2$, which may ($b$) or may not ($a$) go via $n-1$:</p>
<p>$$a(n)=a(n-1)+a(n-2)+b(n-2)\;.$$</p>
<p>Counting the number $b(n)$ of trails that do use the parallel edge, we have a contribution $a(n-1)+b(n-1)$ from trails ending with the parallel edge, which may ($b$) or may not ($a$) go via $n$, a contribution $b(n-1)$ from trails ending with the normal edge from $n-1$, which have to go via $n$ (hence $b$), and a contribution $2b(n-2)$ from trails ending with the edge from $n-2$, which have to go via $n-1$ (hence $b$) and can use the normal edge from $n-1$ and the parallel edge in either order (hence the factor $2$):</p>
<p>$$b(n)=a(n-1)+b(n-1)+b(n-1)+2b(n-2)\;.$$</p>
<p>This is precisely user9325's result, with $a(n)=d_n$ and $b(n)=c_n$. There was a tad more work in counting the possibilities, but then we didn't have to compare coefficients.</p>
|
matrices | <p>I am trying to understand how - exactly - I go about projecting a vector onto a subspace.</p>
<p>Now, I know enough about linear algebra to know about projections, dot products, spans, etc etc, so I am not sure if I am reading too much into this, or if this is something that I have missed.</p>
<p>For a class I am taking, the prof is saying that we take a vector, and 'simply project it onto a subspace' (where that subspace is formed from a set of orthogonal basis vectors).</p>
<p>Now, I know that a subspace is really, at the end of the day, just a set of vectors. (That satisfy properties <a href="http://en.wikipedia.org/wiki/Projection_%28linear_algebra%29">here</a>). I get that part - that its this set of vectors. So, how do I "project a vector on this subspace"?</p>
<p>Am I projecting my one vector (let's call it a[n]) onto ALL the vectors in this subspace? (What if there is an infinite number of them?)</p>
<p>For further context, the prof was saying that if we found a set of basis vectors for a signal (let's call them b[n] and c[n]), then we would project a[n] onto its <a href="http://en.wikipedia.org/wiki/Signal_subspace">signal subspace</a>. We project a[n] onto the signal subspace formed by b[n] and c[n]. Well, how is this done exactly?..</p>
<p>Thanks in advance, let me know if I can clarify anything!</p>
<p>P.S. I appreciate your help, and I would really like for the clarification to this problem to be somewhat 'concrete' - for example, something that I can show for myself over MATLAB. Analogues using 2-D or 3-D space so that I can visualize what is going on would be very much appreciated as well. </p>
<p>Thanks again.</p>
| <p>I will talk about orthogonal projection here.</p>
<p>When one projects a vector, say $v$, onto a subspace, you find the vector in the subspace which is "closest" to $v$. The simplest case is of course if $v$ is already in the subspace, then the projection of $v$ onto the subspace is $v$ itself.</p>
<p>Now, the simplest kind of subspace is a one dimensional subspace, say the subspace is $U = \operatorname{span}(u)$. Given an arbitrary vector $v$ not in $U$, we can project it onto $U$ by
$$v_{\| U} = \frac{\langle v , u \rangle}{\langle u , u \rangle} u$$
which will be a vector in $U$. Note that vectors other than $v$ can have this same projection onto $U$.</p>
<p>Now, let's assume $U = \operatorname{span}(u_1, u_2, \dots, u_k)$ and, since you said so in your question, assume that the $u_i$ are orthogonal. For a vector $v$, you can project $v$ onto $U$ by
$$v_{\| U} = \sum_{i =1}^k \frac{\langle v, u_i\rangle}{\langle u_i, u_i \rangle} u_i = \frac{\langle v , u_1 \rangle}{\langle u_1 , u_1 \rangle} u_1 + \dots + \frac{\langle v , u_k \rangle}{\langle u_k , u_k \rangle} u_k.$$</p>
| <p>Take a basis $\{v_1, \dots, v_n\}$ for the "signal subspace" $V$. Let's assume $V$ is finite dimensional for simplicity and practical purposes, but you can generalize to infinite dimensions. Let's also assume the basis is orthonormal.</p>
<p>The projection of your signal $f$ onto the subspace $V$ is just</p>
<p>$$\mathrm{proj}_V(f) = \sum_{i=1}^n \langle f, v_i \rangle v_i$$</p>
<p>and $f = \mathrm{proj}_V(f) + R(f)$, where $R(f)$ is the remainder, or orthogonal complement, which will be 0 if $f$ lies in the subspace $V$. </p>
<p>The $i$-th term of the sum, $\langle f, v_i\rangle$, is the projection of $f$ onto the subspace spanned by the $i$-th basis vector. (Note, if the $v_i$ are orthogonal, but not necessarily orthonormal, you must divide the $i$-th term by $\|v_i\|^2$.)</p>
|
probability | <p>I'm studying Probability theory, but I can't fully understand what are Borel sets. In my understanding, an example would be if we have a line segment [0, 1], then a Borel set on this interval is a set of all intervals in [0, 1]. Am I wrong? I just need more examples. </p>
<p>Also, I want to understand what a Borel $\sigma$-algebra is.</p>
| <p>To try and motivate the technical answers, I'm ploughing through this stuff myself, so, people, do correct me:</p>
<p>Imagine <a href="https://en.wikipedia.org/wiki/Arnold_Schwarzenegger" rel="noreferrer">Arnold Schwarzenegger</a>'s height was recorded to <em>infinite</em> precision. Would you prefer to try and guess Arnie's <em>exact</em> height, or some <em>interval</em> containing it?</p>
<p>But what if there was a website for this game, which provided some pre-defined intervals? That could be quite annoying, if say, the bands offered were <span class="math-container">$[0,1m)$</span> and <span class="math-container">$[1m,\infty)$</span>. I suspect most of us could improve on those.</p>
<p>Wouldn't it be better to be able to choose an arbitrary interval? That's what the Borel <span class="math-container">$\sigma$</span>-algebra offers: a choice of all the possible intervals you might need or want. </p>
<p>It would make for a seriously (infinitely) long drop down menu, but it's conceptually equivalent: all the members are predefined. But you still get the convenience of choosing an arbitrary interval. </p>
<p>The Borel sets just function as the building blocks for the menu that is the Borel <span class="math-container">$\sigma$</span>-algebra.</p>
| <p>First let me clear one misconception.</p>
<p>The set <em>of all subintervals</em> is <strong>not</strong> a Borel set, but rather a collection of Borel sets. Every subinterval <em>is</em> a Borel set on its own accord.</p>
<p>To understand the Borel sets and their connection with probability one first needs to bear in mind two things:</p>
<ol>
<li><p>Probability is $\sigma$-additive, namely if $\{X_i\mid i\in\mathbb N\}$ is a list of mutually exclusive events then $P(\bigcup X_i)=\sum P(X_i)$.</p>
<p>Therefore the collection of all events which we can measure their probability must have the property that it is closed under countable unions; trivially we require closure under complements (i.e. negation) and thus by DeMorgan we have also closure under countable intersections.</p>
<p>If so, the set of all events which we can measure the probability of them happening is a $\sigma$-algebra.</p></li>
<li><p>We wish to extend the idea that the probability that $x\in (a,b)$, where $(a,b)$ is a subinterval of $[0,1]$ is exactly $b-a$. Namely the length of the interval is the probability that we choose a point from it.</p></li>
</ol>
<p>Combine these two results and we have that the Borel sets of $[0,1]$ is a collection which is a $\sigma$-algebra, and it contains all the subintervals of $[0,1]$. Since we do not want to add more than we need, then Borel sets are defined to be <strong>the smallest $\sigma$-algebra which contains all the subintervals</strong>.</p>
|
differentiation | <p>Are the symbols $d$ and $\delta$ equivalent in expressions like $dy/dx$? Or do they mean something different?</p>
<p>Thanks</p>
| <p>As Giuseppe Negro said in a comment, $\delta$ is never used in mathematics in $$\frac{dy}{dx}.$$</p>
<p>(I am a physics ignoramus, so I do not know whether it is used in that context in physics, or what it might mean if it is.)</p>
<p>You do sometimes see $$\frac{\partial y}{\partial x}$$</p>
<p>which means that $y$ is a function of several variables, including $x$, and you are taking the <a href="http://enwp.org/partial_derivative"><em>partial</em> derivative</a> of $y$ with respect to $x$. This is a slightly different meaning than just $\frac{dy}{dx}$. For example, suppose that $f(x,y)$ is a function of both $x$ and $y$, and that each of $x$ and $y$ can in turn be expressed as functions of a third variable, $t$. Then one can write:</p>
<p>$$\frac{df}{dt} = \frac{\partial f}{\partial x}\frac{dx}{dt} +
\frac{\partial f}{\partial y}\frac{dy}{dt}$$</p>
<p>The $\partial$ symbol is not a Greek delta ($\delta$), but a variant on the Latin letter 'd'. In $\TeX$, you get it by writing <code>\partial</code>.</p>
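<p>To see the chain-rule formula above in action, here is a small numeric check in Python. The particular functions are an arbitrary illustrative choice; the chain-rule value agrees with a finite-difference derivative of $F(t)=f(x(t),y(t))$.</p>

```python
import math

# Illustrative choices for f, x(t), y(t) and their derivatives:
f = lambda x, y: x**2 * y + math.sin(y)
fx = lambda x, y: 2 * x * y               # partial f / partial x
fy = lambda x, y: x**2 + math.cos(y)      # partial f / partial y

x, dxdt = (lambda t: math.cos(t)), (lambda t: -math.sin(t))
y, dydt = (lambda t: t**2), (lambda t: 2 * t)

t = 0.7
chain = fx(x(t), y(t)) * dxdt(t) + fy(x(t), y(t)) * dydt(t)

# Independent check: central finite difference of F(t) = f(x(t), y(t)).
F = lambda t: f(x(t), y(t))
h = 1e-6
numeric = (F(t + h) - F(t - h)) / (2 * h)
print(chain, numeric)   # the two values agree
```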
| <p>I am so excited that I can help! I just learned about this in Thermodynamics (p. 95 of <em>Fundamentals of Thermodynamics</em>, Borgnakke & Sonntag): "$d$" stands for the exact differential (as often used in mathematics), where the change in volume depends only on the initial and final states. "$\delta$" refers to an inexact differential (which is used in physics when calculating things like work), where the quasi-equilibrium process between the two given states depends on the path followed. The differentials of path functions are inexact differentials and are designated by "$\delta$".</p>
|
matrices | <p>Let $A$ be an n×n matrix with eigenvalues $\lambda_i, i=1,2,\dots,n$. Then
$\lambda_1^k,\dots,\lambda_n^k$ are eigenvalues of $A^k$.</p>
<ol>
<li>I was wondering if $\lambda_1^k,\dots,\lambda_n^k$ are <strong>all</strong> the
eigenvalues of $A^k$?</li>
<li>Are the algebraic and geometric multiplicities of $\lambda_i^k$ for
$A^k$ same as those of $\lambda_i$ for $A$ respectively?</li>
</ol>
<p>Thanks!</p>
| <p>The powers of the eigenvalues are indeed all the eigenvalues of $A^k$. I will limit myself to $\mathbb{R}$ and $\mathbb{C}$ for brevity.</p>
<p>The algebraic multiplicity of the matrix will indeed be preserved (up to merging as noted in the comments). An easy way to see this is to take the matrix over $\mathbb{C}$ where the minimal polynomial splits. This means that the matrix is triangularizable. The spectrum of the matrix appear on the diagonals of the triangularized matrix and successive powers will alter the eigenvalues accordingly.</p>
<p>Geometric multiplicity is not as easy to answer. Consider a nilpotent matrix $N$ which is in general <em>not diagonalizable</em> over $\mathbb{R}$ or $\mathbb{C}$. But eventually, we will hit $N^k = 0$ which is trivially diagonal. </p>
<p>This can be generalized: the geometric multiplicity of $0$ will always increase to the algebraic multiplicity of $0$ given enough iterations. This is because the zero Jordan blocks of a matrix are nilpotent and will eventually merge with enough powers.</p>
<p>For non-zero eigenvalues, this is a bit different. Consider the iteration of generalized eigenspaces for $A^k$ given by
$$(A^k-\lambda^kI)^i\mathbf{v}$$
for successive powers of $i$. The difference of powers above splits into the $k$th roots of $\lambda^k$ given as
$$[P(A)]^i(A-\lambda I)^i\mathbf{v}$$
where we collect the other $k$th roots into a polynomial $P(A)$. Notice that <em>if</em> the matrix does not contain two distinct $k$th roots of $\lambda^k$ then $P(A)$ is in fact invertible. This means that
$$[P(A)]^i(A-\lambda I)^i\mathbf{v} = \mathbf{0} \iff (A-\lambda I)^i\mathbf{v} = \mathbf{0}$$
Note that this remains true for each generalized eigenspace even if there <em>are</em> distinct eigenvalues which share a $k$th power because the mapping $(A-\lambda I)$ restricted to the generalized eigenspace of another eigenvalue is <em>injective</em>.</p>
<p>This means that generalized eigenvectors of $A$ remain generalized eigenvectors of $A^k$ under the same height. More precisely, this implies that the structures of the generalized eigenspaces are identical and so the matrix power $A^k$ has the same Jordan form for that eigenvalue as $A$. It follows that the dimension of the regular eigenspace is also retained, so the geometric multiplicity is stable.</p>
<p><strong>In Summary:</strong></p>
<ol>
<li>The spectrum is changed as expected to the $k$th powers.</li>
<li>The algebraic multiplicities of the eigenvalues powers are retained. If eigenvalues merge, then their multiplicities are added.</li>
<li>The geometric multiplicity of $0$ will increase to the algebraic multiplicity. The geometric multiplicities of other eigenvalues are retained. Again if there is merging, then the multiplicities are merged.</li>
</ol>
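<p>A small NumPy illustration of the summary (the matrices are just illustrative choices — a Jordan block and a nilpotent block):</p>

```python
import numpy as np

# A Jordan block: eigenvalue 2 with algebraic multiplicity 2, geometric multiplicity 1.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
A3 = np.linalg.matrix_power(A, 3)
eigs = np.linalg.eigvals(A3).real            # both equal 2**3 = 8

# geometric multiplicity of an eigenvalue = n - rank(M - lambda*I)
geo_A3 = 2 - np.linalg.matrix_rank(A3 - 8 * np.eye(2))   # stays 1

# A nilpotent block: the geometric multiplicity of 0 climbs to the algebraic one.
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])
geo_N = 2 - np.linalg.matrix_rank(N)                               # 1
geo_N2 = 2 - np.linalg.matrix_rank(np.linalg.matrix_power(N, 2))   # 2, since N^2 = 0
```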
| <p>I feel like I should mention this theorem. I forget its name, but I think it's one of the spectral theorems (the spectral mapping theorem). It says that if $\lambda$ is an eigenvalue of a matrix $A$ and $f(x)$ is any analytic function, then $f(\lambda)$ is an eigenvalue of $f(A)$. So even $\sin(A)$ will have $\sin(\lambda)$ among its eigenvalues.</p>
<p>In your case, just take $f(x)=x^k$ and then apply it to all of the eigenvalues. So yes, $\lambda_n^k$ are all of the eigenvalues.</p>
|
game-theory | <p>Can someone please guide me to a way to solve the following problem.
There is a die and 2 players. The die is rolled repeatedly, and rolling stops as soon as the running total exceeds 100 (not including 100 itself). Hence you have the following choices: 101, 102, 103, 104, 105, 106.
Which should I choose, given first choice?
I'm thinking Markov chains, but is there a simpler way? </p>
<p>Thanks.</p>
<p>EDIT: I wrote dice instead of die. There is just one die being rolled</p>
| <p>My attempt: a combination of my comment and Shai's comment.</p>
<p>Here a "partition" of n means an ordered sequence of rolls from $\{1,2,3,4,5,6\}$ summing to n — strictly speaking a composition, since the order of the rolls matters.</p>
<p>Number of ways to get to 101: Number of partitions of 101.</p>
<p>Number of ways to get to 102: partitions of 96 + partitions of 97... + partitions of 100. Since we can have 96+6, 97+5, ...100+2.</p>
<p>...</p>
<p>Number of ways to get to 106: Partitions of 100. Since we can get to 106 only by adding to 100 then rolling 6.</p>
<p>The number of partitions of $n$ in this sense is the coefficient of $x^n$ in $\frac{1}{1-(x+x^2+x^3+x^4+x^5+x^6)}$. But an easier observation is that $a_n = a_{n-1}+a_{n-2}+a_{n-3}+a_{n-4}+a_{n-5}+a_{n-6}$. Using this </p>
<p>Number of ways to get to 101 is $a_{101}=a_{100}+a_{99}+a_{98}+a_{97}+a_{96}+a_{95}$
Number of ways to get to 102 is $a_{100}+a_{99}+a_{98}+a_{97}+a_{96}$
</p>
<p>...</p>
<p>Number of ways to get to 106 is $a_{100}$
</p>
<p>Since $a_n > 0$ we see that 101 will occur the most frequently.</p>
| <p>It's a nice problem. The chance of hitting $101$ first is surprisingly large.
Let $a(x)$ be the chance of hitting $101$ first, starting at $x$. </p>
<p>We solve recursively by setting $a(106)=a(105)=a(104)=a(103)=a(102)=0$ and $a(101)=1$. Then, for $x$ from $100$ to $0$, put $a(x)={1\over 6}\sum_{j=1}^6 a(x+j)$. By the renewal theorem, $a(x)$ converges to $1/\mu$, where $\mu=(1+2+3+4+5+6)/6=7/2$.
The value $a(0)$ is very close to this limit. </p>
<p>In fact, the whole hitting distribution is approximately $[6/21,5/21,4/21,3/21,2/21,1/21]$.</p>
<hr>
<p><strong>Added:</strong> Let $a^\prime(x)$ be the chance of hitting $102$ first, starting at $x$.
Then $a^\prime(x)$ satisfies the same recurrence as $a(x)$ for $x\leq 100$, but with different boundary conditions $a^\prime(106)=a^\prime(105)=a^\prime(104)=a^\prime(103)=a^\prime(101)=0$ and $a^\prime(102)=1$. Therefore<br>
$$a^\prime(x)=a(x-1)-a(100)a(x)$$ and
$$a^\prime(0)\approx {1\over \mu}-{1\over 6}{1\over \mu}={5\over 21}. $$
The rest of the hitting distribution can be analyzed in a similar way. </p>
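<p>If you want the numbers, here is the recursion above carried out with exact rational arithmetic in Python (a sketch; <code>hit_prob</code> is my own helper name):</p>

```python
from fractions import Fraction

def hit_prob(target, top=100, faces=6):
    """Exact probability that the first running total above `top` equals `target`."""
    # boundary: p[s] for s in 101..106 is 1 at the target, 0 elsewhere
    p = {s: Fraction(0) for s in range(top + 1, top + faces + 1)}
    p[target] = Fraction(1)
    # a(x) = (1/6) * (a(x+1) + ... + a(x+6)), swept from x = 100 down to 0
    for s in range(top, -1, -1):
        p[s] = sum(p[s + j] for j in range(1, faces + 1)) / faces
    return p[0]

probs = [hit_prob(t) for t in range(101, 107)]
print([float(q) for q in probs])
# very nearly [6/21, 5/21, 4/21, 3/21, 2/21, 1/21] -- and 101 wins
```

<p>The exact probabilities sum to $1$, decrease strictly from $101$ to $106$, and the first one is indistinguishable from $2/7$ at this distance from the boundary.</p>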
|
linear-algebra | <p>Why is Linear Algebra named that way?
In particular, why do we call it <code>linear</code>? What does that mean?</p>
| <p>Linear algebra is so named because it studies linear functions. A linear function is one for which</p>
<p>$$f(x+y) = f(x) + f(y)$$</p>
<p>and</p>
<p>$$f(ax) = af(x)$$</p>
<p>where $x$ and $y$ are vectors and $a$ is a scalar. Roughly, this means that inputs are proportional to outputs and that the function is additive. We get the name 'linear' from the prototypical example of a linear function in one dimension: a straight line through the origin. However, linear functions can be more complex than this (or indeed, simpler: the function $f(x)=0$ for all $x$ is a linear function!)</p>
<p>Of course, I've brushed a lot of detail under the carpet here. For example, what kind of space are $x$ and $y$ members of? (Answer: They're elements of a vector space). Do $x$ and $f(x)$ have to belong to the same space? (Answer: No). If they belong to different spaces, what does it mean to write $ax$ and $af(x)$? (Answer: you need an action by the same field on each of the vector spaces). Do the vector spaces have to be finite dimensional? (Answer: no, and in fact a lot of really interesting linear algebra takes place over infinite-dimensional vector spaces).</p>
<p>I hope that's enough to get you started.</p>
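<p>To make the two defining properties concrete, here is a quick numeric check (a sketch with arbitrary random vectors): a matrix map passes both tests, while an affine map $x \mapsto Ax + 1$ fails additivity.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))

T = lambda v: A @ v          # a linear map: multiplication by a matrix
S = lambda v: A @ v + 1.0    # affine shift -- NOT linear

x = rng.normal(size=3)
y = rng.normal(size=3)
a = 2.5

print(np.allclose(T(x + y), T(x) + T(y)))   # additivity holds
print(np.allclose(T(a * x), a * T(x)))      # homogeneity holds
print(np.allclose(S(x + y), S(x) + S(y)))   # fails for the affine map
```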
| <p>I think <em>linear</em> refers to the fact that vector spaces are not curved. For instance, the wikipedia page for <a href="http://en.wikipedia.org/wiki/Linear_space">linear spaces</a> gets redirected to the page on vector spaces. So does the <a href="http://mathworld.wolfram.com/LinearSpace.html">one</a> at MathWorld.</p>
<p>From Moore in <a href="http://dx.doi.org/10.1006%2Fhmat.1995.1025">The axiomatization of linear algebra: 1875–1940</a> I've learned that:</p>
<ul>
<li>Peano used <em>linear systems</em> for what we now call vector spaces. This reflects the view that linear algebra is about spaces of linear algebraic relations. (p. 265, 266)</li>
<li>Pincherle was the first to use the term <em>linear space</em> for the concept of vector space. (p. 270)</li>
<li>Hahn used <em>linear space</em> for <em>normed vector space</em>. (p. 277)</li>
<li><em>Linear transformations</em> as an abstract concept seem to have been introduced much later by Emmy Noether. (p. 293)</li>
<li>The term <em>linear algebra</em> was first used in the modern sense by van der Waerden although the term can be found earlier in Weyl. (p. 294)</li>
</ul>
|
linear-algebra | <p>Where can I find a list of typos for <em>Linear Algebra</em>, 2nd Edition, by Hoffman and Kunze? I searched on Google, but to no avail.</p>
| <p>This list does not repeat the typos mentioned in the other answers.</p>
<h2>Chapter 1</h2>
<ol>
<li>Page 6, last paragraph.</li>
</ol>
<blockquote>
<p>An elementary row operation is thus a special type of function (rule) <span class="math-container">$e$</span> which associated with each <span class="math-container">$m \times n$</span> matrix . . .</p>
</blockquote>
<p>It should be "associates".</p>
<ol start="2">
<li>Page 10, proof of Theorem 4, second paragraph.</li>
</ol>
<blockquote>
<p>say it occurs in column <span class="math-container">$k_r \neq k$</span>.</p>
</blockquote>
<p>It should be <span class="math-container">$k' \neq k$</span>.</p>
<ol start="3">
<li>Page 18, last paragraph.</li>
</ol>
<blockquote>
<p>If <span class="math-container">$B$</span> is an <span class="math-container">$n \times p$</span> matrix, the columns of <span class="math-container">$B$</span> are the <span class="math-container">$1 \times n$</span> matrices . . .</p>
</blockquote>
<p>It should be <span class="math-container">$n \times 1$</span>.</p>
<ol start="4">
<li>Page 24, statement of second corollary.</li>
</ol>
<blockquote>
<p><em>Let <span class="math-container">$\text{A} = \text{A}_1 \text{A}_2 \cdots A_\text{k}$</span>, where <span class="math-container">$\text{A}_1 \dots,A_\text{k}$</span> are</em> . . .</p>
</blockquote>
<p>The formatting of <span class="math-container">$A_\text{k}$</span> is incorrect in both instances. Also, there should be a comma after <span class="math-container">$\text{A}_1$</span> in the second instance. So, it should be "<em>Let <span class="math-container">$\text{A} = \text{A}_1 \text{A}_2 \cdots \text{A}_\text{k}$</span>, where <span class="math-container">$\text{A}_1, \dots,\text{A}_\text{k}$</span> are</em> . . .".</p>
<h2>Chapter 2</h2>
<ol>
<li>Page 52, below equation (2–16).</li>
</ol>
<blockquote>
<p>Thus from (2–16) and Theorem 7 of Chapter 1 . . .</p>
</blockquote>
<p>It should be Theorem 13.</p>
<ol start="2">
<li>Page 57, second last displayed equation.</li>
</ol>
<blockquote>
<p><span class="math-container">$$ \beta = (0,\dots,0,\ \ b_{k_s},\dots,b_n), \quad b_{k_s} \neq 0$$</span></p>
</blockquote>
<p>The formatting on the right-hand side is not correct. There is too much space before <span class="math-container">$b_{k_s}$</span>. It should be <span class="math-container">$$\beta = (0,\dots,0,b_{k_s},\dots,b_n), \quad b_{k_s} \neq 0$$</span>
instead.</p>
<ol start="3">
<li>Page 57, last displayed equation.</li>
</ol>
<blockquote>
<p><span class="math-container">$$ \beta = (0,\dots,0,\ \ b_t,\dots,b_n), \quad b_t \neq 0.$$</span></p>
</blockquote>
<p>The formatting on the right-hand side is not correct. There is too much space before <span class="math-container">$b_t$</span>. It should instead be <span class="math-container">$$\beta = (0,\dots,0,b_t,\dots,b_n), \quad b_t \neq 0.$$</span></p>
<ol start="4">
<li>Page 62, second last paragraph.</li>
</ol>
<blockquote>
<p>So <span class="math-container">$\beta = (b_1,b_2,b_3,b_4)$</span> is in <span class="math-container">$W$</span> if and only if <span class="math-container">$b_3 - 2b_1$</span>. . . .</p>
</blockquote>
<p>It should be <span class="math-container">$b_3 = 2b_1$</span>.</p>
<h2>Chapter 3</h2>
<ol>
<li>Page 76, first paragraph.</li>
</ol>
<blockquote>
<p>let <span class="math-container">$A_{ij},\dots,A_{mj}$</span> be the coordinates of the vector . . .</p>
</blockquote>
<p>It should be <span class="math-container">$A_{1j},\dots,A_{mj}$</span>.</p>
<ol start="2">
<li>Page 80, Example 11.</li>
</ol>
<blockquote>
<p>For example, if <span class="math-container">$U$</span> is the operation 'remove the constant term and divide by <span class="math-container">$x$</span>': <span class="math-container">$$ U(c_0 + c_1 x + \dots + c_n x^n) = c_1 + c_2 x + \dots + c_n x^{n-1}$$</span> then . . .</p>
</blockquote>
<p>There is a subtlety in the phrase within apostrophes: what if <span class="math-container">$x = 0$</span>? Rather than having to specify for this case separately, the sentence can be worded more simply as, "For example, if <span class="math-container">$U$</span> is the operator defined by <span class="math-container">$$U(c_0 + c_1 x + \dots + c_n x^n) = c_1 + c_2 x + \dots + c_n x^{n-1}$$</span> then . . .".</p>
<ol start="3">
<li>Page 81, last line.</li>
</ol>
<blockquote>
<p>(iv) <em>If <span class="math-container">$\{ \alpha_1,\dots,\alpha_{\text{n}}\}$</span> is basis for <span class="math-container">$\text{V}$</span>, then <span class="math-container">$\{\text{T}\alpha_1,\dots,\text{T}\alpha_{\text{n}}\}$</span> is a basis for <span class="math-container">$\text{W}$</span>.</em></p>
</blockquote>
<p>It should read "(iv) <em>If <span class="math-container">$\{ \alpha_1,\dots,\alpha_{\text{n}}\}$</span> is a basis for <span class="math-container">$\text{V}$</span>, then . . .</em>".</p>
<ol start="4">
<li>Page 90, second last paragraph.</li>
</ol>
<blockquote>
<p>We should also point out that we proved a special case of Theorem 13 in Example 12.</p>
</blockquote>
<p>It should be "in Example 10."</p>
<ol start="5">
<li>Page 91, first paragraph.</li>
</ol>
<blockquote>
<p>For, the identity operator <span class="math-container">$I$</span> is represented by the identity matrix in any order basis, and thus . . .</p>
</blockquote>
<p>It should be "ordered".</p>
<ol start="6">
<li>Page 92, statement of Theorem 14.</li>
</ol>
<blockquote>
<p><em>Let <span class="math-container">$\text{V}$</span> be a finite-dimensional vector space over the field <span class="math-container">$\text{F}$</span> and let
<span class="math-container">$$\mathscr{B} = \{ \alpha_1,\dots,\alpha \text{i} \} \quad \textit{and} \quad \mathscr{B}'=\{ \alpha'_1,\dots,\alpha'_\text{n}\}$$</span> be ordered bases</em> . . .</p>
</blockquote>
<p>It should be <span class="math-container">$\mathscr{B} = \{ \alpha_1,\dots,\alpha_\text{n}\}$</span>.</p>
<ol start="7">
<li>Page 100, first paragraph.</li>
</ol>
<blockquote>
<p>If <span class="math-container">$f$</span> is in <span class="math-container">$V^*$</span>, and we let <span class="math-container">$f(\alpha_i) = \alpha_i$</span>, then when . . .</p>
</blockquote>
<p>It should be <span class="math-container">$f(\alpha_i) = a_i$</span>.</p>
<ol start="8">
<li>Page 101, paragraph following the definition.</li>
</ol>
<blockquote>
<p>If <span class="math-container">$S = V$</span>, then <span class="math-container">$S^0$</span> is the zero subspace of <span class="math-container">$V^*$</span>. (This is easy to see when <span class="math-container">$V$</span> is finite dimensional.)</p>
</blockquote>
<p>It is equally easy to see this when <span class="math-container">$V$</span> is infinite-dimensional, so the statement in the brackets is redundant. Perhaps the authors meant to say that <span class="math-container">$\{ v \in V : f(v) = 0\ \forall\ f \in V^* \}$</span> is the zero subspace of <span class="math-container">$V$</span>. <a href="https://math.stackexchange.com/q/2815104/279515">This question</a> asks for details on this point.</p>
<ol start="9">
<li>Page 102, proof of the second corollary.</li>
</ol>
<blockquote>
<p>By the previous corollaries (or the proof of Theorem 16) there is a linear functional <span class="math-container">$f$</span> such that <span class="math-container">$f(\beta) = 0$</span> for all <span class="math-container">$\beta$</span> in <span class="math-container">$W$</span>, but <span class="math-container">$f(\alpha) \neq 0$</span>. . . .</p>
</blockquote>
<p>It should be "corollary", since there is only one previous corollary. Also, <span class="math-container">$W$</span> should be replaced by <span class="math-container">$W_1$</span>.</p>
<ol start="10">
<li>Page 112, statement of Theorem 22.</li>
</ol>
<blockquote>
<p>(i) <em>rank <span class="math-container">$(T^t) = $</span> rank <span class="math-container">$(T)$</span></em></p>
</blockquote>
<p>There should be a semi-colon at the end of the line.</p>
<h2>Chapter 4</h2>
<ol>
<li>Page 118, last displayed equation, third line.</li>
</ol>
<blockquote>
<p><span class="math-container">$$=\sum_{i=0}^n \sum_{j=0}^i f_i g_{i-j} h_{n-i} $$</span></p>
</blockquote>
<p>It should be <span class="math-container">$f_j$</span>. <a href="https://math.stackexchange.com/q/1522044/279515">It is also not immediately clear</a> how to go from this line to the next line.</p>
<ol start="2">
<li>Page 126, proof of Theorem 3.</li>
</ol>
<blockquote>
<p>By definition, the mapping is onto, and if <span class="math-container">$f$</span>, <span class="math-container">$g$</span> belong to <span class="math-container">$F[x]$</span> it is evident that <span class="math-container">$$(cf+dg)^\sim = df^\sim + dg^\sim$$</span> for all scalars <span class="math-container">$c$</span> and <span class="math-container">$d$</span>. . . .</p>
</blockquote>
<p>It should be <span class="math-container">$(cf+dg)^\sim = cf^\sim + dg^\sim$</span>.</p>
<ol start="3">
<li>Page 126, proof of Theorem 3.</li>
</ol>
<blockquote>
<p>Suppose then that <span class="math-container">$f$</span> is a polynomial of degree <span class="math-container">$n$</span> such that <span class="math-container">$f' = 0$</span>. . . .</p>
</blockquote>
<p>It should be <span class="math-container">$f^\sim = 0$</span>.</p>
<ol start="4">
<li>Page 128, statement of Theorem 4.</li>
</ol>
<blockquote>
<p>(i) <span class="math-container">$f = dq + r$</span>.</p>
</blockquote>
<p>The full stop should be a semi-colon.</p>
<ol start="5">
<li><p>Page 129, paragraph before statement of Theorem 5. The notation <span class="math-container">$D^0$</span> needs to be introduced, so the sentence, "We also use the notation <span class="math-container">$D^0 f = f$</span>" can be added at the end of the paragraph.</p>
</li>
<li><p>Page 131, first displayed equation, second line.</p>
</li>
</ol>
<blockquote>
<p><span class="math-container">$$ = \sum_{m = 0}^{n-r} \frac{(D^m g)}{m!}(x-c)^{r+m} $$</span></p>
</blockquote>
<p>There should be a full stop at the end of the line.</p>
<ol start="7">
<li>Page 135, proof of Theorem 8.</li>
</ol>
<blockquote>
<p>Since <span class="math-container">$(f,p) = 1$</span>, there are polynomials . . .</p>
</blockquote>
<p>It should be <span class="math-container">$\text{g.c.d.}{(f,p)} = 1$</span>.</p>
<ol start="8">
<li>Page 137, first paragraph.</li>
</ol>
<blockquote>
<p>This decomposition is also clearly unique, and is called the <strong>primary decomposition</strong> of <span class="math-container">$f$</span>. . . .</p>
</blockquote>
<p>For the sake of clarity, the following sentence can be added after the quoted line: "Henceforth, whenever we refer to the <strong>prime factorization</strong> of a non-scalar monic polynomial we mean the primary decomposition of the polynomial."</p>
<ol start="9">
<li><p>Page 137, proof of Theorem 11. The chain rule for the formal derivative of a product of polynomials is used, but <a href="https://proofwiki.org/wiki/Formal_Derivative_of_Polynomials_Satisfies_Leibniz%27s_Rule" rel="noreferrer">this needs proof</a>.</p>
</li>
<li><p>Page 139, Exercise 7.</p>
</li>
</ol>
<blockquote>
<p>Use Exercise 7 to prove the following. . . .</p>
</blockquote>
<p>It should be "Use Exercise 6 to prove the following. . . ."</p>
<h2>Chapter 5</h2>
<ol>
<li>Page 142, second last displayed equation.</li>
</ol>
<blockquote>
<p><span class="math-container">$$\begin{align} D(c\alpha_i + \alpha'_{iz}) &= [cA(i,k_i) + A'(i,k_i)]b \\ &= cD(\alpha_i) + D(\alpha'_i) \end{align}$$</span></p>
</blockquote>
<p>The left-hand side should be <span class="math-container">$D(c\alpha_i + \alpha'_i)$</span>.</p>
<ol start="2">
<li>Page 166, first displayed equation.</li>
</ol>
<blockquote>
<p><span class="math-container">$$\begin{align*}L(\alpha_1,\dots,c \alpha_i + \beta_i,\dots,\alpha_r) &= cL(\alpha_1,\dots,\alpha_i,\dots,\alpha_r {}+{} \\ &\qquad \qquad \qquad \qquad L(\alpha_1,\dots,\beta_i,\dots,\alpha_r)\end{align*}$$</span></p>
</blockquote>
<p>The first term on the right has a missing closing bracket, so it should be <span class="math-container">$cL(\alpha_1,\dots,\alpha_i,\dots,\alpha_r)$</span>.</p>
<ol start="3">
<li>Page 167, second displayed equation, third line.</li>
</ol>
<blockquote>
<p><span class="math-container">$${}={} \sum_{j=1}^n A_{1j} L\left( \epsilon_j, \sum_{j=1}^n A_{2k} \epsilon_k, \dots, \alpha_r \right) $$</span></p>
</blockquote>
<p>The second summation should run over the index <span class="math-container">$k$</span> instead of <span class="math-container">$j$</span>.</p>
<ol start="4">
<li><p>Page 170, proof of the lemma. To show that <span class="math-container">$\pi_r L \in \Lambda^r(V)$</span>, the authors show that <span class="math-container">$(\pi_r L)_\tau = (\operatorname{sgn}{\tau})(\pi_rL)$</span> for every permutation <span class="math-container">$\tau$</span> of <span class="math-container">$\{1,\dots,r\}$</span>. This implies that <span class="math-container">$\pi_r L$</span> is alternating only when <span class="math-container">$K$</span> is a ring such that <span class="math-container">$1 + 1 \neq 0$</span>. <a href="https://math.stackexchange.com/q/1140462/279515">A proof over arbitrary commutative rings with identity</a> is still needed.</p>
</li>
<li><p>Page 170, first paragraph after proof of the lemma.</p>
</li>
</ol>
<blockquote>
<p>In (5–33) we showed that the determinant . . .</p>
</blockquote>
<p>It should be (5–34).</p>
<ol start="6">
<li>Page 171, equation (5–39).</li>
</ol>
<blockquote>
<p><span class="math-container">$$\begin{align} D_J &= \sum_\sigma (\operatorname{sgn} \sigma)\ f_{j_{\sigma 1}} \otimes \dots \otimes f_{j_{\sigma r}} \tag{5–39}\\ &= \pi_r (f_{j_1} \otimes \dots \otimes f_{j_r}) \end{align}$$</span></p>
</blockquote>
<p>The equation tag should be centered instead of being aligned at the first line.</p>
<ol start="7">
<li>Page 173, Equation (5-42)</li>
</ol>
<blockquote>
<p><span class="math-container">$$ D_J(\alpha_1,\dotsc,\alpha_r) = \sum_\sigma (\operatorname{sgn} \sigma) A(1,j_{\sigma 1})\dotsm A(n,j_{\sigma n}) \tag{5-42}$$</span></p>
</blockquote>
<p>There are only <span class="math-container">$r$</span> terms in the product. Hence the equation should instead be: <span class="math-container">$D_J(\alpha_1,\dotsc,\alpha_r) = \sum_\sigma (\operatorname{sgn} \sigma) A(1,j_{\sigma 1})\dotsm A(r,j_{\sigma r})$</span>.</p>
<ol start="8">
<li>Page 174, below the second displayed equation.</li>
</ol>
<blockquote>
<p>The proof of the lemma following equation (5–36) shows that for any <span class="math-container">$r$</span>-linear form <span class="math-container">$L$</span> and any permutation <span class="math-container">$\sigma$</span> of <span class="math-container">$\{1,\dots,r\}$</span>
<span class="math-container">$$
\pi_r(L_\sigma) = \operatorname{sgn} \sigma\ \pi_r(L)
$$</span></p>
</blockquote>
<p>The proof of the lemma actually shows <span class="math-container">$(\pi_r L)_\sigma = \operatorname{sgn} \sigma\ \pi_r(L)$</span>. <a href="https://math.stackexchange.com/q/2587664/279515">This fact still needs proof</a>. Also, there should be a full stop at the end of the displayed equation.</p>
<ol start="9">
<li>Page 174, below the third displayed equation.</li>
</ol>
<blockquote>
<p>Hence, <span class="math-container">$D_{ij} \cdot f_k = 2\pi_3(f_i \otimes f_j \otimes f_k)$</span>.</p>
</blockquote>
<p>This is not immediate from just the preceding equations. The authors implicitly assume the identity <span class="math-container">$(f_{j_1} \otimes \dots \otimes f_{j_r})_\sigma = f_{j_{\sigma^{-1} 1}}\! \otimes \dots \otimes f_{j_{\sigma^{-1} r}}$</span>. <a href="https://math.stackexchange.com/a/2587246/279515">This identity needs proof</a>.</p>
<ol start="10">
<li>Page 174, sixth displayed equation.</li>
</ol>
<blockquote>
<p><span class="math-container">$$(D_{ij} \cdot f_k) \cdot f_l = 6 \pi_4(f_i \otimes f_j \otimes f_k \otimes f_l)$$</span></p>
</blockquote>
<p>The factor <span class="math-container">$6$</span> <a href="https://math.stackexchange.com/q/1208424/279515">should be replaced</a> by <span class="math-container">$12$</span>.</p>
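<p>This factor can be checked numerically. The sketch below (Python/NumPy, mine, not from the book) assumes that the product <span class="math-container">$L \cdot M$</span> used on page 174 denotes <span class="math-container">$\pi_{r+s}(L \otimes M)$</span>; under that reading the factor is indeed <span class="math-container">$12$</span>:</p>

```python
import itertools

import numpy as np

def sgn(perm):
    """Sign of a permutation given as a tuple of 0-based indices."""
    s = 1
    for a in range(len(perm)):
        for b in range(a + 1, len(perm)):
            if perm[a] > perm[b]:
                s = -s
    return s

def pi(T):
    """The antisymmetrizer pi_r(T) = sum over sigma of sgn(sigma) * T_sigma."""
    return sum(sgn(p) * np.transpose(T, p)
               for p in itertools.permutations(range(T.ndim)))

def dot(L, M):
    """Assumption: the product 'L . M' on p. 174 means pi(L tensor M)."""
    return pi(np.tensordot(L, M, axes=0))

# Coordinate functionals f_i on R^4, represented as rank-1 tensors.
f = np.eye(4)
fi, fj, fk, fl = f[0], f[1], f[2], f[3]

D_ij = pi(np.tensordot(fi, fj, axes=0))   # D_ij = pi_2(f_i tensor f_j)
lhs = dot(dot(D_ij, fk), fl)              # (D_ij . f_k) . f_l

T4 = np.tensordot(np.tensordot(np.tensordot(fi, fj, 0), fk, 0), fl, 0)
rhs = pi(T4)                              # pi_4(f_i tensor f_j tensor f_k tensor f_l)

print(np.allclose(lhs, 12 * rhs))  # → True: the factor is 12, not 6
```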
<ol start="11">
<li>Page 174, last displayed equation.</li>
</ol>
<blockquote>
<p><span class="math-container">$$ (L \otimes M)_{(\sigma,\tau)} = L_\sigma \otimes L_\tau$$</span></p>
</blockquote>
<p>The right-hand side should be <span class="math-container">$L_\sigma \otimes M_\tau$</span>.</p>
<ol start="12">
<li>Page 177, below the third displayed equation.</li>
</ol>
<blockquote>
<p>Therefore, since <span class="math-container">$(N\sigma)\tau = N\tau \sigma$</span> for any <span class="math-container">$(r+s)$</span>-linear form . . .</p>
</blockquote>
<p>It should be <span class="math-container">$(N_\sigma)_\tau = N_{\tau \sigma}$</span>.</p>
<ol start="13">
<li>Page 179, last displayed equation.</li>
</ol>
<blockquote>
<p><span class="math-container">$$ (L \wedge M)(\alpha_1,\dots,\alpha_n) = \sum (\operatorname{sgn} \sigma) L(\alpha \sigma_1,\dots,\alpha_{\sigma r}) M(\alpha_{\sigma(r+1)},\dots,\alpha_{\sigma_n}) $$</span></p>
</blockquote>
<p>The right-hand side should have <span class="math-container">$L(\alpha_{\sigma 1},\dots,\alpha_{\sigma r})$</span> and <span class="math-container">$M(\alpha_{\sigma (r+1)},\dots,\alpha_{\sigma n})$</span>.</p>
<h2>Chapter 6</h2>
<ol>
<li>Page 183, first paragraph.</li>
</ol>
<blockquote>
<p>If the underlying space <span class="math-container">$V$</span> is finite-dimensional, <span class="math-container">$(T-cI)$</span> fails to be <span class="math-container">$1 : 1$</span> precisely when its determinant is different from <span class="math-container">$0$</span>.</p>
</blockquote>
<p>It should instead be "precisely when its determinant is <span class="math-container">$0$</span>."</p>
<ol start="2">
<li>Page 186, proof of second lemma.</li>
</ol>
<blockquote>
<p>one expects that <span class="math-container">$\dim W < \dim W_1 + \dots \dim W_k$</span> because of linear relations . . .</p>
</blockquote>
<p>It should be <span class="math-container">$\dim W \leq \dim W_1 + \dots + \dim W_k$</span>.</p>
<ol start="3">
<li>Page 194, statement of Theorem 4 (Cayley-Hamilton).</li>
</ol>
<blockquote>
<p><em>Let <span class="math-container">$\text{T}$</span> be a linear operator on a finite dimensional vector space <span class="math-container">$\text{V}$</span>. . . .</em></p>
</blockquote>
<p>It should be "finite-dimensional".</p>
<ol start="4">
<li>Page 195, first displayed equation.</li>
</ol>
<blockquote>
<p><span class="math-container">$$T\alpha_i = \sum_{j=1}^n A_{ji} \alpha_j,\quad 1 \leq j \leq n.$$</span></p>
</blockquote>
<p>It should be <span class="math-container">$1 \leq i \leq n$</span>.</p>
<ol start="5">
<li>Page 195, above the last paragraph.</li>
</ol>
<blockquote>
<p>since <span class="math-container">$f$</span> is the determinant of the matrix <span class="math-container">$xI - A$</span> whose entries are the polynomials <span class="math-container">$$(xI - A)_{ij} = \delta_{ij} x - A_{ji}.$$</span></p>
</blockquote>
<p>Here <span class="math-container">$xI-A$</span> should be replaced by <span class="math-container">$(xI-A)^t$</span> in both places, and it could read "since <span class="math-container">$f$</span> is also the determinant of" for more clarity.</p>
<ol start="6">
<li>Page 203, proof of Theorem 5, last paragraph.</li>
</ol>
<blockquote>
<p>The diagonal entries <span class="math-container">$a_{11},\dots,a_{1n}$</span> are the characteristic values, . . .</p>
</blockquote>
<p>It should be <span class="math-container">$a_{11},\dots,a_{nn}$</span>.</p>
<ol start="7">
<li>Page 207, proof of Theorem 7.</li>
</ol>
<blockquote>
<p>this theorem has the same proof as does Theorem 5, if one replaces <span class="math-container">$T$</span> by <span class="math-container">$\mathscr{F}$</span>.</p>
</blockquote>
<p>It would make more sense if it read "replaces <span class="math-container">$T$</span> by <span class="math-container">$T \in \mathscr{F}$</span>."</p>
<ol start="8">
<li>Page 207-208, proof of Theorem 8.</li>
</ol>
<blockquote>
<p>We could prove this theorem by adapting the lemma before Theorem 7 to the diagonalizable case, just as we adapted the lemma before Theorem 5 to the diagonalizable case in order to prove Theorem 6.</p>
</blockquote>
<p>The adaptation of the lemma before Theorem 5 is not explicitly done. <a href="https://math.stackexchange.com/q/2184825/279515">It is hidden in the proof of Theorem 6</a>.</p>
<ol start="9">
<li>Page 212, statement of Theorem 9.</li>
</ol>
<blockquote>
<p><em>and if we let <span class="math-container">$\text{W}_\text{i}$</span> be the range of <span class="math-container">$\text{E}_\text{i}$</span>, then <span class="math-container">$\text{V} = \text{W}_\text{i} \oplus \dots \oplus \text{W}_\text{k}$</span>.</em></p>
</blockquote>
<p>It should be <span class="math-container">$\text{V} = \text{W}_1 \oplus \dots \oplus \text{W}_\text{k}$</span>.</p>
<ol start="10">
<li>Page 216, last paragraph.</li>
</ol>
<blockquote>
<p>One part of Theorem 9 says that for a diagonalizable operator . . .</p>
</blockquote>
<p>It should be Theorem 11.</p>
<ol start="11">
<li>Page 220, statement of Theorem 12.</li>
</ol>
<blockquote>
<p><em>Let <span class="math-container">$\text{p}$</span> be the minimal polynomial for <span class="math-container">$\text{T}$</span>, <span class="math-container">$$\text{p} = \text{p}_1^{\text{r}_1} \cdots \text{p}_k^{r_k}$$</span> where the <span class="math-container">$\text{p}_\text{i}$</span> are distinct irreducible monic polynomials over <span class="math-container">$\text{F}$</span> and the <span class="math-container">$\text{r}_\text{i}$</span> are positive integers. Let <span class="math-container">$\text{W}_\text{i}$</span> be the null space of <span class="math-container">$\text{p}_\text{i}(\text{T})^{\text{r}_j}$</span>, <span class="math-container">$\text{i} = 1,\dots,\text{k}$</span>.</em></p>
</blockquote>
<p>The displayed equation is improperly formatted. It should read <span class="math-container">$\text{p} = \text{p}_1^{\text{r}_1} \cdots \text{p}_\text{k}^{\text{r}_\text{k}}$</span>. Also, in the second sentence it should be <span class="math-container">$\text{p}_\text{i}(\text{T})^{\text{r}_\text{i}}$</span>.</p>
<ol start="12">
<li>Page 221, below the last displayed equation.</li>
</ol>
<blockquote>
<p>because <span class="math-container">$p^{r_i} f_i g_i$</span> is divisible by the minimal polynomial <span class="math-container">$p$</span>.</p>
</blockquote>
<p>It should be <span class="math-container">$p_i^{r_i} f_i g_i$</span>.</p>
<h2>Chapter 7</h2>
<ol>
<li><p>Page 233, proof of Theorem 3, last displayed equation in statement of Step 1. The formatting of "<span class="math-container">$\alpha$</span> in <span class="math-container">$V$</span>" underneath the "<span class="math-container">$max$</span>" operator on the right-hand side is incorrect. It should be "<span class="math-container">$\alpha$</span> in <span class="math-container">$\text{V}$</span>".</p>
</li>
<li><p>Page 233, proof of Theorem 3, displayed equation in statement of Step 2. The formatting of "<span class="math-container">$1 \leq i < k$</span>" underneath the <span class="math-container">$\sum$</span> operator on the right-hand side is incorrect. It should be "<span class="math-container">$1 \leq \text{i} < \text{k}$</span>".</p>
</li>
<li><p>Page 238, paragraph following corollary.</p>
</li>
</ol>
<blockquote>
<p>If we have the operator <span class="math-container">$T$</span> and the direct-sum decomposition of Theorem 3, let <span class="math-container">$\mathscr{B}_i$</span> be the ‘cyclic ordered basis’ . . .</p>
</blockquote>
<p>It should be “of Theorem 3 with <span class="math-container">$W_0 = \{ 0 \}$</span>, . . .”.</p>
<ol start="4">
<li>Page 239, Example 2.</li>
</ol>
<blockquote>
<p>If <span class="math-container">$T = cI$</span>, then for any two linear independent vectors <span class="math-container">$\alpha_1$</span> and <span class="math-container">$\alpha_2$</span> in <span class="math-container">$V$</span> we have . . .</p>
</blockquote>
<p>It should be "linearly".</p>
<ol start="5">
<li>Page 240, second last displayed equation.</li>
</ol>
<blockquote>
<p><span class="math-container">$$f = (x-c_1)^{d_1} \cdots (x - c_k)^{d_k}$$</span></p>
</blockquote>
<p>It should just be <span class="math-container">$(x-c_1)^{d_1} \cdots (x - c_k)^{d_k}$</span> because later (on page 241) the letter <span class="math-container">$f$</span> is again used, this time to denote an arbitrary polynomial.</p>
<ol start="6">
<li>Page 244, last paragraph.</li>
</ol>
<blockquote>
<p>where <span class="math-container">$f_i$</span> is a polynomial, the degree of which we may assume is less than <span class="math-container">$k_i$</span>. Since <span class="math-container">$N\alpha = 0$</span>, for each <span class="math-container">$i$</span> we have . . .</p>
</blockquote>
<p>It should be “where <span class="math-container">$f_i$</span> is a polynomial, the degree of which we may assume is less than <span class="math-container">$k_i$</span> whenever <span class="math-container">$f_i \neq 0$</span>. Since <span class="math-container">$N\alpha = 0$</span>, for each <span class="math-container">$i$</span> such that <span class="math-container">$f_i \neq 0$</span> we have . . .”.</p>
<ol start="7">
<li>Page 245, first paragraph.</li>
</ol>
<blockquote>
<p>Thus <span class="math-container">$xf_i$</span> is divisible by <span class="math-container">$x^{k_i}$</span>, and since <span class="math-container">$\deg (f_i) > k_i$</span> this means that <span class="math-container">$$f_i = c_i x^{k_i - 1}$$</span> where <span class="math-container">$c_i$</span> is some scalar.</p>
</blockquote>
<p>It should be <span class="math-container">$\deg (f_i) < k_i$</span>. Also, the following sentence should be added at the end: "If <span class="math-container">$f_j = 0$</span>, then we can take <span class="math-container">$c_j = 0$</span> so that <span class="math-container">$f_j = c_j x^{k_j - 1}$</span> in this case as well."</p>
<ol start="8">
<li>Page 245, last paragraph.</li>
</ol>
<blockquote>
<p>Furthermore, the sizes of these matrices will decrease as one reads from left to right.</p>
</blockquote>
<p>It should be “Furthermore, the sizes of these matrices will not increase as one reads from left to right.”</p>
<ol start="9">
<li>Page 246, first paragraph.</li>
</ol>
<blockquote>
<p>Also, within each <span class="math-container">$A_i$</span>, the sizes of the matrices <span class="math-container">$J_j^{(i)}$</span> decrease as <span class="math-container">$j$</span> increases.</p>
</blockquote>
<p>It should be “Also, within each <span class="math-container">$A_i$</span>, the sizes of the matrices <span class="math-container">$J_j^{(i)}$</span> do not increase as <span class="math-container">$j$</span> increases.”</p>
<ol start="10">
<li>Page 246, third paragraph.</li>
</ol>
<blockquote>
<p>The uniqueness we see as follows.</p>
</blockquote>
<p>This part is not clearly written. What the authors want to show is the following. Suppose that the linear operator <span class="math-container">$T$</span> is represented in some other ordered basis by the matrix <span class="math-container">$B$</span> in Jordan form, where <span class="math-container">$B$</span> is the direct sum of the matrices <span class="math-container">$B_1,\dots,B_s$</span>. Suppose each <span class="math-container">$B_i$</span> is an <span class="math-container">$e_i \times e_i$</span> matrix that is a direct sum of elementary Jordan matrices with characteristic value <span class="math-container">$\lambda_i$</span>. Suppose the matrix <span class="math-container">$B$</span> induces the invariant direct-sum decomposition <span class="math-container">$V = U_1 \oplus \dots \oplus U_s$</span>. Then,
<span class="math-container">$s = k$</span>, and there is a permutation <span class="math-container">$\sigma$</span> of <span class="math-container">$\{ 1,\dots,k\}$</span> such that <span class="math-container">$\lambda_i = c_{\sigma i}$</span>, <span class="math-container">$e_i = d_{\sigma i}$</span>, <span class="math-container">$U_i = W_{\sigma i}$</span>, and <span class="math-container">$B_i = A_{\sigma i}$</span> for each <span class="math-container">$1 \leq i \leq k$</span>.</p>
<ol start="11">
<li>Page 246, third paragraph.</li>
</ol>
<blockquote>
<p>The fact that <span class="math-container">$A$</span> is the direct sum of the matrices <span class="math-container">$\text{A}_i$</span> gives us a direct sum decomposition . . .</p>
</blockquote>
<p>The formatting of <span class="math-container">$\text{A}_i$</span> is incorrect. It should be <span class="math-container">$A_i$</span>.</p>
<ol start="12">
<li>Page 246, third paragraph.</li>
</ol>
<blockquote>
<p>then the matrix <span class="math-container">$A_i$</span> is uniquely determined as the rational form for <span class="math-container">$(T_i - c_i I)$</span>.</p>
</blockquote>
<p>It should be "is uniquely determined by the rational form . . .".</p>
<ol start="13">
<li>Page 248, Example 7.</li>
</ol>
<blockquote>
<p>Since <span class="math-container">$A$</span> is the direct sum of two <span class="math-container">$2 \times 2$</span> matrices, it is clear that the minimal polynomial for <span class="math-container">$A$</span> is <span class="math-container">$(x-2)^2$</span>.</p>
</blockquote>
<p>It should read "Since <span class="math-container">$A$</span> is the direct sum of two <span class="math-container">$2 \times 2$</span> matrices when <span class="math-container">$a \neq 0$</span>, and of one <span class="math-container">$2 \times 2$</span> matrix and two <span class="math-container">$1 \times 1$</span> matrices when <span class="math-container">$a = 0$</span>, it is clear that the minimal polynomial for <span class="math-container">$A$</span> is <span class="math-container">$(x-2)^2$</span> in either case."</p>
<ol start="14">
<li>Page 249, first paragraph.</li>
</ol>
<blockquote>
<p>Then as we noted in Example 15, Chapter 6 the primary decomposition theorem tells us that . . .</p>
</blockquote>
<p>It should be Example 14.</p>
<ol start="15">
<li>Page 249, last displayed equation</li>
</ol>
<blockquote>
<p><span class="math-container">$$\begin{align} Ng &= (r-1)x^{r-2}h \\ \vdots\ & \qquad \ \vdots \\ N^{r-1}g &= (r-1)! h \end{align}$$</span></p>
</blockquote>
<p>There should be a full stop at the end.</p>
<ol start="16">
<li>Page 257, definition.</li>
</ol>
<blockquote>
<p>(b) <em>on the main diagonal of <span class="math-container">$\text{N}$</span> there appear (in order) polynomials <span class="math-container">$\text{f}_1,\dots,\text{f}_l$</span> such that <span class="math-container">$\text{f}_\text{k}$</span> divides <span class="math-container">$\text{f}_{\text{k}+1}$</span>, <span class="math-container">$1 \leq \text{k} \leq l - 1$</span>.</em></p>
</blockquote>
<p>The formatting of <span class="math-container">$l$</span> is incorrect in both instances. So, it should be <span class="math-container">$\text{f}_1,\dots,\text{f}_\text{l}$</span> and <span class="math-container">$1 \leq \text{k} \leq \text{l} - 1$</span>.</p>
<ol start="17">
<li>Page 259, paragraph following the proof of Theorem 9.</li>
</ol>
<blockquote>
<p>Two things we have seen provide clues as to how the polynomials <span class="math-container">$f_1,\dots,f_{\text{l}}$</span> in Theorem 9 are uniquely determined by <span class="math-container">$M$</span>.</p>
</blockquote>
<p>The formatting of <span class="math-container">$l$</span> is incorrect. It should be <span class="math-container">$f_1,\dots,f_l$</span>.</p>
<ol start="18">
<li>Page 260, third paragraph.</li>
</ol>
<blockquote>
<p>For the case of a type (c) operation, notice that . . .</p>
</blockquote>
<p>It should be (b).</p>
<ol start="19">
<li>Page 260, statement of Corollary.</li>
</ol>
<blockquote>
<p><em>The polynomials <span class="math-container">$\text{f}_1,\dots,\text{f}_l$</span> which occur on the main diagonal of <span class="math-container">$N$</span> are . . .</em></p>
</blockquote>
<p>The formatting of <span class="math-container">$l$</span> is incorrect. It should be <span class="math-container">$\text{f}_1,\dots,\text{f}_\text{l}$</span>.</p>
<ol start="20">
<li>Page 265, first displayed equation, third line.</li>
</ol>
<blockquote>
<p><span class="math-container">$$ = (W \cap W_1) + \dots + (W \cap W_k) \oplus V_1 \oplus \dots \oplus V_k.$$</span></p>
</blockquote>
<p>It should be <span class="math-container">$$ = (W \cap W_1) \oplus \dots \oplus (W \cap W_k) \oplus V_1 \oplus \dots \oplus V_k.$$</span></p>
<ol start="21">
<li>Page 266, proof of second lemma. The chain rule for the formal derivative of a product of polynomials is used, but <a href="https://proofwiki.org/wiki/Formal_Derivative_of_Polynomials_Satisfies_Leibniz%27s_Rule" rel="noreferrer">this needs proof</a>.</li>
</ol>
<h2>Chapter 8</h2>
<ol>
<li>Page 274, last displayed equation, first line.</li>
</ol>
<blockquote>
<p><span class="math-container">$$ (\alpha | \beta) = \left( \sum_k x_n \alpha_k \bigg|\, \beta \right) $$</span></p>
</blockquote>
<p>It should be <span class="math-container">$x_k$</span>.</p>
<ol start="2">
<li>Page 278, first line.</li>
</ol>
<blockquote>
<p>Now using (c) we find that . . .</p>
</blockquote>
<p>It should be (iii).</p>
<ol start="3">
<li>Page 282, second displayed equation, second last line.</li>
</ol>
<blockquote>
<p><span class="math-container">$$ = (2,9,11) - 2(0,3,4) - -4,0,3) $$</span></p>
</blockquote>
<p>The right-hand side should be <span class="math-container">$(2,9,11) - 2(0,3,4) - (-4,0,3)$</span>.</p>
<ol start="4">
<li>Page 284, first displayed equation.</li>
</ol>
<blockquote>
<p><span class="math-container">$$ \alpha = \sum_k \frac{(\beta | \alpha_k)}{\| \alpha_k \|^2} \alpha_k $$</span></p>
</blockquote>
<p>This equation should be labelled (8–11).</p>
<ol start="5">
<li>Page 285, paragraph following the first definition.</li>
</ol>
<blockquote>
<p>For <span class="math-container">$S$</span> is non-empty, since it contains <span class="math-container">$0$</span>; . . .</p>
</blockquote>
<p>It should be <span class="math-container">$S^\perp$</span>.</p>
<ol start="6">
<li>Page 285, line following the first displayed equation.</li>
</ol>
<blockquote>
<p>thus <span class="math-container">$c\alpha + \beta$</span> also lies in <span class="math-container">$S$</span>. . . .</p>
</blockquote>
<p>It should be <span class="math-container">$S^\perp$</span>.</p>
<ol start="7">
<li>Page 289, Exercise 7, displayed equation.</li>
</ol>
<blockquote>
<p><span class="math-container">$$\| (x_1,x_2 \|^2 = (x_1 - x_2)^2 + 3x_2^2. $$</span></p>
</blockquote>
<p>The left-hand side should be <span class="math-container">$\| (x_1,x_2) \|^2$</span>.</p>
<ol start="8">
<li>Page 316, first line.</li>
</ol>
<blockquote>
<p><em>matrix <span class="math-container">$\text{A}$</span> of <span class="math-container">$\text{T}$</span> in the basis <span class="math-container">$\mathscr{B}$</span> is upper triangular. . . .</em></p>
</blockquote>
<p>It should be "upper-triangular".</p>
<ol start="9">
<li>Page 316, statement of Theorem 21.</li>
</ol>
<blockquote>
<p><em>Then there is an orthonormal basis for <span class="math-container">$\text{V}$</span> in which the matrix of <span class="math-container">$\text{T}$</span> is upper triangular.</em></p>
</blockquote>
<p>It should be "upper-triangular".</p>
<h2>Chapter 9</h2>
<ol>
<li>Page 344, statement of Corollary.</li>
</ol>
<blockquote>
<p><em>Under the assumptions of the theorem, let <span class="math-container">$\text{P}_\text{j}$</span> be the orthogonal projection of <span class="math-container">$\text{V}$</span> on <span class="math-container">$\text{V}(\text{r}_\text{j})$</span>, <span class="math-container">$(1 \leq \text{j} \leq \text{k})$</span>. . . .</em></p>
</blockquote>
<p>The parentheses around <span class="math-container">$1 \leq \text{j} \leq \text{k}$</span> should be removed.</p>
| <p>I'm using the second edition. I think that the <strong><em>definition</em></strong> before Theorem $9$ (Chapter $1$) should be </p>
<blockquote>
<p><strong><em>Definition.</em></strong> An $m\times m$ matrix is said to be an <strong>elementary matrix</strong> if it can be obtained from the $m\times m$ identity matrix by means of a single elementary row operation.</p>
</blockquote>
<p>instead of </p>
<blockquote>
<p><strong><em>Definition.</em></strong> An $\color{red}{m\times n}$ matrix is said to be an <strong>elementary matrix</strong> if it can be obtained from the $m\times m$ identity matrix by means of a single elementary row operation.</p>
</blockquote>
<p>Check out <a href="https://math.stackexchange.com/questions/1801120/can-a-elementary-row-operation-change-the-size-of-a-matrix">this</a> question for details.</p>
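<p>A quick way to see why the matrix must be $m \times m$: an elementary matrix acts on an $m \times n$ matrix by left-multiplication, reproducing the row operation, and that product is only defined when the elementary matrix has $m$ columns. A minimal NumPy illustration (the specific matrices are my own, not from the book):</p>

```python
import numpy as np

m, n = 3, 4
A = np.arange(m * n, dtype=float).reshape(m, n)

# An elementary matrix: swap rows 0 and 2 of the m x m identity.
E = np.eye(m)
E[[0, 2]] = E[[2, 0]]

# Left-multiplying by E applies the same row operation to A.
# This product is only defined because E is m x m (here 3 x 3),
# matching the number of rows of the m x n matrix A.
B = E @ A
expected = A[[2, 1, 0]]
print(np.array_equal(B, expected))  # → True
```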
|
geometry | <p>I went through this thread
<a href="https://math.stackexchange.com/questions/13404/mapping-irregular-quadrilateral-to-a-rectangle">Mapping Irregular Quadrilateral to a Rectangle</a></p>
<p>If I know the 4 corresponding points in the image, say
<br/>p1->p1'
<br/>p2->p2'
<br/>p3->p3'
<br/>p4->p4'</p>
<p>then how do I compute pi(x,y) from pi'(x,y)?</p>
<p><img src="https://i.sstatic.net/9JT4A.jpg" alt="enter image description here">
<img src="https://i.sstatic.net/ybsAh.jpg" alt="enter image description here"></p>
<p>I don't know how to compute the elements of the homography matrix H from those 8 known points:
$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$</p>
<p><em>[Excuse me. I am not sure if I should extend this question, or create a new one, since I can't post comments on threads]</em></p>
<p>I want to ask the same question, but using absolute values so I can visualize it.
Lets say my points on the image plane are:</p>
<pre><code>p[0] = x:407 y:253
p[1] = x:386 y:253
p[2] = x:406 y:232
p[3] = x:385 y:232
</code></pre>
<p>These points lie in a 500px width x 333px height image plane with 0,0 at the top left corner. They represent a picture of a real plane on which a square with 30mm sides is located. Assume this picture was taken by a fixed camera at the origin looking along the Z axis.</p>
<p>So, I know the physical distance between p0,p1 ; p0,p2 ; p1,p3; p2,p3 are 30mm. </p>
<p>But is it possible to get the X,Y,Z from each of these points using only this information above?</p>
<p>You can compute the homography matrix H from your eight points with a matrix system in which the four correspondence points $(p_1, p_1'), (p_2, p_2'), (p_3, p_3'), (p_4, p_4')$ are written as $2\times9$ matrices such as:</p>
<p>$p_i = \begin{bmatrix}
-x_i \quad -y_i \quad -1 \quad 0 \quad 0 \quad 0 \quad x_ix_i' \quad y_ix_i' \quad x_i' \\
0 \quad 0 \quad 0 \quad -x_i \quad -y_i \quad -1 \quad x_iy_i' \quad y_iy_i' \quad y_i'
\end{bmatrix}$</p>
<p>It is then possible to stack them into a matrix $P$ to compute:</p>
<p>$PH = 0$</p>
<p>Such as:</p>
<p>$PH = \begin{bmatrix}
-x_1 \quad -y_1 \quad -1 \quad 0 \quad 0 \quad 0 \quad x_1x_1' \quad y_1x_1' \quad x_1' \\
0 \quad 0 \quad 0 \quad -x_1 \quad -y_1 \quad -1 \quad x_1y_1' \quad y_1y_1' \quad y_1' \\
-x_2 \quad -y_2 \quad -1 \quad 0 \quad 0 \quad 0 \quad x_2x_2' \quad y_2x_2' \quad x_2' \\
0 \quad 0 \quad 0 \quad -x_2 \quad -y_2 \quad -1 \quad x_2y_2' \quad y_2y_2' \quad y_2' \\
-x_3 \quad -y_3 \quad -1 \quad 0 \quad 0 \quad 0 \quad x_3x_3' \quad y_3x_3' \quad x_3' \\
0 \quad 0 \quad 0 \quad -x_3 \quad -y_3 \quad -1 \quad x_3y_3' \quad y_3y_3' \quad y_3' \\
-x_4 \quad -y_4 \quad -1 \quad 0 \quad 0 \quad 0 \quad x_4x_4' \quad y_4x_4' \quad x_4' \\
0 \quad 0 \quad 0 \quad -x_4 \quad -y_4 \quad -1 \quad x_4y_4' \quad y_4y_4' \quad y_4' \\
\end{bmatrix} \begin{bmatrix}h1 \\ h2 \\ h3 \\ h4 \\ h5 \\ h6 \\ h7 \\ h8 \\h9 \end{bmatrix} = 0$</p>
<p>We add an extra constraint $\|H\|=1$ (the norm of the coefficient vector) to avoid the obvious all-zero solution for $H$. It is then easy to take the SVD $P = USV^\top$ and select the singular vector of $V$ corresponding to the smallest singular value (the last one, with the singular values in decreasing order) as the solution for $H$.</p>
<p>Note that this gives you a DLT (direct linear transform) homography that minimizes algebraic error. This error is not geometrically meaningful and so the computed homography may not be as good as you expect. One typically applies nonlinear least squares with a better cost function (e.g. symmetric transfer error) to improve the homography.</p>
<p>Once you have your homography matrix $H$, you can compute the projected coordinates of any point $p(x, y)$ such as:</p>
<p>$\begin{bmatrix}
\lambda x' \\
\lambda y' \\
\lambda
\end{bmatrix} =
\begin{bmatrix}
h_{11} & h_{12} & h_{13} \\
h_{21} & h_{22} & h_{23} \\
h_{31} & h_{32} & h_{33}
\end{bmatrix}.
\begin{bmatrix}
x \\
y \\
1
\end{bmatrix}$</p>
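<p>For concreteness, here is a minimal NumPy sketch of the DLT construction above (the function names are my own): it stacks the $2\times9$ blocks into $P$, takes the SVD, and reshapes the last right-singular vector into $H$.</p>

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate H from four (or more) correspondences (x, y) -> (x', y'):
    stack the 2x9 blocks into P and take the null vector of P via SVD."""
    rows = []
    for (x, y), (xp, yp) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, x * xp, y * xp, xp])
        rows.append([0, 0, 0, -x, -y, -1, x * yp, y * yp, yp])
    _, _, Vt = np.linalg.svd(np.array(rows))
    # Last right-singular vector: the unit-norm minimizer of |P h|.
    return Vt[-1].reshape(3, 3)

def project(H, x, y):
    """Apply H to (x, y) and divide out the homogeneous scale."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

<p>With exact correspondences $P$ has rank 8, so the null vector is recovered exactly (up to scale); with noisy points this gives the algebraic least-squares solution discussed above.</p>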
| <p>Rather than taking the SVD of the system, there is an easier way to solve it: since the last component of the $H$ vector, $h_9 = h_{33}$, is taken to be $1$, we can add one more constraint equation ($h_9 = 1$) to this set, which turns $P$ into a $9\times 9$ matrix and makes the system non-homogeneous.</p>
<p>$PH = \begin{bmatrix}
-x_1 & -y_1 & -1 & 0 & 0 & 0 & x_1x_1' & y_1x_1' & x_1' \\
0 & 0 & 0 & -x_1 & -y_1 & -1 & x_1y_1' & y_1y_1' & y_1' \\
-x_2 & -y_2 & -1 & 0 & 0 & 0 & x_2x_2' & y_2x_2' & x_2' \\
0 & 0 & 0 & -x_2 & -y_2 & -1 & x_2y_2' & y_2y_2' & y_2' \\
-x_3 & -y_3 & -1 & 0 & 0 & 0 & x_3x_3' & y_3x_3' & x_3' \\
0 & 0 & 0 & -x_3 & -y_3 & -1 & x_3y_3' & y_3y_3' & y_3' \\
-x_4 & -y_4 & -1 & 0 & 0 & 0 & x_4x_4' & y_4x_4' & x_4' \\
0 & 0 & 0 & -x_4 & -y_4 & -1 & x_4y_4' & y_4y_4' & y_4' \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
\end{bmatrix} \begin{bmatrix}h1 \\ h2 \\ h3 \\ h4 \\ h5 \\ h6 \\ h7 \\ h8 \\h9 \end{bmatrix} = \begin{bmatrix}0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\1 \end{bmatrix}$</p>
<p>This is an $Ax=b$ kind of problem which can be solved much faster than computing the SVD.</p>
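<p>A sketch of this non-homogeneous variant (again with names of my own), building the $9\times 9$ system and solving it directly:</p>

```python
import numpy as np

def dlt_homography_linear(src, dst):
    """Fix h9 = h33 = 1 and solve the resulting 9x9 system A h = b directly."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, x * xp, y * xp, xp]); b.append(0.0)
        A.append([0, 0, 0, -x, -y, -1, x * yp, y * yp, yp]); b.append(0.0)
    A.append([0, 0, 0, 0, 0, 0, 0, 0, 1]); b.append(1.0)  # extra row: h9 = 1
    h = np.linalg.solve(np.array(A), np.array(b))
    return h.reshape(3, 3)
```

<p>One caveat: fixing $h_9 = 1$ fails for homographies whose true $h_{33}$ is zero, a case the SVD formulation handles without trouble.</p>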
|
logic | <p>I've heard that within the field of intuitionistic mathematics, all real functions are continuous (i.e. there are no discontinuous functions). Is there a good book where I can find a proof of this theorem?</p>
| <p>Brouwer proved (to his own satisfaction) that every function from $\mathbb{R}$ to $\mathbb{R}$ is continuous. Modern constructive systems are rarely able to prove this, but they are <em>consistent</em> with it - they are unable to disprove it. These systems are also (almost always) consistent with classical mathematics, in which there are plenty of discontinuous functions from $\mathbb{R}$ to $\mathbb{R}$. One place you can find something about this is the classic <em>Varieties of Constructive Mathematics</em> by Bridges and Richman. </p>
<p>The same phenomenon occurs in classical computable analysis, by the way. Any <em>computable</em> function $f$ from $\mathbb{R}$ to $\mathbb{R}$ which is well defined with respect to equality of reals (and thus is a function from $\mathbb{R}$ to $\mathbb{R}$ in the normal sense) is continuous. In particular the characteristic function of a singleton real is never computable. This would be covered in any computable analysis text, such as the one by Weihrauch. </p>
<p>Here is a very informal argument that has a grain of truth. It should appear naively correct that if you can "constructively" prove that something is a function from $\mathbb{R}$ to $\mathbb{R}$, then you can compute that function. So the classical fact that every computable real function is continuous suggests that anything that you can constructively prove to be a real function will also be continuous. This suggests that you cannot prove constructively that any classically discontinuous function is actually a function. The grain of truth is that there are ways of making this argument rigorous, such as the method of "realizability". </p>
| <p>There exists a Grothendieck topos $\mathcal{E}$ in which the statement "every function from the Dedekind real numbers to the Dedekind real numbers is continuous" is true in the internal logic. To be a little more precise, in the topos $\mathcal{E}$, there is an object $R$ of Dedekind cuts of rational numbers, such that
$$\forall f \in R^R . \, \forall \epsilon \in R . \, \forall x \in R . \, \epsilon > 0 \Rightarrow \exists \delta \in R . \, \forall x' \in R . \, \left| x - x' \right| < \delta \Rightarrow \left| f (x) - f (x') \right| < \epsilon$$
holds when interpreted using Kripke–Joyal semantics in $\mathcal{E}$. The topos $\mathcal{E}$ is constructed as follows: we take a full subcategory $\mathbb{T}$ of the category of topological spaces $\textbf{Top}$ such that</p>
<ul>
<li>$\mathbb{T}$ is small,</li>
<li>the real line $\mathbb{R}$ is in $\mathbb{T}$, </li>
<li>for each $X \in \operatorname{ob} \mathbb{T}$ and each open subset $U$ of $X$, we have $U \in \operatorname{ob} \mathbb{T}$, and</li>
<li>$\mathbb{T}$ is closed under finite products in $\textbf{Top}$;</li>
</ul>
<p>and we set $\mathcal{E}$ to be the category of sheaves on $\mathbb{T}$ equipped with the open immersion topology. One can then show that the object of internal Dedekind real numbers in $\mathcal{E}$ is (isomorphic to) the representable sheaf $\mathbb{T}(-, \mathbb{R})$, and with more work, one finds that Brouwer's "theorem" holds in $\mathcal{E}$. The details of the construction and the proof of validity can be found in [<em>Sheaves in geometry and logic</em>, Ch. VI, §9], though I have not understood it in full.</p>
|
logic | <p>What books/notes should one read to learn model theory? As I do not have much background in logic it would be ideal if such a reference does not assume much background in logic. Also, as I am interested in arithmetic geometry, is there a reference with a view towards such a topic?</p>
| <p>I really like <em>Model Theory: An Introduction</em> by David Marker. It starts from scratch and has a lot of algebraic examples. </p>
| <p>For a free alternative, Peter L. Clark has posted his notes <a href="http://alpha.math.uga.edu/%7Epete/MATH8900.html" rel="nofollow noreferrer">Introduction to Model Theory</a> on his website. He says no prior knowledge of logic is assumed and the applications are primarily in the areas of Algebra, Algebraic Geometry and Number Theory.</p>
|
probability | <p>I found this problem and I've been stuck on how to solve it.</p>
<blockquote>
<p>A miner is trapped in a mine containing 3 doors. The first door leads to a tunnel that will take him to safety after 3 hours of travel. The second door leads to a tunnel that will return him to the mine after 5 hours of travel. The third door leads to a tunnel that will return him to the mine after 7 hours. If we assume that the miner is at all times equally likely to choose any one of doors, what is the expected length of time until he reaches safety?</p>
</blockquote>
<p>The fact that the miner could be stuck in an infinite loop has confused me.
Any help is greatly appreciated.</p>
| <p>Let $T$ be the time spent in the mine. Conditioning on the first door the miner chooses, we get
$$ \mathbb{E}[T]=\frac{1}{3}\cdot3+\frac{1}{3}(5+\mathbb{E}[T])+\frac{1}{3}(7+\mathbb{E}[T])$$
so
$$ \mathbb{E}[T]=5+\frac{2}{3}\mathbb{E}[T].$$
If $\mathbb{E}[T]$ is finite, then we can conclude that $\mathbb{E}[T]=15$.</p>
<p>To see that $\mathbb{E}[T]$ is finite, let $X$ be the number of times the miner chooses a door. Then $ \mathbb{P}(X\geq n)=(\frac{2}{3})^{n-1}$ for $n=1,2,3,\dots$, hence
$$ \mathbb{E}[X]=\sum_{n=1}^{\infty}\mathbb{P}(X\geq n)=\sum_{n=1}^{\infty}\Big(\frac{2}{3}\Big)^{n-1}<\infty$$
And since $T\leq 7(X-1)+3$, we see that $\mathbb{E}[T]<\infty$ as well.</p>
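<p>The answer $\mathbb{E}[T]=15$ is easy to corroborate with a quick Monte Carlo simulation (a sketch, not part of the proof):</p>

```python
import random

def escape_time(rng):
    """Simulate one trapped miner; return total hours until he reaches safety."""
    t = 0.0
    while True:
        door = rng.randrange(3)      # all three doors equally likely, every time
        if door == 0:
            return t + 3             # 3-hour tunnel to safety
        t += 5 if door == 1 else 7   # 5- or 7-hour tunnel back to the mine

rng = random.Random(0)
n = 200_000
estimate = sum(escape_time(rng) for _ in range(n)) / n  # close to 15
```

<p>With $2\times 10^5$ trials the estimate lands within a few tenths of $15$.</p>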
| <p>Let $t$ be the expected time to get out. If he takes the second or third door he returns to the same position as the start, so the expected time after he returns is $t$. Therefore we have $$t=\frac 13(3) + \frac 13(t+5)+\frac 13(t+7)\\\frac 13t=5
\\t=15$$</p>
|
logic | <ul>
<li><p>If <span class="math-container">$p\implies q$</span> ("<span class="math-container">$p$</span> implies <span class="math-container">$q$</span>"), then <span class="math-container">$p$</span> is a <strong>sufficient</strong> condition for <span class="math-container">$q$</span>.</p></li>
<li><p>If <span class="math-container">$\lnot p\implies \lnot q$</span> ("not <span class="math-container">$p$</span> implies not <span class="math-container">$q$</span>"), then <span class="math-container">$p$</span> is a <strong>necessary</strong> condition for <span class="math-container">$q$</span>.</p></li>
</ul>
<p>I don't understand what sufficient and necessary mean in this case. How do you know which one is necessary and which one is sufficient?</p>
| <p>Suppose first that $p$ implies $q$. Then knowing that $p$ is true is <strong>sufficient</strong> (i.e., enough evidence) for you to conclude that $q$ is true. It’s possible that $q$ could be true even if $p$ weren’t, but having $p$ true ensures that $q$ is also true.</p>
<p>Now suppose that $\text{not-}p$ implies $\text{not-}q$. If you know that $p$ is false, i.e., that $\text{not-}p$ is true, then you know that $\text{not-}q$ is true, i.e., that $q$ is false. Thus, in order for $q$ to be true, $p$ <strong>must</strong> be true: without that, you automatically get that $q$ is false. In other words, in order for $q$ to be true, it’s <strong>necessary</strong> that $p$ be true; you can’t have $q$ true while $p$ is false.</p>
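<p>A small sanity check: enumerating the four truth assignments confirms that "$p$ is sufficient for $q$" ($p \implies q$) is equivalent to its contrapositive $\lnot q \implies \lnot p$, and that "$p$ is necessary for $q$" ($\lnot p \implies \lnot q$) is the same as $q \implies p$. A sketch in Python:</p>

```python
def implies(a, b):
    """Material implication: a => b."""
    return (not a) or b

for p in (False, True):
    for q in (False, True):
        # "p is sufficient for q": p => q, equivalently not q => not p
        assert implies(p, q) == implies(not q, not p)
        # "p is necessary for q": not p => not q, equivalently q => p
        assert implies(not p, not q) == implies(q, p)
```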
| <p>I always think of it in terms of sets.</p>
<p><img src="https://i.sstatic.net/6S8qC.jpg" alt="enter image description here"></p>
<p>In the picture above, for an element to be purple, it's necessary to be red, but it is not sufficient.</p>
<p>The same holds for the blue set, to be in the blue set is a necessary condition in order to be purple, but it is not enough, it's not sufficient.</p>
<p>A sufficient condition is stronger than a necessary condition. If you tell me that you have a red or blue element I can't say for sure if it is in the purple set, but if you tell me that you have a purple element I now for sure that it is in the red and blue sets.</p>
|
logic | <p>Then otherwise the sentence <em>"It is not possible for someone to find a counter-example"</em> would be a proof.</p>
<p>I mean, are there some hypotheses that are false but the counter-example is somewhere we cannot find even if we have super computers.</p>
<p>Sorry, if this is a silly question. </p>
<p>Thanks a lot.</p>
| <p>A standard example of this is <a href="http://en.wikipedia.org/wiki/Halting_problem">the halting problem</a>, which states essentially:</p>
<blockquote>
<p>There is no program which can always determine whether another program will eventually terminate. </p>
</blockquote>
<p>Thus there must be some program which does not terminate, but no proof that it does not terminate exists. Otherwise for any program, we could run the program and at the same time search for a proof that it does not terminate, and either the program would eventually terminate or we would find such a proof. </p>
<p>To match the phrasing of your question, this means that the statement:</p>
<blockquote>
<p>If a program does not terminate, there is some proof of this fact.</p>
</blockquote>
<p>is false, but no counterexample can be found.</p>
| <p>I actually like this one:</p>
<p>There are uncountably many real numbers. However, given that every specification of a specific real number (be it by digits, by an algorithm, or even a description of the number in plain English) is ultimately a finite string of finitely many symbols, there are only countably many descriptions of real numbers.</p>
<p>A straightforward formalization (but not the only possible, nor the most general one) of that idea is to model the descriptions as natural numbers (think e.g. of storing the description in a file, and then interpreting the file as a natural number), and then having a function from the natural numbers (that is, the descriptions) to subsets of the real numbers (namely the set of real numbers which fit the description). A description which uniquely describes a real number would, in this model, be a natural number which maps to a one-element subset of the real numbers; the single element of that subset is the real number being described. Since there are only countably many natural numbers (by definition), they can only map to at most countably many one-element subsets, whose union therefore only contains countably many real numbers. Since there are uncountably many real numbers, there must be uncountably many numbers not in this set.</p>
<p>Therefore in this formalization, for any given mapping, almost every real number cannot be individually specified by any description. Therefore there exist uncountably many counterexamples to the claim "you can uniquely specify any real number".</p>
<p>Of course I cannot give a counterexample, because to give a counterexample, I'd have to specify an individual real number violating the claim, but if I could specify it, it would not violate the claim and therefore not be a counterexample.</p>
<p>Note that in the first version, I omitted that possible formalization. As I learned from the comments and especially <a href="https://mathoverflow.net/a/44129/1682">this MathOverflow post</a> linked from them, in the original generality the argument is wrong.</p>
|
matrices | <p>How to prove $\text{Rank}(AB)\leq \min(\text{Rank}(A), \text{Rank}(B))$?</p>
| <p>Hint: Show that rows of $AB$ are linear combinations of rows of $B$. Transpose this hint.</p>
| <p>I used a way to prove this which may not be the most concise, but it feels very intuitive to me.
Each column of the matrix $AB$ is a linear combination of the columns of $A$, with the entries of the corresponding column of $B$ as the coefficients. So it looks like...
$$\boldsymbol{AB}=\begin{bmatrix}
& & & \\
a_1 & a_2 & ... & a_n\\
& & &
\end{bmatrix}
\begin{bmatrix}
& & & \\
b_1 & b_2 & ... & b_n\\
& & &
\end{bmatrix}
=
\begin{bmatrix}
& & & \\
\boldsymbol{A}b_1 & \boldsymbol{A}b_2 & ... & \boldsymbol{A}b_n\\
& & &
\end{bmatrix}$$
Each column $\boldsymbol{A}b_i$ lies in the column space of $\boldsymbol{A}$, so the column space of $\boldsymbol{AB}$ is contained in the column space of $\boldsymbol{A}$. Therefore the rank of $AB$ is immediately capped by the rank of $A$, no matter what $B$ is: $rank(AB) \leq rank(A)$.</p>
<p>On the other hand, any linear dependence among the columns of $B$ carries over to the corresponding columns of $AB$: if $\sum_i c_i b_i = 0$, then $\sum_i c_i \boldsymbol{A}b_i = 0$. So $AB$ cannot have more linearly independent columns than $B$ does, and therefore $rank(AB) \leq rank(B)$ as well.</p>
<p>Put these two ideas together: the rank of $AB$ is capped by the rank of $A$ or $B$, whichever is smaller. Therefore, $rank(AB) \leq min(rank(A), rank(B))$.</p>
<p>Hope this helps you!</p>
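<p>The inequality is also easy to spot-check numerically with NumPy's <code>matrix_rank</code> (a quick sketch):</p>

```python
import numpy as np

# A has rank 1, B has rank 2; rank(AB) is capped by the smaller of the two.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # rank 1: second row is twice the first
B = np.array([[3.0, 1.0],
              [0.0, 2.0]])   # rank 2 (non-singular)
assert np.linalg.matrix_rank(A @ B) == 1

# The inequality holds for random rectangular matrices as well:
rng = np.random.default_rng(0)
for _ in range(100):
    A = rng.standard_normal((4, 3))
    B = rng.standard_normal((3, 5))
    r_ab = np.linalg.matrix_rank(A @ B)
    assert r_ab <= min(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))
```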
|
probability | <p><span class="math-container">$\newcommand{\F}{\mathcal{F}} \newcommand{\powset}[1]{\mathcal{P}(#1)}$</span>
I am reading lecture notes which contradict my understanding of random variables. Suppose we have a probability space <span class="math-container">$(\Omega, \mathcal{F}, Pr)$</span>, where </p>
<ul>
<li><p><span class="math-container">$\Omega$</span> is the set of outcomes</p></li>
<li><p><span class="math-container">$\F \subseteq \powset{\Omega}$</span> is the collection of events, a <span class="math-container">$\sigma$</span>-algebra</p></li>
<li><p><span class="math-container">$\Pr:\Omega\to[0,1]$</span> is the mapping of outcomes to their probabilities.</p></li>
</ul>
<p>If we take the standard definition of a random variable <span class="math-container">$X$</span>, it is actually a function from the sample space to real values, i.e. <span class="math-container">$X:\Omega \to \mathbb{R}$</span>.</p>
<p>What now confuses me is the precise definition of the term <em>support</em>. </p>
<p><a href="https://en.wikipedia.org/wiki/Support_%28mathematics%29" rel="noreferrer">According to Wikipedia</a>:</p>
<blockquote>
<p>the support of a function is the set of points where the function is
not zero valued.</p>
</blockquote>
<p>Now, applying this definition to our random variable <span class="math-container">$X$</span>, these <a href="http://www.math.fsu.edu/~paris/Pexam/3-Random%20Variables.pdf" rel="noreferrer">lectures notes</a> say:</p>
<blockquote>
<p>Random Variables – A random variable is a real valued function defined
on the sample space of an experiment. Associated with each random
variable is a probability density function (pdf) for the random
variable. The sample space is also called the support of a random
variable.</p>
</blockquote>
<p>I am not entirely convinced with the line <em>the sample space is also callled the support of a random variable</em>. </p>
<p>Why would <span class="math-container">$\Omega$</span> be the support of <span class="math-container">$X$</span>? What if the random variable <span class="math-container">$X$</span> so happened to map some element <span class="math-container">$\omega \in \Omega$</span> to the real number <span class="math-container">$0$</span>, then that element would not be in the support?</p>
<p>What is even more confusing is, when we talk about support, do we mean that of <span class="math-container">$X$</span> or that of the distribution function <span class="math-container">$\Pr$</span>? </p>
<p><a href="https://math.stackexchange.com/questions/416035/support-vs-range-of-a-random-variable">This answer says</a> that:</p>
<blockquote>
<p>It is more accurate to speak of the support of the distribution than
that of the support of the random variable.</p>
</blockquote>
<p>Do we interpret the <em>support</em> to be</p>
<ul>
<li>the set of outcomes in <span class="math-container">$\Omega$</span> which have a non-zero probability, </li>
<li>the set of values that <span class="math-container">$X$</span> can take with non-zero probability?</li>
</ul>
<p>I think being precise is important, although my literature does not seem very rigorous.</p>
| <blockquote>
<p>I am not entirely convinced with the line <em>the sample space is also called the support of a random variable</em> </p>
</blockquote>
<p>That looks quite wrong to me.</p>
<blockquote>
<p>What is even more confusing is, when we talk about support, do we mean that of $X$ or that of the distribution function $Pr$?</p>
</blockquote>
<p>In rather informal terms, the "support" of a random variable $X$ is defined as the support (in the function sense) of the density function $f_X(x)$. </p>
<p>I say, in rather informal terms, because the density function is a quite intuitive and practical concept for dealing with probabilities, but no so much when speaking of probability in general and formal terms. For one thing, it's not a proper function for "discrete distributions" (again, a practical but loose concept). </p>
<p>In more formal/strict terms, the comment of Stefan fits the bill.</p>
<pre><code>Do we interpret the support to be
- the set of outcomes in Ω which have a non-zero probability,
- the set of values that X can take with non-zero probability?
</code></pre>
<p>Neither, actually. Consider a random variable that has a uniform density in $[0,1]$, with $\Omega = \mathbb{R}$.
Then the support is the full interval $[0,1]$ - which is a subset of $\Omega$. But then, of course, a point such as $x=1/2$ belongs to the support, even though the probability that $X$ takes exactly this value is zero.</p>
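<p>That distinction can be made concrete with a tiny computation: for $X$ uniform on $[0,1]$, every interval around $x=1/2$ has positive probability, while the single point (and any small ball around a point outside $[0,1]$) has probability zero. A sketch:</p>

```python
def uniform01_prob(a, b):
    """P(a < X < b) for X ~ Uniform[0, 1]: the length of (a, b) intersected with [0, 1]."""
    lo, hi = max(a, 0.0), min(b, 1.0)
    return max(hi - lo, 0.0)

# Every ball around 1/2 has positive probability, so 1/2 is in the support...
for r in (0.5, 1e-3, 1e-9):
    assert uniform01_prob(0.5 - r, 0.5 + r) > 0

# ...even though the single point x = 1/2 has probability zero:
assert uniform01_prob(0.5, 0.5) == 0.0

# A point outside [0, 1] has a whole ball of probability zero, so it is not in the support:
assert uniform01_prob(1.9, 2.1) == 0.0
```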
| <h1>TL;DR</h1>
<p>The support of a random variable <span class="math-container">$X$</span> can be defined as the smallest closed set <span class="math-container">$R_X \in \mathcal{B}$</span> such that its probability is 1, as Did pointed out in their comment. An alternative definition is the one given by Stefan Hansen in his comment: the set of points in <span class="math-container">$\mathbb{R}$</span> around which any ball (i.e. open interval in 1-D) with nonzero radius has a nonzero probability. (See the section "Support of a random variable" below for a proof of the equivalence of these definitions.)</p>
<p>Intuitively, if any neighbourhood around a point, no matter how small, has a nonzero probability, then that point is in the support, and vice-versa.</p>
<hr />
<br>
<p>I'll start from the beginning to make sure we're using the same definitions.</p>
<h1>Preliminary definitions</h1>
<h2>Probability space</h2>
<p><span class="math-container">$\newcommand{\A}{\mathcal{A}} \newcommand{\powset}[1]{\mathcal{P}(#1)} \newcommand{\R}{\mathbb{R}} \newcommand{\deq}{\stackrel{\scriptsize def}{=}} \newcommand{\N}{\mathbb{N}}$</span>
Let <span class="math-container">$(\Omega, \A, \Pr)$</span> be a probability space, defined as follows:</p>
<ul>
<li><p><span class="math-container">$\Omega$</span> is the set of <strong>outcomes</strong></p>
</li>
<li><p><span class="math-container">$\A \subseteq \powset{\Omega} $</span> is the collection of <strong>events</strong>, a <a href="https://en.wikipedia.org/wiki/Sigma-algebra#Definition" rel="nofollow noreferrer"><span class="math-container">$\sigma$</span>-algebra</a></p>
</li>
<li><p><span class="math-container">$\Pr\colon\ \mathbf{\A}\to[0,1]$</span> is the <strong>mapping of events to their probabilities</strong>.
It has to satisfy some properties:</p>
<ul>
<li><span class="math-container">$\Pr(\Omega) = 1$</span> (we know <span class="math-container">$\Omega \in \A$</span> since <span class="math-container">$\A$</span> is a <span class="math-container">$\sigma$</span>-algebra of <span class="math-container">$\Omega$</span>)</li>
<li>has to be <a href="https://en.wikipedia.org/wiki/Sigma_additivity#%CF%83-additive_set_functions" rel="nofollow noreferrer">countably additive</a></li>
</ul>
</li>
</ul>
<br>
<h2>Random variable</h2>
<p>A random variable <span class="math-container">$X$</span> is defined as a map <span class="math-container">$X\colon\; \Omega \to \R$</span> such that, for any <span class="math-container">$x\in\R$</span>, the set <span class="math-container">$\{\omega \in \Omega \mid X(\omega) \le x\}$</span> is an element of <span class="math-container">$\A$</span>, ergo, an element of <span class="math-container">$\Pr$</span>'s domain to which a probability can be assigned.</p>
<p>We can think of <span class="math-container">$X$</span> as a "realisation" of <span class="math-container">$\Omega$</span>, in that it assigns a real number to each outcome in <span class="math-container">$\Omega$</span>. Intuitively, this condition means that we are assigning numbers to outcomes in an order such that the set of outcomes whose assigned number is less than a certain threshold (think of cutting the real number line at the threshold and forming the set of outcomes whose number falls
on or to the left of that) is always one of the events in <span class="math-container">$\A$</span>, meaning we can assign it a probability.</p>
<p>This is necessary in order to define the following concepts.</p>
<h3>Cumulative Distribution Function of a random variable</h3>
<p>The probability distribution function (or <strong>cumulative distribution function</strong>) of a random variable <span class="math-container">$X$</span> is defined as the map
<span class="math-container">$$
\begin{align}
F_X \colon \quad \R \ &\to\ [0, 1] \\
x\ &\mapsto\ \Pr(X \le x) \deq \Pr(X^{-1}(I_x))
\end{align}
$$</span></p>
<p>where <span class="math-container">$I_x \deq (-\infty, x]$</span>. (NB: <span class="math-container">$X^{-1}$</span> denotes preimage, not inverse; <span class="math-container">$X$</span> might well be non-injective.)</p>
<p>For notational clarity, define the following:</p>
<ul>
<li><span class="math-container">$\Omega_{\le x} \deq X^{-1}((-\infty, x]) = X^{-1}(I_x)$</span></li>
<li><span class="math-container">$\Omega_{> x} \deq X^{-1}((x, +\infty)) = X^{-1}(\overline{I_x}) = \overline{\Omega_{\le x}}$</span> where <span class="math-container">$\overline{\phantom{\Omega}}$</span> denotes set complement (in <span class="math-container">$\R$</span> or <span class="math-container">$\Omega$</span>, depending on the context)</li>
<li><span class="math-container">$\Omega_{< x} \deq X^{-1}((-\infty, x)) = \displaystyle\bigcup_{n\in\N} X^{-1} \left(I_{x-\frac{1}{n}}\right)$</span></li>
<li><span class="math-container">$\Omega_{=x} \deq X^{-1}(x) = \Omega_{\le x} \setminus \Omega_{< x}$</span></li>
</ul>
<p>we know all of these are still in <span class="math-container">$\A$</span> since <span class="math-container">$\A$</span> is a <span class="math-container">$\sigma$</span>-algebra.</p>
<p>We can see that</p>
<ul>
<li><span class="math-container">$\Pr(X > x) \deq \Pr(\Omega_{>x}) = \Pr(\overline{\Omega_{\le x}}) = 1 - \Pr(\Omega_{\le x}) = 1 - F_X(x)$</span></li>
</ul>
<br>
<ul>
<li><span class="math-container">$\Pr(X < x) \deq \Pr(\Omega_{<x}) = \Pr\left(\displaystyle\bigcup_{n\in\N} X^{-1} \left(I_{x-\frac{1}{n}}\right)\right)$</span> <span class="math-container">$= \lim\limits_{n \to \infty} \Pr(X \le x - \frac{1}{n}) = \lim\limits_{n \to \infty} F_X(x - \frac{1}{n}) = \lim\limits_{t \to x^-} F_X(t) \deq F_X(x^-)$</span>
<br><br>
since <span class="math-container">$X^{-1} \left(I_{x-\frac{1}{n}}\right) \subseteq X^{-1} \left(I_{x-\frac{1}{n+1}}\right)$</span> for all <span class="math-container">$n\in\N$</span>.</li>
</ul>
<br>
<ul>
<li><span class="math-container">$\Pr(X = x) \deq \Pr(\Omega_{=x}) = \Pr(\Omega_{\le x} \setminus \Omega_{<x})= \Pr(\Omega_{\le x}) - \Pr(\Omega_{<x}) = F_X(x) - F_X(x^-)$</span></li>
</ul>
<p>and so forth.</p>
<p>Note that the limit that defines <span class="math-container">$F_X(x^-)$</span> always exists because <span class="math-container">$F_X$</span> is nondecreasing (since if <span class="math-container">$x< y$</span>, then <span class="math-container">$\Omega_{\le x} \subseteq \Omega_{\le y}$</span> and <span class="math-container">$\Pr$</span> is <span class="math-container">$\sigma$</span>-additive) and bounded above (by <span class="math-container">$1$</span>), so the monotone convergence theorem guarantees that the images under <span class="math-container">$F_X$</span> of <em>any</em> nondecreasing sequence approaching <span class="math-container">$x$</span> from the left will also converge, and thus the continuous limit <span class="math-container">$\lim_{t \to x^-} F_X(t)$</span> exists.</p>
<br>
<h2>Probability measure on <span class="math-container">$\R$</span> by <span class="math-container">$X$</span></h2>
<p>The mapping defined by <span class="math-container">$X$</span> is sufficient to uniquely define a probability measure on <span class="math-container">$\R$</span>; that is, a map
<span class="math-container">$$
\begin{align}
P_X \colon \quad \mathcal{B} \subset \powset{\R} \ &\to \ [0, 1]\\
A \ &\mapsto \ \Pr(X \in A) \deq \Pr(X^{-1}(A))
\end{align}
$$</span>
that assigns to any set <span class="math-container">$A \in \mathcal{B}$</span> the probability of the corresponding event in <span class="math-container">$\A$</span>.</p>
<p>Here <span class="math-container">$\mathcal{B}$</span> is the <a href="https://en.wikipedia.org/wiki/Borel_set" rel="nofollow noreferrer">Borel <span class="math-container">$\sigma$</span>-algebra</a> in <span class="math-container">$\R$</span>, which is, loosely speaking, the smallest <span class="math-container">$\sigma$</span>-algebra containing all of the semi-intervals <span class="math-container">$(-\infty, x]$</span>. The reason why <span class="math-container">$P_X$</span> is defined only on those sets is because in our definition we only required <span class="math-container">$X^{-1}(A) \in \A$</span> to be true for the semi-intervals of the form <span class="math-container">$A = (-\infty, x]$</span>; thus <span class="math-container">$X^{-1}(A)$</span> is an element of <span class="math-container">$\A$</span> only when <span class="math-container">$A$</span> is "generated" by those semi-intervals, their complements, and countable unions/intersections thereof (according to the rules of a <span class="math-container">$\sigma$</span>-algebra).</p>
<br>
<hr />
<br>
<h1>Support of a random variable</h1>
<h2>Formal definition</h2>
<p>Formally, the <strong>support</strong> of <span class="math-container">$X$</span> can be defined as the smallest closed set <span class="math-container">$R_X \in \mathcal{B}$</span> such that <span class="math-container">$P_X(R_X) = 1$</span>, as Did pointed out in their comment.</p>
<p>An alternative but equivalent definition is the one given by Stefan Hansen in his comment:</p>
<blockquote>
<p>The support of a random variable <span class="math-container">$X$</span> is the set <span class="math-container">$\{x\in \R \mid P_X(B(x,r))>0, \text{ for all } r>0\}$</span> where <span class="math-container">$B(x,r)$</span> denotes the ball with center at <span class="math-container">$x$</span> and radius <span class="math-container">$r$</span>. In particular, the support is a subset of <span class="math-container">$\R$</span>.</p>
</blockquote>
<p>(This can be generalized as-is to random variables with values in <span class="math-container">$\R^n$</span>, but I'll stick to <span class="math-container">$\R$</span> as that's how I defined random variables.)
The equivalence can be proven as follows:</p>
<blockquote>
<p><strong>Proof</strong> <br>
Let <span class="math-container">$R_X$</span> be the smallest closed set in <span class="math-container">$\mathcal{B}$</span> such that <span class="math-container">$P_X(R_X) = 1$</span>.
Since <span class="math-container">$\R \setminus {R_X}$</span> is open, for every <span class="math-container">$x$</span> outside <span class="math-container">$R_X$</span> there exists a radius <span class="math-container">$r\in\R_{>0}$</span> such that the open ball <span class="math-container">$B(x, r)$</span> lies completely outside of <span class="math-container">$R_X$</span>.
That, in turn, implies that <span class="math-container">$P_X(B(x, r)) = 0$</span>—otherwise, if this were strictly positive, <span class="math-container">$P_X(R_X \cup B(x, r)) = P_X(R_X) + P_X(B(x, r)) > P_X(R_X) = 1$</span>, a contradiction.</p>
<p>Conversely, let <span class="math-container">$x \in \R$</span> and suppose <span class="math-container">$P_X(B(x, r)) = 0$</span> for some <span class="math-container">$r\in\R_{>0}$</span>. Then <span class="math-container">$B(x, r)$</span> lies completely outside <span class="math-container">$R_X$</span>, and, in particular, <span class="math-container">$x$</span> is not in <span class="math-container">$R_X$</span>. Otherwise <span class="math-container">$R_X \setminus B(x, r)$</span> would be a closed set smaller than <span class="math-container">$R_X$</span> satisfying <span class="math-container">$P_X(R_X \setminus B(x, r)) = 1$</span>.</p>
<p>This proves <span class="math-container">$\R \setminus R_X = \{x\in\R \mid \exists r \in \R_{>0}\quad P_X(B(x, r)) = 0\}$</span>.
Negating the predicate, one gets <span class="math-container">$R_X = \{x\in\R \mid \forall r \in \R_{>0}\quad P_X(B(x, r)) > 0\}$</span>.</p>
</blockquote>
<p>But more often, different definitions are given.</p>
<br>
<h2>Alternative definition for discrete random variables</h2>
<p>A discrete random variable can be defined as a random variable <span class="math-container">$X$</span> such that <span class="math-container">$X(\Omega)$</span> is countable (either finite or countably infinite). Then, for a discrete random variable the support can be defined as</p>
<p><span class="math-container">$$R_X \deq \{x\in\R \mid \Pr(X = x) > 0\}\,.$$</span></p>
<p>Note that <span class="math-container">$R_X \subseteq X(\Omega)$</span> and thus <span class="math-container">$R_X$</span> is countable. We can prove this by proving its contrapositive:</p>
<blockquote>
<p>Suppose <span class="math-container">$x \in \R$</span> and <span class="math-container">$x \notin X(\Omega)$</span>. We can distinguish three cases: either <span class="math-container">$x < y$</span> <span class="math-container">$\forall y \in X(\Omega)$</span>, or <span class="math-container">$x > y$</span> <span class="math-container">$\forall y \in X(\Omega)$</span>, or neither.</p>
<p>Suppose <span class="math-container">$x < y$</span> <span class="math-container">$\forall y \in X(\Omega)$</span>. Then <span class="math-container">$\Pr(X = x) \le \Pr(X \le x) = \Pr(X^{-1}(I_x)) = \Pr(\emptyset) = 0$</span>, since <span class="math-container">$\forall \omega\in\Omega\ X(\omega) > x$</span>. Ergo, <span class="math-container">$x\notin R_X$</span>.</p>
<p>The case in which <span class="math-container">$x > y$</span> <span class="math-container">$\forall y \in X(\Omega)$</span> is analogous.</p>
<p>Suppose now <span class="math-container">$\exists y_1, y_2 \in X(\Omega)$</span> such that <span class="math-container">$y_1 < x < y_2$</span>. Since <span class="math-container">$x \notin X(\Omega)$</span>, we have <span class="math-container">$\Omega_{=x} = X^{-1}(\{x\}) = \emptyset$</span>, hence <span class="math-container">$\Pr(X=x) = \Pr(\emptyset) = 0$</span>. Ergo, <span class="math-container">$x\notin R_X$</span> in this case as well.</p>
</blockquote>
<br>
<h2>Alternative definition for continuous random variables</h2>
<p>Notice that for continuous random variables (that is, random variables whose distribution function is continuous on all of <span class="math-container">$\mathbb{R}$</span>, which in particular holds for absolutely continuous ones, i.e. those admitting a density <span class="math-container">$f_X$</span>), <span class="math-container">$\Pr(X = x) = 0$</span> for all <span class="math-container">$x\in \mathbb{R}$</span>, since by continuity <span class="math-container">$F_X(x^-) = F_X(x)$</span>. But that doesn't mean that the outcomes in <span class="math-container">$X^{-1}(\{x\})$</span> are "impossible", informally speaking. Thus, in this case, the support is defined as</p>
<p><span class="math-container">$$ R_X = \mathrm{closure}\left(\{x \in \mathbb{R} \mid f_X(x) > 0\}\right)\,,$$</span></p>
<p>which intuitively can be justified as being the set of points around which we can make an arbitrarily small interval on which the integral of the PDF is strictly positive.</p>
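<p>The discrete-case definition lends itself to a direct computation. A minimal sketch in Python (the pmf values below are made up for illustration):</p>

```python
# Support of a discrete random variable: the set of values carrying
# positive probability. The pmf below is a hypothetical example.
pmf = {0: 0.25, 1: 0.0, 2: 0.5, 3: 0.25}   # made-up Pr(X = x) values

support = {x for x, p in pmf.items() if p > 0}

assert support == {0, 2, 3}                 # the value 1 carries zero mass
assert abs(sum(pmf.values()) - 1.0) < 1e-12
print(sorted(support))                      # [0, 2, 3]
```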
|
differentiation | <p>I was wondering how we could use the limit definition </p>
<p>$$ \lim_{h \rightarrow 0} \frac{f(x+h)-f(x)}{h}$$</p>
<p>to find the derivative of $e^x$, I get to a point where I do not know how to simplify the indeterminate $\frac{0}{0}$. Below is what I have already done</p>
<p>$$\begin{align}
&\lim_{h \rightarrow 0} \frac{f(x+h)-f(x)}{h} \\
&\lim_{h \rightarrow 0} \frac{e^{x+h}-e^x}{h} \\
&\lim_{h \rightarrow 0} \frac{e^x (e^h-1)}{h} \\
&e^x \cdot \lim_{h \rightarrow 0} \frac{e^h-1}{h}
\end{align}$$</p>
<p>Where can I go from here? Because, the $\lim$ portion reduces to indeterminate when $0$ is subbed into $h$. </p>
| <p>Sometimes one <em>defines</em> $e$ as the (unique) number for which $$\tag 1 \lim_{h\to 0}\frac{e^h-1}{h}=1$$</p>
<p>In fact, there are two possible directions. </p>
<p>$(i)$ Start with the logarithm. You'll find out it is continuous monotone increasing on $\Bbb R_{>0}$, and its range is $\Bbb R$. It follows that $\log x=1$ for some $x$. We define this (unique) $x$ to be $e$. Some elementary properties will pop up, and one will be $$\tag 2 \lim\limits_{x\to 0}\frac{\log(1+x)}{x}=1$$</p>
<p>Upon defining $\exp x$ as the inverse of the logarithm, and after some rules, we will get to defining exponentiation of $a>0\in \Bbb R$ as $$a^x:=\exp(x\log a)$$</p>
<p>In said case, $e^x=\exp(x)$, as we expected. $(1)$ will then be an immediate consequence of $(2)$.</p>
<p>$(ii)$ We might define $$e=\sum_{k=0}^\infty \frac 1 {k!}$$ (or the equivalent Bernoulli limit). Then, we may define $$\exp x=\sum_{k=0}^\infty \frac{x^k}{k!}$$ Note $$\tag 3 \exp 1=e$$</p>
<p>We define the $\log$ as the inverse of the exponential function. We may derive certain properties of $\exp x$. The most important ones would be $$\exp(x+y)=\exp x\exp y$$ $$\exp'=\exp$$ $$\exp 0 =1$$</p>
<p>In particular, we have that $\log e=1$ by $(3)$. We might then define general exponentiation yet again by $$a^x:=\exp(x\log a)$$</p>
<p>Note then that again $e^x=\exp x$. We can prove $(1)$ easily recurring to the series expansion we used.</p>
<hr>
<p><strong>ADD</strong> As for the definition of the logarithm, there are a few ones. One is $$\log x=\int_1^x \frac{dt}{t}$$</p>
<p>Having defined exponentiation of real numbers using rationals by $$a^x=\sup\{a^r:r\in\Bbb Q\wedge r<x\}$$</p>
<p>we might also define $$\log x=\lim_{k\to 0}\frac{x^k-1}{k}$$</p>
<p>In any case, you should be able to prove that </p>
<p>$$\tag 1 \log xy = \log x +\log y $$
$$\tag 2 \log x^a = a\log x $$
$$\tag 3 1-\dfrac 1 x\leq\log x \leq x-1 $$
$$\tag 4\lim\limits_{x\to 0}\dfrac{\log(1+x)}{x}=1 $$
$$\tag 5\dfrac{d}{dx}\log x = \dfrac 1 x$$</p>
<p>What you want is a direct consequence of either $(4)$ or $(5)$, or of the first sentence in my post.</p>
<hr>
<p><strong>ADD</strong> We can prove that for $x \geq 0$ $$\lim\left(1+\frac xn\right)^n=\exp x$$ from definition $(ii)$. </p>
<p>First, note that $${n\choose k}\frac 1{n^k}=\frac{1}{{k!}}\frac{{n\left( {n - 1} \right) \cdots \left( {n - k + 1} \right)}}{{{n^k}}} = \frac{1}{{k!}}\left( {1 - \frac{1}{n}} \right)\left( {1 - \frac{2}{n}} \right) \cdots \left( {1 - \frac{{k - 1}}{n}} \right)$$</p>
<p>Since all the factors to the rightmost are $\leq 1$, we can claim $${n\choose k}\frac{1}{{{n^k}}} \leqslant \frac{1}{{k!}}$$</p>
<p>It follows that $${\left( {1 + \frac{x}{n}} \right)^n}=\sum\limits_{k = 0}^n {{n\choose k}\frac{{{x^k}}}{{{n^k}}}} \leqslant \sum\limits_{k = 0}^n {\frac{{{x^k}}}{{k!}}} $$</p>
<p>It follows that <strong>if</strong> the limit on the left exists, $$\lim {\left( {1 + \frac{x}{n}} \right)^n} \leqslant \lim \sum\limits_{k = 0}^n {\frac{{{x^k}}}{{k!}}} = \exp x$$</p>
<p>Note that the sums in $$\sum\limits_{k = 0}^n {{n\choose k}\frac{{{x^k}}}{{{n^k}}}} $$</p>
<p>are always increasing, which means that for $m\leq n$</p>
<p>$$\sum\limits_{k = 0}^m {{n\choose k}\frac{{{x^k}}}{{{n^k}}}}\leq \sum\limits_{k = 0}^n {{n\choose k}\frac{{{x^k}}}{{{n^k}}}}$$</p>
<p>By letting $n\to\infty$, since $m$ is fixed on the left side, and $$\mathop {\lim }\limits_{n \to \infty } \frac{1}{{k!}}\left( {1 - \frac{1}{n}} \right)\left( {1 - \frac{2}{n}} \right) \cdots \left( {1 - \frac{{k - 1}}{n}} \right) = \frac{1}{{k!}}$$</p>
<p>we see that <strong>if</strong> the limit exists, then for each $m$, we have $$\sum\limits_{k = 0}^m {\frac{{{x^k}}}{{k!}}} \leqslant \lim {\left( {1 + \frac{x}{n}} \right)^n}$$</p>
<p>But then, taking $m\to\infty$ $$\exp x = \mathop {\lim }\limits_{m \to \infty } \sum\limits_{k = 0}^m {\frac{{{x^k}}}{{k!}}} \leqslant \lim {\left( {1 + \frac{x}{n}} \right)^n}$$</p>
<p>It follows that <strong>if</strong> the limit exists $$\eqalign{
& \exp x \leqslant \lim_{n\to\infty} {\left( {1 + \frac{x}{n}} \right)^n} \cr
& \exp x \geqslant \lim_{n\to\infty} {\left( {1 + \frac{x}{n}} \right)^n} \cr}$$ which means $$\exp x = \lim_{n\to\infty} {\left( {1 + \frac{x}{n}} \right)^n}$$ Can you show the limit exists?</p>
<p>The case $x<0$ follows now from $$\displaylines{
{\left( {1 - \frac{x}{n}} \right)^{ - n}} = {\left( {\frac{n}{{n - x}}} \right)^n} \cr
= {\left( {\frac{{n - x + x}}{{n - x}}} \right)^n} \cr
= {\left( {1 + \frac{x}{{n - x}}} \right)^n} \cr} $$</p>
<p>using the squeeze theorem with $\lfloor n-x\rfloor$, $\lceil n-x\rceil$, and the fact $x\to x^{-1}$ is continuous. We care only for terms $n>\lfloor x\rfloor$ to make the above meaningful.</p>
<p><strong>NOTE</strong> If you're acquainted with $\limsup$ and $\liminf$; the above can be put differently as $$\eqalign{
& \exp x \leqslant \lim \inf {\left( {1 + \frac{x}{n}} \right)^n} \cr
& \exp x \geqslant \lim \sup {\left( {1 + \frac{x}{n}} \right)^n} \cr} $$ which means $$\lim \inf {\left( {1 + \frac{x}{n}} \right)^n} = \lim \sup {\left( {1 + \frac{x}{n}} \right)^n}$$ and proves the limit exists and is equal to $\exp x$.</p>
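<p>A quick numerical sanity check of the limit just established (not part of the proof, just an illustration in plain Python):</p>

```python
import math

# Check numerically that (1 + x/n)^n approaches exp(x) for several x,
# including a negative value (handled by the last part of the argument).
n = 10**7
errors = {x: abs((1 + x / n) ** n - math.exp(x)) for x in (0.0, 1.0, 2.5, -1.3)}

assert all(err < 1e-4 for err in errors.values())
print(max(errors.values()))
```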
| <p>First prove that $\lim_{h\to 0}\frac{\ln(h+1)}{h}=1$. Interchanging $\ln$ and $\lim$ below is justified because $f(x)=\ln x$ is continuous for $x>0$.</p>
<p>$$\lim_{h\to 0}\ln\left( (1+h)^{\frac{1}{h}} \right)=\ln\lim_{h\to 0}\left( (1+h)^{\frac{1}{h}} \right)=\ln e = 1$$</p>
<p>Now let $u=e^h-1$. We know $h\to 0\iff u\to 0$.</p>
<p>$$\lim_{h\to 0}\frac{e^h-1}{h}=\lim_{u\to 0}\frac{u}{\ln(u+1)}=1$$</p>
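<p>Both limits used here can be spot-checked numerically (an illustration, not a proof; <code>math.log1p</code> and <code>math.expm1</code> avoid cancellation error for tiny $h$):</p>

```python
import math

# Verify numerically that ln(1+h)/h -> 1 and (e^h - 1)/h -> 1 as h -> 0.
hs = (1e-4, 1e-6, -1e-6)
log_ratios = [math.log1p(h) / h for h in hs]
exp_ratios = [math.expm1(h) / h for h in hs]

assert all(abs(r - 1) < 1e-3 for r in log_ratios + exp_ratios)
print(log_ratios, exp_ratios)
```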
|
matrices | <p>Define</p>
<ul>
<li><p>The <em>algebraic multiplicity</em> of <span class="math-container">$\lambda_{i}$</span> to be the degree of the root <span class="math-container">$\lambda_i$</span> in the polynomial <span class="math-container">$\det(A-\lambda I)$</span>.</p>
</li>
<li><p>The <em>geometric multiplicity</em> to be the dimension of the eigenspace associated with the eigenvalue <span class="math-container">$\lambda_i$</span>.</p>
</li>
</ul>
<p>For example:
<span class="math-container">$\begin{bmatrix}1&1\\0&1\end{bmatrix}$</span> has root <span class="math-container">$1$</span> with algebraic multiplicity <span class="math-container">$2$</span>, but the geometric multiplicity <span class="math-container">$1$</span>.</p>
<p><strong>My Question</strong> : Why is the geometric multiplicity always bounded by algebraic multiplicity?</p>
<p>Thanks.</p>
| <p>Suppose the geometric multiplicity of the eigenvalue $\lambda$ of $A$ is $k$. Then we have $k$ linearly independent vectors $v_1,\ldots,v_k$ such that $Av_i=\lambda v_i$. If we change our basis so that the first $k$ elements of the basis are $v_1,\ldots,v_k$, then with respect to this basis we have
$$A=\begin{pmatrix}
\lambda I_k & B \\
0 & C
\end{pmatrix}$$
where $I_k$ is the $k\times k$ identity matrix. Since the characteristic polynomial is independent of choice of basis, we have
$$\mathrm{char}_A(x)=\mathrm{char}_{\lambda I_k}(x)\mathrm{char}_{C}(x)=(x-\lambda)^k\mathrm{char}_{C}(x)$$
so the algebraic multiplicity of $\lambda$ is at least $k$.</p>
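<p>A quick computational check of the question's example at <span class="math-container">$\lambda = 1$</span>; the <code>rank</code> function below is a small hypothetical helper (Gaussian elimination), not a library routine:</p>

```python
# For A = [[1,1],[0,1]] at lambda = 1: algebraic multiplicity 2
# (char. poly (x-1)^2), geometric multiplicity 1 = dim ker(A - I).

def rank(M, tol=1e-12):
    # rank via row reduction (sufficient for this tiny example)
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if abs(M[i][c]) > tol), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [v / M[r][c] for v in M[r]]
        for i in range(len(M)):
            if i != r:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A, lam, n = [[1.0, 1.0], [0.0, 1.0]], 1.0, 2
shifted = [[A[i][j] - (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
geometric = n - rank(shifted)   # nullity of A - lam*I

# char. poly of a 2x2 matrix is x^2 - tr(A) x + det(A) = (x-1)^2 here,
# so the algebraic multiplicity of lambda = 1 is 2.
algebraic = 2

print(geometric, algebraic)     # 1 2
assert geometric == 1 and geometric <= algebraic
```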
| <p>I will give more details to other answers.</p>
<p>For a specific <span class="math-container">$\lambda_i$</span>, the idea is to transform the matrix <span class="math-container">$A$</span> (n by n) to matrix <span class="math-container">$B$</span> which shares the same eigenvalues as <span class="math-container">$A$</span>. If <span class="math-container">$P_1=[v_1, \cdots, v_m]$</span> are eigenvectors of <span class="math-container">$\lambda_i$</span>, we extend it to a basis <span class="math-container">$P=[P_1, P_2]=[v_1, \cdots, v_m, \cdots, v_n]$</span>. Therefore, <span class="math-container">$AP=[\lambda_i P_1, AP_2]$</span>. In order to make <span class="math-container">$P^{-1}AP=B$</span>, we must have <span class="math-container">$AP=PB$</span>. Let <span class="math-container">$B= \begin{bmatrix} B_{11} & B_{12} \\
B_{21} & B_{22} \end{bmatrix}$</span>, then <span class="math-container">$P_1B_{11}+P_2B_{21}=\lambda_i P_1$</span> and <span class="math-container">$P_1B_{12}+P_2B_{22}=AP_2$</span>. Because <span class="math-container">$P$</span> is a basis, <span class="math-container">$B_{11}=\lambda_iI$</span>(m by m), <span class="math-container">$B_{21}=\mathbf{0}$</span> (<span class="math-container">$(n-m)\times m$</span>). So, we have <span class="math-container">$B=\begin{bmatrix} \lambda_iI & B_{12}\\ \mathbf{0} & B_{22} \end{bmatrix}$</span>.</p>
<p><span class="math-container">$\det(A-\lambda I)=\det(P^{-1}(A-\lambda I)P)=\det(P^{-1}AP-\lambda I)=\det(B-\lambda I)$</span> <span class="math-container">$ = \det \Bigg(\begin{bmatrix} (\lambda_i-\lambda)I & B_{12}\\ \mathbf{0} & B_{22}-\lambda I \end{bmatrix} \Bigg)=(\lambda_i-\lambda)^{m}\det(B_{22}-\lambda I)$</span>.</p>
<p>Obviously, <span class="math-container">$m$</span> is no bigger than the algebraic multiplicity of <span class="math-container">$\lambda_i$</span>.</p>
|
probability | <p>I need help with the following problem. Suppose $Z=N(0,s)$ i.e. normally distributed random variable with standard deviation $\sqrt{s}$. I need to calculate $E[Z^2]$. My attempt is to do something like
\begin{align}
E[Z^2]=&\int_0^{+\infty} y \cdot Pr(Z^2=y)dy\\
=& \int_0^{+\infty}y\frac{1}{\sqrt{2\pi s}}e^{-\frac y{2s}}dy\\
=&\frac{1}{\sqrt{2\pi s}}\int_0^{\infty}ye^{-\frac y{2s}}dy.
\end{align}</p>
<p>By using integration by parts we get</p>
<p>$$\int_0^{\infty}ye^{-\frac y{2s}}dy=\int_0^{+\infty}2se^{-\frac y{2s}}dy=4s^2.$$</p>
<p>Hence $E[Z^2]=\frac{2s\sqrt{2s}}{\sqrt{\pi}},$ which does not coincide with the answer in the text. Can someone point the mistake?</p>
| <p>This is old, but I feel like an easy derivation is in order.</p>
<p>The variance of any random variable $X$ can be written as
$$
V[X] = E[X^2] - (E[X])^2
$$</p>
<p>Solving for the needed quantity gives
$$
E[X^2] = V[X] + (E[X])^2
$$</p>
<p>But in our case $E[X] = 0$, so $E[X^2] = V[X] = \sigma^2$ immediately, which is $s$ in the question's notation.</p>
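<p>A quick numerical check of the result $E[Z^2] = s$ for $Z\sim N(0,s)$, by direct midpoint-rule integration of $z^2$ against the density (an illustration, not a derivation):</p>

```python
import math

# Confirm E[Z^2] = s for Z ~ N(0, s) (variance s) by integrating
# z^2 * pdf(z) with a midpoint rule over a wide interval.
s = 2.0
sd = math.sqrt(s)

def pdf(z):
    return math.exp(-z * z / (2 * s)) / math.sqrt(2 * math.pi * s)

N, lo, hi = 200_000, -10 * sd, 10 * sd
dz = (hi - lo) / N
second_moment = 0.0
for k in range(N):
    z = lo + (k + 0.5) * dz
    second_moment += z * z * pdf(z) * dz

assert abs(second_moment - s) < 1e-6
print(second_moment)
```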
| <p>The answer is $s = \sigma^2$. The integral you want to evaluate is </p>
<p>$$E[Z^2] = \frac{1}{\sqrt{2 \pi} \sigma} \int_{-\infty}^{\infty} dz \: z^2 \exp{(-\frac{z^2}{2 \sigma^2})}$$</p>
|
number-theory | <p>For example how come $\zeta(2)=\sum_{n=1}^{\infty}n^{-2}=\frac{\pi^2}{6}$. It seems counter intuitive that you can add numbers in $\mathbb{Q}$ and get an irrational number.</p>
| <p>But for example $$\pi=3+0.1+0.04+0.001+0.0005+0.00009+0.000002+\cdots$$ and that surely does not seem strange to you...</p>
| <p>You can't add an infinite number of rational numbers. What you can do, though, is find a limit of a sequence of partial sums. So, $\pi^2/6$ is the limit to infinity of the sequence $1, 1 + 1/4, 1 + 1/4 + 1/9, 1 + 1/4 + 1/9 + 1/16, \ldots $. Writing it so that it looks like a sum is really just a shorthand.</p>
<p>In other words, $\sum^\infty_{i=1} \cdots$ is actually kind of an abbreviation for $\lim_{n\to\infty} \sum^n_{i=1} \cdots$.</p>
|
probability | <p>The <a href="http://en.wikipedia.org/wiki/Kolmogorov_extension_theorem" rel="nofollow noreferrer">Kolmogorov Extension Theorem</a> says, essentially, that one can get a process on <span class="math-container">$\mathbb{R}^T$</span> for <span class="math-container">$T$</span> being an <em>arbitrary</em>, non-empty index set, by specifying all finite dimensional distributions in a "consistent" way. My favorite formulation of the consistency condition can be found <a href="https://web.archive.org/web/20150226030616/http://people.hss.caltech.edu/%7Ekcb/Notes/Kolmogorov.pdf" rel="nofollow noreferrer">here</a>. Now for the case in which <span class="math-container">$T$</span> is countable, this has already be shown by P. J. Daniell (see for example <a href="http://www.emis.de/journals/JEHPS/Decembre2007/Aldrich.pdf" rel="nofollow noreferrer">here</a> or <a href="http://www.probabilityandfinance.com/articles/STS154.pdf" rel="nofollow noreferrer">here</a>). So I would like to know what the extension to uncountable index sets brings. Events like "sample paths are continuous" are not in the <span class="math-container">$\sigma$</span>-algebra. In a rather critical paper on Kolmogrov's work on the foundation of probability, Shafer and Vovk <a href="http://www.probabilityandfinance.com/articles/STS154.pdf" rel="nofollow noreferrer">write</a> about the extension to uncountable index sets: "This greater generality is merely formal, in two senses: it involves no additional mathematical complications and it has no practical use." My impression is that this sentiment is not universally shared, so I would like to know:</p>
<blockquote>
<p>How is the Kolmogorov Extension Theorem applied in the construction of stochastic processes in continuous time? Especially, how are the constructed probabilities transferred to richer measurable spaces?</p>
</blockquote>
| <p>Assume that you have a set of finite-dimensional distributions $(\mu_S)_{S\in A}$, where $A$ is the set of finite subsets of, say, $\mathbb{R}$, and assume that you would like to argue for the existence of a stochastic process $X$ with càdlàg (right-continuous with left limits) paths such that the family of finite-dimensional distributions of $X$ is $(\mu_S)_{S\in A}$. Kolmogorov's extension theorem allows you to split this problem into two parts:</p>
<ol>
<li><p>Establishing the existence of a measure on $(\mathbb{R}^{\mathbb{R}_+},\mathbb{B}^{\mathbb{R}_+})$ with the appropriate finite-dimensional distributions (here, the extension theorem is invoked).</p></li>
<li><p>Using the measure constructed above, argue for the existence of a measure on $D[0,\infty)$ - the space of functions from $\mathbb{R}_+$ to $\mathbb{R}$ which are right-continuous with left limits - with the same finite-dimensional distributions.</p></li>
</ol>
<p>One example of where this is a viable proof technique is in the theory of continuous-time Markov processes on general state spaces. For convenience, consider a complete, separable metric space $(E,d)$ endowed with its Borel $\sigma$-algebra $\mathbb{E}$. Assume we are given a family of probability measures $(P(x,\cdot))_{x\in E}$, where $P(x,\cdot)$ is a probability measure on $(E,\mathbb{E})$. We wish to argue for the existence of a càdlàg continuous-time Markov process with values in $E$ and $(P(x,\cdot))_{x\in E}$ as its transition probabilities.</p>
<p>The following argument is given in Rogers & Williams: "Diffusions, Martingales and Markov Processes", Volume 1, Section III.7, and uses the two steps outlined above. First, the Kolmogorov extension theorem is invoked to obtain a measure $P$ on $(E^{\mathbb{R}_+},\mathbb{E}^{\mathbb{R}_+})$ with the desired finite-dimensional distributions. Letting $X$ be the identity on $E^{\mathbb{R}_+}$, $X$ is then a "non-regularized" process with the desired finite-dimensional distributions. Afterwards, in the case where the family of transition probabilities satisfy certain regularity criteria, a supermartingale argument is applied to obtain a càdlàg version of $X$. This supermartingale argument could not have been immediately applied without the existence of the measure $P$ on $(E^{\mathbb{R}_+},\mathbb{E}^{\mathbb{R}_+})$: Without this measure, there would be no candidate for a common probability space on which to define the supermartingales applied in the regularity proof. Thus, it is not obvious how to obtain the same existence result without the Kolmogorov extension theorem.</p>
| <p>I think there are plenty of cases where the uncountable version is essential. Gaussian processes are quite frequently defined on something much larger than <span class="math-container">$\mathbb{R}^d$</span>. The index set <span class="math-container">$T$</span> is frequently taken to be a collection of sets or some functional space, or a space of measures.</p>
<p>One will find quite a few examples in Adler's <em>Introduction to continuity, extrema and related topics for general Gaussian processes</em>.</p>
<p><em>P.S. I see that the question is quite old. I just read the answers and discussions and wanted to restore some justice :) Maybe it would be of help for someone who stumbles upon this thread.</em></p>
|
probability | <p>$\newcommand{\erf}{\operatorname{erf}}$
This may be a very naïve question, but here goes.</p>
<p>The <a href="http://en.wikipedia.org/wiki/Error_function">error function</a> $\erf$ is defined by
$$\erf(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}dt.$$
Of course, it is closely related to the normal cdf
$$\Phi(x) = P(N < x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-t^2/2}dt$$
(where $N \sim N(0,1)$ is a standard normal) by the expression $\erf(x) = 2\Phi(x \sqrt{2})-1$.</p>
<p>My question is:</p>
<blockquote>
<p>Why is it natural or useful to define $\erf$ normalized in this way?</p>
</blockquote>
<p>I may be biased: as a probabilist, I think much more naturally in terms of $\Phi$. However, anytime I want to compute something, I find that my calculator or math library only provides $\erf$, and I have to go check a textbook or Wikipedia to remember where all the $1$s and $2$s go. Being charitable, I have to assume that $\erf$ was invented for some reason other than to cause me annoyance, so I would like to know what it is. If nothing else, it might help me remember the definition.</p>
<p>Wikipedia says:</p>
<blockquote>
<p><em>The standard normal cdf is used more often in probability and statistics, and the error function is used more often in other branches of mathematics.</em></p>
</blockquote>
<p>So perhaps a practitioner of one of these mysterious "other branches of mathematics" would care to enlighten me. </p>
<p>The most reasonable expression I've found is that
$$P(|N| < x) = \erf(x/\sqrt{2}).$$
This at least gets rid of all but one of the apparently spurious constants, but still has a peculiar $\sqrt{2}$ floating around.</p>
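<p>Both identities can be checked numerically with Python's <code>math.erf</code> and a hand-rolled $\Phi$ computed by midpoint-rule integration of the standard normal density (a sanity check, not part of the question):</p>

```python
import math

# Check erf(x) = 2*Phi(x*sqrt(2)) - 1 and P(|N| < x) = erf(x/sqrt(2)).

def phi_cdf(x, steps=50_000, lo=-12.0):
    # Phi(x) by midpoint-rule integration; the tail below lo is negligible.
    dz = (x - lo) / steps
    total = 0.0
    for k in range(steps):
        z = lo + (k + 0.5) * dz
        total += math.exp(-z * z / 2) * dz
    return total / math.sqrt(2 * math.pi)

for x in (0.3, 1.0, 2.0):
    assert abs(math.erf(x) - (2 * phi_cdf(x * math.sqrt(2)) - 1)) < 1e-6
    # P(|N| < x) = Phi(x) - Phi(-x) = 2*Phi(x) - 1
    assert abs((2 * phi_cdf(x) - 1) - math.erf(x / math.sqrt(2))) < 1e-6

print("identities verified")
```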
| <p>Some paper chasing netted <a href="http://www.jstatsoft.org/v11/a05/paper">this short article</a> by George Marsaglia, in which he also quotes <a href="http://www.informaworld.com/smpp/content~db=all~content=a911145858~frm=abslink">the article by James Glaisher</a> where the error function was given a name and notation (but with a different normalization). Here's the relevant section of the paper:</p>
<blockquote>
<p>In 1871, J.W. Glaisher published an article on definite integrals in which
he comments that while there is scarcely a function that cannot be put
in the form of a definite integral, for the evaluation of those that
cannot be put in the form of a tolerable series we are limited to
combinations of algebraic, circular, logarithmic and exponential—the
elementary or primary functions. ... He writes:</p>
<blockquote>
<p>The chief point of importance, therefore, is the choice of the
elementary functions; and this is a work of some difficulty. One function
however, viz. the integral $\int_x^\infty e^{-x^2}\mathrm dx$,
well known for its use in physics, is so obviously suitable for the purpose,
that, with the exception of receiving a name and a fixed notation, it may
almost be said to have already become primary... As it is necessary that
the function should have a name, and as I do not know that any has been
suggested, I propose to call it the <em>Error-function</em>, on account of its
earliest and still most important use being in connexion with the theory of
Probability, and notably with the theory of Errors, and to write</p>
<p>$$\int_x^\infty e^{-x^2}\mathrm dx=\mathrm{Erf}(x)$$</p>
</blockquote>
<p>Glaisher goes on to demonstrate use of $\mathrm{Erf}$ in the evaluation of a
variety of definite integrals. We still use "error function" and
$\mathrm{Erf}$, but $\mathrm{Erf}$ has become $\mathrm{erf}$, with a change
of limits and a normalizing factor:
$\mathrm{erf}(x)=\frac2{\sqrt{\pi}}\int_0^x e^{-t^2}\mathrm dt$ while Glaisher’s
original $\mathrm{Erf}$ has become
$\mathrm{erfc}(x)=\frac2{\sqrt{\pi}}\int_x^\infty e^{-t^2}\mathrm dt$. The normalizing
factor $\frac2{\sqrt{\pi}}$ that makes $\mathrm{erfc}(0)=1$ was not used in
early editions of the famous “A Course in Modern Analysis” by Whittaker and
Watson. Both were students and later colleagues of Glaisher, as were other
eminences from Cambridge mathematics/physics: Maxwell, Thomson (Lord Kelvin)
Rayleigh, Littlewood, Jeans, Whitehead and Russell. Glaisher had a long and
distinguished career at Cambridge and was editor of <em>The Quarterly Journal of
Mathematics</em> for fifty years, from 1878 until his death in 1928.</p>
<p>It is unfortunate that changes from Glaisher’s original $\mathrm{Erf}$:
the switch of limits, names and the standardizing factor, did not apply to
what Glaisher acknowledged was its most important application: the normal
distribution function, and thus
$\frac1{\sqrt{2\pi}}\int e^{-\frac12t^2}\mathrm dt$ did not become the
basic integral form. So those of us interested in its most important
application are stuck with conversions...</p>
<p>...A search of the Internet will show many applications of what we now call
$\mathrm{erf}$ or $\mathrm{erfc}$ to problems of the type that seemed of
more interest to Glaisher and his famous colleagues: integral solutions
of differential equations. These include the telegrapher’s equation,
studied by Lord Kelvin in connection with the Atlantic cable, and Kelvin’s
estimate of the age of the earth (25 million years), based on the solution
of a heat equation for a molten sphere (it was far off because of then
unknown contributions from radioactive decay). More recent Internet mentions
of the use of $\mathrm{erf}$ or $\mathrm{erfc}$ for solving
differential equations include short-circuit power dissipation in
electrical engineering, current as a function of time in a switching diode,
thermal spreading of impedance in electrical components, diffusion of a
unidirectional magnetic field, recovery times of junction diodes and
the Mars Orbiter Laser Altimeter.</p>
</blockquote>
<p>On the other hand, for the applications where the error function is to be evaluated at <em>complex</em> values (spectroscopy, for instance), probably the more "natural" function to consider is <a href="http://dx.doi.org/10.1016/0022-4073%2867%2990057-X">Faddeeva's (or Voigt's) function:</a></p>
<p>$$w(z)=\exp\left(-z^2\right)\mathrm{erfc}(-iz)$$</p>
<p>there, the normalization factor simplifies most of the formulae in which it is used. In short, I suppose the choice of whether you use the error function or the normal distribution CDF $\Phi$ or the Faddeeva function in your applications is a matter of convenience.</p>
| <p>I think the normalization in $x$ is easy to account for: it's natural to write down the integral $\int_0^x e^{-t^2} \, dt$ as an integral even if it's not actually the most natural probabilistic quantity. So it remains to explain the normalization in $y$, and as far as I can tell this is so $\lim_{x \to \infty} \text{erf}(x) = 1$. </p>
<p>Beyond that, the normalization's probably stuck more for historical reasons than anything else. </p>
|
matrices | <p>I know there are different definitions of Matrix Norm, but I want to use the definition on <a href="http://mathworld.wolfram.com/MatrixNorm.html" rel="noreferrer">WolframMathWorld</a>, and Wikipedia also gives a similar definition.</p>
<p>The definition states as below:</p>
<blockquote>
<p>Given a square complex or real <span class="math-container">$n\times n$</span> matrix <span class="math-container">$A$</span>, a matrix norm <span class="math-container">$\|A\|$</span> is a nonnegative number associated with <span class="math-container">$A$</span> having the properties</p>
<p>1.<span class="math-container">$\|A\|>0$</span> when <span class="math-container">$A\neq0$</span> and <span class="math-container">$\|A\|=0$</span> iff <span class="math-container">$A=0$</span>,</p>
<p>2.<span class="math-container">$\|kA\|=|k|\|A\|$</span> for any scalar <span class="math-container">$k$</span>,</p>
<p>3.<span class="math-container">$\|A+B\|\leq\|A\|+\|B\|$</span>, for <span class="math-container">$n \times n$</span> matrix <span class="math-container">$B$</span></p>
<p>4.<span class="math-container">$\|AB\|\leq\|A\|\|B\|$</span>.</p>
</blockquote>
<p>Then, as the website states, we have <span class="math-container">$\|A\|\geq|\lambda|$</span>, here <span class="math-container">$\lambda$</span> is an eigenvalue of <span class="math-container">$A$</span>. I don't know how to prove it, by using just these four properties.</p>
| <p>Suppose $v$ is an eigenvector for $A$ corresponding to $\lambda$. Form the "eigenmatrix" $B$ by putting $v$ in all the columns. Then $AB = \lambda B$. So, by properties $2$ and $4$ (and $1$, to make sure $\|B\| > 0$),
$$|\lambda| \|B\| = \|\lambda B\| = \|AB\| \le \|A\| \|B\|.$$
Hence, $\|A\| \ge |\lambda|$ for all eigenvalues $\lambda$.</p>
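<p>The Frobenius norm $\|A\|_F=\sqrt{\sum_{i,j}a_{ij}^2}$ satisfies the four properties above (in particular it is submultiplicative), so the bound applies to it. A quick check on a small triangular matrix, whose eigenvalues are its diagonal entries:</p>

```python
import math

# Verify ||A||_F >= |lambda| on a 2x2 triangular example, plus a
# sanity check of submultiplicativity on the same matrix.

def fro(M):
    return math.sqrt(sum(v * v for row in M for v in row))

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2.0, 1.0], [0.0, 3.0]]      # triangular, eigenvalues 2 and 3
eigenvalues = [2.0, 3.0]

assert all(fro(A) >= abs(lam) for lam in eigenvalues)
assert fro(matmul(A, A)) <= fro(A) * fro(A) + 1e-12

print(fro(A))                     # sqrt(14), comfortably above 3
```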
| <p>Let $\|\cdot\|$ be a matrix norm.</p>
<p>It is known that the <a href="https://en.wikipedia.org/wiki/Spectral_radius" rel="noreferrer">spectral radius</a> $r(A) = \lim_{n\to\infty} \|A^n\|^{\frac1n}$ has the property $|\lambda| \le r(A)$ for all $\lambda\in \sigma(A)$.</p>
<p>Indeed, let $\lambda \in \mathbb{C}$ such that $|\lambda| > r(A)$.</p>
<p>Then $I - \frac1{\lambda} A$ is invertible. Namely, check that the inverse is given by $\sum_{n=0}^\infty\frac1{\lambda^n}A^n$.</p>
<p>This series converges absolutely because $\frac1{|\lambda|}$ is less than the radius of convergence of the power series $\sum_{n=1}^\infty \|A^n\|x^n$, which is $\frac1{\limsup_{n\to\infty} \|A^n\|^{\frac1n}} = \frac1{r(A)}$.</p>
<p>Hence $$\lambda I - A = \lambda\left(I - \frac1{\lambda} A\right)$$</p>
<p>is also invertible so $\lambda \notin \sigma(A)$.</p>
<p>Now using submultiplicativity we get $\|A^n\| \le \|A\|^n$ so</p>
<p>$$|\lambda| \le r(A) = \lim_{n\to\infty} \|A^n\|^{\frac1n} \le \lim_{n\to\infty} \|A\|^{n\cdot\frac1n} = \|A\|$$</p>
|
linear-algebra | <p>During a year and half of studying Linear Algebra in academy, I have never questioned why we use the word "scalar" and not "number". When I started the course our professor said we would use "scalar" but he never said why.</p>
<p>So, why do we use the word "scalar" and not "number" in Linear Algebra?</p>
| <p>So first of all, "integer" would not be adequate; vector spaces have <em>fields</em> of scalars and the integers are not a field. "Number" would be adequate in the common cases (where the field is $\mathbb{R}$ or $\mathbb{C}$ or some other subfield of $\mathbb{C}$), but even in those cases, "scalar" is better for the following reason. We can identify $c$ in the base field with the function $*_c : V \to V,*_c(v)=cv$. Especially when the field is $\mathbb{R}$, you can see that geometrically, this function acts on the space by "scaling" a vector (stretching or contracting it and possibly reflecting it). Thus the role of the scalars is to scale the vectors, and the word "scalar" hints us toward this way of thinking about it.</p>
| <p>Not all fields are fields of numbers. For instance, it makes sense to talk about vector spaces over the field of rational functions $\mathbb R(X)$ but the scalars in this case are definitely not numbers.</p>
|
linear-algebra | <p>Why is Linear Algebra named that way?
In particular, why do we call it <code>linear</code>? What does it mean?</p>
| <p>Linear algebra is so named because it studies linear functions. A linear function is one for which</p>
<p>$$f(x+y) = f(x) + f(y)$$</p>
<p>and</p>
<p>$$f(ax) = af(x)$$</p>
<p>where $x$ and $y$ are vectors and $a$ is a scalar. Roughly, this means that inputs are proportional to outputs and that the function is additive. We get the name 'linear' from the prototypical example of a linear function in one dimension: a straight line through the origin. However, linear functions can be more complex than this (or indeed, simpler: the function $f(x)=0$ for all $x$ is a linear function!</p>
<p>Of course, I've brushed a lot of detail under the carpet here. For example, what kind of space are $x$ and $y$ members of? (Answer: They're elements of a vector space). Do $x$ and $f(x)$ have to belong to the same space? (Answer: No). If they belong to different spaces, what does it mean to write $ax$ and $af(x)$? (Answer: you need an action by the same field on each of the vector spaces). Do the vector spaces have to be finite dimensional? (Answer: no, and in fact a lot of really interesting linear algebra takes place over infinite-dimensional vector spaces).</p>
<p>I hope that's enough to get you started.</p>
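<p>The two defining conditions translate directly into a spot check in code (a heuristic test on sample points, not a proof of linearity):</p>

```python
# f is linear iff f(x+y) == f(x)+f(y) and f(a*x) == a*f(x).
# We test maps R -> R on a few sample points only.

def looks_linear(f, samples=(-2.0, 0.5, 3.0), scalars=(-1.5, 2.0)):
    for x in samples:
        for y in samples:
            if abs(f(x + y) - (f(x) + f(y))) > 1e-9:
                return False
        for a in scalars:
            if abs(f(a * x) - a * f(x)) > 1e-9:
                return False
    return True

assert looks_linear(lambda x: 3 * x)       # scaling by 3: linear
assert looks_linear(lambda x: 0.0)         # the zero map: linear
assert not looks_linear(lambda x: x + 1)   # an affine line, NOT linear
print("linearity spot checks passed")
```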
| <p>I think <em>linear</em> refers to the fact that vector spaces are not curved. For instance, the wikipedia page for <a href="http://en.wikipedia.org/wiki/Linear_space">linear spaces</a> gets redirected to the page on vector spaces. So does the <a href="http://mathworld.wolfram.com/LinearSpace.html">one</a> at MathWorld.</p>
<p>From Moore in <a href="http://dx.doi.org/10.1006%2Fhmat.1995.1025">The axiomatization of linear algebra: 1875–1940</a> I've learned that:</p>
<ul>
<li>Peano used <em>linear systems</em> for what we now call vector spaces. This reflects the view that linear algebra is about spaces of linear algebraic relations. (p. 265, 266)</li>
<li>Pincherle was the first to use the term <em>linear space</em> for the concept of vector space. (p. 270)</li>
<li>Hahn used <em>linear space</em> for <em>normed vector space</em>. (p. 277)</li>
<li><em>Linear transformations</em> as an abstract concept seem to have been introduced much later by Emmy Noether. (p. 293)</li>
<li>The term <em>linear algebra</em> was first used in the modern sense by van der Waerden although the term can be found earlier in Weyl. (p. 294)</li>
</ul>
|
probability | <p>I'm struggling with the concept of conditional expectation. First of all, if you have a link to any explanation that goes beyond showing that it is a generalization of elementary intuitive concepts, please let me know. </p>
<p>Let me get more specific. Let <span class="math-container">$\left(\Omega,\mathcal{A},P\right)$</span> be a probability space and <span class="math-container">$X$</span> an integrable real random variable defined on <span class="math-container">$(\Omega,\mathcal{A},P)$</span>. Let <span class="math-container">$\mathcal{F}$</span> be a sub-<span class="math-container">$\sigma$</span>-algebra of <span class="math-container">$\mathcal{A}$</span>. Then <span class="math-container">$E[X|\mathcal{F}]$</span> is the a.s. unique random variable <span class="math-container">$Y$</span> such that <span class="math-container">$Y$</span> is <span class="math-container">$\mathcal{F}$</span>-measurable and for any <span class="math-container">$A\in\mathcal{F}$</span>, <span class="math-container">$E\left[X1_A\right]=E\left[Y1_A\right]$</span>.</p>
<p>The common interpretation seems to be: "<span class="math-container">$E[X|\mathcal{F}]$</span> is the expectation of <span class="math-container">$X$</span> given the information of <span class="math-container">$\mathcal{F}$</span>." I'm finding it hard to get any meaning from this sentence.</p>
<ol>
<li><p>In elementary probability theory, expectation is a real number. So the sentence above makes me think of a real number instead of a random variable. This is reinforced by <span class="math-container">$E[X|\mathcal{F}]$</span> sometimes being called "conditional expected value". Is there some canonical way of getting real numbers out of <span class="math-container">$E[X|\mathcal{F}]$</span> that can be interpreted as elementary expected values of something?</p></li>
<li><p>In what way does <span class="math-container">$\mathcal{F}$</span> provide information? To know that some event occurred is something I would call information, and I have a clear picture of conditional expectation in this case. To me <span class="math-container">$\mathcal{F}$</span> is not a piece of information, but rather a "complete" set of pieces of information one could possibly acquire in some way. </p></li>
</ol>
<p>Maybe you say there is no real intuition behind this, <span class="math-container">$E[X|\mathcal{F}]$</span> is just what the definition says it is. But then, how does one see that a martingale is a model of a fair game? Surely, there must be some intuition behind that!</p>
<p>I hope you have got some impression of my misconceptions and can rectify them.</p>
| <p>Maybe this simple example will help. I use it when I teach
conditional expectation. </p>
<p>(1) The first step is to think of ${\mathbb E}(X)$ in a new way:
as the best estimate for the value of a random variable $X$ in the absence of any information.
To minimize the squared error
$${\mathbb E}[(X-e)^2]={\mathbb E}[X^2-2eX+e^2]={\mathbb E}(X^2)-2e{\mathbb E}(X)+e^2,$$
we differentiate to obtain $2e-2{\mathbb E}(X)$, which is zero at $e={\mathbb E}(X)$.</p>
<p>For example, if I throw a fair die and you have to
estimate its value $X$, according to the analysis above, your best bet is to guess ${\mathbb E}(X)=3.5$.
On specific rolls of the die, this will be an over-estimate or an under-estimate, but in the long run it minimizes the mean square error. </p>
<p>(2) What happens if you <em>do</em> have additional information?
Suppose that I tell you that $X$ is an even number.
How should you modify your estimate to take this new information into account?</p>
<p>The mental process may go something like this: "Hmmm, the possible values <em>were</em> $\lbrace 1,2,3,4,5,6\rbrace$
but we have eliminated $1,3$ and $5$, so the remaining possibilities are $\lbrace 2,4,6\rbrace$.
Since I have no other information, they should be considered equally likely and hence the revised expectation is $(2+4+6)/3=4$".</p>
<p>Similarly, if I were to tell you that $X$ is odd, your revised (conditional) expectation is 3. </p>
<p>(3) Now imagine that I will roll the die and I will tell you the parity of $X$; that is, I will
tell you whether the die comes up odd or even. You should now see that a single numerical response
cannot cover both cases. You would respond "3" if I tell you "$X$ is odd", while you would respond "4" if I tell you "$X$ is even".
A single numerical response is not enough because the particular piece of information that I will give you is <strong>itself random</strong>.
In fact, your response is necessarily a function of this particular piece of information.
Mathematically, this is reflected in the requirement that ${\mathbb E}(X\ |\ {\cal F})$ must be $\cal F$ measurable. </p>
<p>I think this covers point 1 in your question, and tells you why a single real number is not sufficient.
Also concerning point 2, you are correct in saying that the role of $\cal F$ in ${\mathbb E}(X\ |\ {\cal F})$
is not a single piece of information, but rather tells what possible specific pieces of (random) information may occur. </p>
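<p>The die example can be checked by direct enumeration. A small Python sketch (the function names are mine, purely illustrative): it computes the unconditional expectation, and the conditional expectation as a <em>function</em> of the observed parity, i.e. as a random variable.</p>

```python
from fractions import Fraction

# Fair die: outcomes 1..6, each with probability 1/6.
outcomes = list(range(1, 7))

def expectation(pred=lambda x: True):
    """Average of X over the outcomes satisfying pred (exact arithmetic)."""
    vals = [x for x in outcomes if pred(x)]
    return Fraction(sum(vals), len(vals))

def cond_exp_given_parity(omega):
    """E(X | parity): its value depends on which piece of (random)
    information is revealed, namely the parity of the roll omega."""
    is_even = (omega % 2 == 0)
    return expectation(lambda x: (x % 2 == 0) == is_even)

print(expectation())              # 7/2, the best guess with no information
print(cond_exp_given_parity(4))   # 4, the revised guess on "even"
print(cond_exp_given_parity(3))   # 3, the revised guess on "odd"
```

<p>Averaging the conditional expectation over the six equally likely outcomes gives $(3\cdot 3 + 3\cdot 4)/6 = 7/2$ again, which is the tower property ${\mathbb E}[{\mathbb E}(X\mid\mathcal F)]={\mathbb E}(X)$ in miniature.</p>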
| <p>I think a good way to answer question 2 is as follows.</p>
<p>I am performing an experiment, whose outcome can be described by an element $\omega$ of some set $\Omega$. I am not going to tell you the outcome, but I will allow you to ask certain yes/no questions about it. (This is like "20 questions", but infinite sequences of questions will be allowed, so it's really "$\aleph_0$ questions".) We can associate a yes/no question with the set $A \subset \Omega$ of outcomes for which the answer is "yes". </p>
<p>Now, one way to describe some collection of "information" is to consider all the questions which could be answered with that information. (For example, the 2010 Encyclopedia Britannica is a collection of information; it can answer the questions "Is the dodo extinct?" and "Is the elephant extinct?" but not the question "Did Justin Bieber win a 2011 Grammy?") This, then, would be a set $\mathcal{F} \subset 2^\Omega$.</p>
<p>If I know the answer to a question $A$, then I also know the answer to its negation, which corresponds to the set $A^c$ (e.g. "Is the dodo not-extinct?"). So any information that is enough to answer question $A$ is also enough to answer question $A^c$. Thus $\mathcal{F}$ should be closed under taking complements. Likewise, if I know the answer to questions $A,B$, I also know the answer to their disjunction $A \cup B$ ("Are either the dodo or the elephant extinct?"), so $\mathcal{F}$ must also be closed under (finite) unions. Countable unions require more of a stretch, but imagine asking an infinite sequence of questions "converging" on a final question. ("Can elephants live to be 90? Can they live to be 99? Can they live to be 99.9?" In the end, I know whether elephants can live to be 100.)</p>
<p>I think this gives some insight into why a $\sigma$-field can be thought of as a collection of information.</p>
|
logic | <p>I read that contraposition $\neg Q \rightarrow \neg P$ in intuitionistic logic is not generally equivalent to $P \rightarrow Q$. If this is right, in what case can this contraposition logical-equivalence be used in intuitionistic logic?</p>
| <p>Contraposition in intuitionism can, sometimes, be used, but it's a delicate situation.</p>
<p>You can think of it this way. Intuitionistically, the meaning of $P\to Q$ is that any proof of $P$ can be turned into a proof of $Q$. Similarly, $\neg Q \to \neg P$ means that every proof of $\neg Q$ can be turned into a proof of $\neg P$. </p>
<p>If $P\to Q$ is true, and you are given a proof of $\neg Q$, can you construct a proof of $\neg P$ ? The answer is yes, as follows. Well, we are given a proof that there is no proof of $Q$. Suppose $P$ is true, then it can be turned into a proof of $Q$, but then we will have a proof of $Q\wedge \neg Q$, which is impossible. Thus we just showed that it is not the case that $P$ holds, thus $\neg P$ holds. In other words, $(P\to Q)\to (\neg Q \to \neg P)$.</p>
<p>In the other direction, suppose that $\neg Q \to \neg P$, and you are given a proof of $P$. Can you now construct a proof of $Q$? Well, not quite. The best you can do is as follows. Suppose I have a proof of $\neg Q$. I can turn it into a proof of $\neg P$, and then obtain a proof of $P\wedge \neg P$, which is impossible. It thus shows that $\neg Q$ can not be proven. That is, that $\neg \neg Q$ holds. In other words, $(\neg Q \to \neg P)\to (P\to \neg \neg Q)$.</p>
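<p>For readers who like these arguments machine-checked: both directions above can be written down in a proof assistant with no classical axioms. A sketch in Lean 4 (plain core, no Mathlib):</p>

```lean
-- (P → Q) → (¬Q → ¬P): a proof of ¬Q plus a proof of P would
-- yield a proof of Q and of ¬Q at once, which is impossible.
example {P Q : Prop} (h : P → Q) : ¬Q → ¬P :=
  fun hnq hp => hnq (h hp)

-- The converse direction only reaches the double negation:
-- (¬Q → ¬P) → (P → ¬¬Q).
example {P Q : Prop} (h : ¬Q → ¬P) : P → ¬¬Q :=
  fun hp hnq => h hnq hp
```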
| <blockquote>
<p>In what case can this contraposition logical-equivalence be used in intuitionistic logic?</p>
</blockquote>
<p>This is not straightforward to answer. What needs to be true is that $P$ and $Q$ need to act sufficiently like classical formulas. Here are two examples:</p>
<p><em>1.</em> The <a href="https://en.wikipedia.org/wiki/Double-negation_translation" rel="noreferrer">negative translation</a> embeds classical logic into intuitionistic logic, sending a formula $S$ to a formula $S^N$. If we compute this for an instance of contraposition, we obtain:</p>
<p>$$((\lnot R \to \lnot S) \to (S \to R))^N\\
(\lnot R \to \lnot S)^N \to (S \to R)^N\\
((\lnot R)^N \to (\lnot S)^N) \to (S^N \to R^N)\\
(\lnot R^N \to \lnot S^N) \to (S^N \to R^N)$$</p>
<p>Therefore $(\lnot R^N \to \lnot S^N) \to (S^N \to R^N)$ is intuitionistically valid for all $R,S$. In particular, if $P$ and $Q$ are equivalent to negative translations of other formulas, then contraposition holds for $P$ and $Q$.</p>
<hr>
<p><em>2.</em> Here is a different view. Suppose $Q_0$ is a fixed formula such that contraposition holds between $Q_0$ and all $P$:
$$(\lnot Q_0 \to \lnot P) \to (P \to Q_0)$$</p>
<p>Then, letting $P$ be $\lnot \bot$, we have $\lnot P$ equivalent to $\bot$, and so from
$$(\lnot Q_0 \to \lnot P) \to (P \to Q_0)$$
we obtain
$$(\lnot \lnot Q_0) \to (\lnot \bot \to Q_0)$$
which is equivalent to
$$\lnot \lnot Q_0 \to Q_0$$
Thus if $Q_0$ satisfies contraposition with all $P$, then $Q_0$ is equivalent to $\lnot\lnot Q_0$. The converse of this was shown by Ittay Weiss in another answer: if $Q_0$ is equivalent to $\lnot\lnot Q_0$ then $Q_0$ satisfies contraposition with all $P$.</p>
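<p>The converse direction stated at the end is short enough to write out precisely. A Lean 4 sketch (no classical axioms): if $Q_0$ is stable under double negation, contraposition toward $Q_0$ holds for every $P$.</p>

```lean
-- If Q is ¬¬-stable, full contraposition toward Q is intuitionistic.
example {P Q : Prop} (stable : ¬¬Q → Q) (h : ¬Q → ¬P) : P → Q :=
  fun hp => stable (fun hnq => h hnq hp)
```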
|
probability | <p>I attempted to answer <a href="https://www.quora.com/Two-distinct-real-numbers-between-0-and-1-are-written-on-two-sheets-of-paper-You-have-to-select-one-of-the-sheets-randomly-and-declare-whether-the-number-you-see-is-the-biggest-or-smallest-of-the-two-How-can-one-expect-to-be-correct-more-than-half-the-times-you-play-this-game/answer/Ephraim-Rothschild">this question</a> on Quora, and was told that I am thinking about the problem incorrectly. The question was:</p>
<blockquote>
<p>Two distinct real numbers between 0 and 1 are written on two sheets of
paper. You have to select one of the sheets randomly and declare
whether the number you see is the biggest or smallest of the two. How
can one expect to be correct more than half the times you play this
game?</p>
</blockquote>
<p>My answer was that it was impossible, as the probability should always be 50% for the following reason:</p>
<blockquote>
<p><strong>You can't!</strong> Here's why: </p>
<p>The set of real numbers between (0, 1) is known as an Uncountably Infinite Set
(<a href="https://en.wikipedia.org/wiki/Uncountable_set">https://en.wikipedia.org/wiki/Uncountable_set</a>). A set that is
uncountable has the following interesting property: </p>
<p>Let $\mathbb{S}$ be an uncountably infinite set. Let $a, b, c, d \in \mathbb{S} (a \neq b, c \neq d)$. If $x$ is an uncountably infinite subset of
$\mathbb{S}$, containing all elements in $\mathbb{S}$ on the interval $(a, b)$; and $y$
is another uncountably infinite subset of $\mathbb{S}$, which contains all
elements of $\mathbb{S}$ on the interval $(c, d),$ <strong>$x$ and $y$ have the same
cardinality (size)!</strong></p>
<p>So for example, the set of all real numbers between (0, 1) is actually
the <em>exact same size</em> as the set of all real numbers between (0, 2)!
It is also the same size as the set of all real numbers between (0,
0.00001). In fact, if you have an uncountably infinite set on the interval $(a, b)$, and $a<n<b$, then exactly 50% of the numbers
in the set are greater than $n$, and 50% are less than $n$, <em>no matter
what you choose for</em> $n$. This is important because it tells us
something unintuitive about our probability in this case. Let's say
the first number you picked is 0.03. You might think "Well, 97% of the
other possible numbers are larger than this, so the other number is
probably larger." <strong>You would be wrong!</strong> There are actually <em>exactly</em>
as many numbers between (0, 0.03) as there are between (0.03, 1). Even
if you picked 0.03, half of the other possible numbers are smaller
than it, and half of the other possible numbers are larger than it.
<strong>This means there is still a 50% probability that the other number is larger, and a 50% probability that it is smaller!</strong> </p>
<p>"<em>But how can that be?</em>" you ask, "<em>why isn't $\frac{a+b}{2}$ the
midpoint?</em>"</p>
<p>The real question is, why is it that we believe that
$\frac{a+b}{2}$ is the midpoint to begin with? The reason is probably
the following: it seems to make the most sense for discrete
(finite/countably infinite) sets. For example, if instead of the real
numbers, we took the set of all multiples of $0.001$ on the interval
$[0, 1]$. Now it makes sense to say that 0.5 is the midpoint, as we
know that the number of numbers below 0.5 is equal to the number of
numbers above 0.5. If we were to try to say that the midpoint is 0.4,
we would find that there are now more numbers above 0.4 than there are
below 0.4. <strong>This no longer applies when talking about the set of all
real numbers</strong> $\mathbb{R}$. Strangely enough, we can no longer talk
about having a midpoint in $\mathbb{R}$, because every number in
$\mathbb{R}$ could be considered a midpoint. For any point in
$\mathbb{R}$, the numbers above it and the numbers below it always
have the same cardinality. </p>
<p>See the Wikipedia article on Cardinality of the continuum
(<a href="https://en.wikipedia.org/wiki/Cardinality_of_the_continuum">https://en.wikipedia.org/wiki/Cardinality_of_the_continuum</a>).</p>
</blockquote>
<p>My question is, from a mathematical point of view, is this correct? The person who told me that this is wrong is fairly well known, and not someone who I would assume to often be wrong, especially for these types of problems.</p>
<p>The reasoning given for my answer being wrong was as follows:</p>
<blockquote>
<p>Your conclusion is not correct.<br>
You're right that the set of real
numbers between 0 and 1 is uncountable infinite, and most of what you
said here is correct. But that last part is incorrect. If you picked a
random real number between 0 and 1, the number does have a 97% chance
of being above 0.03. Let's look at this another way. Let K = {all
integers divisible by 125423423}. Let M = {all integers not divisible
by 125423423}. K and M are the same size, right? Does this mean, if
you picked an random integer, it has a 50% chance of being in K and a
50% chance or not? A random integer has a 50% chance of being
divisible by 125423423?</p>
</blockquote>
<p>The reason I disagreed with this response was that the last sentence should actually be true. If the set of all numbers that are divisible by 125423423 is the same size as the set of numbers that aren't, there should be a 50% probability of picking a random number from the first set, and a 50% chance that a number would be picked from the second. This is certainly the case with finite sets. If there are 2 disjoint, finite sets with equal cardinality, and you choose a random number from the union of the two sets, there should be a 50% chance that the number came from the first set, and a 50% chance that the number came from the second set. Can this idea be generalized for infinite sets of equal cardinality?</p>
<p><strong>Is my answer wrong? If so, am I missing something about how cardinalities of two set relate to the probability of choosing a number from one of them? Where did I go wrong in my logic?</strong></p>
| <p>Yes, your answer is fundamentally wrong. Let me point at that it is not even right in the finite case. In particular, you are using the following false axiom:</p>
<blockquote>
<p>If two sets of outcomes are equally large, they are equally probable.</p>
</blockquote>
<p>However, this is wrong even if we have just two events. For a somewhat real life example, consider some random variable $X$ which is $1$ if I will get married exactly a year from today and which is $0$ otherwise. Now, clearly the sets $\{1\}$ and $\{0\}$ are equally large, each having one element. However, $0$ is far more likely than $1$, although they are both possible outcomes.</p>
<p>The point here is <em>probability</em> is not defined from <em>cardinality</em>. It is, in fact, a separate definition. The mathematical definition for probability goes something like this:</p>
<blockquote>
<p>To discuss probability, we start with a set of possible outcomes. Then, we give a function $\mu$ which takes in a subset of the outcomes and tells us how likely they are.</p>
</blockquote>
<p>One puts various conditions on $\mu$ to make sure it makes sense, but nowhere do we link it to cardinality. As an example, in the previous example with outcomes $0$ and $1$ which are not equally likely, one might have $\mu$ defined something like:
$$\mu(\{\})=0$$
$$\mu(\{0\})=\frac{9999}{10000}$$
$$\mu(\{1\})=\frac{1}{10000}$$
$$\mu(\{0,1\})=1$$
which has nothing to do with the portion of the set of outcomes, which would be represented by the function $\mu'(S)=\frac{|S|}2$.</p>
<p>In general, your discussion of cardinality is correct, but it is irrelevant. Moreover, the conclusions you draw are inconsistent. The sets $(0,1)$ and $(0,\frac{1}2]$ and $(\frac{1}2,1)$ are pairwise equally large, so your reasoning says they are equally probable. However, the number was defined to be in $(0,1)$ so we're saying all the probabilities are $1$ - so we're saying that we're certain that the result will be in two disjoint intervals. This <em>never</em> happens, yet your method predicts that it always happens.</p>
<p>On a somewhat different note, but related in the big picture, you talk about "uncountably infinite sets" having the property that any non-trivial interval is also uncountable. This is true of $\mathbb R$, but not of all uncountable sets: for example, $(-\infty,-1]\cup \{0\} \cup [1,\infty)$ is uncountable, yet its intersection with the interval $(-1,1)$ is just $\{0\}$, which is not uncountably infinite. Worse, not all uncountable sets have an intrinsic notion of ordering: how, for instance, do you order the set of subsets of natural numbers? The problem is not that there's no answer, but that there are many conflicting answers to that.</p>
<p>I think, maybe, the big thing to think about here is that sets really don't have a lot of structure. Mathematicians add more structure to sets, like probability measures $\mu$ or orders, and these fundamentally change their nature. Though bare sets have counterintuitive results with sets containing equally large copies of themselves, these don't necessarily translate when more structure is added.</p>
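<p>The gap between cardinality and measure is easy to see numerically. A sketch (treating Python's <code>random.random</code> as the uniform measure on $(0,1)$; the variable names are mine): under the uniform measure roughly 97% of draws exceed $0.03$, while a different measure on the <em>same</em> outcome set gives a different answer, even though every set involved has the same cardinality.</p>

```python
import random

rng = random.Random(1)
N = 100_000

# Uniform measure on (0, 1): P(Y > 0.03) = 0.97.
uniform = [rng.random() for _ in range(N)]
frac_uniform = sum(y > 0.03 for y in uniform) / N

# A different measure on the same set of outcomes: squaring a uniform
# draw piles mass near 0, so P(Y > 0.03) = 1 - sqrt(0.03), about 0.827.
skewed = [rng.random() ** 2 for _ in range(N)]
frac_skewed = sum(y > 0.03 for y in skewed) / N

print(frac_uniform)  # close to 0.97
print(frac_skewed)   # close to 0.83
```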
| <p>The OP's answer is incorrect. The numbers are not chosen based on <a href="https://en.wikipedia.org/wiki/Cardinality" rel="nofollow noreferrer">cardinality</a>, but based on <a href="https://en.wikipedia.org/wiki/Measure_(mathematics)" rel="nofollow noreferrer">measure</a>. It is not possible to define a <a href="https://en.wikipedia.org/wiki/Probability_distribution" rel="nofollow noreferrer">probability distribution</a> using cardinality (on an infinite set). However it is possible using measure. </p>
<p>Although the problem doesn't specify, if we assume the <a href="https://en.wikipedia.org/wiki/Uniform_distribution_(continuous)" rel="nofollow noreferrer">uniform distribution</a> on $[0,1]$, then if $x=0.03$ then $y$ will be greater than $x$ 97% of the time. Of course, if a different probability distribution is used to select $x,y$, then a different answer will arise. It turns out that it is possible to win more than half the time even NOT KNOWING the distribution used, see this amazing result <a href="https://math.stackexchange.com/questions/655972/help-rules-of-a-game-whose-details-i-dont-remember/656426#656426">here</a>.</p>
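<p>The "win more than half the time" idea can be sketched in a few lines (my own illustrative implementation, not taken from the linked answer): look at one sheet, draw a private random threshold $T$, and declare the number you see "biggest" exactly when it exceeds $T$. Whenever $T$ lands between the two hidden numbers the guess is forced to be correct, which pushes the win rate strictly above $1/2$ for every fixed pair.</p>

```python
import random

rng = random.Random(42)

def play(a, b):
    """One round against hidden numbers a != b in (0, 1)."""
    x = rng.choice((a, b))   # the sheet we happen to look at
    t = rng.random()         # private uniform threshold
    say_biggest = x > t
    return say_biggest == (x == max(a, b))

rounds = 200_000
wins = sum(play(rng.random(), rng.random()) for _ in range(rounds))
win_rate = wins / rounds
print(win_rate)  # noticeably above 1/2
```

<p>For a fixed pair $a&lt;b$ the win probability is $\tfrac12+\tfrac{b-a}2$; averaging over uniformly random pairs, as this simulation does, gives about $2/3$.</p>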
|
probability | <p>In one of his interviews, <a href="https://www.youtube.com/shorts/-qvC0ISkp1k" rel="noreferrer">Clip Link</a>, Neil DeGrasse Tyson discusses a coin toss experiment. It goes something like this:</p>
<ol>
<li>Line up 1000 people, each given a coin, to be flipped simultaneously</li>
<li>Ask each one to flip their coin; if it lands <strong>heads</strong>, the person continues</li>
<li>If the person gets <strong>tails</strong> they are out</li>
<li>The game continues until <strong>1*</strong> person remains</li>
</ol>
<p>He says the "winner" should not feel too surprised or lucky because there would be another winner if we re-run the experiment! This leads him to talk about our place in the Universe.</p>
<p>I realised, however, that there need <strong>not be a winner at all</strong>, and that the winner should feel lucky and be surprised! (Because the last, say, three people can all flip tails)</p>
<p>Then, I ran an experiment by writing a program with the following parameters:</p>
<ol>
<li><strong>Bias of the coin</strong>: 0.0001 - 0.8999 (8999 values)</li>
<li><strong>Number of people</strong>: 10000</li>
<li><strong>Number of times experiment run per Bias</strong>: 1000</li>
</ol>
<p>I plotted the <strong>Probability of 1 Winner</strong> vs <strong>Bias</strong></p>
<p><a href="https://i.sstatic.net/iVUuXvZj.png" rel="noreferrer"><img src="https://i.sstatic.net/iVUuXvZj.png" alt="enter image description here" /></a></p>
<p>The plot was interesting with <strong>zig-zag</strong> for low bias (for heads) and a smooth one after <strong>p = 0.2</strong>. (Also, there is a 73% chance of a single winner for a fair coin).</p>
<p><strong>Is there an analytic expression for the function <span class="math-container">$$f(p) = (\textrm{probability of $1$ winner with a coin of bias $p$}) \textbf{?}$$</span></strong></p>
<p>I tried doing something and got here:
<span class="math-container">$$
f(p)=P\left(\sum_{i=0}^{\text{end}} X_i=N-1\right)
$$</span>
where <span class="math-container">$X_i=\operatorname{Binomial}\left(N-\sum_{j=0}^{i-1} X_j, p\right)$</span> and <span class="math-container">$X_0=\operatorname{Binomial}(N, p)$</span></p>
| <p>It is known (to a nonempty set of humans) that when <span class="math-container">$p=\frac12$</span>, there is no limiting probability. Presumably the analysis can be (might have been) extended to other values of <span class="math-container">$p$</span>.
Even more surprisingly, the reason I know this is because it ends up having an application in number theory! In any case, a reference for this limit's nonexistence is <a href="https://www.kurims.kyoto-u.ac.jp/%7Ekyodo/kokyuroku/contents/pdf/1274-9.pdf" rel="noreferrer">Primitive roots: a survey</a> by Li and Pomerance (see the section "The source of the oscillation" starting on page 79). As the number of coins increases to infinity, the probability of winning (when <span class="math-container">$p=\frac12$</span>) oscillates between about <span class="math-container">$0.72134039$</span> and about <span class="math-container">$0.72135465$</span>, a difference of about <span class="math-container">$1.4\times10^{-5}$</span>.</p>
| <p>There is a pretty simple formula for the probability of a unique winner, although it involves an infinite sum. To derive the formula, suppose that there are <span class="math-container">$n$</span> people, and that you continue tossing until everyone is out, since they all got tails. Then you want the probability that at the last time before everyone was out, there was just one person left. If <span class="math-container">$p$</span> is the probability that the coin-flip is heads, to find the probability that this happens after <span class="math-container">$k+1$</span> steps with just person <span class="math-container">$i$</span> left, you can multiply the probability that person <span class="math-container">$i$</span> survives for <span class="math-container">$k$</span> steps, which is <span class="math-container">$p^k$</span>, by the probability that he is out on the <span class="math-container">$k+1^{\rm st}$</span> step, which is <span class="math-container">$1-p$</span>. You also have to multiply this by the probability that none of the other <span class="math-container">$n-1$</span> people survive for <span class="math-container">$k$</span> steps, which is <span class="math-container">$1-p^k$</span> for each of them. Multiplying these probabilities together gives
<span class="math-container">$$
p^k (1-p) (1-p^k)^{n-1}
$$</span>
and summing over the <span class="math-container">$n$</span> possible choices for <span class="math-container">$i$</span> and all <span class="math-container">$k$</span> gives
<span class="math-container">$$
f(p)= (1-p)\sum_{k\ge 1} n p^k (1 - p^k)^{n-1}.\qquad\qquad(*)
$$</span>
(I am assuming that <span class="math-container">$n>1$</span>, so <span class="math-container">$k=0$</span> is impossible.)</p>
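<p>The sum (*) is easy to evaluate numerically, and a Monte Carlo run of the game agrees with it. A sketch (the truncation point and simulation sizes are my own choices):</p>

```python
import random

def f_exact(p, n, kmax=5_000):
    """Probability of a unique winner via the sum (*), truncated at kmax."""
    return (1 - p) * sum(n * p**k * (1 - p**k) ** (n - 1)
                         for k in range(1, kmax))

def f_mc(p, n, trials=4_000, seed=0):
    """Simulate the game: heads survive, tails are out."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        alive = n
        while alive > 1:
            alive_next = sum(rng.random() < p for _ in range(alive))
            if alive_next == 0:   # everyone remaining flipped tails at once
                break
            alive = alive_next
        wins += alive == 1        # exactly one person outlasted the rest
    return wins / trials

print(f_exact(0.5, 100))   # about 0.721
print(f_mc(0.5, 100))      # close to the exact value
```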
<p>Now, the summand in (*) can be approximated by <span class="math-container">$n p^k \exp(-n p^k)$</span>, so if <span class="math-container">$n=p^{-L-\epsilon}$</span>, <span class="math-container">$L\ge 0$</span> large and integral, <span class="math-container">$0\le \epsilon \le1$</span>, <span class="math-container">$f(p)$</span> will be about
<span class="math-container">$$
(1-p) \sum_{j\ge 1-L} p^{j-\epsilon} \exp(-p^{j-\epsilon})
$$</span>
and we can further approximate this by summing over all integers: if <span class="math-container">$L$</span> becomes large and <span class="math-container">$\epsilon$</span> approaches some <span class="math-container">$0\le \delta \le 1$</span>, <span class="math-container">$f(p)$</span> will approach
<span class="math-container">$$
g(\delta):=(1-p)\sum_{j\in\Bbb Z} p^{j-\delta} \exp(-p^{j-\delta}).
$$</span>
The average of this over <span class="math-container">$\delta$</span> has the simple formula
<span class="math-container">$$
\int_0^1 g(\delta) d\delta = (1-p)\int_{\Bbb R} p^x \exp(-p^x) dx = -\frac{1-p}{\log p},
$$</span>
which is <span class="math-container">$1/(2 \log 2)\approx 0.72134752$</span> if <span class="math-container">$p=\frac 1 2$</span>, but as others have pointed out, <span class="math-container">$g(\delta)$</span> oscillates, so the large-<span class="math-container">$n$</span> limit for <span class="math-container">$f(p)$</span> will not exist. You can expand <span class="math-container">$g$</span> in Fourier series to get
<span class="math-container">$$
g(\delta)=-\frac{1-p}{\log p}\left(1+2\sum_{n\ge 1} \Re\left(e^{2\pi i n \delta} \,\,\Gamma(1 + \frac{2\pi i n}{\log p})\right)
\right).
$$</span>
Since <span class="math-container">$\Gamma(1+ri)$</span> falls off exponentially as <span class="math-container">$|r|$</span> becomes large, the peak-to-peak amplitude of the largest oscillation will be
<span class="math-container">$$
h(p):=-\frac{4(1-p)}{\log p} \left|\Gamma(1+\frac{2\pi i}{\log p})\right|,
$$</span>
which, as has already been pointed out, is <span class="math-container">$\approx 1.426\cdot 10^{-5}$</span> for <span class="math-container">$p=1/2$</span>. For some smaller <span class="math-container">$p$</span> it will be larger, although for very small <span class="math-container">$p$</span>, it will decrease as the value of the gamma function approaches 1 and <span class="math-container">$|\log p|$</span> increases. This doesn't mean that the overall oscillation disappears, though, since other terms in the Fourier series will become significant. To illustrate this, here are some graphs of <span class="math-container">$g(\delta)$</span>. From top to bottom, the <span class="math-container">$p$</span> values are <span class="math-container">$0.9$</span>, <span class="math-container">$0.5$</span>, <span class="math-container">$0.2$</span>, <span class="math-container">$0.1$</span>, <span class="math-container">$10^{-3}$</span>, <span class="math-container">$10^{-6}$</span>, <span class="math-container">$10^{-12}$</span>, and <span class="math-container">$10^{-24}$</span>. As <span class="math-container">$p$</span> becomes small,
<span class="math-container">$$
g(0)=(1-p)\sum_{j\in\Bbb Z} p^{j} \exp(-p^{j}) \ \ \qquad {\rm approaches} \ \ \qquad p^0 \exp(-p^0) = \frac 1 e.
$$</span></p>
<p><a href="https://i.sstatic.net/KX5RNjGy.png" rel="noreferrer"><img src="https://i.sstatic.net/KX5RNjGy.png" alt="Graphs of g(delta)" /></a></p>
<p><strong>References</strong></p>
<ul>
<li><a href="https://research.tue.nl/en/publications/on-the-number-of-maxima-in-a-discrete-sample" rel="noreferrer">"On the number of maxima in a discrete sample", J. J. A. M. Brands, F. W. Steutel, and R. J. G. Wilms, Memorandum COSOR 92-16, 1992, Technische Universiteit Eindhoven.</a></li>
<li><a href="https://doi.org/10.1214/aoap/1177005360" rel="noreferrer">"The Asymptotic Probability of a Tie for First Place", Bennett Eisenberg, Gilbert Stengle, and Gilbert Strang, <em>The Annals of Applied Probability</em> <strong>3</strong>, #3 (August 1993), pp. 731 - 745.</a></li>
<li><a href="https://www.jstor.org/stable/2325134" rel="noreferrer">Problem E3436, "Tossing Coins Until All Show Heads", solution, Lennart Råde, Peter Griffin, O. P. Lossers, <em>The American Mathematical Monthly</em>, <strong>101</strong>, #1 (January 1994), pp. 78-80.</a></li>
</ul>
|
logic | <p>I've heard of some other paradoxes involving sets (i.e., "the set of all sets that do not contain themselves") and I understand how paradoxes arise from them. But this one I do not understand.</p>
<p>Why is "the set of all sets" a paradox? It seems like it would be fine, to me. There is nothing paradoxical about a set containing itself.</p>
<p>Is it something that arises from the "rules of sets" that are involved in more rigorous set theory?</p>
| <p>Let <span class="math-container">$|S|$</span> be the cardinality of <span class="math-container">$S$</span>. We know that <span class="math-container">$|S| < |2^S|$</span>, which can be proven with <a href="https://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument#General_sets" rel="noreferrer">generalized Cantor's diagonal argument</a>.</p>
<hr />
<p><strong>Theorem</strong></p>
<blockquote>
<p>The set of all sets does not exist.</p>
</blockquote>
<p><strong>Proof</strong></p>
<blockquote>
<p>Let <span class="math-container">$S$</span> be the set of all sets, then <span class="math-container">$|S| < |2^S|$</span>. But <span class="math-container">$2^S$</span> is a subset of <span class="math-container">$S$</span>. Therefore <span class="math-container">$|2^S| \leq |S|$</span>. A contradiction. Therefore the set of all sets does <strong>not</strong> exist.</p>
</blockquote>
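<p>The diagonal argument hiding inside $|S| &lt; |2^S|$ can be machine-checked. A Lean 4 sketch (plain core, no Mathlib), with predicates <code>α → Prop</code> standing in for subsets of <code>α</code>: no map from <code>α</code> to its predicates hits every predicate, because the diagonal predicate <code>fun x =&gt; ¬ f x x</code> forces the contradiction <code>f a a ↔ ¬ f a a</code>.</p>

```lean
-- Cantor's diagonal argument: no f : α → (α → Prop) hits every
-- predicate (pointwise, up to ↔).
theorem cantor {α : Type} (f : α → (α → Prop)) :
    ¬ ∀ g : α → Prop, ∃ a, ∀ x, f a x ↔ g x := fun hsurj =>
  match hsurj (fun x => ¬ f x x) with
  | ⟨a, ha⟩ =>
    -- ha a : f a a ↔ ¬ f a a, the diagonal fixed point
    have hn : ¬ f a a := fun h => (ha a).mp h h
    hn ((ha a).mpr hn)
```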
| <p>Just by itself the notion of a universal set is not paradoxical.</p>
<p>It becomes paradoxical when you add the assumption that whenever $\varphi(x)$ is a formula, and $A$ is a preexisting set, then $\{x\in A\mid \varphi(x)\}$ is a set as well.</p>
<p>This is known as bounded comprehension, or separation. The full notion of comprehension was shown to be inconsistent by Russell's paradox. But this version is not so strikingly paradoxical. It is part of many of the modern axiomatizations of set theory, which have yet to be shown inconsistent.</p>
<p>We can show that assuming separation holds, the proof of the Russell paradox really translates to the following thing: If $A$ is a set, then there is a subset of $A$ which is not an element of $A$.</p>
<p>In the presence of a universal set this leads to an outright contradiction, because this subset should be an element of the set of all sets, but it cannot be.</p>
<p>But we may choose to restrict the formulas which can be used in this schema of axioms. Namely, we can say "not every formula should define a subset!", and that's okay. Quine defined a set theory called New Foundations, in which we limit these formulas in a way which allows a universal set to exist. Making it consistent to have the set of all sets, if we agree to restrict other parts of our set theory.</p>
<p>The problem is that the restrictions given by Quine are much harder to work with naively and intuitively. So we prefer to keep the full bounded comprehension schema, in which case the set of all set cannot exist for the reasons above.</p>
<p>While we are at it, perhaps it should be mentioned that the Cantor paradox, the fact that the power set of a universal set must be strictly larger, also fails in Quine's New Foundation for the same reasons. The proof of Cantor's theorem that the power set is strictly larger simply does not go through without using "forbidden" formulas in the process.</p>
<p>Not to mention that the Cantor paradox fails if we do not assume the power set axiom, namely it might be that not all sets have a power set. So if the universal set does not have a power set, there is no problem in terms of cardinality.</p>
<p>But again, we are taught from the start that these properties should hold for sets, and therefore they seem very natural to us. So the notion of a universal set is paradoxical for us, for that very reason. We are educated with a bias against universal sets. If you were taught that not all sets should have a power set, or that not all sub-collections of a set which are defined by a formula are sets themselves, then neither solution would be problematic. And maybe even you'd find it strange to think of a set theory without a universal set!</p>
|
geometry | <p>What is the smallest number of $45^\circ-60^\circ-75^\circ$ triangles that a square can be divided into? </p>
<p>The image below is a flawed example, from <a href="http://www.mathpuzzle.com/flawed456075.gif">http://www.mathpuzzle.com/flawed456075.gif</a></p>
<p><img src="https://i.sstatic.net/IwZt8.gif" alt="math puzzle 45-60-75 triangles"></p>
<p>Laczkovich gave a solution with many hundreds of triangles, but this was just a demonstration of existence, and not a <em>minimal</em> solution. ( Laczkovich, M. "Tilings of Polygons with Similar Triangles." Combinatorica 10, 281-306, 1990. )</p>
<p>I've offered a prize for this problem: In US dollars, (\$200-number of triangles). </p>
<p>NEW: The prize is won, with a 50 triangle solution by Lew Baxter.</p>
| <p>I found a minor improvement to Lew Baxter's solution. There are only 46 triangles needed to tile a square:</p>
<h3>This is my design</h3>
<p><a href="https://i.sstatic.net/7kZjY.gif" rel="noreferrer"><img src="https://raw.githubusercontent.com/gist/PM2Ring/734ea20456aaafc492bb3f54de250f21/raw/420f1ec94067530ac11bf14fb991ee99ec30ca3f/SquareTriTessBW.svg" alt="Square tessellation by 46 × 45-60-75 degree triangles" /></a></p>
<p>(Click this SVG for Franz's original GIF version).</p>
<p>Actually, I tried to find an optimal solution with a minimum number of tiles by creating a database with about 26,000 unique rhomboids & trapezoids consisting of 2-15 triangles. I searched through various promising setups (where the variable width/height-ratio of one element defines a second, and you just have to check whether it's in the database, too) but nothing showed up. So this 46-tiles solution was in some sense just a by-product. As there probably exist some more complex combinations of triangles which I was not able to include, an even smaller solution could be possible.</p>
<p>With b = <span class="math-container">$\sqrt3$</span> the points have the coordinates:</p>
<pre><code>{
{4686, 0},
{4686, 6 (582 - 35 b)},
{4686, 4089 - 105 b},
{4686, 4686},
{4194 + 94 b, 3000 - 116 b},
{141 (28 + b), 3351 + 36 b},
{4194 + 94 b, -11 (-327 + b)},
{141 (28 + b), 141 (28 + b)},
{3456 + 235 b, 2262 + 25 b},
{3456 + 235 b, 2859 + 130 b},
{3456 + 235 b, 3456 + 235 b},
{3426 - 45 b, 45 (28 + b)},
{3426 - 45 b, 3 (582 - 35 b)},
{3426 - 45 b, 3 (744 - 85 b)},
{3258 - 51 b, 51 (28 + b)},
{2472 + 423 b, 213 (6 + b)},
{-213 (-16 + b), 213 (6 + b)},
{2754 - 69 b, 2754 - 69 b},
{-639 (-5 + b), 0},
{213 (6 + b), 213 (6 + b)},
{0, 0},
{4686, 15 (87 + 31 b)},
{3930 - 27 b, 2736 - 237 b},
{3930 - 27 b, 213 (6 + b)},
{0, 4686},
{6 (582 - 35 b), 4686},
{4089 - 105 b, 4686},
{3000 - 116 b, 4194 + 94 b},
{3351 + 36 b, 141 (28 + b)},
{-11 (-327 + b), 4194 + 94 b},
{2262 + 25 b, 3456 + 235 b},
{2859 + 130 b, 3456 + 235 b},
{45 (28 + b), 3426 - 45 b},
{3 (582 - 35 b), 3426 - 45 b},
{3 (744 - 85 b), 3426 - 45 b},
{51 (28 + b), 3258 - 51 b},
{213 (6 + b), 2472 + 423 b},
{213 (6 + b), -213 (-16 + b)},
{0, -639 (-5 + b)},
{15 (87 + 31 b), 4686},
{2736 - 237 b, 3930 - 27 b},
{213 (6 + b), 3930 - 27 b}
}
</code></pre>
<p>which form the 46 triangles, specified by point numbers:</p>
<pre><code>{
{6, 5, 2}, {3, 2, 6}, {8, 7, 3}, {4, 3, 8},
{9, 10, 5}, {5, 6, 10}, {10, 11, 7}, {7, 8, 11},
{12, 15, 13}, {13, 15, 16}, {14, 13, 16}, {17, 15, 16},
{1, 19, 17}, {19, 17, 20}, {21, 20, 19}, {11, 18, 9},
{18, 9, 16}, {20, 16, 18}, {1, 22, 12}, {2, 23, 22},
{22, 24, 23}, {23, 14, 24}, {24, 12, 14}, {4, 27, 8},
{8, 30, 27}, {30, 8, 11}, {32, 11, 30}, {11, 18, 31},
{27, 26, 29}, {28, 29, 32}, {29, 28, 26}, {31, 32, 28},
{26, 41, 40}, {40, 42, 41}, {18, 31, 37}, {20, 37, 18},
{41, 35, 42}, {35, 34, 37}, {38, 36, 37}, {34, 36, 37},
{33, 36, 34}, {42, 33, 35}, {25, 40, 33}, {25, 39, 38},
{39, 38, 20}, {21, 20, 39}
}
</code></pre>
<hr />
<p>Here's a more colourful version, by PM 2Ring.</p>
<p><a href="https://i.sstatic.net/fFDNH.png" rel="noreferrer"><img src="https://raw.githubusercontent.com/gist/PM2Ring/1099ccb08f160f3c74857695bb62a315/raw/6dfd211a789eeb61fd71f4a030caa71ce0daa454/SquareTriTessColour.svg" alt="Square tessellation by 46 × 45-60-75 degree triangles, in colour" /></a></p>
<p>Here's a <a href="https://sagecell.sagemath.org/?z=eJztWnlz28iV_x-fok1tQkCmKAIgKZIj2nEycSqV7GyqPEnspbgKCDRJWDgYABSlUbSffX_vNY4GqXEmVa7drUpYM2If7776ddNhvEuzQmTSWGdpLMJCZkWaRrkI1cYuS4O9X6jdzEsCAlJb-Xa_XkfSMHZpmBS3gVd4Yi46nc6N8TQcT8Y9MXjuVcOxMEcTR1wIdyRWVrM-HEymWLUHWNZW8Zdn9nQoXgv8WfWEOxgMCNQeK1B7aAvTmWB_ZWHXHdkYuuVmC_PCBuSF61wx7CmuPqNddzgaY-KQrD3hOGOHZqWER5uT0RQz2x28tKvP1K4zhgpD3sTfNlNtzz0x1_H21XCI6aTedkYTzGEDbI_a6jjDK1Jg6LgssQ3scb15wfMLe1za4mjbuRoRn_GUUZsJo47dKVBHJSZ7W0M_oTVoRYQN9SfkEdeudJi65GC4iXm5pK_jXpWma22eEK4iph1nWiRpcdas6iGF1SZmeLOJqNMQaYfUCa4WM6dRoMfM6a4WFrRZe51FOtLudFsLi5PtUStfmoA59ZsWMCeb7YApzd8KBU6wlncbi-tuhRCNU0_46HsGqophnKEaodKYnd5N0ul_RtUxM9lfh0ngRZGZdW6e-udvb547PdFUJMuyCPH7rYQIoshCL9lEMherRwUkkn28kllffJcWUhRbrwB0AWh9Nxf7XAr7YuXlMhDgJx_CZPPKaMjNxQKxh8yD8Z574snFN4oeDSc9cQVtaDjEd09MSNUpQgp2G9EykCgfBjSmRdsGCk2AN6EpG9ShlMH_TMl2yxmzsIe0Xs-umj3gYUDMmKAaQTbm5dg0on3eI0iwmzIWDSqCDEMSThiOKiImrCcGDunqqKCn0KEV3iKBaDbkGYlIaMPSEM5VaYkJVXaas90GtcqYOMoW7kCXz7VVHAEFMjkkO8hPaEhRzcymVJgJgMkQCRKN2RHS0KaDhwWh0uHQCrNg8vj_qlLbvVJq04GC2YigmeaIsquCdAlvXM-GzYyqSDlTijt0UhEl5kD0iItbzVzSYaLqHY2OXeWSd5YqosNcBDIPN4nAKH-MY4l49L1IeH6W5jlH8eP8QQSht0kTL-qXeRClB5lpqbD17iUFdehjEiZ0ym-k6bgWIbxPMxG2VnsN6mK_28nsNgweFuFyCTHK3InDLCO82NtIka41hHBp1DiUNOz5SRU2yvrsPXJtZZ1pzyA_kDWGgBkO6exUEcS2HikfupVt2cONkaIwpwbHT7MgF4dt6G815b1MCi_47PkS2V6kQnrYTqFExuY6pCKR4Wa7SveoAvE-p7aHcRIhA2iXy01MqNAySZOLH2SWikgmm2JL-L9dizSJHoUn7tFXyQcCSxNZ8wfHvb-VylcwSwgfwjfpGrhewmL0aO-R5Ey6hSZMX_yaRC32O5DJizRTVIBYuZJEamQ_hMVWbDEFfa5gILBpfCkGyn26LRor2VzZ0P2hVvQNjSrVPVPt2lOEhklVj77hhyueuz36GpUliVxr08IYy4bJlYrBRgw2VUXCcWiiSiGPxvUalYQrQuV6N-G1oVoT5lBRYXBe4FrHJNwRM6QcUpQw4i9bfTn8xcNhr5QIkckjJStXHC4zTI_KwZBHihBFsqtYKVocjmPCdJUNOC6dUpiyQvCM4ppJDZUwlPpDmzFp5PKW4sKVSy0oLkNFe6iELg1glTXiD16Gc-vnQt570d7j801qZ6OQDzuETR6mSd4n-A8ShWAtDlLVBE_sKOyAlIc_4OBrwHsAVpnEKbECVSSYDAC0omiP91ER7qJQZRGyIE97QC8yzy9UhagBOKp2LGgVvglWG14qchnidh15RV_8B-XFIcwliVGjQqCUUwE9wuYYyUDQ5gXEm6MS9P003oWRRMtgXry9CV5bN_n5jWku_staYmx1LCOQa4Vt5tbMEPhUzce6c5M85a-yZ0DRegySin
Y_9gp_CwRehx1jhaljdxToDD1K3N9k6X6Xm5ZVg8FPIEeA8cJeWuJcUwFLzlKByiiXsyMkDbCUIJPFPktou4kFTxCE7snSj6iTMiS7UmWDAGTalCeaJ6iUw9DrNIvFX-5tke7E4t5ZLlbLv6iIkKiMWP-5uHe4soKQ3KB_6hEoWLwmmhdcSQC_oiUP9Rk1Dzp0V13ITMwQc-TD_K9ZYeIYImqx94gwI6HisCBHwr6MX8L36IxSqvjo0ghlLmzDWOHLFefnYtAfGax7KwKMKgT-zfpbHQvm4vXFEt9vTZq_pcHKeqtHRWXnWR0CHJnHAYAKrgdB5iFmxZ-Qi_LXdEAiltYepAhmQo-oOlaI5kuRQqGF8CDticN3OFE0JsrrdQzxBnxShVUVQ8pAitaQaREdDq0ScrjU2LlLMcfl_qKjxR1RuLh3GqCRAlq1gc7nYtUKSLpVAY3NWRaFW65Kuak17lXaUcvSnER8mKMbz0_acYpORYUR1ZAOKNJryWsIXJGlBwqVhlE_Rx0qohCZaVqN4LEKlFx6GXxKlwvUh97y3OqhWGD4jCF8A3JN-p66_O-5HeiN4-nz0BOP4Ny4vN5RCvU9tE9JYJqqOj2g3KvRo2W10r40hXF28VU-VELe7xO_oMOCDfm7ZF9su7l4F23SDKUjFh978Ax5Av1HkXMpR7HzAvLM8OKADIaV7-AycmIuTO_OE996iU8ov8cOEudM_Dmj_E7oHHkXheJdnnvUDb2n96dtUexml5eHw6Hv5_3Y34RR1Pe9y__2PAKbXu4ei22aXHqVSLcP_W0RR9xNBT8ZPd8H6d2-XzwUKkbzNLqXJrT71KPxnmxQRkrp8Y-Nxx9DGSkNzRr2haLtk5NxgQXVO_k4j7x4FXjCn1HzaH5c-EvtVOCwpaBlqrzZjrCKURUcmdXa9uk9b45WNZJ-UepxBEIsuPn_gq76R2mZt9ZxGWlx6DFj62VJd-kOsa3M25ZLMStlXjSJ-5nE-7TIlrOWYfhy8nHxeTk7UehOYYTLU_nhtzvxai4-n27R5-PibtnPZJzCFmGjAQlV2fgjq_C5nXQEoJT6EWPM2upkEvcCFDGT9NLMTPpAe-anLPW_o7IXBKSvcfbbhC4sIt-vclmQHKQL_FbWbKrYPoyTsXql4Ik8fITQT59ncGlhWo2aH59rtcNeVYE_9XHux62a29glPbSlJNpQuBJQMzntfNUq90cUa62m0UUI-uO2g1Zlk3m7rdiE9yhQYUHHb0B3SNQWugzmbB0FfBt7O5O3e2qvJxK1k8-HTYyXlyh1ua4A4ADaPqOTj3H5XUrEEq7g-ynRVSbH4FaBwPCJbnhqAZR8jfHpZFcUG-smuLck1B7IZqmmukgGyubSenHXrndrHg3f2YsoFNdcx1qLlcofZKEMjtjSbCSDWkvGLX-VMEuDlSYuc5GCUDnm9RF6xYSjOq8ZAYOuC7z5iQypLHb2B9zHveyRojH_sn6yZFnr0wppn1BKUdth_WlhogP3LTIKmea1WJg-tLEad1HvosT5dh_HR8Igm3zl1GQfo4kupNnWWEsuqgrExq8Ikp1OUlnzwzrM0GsV9ARC6qquysuKkO8bCxXdi8GyJwZ06VRTG1Pbaqo2HyolViNLqzbmWvScnEELvSzm6p7DEsBUe0bZM0p1Eoc5NR1oKqS5pztBYTW-yPt5ihtFE8rVKfb1ygcVgNi7k7f5_UY1s7n2XgZvoSxQxxZJNDmlZpsoXUEhYNzmcJ2vEjHdF_Py9ONrx765b5Tb1UlU1wt6rMMVhO_tIb0iIejo0asmso6ROhqRstlUci52jaXK2rruPD3M-lcbeoF85EFHsSJp-Ie-67cPcUSPW3Q_nHfs_qCDWPTTAD3dvPPH799fTDqC_BF4EdrweSdJO2_fGNdQVgAzyecdrSs7uP0021w6g8HgEhAdQ9yH8vDL9GHeuRgJ_DccT8f8p_MG7K2WMNd58Ugvao87sClwr7j087zzBiDXrx
a_-vbd9-8WDU6ZPHk7d9p-qWivO0L0ff8pfBZPT2v0jLOn_Pn5-Yj9DSbLJVS7ZDkwwAWWRvMO44iz9Xr9jaDxRbrz_LB4nOFO-o1BLxbpnZydQetvysnFIQyK7WxUz-l6Qj94zFboGCIYBremJKeLOHSlYUTyl7-DWSL3oQm9y13Ylm4qUtvcAWyHvR29RXHD8EO4M4-C9NgG3eudV2xFMO_8u3jiOBpYz-L35djWxg7G_9kR3abQrbt-hA573oER7547l2-6mum615ebNzfJNTmcNjhR61RAdHZvkq76sQcxX-5v04NJjb3ZQNLvPPWd78dvlQa_ft9q2qr3aP2nHDLH-WKhv1o3fVeBLKFJ0Q6d-lXbwllGZQYVWKd5xBbl64ivYXhRVJ9yCxPRubeqOD1i1jzAWk0JhGCKyT9KYzFz3KVOiGrp_hqLhrHeQ6gyLfixV3TP5NjBp9ujoTcaOa4aysnUGfJw7azW5VBK36-GXjAlWEWCiPDydIVlm4ejie_bax7awWptB_Vw5NbDyVSRoPFKATuuNwoGajidjKSi5g7G9lrxcEej8fpKDVdD4CkSw8FwsFYkrgauK71SIOcqGPGQZCh5kGRTRY3kHckuv7FSVd3Igru9MmVu0eF6B1qpQilBOJqY325k0r4vlICGQbv0yAqUgZFGwW35ew6VZ3q_MDQfNL9s5Kop79Ir2irkHyE8sfX8O_E5XfXFhzTLHvuG8Qt6hqN8QJQiCeSc7ud9-hcduHrmptmFXN1eN8_8LirConvepdunOsqQdtRRoGsdwGaVVPMPfICnmbnoekkRqh98yDrNaJ_QKSSDLjqCvAS_5erczbwgTMHLoN884j0keu_hdtwTZIYYfYQ-zwu5m9vEPPPnv9wXRZqYVN7n3Q9_-o1QyQ9-9MQt599ne6BF3kpG8y5zgG5trN_IhDOgq8EZ7aO48kZP6K5Q4oBe7eD6JO9VimRG9SwAabXGpS5Y6263i6PimmTxMumJECV1tV_jkKHb3nw44A5vbo_EAZeNeSfF1punps49X19WuG-uV9klDjiQ1F4LVGgZdffMF4zYezDj2pc9fmXQc9uif79RP_dVaAhG7UHsC20NusDz4RjfHTrlOsuXpSGr1GE91yKkYQKakLYpitU6UEnkSlq6xyohTy-zWqboGi7oURMHLipvKM5PDSAuS4qWVuzVvaxaP2VVOr26lrTAW8-CVWC3xcVGK9df6Uahp_K6LHxB3zPxnTwQ5MmOVlQU8osQCOgSoLy1aqqQlWpnWBQk9SGl3WcrPtYJ_aYQ1kWyBdN-E6s-R-avisALz1B_j8cp_VahfZnMIhQ_q1LgSJQje7Qj4qeLXf7LObPkWb0PHudHWUuPs-O4u_gnSxK-5SXC1I4a7Qiy_r-kzVHDx8mj9Wf_tOlzYpevnES6xPTsX0XGT02ioDkz6_MSqeC8dGbqD8MKp8K-vBTOl_Iy-PqJGfxYZgb_Ss1_peb_bWqqZrQUooq6xX7ZXDpL0OWLjeLJQViS4q_zMtNOz08t9l5GpefWfflKTHfpn9X5W8vls1CGMvBpA3z8oleJX-qo3wCUSSrlFeTRTa8a_g8KRB8y&lang=sage" rel="noreferrer">live version</a> of the Python / Sage code I used to create the SVGs and PNG. It has various modes & options you can play with.</p>
| <p>I improved on Laczkovich's solution by using a different orientation of the 4 small central triangles, by choosing better parameters (x, y) and using fewer triangles for a total of 64 triangles. The original Laczkovich solution uses about 7 trillion triangles.</p>
<p><img src="https://i.sstatic.net/vm9qq.png" alt="tiling with 64 triangles"></p>
<p>Here's one with 50 triangles:</p>
<p><img src="https://i.sstatic.net/jIkRT.png" alt="enter image description here"></p>
|
linear-algebra | <p>I am helping designing a course module that teaches basic python programming to applied math undergraduates. As a result, I'm looking for examples of <em>mathematically interesting</em> computations involving matrices. </p>
<p>Preferably these examples would be easy to implement in a computer program. </p>
<p>For instance, suppose</p>
<p>$$\begin{eqnarray}
F_0&=&0\\
F_1&=&1\\
F_{n+1}&=&F_n+F_{n-1},
\end{eqnarray}$$
so that $F_n$ is the $n^{th}$ term in the Fibonacci sequence. If we set</p>
<p>$$A=\begin{pmatrix}
1 & 1 \\ 1 & 0
\end{pmatrix}$$</p>
<p>we see that</p>
<p>$$A^1=\begin{pmatrix}
1 & 1 \\ 1 & 0
\end{pmatrix} =
\begin{pmatrix}
F_2 & F_1 \\ F_1 & F_0
\end{pmatrix},$$</p>
<p>and it can be shown that</p>
<p>$$
A^n =
\begin{pmatrix}
F_{n+1} & F_{n} \\ F_{n} & F_{n-1}
\end{pmatrix}.$$</p>
<p>This example is "interesting" in that it provides a novel way to compute the Fibonacci sequence. It is also relatively easy to implement a simple program to verify the above.</p>
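<p>Since the goal is a Python course module, here is one way this verification might look in plain Python (the function names are illustrative, not prescribed above):</p>

```python
# Verify that powers of A = [[1, 1], [1, 0]] contain Fibonacci numbers:
# A^n = [[F(n+1), F(n)], [F(n), F(n-1)]].

def mat_mult(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0],
             X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0],
             X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def mat_power(A, n):
    """Compute A^n by repeated multiplication (n >= 0)."""
    result = [[1, 0], [0, 1]]  # 2x2 identity
    for _ in range(n):
        result = mat_mult(result, A)
    return result

def fib_matrix(n):
    """Return F(n), read off as the (0, 1) entry of A^n."""
    return mat_power([[1, 1], [1, 0]], n)[0][1]
```

<p>A natural follow-up exercise is to replace the loop in <code>mat_power</code> with repeated squaring, which computes $F_n$ using only $O(\log n)$ matrix multiplications.</p>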
<p>Other examples like this will be much appreciated. </p>
| <p>If $(a,b,c)$ is a <em>Pythagorean triple</em> (i.e. positive integers such that $a^2+b^2=c^2$), then
$$\underset{:=A}{\underbrace{\begin{pmatrix}
1 & -2 & 2\\
2 & -1 & 2\\
2 & -2 & 3
\end{pmatrix}}}\begin{pmatrix}
a\\
b\\
c
\end{pmatrix}$$
is also a Pythagorean triple. In addition, if the initial triple is <em>primitive</em> (i.e. $a$, $b$ and $c$ share no common divisor), then so is the result of the multiplication.</p>
<p>The same is true if we replace $A$ by one of the following matrices:</p>
<p>$$B:=\begin{pmatrix}
1 & 2 & 2\\
2 & 1 & 2\\
2 & 2 & 3
\end{pmatrix}
\quad \text{or}\quad
C:=\begin{pmatrix}
-1 & 2 & 2\\
-2 & 1 & 2\\
-2 & 2 & 3
\end{pmatrix}.
$$</p>
<p>Taking $x=(3,4,5)$ as initial triple, we can use the matrices $A$, $B$ and $C$ to construct a tree with all primitive Pythagorean triples (without repetition) as follows:</p>
<p>$$x\left\{\begin{matrix}
Ax\left\{\begin{matrix}
AAx\cdots\\
BAx\cdots\\
CAx\cdots
\end{matrix}\right.\\ \\
Bx\left\{\begin{matrix}
ABx\cdots\\
BBx\cdots\\
CBx\cdots
\end{matrix}\right.\\ \\
Cx\left\{\begin{matrix}
ACx\cdots\\
BCx\cdots\\
CCx\cdots
\end{matrix}\right.
\end{matrix}\right.$$</p>
<p><strong>Source:</strong> Wikipedia's page <a href="https://en.wikipedia.org/wiki/Tree_of_primitive_Pythagorean_triples" rel="noreferrer"><em>Tree of primitive Pythagorean triples</em>.</a></p>
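<p>A short plain-Python sketch of this construction (the helper names are mine, for illustration): starting from $(3,4,5)$ and repeatedly applying $A$, $B$ and $C$ enumerates the tree level by level.</p>

```python
# Walk the tree of primitive Pythagorean triples by multiplying
# the matrices A, B, C (as defined above) against triples.

A = [[1, -2, 2], [2, -1, 2], [2, -2, 3]]
B = [[1,  2, 2], [2,  1, 2], [2,  2, 3]]
C = [[-1, 2, 2], [-2, 1, 2], [-2, 2, 3]]

def apply(M, t):
    """Multiply a 3x3 matrix M by a triple t, returning a new triple."""
    return tuple(sum(M[i][j] * t[j] for j in range(3)) for i in range(3))

def triples(depth):
    """All triples in the tree down to the given depth, root first."""
    level = [(3, 4, 5)]
    result = list(level)
    for _ in range(depth):
        level = [apply(M, t) for t in level for M in (A, B, C)]
        result.extend(level)
    return result
```

<p>For example, the first level below the root is $(5,12,13)$, $(21,20,29)$ and $(15,8,17)$, and every triple produced satisfies $a^2+b^2=c^2$.</p>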
| <p><em>(Just my two cents.)</em> While this has not much to do with <em>numerical</em> computations, IMHO, a very important example is the modelling of complex numbers by $2\times2$ matrices, i.e. the identification of $\mathbb C$ with a sub-algebra of $M_2(\mathbb R)$.</p>
<p>Students who are first exposed to complex numbers often ask <em>"$-1$ has two square roots. Which one is $i$ and which one is $-i$?"</em> In some popular models of $-i$, such as $(0,-1)$ on the Argand plane or $-x+\langle x^2+1\rangle$ in $\mathbb R[x]/(x^2+1)$, a student may get a false impression that there is a natural way to identify one square root of $-1$ with $i$ and the other one with $-i$. In other words, they may wrongly believe that the choice should be somehow related to the ordering of real numbers. In the matrix model, however, it is clear that one can perfectly identify $\pmatrix{0&-1\\ 1&0}$ with $i$ or $-i$. The choices are completely symmetric and arbitrary. Neither one is more natural than the other.</p>
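<p>Here is a small plain-Python sketch of this identification (the helper names are illustrative): $a+bi$ corresponds to the matrix $\pmatrix{a&-b\\ b&a}$, and matrix multiplication then reproduces complex multiplication.</p>

```python
# Model a + b*i as the 2x2 real matrix [[a, -b], [b, a]] and check
# that matrix multiplication agrees with complex multiplication.

def to_matrix(z):
    """The 2x2 real matrix representing the complex number z."""
    a, b = z.real, z.imag
    return [[a, -b], [b, a]]

def mat_mult(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0],
             X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0],
             X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

# Both candidates for i: J and -J each square to minus the identity,
# and nothing in the arithmetic distinguishes one from the other.
J = [[0, -1], [1, 0]]
```

<p>Students can check that <code>mat_mult(J, J)</code> gives $-I$, and likewise for $-J$ — the symmetry between the two square roots of $-1$ is plain from the code.</p>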
|
linear-algebra | <p>I'm majoring in mathematics and currently enrolled in Linear Algebra. It's very different, but I like it (I think). My question is this: What doors does this course open? (I saw a post about Linear Algebra being the foundation for Applied Mathematics -- but I like doing math for the sake of math, not so much the applications.) Is this a stand-alone class, or will the new things I'm learning come into play later on? </p>
| <p>Linear Algebra is indeed one of the foundations of modern mathematics. There are a lot of things which use the language and tools developed in linear algebra:</p>
<ul>
<li><p>Multidimensional Calculus, i.e. Analysis for functions of many variables, i.e. vectors (for example, the first derivative becomes a matrix)</p></li>
<li><p>Differential Geometry, which investigates structures which look locally like a vector space and functions on this.</p></li>
<li><p>Functional Analysis, which is essentially linear algebra on infinite-dimensional vector spaces which is the foundation of quantum mechanics.</p></li>
<li><p>Multivariate Statistics, which investigates vectors whose entries are random. For instance, to describe the relation between two components of such a random vector, one can calculate the correlation matrix. Furthermore, one can apply a technique called singular value decomposition (which is close to calculating the eigenvalues of a matrix) to find which components have the main influence on the data.</p></li>
<li><p>Tagging on to the Multivariate Statistics and multidimensional calculus, there are a number of Machine Learning techniques which require you to find a (local) minimum of a nonlinear function (the likelihood function), for example for neural nets. Generally speaking, one can try to find the parameters which maximize the likelihood, e.g. by applying the gradient descent method, which uses vector arithmetic. (Thanks, frogeyedpeas!)</p></li>
<li><p>Control Theory and Dynamical Systems theory is mainly concerned with differential equations where matrices are factors in front of the functions. It helps tremendously to know the eigenvalues of the matrices involved to predict how the system will behave and also how to change the matrices in front to make sure the system behaves like you want it to - in Control Theory, this is related to the poles and zeros of the transfer function, but in essence it's all about placing eigenvalues at the right place. This is not only relevant for mechanical systems, but also for electric engineering. (Thanks, Look behind you!) </p></li>
<li><p>Optimization in general and Linear Programming in particular is closely related to multidimensional calculus, namely about finding minima (or maxima) of functions, but you can use the structure of vector spaces to simplify your problems. (Thanks, Look behind you!) </p></li>
</ul>
<p>On top of that, there are a lot of applications in engineering and physics which use tools of linear algebra and the fields listed above to solve real-world problems (often calculating eigenvalues and solving differential equations). </p>
<p>In essence, a lot of things in the mathematical toolbox in one variable can be lifted up to the multivariable case with the help of Linear Algebra.</p>
<p>Edit: This list is by no means complete, these were just the topics which came to my mind at first thought. Not mentioning a particular field doesn't mean that this field irrelevant, but just that I don't feel qualified to write a paragraph about it.</p>
| <p>By now, we can roughly say that all we <em>fully</em> understand in Mathematics is <strong>linear</strong>.</p>
<p>So I guess Linear Algebra is a good topic to master.</p>
<p>Both in Mathematics and in Physics one usually reduces the problem to a linear one and then solves it with Linear Algebra techniques. This happens in Algebra with linear representations, in Functional Analysis with the study of Hilbert Spaces, in Differential Geometry with the tangent spaces, and so on in almost every field of Mathematics. Indeed I think there should be at least 3 courses of Linear Algebra (undergraduate, graduate, advanced), each time with different insights on the subject.</p>
|
logic | <p>I'm trying to wrap my head around the relationship between truth in formal logic, as the value a formal expression can take on, as opposed to commonplace notions of truth.</p>
<p>Personal background: When I was taking a class in formal logic last semester, I found that the most efficient way to do my homework was to forget my typical notions of truth and falsehood, and simply treat truth and falsehood as formal, abstract values that formal expressions may be assigned. In other words, I treated everything formally, which from the name of the course is presumably what I was supposed to do while solving problems.</p>
<p>On the other hand, I came out of the course more confused than enlightened about what truth refers to in mathematics. For example, on what levels are each of the following two statements "true"?</p>
<blockquote>
<ol>
<li>There are infinitely many prime numbers.</li>
<li>The empty function $f\colon \emptyset \to \mathbb R $ is injective.</li>
</ol>
</blockquote>
<p>For the first one, I can obviously see that there would be a contradiction if there were only finitely many prime numbers. To me, the classic proof by contradiction is not a "formal proof" or anything; it is merely a natural language argument that proves (in the everyday sense of the word "proves") why the statement must be true (in the everyday sense of the word "true").</p>
<p>On the other hand, I run into trouble when I try to think about the second one. The very concept of the "empty function" doesn't even feel like it makes sense, but if I think about it as the relation between $\emptyset$ and $\mathbb R$ containing no elements, and then try to write out the statement formally, I get (if I did it correctly)</p>
<blockquote>
<p>$
\forall x \forall y ( ((x\in \emptyset) \land (y\in \emptyset) \land (f (x) = f (y))) \implies (x=y))
$</p>
</blockquote>
<p>which I think has to be true in a formal sense (since the antecedent is always false?). But to be honest, I don't really know how to think about "truth" here; the situation feels much more confusing than with the first statement.</p>
<p>So, in conclusion, my questions are:</p>
<blockquote>
<p>In what sense is each of the above statements "true"? And, more generally,</p>
<p>Is the notion of truth in (mathematical) logic <em>just</em> a formal value assigned to expressions? Or should I think of it as encompassing, but also generalizing, the intuitive notion of a true statement?</p>
</blockquote>
<p>(Any insightful comments or answers are appreciated, even if they don't address all of my questions directly.)</p>
| <p>I like to think that mathematical truth is a mathematical model of "real world truth", similar in my mind to the way in which the mathematical real number line $\mathbb{R}$ is a mathematical model of a "real world line", and similarly for many other mathematical models. </p>
<p>In order to achieve the level of rigor needed to do mathematics, sometimes the description of the mathematical model has formal details that perhaps do not reflect anything in particular that one sees in the real world. Oh well! That's just the way things go. </p>
<p>So yes, the empty function is injective. It's a formal consequence of how we axiomatize mathematical truth.</p>
<p>And, by the way, yes, there are infinitely many primes. The classical proof by contradiction that you feel is a natural language proof and not really a "formal proof" is actually not very hard to formalize at all. Part of the training of a mathematician is (a) to use our natural intuition, experience, or whatever, in order to come up with natural language proofs and then (b) to turn those natural language proofs into formal proofs.</p>
| <p>Your main issue here seems to be that you are wondering how all the following statements:</p>
<blockquote>
<p>If the Earth is flat, then the Earth exists.</p>
<p>If the Earth is flat, then the Earth does not exist.</p>
<p>If there is life on Europa, then the Earth exists.</p>
</blockquote>
<p>could possibly be meaningfully assigned the same truth value in the real world. These are called vacuous truths, the first two because the falsity of the condition means that the consequent is irrelevant, and the third because the truth of the consequent means that the condition is irrelevant. One could interpret logic as a game of some sort, where the prover tries to convince the refuter of his claim. If the prover makes a claim of the form:</p>
<blockquote>
<p>If A then B.</p>
</blockquote>
<p>then the refuter must try to refute it. How? She must convince the prover that A is true but yet B is false. Back to our vacuous examples, the refuter must convince the prover that the Earth is flat. Nah... That's not going to happen, which is why the refuter can't refute the prover. In the third case, the refuter must convince the prover that the Earth doesn't exist. Again, no way...</p>
<p>On the other hand, the prover can prove the first two claims by showing that the Earth is not flat! (Come, follow me around the globe in eighty hours.) After doing this he can convince the refuter that he can always keep his promise because it can't be broken; the Earth is not flat, so the condition of his promise will never come to pass. The consequent part of his promise is irrelevant. In the third case, the condition part is immaterial because the prover can convince the refuter that no matter whether she can show that there is life on Europa, he can convince her that the Earth exists.</p>
<p>This is exactly the same as when you talk about an empty function being injective:</p>
<blockquote>
<p>Any function with empty domain is injective.</p>
</blockquote>
<p>which expands to:
<span class="math-container">$\def\none{\varnothing}$</span></p>
<blockquote>
<p>Given any function <span class="math-container">$f$</span> such that <span class="math-container">$Dom(f) = \none$</span>, and any <span class="math-container">$a,b \in Dom(f)$</span>, if <span class="math-container">$f(a) = f(b)$</span> then <span class="math-container">$a = b$</span>.</p>
</blockquote>
<p>Well, what does the prover have to do to convince the refuter? He says, give me any function <span class="math-container">$f$</span> such that <span class="math-container">$Dom(f) = \none$</span>, and give me any <span class="math-container">$a,b \in Dom(f)$</span>! The refuter simply can't! There isn't any object in <span class="math-container">$Dom(f)$</span>!</p>
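<p>The same vacuous truth shows up in Python, where <code>all()</code> over an empty iterable returns <code>True</code> — with an empty domain, the refuter has no counterexample to offer. (A small illustration; the code is mine, not part of the argument above.)</p>

```python
# Injectivity checked by exhaustive search over a finite domain.
# all() over an empty generator is True, so a function with empty
# domain is injective "for free" -- a vacuous truth.

def is_injective(f, domain):
    """True if f(a) = f(b) implies a = b for all a, b in the domain."""
    return all(a == b
               for a in domain for b in domain
               if f(a) == f(b))

assert is_injective(lambda x: x, [])          # empty domain: vacuously True
assert is_injective(lambda x: 0, [7])         # singleton domain: injective
assert not is_injective(lambda x: 0, [1, 2])  # constant map on two points
```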
<p>But wait, you say, how about the also true statement:</p>
<blockquote>
<p>Any function with singleton domain is injective.</p>
</blockquote>
<p>which expands to:</p>
<blockquote>
<p>Given any function <span class="math-container">$f$</span> such that <span class="math-container">$Dom(f) = \{x\}$</span> for some <span class="math-container">$x$</span>, and any <span class="math-container">$a,b \in Dom(f)$</span>, if <span class="math-container">$f(a) = f(b)$</span> then <span class="math-container">$a = b$</span>.</p>
</blockquote>
<p>This time the refuter can continue the game. She gives the prover a function <span class="math-container">$f$</span> and provides an <span class="math-container">$x$</span> such that <span class="math-container">$Dom(f) = \{x\}$</span>, and also gives him <span class="math-container">$a,b \in Dom(f)$</span>. But then the prover now tells her: See? You assured me that every object in <span class="math-container">$Dom(f)$</span> is equal to <span class="math-container">$x$</span>, so you've to accept that <span class="math-container">$a = x$</span> and <span class="math-container">$b = x$</span>, and hence by meaning of equality <span class="math-container">$a = b$</span>. Now I can convince you that if <span class="math-container">$f(a) = f(b)$</span> then <span class="math-container">$a = b$</span>. (This is exactly the third kind of vacuous statement that we discussed at the beginning.) Indeed, haven't I already convinced you that <span class="math-container">$a = b$</span>, so you don't need to even bother to show me that <span class="math-container">$f(a) = f(b)$</span>?</p>
|