tag | question_body | accepted_answer | second_answer
---|---|---|---|
probability | <p>In <a href="https://www.nytimes.com/2017/05/15/us/math-counts-national-competition.html?_r=0" rel="noreferrer">this article</a>, the winner of the math competition answered this question correctly: </p>
<blockquote>
<p>In a barn, 100 chicks sit peacefully in a circle. Suddenly, each chick randomly pecks the chick immediately to its left or right. What is the expected number of unpecked chicks?</p>
</blockquote>
<p>The answer was given as 25. I am interested to know the correct method for solving this problem as well as a general way to find this out for N chicks. I was thinking of trying to solve by figuring out the number of occurrences of (something?) inside a binary string of length 100 (or N), but I don't know if that is the right way to approach it.</p>
| <p>For any individual chick, there is a $0.5$ chance that the one on its right won't peck it, and a $0.5$ chance that the one on its left won't peck it. So overall, it has a $0.25$ chance of not being pecked.</p>
<p>Since this is true for every chick, we can add up $0.25(100)=25$ to get the number of unpecked chicks.</p>
<p>"But wait," you might say, "the probabilities that chicks are unpecked aren't independent! If chick $n$ is unpecked, then chicks $n-2$ and $n+2$ will be pecked. Surely we have to take this into account!"</p>
<p>Fortunately, there's a very useful theorem called <strong>linearity of expectation</strong>. This tells us that for <strong>any</strong> random variables $X$ and $Y$, independent or not, $E[X+Y]=E[X]+E[Y]$, where $E$ is the expected value. What we're formally doing above is assigning a random variable $X_i$ to the $i$th chick that is equal to $0$ if the chick is pecked and $1$ if it's unpecked. Then the total number of unpecked chickens is $\sum X_i$, so the expected number of unpecked chickens is
$$E\left[\sum X_i\right]=\sum E[X_i]$$
by linearity of expectation. But $E[X_i]$ is just the probability that the $i$th chicken is unpecked, thus justifying our reasoning above.</p>
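<p>A quick Monte Carlo check of this expectation (an illustrative sketch in Python; the helper name and trial count are arbitrary choices):</p>
<pre><code>import random

def unpecked_count(n=100):
    """Simulate one round: each chick pecks a random neighbour."""
    pecked = [False] * n
    for i in range(n):
        target = (i + random.choice([-1, 1])) % n  # left or right neighbour
        pecked[target] = True
    return pecked.count(False)

trials = 20_000
print(sum(unpecked_count() for _ in range(trials)) / trials)  # close to 25
</code></pre>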
| <p>This is a simple problem. Look, ma, no equations!</p>
<p><strong>Consider YOU are the only chicken that matters</strong>, and construct a table to say whether YOU get pecked. Your chance of being pecked comes down to only 4 outcomes. (1) YES - pecked twice. (2) YES - pecked from left wing only. (3) YES - pecked from right wing only. (4) NO - unpecked.</p>
<p>The table has 4 outcomes, all of equal probability, 1 of which is unpecked. YOU are therefore pecked in a <strong>3:1 ratio</strong>, i.e. in <strong>3 of 4 outcomes</strong>, or <strong>75%</strong> of the time. For convenience, run this as 100 trials of YOU, and the answer is that 25 times YOU will NOT be pecked. The circular nature of the 100 chicks says that <strong>YOU are not unique</strong>, and your experience is the same as the others', so we extrapolate your experience over 100 trials to a single trial of 100 chicks just like YOU.</p>
<p><strong>25 unpecked chicks</strong>, 50 get pecked once, 25 get double pecks.</p>
<p>This is the same table constructed for 100 women having two children and asking how many have no girls.</p>
|
probability | <p>I've been thinking about the following game for a while and am curious if anyone has any ideas of how to analyze it.</p>
<p><strong>Problem description</strong></p>
<p>Say I have two biased coins: coin 1 that shows heads with probability <span class="math-container">$p$</span> and coin 2 that shows heads with probability <span class="math-container">$q$</span>. You and I both know the statistics of the coins.</p>
<p>The game proceeds in multiple rounds as follows:</p>
<ul>
<li><p>In the starting round <span class="math-container">$n=0$</span>: </p>
<ul>
<li>I (privately) pick a coin and flip it, we both observe the outcome</li>
<li>you decide to make a guess of which coin I just flipped, or continue watching</li>
<li>if you guess correctly, I pay you <span class="math-container">$\$100$</span>; if you guess incorrectly, you receive no reward and the game is over</li>
</ul></li>
<li><p>At each subsequent round <span class="math-container">$n\ge 1$</span>:</p>
<ul>
<li>I decide to stay with my current coin or reach into my pocket and swap out the current coin for the other coin</li>
<li>you can see whether I swapped out the coin or not (assume I must switch if I reach into my pocket)</li>
<li>I flip the coin and we both observe the outcome</li>
<li>you decide to guess which coin was just flipped, or keep watching</li>
<li>if you guess correctly, I pay you <span class="math-container">$\$100\cdot\delta^n$</span>, where <span class="math-container">$\delta\in(0,1)$</span>; if you guess incorrectly the game ends with you getting nothing</li>
</ul></li>
</ul>
<p><strong>Question</strong></p>
<p>I want to find the "best" switching strategy to minimize the (expected) amount of money I have to pay you.</p>
<p><strong>Notes</strong></p>
<p>The probabilities <span class="math-container">$p$</span> and <span class="math-container">$q$</span> can take on any value, but let's assume that they cannot be equal.</p>
<p>Since you are trying to maximize your reward, the discount factor <span class="math-container">$\delta$</span> incentivizes you to guess correctly as quickly as possible.</p>
<p>Since there are only two coins and you observe when I switch, you are trying to discern between two possible coin sequences, one where the initial coin was coin 1 and the other where the initial coin was coin 2.</p>
<p>My first thoughts are that I would want to keep the empirical averages (of the two sequences) as close as possible to each other. Intuitively this will be easy if <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are close, but hard if they are far apart.</p>
| <p>Here are solutions for some very special cases. Assume, without loss of generality, that <span class="math-container">$\ p> q\ $</span>.</p>
<p>If <span class="math-container">$\ p+q \le 1\ $</span>, and <span class="math-container">$\ \delta\le \frac{1-q}{2-p-q}\ $</span>, you can keep my expected winnings to at most <span class="math-container">$\ \frac{100\,(1-q)}{2-p-q}\ $</span> dollars by choosing coin <span class="math-container">$1$</span> with probability <span class="math-container">$ \frac{1-q}{2-p-q}\ $</span> and coin <span class="math-container">$2$</span> with probability <span class="math-container">$ \frac{1-p}{2-p-q}\ $</span>. I can ensure that my expected winnings are at least <span class="math-container">$\ \frac{100\,(1-q)}{2-p-q}\ $</span> dollars by guessing coin <span class="math-container">$1$</span> if the result of the first toss is heads, or coin <span class="math-container">$1$</span> with probability <span class="math-container">$ \frac{1-p-q}{2-p-q}\ $</span> and coin <span class="math-container">$2$</span> with probability <span class="math-container">$ \frac{1}{2-p-q}\ $</span> if the result of the first toss is tails. Because I can't win more than <span class="math-container">$\ 100\,\delta \le \frac{100\,(1-q)}{2-p-q}\ $</span> dollars by waiting until the next toss, I have nothing to gain by doing so.</p>
<p>If <span class="math-container">$\ p+q = 1\ $</span> (and therefore <span class="math-container">$\ p>\frac{1}{2}\ $</span>, given the above assumption that <span class="math-container">$\ p>q\ $</span>), and <span class="math-container">$\ \delta\le p $</span>, you can keep my expected winnings to at most <span class="math-container">$\ 100\,p\ $</span> by choosing either coin <span class="math-container">$1$</span> or <span class="math-container">$2$</span> with probability <span class="math-container">$\ \frac{1}{2}\ $</span> each. I can ensure my expected winnings are at least <span class="math-container">$\ 100\,p\ $</span> by guessing coin <span class="math-container">$1$</span> if the result of the first toss is heads, or coin <span class="math-container">$2$</span> if the result of the first toss is tails. Again I can do no better by waiting for the second toss.</p>
<p>If <span class="math-container">$\ p+q > 1\ $</span>, and <span class="math-container">$\ \delta\le \frac{p}{p+q}\ $</span>, you can keep my expected winnings to at most <span class="math-container">$\ \frac{100\,p}{p+q}\ $</span> by choosing coin <span class="math-container">$1$</span> with probability <span class="math-container">$\ \frac{q}{p+q}\ $</span> and coin <span class="math-container">$2$</span> with probability <span class="math-container">$\ \frac{p}{p+q}\ $</span>. I can ensure my expected winnings are at least <span class="math-container">$\ \frac{100\,p}{p+q}\ $</span> by guessing coin <span class="math-container">$2$</span> if the result of the first toss is tails, or coin <span class="math-container">$1$</span> with probability <span class="math-container">$\ \frac{1}{p+q}\ $</span> and coin <span class="math-container">$2$</span> with probability <span class="math-container">$\ \frac{p+q-1}{p+q}\ $</span> if the result of the first toss is heads. Once again, I can't do better by waiting for the second toss.</p>
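<p>Here is a numerical sanity check of the first case above (an illustrative sketch, not part of the original argument): when <span class="math-container">$\ \delta\ $</span> is small enough the guess happens at the first toss, so grid-searching the probability of choosing coin <span class="math-container">$1$</span>, with the guesser best-responding, should reproduce the value <span class="math-container">$\ \frac{100\,(1-q)}{2-p-q}\ $</span>:</p>
<pre><code># One-shot game value check for the case p + q <= 1 (sketch).
p, q = 0.6, 0.3

def guesser_payoff(r):
    # The switcher picks coin 1 with probability r; the guesser sees the toss
    # and guesses the coin with the larger posterior weight.
    heads = max(r * p, (1 - r) * q)
    tails = max(r * (1 - p), (1 - r) * (1 - q))
    return 100 * (heads + tails)

value = min(guesser_payoff(i / 10000) for i in range(10001))
print(value, 100 * (1 - q) / (2 - p - q))  # both are approximately 63.64
</code></pre>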
| <p>This is a problem of Bayesian parameter estimation. The following link <a href="https://www.thomasjpfan.com/2015/09/bayesian-coin-flips/" rel="nofollow noreferrer">Bayesian coin flips</a> might be useful. If you are interested in <a href="https://www.probabilisticworld.com/calculating-coin-bias-bayes-theorem/" rel="nofollow noreferrer">simulation</a>, check out this link.</p>
<p>In any case, essential here is the fact that the player doing the guessing can keep track of the outcomes related to each of the two coins (even if initially he does not know which one is the p-coin and which one is the q-coin).</p>
<p>The best strategy for the player doing the switching is to keep the number of outcomes related to each of the two coins equal or close to equal, because the more outcomes are available to the guessing player for a given coin, the higher the probability of guessing correctly. In other words, the switching player must switch the coins at almost every stage of the game.</p>
<p>Edit. If you want an interesting statistical approach, see the related question <a href="https://math.stackexchange.com/a/3157706/101896">confidence two biased dice are the same</a> (and one of the comments, the Kolmogorov-Smirnov test). The transition from dice to coins is obvious.</p>
|
linear-algebra | <p>I am unsure how to go about doing this inverse product problem:</p>
<p>The question says to find the value of each matrix expression where A and B are the invertible 3 x 3 matrices such that
$$A^{-1} = \left(\begin{array}{ccc}1& 2& 3\\ 2& 0& 1\\ 1& 1& -1\end{array}\right)
$$ and
$$B^{-1}=\left(\begin{array}{ccc}2 &-1 &3\\ 0& 0 &4\\ 3& -2 & 1\end{array}\right)
$$</p>
<p>The actual question is to find $ (AB)^{-1}$. </p>
<p>$ (AB)^{-1}$ is just $ A^{-1}B^{-1}$ and we already know matrices $ A^{-1}$ and $ B^{-1}$ so taking the product should give us the matrix
$$\left(\begin{array}{ccc}11 &-7 &14\\ 7& -4 &7\\ -1& 1 & 6\end{array}\right)
$$
yet the answer is
$$
\left(\begin{array}{ccc} 3 &7 &2 \\ 4& 4 &-4\\ 0 & 7 & 6 \end{array}\right)
$$</p>
<p>What am I not understanding about the problem or what am I doing wrong? Isn't this just matrix multiplication?</p>
| <p>Actually, the inverse of a matrix product does not work that way. Suppose that we have two invertible matrices, $A$ and $B$. Then it holds:
$$
(AB)^{-1}=B^{-1}A^{-1},
$$
and, in general:
$$
\left(\prod_{k=0}^NA_k\right)^{-1}=\prod_{k=0}^NA^{-1}_{N-k}
$$</p>
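<p>A quick numerical check with the matrices from the question (a sketch assuming NumPy) confirms which order reproduces the book's answer:</p>
<pre><code>import numpy as np

A_inv = np.array([[1, 2, 3], [2, 0, 1], [1, 1, -1]])
B_inv = np.array([[2, -1, 3], [0, 0, 4], [3, -2, 1]])

print(B_inv @ A_inv)  # [[3, 7, 2], [4, 4, -4], [0, 7, 6]] -- the book's answer
print(A_inv @ B_inv)  # the asker's matrix, which is not (AB)^{-1}
</code></pre>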
| <p>Note that the matrix multiplication is not <em>commutative</em>, i.e, you'll <strong>not</strong> always have: <span class="math-container">$AB = BA$</span>.</p>
<p>Now, say the matrix <span class="math-container">$A$</span> has the inverse <span class="math-container">$A^{-1}$</span> (i.e <span class="math-container">$A \cdot A^{-1} = A^{-1}\cdot A = I$</span>); and <span class="math-container">$B^{-1}$</span> is the inverse of <span class="math-container">$B$</span> (i.e <span class="math-container">$B\cdot B^{-1} = B^{-1} \cdot B = I$</span>).</p>
<h2>Claim</h2>
<p><span class="math-container">$B^{-1}A^{-1}$</span> is the inverse of <span class="math-container">$AB$</span>. So basically, what I need to prove is: <span class="math-container">$(B^{-1}A^{-1})(AB) = (AB)(B^{-1}A^{-1}) = I$</span>.</p>
<p>Note that, although matrix multiplication is not commutative, it is however, <strong>associative</strong>. So:</p>
<ul>
<li><p><span class="math-container">$(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}IB = (B^{-1}I)B = B^{-1}B=I$</span></p>
</li>
<li><p><span class="math-container">$(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = (AI)A^{-1} = AA^{-1}=I$</span></p>
</li>
</ul>
<p>So, the inverse of <span class="math-container">$AB$</span> is indeed <span class="math-container">$B^{-1}A^{-1}$</span>, and <strong>NOT</strong> <span class="math-container">$A^{-1}B^{-1}$</span>.</p>
|
differentiation | <p>If I have the inequality $f \leq g$, does this imply $f'\leq g'$?</p>
<p>The reason I am asking is because I am using this fact to prove a question with Taylor theorem. </p>
| <p>Consider $x^2\leq x$ in $[0,1]$. Does that imply that $2x\leq 1$ for all $x$ in $[0,1]$? (A counterexample.)</p>
<p>Note you can never differentiate an inequality. Instead, the general idea for checking inequalities with differentiation is that we take $h(x)=f(x)-g(x)$ and then apply the derivative test to see whether the function is increasing or decreasing. That way, if the inequality $h(a) \geq 0$ holds at a particular point $a$, we can prove it holds for all $x \geq a$ if $h$ is increasing and for all $x \leq a$ if $h$ is decreasing.</p>
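<p>A quick numerical illustration of the counterexample above (a minimal sketch):</p>
<pre><code># f(x) = x^2 and g(x) = x on [0, 1]: f <= g everywhere, yet f' > g' past 1/2.
xs = [i / 100 for i in range(101)]
print(all(x**2 <= x for x in xs))        # True: f <= g on [0, 1]
print([x for x in xs if 2 * x > 1][:3])  # [0.51, 0.52, 0.53]: here f' > g'
</code></pre>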
| <p>Your question is equivalent to <em>is a function which attains only non-positive values necessarily non-increasing?</em> So, the answer is <strong>no</strong> because there exist increasing functions whose graphs are under the $x$-axis:</p>
<p><a href="https://i.sstatic.net/nC8iV.png" rel="noreferrer"><img src="https://i.sstatic.net/nC8iV.png" alt="enter image description here"></a></p>
<p>(As a general remark, the rates of change of a function have "nothing" to do with the image of the function in the sense that: (i) given a set $X$, we can find functions having $X$ as image and different rates of change; (ii) given a function, there are other functions having the same rates of change and different images.)</p>
<p><strong>Details:</strong> As differentiation is linear, the question</p>
<blockquote>
<p>Does $f\leq g$ imply $f'\leq g'$?</p>
</blockquote>
<p>is the same as</p>
<blockquote>
<p>Does $h\leq 0$ imply $h'\leq 0$?</p>
</blockquote>
<p>which can be rewritten as</p>
<blockquote>
<p>Is a function which attains only non-positive values necessarily non-increasing?</p>
</blockquote>
<p>From this point of view, the answer becomes clear: No because there are (a lot of) increasing functions whose graphs are under the $x$-axis (take any bounded increasing function and apply a translation). Probably, the simplest example is a line segment (see picture above).</p>
|
linear-algebra | <p>Denote by $M_{n \times n}(k)$ the ring of $n$ by $n$ matrices with coefficients in the field $k$. Then why does this ring not contain any two-sided ideal? </p>
<p>Thanks for any clarification, and this is an exercise from the notes of Commutative Algebra by Pete L Clark, of which I thought as simple but I cannot figure it out now.</p>
| <p>Suppose that you have an ideal $\mathfrak{I}$ which contains a matrix with a nonzero entry $a_{ij}$. Multiplying by the matrix that has $0$'s everywhere except a $1$ in entry $(i,i)$, kill all rows except the $i$th row; multiplying by a suitable matrices on the right, kill all columns except the $j$th column; now you have a matrix, necessarily in $\mathfrak{I}$, which contains exactly one nonzero entry, namely $a_{ij}$ in position $(i,j)$.</p>
<p>Now show that $\mathfrak{I}$ must contain <em>all</em> matrices in $M_{n\times n}(k)$. This will show that a $2$-sided ideal consists either of <em>only</em> the $0$ matrix, or must be equal to the entire ring.</p>
<p><em>Added.</em> Now that you have a matrix that has a single nonzero entry, can you get a matrix that has a single nonzero entry on whatever coordinate you specify, and such that this nonzero entry is whatever element of $k$ you want, by multiplying this matrix (on either left, or right, or both) by suitable elementary matrices? Will they all be in $\mathfrak{I}$?</p>
<p>And...</p>
<p>$$\left(\begin{array}{cc}
a&b\\
c&d
\end{array}\right) = \left(\begin{array}{cc}
a & 0\\
0 & 0
\end{array}\right) + \cdots$$</p>
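<p>Here is a small numerical sketch of the "kill rows and columns" step (illustrative only; assumes NumPy):</p>
<pre><code>import numpy as np

def unit(n, i, j):
    """E_{ij}: the matrix with a single 1 in entry (i, j)."""
    E = np.zeros((n, n))
    E[i, j] = 1.0
    return E

A = np.array([[0.0, 5.0, 0.0],
              [7.0, 0.0, 2.0],
              [0.0, 0.0, 3.0]])
i, j = 1, 2  # a_{ij} = 2 is a nonzero entry

print(unit(3, i, i) @ A @ unit(3, j, j))  # only entry (1, 2) = 2 survives
</code></pre>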
| <p>A faster, and more general result, which Arturo hinted at, is obtained via following proposition from Grillet's <em>Abstract Algebra</em>, section "Semisimple Rings and Modules", page 360:</p>
<p><img src="https://i.sstatic.net/bEYxY.jpg" alt="enter image description here"></p>
<p><strong>Consequence:</strong> if $R:=D$ is a division ring, then $M_n(D)$ is simple.</p>
<p><strong>Proof:</strong> Suppose there existed a nontrivial ideal of $M_n(D)$. By the proposition, it would be of the form $M_n(I)$ for some $I\unlhd D$; but division rings do not have any ideals other than $0$ and $D$, so this is a contradiction. $\blacksquare$</p>
|
number-theory | <p>If $n>1$ is an integer, then $\sum \limits_{k=1}^n \frac1k$ is not an integer.</p>
<p>If you know <a href="http://en.wikipedia.org/wiki/Bertrand%27s_postulate">Bertrand's Postulate</a>, then you know there must be a prime $p$ between $n/2$ and $n$, so $\frac 1p$ appears in the sum, but $\frac{1}{2p}$ does not. Aside from $\frac 1p$, every other term $\frac 1k$ has $k$ divisible only by primes smaller than $p$. We can combine all those terms to get $\sum_{k=1}^n\frac 1k = \frac 1p + \frac ab$, where $b$ is not divisible by $p$. If this were an integer, then (multiplying by $b$) $\frac bp +a$ would also be an integer, which it isn't since $b$ isn't divisible by $p$.</p>
<p>Does anybody know an elementary proof of this which doesn't rely on Bertrand's Postulate? For a while, I was convinced I'd seen one, but now I'm starting to suspect whatever argument I saw was wrong.</p>
| <p><strong>Hint</strong> <span class="math-container">$ $</span> There is a <span class="math-container">$\rm\color{darkorange}{unique}$</span> denominator <span class="math-container">$\rm\,\color{#0a0} {2^K}$</span> having <em>maximal</em> power of <span class="math-container">$\:\!2,\,$</span> so scaling by <span class="math-container">$\rm\,\color{#c00}{2^{K-1}}$</span> we deduce a contradiction <span class="math-container">$\large \rm\, \frac{1}2 = \frac{c}d \,$</span> with <em>odd</em> <span class="math-container">$\rm\,d \:$</span> (vs. <span class="math-container">$\,\rm d = 2c),\,$</span> e.g.</p>
<p><span class="math-container">$$\begin{eqnarray} & &\rm\ \ \ \ \color{#0a0}{m} &=&\ \ 1 &+& \frac{1}{2} &+& \frac{1}{3} &+&\, \color{#0a0}{\frac{1}{4}} &+& \frac{1}{5} &+& \frac{1}{6} &+& \frac{1}{7} \\
&\Rightarrow\ &\rm\ \ \color{#c00}{2}\:\!m &=&\ \ 2 &+&\ 1 &+& \frac{2}{3} &+&\, \color{#0a0}{\frac{1}{2}} &+& \frac{2}{5} &+& \frac{1}{3} &+& \frac{2}{7}^\phantom{M^M}\\
&\Rightarrow\ & -\color{#0a0}{\frac{1}{2}}\ \ &=&\ \ 2 &+&\ 1 &+& \frac{2}{3} &-&\rm \color{#c00}{2}\:\!m &+& \frac{2}{5} &+& \frac{1}{3} &+& \frac{2}{7}^\phantom{M^M}
\end{eqnarray}$$</span></p>
<p>All denom's in the prior fractions are <em>odd</em> so they sum to fraction with <em>odd</em> denom <span class="math-container">$\rm\,d\, |\, 3\cdot 5\cdot 7$</span>.</p>
<p><strong>Note</strong> <span class="math-container">$ $</span> Said <span class="math-container">$\rm\color{darkorange}{uniqueness}$</span> has easy proof: if <span class="math-container">$\rm\:j\:\! 2^K$</span> is in the interval <span class="math-container">$\rm\,[1,n]\,$</span> then so too is <span class="math-container">$\,\rm \color{#0a0}{2^K}\! \le\, j\:\!2^K.\,$</span> But if <span class="math-container">$\,\rm j\ge 2\,$</span> then the interval contains <span class="math-container">$\rm\,2^{K+1}\!= 2\cdot\! 2^K\! \le j\:\!2^K,\,$</span> contra maximality of <span class="math-container">$\,\rm K$</span>.</p>
<p>The argument is more naturally expressed using valuation theory, but I purposely avoided that because Anton requested an "elementary" solution. The above proof can easily be made comprehensible to a high-school student.</p>
<p>See the <a href="https://math.stackexchange.com/a/4848920/242">Remark here</a> for a trickier application of the same idea (from a contest problem).</p>
| <p>An elementary proof uses the following fact:</p>
<p>If <span class="math-container">$2^s$</span> is the highest power of <span class="math-container">$2$</span> in the set <span class="math-container">$S = \{1,2,...,n\}$</span>, then <span class="math-container">$2^s$</span> is not a divisor of any other integer in <span class="math-container">$S$</span>.</p>
<p>To use that,</p>
<p>consider the highest power of <span class="math-container">$2$</span> which divides <span class="math-container">$n!$</span>. Say that is <span class="math-container">$t$</span>.</p>
<p>Now the number can be rewritten as</p>
<p><span class="math-container">$\displaystyle \frac{\sum \limits_{k=1}^{n}{\frac{n!}{k}}}{n!}$</span></p>
<p>The highest power of <span class="math-container">$2$</span> which divides the denominator is <span class="math-container">$t$</span>.</p>
<p>Now we consider the numerator, as we iterate through the values of <span class="math-container">$k$</span> from <span class="math-container">$1$</span> to <span class="math-container">$n$</span>. If <span class="math-container">$k \neq 2^{s}$</span>, then the highest power of 2 that divides <span class="math-container">$\dfrac{n!}k$</span> is at least <span class="math-container">$t-(s-1)=t-s+1$</span>, as the highest power of <span class="math-container">$2$</span> that divides <span class="math-container">$k$</span> is at most <span class="math-container">$s-1$</span>.</p>
<p>In case <span class="math-container">$k=2^s$</span>, the highest power of <span class="math-container">$2$</span> that divides <span class="math-container">$ \dfrac{n!}{k}$</span> is exactly <span class="math-container">$t-s$</span>.</p>
<p>Thus the highest power of <span class="math-container">$2$</span> that divides the numerator is at most <span class="math-container">$t-s$</span>. If <span class="math-container">$s \gt 0$</span> (which is true if <span class="math-container">$n \gt 1$</span>), we are done.</p>
<p>In fact the above proof shows that the number is of the form <span class="math-container">$\frac{\text{odd}}{\text{even}}$</span>.</p>
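<p>An exact-arithmetic check of this last claim (a sketch using Python's <code>fractions</code> module):</p>
<pre><code>from fractions import Fraction

for n in range(2, 30):
    h = sum(Fraction(1, k) for k in range(1, n + 1))
    # reduced form: odd numerator over even denominator, hence never an integer
    assert h.numerator % 2 == 1 and h.denominator % 2 == 0
print("H_n is odd/even in lowest terms for n = 2, ..., 29")
</code></pre>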
|
logic | <p>I am trying to understand the fundamentals of mathematical logic in order to be able to study discrete mathematics and computer science soon.</p>
<p>I have a big problem understanding Implication. I understand the idea intuitively very well; for example:</p>
<p><span class="math-container">$$ x >0 \implies 2x>0 .$$</span></p>
<p>Or</p>
<p><span class="math-container">$$ x \text{ is a prime number} \implies x \geq 2 .$$</span></p>
<p>I notice that there is always a (connection) between the hypothesis and the conclusion; they seem to be related and fall in the same context.</p>
<p>What I don't understand is: according to the truth table, any two propositions can be linked through an implication even if they are not related at all, or they belong to different contexts.</p>
<p>For example :</p>
<p><span class="math-container">$$ \text{A day is 24 hours} \implies \text{A cat has four legs and a tail} .$$</span></p>
<p>which is logically or mathematically TRUE, because both statements are true and, according to the truth table, when both inputs are true for any two statements, the implication is true.</p>
<p>How can that be true?</p>
<p>Another example:</p>
<p><span class="math-container">$$ 2 \text{ is a prime number} \implies \text{An hour is 60 minutes}.$$</span></p>
<p>again, which is logically or mathematically TRUE, because both statements are true and, according to the truth table, when both inputs are true for any two statements, the implication is true.</p>
<p>How can that be true? That's my first question.</p>
<p>The second question could be the same:</p>
<blockquote>
<p>How can we use truth tables with implication anyway?</p>
</blockquote>
<p>What I understand is that truth tables are used to list the possible outputs based on the logical values of the inputs. So the value of the output of any line in the truth table depends (only) on the logical values of the inputs in that line and has nothing to do with the conditional connection between the two inputs.</p>
<p>How can we display a conditional statement in a truth table the same way we display a logic gate or so? In other words, how can I tell only from the values of <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> that <span class="math-container">$P \implies Q$</span>? </p>
| <p>Here is the truth table for an implication:</p>
<p>$$ \begin{array}{ccccc}
P & & Q & & P \to Q \\
T & & T & & T \\
T & & F & & F \\
F & & T & & T \\
F & & F & & T \end{array} $$</p>
<p>You can think of an implication as a conditional promise. If you keep the promise, it's true. If you break the promise, it's false. </p>
<p>If I tell my kids, "I'll give you a cookie if you clean up." Then they clean up. I better give them a cookie. If I don't, I've lied. However, if they don't clean up, I can either give them a cookie or not. I didn't promise either if they didn't keep up their end of the bargain.</p>
<p>So in other words, an implication is false only if the hypothesis is true and conclusion is false.</p>
<p>Logically $P \to Q$ is equivalent to $\neg P \vee Q$</p>
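<p>This equivalence can be checked mechanically (a small sketch that enumerates the truth table):</p>
<pre><code>for P in (True, False):
    for Q in (True, False):
        implies = (not P) or Q  # the truth-table definition of P -> Q
        print(P, Q, implies)
# Only the row P=True, Q=False gives False.
</code></pre>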
<p>I had a computer science professor who was fond of promising his kids things given a false premise. This way he wasn't compelled to follow through. Example: "If the moon is made of green cheese, I'll give you an x-box."</p>
| <p>Think about it this way:</p>
<p>If a day is 24 hours, does a cat have 4 legs and a tail?</p>
<p>Even though they are unrelated, the answer is yes, so a 24-hour day implies that a cat has 4 legs & tail.</p>
<p>If 2 is a prime number, does an hour have 60 minutes? Yes. </p>
<p>Going with the cookie idea, let's say you already have a cookie in your hand and you are about to give it to your kid. She says "If I clean my room, can I have a cookie?" You will likely say "yes!" Technically, it's true that you'll give her a cookie if she cleans her room. She doesn't need to know she was going to get it anyway.</p>
|
logic | <p>If "This is a lie" were a true statement, its fulfilled claim of being a lie implies it can't be true, leading to a contradiction. If it were false, it could not be a lie and thus had to be true, again leading to a contradiction. So with this argumentation the statement "This is a lie" can be neither true nor false and therefore binary logic is not enough to treat logic.</p>
<p>Is this argumentation correct?</p>
| <p>The problem lies in your interpretation of the sentence. If you want to apply logic to it, you first need to reformulate it into the language of logic. There are many different ways to do that, but in the majority of "usual" methods a curious thing happens: for a sentence to talk about itself you need a set that contains the sentence, which in turn uses the set again -- you are unable to find a formal representation of it. This is closely related to Russell's paradox and his proof that there is no set of all sets, e.g. there is no set of all sentences (if they can refer to this set). </p>
<p>Of course, there are some "unusual" structures, e.g. you could take sets defined using the bisimilarity relation, where x = {1, x} would be a proper definition, although this is far beyond binary logic.</p>
<p>Concluding: the simple binary logic is not contradictory, but it is too weak to express this sentence.</p>
<p>On a happier note: this kind of thing happens all the time, and even innocent fairy tales fall for it; e.g. consider what would happen if Pinocchio had said "my nose will grow"... (Has anyone ever tried this on their native-language teacher?)</p>
| <p>No, this just shows that logic should not allow arbitrary sentences and ascribe a truth value to all of them. Completely nonsensical utterances clearly fall under the forbidden category. But sentences like this classical liar paradox which mix levels cannot be allowed either. If you want to know more interesting forms of the liar paradox and applications of them, I advise you to read Hofstadters "Gödel, Escher, Bach", where such matters are discussed at great lengths. </p>
|
matrices | <p>It is known that if two matrices <span class="math-container">$A,B \in M_n(\mathbb{C})$</span> commute, then <span class="math-container">$e^A$</span> and <span class="math-container">$e^B$</span> commute. Is the converse true?</p>
<blockquote>
<p>If <span class="math-container">$e^A$</span> and <span class="math-container">$e^B$</span> commute, do <span class="math-container">$A$</span> and <span class="math-container">$B$</span> commute?</p>
</blockquote>
<p><strong>Edit:</strong> Additionally, what happens in <span class="math-container">$M_n(\mathbb{R})$</span>?</p>
<p><strong>Nota Bene:</strong> As a corollary of the counterexamples below, we deduce that if <span class="math-container">$A$</span> is not diagonal then <span class="math-container">$e^A$</span> may be diagonal.</p>
| <p>No. Let $$A=\begin{pmatrix}2\pi i&0\\0&0\end{pmatrix}$$ and note that $e^A=I$. Let $B$ be any matrix that does not commute with $A$.</p>
| <p>Here's an example over $\mathbb{R}$, modeled after Harald's answer: let
$$A=\pmatrix{0&-2\pi\\ 2\pi&0}.$$
Again, $e^A=I$. Now choose any $B$ that doesn't commute with $A$.</p>
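<p>Both counterexamples are easy to verify numerically (a sketch assuming NumPy and SciPy; the choice of $B$ below is one arbitrary non-commuting matrix):</p>
<pre><code>import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -2 * np.pi], [2 * np.pi, 0.0]])
B = np.array([[0.0, 1.0], [0.0, 0.0]])  # any B with AB != BA will do

print(np.allclose(expm(A), np.eye(2)))                    # True: e^A = I
print(np.allclose(A @ B, B @ A))                          # False: A, B don't commute
print(np.allclose(expm(A) @ expm(B), expm(B) @ expm(A)))  # True: e^A, e^B commute
</code></pre>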
|
linear-algebra | <p>My professor keeps mentioning the word "isomorphic" in class, but has yet to define it... I've asked him and his response is that something that is isomorphic to something else means that they have the same vector structure. I'm not sure what that means, so I was hoping anyone could explain its meaning to me, using knowledge from elementary linear algebra only. He started discussing it in the current section of our textbook: General Vector Spaces.</p>
<p>I've also heard that this is an abstract algebra term, so I'm not sure if isomorphic means the same thing in both subjects, but I know absolutely no abstract algebra, so if, in your definition, you either keep abstract algebra out completely or use only very basic abstract algebra knowledge, that would be appreciated.</p>
| <p>Isomorphisms are defined in many different contexts; but, they all share a common thread.</p>
<p>Given two objects $G$ and $H$ (which are of the same type; maybe groups, or rings, or vector spaces... etc.), an <em>isomorphism</em> from $G$ to $H$ is a bijection $\phi:G\rightarrow H$ which, in some sense, <em>respects the structure</em> of the objects. In other words, they basically identify the two objects as actually being the same object, <em>after renaming of the elements</em>.</p>
<p>In the example that you mention (vector spaces), an isomorphism between $V$ and $W$ is a bijection $\phi:V\rightarrow W$ which respects scalar multiplication, in that $\phi(\alpha\vec{v})=\alpha\phi(\vec{v})$ for all $\vec{v}\in V$ and $\alpha\in K$, and also respects addition in that $\phi(\vec{v}+\vec{u})=\phi(\vec{v})+\phi(\vec{u})$ for all $\vec{v},\vec{u}\in V$. (Here, we've assumed that $V$ and $W$ are both vector spaces over the same base field $K$.)</p>
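<p>A small numerical sketch of these two conditions (the map and the test vectors are arbitrary choices; any invertible matrix gives an isomorphism of $\mathbb{R}^2$ with itself):</p>
<pre><code>import numpy as np

M = np.array([[1.0, 2.0], [0.0, 1.0]])  # invertible, so v -> Mv is a bijection
phi = lambda v: M @ v

v = np.array([1.0, 3.0])
u = np.array([-2.0, 5.0])
alpha = 4.0

print(np.allclose(phi(alpha * v), alpha * phi(v)))  # respects scalar multiplication
print(np.allclose(phi(v + u), phi(v) + phi(u)))     # respects addition
</code></pre>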
| <p>Two vector spaces $V$ and $W$ are said to be isomorphic if there exists an invertible linear transformation (aka an isomorphism) $T$ from $V$ to $W$.</p>
<p>The idea of a homomorphism is a transformation of an algebraic structure (e.g. a vector space) that preserves its algebraic properties. So a homomorphism of a vector space should preserve the basic algebraic properties of the vector space, in the following sense:</p>
<p>$1$. Scalar multiplication and vector addition in $V$ is carried over to scalar multiplication and vector addition in $W$:</p>
<blockquote>
<p><strong>For any vectors $x,y$ in $V$ and scalars $a,b$ from the underlying field, $T(ax+by)=aT(x)+bT(y)$.</strong></p>
</blockquote>
<p>$2$. The identity element of $V$ is carried over to the identity element of $W$:</p>
<blockquote>
<p><strong>If $0_V$ is the identity vector in $V$, then $T(0_V)$ is the identity vector in $W$.</strong></p>
</blockquote>
<p>$3$. Vector inversion in $V$ is carried over to vector inversion.</p>
<blockquote>
<p><strong>$T(-v)=-T(v)$ for all $v$ in $V$.</strong></p>
</blockquote>
<p>$1$ is precisely the property that defines linear transformations, and $2$ and $3$ are redundant (they follow from $1$). So linear transformations are the homomorphisms of vector spaces.</p>
<p>An isomorphism is a homomorphism that can be reversed; that is, an invertible homomorphism. So a vector space isomorphism is an invertible linear transformation. The idea of an invertible transformation is that it transforms spaces of a particular "size" into spaces of the same "size." Since dimension is the analogue for the "size" of a vector space, an isomorphism must preserve the dimension of the vector space.</p>
<p>So this is the idea of the (finite-dimensional) vector space isomorphism: a linear (i.e. structure-preserving) dimension-preserving (i.e. size-preserving, invertible) transformation.</p>
<p>Because isomorphic vector spaces are the same size and have the same algebraic properties, mathematicians think of them as "the same, for all intents and purposes."</p>
|
logic | <p>I'm reading through Hindley and Seldin's book about the lambda calculus and combinatory logic. In the book, the authors express that, though combinatory logic can be expressed as an equational theory in first order logic, the lambda calculus cannot. I sort of understand why this is true, but I can't seem to figure out the problem with the following method. Aren't the terms of first order logic arbitrary (constructed from constants, functions, and variables, etc.), so why can't we define composition and lambda abstraction as functions on variables, and then consider an equational theory over these terms? Which systems of logic can the lambda calculus be formalized as an equational theory over?</p>
<p>In particular, we consider a first order language with a single binary predicate, equality. We take a set $V$ of terms over the $\lambda$ calculus, which we will view as constants in the enveloping first order theory, and for each $x \in V$ we will add a function $f_x$ corresponding to $\lambda$ abstraction. We also add a binary function $c$ corresponding to composition. We add in the standard equality axioms, in addition to the $\beta$ and $\alpha$ conversion rules, which are a sequence of infinitely many axioms over the terms built from the $\lambda$ terms by composition and $\lambda$ abstraction. I doubt this is finitely axiomatizable, but it's still a first order theory.</p>
| <p>Untyped <span class="math-container">$\lambda$</span>-calculus and untyped combinatory terms both have a form of the <span class="math-container">$Y$</span> combinator, so we introduce types to avoid infinite reductions. As (untyped) systems, they are equivalent <em>only</em> as equational theories. To elaborate;</p>
<p>Take <span class="math-container">$(\Lambda , \rightarrow_{\beta\eta})$</span> to be the set of <span class="math-container">$\lambda$</span>-terms equipped with the <span class="math-container">$\beta\eta$</span>-relation (abstraction and application only; I am considering the simplest case) and <span class="math-container">$(\mathcal{C}, \rightarrow_R)$</span> to be the set of combinatory terms equipped with reduction. We want a <em>translation</em> <span class="math-container">$T$</span>: a bijection between the two sets that respects reduction. The problem is that reduction in combinatory terms happens "too fast" when compared with <span class="math-container">$\beta\eta$</span>, so such a translation does not exist. When viewed as <em>equational theories</em>, i.e. <span class="math-container">$(\Lambda , =_{\beta\eta})$</span> and <span class="math-container">$(\mathcal{C}, =_R)$</span>, we can find a translation that respects equalities.</p>
<p>An interesting fact is that <span class="math-container">$=_{\beta\eta} \approx =_{ext}$</span>, where <span class="math-container">$=_{ext}$</span> is a form of extensional equality on <span class="math-container">$\lambda$</span>-terms.</p>
<p>To answer your question, when you view these systems as equational theories, they are both expressible (in FOL) since one of them is and this translation exists. But using just reduction, <span class="math-container">$\lambda$</span>-calculus has the notion of bound variables, while combinatory logic does not. Bound variables require more expressibility in the system, more than FOL. You need to have <em>access</em> to FOL expressions in order to differentiate them from free variables, which you cannot do from inside the system.</p>
<p>Concepts related to, and explanatory of my abuse of terminology when I say <em>"you need to have access to expressions of FOL"</em> are the <span class="math-container">$\lambda$</span>-cube and higher-order logics from formal systems, and "linguistic levels" from linguistics. Informally it is the level of precision the system in question has. Take for example the alphabet <span class="math-container">$\{a,b\}$</span> and consider all words from this alphabet. This language requires the minimum amount of precision to be expressed, since it is the largest language I can create from this alphabet. Now consider only words of the form <span class="math-container">$a^nb^n$</span>. This language requires a higher level of precision for a system to be able to express it, because you need to be able to "look inside" a word to tell if belongs to the language or not. You can read more about this topic on <a href="https://devopedia.org/chomsky-hierarchy" rel="nofollow noreferrer">https://devopedia.org/chomsky-hierarchy</a></p>
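<p>For concreteness, here is a minimal sketch of the standard translation (bracket abstraction) from <span class="math-container">$\lambda$</span>-terms to <span class="math-container">$S/K$</span> terms; this is textbook material rather than anything specific to the answer above, and the term representation is an arbitrary choice:</p>
<pre><code># Terms: ('var', x), ('app', f, a), ('lam', x, body).
def free_in(x, t):
    if t[0] == 'var':
        return t[1] == x
    if t[0] == 'app':
        return free_in(x, t[1]) or free_in(x, t[2])
    return t[1] != x and free_in(x, t[2])          # ('lam', y, body)

def translate(t):
    if t[0] == 'var':
        return t
    if t[0] == 'app':
        return ('app', translate(t[1]), translate(t[2]))
    return bracket(t[1], translate(t[2]))          # eliminate one abstraction

def bracket(x, t):                                 # computes [x]t, t lambda-free
    if not free_in(x, t):
        return ('app', ('var', 'K'), t)            # [x]t = K t   (x not free)
    if t == ('var', x):                            # [x]x = I = S K K
        return ('app', ('app', ('var', 'S'), ('var', 'K')), ('var', 'K'))
    return ('app',                                 # [x](M N) = S ([x]M) ([x]N)
            ('app', ('var', 'S'), bracket(x, t[1])),
            bracket(x, t[2]))

print(translate(('lam', 'x', ('var', 'x'))))
# ('app', ('app', ('var', 'S'), ('var', 'K')), ('var', 'K')), i.e. S K K = I
</code></pre>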
| <p>The problem with your approach is that in the semantics of lambda calculus, if you have that the interpretation of two variables is equal in a model, then their lambda abstractions will need to be too. Your situation may have <span class="math-container">$[[x]]=[[y]]$</span> but <span class="math-container">$[[f_x]]$</span> and <span class="math-container">$[[f_y]]$</span> distinct. It therefore doesn't have the same semantics as lambda calculus, so doesn't axiomatise it successfully (you can see this, as your axiomatisation is universal, so will be closed under substructure, whereas the semantics of <span class="math-container">$\lambda$</span>-calculus is not closed under substructure).</p>
<p>There are two other approaches that work: you could have a predicate <span class="math-container">$V$</span> representing 'being a variable' and axioms saying <span class="math-container">$V(A)$</span> for each <span class="math-container">$A$</span> which <span class="math-container">$\beta$</span>-reduces to a variable, and axioms saying <span class="math-container">$\neg V(A)$</span> for all other <span class="math-container">$A$</span>.You can then introduce your partial lambda abstraction function as a ternary predicate, i.e. <span class="math-container">$\lambda (A,B,C)$</span> represents <span class="math-container">$\lambda A.B$</span> is defined and <span class="math-container">$=C$</span>. You can then axiomatise only being able to abstract over a variable:
<span class="math-container">$\forall A,B (\exists C \lambda (A,B,C))\iff V(A)$</span>
and then proceed to write the rest of the theory using these predicates.</p>
<p>This works, but since whether a term reduces to a variable is undecidable, these axioms are not recursively enumerable, which is a pain for logicians. The final and best approach, is to note that Scott-Meyer models of <span class="math-container">$\lambda$</span>-calculus are essentially described in a first order way.</p>
<p>So you can have an axiomatisation using 3 constants ($s,k,e$) and a binary function <span class="math-container">$\cdot$</span> with the axioms:<br />
<span class="math-container">$\forall a,b \quad k \cdot a \cdot b = a$</span><br />
<span class="math-container">$\forall a,b,c \quad s \cdot a \cdot b \cdot c = a\cdot c \cdot (b\cdot c)$</span><br />
<span class="math-container">$e \cdot e = e$</span><br />
<span class="math-container">$\forall a,b \quad e \cdot a \cdot b = a\cdot b$</span><br />
<span class="math-container">$\forall a,b \quad (\forall c \,\, a \cdot c = b\cdot c) \to e \cdot a = e \cdot b$</span></p>
<p>This is a theory (in a different language) which has the same (Set) models as <span class="math-container">$\lambda$</span>-calculus. While this is now first order, it is no longer an <em>algebraic</em> axiomatisation.</p>
|
logic | <p>A friend of mine asked me if I could explain this statement: "It's not logically possible to prove that something can't be done". The actual reason is the understanding of this strip: </p>
<p><img src="https://i.sstatic.net/YtHMp.gif" alt="enter image description here"></p>
<p>Since I'm not an expert on logic, but at least know the basics, I told him that the statement is false. To prove it I gave him the counterexample of the impossibility of trisecting an angle by using only straightedge and compass, which was proved by Pierre Wantzel. This shows clearly that it's possible to prove that something can't be done.</p>
<p>When I told him this, he explained to me that this is sort of naive because Asok, the guy who makes the statement, is a graduate from IIT and also I read that he's always a brilliant character in the strip. </p>
<p>Can you guys please tell me what you think about this? Also I'd like to know how you would interpret this dialog. </p>
| <p>What the strip most likely was (awkwardly) referring to is not that it's impossible to prove something can't be done, but rather that it's impossible to prove within a (sufficiently powerful) formal system that something can't be <em>proven</em> in that system — in other words, letting $P(\phi)$ be the statement 'there exists a proof of $\phi$', then it's impossible to prove any statement of the form $\neg P(\phi)$ (i.e., the statement $P(\neg P(\phi))$ can't be true).</p>
<p>The reason behind this has to do with Godel's second incompleteness theorem that no formal system can prove its own consistency - in other words, the system can't prove the statement $\neg P(\bot)$ (or equivalently, $P(\neg P(\bot))$ is false). The logical chain follows from the statement 'false implies anything': $\bot\implies\phi$ for any proposition $\phi$. By the rules of deduction, this gives that $P(\bot)\implies P(\phi)$, and then taking the contrapositive, $\neg P(\phi)\implies\neg P(\bot)$ for <em>any</em> proposition $\phi$; using deduction once more, this gives $P(\neg P(\phi))\implies P(\neg P(\bot))$ — if the system can prove the unprovability of any statement, then it can prove its own consistency, which would violate Godel's second incompleteness theorem.</p>
<p>Note the keyword 'sufficiently powerful' here - and also that we're referring to a formal system talking about proofs <em>within itself</em>. This is what allows statements like the impossibility of angle trisection off the hook: the axioms of ruler-and-compass geometry aren't sufficiently strong for Godel's theorem to apply to the system, so that geometry itself can't talk about the notion of <em>proof</em>, and our statements of proofs of impossibility within that system are proofs from outside of that system.</p>
| <p>Scott Adams may have intended something different, but in the given context and with the given wording, Dan is right and Asok is wrong.</p>
<p>Imagine Asok writing a piece of software that crucially depends on deciding the <a href="http://en.wikipedia.org/wiki/Halting_problem">halting problem</a> (or any other property covered by <a href="http://en.wikipedia.org/wiki/Rice%27s_theorem">Rice's theorem</a>), then Dan is right: The halting problem is <a href="http://en.wikipedia.org/wiki/Undecidable_problem">not decidable</a> and therefore the software will never work. And Dan can indeed prove it, because the undecidability of the halting problem is provable.</p>
|
probability | <p>In the book "Zero: The Biography of a Dangerous Idea", author Charles Seife claims that a dart thrown at the real number line would never hit a rational number. He doesn't say that it's only "unlikely" or that the probability approaches zero or anything like that. He says that it will never happen because the irrationals take up all the space on the number line and the rationals take up no space. This idea <em>almost</em> makes sense to me, but I can't wrap my head around why it should be impossible to get really lucky and hit, say, 0, dead on. Presumably we're talking about a magic super sharp dart that makes contact with the number line in exactly one point. Why couldn't that point be a rational? A point takes up no space, but it almost sounds like he's saying the points don't even exist somehow. Does anybody else buy this? I found one academic paper online which ridiculed the comment, but offered no explanation. Here's the original quote:</p>
<blockquote>
<p>"How big are the rational numbers? They take up no space at all. It's a tough concept to swallow, but it's true. Even though there are rational numbers everywhere on the number line, they take up no space at all. If we were to throw a dart at the number line, it would never hit a rational number. Never. And though the rationals are tiny, the irrationals aren't, since we can't make a seating chart and cover them one by one; there will always be uncovered irrationals left over. Kronecker hated the irrationals, but they take up all the space in the number line. The infinity of the rationals is nothing more than a zero." </p>
</blockquote>
| <p>Mathematicians are strange in that we distinguish between "impossible" and "happens with probability zero." If you throw a magical super sharp dart at the number line, you'll hit a rational number with probability zero, but it isn't <em>impossible</em> in the sense that there do exist rational numbers. What <em>is</em> impossible is, for example, throwing a dart at the real number line and hitting $i$ (which isn't even on the line!). </p>
<p>This is formalized in <a href="http://en.wikipedia.org/wiki/Measure_(mathematics)">measure theory</a>. The standard measure on the real line is <a href="http://en.wikipedia.org/wiki/Lebesgue_measure">Lebesgue measure</a>, and the formal statement Seife is trying to state informally is that the rationals have measure zero with respect to this measure. This may seem strange, but lots of things in mathematics seem strange at first glance. </p>
<p>A simpler version of this distinction might be more palatable: flip a coin infinitely many times. The probability that you flip heads every time is zero, but it isn't impossible (at least, it isn't <em>more</em> impossible than flipping a coin infinitely many times to begin with!).</p>
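<p>For the rationals, the "measure zero" claim comes from a standard covering argument, sketched here in code (the enumeration and the choice of $\varepsilon$ are arbitrary): list the rationals in $[0,1]$ as $q_1, q_2, \ldots$ and cover $q_k$ by an interval of length $\varepsilon/2^k$; the total length never exceeds $\varepsilon$.</p>
<pre><code>from fractions import Fraction

def unit_rationals():
    """Enumerate the rationals in [0, 1], each exactly once."""
    d = 1
    while True:
        for n in range(d + 1):
            q = Fraction(n, d)
            if q.denominator == d:  # skip non-reduced duplicates like 2/4
                yield q
        d += 1

eps = 0.001
lengths = [eps / 2**k for k, _ in zip(range(1, 21), unit_rationals())]
print(sum(lengths))  # stays below eps no matter how many rationals we cover
</code></pre>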
| <p>Note that if you randomly (i.e. uniformly) choose a real number in the interval $[0,1]$ then for <em>every</em> number there is a zero probability that you will pick this number. This does not mean that you did not pick <em>any</em> number at all.</p>
<p>Similarly with the rationals: while infinite, dense, and all that, they are very, very sparse from the standpoint of measure and probability. It is perfectly possible that if you throw countably many darts at the real line you will hit <em>exactly</em> the rationals, each rational exactly once. This scenario is <strong>highly unlikely</strong>, because the rational numbers form a measure zero set.</p>
<p>Probability deals with "<em>what are the odds of that happening?</em>" a priori, not a posteriori. So we are interested in measuring a certain structure a set has, in modern aspects of probability and measure, the rationals have size zero and this means zero probability.</p>
<p>I will leave you with some food for thought: if you ask an arbitrary mathematician to choose <em>any</em> real number from the interval $[0,10]$ there is a good chance they will choose an integer, a slightly worse chance it will be a rational, an even slimmer chance this is going to be an algebraic number, and even less likely a transcendental number. In some respects this goes strongly against measure-theoretic models of a uniform probability on $[0,10]$, but that's just how life is.</p>
|
matrices | <p>I am looking for an intuitive reason for a projection matrix of an orthogonal projection to be symmetric. The algebraic proof is straightforward yet somewhat unsatisfactory.</p>
<p>Take for example another property: $P=P^2$. It's clear that applying the projection one more time shouldn't change anything and hence the equality.</p>
<p>So what's the reason behind $P^T=P$?</p>
| <p>In general, if $P = P^2$, then $P$ is the projection onto $\operatorname{im}(P)$ along $\operatorname{ker}(P)$, so that $$\mathbb{R}^n = \operatorname{im}(P) \oplus \operatorname{ker}(P),$$ but $\operatorname{im}(P)$ and $\operatorname{ker}(P)$ need not be orthogonal subspaces. Given that $P = P^2$, you can check that $\operatorname{im}(P) \perp \operatorname{ker}(P)$ if and only if $P = P^T$, justifying the terminology "orthogonal projection."</p>
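<p>A numerical illustration (a sketch assuming NumPy): an orthogonal projection is idempotent <em>and</em> symmetric, while an oblique projection is idempotent but not symmetric.</p>
<pre><code>import numpy as np

A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # columns span im(P)
P_orth = A @ np.linalg.inv(A.T @ A) @ A.T  # orthogonal projection onto col(A)

P_obl = np.array([[1.0, 1.0], [0.0, 0.0]])  # onto the x-axis along (1, -1)

for P in (P_orth, P_obl):
    print(np.allclose(P @ P, P), np.allclose(P, P.T))
# orthogonal projection: True True; oblique projection: True False
</code></pre>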
| <p>There are some nice and succinct answers already. If you'd like even more intuition, with as little math and as few higher-level linear algebra concepts as possible, consider two arbitrary vectors <span class="math-container">$v$</span> and <span class="math-container">$w$</span>.</p>
<h2>Simplest Answer</h2>
<p>Take the dot product of one vector with the projection of the other vector.
<span class="math-container">$$
(P v) \cdot w
$$</span>
<span class="math-container">$$
v \cdot (P w)
$$</span></p>
<p>In both dot products above, one of the terms (<span class="math-container">$P v$</span> or <span class="math-container">$P w$</span>) lies entirely in the subspace you project onto. Therefore, both dot products ignore every vector component that is <em>not</em> in this subspace - they consider only components <em>in</em> the subspace. This means both dot products are equal to each other, and are in fact equal to:
<span class="math-container">$$
(P v) \cdot (P w)
$$</span></p>
<p>Since <span class="math-container">$(P v) \cdot w = v \cdot (P w)$</span>, it doesn't matter whether we apply the projection matrix to the first or second argument of the dot product operation. Some simple identities then imply <span class="math-container">$P = P^T$</span>, so <span class="math-container">$P$</span> is symmetric (See step 2 below if you aren't familiar with this property).</p>
<h2>Less intuitive Answer</h2>
<p>If the above explanation isn't intuitive, we can use a little more math.</p>
<h1>Step 1.</h1>
<p>First, prove that the two dot products above are equal.</p>
<p>Decompose <span class="math-container">$v$</span> and <span class="math-container">$w$</span>:
<span class="math-container">$$
v = v_p + v_n
$$</span>
<span class="math-container">$$
w = w_p + w_n
$$</span></p>
<p>In this notation the <span class="math-container">$p$</span> subscript indicates the component of the vector in the subspace of <span class="math-container">$P$</span>, and the <span class="math-container">$n$</span> subscript indicates the component of the vector outside (normal to) the subspace of <span class="math-container">$P$</span>.</p>
<p>The projection of a vector lies in a subspace. The dot product of anything in this subspace with anything orthogonal to this subspace is zero. We use this fact on the dot product of one vector with the projection of the other vector:
<span class="math-container">$$
(P v) \cdot w \hspace{1cm} v \cdot (P w)
$$</span>
<span class="math-container">$$
v_p \cdot w \hspace{1cm} v \cdot w_p
$$</span>
<span class="math-container">$$
v_p \cdot (w_p + w_n) \hspace{1cm} (v_p + v_n) \cdot w_p
$$</span>
<span class="math-container">$$
v_p \cdot w_p + v_p \cdot w_n \hspace{1cm} v_p \cdot w_p + v_n \cdot w_p
$$</span>
<span class="math-container">$$
v_p \cdot w_p \hspace{1cm} v_p \cdot w_p
$$</span>
Therefore
<span class="math-container">$$
(Pv) \cdot w = v \cdot (Pw)
$$</span></p>
<h1>Step 2.</h1>
<p>Next, we can show that a consequence of this equality is that the projection matrix P must be symmetric. Here we begin by expressing the dot product in terms of transposes and matrix multiplication (using the identity <span class="math-container">$x \cdot y = x^T y$</span> ):
<span class="math-container">$$
(P v) \cdot w = v \cdot (P w)
$$</span>
<span class="math-container">$$
(P v)^T w = v^T (P w)
$$</span>
<span class="math-container">$$
v^T P^T w = v^T P w
$$</span>
Since v and w can be any vectors, the above equality implies:
<span class="math-container">$$
P^T = P
$$</span></p>
|
logic | <p>I've been reading On Numbers and Games and I noticed that Conway defines addition in his number system in terms of addition. Similarly in the analysis and logic books that I've read (I'm sure that this is not true of all such books) how addition works is assumed. From what I understand the traditional method of building the number system begins with the natural numbers (and zero)</p>
<p>$0:=|\emptyset|$</p>
<p>$1:=|\{\emptyset\}|$</p>
<p>$2:=|\{\emptyset,\{\emptyset\}\}|$</p>
<p>and so forth. In this construction addition could(?) be defined as the disjoint union of the sets associated with the two numbers. Then the integers could be defined as the additive inverse and so forth. Is this the ideal way to do it though, is there a more elegant method?</p>
| <p>You don't have an a priori notion of cardinality, so you cannot really say things like "$0:=|\emptyset|$". In fact, before you can define cardinals you usually define ordinals, and usually the definition of the naturals precedes the definition of the ordinals.</p>
<p>The set-theoretic method begins by using the Axiom of Infinity, which states that there exists at least one inductive set; a set $X$ is <em>inductive</em> if and only if (i) $\emptyset\in X$; and (ii) For all $x$, if $x\in X$, then $s(x) = x\cup\{x\}\in X$. </p>
<p>So, let $X$ be any inductive set. Then we define
$$\mathbb{N} = \bigcap\{ A\subseteq X\mid \text{$A$ is inductive}\}.$$
One can then prove that $\mathbb{N}$ is well defined (it does not depend on the choice of $X$) and satisfies "Peano's Axioms":</p>
<ol>
<li>$\emptyset \in\mathbb{N}$.</li>
<li>If $x\in\mathbb{N}$, then $s(x) \in\mathbb{N}$.</li>
<li>For all $x\in\mathbb{N}$, $s(x)\neq\emptyset$.</li>
<li>For all $x,y\in\mathbb{N}$, if $s(x)=s(y)$, then $x=y$.</li>
<li>If $S\subseteq \mathbb{N}$ is inductive, then $S=\mathbb{N}$.</li>
</ol>
<p>(In fact, 3 and 4 are just consequences of the definition of $s(x)$). We then define "$0$" to mean $\emptyset$, "$1$" to mean $s(0)$, "$2$" to mean $s(1)=s(s(0))$, etc.</p>
<p>Alternatively, you can begin with Peano's Axioms. Here, there is a primitive notion called "natural number", and a primitive symbol called $0$. We also have a primitive function $s$. The Peano Axioms would be:</p>
<ol>
<li>$0$ is a natural number.</li>
<li>If $n$ is a natural number, then $s(n)$ is a natural number.</li>
<li>For all natural numbers $n$, $s(n)\neq 0$.</li>
<li>For all natural numbers $n$ and $m$, if $s(n)=s(m)$, then $n=m$.</li>
<li><em>Axiom Schema of Induction.</em> If $\Phi$ is a predicate such that $\Phi(0)$ is true, and for all $n$, $\Phi(n)\Rightarrow \Phi(s(n))$, then for all natural numbers $k$, $\Phi(k)$.</li>
</ol>
<p>(You can also begin with $1$ instead of $0$; I use $0$ because it then parallels the set-theoretic construction). We then define "$1$" to mean $s(0)$; and "$2$" to mean $s(1)=s(s(0))$, etc.</p>
<p>We then need the Recursion Theorem:</p>
<p><strong>Recursion Theorem.</strong> Given a set $X$, an element $a\in X$, and a function $f\colon X\to X$, there exists a unique function $F\colon\mathbb{N}\to X$ such that $F(0)=a$ and $F(s(n)) = f(F(n))$ for all $n\in\mathbb{N}$.</p>
<p>Once we have these definitions and theorem, we can start defining addition. Fix $n\in\mathbb{N}$. I'm going to define "add $n$", $+_n\colon \mathbb{N}\to\mathbb{N}$ by letting
\begin{align*}
+_n(0) &= n,\\
+_n(s(m)) &= s(+_n(m)).
\end{align*}
Or, in usual notation,
\begin{align*}
n+0& = n,\\
n+s(m) &= s(n+m).
\end{align*}</p>
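<p>To see the recursion in action, here is a minimal Python sketch (the tuple encoding is purely illustrative): $0$ is the empty tuple and $s(n)$ wraps $n$ in a tuple.</p>
<pre><code># 0 is the empty tuple; s(n) = (n,) plays the role of the successor.
def zero():
    return ()

def s(n):
    return (n,)

def add(n, m):
    # n + 0 = n            (base case)
    # n + s(m) = s(n + m)  (recursive case)
    if m == ():
        return n
    return s(add(n, m[0]))

def to_int(n):                    # only for printing the result
    return 0 if n == () else 1 + to_int(n[0])

two = s(s(zero()))
three = s(two)
print(to_int(add(two, three)))    # 5
</code></pre>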
<p>With these definitions, we have:</p>
<p><strong>Theorem.</strong> For all $n\in\mathbb{N}$, $n+0=0+n=n$.</p>
<p><em>Proof.</em> Let $S=\{n\in\mathbb{N}\mid n+0=0+n=n\}$. Note that $0\in S$, since $0+0 = 0$ by the definition of addition. Now assume that $k\in S$; that means that $k+0 = 0+k = k$. Then
$0+s(k) = s(0+k) = s(k)$ (first equality by the definition of addition with $0$, second by the induction hypothesis). And by the definition of addition with $s(k)$, we have $s(k)+0 = s(k)$. Therefore, $k\in S$ implies $s(k)\in S$. Thus, $S=\mathbb{N}$, as desired. <strong>QED</strong></p>
<p><strong>Theorem.</strong> For all $n\in\mathbb{N}$, $s(n)=n+1$</p>
<p><em>Proof.</em> Let $S=\{n\in\mathbb{N}\mid s(n)=n+1\}$. First, $0\in S$, since $s(0) = 1 = 0+1$, by the previous theorem. Assume that $k\in S$; that means that $s(k)=k+1$. Then $s(s(k)) = s(s(k)+0) = s(k)+s(0) = s(k)+1$. So $k\in S$ implies $s(k)\in S$, hence $S=\mathbb{N}$. <strong>QED</strong></p>
<p><strong>Theorem.</strong> For all $\ell,n,m\in\mathbb{N}$, $\ell+(m+n) = (\ell+m)+n$.</p>
<p><em>Proof.</em> Fix $\ell$ and $m$. Let $S=\{n\in\mathbb{N}\mid \ell+(m+n)=(\ell+m)+n\}$. We have $0\in S$, since
$$\ell + (m+0) = \ell + m = (\ell+m) + 0.$$
Now assume that $k\in S$; that means that $(\ell+m)+k = \ell+(m+k)$. We prove that $s(k)\in S$. We have:
$$(\ell+m)+s(k) = s((\ell+m)+k) = s(\ell+(m+k)) = \ell+s(m+k) = \ell+(m+s(k)).$$
Thus, if $k\in S$ then $s(k)\in S$. Hence, $S=\mathbb{N}$. <strong>QED</strong></p>
<p><strong>Lemma.</strong> For all $n\in\mathbb{N}$, $1+n = n+1$.</p>
<p><em>Proof.</em> Let $S=\{n\in\mathbb{N}\mid 1+n=n+1\}$. Then $0\in S$. Suppose that $k\in S$, so that $1+k = k+1 = s(k)$. Then we have:
$$1+s(k) = s(1+k) = s(k+1) = s(k+s(0)) = s(s(k+0)) = s(s(k)) = s(k)+1.$$
Thus, $S=\mathbb{N}$. <strong>QED</strong></p>
<p><strong>Theorem.</strong> For all $n,m\in\mathbb{N}$, $n+m=m+n$. </p>
<p><em>Proof.</em> Fix $m$, and let $S=\{n\in\mathbb{N}\mid m+n=n+m\}$. First, $0\in S$, since $m+0=0+m$. Also, $1\in S$ by the previous lemma. Now assume that $k\in S$. Then $m+k=k+m$. To show that $s(k)\in S$, we have:
\begin{align*}
m+s(k) &= s(m+k) = s(k+m) = (k+m)+1 = k+(m+1) = k+(1+m)\\
&= (k+1)+m = s(k)+m.
\end{align*}
Thus, $S=\mathbb{N}$. <strong>QED</strong></p>
<p>And so on. We can then define multiplication similarly, by fixing $n$ and defining
\begin{align*}
n\times 0 &= 0\\
n\times s(m) &= (n\times m) + n,
\end{align*}
and prove the usual properties of multiplication inductively. Then we can define exponentiation also recursively: fix $n$; then
\begin{align*}
n^0 & = 1\\
n^{s(m)} &= n^m\times n.
\end{align*}
We later define the order among the natural numbers by
$$a\leq b\Longleftrightarrow \exists n\in\mathbb{N}(a+n=b),$$
and prove the usual properties.</p>
<p>Later, we can construct $\mathbb{Z}$ from $\mathbb{N}$, $\mathbb{Q}$ from $\mathbb{Z}$, $\mathbb{R}$ from $\mathbb{Q}$, $\mathbb{C}$ from $\mathbb{R}$, etc. See for example my <a href="https://math.stackexchange.com/questions/14828/set-theoretic-definition-of-numbers/14842#14842">answer to this previous question</a>.</p>
| <p>Jacob, there are two ways of defining addition of natural numbers in set theory.</p>
<p>The first is the one you indicate: The sum of two numbers is the size of their disjoint union. In general, you define <em>cardinal addition</em> this way: If $\kappa$ and $\lambda$ are cardinals (sizes of sets), say $\kappa=|A|$ and $\lambda=|B|$, then $\kappa+\lambda=|A\sqcup B|$, where $\sqcup$ denotes disjoint union. </p>
<p>The second way is to define <em>ordinal addition</em>, considering natural numbers as ordered sets: $0=\emptyset$, $1=\{0\}$, $2=\{0,1\}$, etc, with $n=\{0,\dots,n-1\}$ ordered in the usual way, which actually corresponds to membership: $a<b$ iff $a\in b$. Here we identify two linearly ordered sets if they are order isomorphic. The relevant notion here is that of sum of ordered sets: If $(A,<)$ and $(B,\prec)$ are linear orders, their sum $A+B$ is the set $A\sqcup B$ ordered by $a$ is less than $b$ iff $a,b\in A$ and $a<b$, or $a,b\in B$ and $a\prec b$, or $a\in A$ and $b\in B$. </p>
<p>One can check that these two notions coincide when $A$, $B$ are finite sets (i.e., when we look at $n+m$ for $n,m$ finite). One has to be careful, because the notions are different in general for infinite sets. For example, if $\aleph_0=|{\mathbb N}|$, then $\aleph_0+1=1+\aleph_0=\aleph_0$. However, if we think of $\aleph_0$ as the ordered set $\{0<1<\dots\}$, then $1+\aleph_0=\aleph_0$ (remember, we identify sets if they are order isomorphic). However, $\aleph_0+1$ is strictly larger than $\aleph_0$ as ordered sets.</p>
<p>There is a third option in the context of arithmetic: All we need is the notion of successor. The successor of 0 is 1, of 1 is 2, etc. Denote by $S(n)$ the successor of $n$ (in general, $S(n)=n+1$, of course). Note that "successor of $n$" is an order-theoretic notion: The smallest number larger than $n$. Then we define $n+m$ recursively: $n+0=n$, and $n+S(m)=S(n+m)$.</p>
<hr>
<p>We can go on, and define, for example, multiplication and exponentiation in these three ways as well. Cardinal multiplication is the size of the Cartesian product. Cardinal exponentiation $|A|^{|B|}$ is the size of the set of functions from $B$ to $A$. </p>
<p>Ordinal multiplication $\alpha\beta$ means: "$\beta$ copies of $\alpha$". So $\aleph_0\cdot 2=\aleph_0+\aleph_0$ while $2\cdot\aleph_0=\aleph_0$. </p>
<p>Ordinal exponentiation is a bit trickier to define: For ordered sets $\alpha,\beta$ such that $\alpha$ has a minimum $0$, define $F(\alpha,\beta)$ as the set consisting of those functions $f:\beta\to\alpha$ such that there are only finitely many $\xi$ such that $f(\xi)\ne0$. </p>
<p>For functions $f,g$ in $F(\alpha,\beta)$ set $f\triangleleft g$ iff $f(\xi)<g(\xi)$ (in the order of $\alpha$) for $\xi$ largest (in the order of $\beta$) such that $f(\xi)\ne g(\xi)$.</p>
<p>Then $\alpha^\beta$ is defined as the order type of $(F(\alpha,\beta),\triangleleft)$.</p>
<p>Again, these notions coincide for finite numbers, but differ wildly for infinite sets.</p>
<p>The recursive definitions are given by $n\cdot 0=0$ and $n\cdot S(m)=nm+n$, and $n^0=1$ (even if $n=0$) and $n^{S(m)}=n^m\cdot n$.</p>
|
logic | <p>Several times in my studies, I've come across Hilbert-style proof systems for various systems of logic, and when an author says, "<strong>Theorem:</strong> $\varphi$ is provable in system $\cal H$," or "<strong>Theorem:</strong> the following axiomatizations of $\cal H$ are equivalent: ...," I usually just take the author's word as an oracle instead of actually trying to construct a Hilbert-style proof (can you blame me?). However, I would like to change this habit and at least be in a position where I <em>could</em> check these claims in principle.</p>
<p>On the few occasions where I actually did try to construct a (not-so-short) Hilbert-style proof from scratch, I found it easier to first construct a proof in the corresponding natural deduction system, show that the natural deduction system and the Hilbert system were equivalent, and then try to deconstruct the natural deduction system into a Hilbert-style proof (in the style of Anderson and Belnap). The problem with that (apart from being tortuous) is that I would need the natural deduction system first, and it's not always obvious to me how to construct the natural deduction system given the axioms (sometimes it's not so bad; it's easy to see, for instance, that $(A \rightarrow (A \rightarrow B)) \rightarrow (A \rightarrow B)$ corresponds to contraction; but it's not always <em>that</em> easy...).</p>
<p><strong>So I'm wondering</strong>: are there "standard tricks" for constructing Hilbert-style proofs floating around out there? Or are there tricks for constructing a corresponding natural deduction system given a set of Hilbert axioms? Or is it better to just accept proof-by-oracle?</p>
| <p>Regarding "tricks for constructing a corresponding natural deduction system given a set of Hilbert axioms": Constructing natural deduction systems corresponding to axiomatic propositional or first-order systems isn't too hard when most of the axioms have fairly clear 'meanings', but I think it gets a bit trickier with nonclassical logics. <a href="http://www.ualberta.ca/~francisp/papers/PellHazenSubmittedv2.pdf">Pelletier & Hazen's <em>Natural Deduction</em></a> gives a good overview of some different types of natural deduction systems. See, in particular, §2.3, pp. 6–12, <em>The Beginnings of Natural Deduction: Jaśkowski and Gentzen (and Suppes) on Representing Natural Deduction Proofs</em>. I think that there are three types of natural deduction systems that should be considered (in order of increasing ease of translation from the natural deduction system to the axiomatic system): Gentzen-style; Fitch-style (Jaśkowski's first method); and Suppes-style (Jaśkowski's second method).</p>
<h2>Gentzen-style Natural Deduction</h2>
<p>Gentzen-style natural deduction use proof trees composed of instances of inference rules. Inference rules typically look like this:</p>
<p>$$
\begin{array}{c} A \quad B \\ \hline A \land B \end{array}\land I \qquad
\begin{array}{c} A \\ \hline A \lor B \end{array}\lor I \qquad
\begin{array}{c} [A] \\ \vdots \\ B \\ \hline A \to B \end{array}\to I
$$</p>
<p>A significant difference between axiomatic systems and Gentzen-style natural deduction is that intermediate deductions can be cited by any later line in an axiomatic system, but can only be used <em>once</em> in a proof tree. For instance, to prove $(Q \land P) \land P$ from $P \land Q$ requires three instances of the assumption $P \land Q$ in a proof tree:</p>
<p>$$
\frac{\displaystyle
\frac{\displaystyle
\frac{[P \land Q]}{Q} \quad \frac{[P \land Q]}{P}}{Q \land P}
\quad
\frac{[P \land Q]}{P} }{
(Q \land P) \land P }
$$</p>
<p>There's no way to reuse the intermediate deduction of $P$ from $P \land Q$. A naïve translation of a proof tree into corresponding axiomatic deductions will probably be pretty verbose with lots of repeated work (but it would be easy to check for and eliminate redundant deductions in the axiomatic proof). A very nice benefit of these systems, however, is that it is very easy to determine where a rule can be applied, and whether a formula is "in-scope" for use as a premise. Fitch-style and Suppes-style systems are more complicated in this regard.</p>
<h2>Fitch-style Natural Deduction Systems</h2>
<p><a href="http://en.wikipedia.org/wiki/Fitch-style_calculus">Fitch-style natural deduction systems</a> for propositional logic have a type of subproof for conditional introduction. These capture "<em>Suppose</em> $\phi$. … $\psi$. Therefore (no longer supposing $\phi$), $\phi \to \psi$." Even in this simplest type of subproof, the natural deduction system has proof-construction rules about how lines in subproofs may be cited (e.g., a line outside of a subproof can't cite lines within the subproof). Still, unlike proof trees, some reusability is gained. For instance, Barwise & Etchemendy's <em>Fitch</em> (from <em>Language, Proof, and Logic</em>) would simplify the proof by reusing the deduction of $P$ from $P \land Q$:</p>
<ol>
<li><ul>
<li>$P \land Q$ Assume.</li>
</ul></li>
<li><ul>
<li>$P$ by conjunction elimination with 1.</li>
</ul></li>
<li><ul>
<li>$Q$ by conjunction elimination with 1.</li>
</ul></li>
<li><ul>
<li>$Q \land P$ by conjunction introduction with 2 and 3.</li>
</ul></li>
<li><ul>
<li>$(Q \land P) \land P$ by conjunction introduction with 2 and 4.</li>
</ul></li>
</ol>
<p>Some presentations allow for conditional introduction from <em>any</em> top-level line within a subproof. In these presentations, not only can intermediate deductions be reused, but entire subproofs:</p>
<ol>
<li><ul>
<li>$P \land Q$ Assume.</li>
</ul></li>
<li><ul>
<li>$P$ by conjunction elimination with 1.</li>
</ul></li>
<li><ul>
<li>$Q$ by conjunction elimination with 1.</li>
</ul></li>
<li>$(P \land Q) \to P$ by conditional introduction with 1–3.</li>
<li>$(P \land Q) \to Q$ by conditional introduction with 1–3.</li>
</ol>
<p>In the first-order case, not only are there subproofs for conditional introduction, but there are subproofs for introducing new 'temporary' individuals (e.g., generic instances for universal introduction, or witnesses for existential elimination). These subproofs require special rules about where the individual of concern may appear.</p>
<p>Kenneth Konyndyk's <em>Introductory Modal Logic</em> gives Fitch-style natural deduction systems for <strong>T</strong>, <strong>S4</strong>, and <strong>S5</strong>. In addition to a condtional introduction, these have modal subproofs for necessity-introduction, and those subproofs require special rules for reiterating formulae into the subproof. For instance, in <strong>T</strong>, only a modal formula $\Box \phi$ can be reiterated into a subproof, and when it does, the $\Box$ is dropped. That is, when $\Box \phi$ is outside a subproof, $\phi$ can be reiterated in (but only through one 'layer' of subproof). In <strong>S4</strong>, $\Box\phi$ can still be reiterated into a modal subproof, but the $\Box$ need not be dropped. In <strong>S5</strong> both $\Box\phi$ and $\Diamond\phi$ can be reiterated into a modal subproof, and neither modality needs to be dropped. </p>
<p>The point of all this is that in the propositional case, many Hilbert-style axioms correspond nicely to Fitch-style natural deduction rules, but it seems that the nicest cases are those for boolean connectives. E.g., </p>
<p>$$ A \to \left( A \lor B \right) $$</p>
<p>and</p>
<p>$$ A \to \left( B \to (A \land B)\right) $$</p>
<p>turn into "left disjunction introduction" and "conjunction introduction" pretty easily. However, more complicated axiom schemata (such as what would be used for universal introduction, or modal necessitation) that really require new types of subproofs for good natural deduction treatment are trickier to handle nicely.</p>
<h2>Suppes-style Natural Deduction Systems</h2>
<p>There are, of course, other formalizations of natural deduction than Fitch's. Some of these might make for easier translation from axiomatic systems. For instance, consider a proof of $(A \to (B \to (A \land B))) \land (B \to (A \to (A \land B)))$. In a Fitch-style proof, the left conjunct, $A \to \dots$, would have to be proved in a subproof assuming $A$ that contains a subproof assuming $B$:</p>
<ol>
<li><ul>
<li>Assume $A$.</li>
</ul></li>
<li><ul>
<li><ul>
<li>Assume $B$.</li>
</ul></li>
</ul></li>
<li><ul>
<li><ul>
<li>$A \land B$ by conjunction introduction with 1 and 2.</li>
</ul></li>
</ul></li>
<li><ul>
<li>$B \to (A \land B)$ by conditional introduction with 2–3.</li>
</ul></li>
<li>$A \to (B \to (A \land B))$ by conditional introduction with 1–4.</li>
</ol>
<p>Then another five lines are needed to get $B \to (A \to (A \land B))$, and an eleventh for the final conjunction introduction. In Suppes's system, this is shorter (eight lines) because <em>any</em> in-scope assumption can be discharged by conditional introduction, so we can "get out of the subproofs" in different orders:</p>
<ol>
<li>{1} $A$ Assume.</li>
<li>{2} $B$ Assume.</li>
<li>{1,2} $A \land B$ $\land$-introduction with 1 and 2.</li>
<li>{1} $B \to (A \land B)$ $\to$-introduction with 3.</li>
<li>{} $A \to (B \to (A \land B))$ $\to$-introduction with 4.</li>
<li>{2} $A \to (A \land B)$ $\to$-introduction with 3.</li>
<li>{} $B \to (A \to (A \land B))$ $\to$-introduction with 6.</li>
<li>{} $(A \to (B \to (A \land B))) \land (B \to (A \to (A \land B)))$ $\land$-introduction with 5 and 7.</li>
</ol>
<p>(Note: some implementations of Fitch's system allow this for conditional introduction as well. E.g., in <strong>Fitch</strong> from Barwise and Etchemendy's <em>Language, Proof and Logic</em> conditional introduction can cite a subproof that starts with an assumption $A$ and contains lines $B$ and $C$ to infer both $A \to B$ and $A \to C$.)</p>
<p>To use this approach, each inference rule must also specify how the set of tracked assumptions for its conclusion is determined based on the premises of the rule. For most rules, the assumptions of a conclusion are just the union of the assumptions of the premises. Conditional introduction is the obvious exception. This approach also specifies that only lines with empty assumption sets are <em>theorems</em>.</p>
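<p>A minimal Python sketch of this bookkeeping (all names here are made up for illustration): each line carries a frozen set of assumption labels, conjunction introduction takes unions, and conditional introduction discharges one label.</p>
<pre><code>class Line:
    def __init__(self, formula, assumptions):
        self.formula = formula
        self.assumptions = frozenset(assumptions)

def assume(label, formula):
    return Line(formula, {label})

def and_intro(a, b):
    # default propagation: union of the premises' assumption sets
    return Line(("and", a.formula, b.formula), a.assumptions | b.assumptions)

def cond_intro(assumption, premise):
    # discharge one in-scope assumption label
    assert assumption.assumptions <= premise.assumptions
    return Line(("->", assumption.formula, premise.formula),
                premise.assumptions - assumption.assumptions)

l1 = assume(1, "A")                # {1}   A
l2 = assume(2, "B")                # {2}   B
l3 = and_intro(l1, l2)             # {1,2} A & B
l4 = cond_intro(l2, l3)            # {1}   B -> (A & B)
l5 = cond_intro(l1, l4)            # {}    A -> (B -> (A & B))
l6 = cond_intro(l1, l3)            # {2}   A -> (A & B)
l7 = cond_intro(l2, l6)            # {}    B -> (A -> (A & B))
print(l5.assumptions == frozenset() and l7.assumptions == frozenset())
</code></pre>
<p>Note how lines 4–5 and 6–7 discharge the same two assumptions in opposite orders, which is exactly the flexibility used in the eight-line proof above.</p>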
<p>This "tracking" approach, though, can be used for other properties too. The same considerations apply: each rule must specify how the tracked properties of the conclusion are computed from the premises, and the proof system must define which sentences are theorems.</p>
<p>For instance, in a system for first-order logic, the set of new individuals (for universal generalization or existential elimination) can be tracked, with most rules giving their conclusion the "union of the premises' individuals", with existential elimination and universal introduction the exceptions. Theorems are those sentences with an empty set of individuals and an empty set of assumptions.</p>
<p>This approach works nicely for modal logics, too. A Suppes-style proof system for <strong>K</strong>, for instance, in addition to tracking assumptions, tracks a "modal context", which is a natural number or $\infty$. The modal context indicates how many "necessitation contexts" we're in (intuitively, how many times we should be able to apply necessity introduction to a formula). In terms of Kripke semantics, the modal context is how far removed from the designated world we are. Sentences without any assumptions have context $\infty$, corresponding to the $\vdash \phi / \vdash \Box\phi$ rule. The inference rules require that their premises have compatible modal contexts (i.e., all non-$\infty$ modal contexts are the same). The default modal propagation is that the context of the conclusion is the same as the minimum context of the premises. The exceptions are that $\Box$ introduction subtracts 1, and $\Box$ elimination adds 1. Theorems are those sentences that have no assumptions and modal context $\infty$. </p>
<ol>
<li>{1} (0) $\Box P$ Assume.</li>
<li>{2} (0) $\Box (P \to Q)$ Assume.</li>
<li>{1} (1) $P$ $\Box$-elimination with 1.</li>
<li>{2} (1) $P \to Q$ $\Box$-elimination with 2.</li>
<li>{1,2} (1) $Q$ $\to$-elimination with 3 and 4.</li>
<li>{1,2} (0) $\Box Q$ $\Box$-introduction with 5.</li>
<li>{2} (0) $\Box P \to \Box Q$ $\to$-introduction with 6.</li>
<li>{} ($\infty$) $\Box(P \to Q) \to (\Box P \to \Box Q)$ $\to$-introduction with 7.</li>
</ol>
<p>In Suppes-style proof systems, the question is no longer about reiteration rules, but about property tracking and propagation rules. The purposes are similar, but in practice, certain kinds of axiomatic systems might be easier to translate into one kind or another.</p>
| <p>It's a long list of postings, a lot of it in Polish notation that is hard to read. Anyway, the systematic way to do it is to define a translation algorithm from derivations in ND to derivations in axiomatic logic. This is done in my recent "Elements of Logical Reasoning" section 3.6.(c). ND is there written in SC notation with the open assumptions displayed at the left of a turnstile. </p>
<p>I don't know if anyone ever mastered fully axiomatic logic. Russell was very bad at it, didn't even notice that one of his axioms was in fact a theorem! I needed the translation to be able to produce some of the more elaborate axiomatic derivations. Would be nice if someone implemented the algorithm. </p>
|
logic | <p>Gödel states and proves his celebrated Incompleteness Theorem (which is a statement about all axiom systems). What is his own axiom system of choice? ZF, ZFC, Peano or what? He surely needs one, doesn't he?</p>
| <p>Gödel's paper was written in the same way as essentially every other mathematical paper. To prove a theorem <em>about</em> a formal system does not require one to prove that theorem <em>within</em> a formal system. Gödel argued in his paper that the incompleteness theorem should be viewed as a result in elementary number theory, and he certainly proved the incompleteness theorem to the same standard of rigor as other results in that area. </p>
<p>Of course, we now know that the incompleteness theorem can be proved in <em>extremely</em> weak systems, such as PRA (primitive recursive arithmetic), a theory much weaker than Peano arithmetic. But, at the time that Gödel wrote his paper, the definition of PRA had never been formulated. </p>
| <p>Even Gödel's theorem, as he stated it at the time, was for a particular formal system "$P$" rather than for effective formal systems in general, because the definition of an "effective formal system" had not yet been compellingly formulated. Remarkably, it took well into the 20th century before many of the now-standard concepts of logic were clearly understood. </p>
| <p>As a footnote to Carl Mummert's terrific answer, it is worth adding the following remark.</p>
<p>Yes, Gödel was giving an informal mathematical proof "from outside" (as it were) about certain formal systems. And yes, he is not explicit about what exactly his proof requires to go through. </p>
<p>However, it <em>was</em> very important to him at the time that his proof used only very elementary constructive reasoning that would be acceptable even to e.g. intuitionists who did not accept full classical mathematics, and equally to Hilbertian finitists who put even more stringent limits on what counted as quite indisputable mathematical reasoning. (After all, he couldn't effectively challenge Hilbert's program by using reasoning that wouldn't be accepted as unproblematic by a Hilbertian!) </p>
<p>So although Gödel doesn't explicitly set out what exactly he is assuming, we are supposed to be able to see by inspection that nothing worryingly infinitary or otherwise suspect is going on, and that -- although his construction of the Gödel sentence for his system $P$ is beautifully ingenious -- once we spot the trick, the reasoning that shows that the sentence is formally undecidable in $P$ is as uncontentious as can be (even by the lights of very contentious intuitionists or finitists!), and falls <em>way</em> short of needing the oomph of classical ZF (or even full classical PA) to formalize. </p>
|
logic | <p>I'm not sure whether this is the best place to ask this, but is the XOR binary operator a combination of AND+NOT operators?</p>
| <p>Yes. In fact, any logical operation can be built from the NAND operator, where </p>
<blockquote>
<p>A NAND B = NOT(A AND B)</p>
</blockquote>
<p>See for instance <a href="http://en.wikipedia.org/wiki/NAND_logic">http://en.wikipedia.org/wiki/NAND_logic</a>, which gives </p>
<blockquote>
<p>A XOR B = (A NAND (A NAND B)) NAND (B NAND (A NAND B))</p>
</blockquote>
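<p>This identity is easy to verify exhaustively; here is a minimal truth-table check in Python (purely illustrative):</p>
<pre><code>def nand(a, b):
    return not (a and b)

def xor_from_nand(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# check the identity on all four boolean inputs
for a in (False, True):
    for b in (False, True):
        assert xor_from_nand(a, b) == (a != b)   # a != b is XOR on booleans
print("XOR built from NAND agrees on all four inputs.")
</code></pre>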
<p>Digression: There is a story about certain military devices being designed using only NAND gates, so that only one part needs to be certified, stocked as spares, etc. I don't know whether it's actually true.</p>
| <p><img src="https://i.sstatic.net/3YG4z.png" alt="enter image description here"></p>
|
matrices | <p>Let <span class="math-container">$A$</span> be an <span class="math-container">$n \times n$</span> matrix and let <span class="math-container">$\Lambda$</span> be an <span class="math-container">$n \times n$</span> diagonal matrix. Is it always the case that <span class="math-container">$A\Lambda = \Lambda A$</span>? If not, when is it the case that <span class="math-container">$A \Lambda = \Lambda A$</span>?</p>
<p>If we restrict the diagonal entries of <span class="math-container">$\Lambda$</span> to being equal (i.e. <span class="math-container">$\Lambda = \text{diag}(a, a, \dots, a)$</span>), then it is clear that <span class="math-container">$A\Lambda = AaI = aIA = \Lambda A$</span>. However, I can't seem to come up with an argument for the general case.</p>
| <p>It is possible that a diagonal matrix $\Lambda$ commutes with a matrix $A$ when $A$ is symmetric and $A \Lambda$ is also symmetric. We have</p>
<p>$$
\Lambda A = (A^{\top}\Lambda^\top)^{\top} = (A\Lambda)^\top = A\Lambda
$$</p>
<p>The above trivially holds when $A$ and $\Lambda$ are both diagonal.</p>
| <p>A diagonal matrix will not commute with every matrix.</p>
<p>$$
\begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}*\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}=\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$</p>
<p>But:</p>
<p>$$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} * \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix} = \begin{pmatrix} 0 & 2 \\ 0 & 0 \end{pmatrix}.$$</p>
|
logic | <p>What's the meaning of the double turnstile symbol in logic or mathematical notation? :</p>
<blockquote>
<p>$\models$</p>
</blockquote>
| <p>Just to enlarge on Harry's answer:</p>
<p>Your symbol denotes one of two specified notions of implication in formal logic</p>
<p><span class="math-container">$\vdash$</span> - the <strong>turnstile</strong> symbol denotes <strong>syntactic</strong> implication (syntactic here means related to syntax, the structure of a sentence), where the 'algebra' of the logical system in play (for example <a href="http://en.wikipedia.org/wiki/Sentential_calculus" rel="nofollow noreferrer">sentential calculus</a>) allows us to 'rearrange and cancel' the stuff we know on the left into the thing we want to prove on the right.</p>
<p>An example might be the classic "all men are mortal <span class="math-container">$\wedge$</span> socrates is a man <span class="math-container">$\vdash$</span> socrates is mortal" ('<span class="math-container">$\wedge$</span>' of course here just means 'and'). You can almost imagine cancelling out the 'man bit' on the left to just give the sentence on the right (although the truth may be more complex...).</p>
<hr />
<p><span class="math-container">$\models$</span> - the <strong>double turnstile</strong>, on the other hand, is not so much about algebra as meaning (formally it denotes <strong>semantic</strong> implication) - it means that any interpretation of the stuff we know on the left must have the corresponding interpretation of the thing we want to prove on the right true.</p>
<p>An example would be if we had an infinite set of sentences: $\Gamma$:= {"1 is lovely", "2 is lovely", ...} in which all numbers appear, and the sentence A = "the natural numbers are precisely {1,2,...}" listing all numbers. Any interpretation would give us B="all natural numbers are lovely". So $\Gamma$, A $\models$ B.</p>
<hr />
<p>Now, the goal of any logician trying to set up a formal system is to have <span class="math-container">$\Gamma \vdash A \iff \Gamma \models A$</span>, meaning that the 'algebra' must line up with the interpretation, and this is not something we can take as given. Take the second example above- can we be sure that algebraic operations can 'parse' those infinitely many sentences and make the simple sentence on the right?? (this is to do with a property called <strong>compactness</strong>)</p>
<p>The goal can be split into two distinct subgoals:</p>
<p><strong>Soundness:</strong> <span class="math-container">$A \vdash B \Rightarrow A \models B$</span></p>
<p><strong>Completeness:</strong> <span class="math-container">$A \models B \Rightarrow A \vdash B$</span></p>
<p>Where the first stops you proving things that aren't true when we interpret them and the second means that everything we know to be true on interpretation, we must be able to prove.</p>
<p>Propositional (sentential) calculus, for example, can be proved complete, as can first-order logic (Gödel's lesser-known, but celebrated, completeness theorem); but for stronger systems Gödel's incompleteness theorem gives us a terrible choice between the two.</p>
<hr />
<p><strong>In summary:</strong> The interplay of meaning and axiomatic machine mathematics, captured by the difference between <span class="math-container">$\models$</span> and <span class="math-container">$\vdash$</span>, is a subtle and interesting thing.</p>
| <p><span class="math-container">$\models$</span> is also known as the <a href="https://planetmath.org/SatisfactionRelation" rel="nofollow noreferrer">satisfication relation</a>. For a structure <span class="math-container">$\mathcal{M}=(M,I)$</span> and an <span class="math-container">$\mathcal{M}$</span>-assignment <span class="math-container">$\nu$</span>, <span class="math-container">$(\mathcal{M},\nu)\models \varphi$</span> means that the formula <span class="math-container">$\varphi$</span> is true with the particular assignment <span class="math-container">$\nu$</span>.<br />
See <a href="http://www.trinity.edu/cbrown/topics_in_logic/struct/node2.html" rel="nofollow noreferrer">http://www.trinity.edu/cbrown/topics_in_logic/struct/node2.html</a></p>
|
probability | <p>I gave my friend <a href="https://math.stackexchange.com/questions/42231/obtaining-irrational-probabilities-from-fair-coins">this problem</a> as a brainteaser; while her attempted solution didn't work, it raised an interesting question.</p>
<p>I flip a fair coin repeatedly and record the results. I stop as soon as the number of heads is equal to twice the number of tails (for example, I will stop after seeing HHT or THTHHH or TTTHHHHHH). What's the probability that I never stop?</p>
<p>I've tried to just compute the answer directly, but the terms got ugly pretty quickly. I'm hoping for a hint towards a slick solution, but I will keep trying to brute force an answer in the meantime.</p>
| <blockquote>
<p>The game stops with probability $u=\frac34(3-\sqrt5)=0.572949017$.</p>
</blockquote>
<p>See the end of the post for generalizations of this result, first to every asymmetric heads-or-tails games (Edit 1), and then to every integer ratio (Edit 2).</p>
<hr>
<p><strong>To prove this</strong>, consider the random walk which goes two steps to the right each time a tail occurs and one step to the left each time a head occurs. Then the number of heads is exactly twice the number of tails each time the walk is back at its starting point (and only then). In other words, the probability that the game never stops is $1-u$ where $u=P_0(\text{hits}\ 0)$ for the random walk with equiprobable steps $+2$ and $-1$.</p>
<p>The classical one-step analysis of hitting times for Markov chains yields $2u=v_1+w_2$ where, for every positive $k$, $v_k=P_{-k}(\text{hits}\ 0)$ and $w_k=P_{k}(\text{hits}\ 0)$. We first evaluate $w_2$ then $v_1$.</p>
<p>The $(w_k)$ part is easy: the only steps to the left are $-1$ steps hence to hit $0$ starting from $k\ge1$, the walk must first hit $k-1$ starting from $k$, then hit $k-2$ starting from $k-1$, and so on. These $k$ events are equiprobable hence $w_k=(w_1)^k$. Another one-step analysis, this time for the walk starting from $1$, yields
$$
2w_1=1+w_3=1+(w_1)^3
$$
hence $w_k=w^k$ where $w$ solves $w^3-2w+1=0$. Since $w\ne1$, $w^2+w=1$ and since $w<1$, $w=\frac12(\sqrt5-1)$.</p>
<p>Let us consider the $(v_k)$ part. The random walk has a drift to the right hence its position converges to $+\infty$ almost surely. Let $k+R$ denote the first position visited on the right of the starting point $k$. Then $R\in\{1,2\}$ almost surely, the distribution of $R$ does not depend on $k$ because the dynamics is invariant by translations, and
$$
v_1=r+(1-r)w_1\quad\text{where}\ r=P_{-1}(R=1).
$$
Now, starting from $0$, $R=1$ implies that the first step is $-1$ hence $2r=P_{-1}(A)$ with $A=[\text{hits}\ 1 \text{before}\ 2]$. Consider $R'$ for the random walk starting at $-1$. If $R'=2$, $A$ occurs. If $R'=1$, the walk is back at position $0$ hence $A$ occurs with probability $r$. In other words, $2r=(1-r)+r^2$, that is, $r^2-3r+1=0$. Since $r<1$, $r=\frac12(3-\sqrt5)$ (hence $r=1-w$).</p>
<p>Plugging these values of $w$ and $r$ into $v_1$ and $w_2$ yields the value of $u$.</p>
<hr>
<p><strong>Edit 1</strong> Every asymmetric random walk which performs elementary steps $+2$ with probability $p$ and $-1$ with probability $1-p$ is transient to $+\infty$ as long as $p>\frac13$ (and naturally, for every $p\le\frac13$ the walk hits $0$ with full probability). In this regime, one can compute the probability $u(p)$ to hit $0$. The result is the following.</p>
<blockquote>
<p>For every $p$ in $(\frac13,1)$, $u(p)=\frac32\left(2-p-\sqrt{p(4-3p)}\right).$</p>
</blockquote>
<p>Note that $u(p)\to1$ when $p\to\frac13$ and $u(p)\to0$ when $p\to1$, as was to be expected.</p>
<hr>
<p><strong>Edit 2</strong>
Coming back to symmetric heads-or-tails games, note that, for any fixed integer $N\ge2$, the same techniques apply to compute the probability $u_N$ to reach $N$ times more tails than heads. </p>
<p>One gets $2u_N=E(w^{R_N-1})+w^N$ where $w$ is the unique solution in $(0,1)$ of the polynomial equation $2w=1+w^{1+N}$, and the random variable $R_N$ is almost surely in $\{1,2,\ldots,N\}$. The distribution of $R_N$ is characterized by its generating function, which solves
$$
(1-(2-r_N)s)E(s^{R_N})=r_Ns-s^{N+1}\quad\text{with}\quad r_N=P(R_N=1).
$$
This is equivalent to a system of $N$ equations with unknowns the probabilities $P(R_N=k)$ for $k$ in $\{1,2,\ldots,N\}$. One can deduce from this system that $r_N$ is the unique root $r<1$ of the polynomial $(2-r)^Nr=1$. One can then note that $r_N=w^N$ and that $E(w^{R_N})=\dfrac{Nr_N}{2-r_N}$ hence some further simplifications yield finally the following general result.</p>
<blockquote>
<p>For every $N\ge2$, $u_N=\frac12(N+1)r_N$ where $r_N<1$ solves the equation $(2-r)^Nr=1$.</p>
</blockquote>
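<p>A quick Monte Carlo sketch in Python (truncating each game at a finite horizon, so this is only an approximation) agrees with $u=\frac34(3-\sqrt5)\approx0.5729$:</p>
<pre><code>import random

def game_stops(max_flips=1000):
    pos = 0                           # pos = 2*(tails) - (heads)
    for _ in range(max_flips):
        pos += 2 if random.random() < 0.5 else -1
        if pos == 0:                  # heads == 2 * tails
            return True
    return False                      # almost surely drifted to +infinity

trials = 20_000
print(sum(game_stops() for _ in range(trials)) / trials)  # ~ 0.573
</code></pre>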
| <p>(Update: The answer to the original question is that the probability of stopping is $\frac{3}{4} \left(3 - \sqrt{5}\right)$. See end of post for an infinite series expression in the general case.)
<hr></p>
<p>Let $S(n)$ denote the number of ways to stop after seeing $n$ tails. Seeing $n$ tails means seeing $2n$ heads, so this would be stopping after $3n$ flips. Since there are $2^{3n}$ possible sequences in $3n$ flips, the probability of stopping is $\sum_{n=1}^{\infty} S(n)/8^n$.</p>
<p>To determine $S(n)$, we see that there are $\binom{3n}{n}$ ways to choose which $n$ of $3n$ flips will be tails. However, this overcounts for $n > 1$, as we could have seen twice as many heads as tails for some $k < n$. Of these $\binom{3n}{n}$ sequences, there are $S(k) \binom{3n-3k}{n-k}$ sequences of $3n$ flips in which there are $k$ tails the first time we would see twice as many heads as tails, as any of the $S(k)$ sequences of $3k$ flips could be completed by choosing $n-k$ of the remaining $3n-3k$ flips to be tails. Thus $S(n)$ satisfies the recurrence $S(n) = \binom{3n}{n} - \sum_{k=1}^{n-1} \binom{3n-3k}{n-k}S(k)$, with $S(1) = 3$.</p>
<p>The solution to this recurrence is $S(n) = \frac{2}{3n-1} \binom{3n}{n}.$ This can be verified easily, as substituting this expression into the recurrence yields a slight variation on Identity 5.62 in <em>Concrete Mathematics</em> (p. 202, 2nd ed.), namely,
$$\sum_k \binom{tk+r}{k} \binom{tn-tk+s}{n-k} \frac{r}{tk+r} = \binom{tn+r+s}{n},$$
with $t = 3$, $r = -1$, $s=0$.</p>
<p>So the probability of stopping is $$\sum_{n=1}^{\infty} \binom{3n}{n} \frac{2}{3n-1} \frac{1}{8^n}.$$</p>
<p>Mathematica gives the closed form for this probability of stopping to be $$2 \left(1 - \cos\left(\frac{2}{3} \arcsin \frac{3 \sqrt{3/2}}{4}\right) \right) \approx 0.572949.$$</p>
<p><em>Added</em>: The sum is hypergeometric and has a simpler representation. See Sasha's comments for why the sum yields this closed form solution and also why the answer is $$\frac{3}{4} \left(3 - \sqrt{5}\right) \approx 0.572949.$$</p>
<p><hr>
<em>Added 2</em>: This answer is generalizable to other ratios $r$ up to the infinite series expression. For the general $r \geq 2$ case, the argument above is easily adapted to produce the recurrence
$S(n) = \binom{(r+1)n}{n} - \sum_{k=1}^{n-1} \binom{(r+1)n-(r+1)k}{n-k}S(k)$, with $S(1) = r+1$. The solution to the recurrence is $S(n) = \frac{r}{(r+1) n - 1} \binom{(r+1) n}{n}$ and can be verified easily by using the binomial convolution formula given above. Thus, for the ratio $r$, the probability of stopping has the infinite series expression $$\sum_{n=1}^{\infty} \binom{(r+1)n}{n} \frac{r}{(r+1)n-1} \frac{1}{2^{(r+1)n}}.$$
This can be expressed as a hypergeometric function, but I am not sure how to simplify it any further for general $r$ (and neither does Mathematica). It can also be expressed using the generalized binomial series discussed in <em>Concrete Mathematics</em> (p. 200, 2nd ed.), but I don't see how to simplify it further in that direction, either.</p>
<p><hr>
<em>Added 3</em>: In case anyone is interested, I found a <a href="https://math.stackexchange.com/questions/60991/combinatorial-proof-of-binom3nn-frac23n-1-as-the-answer-to-a-coin-fli/66146#66146">combinatorial proof of the formula for $S(n)$</a>. It works in the general $r$ case, too.</p>
|
probability | <p>Suppose we flip a coin until we see a head. What is the expected value of the number of flips we will take?</p>
<hr />
<p>I am pretty new to expected value, so I tried to evaluate it by multiplying the probability of each scenario with the number of flips it took to get there (like taking the arithmetic mean). This didn't work though, because of the infinite possibilities. I'm really confused, and if someone were to provide an answer, I would really appreciate it if they could go into detail.</p>
| <p>Let $X$ be a discrete random variable with possible outcomes: $x_1, x_2, x_3,\dots, x_i,\dots$ with associated probabilities $p_1,p_2,p_3,\dots,p_i,\dots$</p>
<p>The expected value of $f(X)$ is given as:</p>
<blockquote>
<p>$E[f(X)] = \sum\limits_{i\in\Delta} f(x_i)p_i$</p>
</blockquote>
<p>In your specific example, $X$ could be one of the values: $1,2,3,\dots ,i,\dots$ with corresponding probabilities $\frac{1}{2},\frac{1}{4},\frac{1}{8},\dots,\frac{1}{2^i},\dots$ (seen easily from the beginnings of a tree diagram)</p>
<p>So, the expected value of $X$ is:</p>
<p>$\sum\limits_{i=1}^\infty i(\frac{1}{2})^i$</p>
<p>This is a well-known infinite sum of the form $\sum\limits_{i=1}^\infty i p (1-p)^{i-1}$, in this case with $p=1-p=\frac{1}{2}$. You will likely be expected to simply memorize the result, and it is included in most formula lists. $\sum\limits_{i=1}^\infty i p (1-p)^{i-1}=\frac{1}{p}~~~~~~~(\dagger)$</p>
<p>Using this result without proof, we get our expected number of flips is $\frac{1}{0.5}=2$</p>
<p>The proof of $(\dagger)$: </p>
<p>$$\sum\limits_{i=1}^\infty i p (1-p)^{i-1} = p\sum\limits_{i=1}^\infty i (1-p)^{i-1}\\
= p\left(\sum\limits_{i=1}^\infty (1-p)^{i-1} + \sum\limits_{i=2}^\infty (1-p)^{i-1} + \sum\limits_{i=3}^\infty (1-p)^{i-1} + \dots\right)\\
= p\left[(1/p)+(1-p)/p+(1-p)^2/p+\dots\right]\\
= 1 + (1-p)+(1-p)^2+\dots\\
=\frac{1}{p}$$</p>
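<p>If the algebra feels too slick, a short simulation in Python (illustrative only) reproduces the answer:</p>
<pre><code>import random

def flips_until_head(p=0.5):
    n = 1
    while random.random() >= p:       # tails with probability 1 - p
        n += 1
    return n

trials = 100_000
print(sum(flips_until_head() for _ in range(trials)) / trials)  # ~ 2.0
</code></pre>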
| <p>Let <span class="math-container">$X$</span> be the number of flips we make before seeing heads. Let <span class="math-container">$p$</span> be the probability of getting heads on any one flip. And note three things that, individually, aren't too hard to see:</p>
<ol>
<li>Coin flipping is a memoryless process.</li>
<li>We must flip the coin at least once in order to get heads (or tails, or anything).</li>
<li>The probability of not getting heads on that first flip is <span class="math-container">$1-p$</span>.</li>
</ol>
<p>Using these three ideas, we can model <span class="math-container">$E[X]$</span>, the expected number of flips we perform before seeing heads, as:</p>
<p><span class="math-container">$E[X] = 1 + (1-p)E[X]$</span></p>
<p>That is, we flip the coin once (hence the <span class="math-container">$1$</span>), and there is a (<span class="math-container">$1-p$</span>) chance that we get tails and have to continue flipping the coin (hence the <span class="math-container">$+ (1-p)E[X]$</span>). Because coin flipping is memoryless, the expected value of <span class="math-container">$X$</span> after flipping a tails is the same as before flipping that tails (that's why the <span class="math-container">$E[X]$</span> on the RHS of the equation is not conditioned).</p>
<p>Now it turns out that modeling <span class="math-container">$E[X]$</span> in this way is useful because we can easily solve for <span class="math-container">$E[X]$</span> and answer our question:</p>
<p><span class="math-container">$E[X] = 1 + (1-p)E[X]$</span></p>
<p><span class="math-container">$E[X] - (1-p)E[X] = 1$</span></p>
<p><span class="math-container">$E[X] + (-1+p)E[X] = 1$</span></p>
<p><span class="math-container">$E[X] - E[X] + pE[X] = 1$</span></p>
<p><span class="math-container">$pE[X] = 1$</span></p>
<p><span class="math-container">$E[X] = 1 / p$</span></p>
<p>Since <span class="math-container">$p = 1/2$</span>, we get <span class="math-container">$E[X] = 2$</span>.</p>
<p>I learned of this clever approach here: <a href="http://online.stanford.edu/course/algorithms-design-and-analysis-part-1" rel="noreferrer">http://online.stanford.edu/course/algorithms-design-and-analysis-part-1</a></p>
|
game-theory | <p>So there are $n$ people, each choosing some non-zero counting number. You don't know what any of them choose. To win, you must choose the smallest number; but if you choose the same number as somebody else, you are disqualified. How would you decide what number $k$ is best to choose? I feel like $k\le n$, but apart from that I have no idea where to start. Any ideas?</p>
<p><strong>EDIT</strong>: So to avoid a trivial paradox and to somewhat model real human behavior, we want the $n$ people to choose numbers reasonably but not necessarily perfectly. For instance, nobody else is gonna choose $k > n$, as that would be silly. Since choosing 1 being unreasonable would lead to paradox, we'll also say 1 could be chosen, but won't necessarily be picked.</p>
| <p>Contrary to intuition, the Nash equilibrium for this game (assuming $n\geq 2$) must have positive probability of choosing any positive integer. Assume not, so there is some integer $m$ such that the Nash equilibrium picks $m$ with probability $p_m>0$ but never picks $m+1$. Suppose everyone else is playing that strategy, and consider what happens if you play the modified strategy which instead picks $m+1$ with probability $p_m$ and never picks $m$. This performs exactly the same if you pick some number other than $m+1$. If you pick $m+1$ and would have won had you picked $m$ then you will still win, since no-one else has picked $m+1$ (because they can't) or $m$ (by assumption that you would have won by picking $m$). You also win in the event that you pick $m+1$ and everyone else picks $m$, which has positive probability. So the original strategy wasn't a Nash equilibrium, because this one beats it.</p>
| <p>Given that there are $n$ players, let's assume that each player must choose a number $k \in \{z \in \mathbb{Z} | 1 \le z \le n\}$. Note that the order of players picking a number does not affect the outcome.</p>
<p>Some thoughts:</p>
<p>When $\textbf{n = 2}$, equilibrium is achieved when both players choose the smallest number, i.e. $1$.</p>
<p>For the case $\textbf{n = 3}$, suppose two numbers from $\{1,2,3\}$ have already been chosen; then</p>
<p>it is impossible to win whenever $\{1, 2\}$, $\{1, 3\}$ are chosen by others.</p>
<p>it is possible to win when $\{2, 3\}$, $\{1, 1\}$, $\{2, 2\}$ or $\{3, 3\}$ by others.</p>
<p>Let's consider what happens if we choose</p>
<p>$\rightarrow$ winning number</p>
<p>$1$</p>
<p>$\{1, 2\} \rightarrow 2$</p>
<p>$\{1, 3\} \rightarrow 3$</p>
<p>$\{2, 3\} \rightarrow 1$ <strong>We win!</strong></p>
<p>$\{1, 1\}$ <em>No winner.</em></p>
<p>$\{2, 2\} \rightarrow 1$ <strong>We win!</strong></p>
<p>$\{3, 3\} \rightarrow 1$ <strong>We win!</strong></p>
<p>$2$</p>
<p>$\{1, 2\} \rightarrow 1$</p>
<p>$\{1, 3\} \rightarrow 1$</p>
<p>$\{2, 3\} \rightarrow 3$</p>
<p>$\{1, 1\} \rightarrow 2$ <strong>We win!</strong></p>
<p>$\{2, 2\}$ <em>No winner.</em></p>
<p>$\{3, 3\} \rightarrow 2$ <strong>We win!</strong></p>
<p>$3$</p>
<p>$\{1, 2\} \rightarrow 1$</p>
<p>$\{1, 3\} \rightarrow 1$</p>
<p>$\{2, 3\} \rightarrow 2$</p>
<p>$\{1, 1\} \rightarrow 3$ <strong>We win!</strong></p>
<p>$\{2, 2\} \rightarrow 3$ <strong>We win!</strong></p>
<p>$\{3, 3\}$ <em>No winner.</em></p>
<p>Therefore it has been shown that choosing $1$ when $n = 3$ gives us best chance of winning. Hence, $k = 1$ is the equilibrium.</p>
<p>This approach can be generalised for more players.</p>
|
geometry | <p>From the 1951 novel <a href="https://www.goodreads.com/book/show/1417219.The_Universe_Between" rel="noreferrer">The Universe Between</a> by Alan E. Nourse.</p>
<blockquote>
<p>Bob Benedict is one of the few scientists able to make contact with the invisible, dangerous world of The Thresholders and return—sane! For years he has tried to transport—and receive—matter by transmitting it through the mysterious, parallel Threshold. </p>
</blockquote>
<p>[...]</p>
<blockquote>
<p>Incredibly, something changed. A pause, a sag, as though some terrible pressure had suddenly been released. Their fear was still there, biting into him, but there was something else. He was aware of his body around him in its curious configuration of orderly disorder, its fragments whirling about him like sections of a crazy quilt. <strong>Two concentric circles of different radii intersecting each other at three different points</strong>. Twisting cubic masses interlacing themselves into the jumbled incredibility of a geometric nightmare. </p>
</blockquote>
<p>The author might be just throwing some terms together to give the reader a sense of awe, but maybe there's some non-euclidean geometry where this is possible.</p>
| <p>Yes, with the appropriate definition of "circle". Namely, define a circle of radius <span class="math-container">$R$</span> centered at <span class="math-container">$x$</span> on manifold <span class="math-container">$M$</span> to be the set of points which can be reached by a geodesic of length <span class="math-container">$R$</span> starting at <span class="math-container">$x$</span>. This seems pretty reasonable, and reproduces the usual definition in Euclidean space. </p>
<p>It's not hard to see that concentric circles on a torus or cylinder can have four intersection points.</p>
<p><a href="https://i.sstatic.net/QUl9S.png" rel="noreferrer"><img src="https://i.sstatic.net/QUl9S.png" alt="enter image description here"></a></p>
<p>(Here's how to interpret this picture: The larger circle has been wrapped in the y-direction, reflecting a torus or cylinder topology. Coming soon: A picture of this embedded in 3D.)</p>
<p>By flattening one side of the torus a bit, you can make one side of the larger circle intersect the smaller circle at two points, while the other side just grazes at a single point*. Thus you get three intersections. </p>
<hr>
<p>*As a technical point, this can definitely be accomplished in Finsler geometry, though I'm not sure if it can be done in Riemannian geometry. </p>
| <p>The situation is impossible if we make the following assumptions:</p>
<ol>
<li>Each circle has exactly one center, which is a point. </li>
<li>Concentric circles have the same center.</li>
<li>Each circle has exactly one radius, which is a number. (We make no assumptions, besides those listed, about the meaning of the word "number.") </li>
<li>If two circles intersect each other at a point, then that point lies on both circles. </li>
<li>Given any unordered pair of points, there is exactly one distance between those points, which is a number. </li>
<li>If a point <span class="math-container">$p$</span> lies on a circle, then the distance between <span class="math-container">$p$</span> and the center of the circle is the radius of the circle.</li>
</ol>
<p>From the above, suppose that <span class="math-container">$C$</span> and <span class="math-container">$D$</span> are two concentric circles that intersect at a point. The two circles have the same center, <span class="math-container">$e$</span>, and call the intersection point <span class="math-container">$p$</span>. Then the distance between <span class="math-container">$e$</span> and <span class="math-container">$p$</span> is the radius of <span class="math-container">$C$</span>, but it is also the radius of <span class="math-container">$D$</span>, so the two circles cannot have different radii. </p>
<p>We could make the situation possible by discarding some of the axioms, but for the most part, these axioms are so fundamental to the notion of geometry that if you discarded one, the result wouldn't be considered geometry any more (not even non-Euclidean geometry). In particular, axioms 2 and 4 above are essentially just the definitions of the words "concentric" and "intersect," and axioms 1, 3 and 6 essentially constitute the definition of a circle with a given center and radius.</p>
<p>If I had to pick an axiom to discard, I would discard axiom number 5: the statement that given two points, there is <em>only one</em> distance between those points. This is the approach taken in <a href="https://math.stackexchange.com/a/3328252/13524">Luca Bressan's answer</a> (a plane based on modular arithmetic, where pairs of points with a distance of <span class="math-container">$0$</span> also have a distance of <span class="math-container">$2$</span> and vice versa) and in <a href="https://math.stackexchange.com/a/3328685/13524">Yly's answer</a> (a cylinder, where a pair of points has infinitely many distances, depending on which direction and how many times you wrap around the cylinder as you measure the distance).</p>
|
linear-algebra | <p>If the matrix is positive definite, then all its eigenvalues are strictly positive. </p>
<p>Is the converse also true?<br>
That is, if the eigenvalues are strictly positive, then the matrix is positive definite?<br>
Can you give an example of a $2 \times 2$ matrix with $2$ positive eigenvalues that is not positive definite?</p>
| <p>I think this is false. Let <span class="math-container">$A = \begin{pmatrix} 1 & -3 \\ 0 & 1 \end{pmatrix}$</span> be a <span class="math-container">$2 \times 2$</span> matrix, in the canonical basis of <span class="math-container">$\mathbb R^2$</span>. Then <span class="math-container">$A$</span> has a double eigenvalue <span class="math-container">$\lambda=1$</span>. If <span class="math-container">$v=\begin{pmatrix}1\\1\end{pmatrix}$</span>, then <span class="math-container">$\langle v, Av \rangle < 0$</span>.</p>
<p>The point is that the matrix can have all its eigenvalues strictly positive, but it does not follow that it is positive definite.</p>
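<p>A short numpy check of this counterexample (illustrative):</p>
<pre><code>import numpy as np

A = np.array([[1.0, -3.0],
              [0.0,  1.0]])
print(np.linalg.eigvals(A))       # [1. 1.] -- both strictly positive
v = np.array([1.0, 1.0])
print(v @ A @ v)                  # -1.0 < 0: A is not positive definite
</code></pre>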
| <p>This question does a great job of illustrating the problem with thinking about these things in terms of coordinates. The thing that is positive-definite is not a matrix $M$ but the <em>quadratic form</em> $x \mapsto x^T M x$, which is a very different beast from the linear transformation $x \mapsto M x$. For one thing, the quadratic form does not depend on the antisymmetric part of $M$, so using an asymmetric matrix to define a quadratic form is redundant. And there is <em>no reason</em> that an asymmetric matrix and its symmetrization need to be at all related; in particular, they do not need to have the same eigenvalues. </p>
|
differentiation | <p>I'm still struggling to understand why the derivative of sine only works for radians. I had always thought that radians and degrees were both arbitrary units of measurement, and just now I'm discovering that I've been wrong all along! I'm guessing that when you differentiate sine, the step that only works for radians is when you replace $\sin(dx)$ with just $dx$, because as $dx$ approaches $0$, $\sin(dx)$ approaches $dx$ (since $\sin(\theta)$ is approximately $\theta$ for small $\theta$). But isn't the same true for degrees? As $dx$ approaches $0$ degrees, $\sin(dx \,\text{degrees})$ still approaches $0$. But I've come to the understanding that $\sin(dx \,\text{degrees})$ approaches $0$ almost $60$ times slower, so if $\sin(dx \,\text{radians})$ can be replaced with $dx$ then $\sin(dx \,\text{degrees})$ would have to be replaced with $(\pi/180)$ times $dx$ degrees.</p>
<p>But the question remains of why it works perfectly for radians. How do we know that we can replace $\sin(dx)$ with just $dx$ without any kind of conversion applied like we need for degrees? It's not good enough to just say that we can see that $\sin(dx)$ approaches $dx$ as $dx$ gets very small. Mathematically we can see that $\sin(.00001)$ is pretty darn close to $0.00001$ when we're using radians. But let's say we had a unit of measurement "sixths" where there are $6$ of them in a full circle, pretty close to radians. It would also look like $\sin(dx \,\text{sixths})$ approaches $dx$ when it gets very small, but we know we'd have to replace $\sin(dx \,\text{sixths})$ with $(\pi/3) \,dx$ sixths when differentiating. So how do we know that radians work out so magically, and why do they?</p>
<p>I've read the answers to <a href="https://math.stackexchange.com/questions/466299/why-is-the-derivative-of-sine-the-cosine-in-radians-but-not-in-degrees">this question</a> and followed the links, and no, they don't answer my question.</p>
| <p>Radians, unlike degrees, are not arbitrary in an important sense. </p>
<p>The circumference of a unit circle is $2\pi$; an arc of the unit circle subtended by an angle of $\theta$ radians has arc length of $\theta$. </p>
<p>With these 'natural' units, the trigonometric functions behave in a certain way. Particularly important is
$$\lim_{x\to 0} \frac{\sin x}{x} = 1 \qquad (*)$$</p>
<p>Now study the derivative of $\sin$ at $x = a$:</p>
<p>$$\lim_{x \to a} \frac{\sin x - \sin a}{x-a} = \lim_{x \to a}\left( \frac{\sin\left(\frac{x-a}{2}\right)}{(x-a)/2}\cdot \cos\left(\frac{x+a}{2}\right)\right)$$</p>
<p>This limit is equal to $$\cos a$$ precisely because of the limit $(*)$. And $(*)$ is quite different in degrees.</p>
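<p>Numerically, the contrast between the two conventions is easy to see (a minimal Python sketch): in radians the ratio tends to $1$; in degrees it tends to $\pi/180\approx0.01745$.</p>
<pre><code>import math

for x in (0.1, 0.01, 0.001):
    print(math.sin(x) / x,                    # radians: tends to 1
          math.sin(math.radians(x)) / x)      # degrees: tends to pi/180
</code></pre>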
| <p>It seems to me that the best answer thus far is <strong>Simon S</strong>'s. Others have hinted at the important property:</p>
<p><span class="math-container">$$
\lim_{x\rightarrow 0} \frac{\sin(x)}{x} = 1
$$</span></p>
<p>Some have simply stated it's important with little reason as to <em>why</em> it's important (specifically in regards to your question about the derivative of <span class="math-container">$\sin(x)$</span> equaling the <span class="math-container">$\cos(x)$</span>). <strong>Simon S</strong>'s answer explained <em>why</em> that limit is important for the derivative. However, what I find lacking is <em>why</em> it is that the limit equals what it equals <em>and</em> what <em>would</em> it equal if we decided to use degrees instead of radians.</p>
<p><em>At this point, I want to acknowledge that my answer is essentially the same as <strong>Simon S</strong>'s except that I am going to go into gruesome detail.</em></p>
<p>Before I go into this, there is absolutely <em>nothing</em> wrong with using degrees over radians. It <em>will</em> change what the definition of the derivative of the trigonometric functions are, but it won't change <em>any</em> of our math--it just introduces a tedious factor we always have to carry around.</p>
<p>I am going to use <a href="https://proofwiki.org/wiki/Limit_of_Sine_of_X_over_X/Geometric_Proof" rel="nofollow noreferrer">this geometric proof</a> as a way to make sense of the limit above:</p>
<p><a href="https://i.sstatic.net/JvPjH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JvPjH.png" alt="geometric proof link" /></a></p>
<p>There is only <em>one</em> part of the proof that will change if we decide to use degrees as opposed to radians and that is when we find the area of the sector subtended by <span class="math-container">$\theta$</span>. When we use radians we get: <span class="math-container">$A_{AB} = \pi 1^2 * \frac{\theta}{2\pi} = \frac{\theta}{2}$</span>--just as they found in the given proof. If <em>however</em>, we use degrees then we will get: <span class="math-container">$A_{AB} = \theta * \frac{\pi}{360}$</span>. Now this changes their initial inequality which the rest of the proof relies on:</p>
<p><span class="math-container">$$
\frac{1}{2}\sin(\theta) \leq \frac{\pi \theta}{360} \leq \frac{1}{2}\tan(\theta)
$$</span></p>
<p><em>(the others don't change because the sine and tangent equal the same thing regardless of whether or not we use radians or degrees--with a proper, trigonometric definition of each, of course).</em></p>
<p>We still proceed in the same way (I'm going to be less formal and not worry about the absolute values--although we should technically). We divide everything by <span class="math-container">$\sin(\theta)$</span> which since I'm only worrying about the first quadrant won't change the directions of the inequalities:</p>
<p><span class="math-container">$$
\frac{1}{2} \leq \frac{\pi \theta}{360 \sin(\theta)} \leq \frac{1}{2\cos(\theta)}\\
\frac{360}{2\pi} \leq \frac{\theta}{\sin(\theta)} \leq \frac{360}{2\pi \cos(\theta)} \\
\frac{\pi}{180} \geq \frac{\sin(\theta)}{\theta} \geq \frac{\pi}{180}\cos(\theta)
$$</span></p>
<p>As <span class="math-container">$\theta \rightarrow 0$</span> (whether in radians or degrees) we have <span class="math-container">$\cos(\theta) \rightarrow \cos(0) = 1$</span>, and thus the squeeze theorem shows that:</p>
<p><span class="math-container">$$
\frac{\pi}{180} \leq \lim_{\theta \rightarrow 0} \frac{\sin(\theta)}{\theta} \leq \frac{\pi}{180}
$$</span></p>
<p>Therefore, if we use degrees, then:</p>
<p><span class="math-container">$$
\lim_{\theta \rightarrow 0} \frac{\sin(\theta)}{\theta} = \frac{\pi}{180}
$$</span></p>
<p>Going back to <strong>Simon S</strong>'s answer, this gives, as the definition of the derivative for <span class="math-container">$\sin(x)$</span>:</p>
<p><span class="math-container">$$
\lim_{h \rightarrow 0} \frac{\sin(x + h) - \sin(x)}{h}\\
\lim_{h \rightarrow 0} \frac{\sin(x)\cos(h) + \sin(h)\cos(x) - \sin(x)}{h} \\
\lim_{h \rightarrow 0} \frac{\sin(h)\cos(x) + \sin(x)(\cos(h) - 1)}{h}
$$</span></p>
<p>To drop the second term we need <span class="math-container">$\lim_{h \rightarrow 0} \frac{\cos(h) - 1}{h} = 0$</span>, which still holds in degree mode: since <span class="math-container">$\cos(h) - 1 = -2\sin^2(h/2)$</span>, we have <span class="math-container">$\frac{\cos(h) - 1}{h} = -\sin(h/2)\cdot\frac{\sin(h/2)}{h/2} \rightarrow 0\cdot\frac{\pi}{180} = 0$</span>. (For a more careful treatment of this step, refer back to <strong>Simon S</strong>'s answer.) We are left with:</p>
<p><span class="math-container">$$
\lim_{h \rightarrow 0} \frac{\sin(h)}{h}\cos(x) = \cos(x)\lim_{h \rightarrow 0} \frac{\sin(h)}{h}
$$</span></p>
<p>Using our above result we find the following:</p>
<p><span class="math-container">$$
\frac{d}{dx}\sin(x) = \frac{\pi}{180}\cos(x)
$$</span></p>
<p>This is what the derivative of <span class="math-container">$\sin(x)$</span> is <em>when we use degrees</em>! And yes, this will work fine in a Taylor series where we plug in degrees for the polynomial as opposed to radians (although the Taylor series <em>will</em> look different!).</p>
<p>And hopefully you already realize that this is what the derivative of <span class="math-container">$\sin(x)$</span> is when we use degrees, because if we accept that we <em>must</em> use radians, then we must convert our degrees to radians:</p>
<p><span class="math-container">$$
\sin(x^\circ) = \sin\left(\frac{\pi}{180}x\right)
$$</span></p>
<p>Now using the chain rule we get:</p>
<p><span class="math-container">$$
\frac{d}{dx}\sin(x^\circ) = \frac{\pi}{180}\cos(x^\circ)
$$</span></p>
<p>So the question isn't really why it only works with radians--it works just as well with degrees, except that we get a different derivative. The reason we prefer radians to degrees is that radians don't require this extra factor of <span class="math-container">$\frac{\pi}{180}$</span> every single time we differentiate a trigonometric function.</p>
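<p>As a quick numerical sanity check (my own addition, not part of the original argument; the helper <code>sind</code> is just a name I'm introducing for "sine in degree mode"), the following Python sketch confirms both the degree-mode limit <span class="math-container">$\frac{\pi}{180} \approx 0.01745$</span> and the degree-mode derivative at <span class="math-container">$x = 60^\circ$</span>:</p>
<pre><code>import math

def sind(x_degrees):
    """Sine with its argument interpreted in degrees."""
    return math.sin(math.radians(x_degrees))

# sin(theta degrees) / theta approaches pi/180 = 0.0174533... as theta shrinks
for t in (1.0, 0.1, 0.001):
    print(sind(t) / t)

# symmetric difference quotient of degree-mode sine at x = 60 degrees
x, h = 60.0, 1e-6
print((sind(x + h) - sind(x - h)) / (2 * h))      # approx 0.0087266
print(math.pi / 180 * math.cos(math.radians(x)))  # (pi/180) * cos(60 deg), same value
</code></pre>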
|
differentiation | <p>Is every function the derivative of some other function? I thought about this a lot and consulted a lot of people, but everyone had contradicting answers. I am a high school student. Please help.</p>
| <p>Good question. The answer is no.</p>
<p>Many functions are derivatives, though: For example, any continuous function. The fundamental theorem of calculus gives us that if $f$ is continuous, and $F$ is defined by $$ F(x)=\int_0^x f(t)\,dt, $$ then $f$ is the derivative of $F$. </p>
<p>However, derivatives do not need to be continuous. But they are not arbitrary. For instance: None of their discontinuities are jump discontinuities. For example, if $h$ is defined by $h(x)=0$ if $x\ne0$ and $h(0)=1$, then $h$ is not a derivative. (The reason for this is a theorem of Darboux, showing that derivatives satisfy the intermediate value property.)</p>
<p>Perhaps more interestingly, if $f$ is given by $f(x)=\sin(1/x)$ if $x\ne0$ and $f(0)=0$, then $f$ is a derivative. However, if $g$ is given by $g(x)=\sin(1/x)$ if $x\ne0$ and $g(0)=1$, then $g$ is not. The function $f$ is discontinuous at $0$. <a href="http://samjshah.com/2009/10/03/sin1x/" rel="noreferrer">Here</a> you can see a graph and some additional information. The (wildly oscillating) behavior of $f$ near zero is typical of any discontinuity that is not a jump discontinuity.</p>
<p><a href="https://math.stackexchange.com/a/622083/462">This answer</a> has additional examples and details (some of it is technical).</p>
<p>Derivatives are the (pointwise) limit of continuous functions: If $f$ is differentiable, and we define $f_n$ by $$ f_n(x)=\frac{f\bigl(x+\frac1n\bigr)-f(x)}{\frac{1}{n}}=n(f\left(x+\frac1n\right)-f(x)), $$ then each $f_n$ is continuous, and $\lim_{n\to\infty}f_n(x)=f'(x)$ for all $x$.</p>
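<p>(A small numerical sketch of my own, not part of the original argument, showing these continuous approximants converging to $f'(x)$ for $f = \sin$ and $x = 1$:)</p>
<pre><code>import math

f, x = math.sin, 1.0
for n in (10, 100, 1000, 10000):
    f_n = n * (f(x + 1 / n) - f(x))  # the continuous approximant f_n evaluated at x
    print(n, f_n)                    # tends to f'(1) = cos(1) = 0.5403...
</code></pre>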
<p>There are many functions that are not the limit of continuous functions, none of them are derivatives. Analysts have defined a hierarchy of functions: Baire class zero functions are continuous functions. Baire class one functions are limits of continuous functions. Baire class two functions are limits of Baire class one functions, etc. All derivatives are in Baire classes zero and one. Any function in Baire class two or beyond, but not in Baire class one, is not a derivative. <a href="https://math.stackexchange.com/a/614435/462">Here</a> are some examples.</p>
<p><a href="https://math.stackexchange.com/a/631755/462">This answer</a> has some additional information (mostly technical). In particular, the answer mentions that in a sense, most functions are not derivatives: Given any set $X$, the size of the set $Y$ of all functions $f:X\to X$ is strictly larger than the size of $X$. Here I am using the set theoretic notion of size (cardinality) that makes sense even for infinite sets. It turns out that there are exactly as many continuous functions as there are real numbers and, again, there are as many pointwise limits of continuous functions as there are real numbers. This means that there are as many derivatives as there are reals. However, there are many more functions $f:\mathbb R\to\mathbb R$ than there are reals, so many functions are not derivatives, just based on this consideration.</p>
<p>The question of precisely what functions are derivatives is a difficult one, without a satisfactory answer. Mathematical analysts still do research on this topic. <a href="https://andrescaicedo.files.wordpress.com/2014/03/andrew-m-bruckner-derivatives-why-they-elude-classification.pdf" rel="noreferrer">This article</a> (Andrew M. Bruckner. <em>Derivatives: Why they elude classification</em>. Math. Mag., <strong>49 (1)</strong>, (1976), 5–11.) should be more or less accessible, and explains some of the difficulties in this area. </p>
| <p>The Fundamental Theorem of Calculus tells us that every <strong>continuous</strong> function is the derivative of something, but there are many functions which are not continuous, and not derivatives. There are also some functions which are not continuous, but they are still derivatives.</p>
<p>There is a theorem by Darboux, which says that any derivative has the intermediate value property. This allows us to construct many functions which are not derivatives.</p>
<p>For example $F(x)=1$ when $x$ is rational and $F(x)=0$ when $x$ is irrational, is not a derivative.</p>
|
probability | <p>I know there's no uniform distribution for a countably infinite set, but I'm wondering if there's still a way to determine the probability of picking from a subset of a countably infinite set.</p>
<p>For example, what's the probability that I pick an odd number from the set of naturals, assuming I'm picking randomly? Is this even a coherent question? If so, is there a textbook approach to this problem in measure theory/probability theory/probability measure?</p>
<p>Thanks!</p>
| <p>I'd like to give a semi-philosophical, semi-mathematical defense of the answer 1/2.</p>
<p>The received view in contemporary mathematical probability theory, due to Kolmogorov, is that probability functions are normalized measures: nonnegative, <strong>countably additive</strong> set functions that are defined on $\sigma$-fields and that assign the sure event measure $1$. This hasn't always been the received view, however.</p>
<p>Bruno de Finetti argued, famously, that the concept of probability does not mandate countable additivity. His argument is based on the fact, pointed out by you and commenters, that countable additivity <em>rules out</em> the possibility of a uniform distribution on the natural numbers. De Finetti, a subjectivist, thought there was nothing rationally incoherent about distributing one's credences uniformly over $\mathbb{N}$, and, on these grounds, argued that the countable additivity axiom be weakened. For de Finetti, probabilities need only be <strong>finitely additive</strong>.</p>
<p>Working in the tradition of finitely additive probability, it is indeed possible to define a uniform distribution on $\mathbb{N}$. The needed extension results can be found in Rao and Rao's <em>Theory of Charges</em> and Hrbacek and Jech's <em>Introduction to Set Theory</em>. I will sketch a construction here so that this answer has some mathematical content (if needed or requested, I will edit later to include more details).</p>
<p>First, we define the natural density, as in comments above, to be the limit (if it exists) $$ \lim_{n \to \infty} \frac{|A \cap \{1,...,n \}|}{n} = :d(A) $$ for $A \subseteq \mathbb{N}$.</p>
<p>It is not difficult to show</p>
<p><strong>Proposition</strong>. (i) $d(\emptyset) = 0$ and $d(\mathbb{N})=1$. (ii) $d(\{n\})=0$ for all $n \in \mathbb{N}$. (iii) The set of numbers divisible by $m$ has natural density $1/m$. (NB: Some of this has been asserted in comments above, but I include it here for completeness.)</p>
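<p>(To make the Proposition concrete, here is a small empirical sketch of my own; it can only illustrate the limiting behaviour, of course, not certify it:)</p>
<pre><code>def empirical_density(indicator, n):
    """|A intersect {1,...,n}| / n, for A given by its indicator function."""
    return sum(1 for k in range(1, n + 1) if indicator(k)) / n

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, empirical_density(lambda k: k % 2 == 1, n))  # odds: tends to 1/2

print(empirical_density(lambda k: k % 3 == 0, 10**6))     # multiples of 3: tends to 1/3
</code></pre>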
<p>We then have the following theorem.</p>
<p><strong>Theorem</strong>. There exists a finitely additive probability measure $P$ on $\mathscr{P}(\mathbb{N})$ that extends the natural density $d$.</p>
<p><strong>Proof sketch</strong>. The proof relies on the existence of a free ultrafilter $\mathcal{U}$ on $\mathbb{N}$ extending the Fréchet filter of cofinite sets, and is therefore non-constructive. We say a sequence $(x_{n})$ of real numbers is <em>convergent in $\mathcal{U}$ with limit $x$</em> and write $x = \lim_{\mathcal{U}}x_{n}$ if for all $\epsilon > 0$
$$\{n: |x_{n} - x| < \epsilon \} \in \mathcal{U}.$$
It can be shown that every bounded real sequence has a unique $\mathcal{U}$-limit. It can also be shown that
$$P(A) = \lim_{\mathcal{U}}\frac{|A \cap \{1,...,n \}|}{n}$$
is a finitely additive probability on $\mathscr{P}(\mathbb{N})$ extending $d$. $\square$</p>
<p>By the Theorem and the Proposition, $P$ is a finitely additive probability measure that assigns measure $1/2$ to the set of odd numbers.</p>
<p>Now, you may object that I've changed the subject by invoking merely finitely additive "probabilities". In response, I would remind you that the concept of probability predates the Kolmogorovian theory, and that there are good arguments (due to de Finetti and other subjectivists) in favor of relaxing countable additivity in some contexts.</p>
| <p>This is a great question that I as well don't know the answer to.</p>
<p>Also, the probability cannot be 1. If the probability of choosing an odd number is 1, then the complement (the probability of choosing an even number) is 0. However, there are the same number of odd numbers and even numbers, so the same argument shows that the probability of choosing an even number is 1, leading to a contradiction.</p>
<p>I'm guessing the other approach is first defining our measure. We can use the normalized counting measure (density) as our probability measure on $(\mathbb{N}, 2^\mathbb{N})$; the catch is that this only works when the set is finite. This measure would imply that the probability is the same for each natural number.</p>
<p>$$\mu(A) = \begin{cases}
|A| & \text{if $A$ is finite} \\
\infty & \text{if $A$ is infinite}
\end{cases}$$</p>
<p>The only problem that arises is that our set is infinite, which prevents our counting measure from being a probability measure (we cannot divide by infinity). Maybe I'm mistaken, but it seems that in order to solve this problem we can cut up the natural numbers into a set of intervals, prove the claim on each interval, and then extend it via projection maps.</p>
<p>We write the projection map as: $$\pi_k(\mathbb{N}) = \{k, k+1\} \text{ for any $k\in \mathbb{N}$ }$$ and define the new measure $\mu_k$ on $(\{k, k+1 \}, 2^{\{k, k+1\}})$ as $$\mu_k(A) = \mu(A \cap \pi_k(\mathbb{N})) = \mathbb{P}(A)$$ for any $A \in 2^\mathbb{N}$. Now we work out what happens when we union two spaces together. It seems that we must prove the probability is then the average of the measures, i.e., if we union the intervals $\{k, k+1 \} \cup \{k+2, k+3\} = B$, then our measure would be $\mu(B) = \frac{\mu_k(B) + \mu_{k+2}(B)}{2}$, assuming we normalize each density in order for it to be a probability measure.</p>
<p>Let $O = \{$ odd natural numbers$\}$. Then we can extend the measure to a limit after noticing $\mu_k(O) = \mu_{k+2}(O) = \dots = \mu_{k+2n}(O) = \frac12$ for any $n\in\mathbb{N}$. Our limit
$$\mu(O) = \lim_{n\rightarrow\infty} \frac{\mu_1(O) + \mu_3(O) + \cdots + \mu_{1 + 2n}(O)}{n+1} $$
$$= \lim_{n\rightarrow\infty}\frac{\frac12 + \frac12 + \dots + \frac12}{n+1} = \lim_{n\rightarrow\infty} \frac{(n+1)\frac12}{n+1} = 1/2$$</p>
<p>Please correct me if you find any major errors. </p>
|
combinatorics | <p>I have $n$ people seated around a circular table, initially in arbitrary order. At each step, I choose two people and switch their seats. What is the minimum number of steps required such that every person has sat either to the right or to the left of everyone else?</p>
<p>To be specific, we consider two different cases:</p>
<ol>
<li>You can only switch people who are sitting next to each other.</li>
<li>You can switch any two people, no matter where they are on the table.</li>
</ol>
<p>The small cases are relatively simple: if we denote the answer in case 1 and 2 for a given value of $n$ as $f(n)$ and $g(n)$ respectively, then we have $f(n)=g(n)=0$ for $n=1, 2, 3$, and $f(4)=g(4)=1$. I'm not sure how I would generalize to larger values, though.</p>
<p>(I initially claimed that $f(5)=g(5)=2$, but corrected it based on @Ryan's comment).</p>
<p><em>If you're interested, this question came up in a conversation with my friends when we were trying to figure out the best way for a large party of people during dinner to all get to know each other.</em></p>
<p><strong>Edit:</strong> The table below compares the current best known value for case 2, $g(n)$, to the theoretical lower bound $\lceil{\frac{1}{8}n(n-3)}\rceil$ for a range of values of $n$. Solutions up to $n=14$ are known to be optimal, in large part due to the work of Andrew Szymczak and PeterKošinár.</p>
<p>\begin{array} {|r|r|l|}
\hline
n & \text{Best known value of g(n)} & \left\lceil{\frac{1}{8}n(n-3)}\right\rceil & \text{Comments}\\
\hline
4 & 1 & 1 & \\
\hline
5 & 3 & 2 & \\
\hline
6 & 4 & 3 & \\
\hline
7 & 4 & 4 & \\
\hline
8 & 6 & 5 & \\
\hline
9 & 8 & 7 & \\
\hline
10 & 10 & 9 & \\
\hline
11 & 12 & 11 & \\
\hline
12 & 14 & 14 & \\
\hline
13 & 17 & 17 & \\
\hline
14 & 20 & 20 & \\
\hline
15 & 24 & 23 & \\
\hline
16 & 28 & 26 & \\
\hline
17 & 32 & 30 & \\
\hline
18 & 37 & 34 & \\
\hline
20 & 47 & 43 & \text{Loose upper bound}\\
\hline
25 & 77 & 69 & \text{Loose upper bound}\\
\hline
30 & 114 & 102 & \text{Loose upper bound}\\
\hline
\end{array} </p>
<p>The moves corresponding to the current best value are found below. Each ordered pair $(i, j)$ indicates that we switch the people in <strong>seats</strong> $(i, j)$ with each other, with the seats being labeled from $1 \ldots n$ consecutively around the table.</p>
<pre><code>4 - ((2,1))
5 - ((2,5),(1,5),(1,3))
6 - ((5,3),(1,5),(2,5),(3,6))
7 - ((4,7),(3,7),(1,5),(2,5))
8 - ((1,2),(4,7),(1,5),(3,7),(1,6),(2,5)) (h.t. PeterKošinár)
9 - ((3,8),(1,4),(6,9),(4,8),(1,6),(5,8),(2,8),(2,9))
10 - ((3,8),(4,8),(7,10),(1,7),(3,6),(1,5),(2,9),(3,7),(1,4),(3,9))
11 - ((4,8),(2,9),(5,8),(1,7),(3,9),(7,11),(5,10),(1,4),(5,9),(2,7),(2,6),(5,10))
12 - ((1,2),(5,10),(1,6),(4,10),(1,7),(8,11),(4,12),(3,12),(1,9),(1,5),(7,11) (1,8),(5,10),(2,6))
13 - ((1,2),(1,7),(3,9),(6,12),(8,11),(8,12),(1,11),(4,12),(9,12),(6,10),(7,10),(1,6),(2,8),(5,9),(3,8),(8,12),(9,12))
14 - ((1,4),(1,5),(1,8),(1,12),(4,11),(4,12),(9,13),(1,12),(9,12),(6,10),(6,9),(1,9),(4,7),(4,13),(3,13),(3,10),(2,13),(2,7),(3,13),(4,12))
15 - ((0,3),(0,10),(4,7),(2,8),(1,8),(5,9),(3,14),(5,13),(2,11),(4,9),(5,14),(4,12),(2,6),(7,14),(0,3),(2,9),(6,10),(8,11),(0,12),(0,4),(0,7),(3,7),(3,10),(2,13))
16 - ((10,14),(10,13),(0,10),(5,9),(5,8),(2,12),(2,5),(7,12),(2,12),(3,14),(5,11),(0,5),(4,14),(4,7),(3,11),(3,10),(0,8),(0,9),(0,6),(3,6),(1,14),(11,15),(1,5),(6,14),(3,11),(11,14),(0,12),(1,4))
17 - ((9,15),(5,13),(13,16),(0,13),(2,10),(10,16),(5,16),(5,13),(2,6),(2,10),(10,16),(7,10),(4,15),(1,8),(4,9),(5,12),(4,10),(3,13),(5,14),(1,4),(5,15),(1,6),(5,12),(8,12),(7,12),(4,12),(0,12),(8,11),(8,14),(7,16),(2,3),(1,8))
18 - ((4,7),(4,14),(6,10),(7,13),(4,7),(8,16),(8,13),(7,13),(3,8),(0,8),(4,8),(6,16),(1,12),(1,5),(5,11),(0,5),(14,17),(1,13),(8,13),(3,13),(0,4),(11,16),(2,10),(11,17),(9,15),(10,15),(1,9),(2,13),(1,4),(5,12),(6,14),(7,16),(13,17),(0,15),(1,15),(6,10),(5,15))
# Note that some solutions are zero-indexed and some are one-indexed.
</code></pre>
<p>The code I used to generate my results can be found on <a href="https://github.com/vtjeng/friends-of-the-round-table" rel="noreferrer">Github</a>. Unless otherwise specified, the switches above were found by my code, using a randomized greedy approach. As demonstrated by PeterKošinár, since the total number of possibilities is large, this approach may not find the best result even after many trials.</p>
| <p>I didn't want to keep adding more and more comments, so I'll just post all my thoughts here. @PeterKošinár @VincentTjeng</p>
<p>States are only different if their underlying friend-graphs are non-isomorphic -- permuting the people-labels (or seat-labels) does not change the state you're in. You can drastically reduce your search space this way. For n=8 there are only ~3000 states within 6 swaps of the starting position. As an example, consider the very first swap. There are really only $\lfloor \tfrac{n}{2} \rfloor$ different swaps to make.</p>
<p>The problem is coming up with a canonical-representation for graphs. Fortunately, to reduce your state space you don't need a truly canonical representation. You only need to make sure you don't equate two non-isomorphic graphs. It's okay if isomorphic $A \sim B$ have different representations - it just means your search space has a few redundancies. In any case, I've written my own method in Python because I couldn't get any packages to work (I'll post it once I clean it up). A port to C with <a href="http://pallini.di.uniroma1.it/" rel="nofollow noreferrer">nauty</a> would speed up the algorithm by a factor of 200+. </p>
<p>One naive canonicalization method is to permute the graph in all $n!$ ways and take the one that results in the lexicographically smallest adjacency list (using the binary sequence '001100....' as your representation). I use the same method, but I have a way to limit the number of valid permutations (down from $n!$ to ~$n^2$ on average). For example, since isomorphic graphs have the same degree sequence, we can enforce that the nodes are labelled by increasing order of degree. This by itself leads to a fairly drastic reduction of valid permutations. My idea iterates on that and is similar to finding the <em>coarsest equitable partition</em> that is described in <a href="http://www.math.unl.edu/~aradcliffe1/Papers/Canonical.pdf" rel="nofollow noreferrer">this paper</a>. The paper's algorithm (implemented by nauty) then goes on to do other stuff that I don't do, namely creating a search tree.</p>
<p>My exhaustive search algorithm begins in the starting position and runs a breadth-first-search of the state-space. I process a single state by making all possible swaps and then converting the resulting states to canonical form. The $i$th step of the algorithm takes $D_i \rightarrow D_{i+1}$ where $D_i$ is the set of all states that are at a distance $i$ from the starting position. The algorithm terminates once the final state has been found (complete-graph $K_n$). Of course, an exhaustive search will no longer be tractable for $n>12$, but you can easily turn it into a monte-carlo-type method. </p>
<p>For $n=10..12$ the search space becomes too large to store in memory. So I combine my BFS with @PeterKošinár's memoryless DFS method. I run the BFS for the first 4 swaps and then run a DFS starting from each state in $D_4$ (without keeping track of visited states). In the DFS we only make moves that add atleast 1 friend, and we prune states that can not possible beat the best solution found. We can make a max number of 8 friends per swap (counting duplicates (i,j) and (j,i)). So if we are in a state of $s$ swaps and $e$ friends, and we've found a solution with $s^*$ swaps, then we can prune if $s + \lceil (n^2 - e)/8 \rceil \geq s^*$.<br>
<a href="https://drive.google.com/uc?export=download&id=0B4l-WgeGmpNVRGdSUHhkNnI2YzA" rel="nofollow noreferrer">$n=14$ distinct 4-swap sequences</a><br>
<a href="https://drive.google.com/uc?export=download&id=0B4l-WgeGmpNVNFNaSkp0clRLX1E" rel="nofollow noreferrer">$n=15$ distinct 4-swap sequences</a></p>
<p>I've run an exhaustive search for n=4..12 and can confirm the bounds we found as the true lower bounds. Here are the lexicographically smallest solutions (indices refer to seat-swaps).</p>
<ul>
<li>4 $\quad$ - $\quad$ 1.[(1,2)]</li>
<li>5 $\quad$ - $\quad$ 3.[(1,2) (1,3) (1,4)] </li>
<li>6 $\quad$ - $\quad$ 4.[(1,2) (1,3) (1,5) (1,4)] </li>
<li>7 $\quad$ - $\quad$ 4.[(1,4) (1,5) (3,6) (2,6)] </li>
<li>8 $\quad$ - $\quad$ 6.[(1,2) (4,7) (1,5) (3,7) (1,6) (2,5)] </li>
<li>9 $\quad$ - $\quad$ 8.[(1,4) (1,5) (1,7) (1,6) (3,8) (2,8) (3,6) (1,5)]</li>
<li>10 $\quad$ - $\quad$ 10.[(1,2) (1,5) (1,6) (3,8) (4,10) (3,7) (5,8) (3,9) (3,10) (1,6)]</li>
<li>11 $\quad$ - $\quad$ 12.[(1,2) (1,4) (3,7) (6,9) (6,10) (1,5) (2,7) (3,11) (4,9) (4,8) (6,10) (4,11)]</li>
<li>12 $\quad$ - $\quad$ 14.[(1,2) (5,10) (1,6) (4,10) (1,7) (8,11) (4,12) (3,12) (1,9) (1,5) (7,11) (1,8) (5,10) (2,6)]</li>
</ul>
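<p>For anyone who wants to check such sequences themselves, here is a minimal verifier I'm adding (my own sketch, independent of the OP's GitHub code): it applies a swap sequence to the seats, records every adjacent pair of people after each swap, and checks that all $\binom{n}{2}$ pairs have been neighbours at some point.</p>
<pre><code>def verify(n, swaps, one_indexed=True):
    seats = list(range(n))  # seats[i] = person currently in seat i
    met = set()
    def record():
        for i in range(n):
            met.add(frozenset((seats[i], seats[(i + 1) % n])))
    record()  # the initial neighbours count as already met
    for i, j in swaps:
        if one_indexed:
            i, j = i - 1, j - 1
        seats[i], seats[j] = seats[j], seats[i]
        record()
    return len(met) == n * (n - 1) // 2

# e.g. the n = 7 solution listed above:
print(verify(7, [(1, 4), (1, 5), (3, 6), (2, 6)]))  # True
</code></pre>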
| <p>Not an answer, but a toy to play with
<a href="https://jsfiddle.net/zn1f30mv/14/embedded/result/" rel="nofollow noreferrer">https://jsfiddle.net/zn1f30mv/14/embedded/result/</a></p>
|
probability | <p>A two-sided coin has just been minted with two different sides (heads and tails). It has never been flipped before. Basic understanding of probability suggests that the probability of flipping heads is .5 and tails is .5. Unexpectedly, you flip the coin a <strong>very large number of times</strong> and it always lands on heads. Is the probability of flipping heads/tails still .5 each? Or has it changed in favor of tails because the probability should tend to .5 heads and .5 tails as you approach an infinite number of trials?</p>
<p>I understand that flipping coins is <em>generally</em> a stochastic process, but does that change at all if you see a large number of trials biased to one side?</p>
| <p>If you don't know whether it is a fair coin to start with, then it isn't a dumb question at all. (EDIT) You ask if the coin will be biased towards Tails to account for the all of the heads. If the coin <em>was</em> fair, then the <a href="https://math.stackexchange.com/a/1863465/14972">answer from tilper</a> addresses this well, with the overall answer being "No". Without that assumption of fairness, the overall answer becomes "No, and in fact we should believe the coin <em>is</em> biased towards heads.".</p>
<p>One way to think about this is by thinking of the probability $p$ of the coin landing heads to be a random variable. We can assume that we know absolutely nothing about the coin to start with, and take the distribution for $p$ to be a uniform random variable over $[0,1]$. Then, after flipping the coin some number of times and collecting data, we change our distribution accordingly.</p>
<p>There is actually a distribution which does exactly this, called the <em>Beta Distribution</em>, which is a continuous distribution with probability density function
$$f(x) = \frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)}$$
where $\alpha-1$ represents the number of heads we've recorded and $\beta-1$ the number of tails. The number $B(\alpha,\beta)$ is just a constant to normalize $f$ (however, it is actually equal to $\frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}$).</p>
<p>The following graphic (from the <a href="https://en.wikipedia.org/wiki/Beta_distribution#/media/File:Beta_distribution_pdf.svg" rel="nofollow noreferrer">wikipedia article</a>) shows how $f$ changes with difference choices of $\alpha,\beta$:</p>
<p><a href="https://i.sstatic.net/ETUo2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ETUo2.png" alt="PDF of Beta Distribution for different parameters"></a></p>
<p>As $\alpha \to \infty$ (i.e. you continue getting more heads) and $\beta$ stays constant (below I chose $\beta=5$), this will become extremely skewed in favor of $p$ being close to $1$.</p>
<p><a href="https://i.sstatic.net/RT9ZM.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RT9ZM.jpg" alt="enter image description here"></a></p>
| <p>You have wandered into the realm of Bayesian versus frequentist statistics. The Bayesian philosophy is most attractive to me in understanding this question.</p>
<p>Your "basic understanding of probability" can be interpreted as a prior expectation for what the distribution of heads and tails should be. But that prior expectation actually is itself a probabilistic expectation. Let me give some examples of what your prior expectation (called the Bayesian prior) might be:</p>
<p>1) Your prior might be that the frequency of heads is exactly $0.5$ no matter what. In that case, no number of consecutive heads or tails would shake that <em>a priori</em> certainty, and the answer is that your posterior estimate of the distribution is that it is $0.5$ heads.</p>
<p>2) Your prior might be that the probability of heads is a normal distribution with mean $0.5$ and standard deviation $0.001$ -- you are pretty sure the coin will be pretty close to fair. Now by the time the coin has landed on heads 100 consecutive times, the posterior estimate of the distribution is peaked sharply at around $0.995$. Your Bayesian prior has allowed experimental evidence to modify your expectation.</p>
<p>The alternative approach is frequentist. It says: "I thought (null hypothesis) that the coin was fair. If that were true, the likelihood of this result of no tails in a hundred flips would be very small. So, I can reject the original hypothesis and conclude that this coin is not fair."</p>
<p>The weakness of the Bayesian approach is that the result depends on the prior expectation, which is somewhat arbitrary. But when a lot of trials are involved, an amazing spectrum of possible priors lead to very similar <em>a posteriori</em> expectations.</p>
<p>The weakness of the frequentist approach is that in the end you don't have any expectation, just the idea that your null hypothesis is unlikely.</p>
|
geometry | <p>I have the equation not in the center, i.e.</p>
<p>$$\frac{(x-h)^2}{a^2}+\frac{(y-k)^2}{b^2}=1.$$</p>
<p>But what will be the equation once it is rotated?</p>
| <p>After a lot of mistakes I finally got the correct equation for my problem:</p>
<p><span class="math-container">$$\dfrac {((x-h)\cos(A)+(y-k)\sin(A))^2}{a^2}+\dfrac{((x-h) \sin(A)-(y-k) \cos(A))^2}{b^2}=1,$$</span></p>
<p>where <span class="math-container">$h, k$</span> and <span class="math-container">$a, b$</span> are the shifts and semi-axis in the <span class="math-container">$x$</span> and <span class="math-container">$y$</span> directions respectively and <span class="math-container">$A$</span> is the angle measured from <span class="math-container">$x$</span> axis.</p>
| <p>The equation you gave can be converted to the parametric form:
$$
x = h + a\cos\theta \quad ; \quad y = k + b\sin\theta
$$
If we let $\mathbf x_0 = (h,k)$ denote the center, then this can also be written as
$$
\mathbf x = \mathbf x_0 + (a\cos\theta)\mathbf e_1 + (b\sin\theta)\mathbf e_2
$$
where $\mathbf e_1 = (1,0)$ and $\mathbf e_2 = (0,1)$.</p>
<p>To rotate this curve, choose a pair of mutually orthogonal unit vectors $\mathbf u$ and $\mathbf v$, and then
$$
\mathbf x = \mathbf x_0 + (a\cos\theta)\mathbf u + (b\sin\theta)\mathbf v
$$
One way to define the $\mathbf u$ and $\mathbf v$ is:
$$
\mathbf u = (\cos\alpha, \sin\alpha) \quad ; \quad \mathbf v = (-\sin\alpha, \cos\alpha)
$$
This will give you an ellipse that's rotated by an angle $\alpha$, with center still at the point $\mathbf x_0 = (h,k)$.</p>
<p>If you prefer an implicit equation, rather than parametric ones, then any rotated ellipse (or, indeed, any rotated conic section curve) can be represented by a general second-degree equation of the form
$$
ax^2 + by^2 + cxy + dx + ey + f = 0
$$
The problem with this, though, is that the geometric meaning of the coefficients $a$, $b$, $c$, $d$, $e$, $f$ is not very clear.</p>
<p>There are further details on <a href="http://mathamazement.com/Lessons/Pre-Calculus/09_Conic-Sections-and-Analytic-Geometry/rotation-of-axes.html">this page</a>.</p>
<p><strong>Addition</strong>. Borrowing from rschwieb's solution ...</p>
<p>Since you seem to want a single implicit equation, proceed as follows. Let $c = \sqrt{a^2 - b^2}$ (assuming $a \ge b$, so that the foci lie along the $\mathbf u$ direction). Then the foci of the rotated ellipse are at $\mathbf x_0 + c \mathbf u$ and $\mathbf x_0 - c \mathbf u$. Using the "pins and string" definition of an ellipse, which is described <a href="http://en.wikipedia.org/wiki/Ellipse">here</a>, its equation is
$$
\Vert\mathbf x - (\mathbf x_0 + c \mathbf u)\Vert +
\Vert\mathbf x - (\mathbf x_0 - c \mathbf u)\Vert = \text{constant}
$$
This is equivalent to the one given by rschwieb. If you plug $\mathbf u = (\cos\alpha, \sin\alpha)$ into this, and expand everything, you'll get a single implicit equation.</p>
<p>The details are messy (which is probably why no-one wants to actually write everything out for you).</p>
|
probability | <p>Your friend flips a coin 7 times and you flip a coin 8 times; the person who got the most tails wins. If you get an equal amount, your friend wins.</p>
<p>There is a 50% chance of you winning the game and a 50% chance of your friend winning.</p>
<p>How can I prove this? The way I see it, you get one more flip than your friend so you have a 50% chance of winning if there is a 50% chance of getting a tails.</p>
<p>I even wrote a little script to confirm this suspicion:</p>
<pre><code>from random import choice
coin = ['H', 'T']
def flipCoin(count, side):
    num = 0
    for i in range(0, count):
        if choice(coin) == side:
            num += 1
    return num
games = 0
wins = 0
plays = 88888
for i in range(0, plays):
    you = flipCoin(8, 'T')
    friend = flipCoin(7, 'T')
    games += 1
    if you > friend:
        wins += 1
print('Games: ' + str(games) + ' Wins: ' + str(wins))
probability = wins/games * 100.0
print('Probability: ' + str(probability) + ' from ' + str(plays) + ' games.')
</code></pre>
<p>and as expected,</p>
<pre><code>Games: 88888 Wins: 44603
Probability: 50.17887678876789 from 88888 games.
</code></pre>
<p>But how can I prove this?</p>
| <p>Well, let there be two players $A$ and $B$. Let them flip $7$ coins each. Whoever gets more tails wins, ties are discounted. It's obvious that both players have an equal probability of winning $p=0.5$.</p>
<p>Now let's extend this. Compare the two players after $7$ tosses each, setting player $A$'s 8th toss aside. If $A$ is strictly ahead, $A$ wins no matter what the 8th toss shows; if $A$ is strictly behind, the single extra toss can at best force a tie, which $A$ loses--and by symmetry these two cases are equally likely. If the players are tied, the 8th toss acts as a tiebreaker: a tail wins for $A$ and a head loses, each with probability $0.5$. In every case the game splits evenly, so both players have winning probabilities of $0.5$.</p>
| <p>The probability distribution of the number of tails flipped by you is binomial with parameters $n = 8$, and $p$, where we will take $p$ to be the probability of obtaining tails in a single flip. Then the random number of tails you flipped $Y$ has the probability mass function $$\Pr[Y = k] = \binom{8}{k} p^k (1-p)^{8-k}.$$ Similarly, the number of tails $F$ flipped by your friend is $$\Pr[F = k] = \binom{7}{k} p^k (1-p)^{7-k},$$ assuming that the coin you flip and the coin your friend flips have the same probability of tails.</p>
<p>Now, suppose we are interested in calculating $$\Pr[Y > F],$$ the probability that you get strictly more tails than your friend (and therefore, you win). An exact calculation would then require the evaluation of the sum $$\begin{align*} \Pr[Y > F] &= \sum_{k=0}^7 \sum_{j=k+1}^8 \Pr[Y = j \cap F = k] \\ &= \sum_{k=0}^7 \sum_{j=k+1}^8 \binom{8}{j} p^j (1-p)^{8-j} \binom{7}{k} p^k (1-p)^{7-k}, \end{align*} $$ since your outcome $Y$ is independent of his outcome $F$. For such a small number of trials, this is not hard to compute: $$\begin{align*} \Pr[Y > F] &= p^{15}+7 p^{14} q+77 p^{13} q^2+203 p^{12} q^3+903 p^{11} q^4+1281 p^{10} q^5 \\ &+3115 p^9 q^6+2605 p^8 q^7+3830 p^7 q^8+1890 p^6 q^9+1722 p^5 q^{10} \\ &+462 p^4 q^{11}+252 p^3 q^{12}+28 p^2 q^{13}+8 p q^{14}, \end{align*}$$ where $q = 1-p$. For $p = 1/2$--a fair coin--this is exactly $1/2$.</p>
<p>That seems surprising! But there is an intuitive interpretation. Think of your final toss as a tiebreaker, in the event that both of you got the same number of tails after 7 trials each. If you win the final toss with a tail, you win because your tail count is now strictly higher. If not, your tail count is still the same as his, and under the rules, he wins. But the chances of either outcome for the tiebreaker is the same.</p>
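<p>(If you would like to see the double sum evaluated exactly, here is a short sketch of mine using rational arithmetic; it confirms the value $1/2$ at $p = 1/2$, and shows the game is no longer fair for a biased coin.)</p>
<pre><code>from fractions import Fraction
from math import comb

def p_you_win(p):
    """Exact P[Y > F]: your 8 flips vs. your friend's 7, tails probability p."""
    q = 1 - p
    return sum(comb(8, j) * p**j * q**(8 - j) * comb(7, k) * p**k * q**(7 - k)
               for k in range(8) for j in range(k + 1, 9))

print(p_you_win(Fraction(1, 2)))   # 1/2 exactly
print(p_you_win(Fraction(6, 10)))  # no longer 1/2 once the coin is biased
</code></pre>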
|
differentiation | <p>A function is "differentiable" if it has a derivative. A function is "continuous" if it has no sudden jumps in it.</p>
<p>Until today, I thought these were merely two equivalent definitions of the same concept. But I've read some stuff today which seems to be claiming that this is <em>not</em> the case.</p>
<p>The obvious next question is "why?" Apparently somebody has already asked:</p>
<p><a href="https://math.stackexchange.com/questions/7923/are-continuous-functions-always-differentiable">Are Continuous Functions Always Differentiable?</a></p>
<p>Several answers were given, but I don't understand any of them. In particular, Wikipedia and one of the replies above both claim that $|x|$ has no derivative. Can anyone explain this extremely unexpected result?</p>
<p><strong>Edit:</strong> Apparently some people dislike the fact that this is non-obvious to me. To be clear: I am not saying that the result is <em>untrue</em>. (I'm sure many great mathematicians have analysed the question very carefuly and are quite sure of the answer.) I am saying that it is <em>extremely perplexing</em>. (As a general rule, mathematics has a habit of doing that. Which is one of the reasons why we demand proof of everything.)</p>
<p>In particular, can anyone explain precisely why the derivative of $|x|$ at zero is <em>not</em> simply zero? After all, the function is neither increasing nor decreasing, which ought to mean the derivative is zero. Alternatively, the expression</p>
<p>$$\frac{|x + a| - |x - a|}{a}$$</p>
<p>becomes closer and closer to zero as $a$ becomes closer to zero when $x=0$. (In fact, it is exactly zero <em>for all $a$!</em>) Is that not how derivatives work?</p>
<p>Several answers have suggested that the derivative is not defined here "because there would be a jump in the derivative at that point". This seems to assert that a continuous function must never have a discontinuous derivative; I'm not convinced that this is the case. Can anyone confirm or refuse this argument?</p>
| <p>Let's be clear: continuity and differentiability begin as a concept <em>at a point</em>. That is, we talk about a function being:</p>
<ol>
<li><em>Defined</em> at a point $a$;</li>
<li><em>Continuous</em> at a point $a$;</li>
<li><em>Differentiable</em> at a point $a$;</li>
<li><em>Continuously differentiable</em> at a point $a$;</li>
<li><em>Twice differentiable</em> at a point $a$;</li>
<li><em>Continuously twice differentiable</em> at a point $a$;</li>
</ol>
<p>and so on, until we get to "analytic at the point $a$" after infinitely many steps. </p>
<p>I'll concentrate on the first three and you can ignore the rest; I'm just putting it in a slightly larger context. </p>
<p>A function is defined at $a$ if it has a value at $a$. Not every function is defined everywhere: $f(x) = \frac{1}{x}$ is not defined at $0$, $g(x)=\sqrt{x}$ is not defined at negative numbers, etc. Before we can talk about how the function behaves <em>at</em> a point, we need the function to be <em>defined</em> at the point.</p>
<p>Now, let us say that the function is defined at $a$. The <em>intuitive</em> notion we want to refer to when we talk about the function being "continuous at $a$" is that the graph does not have any holes, breaks, or jumps at $a$. Now, this is intuitive, and as such it makes it very hard to actually check or test functions, especially when we don't have their graphs. So we need a <em>definition</em> that is mathematical, and that allows for testing and falsification. One such definition, apt for functions of real numbers, is:</p>
<blockquote>
<p>We say that $f$ is <em>continuous at $a$</em> if and only if three things happens:</p>
<ol>
<li>$f$ is <em>defined</em> at $a$; and</li>
<li>$f$ has a <em>limit</em> as $x$ approaches $a$; and</li>
<li>$\lim\limits_{x\to a}f(x) = f(a)$.</li>
</ol>
</blockquote>
<p>The first condition guarantees that there are no holes in the graph; the second condition guarantees that there are no jumps at $a$; and the third condition that there are no breaks (e.g., taking a horizontal line and shifting a single point one unit up would be what I call a "break"). </p>
<p>Once we have this condition, we can actually test functions. It will turn out that everything we think should be "continuous at $a$" actually <em>is</em> according to this definition, but there are also functions that might seem like they ought not to be "continuous at $a$" under this definition but <em>are</em>. For example, the function
$$f(x) = \left\{\begin{array}{ll}
0 & \text{if }x\text{ is a rational number,}\\
x & \text{if }x\text{ is not a rational number.}
\end{array}\right.$$
turns out to be continuous at $a=0$ under the definition above, even though it has lots and lots of jumps and breaks. (In fact, it is continuous <strong>only</strong> at $0$, and nowhere else).</p>
<p>Well, too bad. The definition is clear, powerful, usable, and captures the notion of continuity, so we'll just have to let a few undesirables into the club if that's the price for having it.</p>
<p>We say a function is <em>continuous</em> (as opposed to "continuous at $a$") if it is continuous at <em>every point</em> where it is defined. We say a function is <em>continuous everywhere</em> if it is continuous at each and every point (in particular, it has to be defined everywhere). This is perhaps unfortunate terminology: for instance, $f(x) = \frac{1}{x}$ is <em>not</em> continuous at $0$ (it is not defined at $0$), but it <em>is</em> a continuous function (it is continuous at every point where it <em>is</em> defined), but <strong>not</strong> continuous everywhere (not continuous at $0$). Well, language is not always logical, we just learn to live with it (witness "flammable" and "inflammable", which mean the same thing).</p>
<p>Now, what about differentiability at $a$? We say a function is <em>differentiable at $a$</em> if the graph has a well-defined tangent at the point $(a,f(a))$ that is not vertical. What is a tangent? A tangent is a line that affords the best possible linear approximation to the function, in such a way that the relative error goes to $0$. That's a mouthful, you can see this explained in more detail <a href="https://math.stackexchange.com/a/12290/742">here</a> and <a href="https://math.stackexchange.com/q/19630/742">here</a>. We exclude vertical tangents because the derivative is actually the <em>slope</em> of the tangent at the point, and vertical lines have no slope.</p>
<p>Turns out that, intuitively, in order for there to be a tangent at the point, we need the graph to have no holes, no jumps, no breaks, <em>and</em> no sharp corners or "vertical segments".</p>
<p>From that intuitive notion, it should be clear that in order to be differentiable at $a$ the function has to be <em>continuous</em> at $a$ (to satisfy the "no holes, no jumps, no breaks"), but it needs <em>more</em> than that. The example of $f(x) = |x|$ is a function that is <em>continuous</em> at $x=0$, but has a sharp corner there; that sharp corner means that you don't have a well-defined tangent at $x=0$. You might think the line $y=0$ is the tangent there, but it turns out that it does not satisfy the condition of being a <em>good approximation</em> to the function, so it's not actually the tangent. There is no tangent at $x=0$. </p>
<p>To formalize this we end up using limits: the function has a non-vertical tangent at the point $a$ if and only if
$$\lim_{h\to 0}\frac{f(a+h)-f(a)}{h}\text{ exists}.$$
What this does is just saying "there is a line that affords the best linear approximation with a relative error going to $0$." Once you check, it turns out it <em>does</em> capture what we had above in the sense that every function that we think <em>should</em> be differentiable (have a nonvertical tangent) at $a$ <em>will</em> be differentiable under this definition. Again, turns out that it does open the door of the club for functions that might seem like they ought <em>not</em> to be differentiable but are. Again, that's the price of doing business.</p>
<p>A function is <em>differentiable</em> if it is differentiable at each point of its domain. It is <em>differentiable everywhere</em> if it is differentiable at every point (in particular, $f$ is defined at every point).</p>
<p>Because of the definitions, continuity is a <strong>prerequisite</strong> for differentiability, but it is not enough. A function may be continuous at $a$, but not differentiable at $a$.</p>
<p>In fact, functions can get very wild. In the late 19th century, it was shown that you can have functions that are continuous <em>everywhere</em>, but that do not have a derivative <em>anywhere</em> (they are "really spiky" functions). </p>
<p>Hope that helps a bit.</p>
<hr/>
<p><strong>Added.</strong> You ask about $|x|$ and specifically, about considering
$$\frac{|x+a|-|x-a|}{a}$$
as $a\to 0$.</p>
<p>I'll first note that you actually want to consider
$$\frac{f(x+a)-f(x-a)}{2a}$$
rather than over $a$. To see this, consider the simple example of the function $y=x$, where we want the derivative to be $1$ at every point. If we consider the quotient you give, we get $2$ instead:
$$\frac{f(x+a)-f(x-a)}{a} = \frac{(x+a)-(x-a)}{a} = \frac{2a}{a} = 2.$$
You really want to divide by $2a$, because that's the distance between the points $x+a$ and $x-a$. </p>
<p>The problem is that this is not <em>always</em> a good way of finding the tangent; <em>if</em> there is a well-defined tangent, <em>then</em> the difference
$$\frac{f(x+a)-f(x-a)}{2a}$$
will give the correct answer. However, it turns out that there are situations where this gives you <em>an</em> answer, but not the right answer because there is no tangent. </p>
<p>Again: the tangent is defined to be the unique line, if one exists, in which the relative error goes to $0$. The only possible candidate for a tangent at $0$ for $f(x) = |x|$ is the line $y=0$, so the question is why this is not the tangent; the answer is that the relative error does <em>not</em> go to $0$. That is, the ratio between how big the error is if you use the line $y=0$ instead of the function (which is the value $|x|-0$) and the size of the input (how far we are from $0$, which is $x$) is always $1$ when $x\gt 0$,
$$\frac{|x|-0}{x} = \frac{x}{x} = 1\quad\text{if }x\gt 0,$$
and is always $-1$ when $x\lt 0$:
$$\frac{|x|-0}{x} = \frac{-x}{x} = -1\quad\text{if }x\lt 0.$$
That is: this line is <em>not</em> a good approximation to the graph of the function near $0$: even as you get closer and closer and closer to $0$, if you use $y=0$ as an approximation your error continues to be large relative to the input: it's not getting better and better relative to the size of the input. But the tangent is <em>supposed</em> to make the error get smaller and smaller relative to how far we are from $0$ as we get closer and closer to zero. That is, if we use the line $y=mx$, then it must be the case that
$$\frac{f(x) - mx}{x}$$
approaches $0$ as $x$ approaches $0$ in order to say that $y=mx$ is "the tangent to the graph of $y=f(x)$ at $x=0$". This is not the case for <em>any</em> value of $m$ when $f(x)=|x|$, so $f(x)=|x|$ does <em>not</em> have a tangent at $0$. The "symmetric difference" that you are using is hiding the fact that the graph of $y=f(x)$ does <em>not</em> flatten out as we approach $0$, even though the line you are using <em>is</em> horizontal all the time. Geometrically, the graph does not get closer and closer to the line as you approach $0$: it's always a pretty bad error.</p>
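<p>(A tiny numerical footnote I'm adding: the symmetric quotient for $f(x)=|x|$ at $0$ is identically zero, while the one-sided difference quotients never leave $\pm 1$; this is exactly the "error that stays large" described above.)</p>
<pre><code>for a in (0.1, 0.01, 0.001):
    symmetric = (abs(0 + a) - abs(0 - a)) / (2 * a)  # always 0.0
    right = (abs(0 + a) - abs(0)) / a                # always 1.0
    left = (abs(0 - a) - abs(0)) / (-a)              # always -1.0
    print(a, symmetric, right, left)
</code></pre>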
| <p>The derivative, in simple words, is the slope of the function at the point. If you consider <span class="math-container">$|x|$</span> at <span class="math-container">$x > 0$</span> the slope is clearly <span class="math-container">$1$</span> since there <span class="math-container">$|x| = x$</span>. Similarly, for <span class="math-container">$x<0$</span> the slope is <span class="math-container">$-1$</span>. Thus, if you consider <span class="math-container">$x = 0$</span> then you cannot define the slope at that point, i.e. <em>right</em> and <em>left</em> directional derivatives do not agree at <span class="math-container">$x = 0$</span>. So that's why the function is not differentiable at <span class="math-container">$x=0$</span>.</p>
<p>Just to extend a perfect comment by Qiaochu to a more striking example, the sample path of a Brownian motion is continuous but nowhere differentiable: <img src="https://i.sstatic.net/tjo4r.gif" alt="Brownian scaling" /></p>
<p>Note also that this curve exhibits self-similarity property, so if you zoom it, it looks the same and never will look any similar to a line. Also, the Brownian motion can be considered as a measure (even a probability distribution) on the space of continuous functions. The set of differentiable functions has this measure zero. So one can say that it is very unlikely that a continuous function is differentiable (I guess, that is what André meant in his comment).</p>
|
probability | <p><strong>Context:</strong> I'm a high school student, who has only ever had an introductory treatment, if that, on combinatorics. As such, the extent to which I have seen combinatoric applications is limited to situations such as "If you need a group of 2 men and 3 women and you have 8 men and 9 women, how many possible ways can you pick the group" (They do get slightly more complicated, but are usually similar).</p>
<p><strong>Question:</strong> I apologise in advance for the naive question, but at an elementary level it seems as though combinatorics (and the ensuing probability that makes use of it) is not overly rigorous. It doesn't seem as though you can "prove" that the number of arrangements you arrived at is the correct number. What if you forget a case?</p>
<p>I know that you could argue that you've considered all cases, by asking if there is another case other than the ones you've considered. But, that doesn't seem to be the way other areas of mathematics is done. If I wish to prove something, I couldn't just say "can you find a situation where the statement is incorrect" as we don't just assume it is correct by nature.</p>
<p><strong>Is combinatorics rigorous?</strong></p>
<p>Thanks</p>
| <p>Combinatorics certainly <em>can</em> be rigourous but is not usually presented that way because doing it that way is:</p>
<ul>
<li>longer (obviously)</li>
<li>less clear because the rigour can obscure the key ideas</li>
<li>boring because once you know intuitively that something works you lose interest in a rigourous argument</li>
</ul>
<p>For example, compare the following two proofs that the binomial coefficient is $n!/k!(n - k)!$ where I will define the binomial coefficient as the number of $k$-element subsets of $\{1,\dots,n\}$.</p>
<hr>
<p><strong>Proof 1:</strong></p>
<p>Take a permutation $a_1,\dots, a_n$ of $n$. Separate this into $a_1,\dots,a_k$ and $a_{k + 1}, \dots, a_n$. We can permute $1,\dots, n$ in $n!$ ways and since we don't care about the order of $a_1,\dots,a_k$ or $a_{k + 1},\dots,a_n$ we divide by $k!(n - k)!$ for a total of $n!/k!(n - k)!$.</p>
<hr>
<p><strong>Proof 2:</strong></p>
<p>Let $B(n, k)$ denote the set of $k$-element subsets of $\{1,\dots,n\}$. We will show that there is a bijection</p>
<p>$$ S_n \longleftrightarrow B(n, k) \times S_k \times S_{n - k}. $$</p>
<p>The map $\to$ is defined as follows. Let $\pi \in S_n$. Let $A = \{\pi(1),\pi(2),\dots,\pi(k)\}$ and let $B = \{\pi(k + 1),\dots, \pi(n)\}$. For each finite subset $C$ of $\{1,\dots,n\}$ with $m$ elements, fix a bijection $g_C : C \longleftrightarrow \{1,\dots,m\}$ by writing the elements of $C$ in increasing order $c_1 < \dots < c_m$ and mapping $c_i \longleftrightarrow i$.</p>
<p>Define maps $\pi_A$ and $\pi_B$ on $\{1,\dots,k\}$ and $\{1,\dots,n-k\}$ respectively by defining
$$ \pi_A(i) = g_A(\pi(i)) \text{ and } \pi_B(j) = g_B(\pi(k + j)). $$</p>
<p>We map the element $\pi \in S_n$ to the triple $(A, \pi_A, \pi_B) \in B(n, k) \times S_k \times S_{n - k}$.</p>
<p>Conversely, given a triple $(A, \sigma, \rho) \in B(n, k) \times S_k \times S_{n - k}$ we define $\pi \in S_n$ by
$$
\pi(i) =
\begin{cases}
g_A^{-1}(\sigma(i)) & \text{if } i \in \{1,\dots,k\} \\
g_B^{-1}(\rho(i-k)) & \text{if } i \in \{k + 1,\dots,n \}
\end{cases}
$$
where $B = \{1,\dots,n\} \setminus A$.</p>
<p>This defines a bijection $S_n \longleftrightarrow B(n, k) \times S_k \times S_{n - k}$ and hence</p>
<p>$$ n! = {n \choose k} k!(n - k)! $$</p>
<p>as required.</p>
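<p>(An enumeration check I'm adding, not part of the proof itself: for small $n$ and $k$ one can implement the map $\pi \mapsto (A, \pi_A, \pi_B)$ directly and verify that it is injective--hence bijective, since both sides have the same size.)</p>
<pre><code>from itertools import permutations
from math import comb, factorial

n, k = 5, 2
images = set()
for pi in permutations(range(1, n + 1)):
    A = tuple(sorted(pi[:k]))
    B = tuple(sorted(pi[k:]))
    g_A = {c: i + 1 for i, c in enumerate(A)}  # the i-th smallest element maps to i
    g_B = {c: i + 1 for i, c in enumerate(B)}
    pi_A = tuple(g_A[x] for x in pi[:k])       # pi_A(i) = g_A(pi(i))
    pi_B = tuple(g_B[x] for x in pi[k:])       # pi_B(j) = g_B(pi(k + j))
    images.add((A, pi_A, pi_B))

assert len(images) == factorial(n)                                  # injective
assert len(images) == comb(n, k) * factorial(k) * factorial(n - k)  # same size
print("bijection verified for n = 5, k = 2")
</code></pre>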
<hr>
<p><strong>Analysis:</strong></p>
<p>The first proof was two sentences whereas the second was some complicated mess. People with experience in combinatorics will understand the second argument is happening behind the scenes when reading the first argument. To them, the first argument is all the rigour necessary. For students it is useful to teach the second method a few times to build a level of comfort with bijective proofs. But if we tried to do all of combinatorics the second way it would take too long and there would be rioting.</p>
<hr>
<p><strong><em>Post Scriptum</em></strong></p>
<p>I will say that a lot of combinatorics textbooks and papers do tend to be written more in the line of the second argument (i.e. rigourously). Talks and lectures tend to be more in line with the first argument. However, higher level books and papers only prove "higher level results" in this way and will simply state results that are found in lower level sources. They will also move a lot faster and not explain each step exactly.</p>
<p>For example, I didn't show that the map above was a bijection, merely stated it. In a lower level book there will be a proof that the two maps compose to the identity in both ways. In a higher level book, you might just see an example of the bijection and a statement that there is a bijection in general with the assumption that the person reading through the example could construct a proof on their own.</p>
| <p>Essentially, all (nearly all?) of combinatorics comes down to two things, the multiplication rule and the addition rule.</p>
<blockquote>
<p>If <span class="math-container">$A,B$</span> are finite sets then <span class="math-container">$|A\times B|=|A|\,|B|$</span>.</p>
<p>If <span class="math-container">$A,B$</span> are finite sets and <span class="math-container">$A\cap B=\emptyset$</span>, then <span class="math-container">$|A\cup B|=|A|+|B|$</span>.</p>
</blockquote>
<p>These can be rigorously proved, and more sophisticated techniques can be rigorously derived from them, for example, the fact that the number of different <span class="math-container">$r$</span>-element subsets of an <span class="math-container">$n$</span>-element set is <span class="math-container">$C(n,r)$</span>.</p>
<p>So, this far, combinatorics is perfectly rigorous. IMHO, the point at which it may become (or may appear to become) less rigorous is when it moves from pure to applied mathematics. So, with your specific example, you have to assume (or justify if you can) that counting the number of choices of <span class="math-container">$2$</span> men and <span class="math-container">$3$</span> women from <span class="math-container">$8$</span> men and <span class="math-container">$9$</span> women is the same as evaluating
<span class="math-container">$$|\{A\subseteq M: |A|=2\}\times \{B\subseteq W: |B|=3\}|\ ,$$</span>
where <span class="math-container">$|M|=8$</span> and <span class="math-container">$|W|=9$</span>.</p>
<p>It should not be surprising that the applied aspect of the topic requires some assumptions that may not be mathematically rigorous. The same is true in many other cases: for example, modelling a physical system by means of differential equations. Solving the equations once you have them can be done (more or less) rigorously, but deriving the equations in the first place usually cannot.</p>
<p>Hope this helps!</p>
|
logic | <p>According to <a href="https://johncarlosbaez.wordpress.com/2016/04/02/computing-the-uncomputable/">here</a>, there is the "standard" model of Peano Arithmetic. This is defined as $0,1,2,...$ in the usual sense. What would be an example of a <em>nonstandard</em> model of Peano Arithmetic? What would a <em>nonstandard</em> amount of time be like?</p>
| <p>Peano arithmetic is a first-order theory, and therefore if it has an infinite model---and it has---then it has models of every cardinality.</p>
<p>Not only that, because it has a model which is pointwise definable (every element is definable), then there are non-isomorphic countable models. Which means that you can find models which are not the standard model already at the countable cardinality.</p>
<p>What do these models look like? Well, it's kinda hard to explain. They all have an initial segment that looks like the natural numbers. That much is easy to prove. Also not hard to show is that the rest of the model can be decomposed into $\Bbb Z$-chains. Namely, if $c$ is a non-standard number, then it has a predecessor (since Peano proves that every non-zero element is a successor). So we can define $f(k)=S^k(c)$, as an order isomorphism between the "chunk" of the model that $x$ lives in.</p>
<p>Harder to prove, but still not impossible, is that at least for the countable parts, the models all look the same as far as the order go, they all have an initial segment of $\Bbb N$, followed by $\Bbb{Q\times Z}$ ordered lexicographically.</p>
<p>To produce such models you can use three standard methods:</p>
<ol>
<li><p>Compactness. Add a constant $c$ and require it to be larger than any numeral; by compactness this is a consistent theory, so it has a model. This model cannot be the standard model, because it has an element larger than all the numerals.</p></li>
<li><p>Ultrapowers. Take a free ultrafilter $\mathcal U$ over $\Bbb N$, and consider the ultrapower $\Bbb{N^N}/\cal U$. Counting arguments will show you that this ultrapower has cardinality $2^{\aleph_0}$, so it is certainly not isomorphic to $\Bbb N$. If you prefer, you can use the fact that $\mathcal U$ is not a countably-complete ultrafilter, and therefore the ultrapower cannot be well-ordered, so without checking for cardinality it cannot be isomorphic to $\Bbb N$.</p></li>
<li><p>Incompleteness. We know that Peano is not a complete theory. Therefore there are statements which are true in $\Bbb N$, but Peano does not prove. Therefore the negation of such statement is consistent with the rest of the axioms of Peano, and must have a model. But this model cannot be isomorphic to $\Bbb N$. The benefit of this method is that it allows you to obtain very different theories of your models, whereas ultrapowers and compactness arguments tend to result in elementarily equivalent models.</p></li>
</ol>
| <p>It should be mentioned that one of the most concrete nonstandard models of PA was developed by Skolem in the 1930s in ZF (without the axiom of choice, unlike the constructions mentioned in the other <em>answer</em>). This is roughly in terms of definable functions on $\mathbb N$ ordered by their asymptotic growth; for details see <a href="http://dx.doi.org/10.1007/s10699-012-9316-5" rel="noreferrer">this 2013 publication in <em>Foundations of Science</em></a>, Section 3.2, page 272.</p>
|
number-theory | <p>In Mathworld's "<a href="http://mathworld.wolfram.com/PiApproximations.html"><em>Pi Approximations</em></a>", (line 58), Weisstein mentions one by the mathematician <a href="https://en.wikipedia.org/wiki/Daniel_Shanks">Daniel Shanks</a> that differs by a mere $10^{-82}$,</p>
<p>$$\pi \approx \frac{6}{\sqrt{3502}}\ln(2u)\color{blue}{+10^{-82}}\tag{1}$$</p>
<p>and says that $u$ is a product of four simple quartic units, but does not explicitly give the expressions. I managed to locate Shanks' <strike><em>Dihedral Quartic Approximations and Series for Pi (1982)</em></strike> <em>Quartic Approximations for Pi</em> (1980) online before so,</p>
<p>$$u = (a+\sqrt{a^2-1})^2(b+\sqrt{b^2-1})^2(c+\sqrt{c^2-1})(d+\sqrt{d^2-1}) \approx 1.43\times10^{13}$$</p>
<p>where,</p>
<p>$$\begin{aligned}
a &= \tfrac{1}{2}(23+4\sqrt{34})\\
b &= \tfrac{1}{2}(19\sqrt{2}+7\sqrt{17})\\
c &= (429+304\sqrt{2})\\
d &= \tfrac{1}{2}(627+442\sqrt{2})
\end{aligned}$$</p>
<p>(<em>Remark</em>: In Shank's paper, the expressions for $a,b$ are different since he didn't express them as squares.) </p>
<p>A small tweak to $(1)$ can <em>vastly</em> increase its accuracy to $10^{-161}$,</p>
<p>$$\pi \approx \frac{1}{\sqrt{3502}}\ln\big((2u)^6+24\big)\color{blue}{+10^{-161}}\tag{2}$$</p>
<p>I noticed the constant $u$ can also be expressed in terms of the <em>Dedekind eta function</em> $\eta(\tau)$ as,</p>
<p>$$u = \frac{1}{2}\left(\frac{\eta(\tfrac{1}{2}\sqrt{-3502})}{\eta(\sqrt{-3502})}\right)^4\approx 1.43\times 10^{13}$$</p>
<p>which explains why $24$ improves the accuracy. Note that the <em>class number</em> of $d = 4\cdot3502$ is $h(-d)=16$, and $u$ is an algebraic number of deg $16$. Mathworld has a <a href="http://mathworld.wolfram.com/ClassNumber.html">list of class numbers of <em>d</em></a>. However, we can also use those with $h(-d)=8$ such as,</p>
<p>$$x = \frac{1}{\sqrt{2}}\left(\frac{\eta(\tfrac{1}{2}\sqrt{-742})}{\eta(\sqrt{-742})}\right)^2 \approx 884.2653\dots$$</p>
<p>which is a root of the 8th deg,</p>
<p>$$1 + 886 x + 1535 x^2 + 962 x^3 + 1628 x^4 - 962 x^5 + 1535 x^6 - 886 x^7 + x^8 = 0$$</p>
<p><strong><em>Question</em></strong>: <em>Analogous to $u$, how do we express $x$ as a product of <strong>two</strong> quartic units?</em></p>
| <p>Let $a = \displaystyle{ \frac{11 + \sqrt{106}}{2}}$ and $b = \displaystyle{ \frac{21 + 2 \sqrt{106}}{2}}.$ Then</p>
<p>$$x = (a + \sqrt{a^2 - 1}) (b + \sqrt{b^2 + 1}).$$</p>
<p>As requested, this exhibits $x$ as a product of two quartic units. (For the purists, note that $a$ and $b$ are only half algebraic integers, but the expressions above are genuinely units. I wrote them in this form to conform with the previous example of the OP) The first unit (involving $a$) generates a degree four extension whose Galois closure has Galois group $D_8$, but the latter generates a Galois extension with Galois group $\mathbf{Z}/2\mathbf{Z} \times \mathbf{Z}/2\mathbf{Z}$.</p>
<p>For those playing at home, this allows for the extra simplification</p>
<p>$$b + \sqrt{b^2 + 1} = (1 + \sqrt{2})^2 \cdot \frac{7 + \sqrt{53}}{2} .$$</p>
| <p>The following is not a direct answer to your question, but is rather too long for a comment.</p>
| <p>This is basically Ramanujan's approximation $$\pi \approx \frac{24}{\sqrt{n}}\log(2^{1/4}g_{n}) = \frac{6}{\sqrt{n}}\log (2g_{n}^{4}) = \frac{6}{\sqrt{n}}\log (2u)\tag{1}$$ where $g_{n}$ is Ramanujan's class invariant given by $$g_{n} = 2^{-1/4}e^{\pi\sqrt{n}/24}(1 - e^{-\pi\sqrt{n}})(1 - e^{-3\pi\sqrt{n}})(1 - e^{-5\pi\sqrt{n}})\cdots\tag{2}$$ Here $n = 3502 = 2 \times 17 \times 103$ and it is known that for many values of $n$ the class invariant $g_{n}$ is a unit, so that $u = g_{n}^{4}$ is also a unit. The value of $u$ given in the question is clearly a unit.</p>
<p>Also raising both sides of $(2)$ to the 24th power and doing some simplification we get $$(2u)^{6} + 24 = 64g_{n}^{24} + 24 = e^{\pi\sqrt{n}} + 276e^{-\pi\sqrt{n}} + \cdots = e^{\pi\sqrt{n}}(1 + 276e^{-2\pi\sqrt{n}} + \cdots)\tag{3}$$ and hence on taking logs we get $$\log\{(2u)^{6} + 24\} = \pi\sqrt{n} + \log(1 + 276e^{-2\pi\sqrt{n}}) \approx \pi\sqrt{n} + 276e^{-2\pi\sqrt{n}}$$ and thus $$\pi \approx \frac{1}{\sqrt{n}}\log\{(2u)^{6} + 24\} - \frac{276}{\sqrt{n}}e^{-2\pi\sqrt{n}}\tag{4}$$ The error is of the order of $e^{-2\pi\sqrt{n}}$ and in formula $(1)$ it is of the order $e^{-\pi\sqrt{n}}$ (as evident from infinite product $(2)$). Putting $n = 3502$ we see that the error in formula $(1)$ is of the order $10^{-81}$ and that in formula $(4)$ is of the order $10^{-161}$. See Ramanujan's paper "Modular Equations and Approximations to $\pi$".</p>
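<p>Both error terms can be checked numerically. Below is a minimal Python sketch using the <code>mpmath</code> library at high precision; the constants $a,b,c,d$ are the ones quoted in the question, and the rest simply evaluates formulas $(1)$ and $(2)$:</p>

<pre><code>from mpmath import mp, sqrt, log, pi

mp.dps = 200                                  # 200 decimal digits of precision

a = (23 + 4*sqrt(34)) / 2
b = (19*sqrt(2) + 7*sqrt(17)) / 2
c = 429 + 304*sqrt(2)
d = (627 + 442*sqrt(2)) / 2

unit = lambda t: t + sqrt(t**2 - 1)
u = unit(a)**2 * unit(b)**2 * unit(c) * unit(d)   # ~1.43e13

print(mp.nstr(abs(6/sqrt(3502) * log(2*u) - pi), 5))          # ~1e-82
print(mp.nstr(abs(log((2*u)**6 + 24)/sqrt(3502) - pi), 5))    # ~1e-161
</code></pre>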
|
probability | <p>At the end of Probability class, our professor gave us the following puzzle:</p>
<blockquote>
<p>There are 100 bags each with 100 coins, but only one of these bags has gold coins in it. The gold coin has weight of 1.01 grams and the other coins has weight of 1 gram. We are given a digital scale, but we can only use it once. How can we identify the bag of gold coins?</p>
</blockquote>
<p>After about 5 minutes waiting, our professor gave us the solution (the class had ended and he didn't want to wait any longer):</p>
<blockquote>
<p>Give the bags numbers from 0 through 99, then take 0 coins from the bag number 0, 1 coin from the bag number 1, 2 coins from the bag number 2, and so on until we take 99 coins from the bag number 99. Gather all the coins we have taken together and put them on the scale. Denote the weight of these coins as $W$ and the number of bag with gold coins in it as $N$, then we can identify the bag of gold coins using formula $$N=100(W-4950)$$
For instance, if the weight of all coins gathered is $4950.25$ grams, then using the formula above the bag number 25 has the gold coins in it.</p>
</blockquote>
<p>My questions are:</p>
<ol>
<li>How does the formula work? Where does it come from?</li>
<li>Do we have other ways to solve this puzzle? If yes, how?</li>
<li>If the digital scale is replaced by a traditional two-pan balance scale (the kind depicted in the Libra symbol, or in Shakespeare's play <em>The Merchant of Venice</em>), then how do we solve this puzzle?</li>
</ol>
| <p>To understand the formula, it would be easiest to explain how it works conceptually before we derive it.</p>
<p>Let's simplify the problem and say there are only 3 bags each with 2 coins in them. 2 of those bags have the 1 gram coins and one has the 1.01 gram gold coins. Let's denote the bags arbitrarily as $Bag_0$, $Bag_1$, and $Bag_2$. Similarly to your problem, let's take 0 coins from $Bag_0$, 1 coin from $Bag_1$, and 2 coins from $Bag_2$. We know that the gold coins must be in one of those bags, so there are three possibilities when we weigh the three coins we removed:</p>
<p>Gold Coins in $Bag_0$: So the weight of the 3 coins on the scale are all 1 gram. So the scale will read 3 grams.</p>
<p>Gold Coins in $Bag_1$: So 1 of the coins weighs 1.01 grams and the other 2 coins weigh 1 gram each. So the scale will read 3.01 grams.</p>
<p>Gold Coins in $Bag_2$: So the 2 coins from $Bag_2$ weigh 2.02 grams together and the remaining coin weighs 1 gram. So the scale will read 3.02 grams.</p>
<p>So each possibility has a unique scenario. So if we determine the weight, we can determine from which bag those coins came from based on that weight.</p>
<p>We can generalize our results from this simplified example to your 100 bag example.</p>
<p>Now for deriving the formula. Say hypothetically, of our 100 bags, all 100 coins in each of the 100 bags weigh 1 gram each. In that case, when we remove 0 coins from $Bag_0$, 1 from $Bag_1$, up until 99 coins from $Bag_{99}$, we'll have a total of 4950 coins on the scale, which will equivalently be 4950 grams. Simply put, if $n$ is our Bag number (denoted $Bag_n$), we've placed $n$ coins from each $Bag_n$ onto the scale for $n = 0, 1, 2, ... 99 $.</p>
<p>So the weight of the coins will be $Weight = 1 + 2 + 3 + ... + 99 = 4950$</p>
<p>But we actually have one bag with gold coins weighing 1.01 grams. And we know that those 1.01 gram coins must be from some $Bag_n$. In our hypothetical example, all of our coins were 1 gram coins, so we must replace the $n$ coins weighed from $Bag_n$ with $n$ gold coins weighing 1.01 grams. Mathematically, we would have:
$Weight = 4950 - n + 1.01n = 4950 + .01n = 4950 + n/100$</p>
<p>Rearranging the formula to solve for n, we have:
$100(Weight-4950) = n$, where $Weight$ is $W$ and $n$ is $N$ in your example.</p>
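<p>As a sanity check, here is a minimal Python sketch (the helper name is made up for illustration) that simulates the weighing and recovers the bag number via $N = 100(W - 4950)$:</p>

<pre><code>import random

def find_gold_bag(gold_bag, n_bags=100):          # hypothetical helper
    # take k coins from bag k; gold coins weigh 1.01 g, the rest 1 g
    weight = sum(k * (1.01 if k == gold_bag else 1.0) for k in range(n_bags))
    return round(100 * (weight - 4950))           # N = 100(W - 4950)

gold = random.randrange(100)
assert find_gold_bag(gold) == gold
</code></pre>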
<p>I have no knowledge of an alternative answer to this puzzle, but perhaps another member's answer may be enlightening if there is. Technically speaking, you could have denoted the bags from 1 to 100 and gone through a similar process as above, but the method is still the same, so I wouldn't treat it as a new answer.</p>
<p>If our electronic scale is replaced by a two-pan balance scale, I don't believe it would be possible to answer this puzzle with only one measurement of weight. But again, perhaps another answer may be enlightening on that.</p>
| <p>For #1: imagine for a moment that all the coins are fake. If we took 0 coins from bag 0, 1 coin from bag 1, 2 coins from bag 2... we'd have $99\times100/2=4,950$ coins, and those 4,950 coins would weigh a total of 4,950 grams. But now, say that bag 25 were the one with real coins that are slightly heavier: 0.01 grams heavier, in fact. So the total weight of the coins is $W=4950+0.01N$, where $N$ is the number of the bag with the real coins. But -- we have the weight, not the bag number. So let's <strong>invert</strong> the equation: we want to find N given W, not the other way around.</p>
<p>$$\begin{align}W&=4950+0.01N\\W-4950&=0.01N\\100(W-4950)&=N\end{align}$$</p>
<p>For #2, aside from renumbering the bags, there isn't a different way to do this; no matter what, we have to have a different number coming from the scale for each different possible result.</p>
<p>For #3, you need $\lceil\log_3k\rceil$ weighings to discover the odd coin out; for 100 coins, that's 5 weighings: the first splits the coins into groups of (up to) 34; the second into groups of (up to) 12; the third into groups of (up to) 4; the fourth into groups of (up to) 2; the fifth finds it guaranteed.</p>
<p>Why $\lceil\log_3k\rceil$? Each use of the balance scales actually compares three different groups of coins: the one on the left scale, the one on the right scale, and the one not on the scale at all. If one of the two groups on the scale is heavier, then the gold coin is in that group; if neither, then the gold coin is in the group not on the scale. Thus, each weighing can distinguish between 3 states, and $n$ weighings can distinguish between $3^n$ states. We need an integer solution to $3^n\ge k$, thus $n\ge\log_3k$, thus $n=\lceil\log_3k\rceil$.</p>
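<p>The $\lceil\log_3k\rceil$ count is easy to reproduce programmatically; here is a minimal Python sketch that tracks the worst-case group size after each weighing:</p>

<pre><code>import math

def weighings(k):
    rounds = []
    while k > 1:
        k = math.ceil(k / 3)      # worst case: the odd coin is in the largest third
        rounds.append(k)
    return rounds

print(weighings(100))             # [34, 12, 4, 2, 1] -> 5 weighings
</code></pre>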
|
probability | <p>What's the difference between a <em>probability density function</em> and a <em>probability distribution function</em>? </p>
| <p><strong>Distribution Function</strong></p>
<ol>
<li>The probability distribution function / probability function has an ambiguous definition. It may refer to:
<ul>
<li>Probability density function (PDF) </li>
<li>Cumulative distribution function (CDF)</li>
<li>or probability mass function (PMF) (statement from Wikipedia)</li>
</ul></li>
<li>But what is certain is:
<ul>
<li>Discrete case: Probability Mass Function (PMF)</li>
<li>Continuous case: Probability Density Function (PDF)</li>
<li>Both cases: Cumulative distribution function (CDF)</li>
</ul></li>
<li>Probability at a certain <span class="math-container">$x$</span> value, <span class="math-container">$P(X = x)$</span>, can be directly obtained from:
<ul>
<li>the PMF in the discrete case</li>
<li>In the continuous case, <span class="math-container">$P(X = x) = 0$</span> at any single point; the PDF gives the probability <em>density</em> at <span class="math-container">$x$</span>, and actual probabilities come from integrating it over an interval</li>
</ul></li>
<li>Probability for values up to <span class="math-container">$x$</span>, <span class="math-container">$P(X \le x)$</span>, or probability for values within a range from <span class="math-container">$a$</span> to <span class="math-container">$b$</span>, <span class="math-container">$P(a < X \le b)$</span>, can be directly obtained from:
<ul>
<li>CDF for both discrete / continuous case</li>
</ul></li>
<li>The term "distribution function" usually refers to the CDF or cumulative frequency function (see <a href="http://mathworld.wolfram.com/DistributionFunction.html" rel="noreferrer">this</a>)</li>
</ol>
<p><strong>In terms of Acquisition and Plot Generation Method</strong></p>
<ol>
<li>Collected data appear as discrete when:
<ul>
<li>The measurement of a subject is naturally of a discrete type, such as numbers resulting from dice rolls or counts of people.</li>
<li>The measurement is digitized machine data, which has no intermediate values between quantized levels due to the sampling process.</li>
<li>In the latter case, the higher the resolution, the closer the measurement is to an analog/continuous signal/variable.</li>
</ul></li>
<li>Way to generate a PMF from discrete data:
<ul>
<li>Plot a histogram of the data for all the <span class="math-container">$x$</span>'s; the <span class="math-container">$y$</span>-axis is the frequency or count at every <span class="math-container">$x$</span>.</li>
<li>Scale the <span class="math-container">$y$</span>-axis by dividing by the total number of data collected (the data size) <span class="math-container">$\longrightarrow$</span> and this is called the PMF.</li>
</ul></li>
<li>Way to generate a PDF from discrete / continuous data:
<ul>
<li>Find a continuous equation that models the collected data, say the normal distribution equation.</li>
<li>Calculate the parameters required in the equation from the collected data. For example, the parameters for the normal distribution equation are the mean and standard deviation. Calculate them from the collected data.</li>
<li>Based on the parameters, plot the equation over continuous <span class="math-container">$x$</span>-values <span class="math-container">$\longrightarrow$</span> that is called the PDF.</li>
</ul></li>
<li>How to generate a CDF:
<ul>
<li>In the discrete case, the CDF accumulates the <span class="math-container">$y$</span> values of the PMF at each discrete <span class="math-container">$x$</span> and at all values less than <span class="math-container">$x$</span>. Repeating this for every <span class="math-container">$x$</span> gives a monotonically increasing plot that reaches <span class="math-container">$1$</span> at the last <span class="math-container">$x$</span> <span class="math-container">$\longrightarrow$</span> this is called the discrete CDF (see the sketch after this list).</li>
<li>In the continuous case, integrate the PDF over <span class="math-container">$x$</span>; the result is a continuous CDF.</li>
</ul></li>
</ol>
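<p>Here is a minimal Python sketch of steps 2 and 4 above, using made-up die-roll data (numpy is assumed to be available):</p>

<pre><code>import numpy as np

rolls = np.random.randint(1, 7, size=1000)       # made-up discrete data: die rolls
values, counts = np.unique(rolls, return_counts=True)
pmf = counts / counts.sum()                      # step 2: scaled histogram
cdf = np.cumsum(pmf)                             # step 4: accumulate the PMF
print(dict(zip(values, np.round(pmf, 3))))
print(np.round(cdf, 3))                          # monotone, ends at 1.0
</code></pre>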
<p><strong>Why PMF, PDF and CDF?</strong></p>
<ol>
<li>PMF is preferred when
<ul>
<li>Probability at every <span class="math-container">$x$</span> value is the quantity of interest. This makes sense when studying discrete data - such as when we are interested in the probability of getting a certain number from a dice roll.</li>
</ul></li>
<li>PDF is preferred when
<ul>
<li>We wish to model the collected data with a continuous function, using a few parameters such as the mean to estimate the population distribution.</li>
</ul></li>
<li>CDF is preferred when
<ul>
<li>Cumulative probability over a range is the point of interest. </li>
<li>Especially in the case of continuous data, the CDF makes much more sense than the PDF - e.g., the probability of students' height being less than <span class="math-container">$170$</span> cm (CDF) is much more informative than the density at exactly <span class="math-container">$170$</span> cm (PDF).</li>
</ul></li>
</ol>
| <p>The relation between the probability density function <span class="math-container">$f$</span> and the cumulative distribution function <span class="math-container">$F$</span> is...</p>
<ul>
<li><p>if <span class="math-container">$f$</span> is discrete:
<span class="math-container">$$
F(k) = \sum_{i \le k} f(i)
$$</span></p></li>
<li><p>if <span class="math-container">$f$</span> is continuous:
<span class="math-container">$$
F(x) = \int_{y \le x} f(y)\,dy
$$</span></p></li>
</ul>
|
combinatorics | <p>Let $\varphi(n)$ be Euler's totient function, the number of positive integers less than or equal to $n$ and relatively prime to $n$. </p>
<p>Challenge: Prove</p>
<p>$$\sum_{k=1}^n \left\lfloor \frac{n}{k} \right\rfloor \varphi(k) = \frac{n(n+1)}{2}.$$
</p>
<p>I have two proofs, one of which is partially combinatorial. </p>
<p>I'm posing this problem partly because I think some folks on this site would be interested in working on it and partly because I would like to see a purely combinatorial proof. (But please post any proofs; I would be interested in noncombinatorial ones, too. I've learned a lot on this site by reading alternative proofs of results I already know.)</p>
<p>I'll wait a few days to give others a chance to respond before posting my proofs.</p>
<p>EDIT: The two proofs in full are now given among the answers.</p>
| <p>One approach is to use the formula $\displaystyle \sum_{d \mid k} \varphi(d) = k$ </p>
<p>So we have that $\displaystyle \sum_{k=1}^{n} \sum_{d \mid k} \varphi(d) = n(n+1)/2$</p>
<p>Exchanging the order of summation we see that the $\displaystyle \varphi(d)$ term appears $\displaystyle \left\lfloor \frac{n}{d} \right\rfloor$ times</p>
<p>and thus</p>
<p>$\displaystyle \sum_{d=1}^{n} \left\lfloor \frac{n}{d} \right\rfloor \varphi(d) = n(n+1)/2$</p>
<p>Or in other words, if we have the $n \times n$ matrix $A$ such that</p>
<p>$\displaystyle A[i,j] = \varphi(j)$ if $j \mid i$ and $0$ otherwise.</p>
<p>The sum of elements in row $i$ is $i$.</p>
<p>The sum of elements in column $j$ is $\displaystyle \left\lfloor \frac{n}{j} \right\rfloor \varphi(j)$ and the identity just says the total sum by summing the rows is same as the total sum by summing the columns.</p>
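<p>A quick numerical check of the identity, with a deliberately naive $\varphi$ (a sketch, not an efficient implementation):</p>

<pre><code>from math import gcd

def phi(k):                     # naive totient, fine for small k
    return sum(1 for j in range(1, k + 1) if gcd(j, k) == 1)

for n in (1, 10, 50, 137):
    lhs = sum((n // k) * phi(k) for k in range(1, n + 1))
    assert lhs == n * (n + 1) // 2
</code></pre>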
| <p>In case anyone is interested, here are the full versions of my two proofs. (I constructed the combinatorial one from my original partially combinatorial one after I posted the question.)</p>
<p><HR></p>
<p><strong>The non-combinatorial proof</strong> </p>
<p>As Derek Jennings observes, $\lfloor \frac{n+1}{k} \rfloor - \lfloor \frac{n}{k} \rfloor$ is $1$ if $k|(n+1)$ and $0$ otherwise. Thus, if $$f(n) = \sum_{k=1}^n \left\lfloor\frac{n}{k} \right\rfloor \varphi (k),$$
then $$\Delta f(n) = f(n+1) - f(n) = \sum_{k|(n+1)} \varphi(k) = n+1,$$
where the last equality follows from the well-known formula Aryabhata cites.</p>
<p>Then
$$\sum_{k=1}^n \left\lfloor\frac{n}{k} \right\rfloor \varphi (k) = f(n) = \sum_{k=0}^{n-1} \Delta f(k) = \sum_{k=0}^{n-1} (k+1) = \frac{n(n+1)}{2}.$$</p>
<p><HR></p>
<p><strong>The combinatorial proof</strong></p>
<p>Both sides count the number of fractions (reducible or irreducible) in the interval (0,1] with denominator $n$ or smaller. </p>
<p>For the right side, the number of ways to pick a numerator and a denominator is the number of ways to choose two numbers with replacement from the set $\{1, 2, \ldots, n\}$. This is known to be
$$\binom{n+2-1}{2} = \frac{n(n+1)}{2}.$$</p>
<p>Now for the left side. The number of irreducible fractions in $(0,1]$ with denominator $k$ is equal to the number of positive integers less than or equal to $k$ and relatively prime to $k$; i.e., $\varphi(k)$. Then, for a given irreducible fraction $\frac{a}{k}$, there are $\left\lfloor \frac{n}{k} \right\rfloor$ total fractions with denominators $n$ or smaller in its equivalence class. (For example, if $n = 20$ and $\frac{a}{k} = \frac{1}{6}$, then the fractions $\frac{1}{6}, \frac{2}{12}$, and $\frac{3}{18}$ are those in its equivalence class.) Thus the sum
$$\sum_{k=1}^n \left\lfloor\frac{n}{k} \right\rfloor \varphi (k)$$
also gives the desired quantity.</p>
|
logic | <p>You are a student, assigned to work in the cafeteria today, and it is your duty to divide the available food between all students. The food today is a sausage of 1m length, and you need to cut it into as many pieces as students come for lunch, including yourself.</p>
<p>The problem is, the knife is operated by the rotating door through which the students enter, so every time a student comes in, the knife comes down and you place the cut. There is no way for you to know if more students will come or not, so after each cut, the sausage should be cut into pieces of approximately equal length. </p>
<p>So here is the question - is it possible to place the cuts in a manner that ensures the ratio of the largest and the smallest piece is always below 2?</p>
<p>And if so, what is the smallest possible ratio?</p>
<p>Example 1 (unit is cm):</p>
<ul>
<li>1st cut: 50 : 50 ratio: 1 </li>
<li>2nd cut: 50 : 25 : 25 ratio: 2 - bad</li>
</ul>
<p>Example 2</p>
<ul>
<li>1st cut: 40 : 60 ratio: 1.5</li>
<li>2nd cut: 40 : 30 : 30 ratio: 1.33</li>
<li>3rd cut: 20 : 20 : 30 : 30 ratio: 1.5</li>
<li>4th cut: 20 : 20 : 30 : 15 : 15 ratio: 2 - bad</li>
</ul>
<p>Sorry for the awful analogy, I think this is a math problem but I have no real idea how to formulate this in a proper mathematical way.</p>
| <p>TLDR: $a_n=\log_2(1+1/n)$ works, and is the only smooth solution.</p>
<p>This problem hints at a deeper mathematical question, as follows. As has been observed by Pongrácz, there is a great deal of possible variation in solutions to this problem. I would like to find a "best" solution, where the sequence of pieces is somehow as evenly distributed as possible, given the constraints.</p>
<p>Let us fix the following strategy: at stage $n$ there are $n$ pieces, of lengths $a_n,\dots,a_{2n-1}$, ordered in decreasing length. You cut $a_n$ into two pieces, forming $a_{2n}$ and $a_{2n+1}$. We have the following constraints:</p>
<p>$$a_1=1\qquad a_n=a_{2n}+a_{2n+1}\qquad a_n\ge a_{n+1}\qquad a_n<2a_{2n-1}$$</p>
<p>I would like to find a nice function $f(x)$ that interpolates all these $a_n$s (and possibly generalizes the relation $a_n=a_{2n}+a_{2n+1}$ as well).</p>
<p>First, it is clear that the only degree of freedom is in the choice of cut, which is to say if we take any sequence $b_n\in (1/2,1)$ then we can define $a_{2n}=a_nb_n$ and $a_{2n+1}=a_n(1-b_n)$, and this will completely define the sequence $a_n$.</p>
<p>Now we should expect that $a_n$ is asymptotic to $1/n$, since it drops by a factor of $2$ every time $n$ doubles. Thus one regularity condition we can impose is that $na_n$ converges. If we consider the "baseline solution" where every cut is at $1/2$, producing the sequence</p>
<p>$$1,\frac12,\frac12,\frac14,\frac14,\frac14,\frac14,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\dots$$
(which is not technically a solution because of the strict inequality, but is on the boundary of solutions), then we see that $na_n$ in fact does <em>not</em> tend to a limit - it varies between $1$ and $2$.</p>
<p>If we average this exponentially, by considering the function $g(x)=2^xa_{\lfloor 2^x\rfloor}$, then we get a function which gets closer and closer to being periodic with period $1$. That is, there is a function $h(x):[0,1]\to\Bbb R$ such that $g(x+n)\to h(x)$, and we need this function to be constant if we want $g(x)$ itself to have a limit.</p>
<p>There is a very direct relation between $h(x)$ and the $b_n$s. If we increase $b_1$ while leaving everything else the same, then $h(x)$ will be scaled up on $[0,\log_2 (3/2)]$ and scaled down on $[\log_2 (3/2),1]$. None of the other $b_i$'s control this left-right balance - they make $h(x)$ larger in some subregion of one or the other of these intervals only, but preserving $\int_0^{\log_2(3/2)}h(x)\,dx$ and $\int_{\log_2(3/2)}^1h(x)\,dx$.</p>
<p>Thus, to keep these balanced we should let $b_1=\log_2(3/2)$. More generally, each $b_n$ controls the balance of $h$ on the intervals $[\log_2(2n),\log_2(2n+1)]$ and $[\log_2(2n+1),\log_2(2n+2)]$ (reduced$\bmod 1$), so we must set them to
$$b_n=\frac{\log_2(2n+1)-\log_2(2n)}{\log_2(2n+2)-\log_2(2n)}=\frac{\log(1+1/2n)}{\log(1+1/n)}.$$</p>
<p>When we do this, a miracle occurs, and $a_n=\log_2(1+1/n)$ becomes analytically solvable:
\begin{align}
a_1&=\log_2(1+1/1)=1\\
a_{2n}+a_{2n+1}&=\log_2\Big(1+\frac1{2n}\Big)+\log_2\Big(1+\frac1{2n+1}\Big)\\
&=\log_2\left[\Big(1+\frac1{2n}\Big)\Big(1+\frac1{2n+1}\Big)\right]\\
&=\log_2\left[1+\frac{2n+(2n+1)+1}{2n(2n+1)}\right]\\
&=\log_2\left[1+\frac1n\right]=a_n.
\end{align}</p>
<p>As a bonus, we obviously have that the $a_n$ sequence is decreasing, and if $m<2n$, then
\begin{align}
2a_m&=2\log_2\Big(1+\frac1m\Big)=\log_2\Big(1+\frac1m\Big)^2=\log_2\Big(1+\frac2m+\frac1{m^2}\Big)\\
&\ge\log_2\Big(1+\frac2m\Big)>\log_2\Big(1+\frac2{2n}\Big)=a_n,
\end{align}</p>
<p>so this is indeed a proper solution, and we have also attained our smoothness goal — $na_n$ converges, to $\frac 1{\log 2}=\log_2e$. It is also worth noting that the difference between the largest and smallest piece has limit exactly $2$, which validates Henning Makholm's observation that you can't do better than $2$ in the limit.</p>
<p>It looks like this (rounded to the nearest hundred, so the numbers may not add to 100 exactly):</p>
<ul>
<li>$58:42$, ratio = $1.41$</li>
<li>$42:32:26$, ratio = $1.58$</li>
<li>$32:26:22:19$, ratio = $1.67$</li>
<li>$26:22:19:17:15$, ratio = $1.73$</li>
<li>$22:19:17:15:14:13$, ratio = $1.77$</li>
</ul>
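<p>This strategy is easy to simulate; here is a minimal Python sketch confirming that the max/min ratio stays below $2$ while creeping toward it:</p>

<pre><code>import math

pieces = [1.0]
for n in range(1, 1000):
    pieces.sort(reverse=True)
    big = pieces.pop(0)                                # the largest piece is a_n
    bn = math.log2(1 + 1/(2*n)) / math.log2(1 + 1/n)   # the cut fraction b_n
    pieces += [bn * big, (1 - bn) * big]
print(max(pieces) / min(pieces))                       # ~1.9985, below 2
</code></pre>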
<p>If you are working with a sequence of points treated$\bmod 1$, where the intervals between the points are the "sausages", then this sequence of segments is generated by $p_n=\log_2(2n+1)\bmod 1$. The result is beautifully uniform but with a noticeable sweep edge:</p>
<p> <a href="https://i.sstatic.net/SCaaE.gif" rel="noreferrer"><img src="https://i.sstatic.net/SCaaE.gif" alt="sausages"></a></p>
<p>A more concrete optimality condition that picks this solution uniquely is the following: we require that for any fraction $0\le x\le 1$, the sausage at the $x$ position (give or take a sausage) in the list, sorted in decreasing order, should be at most $c(x)$ times smaller than the largest at all times. This solution achieves $c(x)=x+1$ for all $0\le x\le 1$, and no solution can do better than that (in the limit) for any $x$.</p>
| <p>YES, it is possible!</p>
<p>You mustn't cut a piece in half, because eventually you have to cut one of them, and then you violate the requirement.
So in fact, you must never have two equal parts.
Make the first cut so that the condition is not violated, say $60:40$. </p>
<p>From now on, assume that the ratio of biggest over smallest is strictly less than $2$ in a given round, and no two pieces are equal. (This holds for the $60:40$ cut.)
We construct a good cut that maintains this property.</p>
<p>So at the next turn, pick the biggest piece, and cut it in two non-equal pieces in an $a:b$ ratio, but very close to equal (so $a/b\approx 1$). All you have to make sure of is that </p>
<ul>
<li>$a/b$ is so close to $1$ that the two new pieces are both smaller that the smallest piece in the last round. </li>
<li>$a/b$ is so close to $1$ that the smaller piece is bigger than half of the second biggest in the last round (which is going to become the biggest piece in this round). </li>
</ul>
<p>Then the condition is preserved.
For example, from $60:40$ you can move to $25:35:40$, then cut the forty to obtain $19:21:25:35$, etc.</p>
|
probability | <p>I've been reading articles on pseudo-randomness in computing when generating a random value. They all state that the generated numbers are pseudo-random because we know all the factors that influence the outcome, whereas the roll of a die is considered truly random. But I'm wondering why. Don't we know all the physical forces that influence the die when it's being rolled? Or are there too many of them?</p>
| <p>A die roll is only considered random if the external factors are not controlled. </p>
<p><strong>Practiced dice cheats can roll numbers they want to roll</strong>. So talk about nerves and blood vessels and quantum effects is just wrong. These cheats control the meaningful factors such that they influence the outcome of the roll, predictably. Even if someone only increases their chance of rolling a certain number by a few percentage points, that's huge in gambling terms.</p>
<p>That's why there are rules on how the dice must be rolled at casinos, and inventions such as the dice tower:</p>
<p><a href="https://i.sstatic.net/yj5sE.jpg" rel="noreferrer"><img src="https://i.sstatic.net/yj5sE.jpg" alt="Dice Tower"></a>. </p>
| <p>This has to do with <a href="https://en.wikipedia.org/wiki/Chaos_theory" rel="nofollow noreferrer">chaos theory</a>: the tiniest variation of the initial conditions will cause an enormously different output. For a physical system like a die toss:</p>
<ul>
<li><p>even from a classical point of view, it is very unlikely that you <em>can</em> know the <em>very exact</em> initial conditions of the throw. And of the environment: the "floor" distance and surface characteristics (think of the abrupt effect of each bounce, that will be very different depending on the most infinitesimal variation of the impact parameters), the air conditions (thermodynamic and kinematic)...!</p></li>
<li><p>this becomes actually impossible if you include the <a href="https://en.wikipedia.org/wiki/Uncertainty_principle" rel="nofollow noreferrer">uncertainty principle</a> (that prevents you from knowing the exact value of certain pairs of variables at the same time, e.g. position and momentum, but see below);</p></li>
<li><p>it would be impossible from a practical point of view to propagate these initial conditions without introducing round-off errors, that due to the chaotic nature of the problem would make the result completely unreliable;</p></li>
<li><p>even if you could perform exact calculations, there is still the quantum indeterminacy (again, see below) that affects the development of the status of the die: at each bounce, even when air molecules brake the die rotation, it is impossible even theoretically to predict what will happen in the next instant with absolute certainty.</p></li>
</ul>
<p>As pointed out in many comments and with many downvotes, the contributions to the randomness of the roll from quantum effects are <em>insignificant</em> from any practical point of view. Nevertheless I do want to mention them since they provide a theoretical watertight border against a deterministic idea of the phenomenon.</p>
<p>Taking care of another possible correct objection, I have to underline that my answer holds for a fair throw. If you think of a die "tossed" from, say, $1\,\mathrm{mm}$ above a horizontal flat floor, with negligible initial velocity and a face parallel to the ground, it is obvious that you can predict the outcome with practical certainty. Moving progressively away from this limit situation, you have many halfway toss styles that can influence the probability distribution of the outcomes, if only by a few percent. I'm referring to the opposite limit, when the system can be considered <a href="https://en.wikipedia.org/wiki/Ergodic_hypothesis" rel="nofollow noreferrer">ergodic</a>. When I heard this term applied to the die, maybe not $100\%$ properly, it was with the meaning that the system "scans" over time all the possible outcomes many many times, with equal probability and with no recognizable pattern. Add the fact that a fair throw starts with a random grip of the die, and you really have equal chances for all the outcomes.</p>
|
matrices | <p>A method called "Robust PCA" solves the matrix decomposition problem</p>
<p>$$L^*, S^* = \arg \min_{L, S} \|L\|_* + \|S\|_1 \quad \text{s.t. } L + S = X$$</p>
<p>as a surrogate for the actual problem</p>
<p>$$L^*, S^* = \arg \min_{L, S} rank(L) + \|S\|_0 \quad \text{s.t. } L + S = X,$$ i.e. the actual goal is to decompose the data matrix $X$ into a low-rank signal matrix $L$ and a sparse noise matrix $S$. In this context: <strong>why is the nuclear norm a good approximation for the rank of a matrix?</strong> I can think of matrices with low nuclear norm but high rank and vice-versa. Is there any intuition one can appeal to?</p>
| <p>Why does <a href="http://en.wikipedia.org/wiki/Compressed_sensing">compressed sensing</a> work? Because the $\ell_1$ ball in high dimensions is extremely "pointy" -- the extreme values of a linear function on this ball are very likely to be attained on the faces of low dimensions, those that consist of sparse vectors. When applied to matrices, sparseness of the set of singular values means low rank, as @mrig wrote before me. </p>
| <p>To be accurate, it has been shown that the $\ell_1$ norm is the <em>convex envelope</em> of the $\| \cdot \|_0$ pseudo-norm (on the $\ell_\infty$ unit ball), while the nuclear norm is the <em>convex envelope</em> of the rank (on the unit ball of the spectral norm).</p>
<p>As a reminder, the convex envelope is the tightest convex surrogate of a function. An important property is that a function and its convex envelope have the same <strong>global minimizer</strong>.</p>
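<p>For concreteness, here is a minimal Python sketch of one standard algorithm from the robust-PCA literature for this convex program (an augmented-Lagrangian iteration, often called principal component pursuit); the choices of $\lambda$ and $\mu$ follow common heuristics and are assumptions rather than tuned values:</p>

<pre><code>import numpy as np

def rpca(X, n_iter=200):
    # minimize ||L||_* + lam * ||S||_1  subject to  L + S = X  (sketch, not tuned)
    m, n = X.shape
    lam = 1 / np.sqrt(max(m, n))              # heuristic weight
    mu = m * n / (4 * np.abs(X).sum())        # heuristic step size
    L, S, Y = (np.zeros_like(X) for _ in range(3))
    soft = lambda M, t: np.sign(M) * np.maximum(np.abs(M) - t, 0)
    for _ in range(n_iter):
        U, sig, Vt = np.linalg.svd(X - S + Y / mu, full_matrices=False)
        L = U @ np.diag(soft(sig, 1 / mu)) @ Vt   # singular value thresholding
        S = soft(X - L + Y / mu, lam / mu)        # entrywise soft thresholding
        Y = Y + mu * (X - L - S)                  # dual update for L + S = X
    return L, S
</code></pre>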
|
probability | <p>On a "bottom" disk of area <span class="math-container">$A$</span>, we place "top" disks of areas <span class="math-container">$1,\frac12,\frac13,\cdots$</span> such that the centre of each top disk is an independent uniformly random point on the bottom disk.</p>
<blockquote>
<p>Find the maximum value of <span class="math-container">$A$</span> such that the bottom disk will be completely covered by the top disks with probability <span class="math-container">$1$</span>, or show that there is no maximum.</p>
</blockquote>
<p>The harmonic series diverges, but the problem here is that the top disks overlap, so it is not clear to me whether a bottom disk of a given area will be completely covered by the top disks, with probability <span class="math-container">$1$</span>.</p>
<p>I made a <a href="https://www.desmos.com/calculator/d5lfcluwso?lang=zh-CN" rel="noreferrer">desmos graph</a> to help visualise the disks.</p>
<p>(This question was inspired by a <a href="https://math.stackexchange.com/q/176383/398708">question</a> about rain droplets falling on a table.)</p>
| <p><span class="math-container">$\newcommand{\bB}{\mathbf{B}}$</span>
<span class="math-container">$\newcommand{\PP}{\mathbb{P}}$</span>
<span class="math-container">$\newcommand{\EE}{\mathbb{E}}$</span>
<span class="math-container">$\newcommand{\Var}{\text{Var}}$</span>
<span class="math-container">$\newcommand{\Cov}{\text{Cov}}$</span></p>
<p><strong>Update</strong>: It looks like the case <span class="math-container">$A > 1$</span> is essentially contained in Proposition 11.5 of Kahane's book <em>Some Random Series of Functions</em>. The proof given there is much more beautiful, utilizing the second moment method in a very simplistic yet ingenious way.</p>
<p>More specifically, let <span class="math-container">$S_N$</span> be the area remaining after placing the first <span class="math-container">$N$</span> balls. Then Kahane used the inequality
<span class="math-container">$$\PP(S_N \neq 0) \geq \frac{\EE[S_N]^2}{\EE[S_N^2]}.$$</span>
One can compute that <span class="math-container">$\EE[S_N]$</span> grows like <span class="math-container">$N^{-1/A}$</span>, while <span class="math-container">$\EE[S_N^2]$</span> grows like <span class="math-container">$N^{-2/A}$</span>, i.e. like <span class="math-container">$\EE[S_N]^2$</span>. So this shows that <span class="math-container">$\PP(S_N \neq 0)$</span> is lower bounded by a constant.</p>
<hr />
<p>By <span class="math-container">$\odot(p, r)$</span>, I mean the closed disk centered at <span class="math-container">$p$</span> of radius <span class="math-container">$r$</span>.</p>
<p>I believe we can argue that when <span class="math-container">$A < 1$</span> such covering happens with probability <span class="math-container">$1$</span>, and when <span class="math-container">$A > 1$</span> such covering happens with probability less than <span class="math-container">$1$</span>. I don't have an answer when <span class="math-container">$A = 1$</span>.</p>
<p>Throughout, let <span class="math-container">$T_1, T_2, \cdots$</span> denote the "top disk", with <span class="math-container">$T_i = \odot(t_i, \frac{1}{\sqrt{\pi i}})$</span>. Let <span class="math-container">$B$</span> denote the "bottom disk". Assume <span class="math-container">$B$</span> is centered at the origin <span class="math-container">$0$</span>.</p>
<hr />
<p>The part when <span class="math-container">$A < 1$</span> has a relatively straightforward proof. The idea is to discretize Dan's original answer.</p>
<p>Let <span class="math-container">$E_N$</span> denote the event that the first <span class="math-container">$N$</span> circles cover the bottom disk. It suffices to show that
<span class="math-container">$$\lim_{N \to \infty} \PP(E_N) = 1.$$</span>
To prove this, we take an <strong>optimal <span class="math-container">$(\sqrt{4\pi N})^{-1}$</span>-net</strong> inside the bottom disk, which is defined as the largest set of points <span class="math-container">$\bB = \{b_1, \cdots, b_k\}$</span> inside the bottom disk that are at distance at least <span class="math-container">$\frac{1}{\sqrt{4\pi N}}$</span> from each other. Here are two facts about such nets</p>
<ol>
<li><p>The disks <span class="math-container">$B_i = \odot(b_i, \frac{1}{\sqrt{4\pi N}})$</span> cover the entire bottom disk.</p>
</li>
<li><p>The disks <span class="math-container">$B_i' = \odot(b_i, \frac{1}{\sqrt{16\pi N}})$</span> are disjoint.</p>
</li>
</ol>
<p>Let <span class="math-container">$E_{iN}$</span> denote the event that circle <span class="math-container">$B_i$</span> is completely covered. To carry out the analysis below, we use a trick known as dyadic partitioning: for each <span class="math-container">$0 \leq k \leq \log_2 N$</span>, let <span class="math-container">$\bB_k$</span> denote the points in <span class="math-container">$\bB$</span> at distance between <span class="math-container">$[2^{-k-1}, 2^{-k}]$</span> from the boundary of the base circle, with <span class="math-container">$\bB_{\log_2 \sqrt{N}}$</span> also including all the points in <span class="math-container">$\bB$</span> at distance less than <span class="math-container">$\sqrt{N}^{-1}$</span> from the boundary of the base circle.</p>
<p>Let <span class="math-container">$b_i$</span> lies in <span class="math-container">$\bB_k$</span>. Note that for each <span class="math-container">$1 \leq j \leq N$</span>, the top disk <span class="math-container">$T_j$</span> covers <span class="math-container">$B_i$</span> completely iff <span class="math-container">$t_j$</span> lies in <span class="math-container">$C_{ij} = \odot(b_i, \frac{1}{\sqrt{\pi j}} - \frac{1}{\sqrt{4\pi N}})$</span>.</p>
<p>We now need to understand the intersection between <span class="math-container">$C_{ij}$</span> and the bottom disk <span class="math-container">$B$</span>. Note that <span class="math-container">$C_{ij}$</span> has radius at most <span class="math-container">$\frac{1}{\sqrt{\pi j}}$</span>, so when <span class="math-container">$j$</span> is larger than a constant, its intersection with the base circle is at least <span class="math-container">$1/2 - O(j^{-1})$</span> the area of the whole circle. Furthermore, when <span class="math-container">$j \geq 2^{2k}$</span>, the entire <span class="math-container">$C_{ij}$</span> is contained in the base circle. We can write this as
<span class="math-container">$$\text{Area}(C_{ij} \cap B) \geq \begin{cases}
(1/2 - O(j^{-1})) \left(\frac{1}{\sqrt{j}} - \frac{1}{\sqrt{4N}}\right)^2, j < 2^{2k} \\
\left(\frac{1}{\sqrt{j}} - \frac{1}{\sqrt{4N}}\right)^2, j \geq 2^{2k}
\end{cases}.$$</span>
So we conclude that
<span class="math-container">$$\mathbb{P}(\overline{E_{iN}}) \leq \prod_{j = 1}^N (1 - \PP(B_i \subset T_j)) = \prod_{j = 1}^N \frac{A - \text{Area}(C_{ij} \cap B)}{A} \leq \exp\left(- A^{-1}\sum_{j = 1}^N \text{Area}(C_{ij} \cap B)\right).$$</span>
We analyze the sum as follows
<span class="math-container">$$\sum_{j = 1}^N \text{Area}(C_{ij} \cap B) \geq \sum_{j = 1}^{2^{2k}} (1/2 - O(j^{-1})) \left(\frac{1}{\sqrt{j}} - \frac{1}{\sqrt{4N}}\right)^2 + \sum_{j = 2^{2k} + 1}^N \left(\frac{1}{\sqrt{j}} - \frac{1}{\sqrt{4N}}\right)^2.$$</span>
We need to understand this asymptotically. Fortunately, it is not too hard to check that
<span class="math-container">$$\sum_{j = 1}^{2^{2k}} (1/2 - O(j^{-1})) \left(\frac{1}{\sqrt{j}} - \frac{1}{\sqrt{4N}}\right)^2 \geq \frac{1}{2} \log(2^{2k}) - O(1).$$</span>
<span class="math-container">$$\sum_{j = 2^{2k} + 1}^N \left(\frac{1}{\sqrt{j}} - \frac{1}{\sqrt{4N}}\right)^2 \geq \log(N / 2^{2k}) - O(1).$$</span>
So, substituting this back in, we conclude that
<span class="math-container">$$\mathbb{P}(\overline{E_{iN}}) = O\left(\frac{2^{k / A}}{N^{1/A}}\right).$$</span>
We now use the union bound on these events. Using dyadic summation, we need to estimate
<span class="math-container">$$\sum_{b_i \in \bB_k} \mathbb{P}(\overline{E_{iN}}) = O\left(|\bB_k| \frac{2^{k / A}}{N^{1/A}} \right).$$</span>
We can estimate <span class="math-container">$|\bB_k|$</span> using the second property of nets above. The circles <span class="math-container">$B_i' = \odot(b_i, \frac{1}{\sqrt{16\pi N}})$</span> are disjoint, and they must be contained in a ring of width <span class="math-container">$O(2^{-k})$</span> around the boundary of <span class="math-container">$B$</span>. So we conclude that
<span class="math-container">$$|\bB_k| = O(N 2^{-k}).$$</span>
Thus, we conclude that
<span class="math-container">$$\sum_{b_i \in \bB_k} \mathbb{P}(\overline{E_{iN}}) = O\left(\frac{2^{k(1/A - 1)}}{N^{(1/A - 1)}} \right).$$</span>
Finally, we have
<span class="math-container">$$\mathbb{P}(\overline{E_{N}}) \leq \sum_{k = 0}^{\log_2 \sqrt{N}} \sum_{b_i \in \bB_k} \mathbb{P}(\overline{E_{iN}}) \leq O\left(\sum_{k = 0}^{\log_2 \sqrt{N}} \frac{2^{k(1/A - 1)}}{N^{(1/A - 1)}}\right).$$</span>
We find that each sum inside the big <span class="math-container">$O$</span> is <span class="math-container">$O(N^{-(1/A - 1) / 2})$</span>. So we conclude that
<span class="math-container">$$\mathbb{P}(\overline{E_{N}}) = O(\log N \cdot N^{-(1/A - 1) / 2}).$$</span>
Thus
<span class="math-container">$$\lim_{N \to \infty} \mathbb{P}(\overline{E_{N}}) = 0$$</span>
as desired.</p>
<hr />
<p>The part when <span class="math-container">$A > 1$</span> is more difficult. My idea is to show that with nonzero probability, after we have placed disks <span class="math-container">$T_1,T_2,\cdots,T_N$</span>, the uncovered region contains many disjoint, microscopic disks.</p>
<p>To make the rigorous, let <span class="math-container">$K = 10^{10}$</span> and <span class="math-container">$Q = K^{A / (A - 1)}$</span>. We consider the following event:</p>
<p><span class="math-container">$E_t$</span>: For each <span class="math-container">$Q \leq s \leq t$</span> the following holds. After we have placed <span class="math-container">$T_1, \cdots, T_{Q^s}$</span>, we can find <span class="math-container">$2^s$</span> closed disks of area <span class="math-container">$Q^{-s}$</span> inside <span class="math-container">$\odot(0, 0.1)$</span>, such that all of them are completely uncovered, and each pair of these disks are at a distance at least <span class="math-container">$2\pi^{-1/2}Q^{-s/2}$</span> apart from each other. Furthermore, if <span class="math-container">$U_s$</span> denotes the union of these disks, then <span class="math-container">$U_s \subset U_{s - 1}$</span>.</p>
<p>Then my main observation is</p>
<p><strong>Lemma</strong>: Assume <span class="math-container">$t \geq Q$</span>. Condition on the placement of disk <span class="math-container">$T_1, \cdots, T_{Q^t}$</span>, and suppose <span class="math-container">$E_t$</span> happens. Then the probability that <span class="math-container">$E_{t + 1}$</span> happens is at least <span class="math-container">$1 - \frac{Q}{2^t}$</span>.</p>
<p><strong>Proof</strong>: The main method we use to prove this is called the <strong>second moment method</strong>.</p>
<p>Let <span class="math-container">$B_1, \cdots, B_{2^t}$</span> be the <span class="math-container">$2^t$</span> disks of area <span class="math-container">$Q^{-t}$</span> inside <span class="math-container">$\odot(0, 0.1)$</span>, such that all of them are uncovered, and each pair of these disks are at a distance at least <span class="math-container">$2\pi^{-1/2}Q^{-t/2}$</span> apart from each other.</p>
<p>It is not hard to show that, in each <span class="math-container">$B_i$</span>, we can fit at least <span class="math-container">$R = \lceil{Q / 100\rceil}$</span> disks <span class="math-container">$B_{i1}, \cdots, B_{iR}$</span> inside <span class="math-container">$B_i$</span>, such that they have area <span class="math-container">$Q^{-t-1}$</span> each and are distance at least <span class="math-container">$2\pi^{-1/2}Q^{-(t + 1)/2}$</span> apart from each other. Let <span class="math-container">$b_{ij}$</span> be the center of <span class="math-container">$B_{ij}$</span>. Let <span class="math-container">$I_{ij}$</span> be <span class="math-container">$1$</span> if none of the disks <span class="math-container">$T_{Q^t + 1}, \cdots, T_{Q^{t + 1}}$</span> touch <span class="math-container">$B_{ij}$</span>, and <span class="math-container">$0$</span> otherwise.</p>
<p>We first compute the expectation of <span class="math-container">$I_{ij}$</span>. Note that for each <span class="math-container">$k \in [Q^t + 1,Q^{t + 1}]$</span>, <span class="math-container">$T_{k}$</span> touches <span class="math-container">$B_{ij}$</span> if and only if <span class="math-container">$t_k$</span> lies in a circle <span class="math-container">$C_{ijk}$</span> of radius <span class="math-container">$\pi^{-1/2}(k^{-1/2} + Q^{-(t + 1) / 2})$</span> centered at <span class="math-container">$b_{ij}$</span>. Note that we assumed that <span class="math-container">$B_{ij}$</span> are all far away from the boundary of <span class="math-container">$B$</span>. So we have
<span class="math-container">$$\PP(T_k \text{ touch }B_{ij}) = \frac{1}{A} \left(k^{-1/2} + Q^{-(t + 1) / 2}\right)^2.$$</span>
Thus, we have
<span class="math-container">$$\EE[I_{ij}] = \prod_{k = Q^t + 1}^{Q^{t + 1}}\left(1 - \frac{1}{A} \left(k^{-1/2} + Q^{-(t + 1) / 2}\right)^2\right).$$</span>
Before we simplify this, we compute the covariance of <span class="math-container">$I_{ij}$</span> and <span class="math-container">$I_{i'j'}$</span> when <span class="math-container">$i \neq i'$</span>. Note that by the separation condition on <span class="math-container">$B_i$</span> and <span class="math-container">$B_{i'}$</span>, the circles <span class="math-container">$C_{ijk}$</span> and <span class="math-container">$C_{i'j'k}$</span> are disjoint. So we have
<span class="math-container">$$\PP(T_k \text{ touch }B_{ij}\text{ or }B_{i'j'}) = \frac{2}{A} \left(k^{-1/2} + Q^{-(t + 1) / 2}\right)^2.$$</span>
Thus we have
<span class="math-container">$$\EE[I_{ij} I_{i'j'}] = \prod_{k = Q^t + 1}^{Q^{t + 1}}\left(1 - \frac{2}{A} \left(k^{-1/2} + Q^{-(t + 1) / 2}\right)^2\right).$$</span>
The crucial observation is that we can, using the relation <span class="math-container">$1 - 2x \leq (1 - x)^2$</span>, compute that
<span class="math-container">$$\EE[I_{ij} I_{i'j'}] \leq \EE[I_{ij}] \EE[I_{i'j'}].$$</span>
So we conclude that
<span class="math-container">$$\text{Cov}(I_{ij}, I_{i'j'}) \leq 0.$$</span>
In other words, if a <span class="math-container">$T_k$</span> does not touch <span class="math-container">$B_{ij}$</span>, it is more likely to touch <span class="math-container">$B_{i'j'}$</span>. This is crucially what makes the second moment argument work! We can now estimate the expectation <span class="math-container">$\EE[I_{ij}]$</span>. Recall
<span class="math-container">$$\EE[I_{ij}] = \prod_{k = Q^t + 1}^{Q^{t + 1}}\left(1 - \frac{1}{A} \left(k^{-1/2} + Q^{-(t + 1) / 2}\right)^2\right).$$</span>
Note <span class="math-container">$\left(k^{-1/2} + Q^{-(t + 1) / 2}\right)^2 = k^{-1} + 2 k^{-1/2}Q^{-(t + 1) / 2} + Q^{-(t + 1)}$</span>, so
<span class="math-container">$$\EE[I_{ij}] \geq \prod_{k = Q^t + 1}^{Q^{t + 1}}\left(1 - \frac{1}{A}(k^{-1} + 2 k^{-1/2}Q^{-(t + 1) / 2} + Q^{-(t + 1)})\right).$$</span>
Using the estimate <span class="math-container">$1 - x \geq e^{-x-x^2}$</span> when <span class="math-container">$x \leq 1/2$</span>, we get
<span class="math-container">$$\EE[I_{ij}] \geq \exp\left(-\sum_{k = Q^t + 1}^{Q^{t + 1}}\frac{1}{A}(k^{-1} + 2 k^{-1/2}Q^{-(t + 1) / 2} + Q^{-(t + 1)}) + 4k^{-2}\right).$$</span>
Now using familiar results about Harmonic series, we conclude that
<span class="math-container">$$\EE[I_{ij}] \geq \exp\left(-\frac{1}{A} \log Q - 10\right) = e^{-10} Q^{-1/A}.$$</span>
Now, let
<span class="math-container">$$X = \sum_{i, j} I_{ij}.$$</span>
There are <span class="math-container">$R 2^i$</span> terms in this sum. By linearity of expectations, we have
<span class="math-container">$$\EE[X] \geq e^{-10} Q^{-1/A} R 2^t \geq 100^{-1} e^{-10} Q^{1-1/A} 2^t \geq 2^{t + 2}.$$</span>
We note that
<span class="math-container">$$\Var[X] = \sum_{i,j,i',j'} \Cov[I_{ij}, I_{i'j'}].$$</span>
The vast majority of terms in this sum has <span class="math-container">$i \neq i'$</span>! We have
<span class="math-container">$$\sum_{i,j,j'} \Cov[I_{ij}, I_{ij'}] \leq \sum_{i,j,j'} \EE[I_{ij}] \leq R \EE[X].$$</span>
And
<span class="math-container">$$\sum_{i,j,i',j': i\neq i'} \Cov[I_{ij}, I_{i'j'}] \leq 0.$$</span>
So we have
<span class="math-container">$$\Var[X] \leq R \EE[X].$$</span>
Our hard work has finally paid off! By Chebyshev's inequality, recalling that <span class="math-container">$\EE[X] \geq 2^{t + 2}$</span>, we have
<span class="math-container">$$\PP[X < 2^{t + 1}] \leq \frac{\Var[X]}{(\EE[X] - 2^{t + 1})^2} \leq \frac{4R\EE[X]}{\EE[X]^2} \leq \frac{4R}{\EE[X]} \leq \frac{4R}{2^t}.$$</span>
If <span class="math-container">$X \geq 2^{t + 1}$</span>, then <span class="math-container">$E_{t + 1}$</span> happens, as desired. We have completed the proof of the lemma.</p>
<p>Note that <span class="math-container">$E_Q$</span> happens with non-zero probability, since it happens whenever the first <span class="math-container">$Q^Q$</span> circles all lie in a semi-circle of <span class="math-container">$B$</span>. The lemma tells us that
<span class="math-container">$$\PP(E_{i + 1}) \geq \PP(E_i) \cdot \left(1 - \frac{Q}{2^i}\right).$$</span>
So telescoping gives
<span class="math-container">$$\PP(E_i) \geq \PP(E_Q) \cdot \prod_{Q \leq j \leq i} \left(1 - \frac{Q}{2^j}\right).$$</span>
Thus we have
<span class="math-container">$$\PP(\cap_{i = Q}^\infty E_i) \geq \PP(E_Q) \cdot \prod_{j = Q}^\infty \left(1 - \frac{Q}{2^j}\right) > 0.$$</span>
Finally, if <span class="math-container">$\cap_{i = Q}^\infty E_i$</span> happens, then the circle is not covered (thanks to Cantor's intersection theorem). So the circle is not covered with non-zero probability as desired.</p>
| <p><strong>EDIT:</strong> The second to last line in my answer is flawed, as pointed out by @Dominik Kutek.</p>
<hr />
<p>Consider a fixed point on the bottom disk. The probability that it is covered by the top disk of area <span class="math-container">$\frac{1}{k}$</span>, is at least <span class="math-container">$\frac{1}{2kA}$</span>. (The <span class="math-container">$2$</span> is there because the fixed point might be near the edge of the bottom disk.)</p>
<p>So the probability that the fixed point is <em>not</em> covered by the top disk of area <span class="math-container">$\frac{1}{k}$</span>, is less than or equal to <span class="math-container">$1-\frac{1}{2kA}$</span>.</p>
<p>So the probability that the fixed point is <em>not</em> covered by any of the top disks, is less than or equal to</p>
<p><span class="math-container">$$\prod\limits_{k=1}^\infty\left(1-\frac{1}{2kA}\right)=\exp \sum\limits_{k=1}^\infty \ln \left(1-\frac{1}{2kA}\right)\le \exp \sum\limits_{k=1}^\infty\left(-\frac{1}{2kA}\right)=0$$</span></p>
<p>So for any value of <span class="math-container">$A$</span>, every fixed point on the bottom disk will be covered with probability <span class="math-container">$1$</span>.</p>
<p>So for any value of <span class="math-container">$A$</span>, the probability that there will be an <em>un</em>covered region of positive area, is <span class="math-container">$0$</span>.</p>
<p>So for any value of <span class="math-container">$A$</span>, the probability that the bottom disk will be completely covered, is <span class="math-container">$1$</span>.</p>
|
linear-algebra | <p>The dot product of vectors $\mathbf{a}$ and $\mathbf{b}$ is defined as:
$$\mathbf{a} \cdot \mathbf{b} =\sum_{i=1}^{n}a_{i}b_{i}=a_{1}b_{1}+a_{2}b_{2}+\cdots +a_{n}b_{n}$$</p>
<p>What about the quantity?
$$\mathbf{a} \star \mathbf{b} = \prod_{i=1}^{n} (a_{i} + b_{i}) = (a_{1} +b_{1})\,(a_{2}+b_{2})\cdots \,(a_{n}+b_{n})$$</p>
<p>Does it have a name?</p>
<p>"Dot sum" seems largely inappropriate. Come to think of it, I find it interesting that the dot product is named as such, given that it is, after all, a "sum of products" (although I am aware that properties of $\mathbf{a} \cdot{} \mathbf{b}$, in particular distributivity, make it a meaningful name).</p>
<p>$\mathbf{a} \star \mathbf{b}$ is commutative and has the following property:</p>
<p>$\mathbf{a} \star (\mathbf{b} + \mathbf{c}) = \mathbf{b} \star (\mathbf{a} + \mathbf{c}) = \mathbf{c} \star (\mathbf{a} + \mathbf{b})$</p>
| <p>Too long for a comment, but I'll list some properties below, in hopes some idea comes up.</p>
<ul>
<li>${\bf a}\star {\bf b}={\bf b}\star {\bf a}$;</li>
<li>$(c{\bf a})\star (c {\bf b})=c^n ({\bf a}\star {\bf b})$;</li>
<li>$({\bf a+b})\star {\bf c} = ({\bf a+c})\star {\bf b} = ({\bf b+c})\star {\bf a}$;</li>
<li>${\bf a}\star {\bf a} = 2^n a_1\cdots a_n$;</li>
<li>${\bf a}\star {\bf 0} = a_1\cdots a_n$;</li>
<li>$(c{\bf a})\star {\bf b} = c^n ({\bf a}\star ({\bf b}/c))$;</li>
<li>${\bf a}\star (-{\bf a}) = 0$;</li>
<li>${\bf 1}\star {\bf 0} = 1$, where ${\bf 1} = (1,\ldots,1)$;</li>
<li>$\sigma({\bf a}) \star \sigma({\bf b}) = {\bf a}\star {\bf b}$, where $\sigma \in S_n$ acts as $\sigma(a_1,\ldots,a_n) \doteq (a_{\sigma(1)},\ldots,a_{\sigma(n)})$.</li>
</ul>
| <p>I don't know if it has a particular name, but it is essentially a peculiar type of convolution.
Note that
$$ \prod_{i}(a_{i} + b_{i}) = \sum_{X \subseteq [n]} \left( \prod_{i \in X} a_{i} \right) \left( \prod_{i \in X^{c}} b_{i} \right), $$
where $X^{c} = [n] \setminus X$ and $[n] = \{1, 2, \dots n\}$. In other words, if we define $f_{a}, f_{b}$ via
$$ f_{a}(X) = \prod_{i \in X}a_{i}, $$
then
$$ a \star b = (f_{a} \ast f_{b})([n]) $$
where $\ast$ denotes the convolution product
$$ (f \ast g)(Y) = \sum_{X \subseteq Y} f(X)g(Y \setminus X). $$
To learn more about this, I would recommend reading about multiplicative functions and Moebius inversion in number theory. I don't know if there is a general theory concerning this, but the notion of convolutions comes up in many contexts (see this <a href="https://en.wikipedia.org/wiki/Convolution" rel="noreferrer">wikipedia article</a>, and another on <a href="https://en.wikipedia.org/wiki/Dirichlet_convolution" rel="noreferrer">its role in number theory</a>).</p>
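<p>A brute-force Python check of the subset expansion (all names below are just for illustration):</p>

<pre><code>from itertools import chain, combinations
from math import prod

def star(a, b):
    return prod(x + y for x, y in zip(a, b))

def star_via_subsets(a, b):
    idx = range(len(a))
    subsets = chain.from_iterable(combinations(idx, r) for r in range(len(a) + 1))
    return sum(prod(a[i] for i in X) * prod(b[i] for i in idx if i not in X)
               for X in subsets)

a, b = (1, 2, 3), (4, 5, 6)
assert star(a, b) == star_via_subsets(a, b) == 5 * 7 * 9   # = 315
</code></pre>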
<p>Edit:
For what it's worth, the operation is not a vector operation in the linear-algebraic sense. That is, it is not preserved under change-of-basis. In fact, it is not even preserved under orthogonal change-of-basis (aka rotation). For example, consider $a = (3,4) \in \mathbb{R}^{2}$. Note that $a \star a = (3+3)(4+4) = 48$. Then we apply the proper rotation $T$ defined by $T(a) = (5, 0)$. Then we see $T(a) \star T(a) = 0$.</p>
|
number-theory | <p>My question (more of a hypothesis really) is basically this: If a function $f(x)$ is defined such that $f'(x)$ is not constant and never the same for any 2 values of $x$, then there do not exist positive integers $a,b,c$ with $a\le b\le c$ such that </p>
<p>$$\int_0^{a} f(x)dx = \int_b^{c} f(x)dx\tag1$$</p>
<p>Taking $f(x)=x^n$ then</p>
<ul>
<li>for $n \gt 1$, $(1)$ becomes Fermat's Last Theorem, while</li>
<li>for $n=1$, $(1)$ becomes $a^2 + b^2 = c^2$ </li>
</ul>
<p>Maybe this has something to do with the "curviness" (I am 16 so please forgive me for non-technical language) of the graphs of $x^n$ since for $n=1$ the graph is linear and for $n>1$ the graph is curvy. </p>
<p>My school teachers are not talking to me about this because "it is out of syllabus", so I thought maybe this community could help.</p>
| <p>Interesting. But not true. If you define <span class="math-container">$f(x)=\dfrac1{x+1}$</span>, then<span class="math-container">$$\int_0^1f(x)\,\mathrm dx=\int_1^3f(x)\,\mathrm dx,$$</span>since<span class="math-container">$$\int_\alpha^\beta f(x)\,\mathrm dx=\log\left(\frac{\beta+1}{\alpha+1}\right).$$</span></p>
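<p>A quick symbolic check of this counterexample (sympy assumed available):</p>

<pre><code>import sympy as sp

x = sp.symbols('x')
f = 1 / (x + 1)
I1 = sp.integrate(f, (x, 0, 1))     # log(2)
I2 = sp.integrate(f, (x, 1, 3))     # log(4) - log(2) = log(2)
assert abs(float(I1 - I2)) < 1e-12
</code></pre>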
| <p>(Too long for a comment, therefore adding as an answer)</p>
<p>Creating conjectures is a great way to challenge your understanding and improve your skills. Seeing that you're only 16 and you're already creating conjectures involving integrals that your teachers are avoiding to tackle, I thought I'd give my 2 cents (reminds me of myself).</p>
<p>As for your specific question, there are many counterexamples, as others pointed out. I'm going to talk about some general ideas of creating conjectures as a way to improve your skills.</p>
<p>What I'd like to say is basically that you should give <strong>a lot of thought to your choice of hypotheses</strong>. This is not silly nitpicking, this is serious advice - good conjectures always have meaningful and logical hypotheses. Questioning why you're using certain hypotheses often makes it clearer whether your conjecture makes sense or not.</p>
<p>For example:</p>
<blockquote>
<p>a function $f(x)$ is defined such that $f'(x)$ is not constant and never the same for any 2 values of $x$</p>
</blockquote>
<p>I know you didn't say this, but assume for a moment that $f'$ is continuous. If this is the case, it is clear from your hypothesis that $f'$ must be strictly monotonic. So, ask yourself whether you have good reasons to include in your conjecture functions that are differentiable but not $C^1$. If yes, why? (I don't see any reason). If not, then you should choose a less convoluted statement (such as "$f'$ is strictly monotonic").</p>
<p>(Also, note that "is not constant" is redundant anyway)</p>
<p>Also, I know it's cool that your conjecture becomes Fermat's Last Theorem in a certain case, but you should also ask yourself what fundamental difference it makes that your integration limits are integers. Since your hypotheses are incredibly broad, I don't see why integer limits would be special - again, this is a suggestion for you to rethink your hypotheses. Why would integers be special here? Why not allow arbitrary reals? By asking yourself this kind of question, you gain the means to assess the value of your own conjecture yourself.</p>
<p>(Observe, for example, that if your conjecture were true, it would be trivial to prove a broader version for rational limits, by rescaling $f$ by a constant...)</p>
<p>Keep up the good work!</p>
|
matrices | <p>Recently on this site, the <a href="https://math.stackexchange.com/q/1634488/264509">question was raised</a> how we might define the factorial operation $\mathsf{A}!$ on a square matrix $\mathsf{A}$. The <a href="https://math.stackexchange.com/a/1634551/264509">answer</a>, perhaps unsurprisingly, involves the <a href="https://en.wikipedia.org/wiki/Gamma_function" rel="noreferrer">Gamma function</a>.</p>
<p>What use might it be to take the factorial of a matrix? Do any applications come to mind, or does this – for now* – seem to be restricted to the domain of recreational mathematics?</p>
<p><sup>(*Until e.g. theoretical physics turns out to have a use for this, as happened with <a href="https://en.wikipedia.org/wiki/Calabi%E2%80%93Yau_manifold" rel="noreferrer">Calabi–Yau manifolds</a> and <a href="https://en.wikipedia.org/wiki/Superstring_theory" rel="noreferrer">superstring theory</a>...)</sup></p>
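<p>For concreteness, here is a minimal numerical sketch in Python of the $\Gamma$-based definition, i.e. taking $\mathsf{A}! = \Gamma(\mathsf{A}+\mathsf{I})$ and evaluating it with SciPy's general matrix-function routine (the example matrix is arbitrary):</p>
<pre><code>import numpy as np
from scipy.linalg import funm
from scipy.special import gamma

# A! := Gamma(A + I), via a general matrix-function routine
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
print(funm(A, lambda z: gamma(z + 1)))
# [[1. 1.]
#  [0. 2.]]   (diagonal entries are 1! and 2!)
</code></pre>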
| <p>I could find the following references that use the matrix factorial in a concrete applied context:</p>
<p><em><strong>Coherent Transform, Quantization and Poisson Geometry:</strong></em> In his <a href="https://www.google.de/books/edition/Coherent_Transform_Quantization_and_Pois/5jGveNHXftgC" rel="nofollow noreferrer">book</a>, Mikhail Vladimirovich Karasev uses the matrix factorial, for example, for hypersurfaces and twisted hypergeometric functions.</p>
<p><a href="https://i.sstatic.net/2nsEjm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2nsEjm.png" alt="enter image description here" /></a></p>
<p><em><strong>Artificial Intelligence Algorithms and Applications:</strong></em> In this <a href="https://www.google.de/books/edition/Artificial_Intelligence_Algorithms_and_A/0DTnDwAAQBAJ" rel="nofollow noreferrer">conference proceeding</a> the Bayesian probability matrix factorial is used in the context of classification of imputation methods for missing traffic data.</p>
<p><em><strong>Fluid model:</strong></em> Mao, Wang and Tan deal in their <a href="https://link.springer.com/article/10.1007/s12190-010-0467-7" rel="nofollow noreferrer">paper</a> (Journal of Applied Mathematics and Computing) with a fluid model driven by an <span class="math-container">$M/M/1$</span> queue with multiple exponential vacations and <span class="math-container">$N$</span>-policy.</p>
<p><em><strong>Construction of coherent states for multi-level quantum systems:</strong></em> In <a href="https://iopscience.iop.org/article/10.1088/0305-4470/37/40/014/meta" rel="nofollow noreferrer">their paper</a> "Vector coherent states with matrix moment problems" Thirulogasanthar and Hohouéto use matrix factorial in context of quantum physics.</p>
<p><em><strong>Algorithm Optimization</strong></em>: Although this is a more theoretical field of application, I would like to mention this matrix-factorial use case as well. Vladica Andrejić, Alin Bostan and Milos Tatarevic present in their <a href="https://www.sciencedirect.com/science/article/abs/pii/S0020019020301654" rel="nofollow noreferrer">paper</a> improved algorithms for computing the left factorial residues <span class="math-container">$!p=0!+1!+\ldots+(p−1)!\bmod{p}$</span>. They confirm that there are no socialist primes <span class="math-container">$p$</span> with <span class="math-container">$5<p<2^{40}$</span>. You may take a look at an <a href="https://arxiv.org/abs/1904.09196" rel="nofollow noreferrer">arXiv version</a> of this paper.</p>
| <p>The factorial has a straightforward interpretation in terms of automorphisms/permutations as the size of the set of automorphisms.</p>
<p>One possible generalization of matrices is the double category of spans.</p>
<p>So an automorphism <span class="math-container">$ R ! = R \leftrightarrow R $</span> over a span <span class="math-container">$A \leftarrow R \rightarrow B$</span> ought to be a reasonable generalization.</p>
<p>I usually find it easier to think in terms of profunctors or relations than spans.</p>
<p>The residual/internal hom of profunctors <span class="math-container">$(R/R)(a, b) = \forall x, R(x, a) \leftrightarrow R(x, b) $</span> is a Kan extension. The Kan extension of a functor with itself is the codensity monad. For profunctors and spans the automorphism ought to be a groupoid (a monad in the category of endospans equipped with inverses).</p>
<p>The factorial is the size of the automorphism group of a set. The automorphism group ought to generalize to an "automorphism groupoid" of a span. I suspect the permutations of a matrix ought to form an automorphism groupoid enriched in Vect, but this confuses me.</p>
|
linear-algebra | <p>Let $V \neq \{\mathbf{0}\}$ be a inner product space, and let $f:V \to V$ be a linear transformation on $V$.</p>
<p>I understand the <em>definition</em><sup>1</sup> of the adjoint of $f$ (denoted by $f^*$), but I can't say I really <em>grok</em> this other linear transformation $f^*$.</p>
<p>For example, it is completely unexpected to me that to say that $f^* = f^{-1}$ is equivalent to saying that $f$ preserves all distances and angles (as defined by the inner product on $V$).</p>
<p>It is even more surprising to me to learn that to say that $f^* = f$ is equivalent to saying that there exists an orthonormal basis for $V$ that consists entirely of eigenvectors of $f$.</p>
<p>Now, I can <em>follow</em> the proofs of these theorems perfectly well, but the exercise gives me no insight into <em>the nature of the adjoint</em>.</p>
<p>For example, I can visualize a linear transformation $f:V\to V$ whose eigenvectors are orthogonal and span the space, but this visualization tells <em>me</em> nothing about what $f^*$ should be like when this is the case, largely because I'm completely in the dark about the adjoint <em>in general</em>.</p>
<p>Similarly, I can visualize a linear transformation $f:V\to V$ that preserves lengths and angles, but, again, and for the same reason, this visualization tells me nothing about what this implies for $f^*$.</p>
<p>Is there (coordinate-free, representation-agnostic) way to interpret the adjoint that will make theorems like the ones mentioned above less surprising?</p>
<hr>
<p><sup>1</sup> The adjoint of $f:V\to V$ is the unique linear transformation $f^*:V\to V$ (guaranteed to exist for every such linear transformation $f$) such that, for all $u, v \in V$,</p>
<p>$$ \langle f(u), v\rangle = \langle u, f^*(v)\rangle \,.$$</p>
| <p>For simplicity, let me consider only the finite-dimensional picture. In the infinite-dimensional world, you should consider bounded maps between Hilbert spaces, and the continuous duals of Hilbert spaces.</p>
<p>Recall that an inner product on a real [complex] vector space $V$ defines a canonical [conjugate-linear] isomorphism from $V$ to its dual space $V^\ast$ by $v \mapsto (w \mapsto \langle v,w\rangle)$, where I shamelessly use the mathematical physicist's convention that an inner product is linear in the second argument and conjugate-linear in the first; let us denote this isomorphism $V \cong V^\ast$ by $R$, so that $R(v)(w) := \langle v,w\rangle$.</p>
<p>Now, recall that a linear transformation $f : V \to W$ automatically induces a linear transformation $f^T : W^\ast \to V^\ast$, the <em>transpose</em> of $f$, by $\phi \mapsto \phi \circ f$; all $f^T$ does is use $f$ in the obvious way to turn functionals over $W$ into functionals over $V$, and really represents the image of $f$ through the looking glass, as it were, of taking dual spaces. However, if you have inner products on $V$ and $W$, then you have corresponding [conjugate-linear] isomorphisms $R_V : V \cong V^\ast$ and $R_W : W \cong W^\ast$, so you can use $R_V$ and $R_W$ to reinterpret $f^T : W^\ast \to V^\ast$ as a map $W \to V$, i.e., you can form $R_V^{-1} \circ f^T \circ R_W : W \to V$. If you unpack definitions, however, you'll find that $R_V^{-1} \circ f^T \circ R_W$ is none other than your adjoint $f^\ast$. So, given <em>fixed</em> inner products on $V$ and $W$, $f^\ast$ is simply $f^T$, arguably a more fundamental object, except reinterpreted as a map between your original vector spaces, and not their duals. If you like commutative diagrams, then $f^T$, $f^\ast$, $R_V$ and $R_W$ all fit into a very nice commutative diagram.</p>
<p>As for the specific cases of unitary and self-adjoint operators:</p>
<ol>
<li><p>If you want to be resolutely geometrical about everything, the fundamental notion is not the notion of a unitary, but rather that of an <em>isometry</em>, i.e., a linear transformation $f : V \to V$ such that $\langle f(u),f(v)\rangle = \langle u,v\rangle$. You can then define a <em>unitary</em> as an invertible isometry, which is equivalent to the definition in terms of the adjoint. In fact, if you're working on finite-dimensional $V$, then you can check that $f$ is unitary if and only if it is isometric.</p></li>
<li><p>In light of the longish discussion above, an operator $f : V \to V$ is self-adjoint if and only if $f^T : V^\ast \to V^\ast$ is exactly the same as $f$ after applying the [conjugate-linear] isomorphism $R_V : V \to V^\ast$, i.e., $f = R_V^{-1} \circ f^T \circ R_V$, or equivalently, $R_V \circ f = f^T \circ R_V$, which you can interpret as commutativity of a certain diagram. That self-adjointness implies all these nice spectral properties arguably <em>shouldn't</em> be considered obvious---at the end of the day, the spectral theorem, even in the finite-dimensional case, is a highly non-trivial theorem in every respect, especially conceptually!</p></li>
</ol>
<p>I'm not sure that my overall spiel about adjoints and transposes is all that convincing, but I stand by my statement that the notion of isometry is the more fundamental one geometrically, one that happens to yield the notion of unitary simply out of the very definition of an adjoint, and that the spectral properties of a self-adjoint operator really are a highly non-trivial fact that shouldn't be taken for granted.</p>
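<p>To make the finite-dimensional picture concrete: with respect to an orthonormal basis, the matrix of $f^\ast$ is just the conjugate transpose of the matrix of $f$, and the defining identity can be checked numerically. (A Python/NumPy sketch of my own, with the inner product conjugate-linear in the first argument, matching the convention above.)</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

A_star = A.conj().T  # matrix of the adjoint in an orthonormal basis

# <Au, v> = <u, A* v>; np.vdot conjugates its first argument
print(np.allclose(np.vdot(A @ u, v), np.vdot(u, A_star @ v)))  # True
</code></pre>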
| <p>The adjoint allows us to go from an active transformation view of a linear map to a passive view and vice versa. Consider a map $T$ and a vector $u$ and a set of basis covectors $e^i$ for $i \in 1, 2, \ldots$. Given the definition of the adjoint, we have</p>
<p>$$\left[\underline T(u), e^i \right] = \left[u, \overline T(e^i) \right]$$</p>
<p>where the commonly used bracket notation $[x, f]$ means $f(x)$.</p>
<p>On the left, we're actively transforming $u$ to a new vector and evaluating components in some pre-established basis. On the right, we're passively transforming our space to use a new basis and evaluating $u$ in terms of that basis. So for each active transformation, there is a corresponding (and equivalent) passive one.</p>
<p>That said, while I think this can help identify the meaning of the adjoint, I don't see how this helps make intuitive the theorems you described.</p>
|
linear-algebra | <p>What is an <em>intuitive</em> meaning of the null space of a matrix? Why is it useful?</p>
<p>I'm not looking for textbook definitions. My textbook gives me the definition, but I just don't "get" it.</p>
<p>E.g.: I think of the <em>rank</em> $r$ of a matrix as the minimum number of dimensions that a linear combination of its columns would have; it tells me that, if I combined the vectors in its columns in some order, I'd get a set of coordinates for an $r$-dimensional space, where $r$ is minimum (please correct me if I'm wrong). So that means I can relate <em>rank</em> (and also dimension) to actual coordinate systems, and so it makes sense to me. But I can't think of any physical meaning for a null space... could someone explain what its meaning would be, for example, in a coordinate system?</p>
<p>Thanks!</p>
| <p>If $A$ is your matrix, the null-space is, simply put, the set of all vectors $v$ such that $A \cdot v = 0$. It's good to think of the matrix as a linear transformation; if you let $h(v) = A \cdot v$, then the null-space is again the set of all vectors that are sent to the zero vector by $h$. Think of this as the set of vectors that <em>lose their identity</em> as $h$ is applied to them.</p>
<p>Note that the null-space is equivalently the set of solutions to the homogeneous equation $A \cdot v = 0$.</p>
<p>Nullity is the complement to the rank of a matrix. They are both really important; here is a <a href="https://math.stackexchange.com/questions/21100/importance-of-rank-of-a-matrix">similar question</a> on the rank of a matrix, where you can find some nice answers explaining why.</p>
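<p>A concrete way to play with this (a small Python/NumPy sketch; MATLAB's <code>null(A)</code> does the same job):</p>
<pre><code>import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])    # rank 1, so the nullity is 2

N = null_space(A)               # columns: an orthonormal basis of the null space
print(N.shape)                  # (3, 2)
print(np.allclose(A @ N, 0))    # True: these vectors "lose their identity"
</code></pre>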
| <p>This is <a href="https://math.stackexchange.com/a/987657">an answer</a> I got from <a href="https://math.stackexchange.com/q/987146">my own question</a>, it's pretty awesome!</p>
<blockquote>
<p>Let's suppose that the matrix A represents a physical system. As an example, let's assume our system is a rocket, and A is a matrix representing the directions we can go based on our thrusters. So what do the null space and the column space represent?</p>
<p>Well let's suppose we have a direction that we're interested in. Is it in our column space? If so, then we can move in that direction. The column space is the set of directions that we can achieve based on our thrusters. Let's suppose that we have three thrusters equally spaced around our rocket. If they're all perfectly functional then we can move in any direction. In this case our column space is the entire range. But what happens when a thruster breaks? Now we've only got two thrusters. Our linear system will have changed (the matrix A will be different), and our column space will be reduced.</p>
<p>What's the null space? The null space is the set of thruster instructions that completely waste fuel. They're the set of instructions where our thrusters will thrust, but the direction will not be changed at all.</p>
<p>Another example: Perhaps A can represent a rate of return on investments. The range consists of all the rates of return that are achievable. The null space consists of all the investments that can be made that wouldn't change the rate of return at all.</p>
<p>Another example: room illumination. The range of A represents the area of the room that can be illuminated. The null space of A represents the power we can apply to lamps that don't change the illumination in the room at all.</p>
</blockquote>
<p>-- <a href="https://math.stackexchange.com/a/987657">NicNic8</a></p>
|
number-theory | <p>Let $\mu\left(n\right)$ be the Möbius function. Let $\phi\left(n\right)$ be Euler's totient function. Let $\sigma\left(n\right)$ be the sum of divisors and $\tau\left(n\right)$ be the number of divisors functions. I am curious to know whether or not the system:</p>
<p>$\mu\left(n\right)=a$</p>
<p>$\phi\left(n\right)=b$</p>
<p>$\sigma\left(n\right)=c$</p>
<p>$\tau\left(n\right)=d$</p>
<p>has at most one solution. </p>
<p>Motivation: I remember a number theory assignment I had where we were given particular values for each of these functions and asked to recover the original number. I can't for the life of me remember how (or if) I managed to solve this problem. I tried to work out a general proof, but couldn't. I also wrote a loop in maple to check for counterexamples, but haven't found any yet. I feel like this is something I should know, but probably have forgotten the relevant facts to approaching this problem.</p>
| <p>The answer is No. The smallest counterexamples I could find are {1836, 1824}, {5236, 4960}, {5742, 5112}, {6764, 6368}, {9180, 9120} and {9724, 9184}. I think those are all the pairs in which both numbers are less than 10,000.</p>
<p>For example, both $n=1836$ and $n=1824$ satisfy $\mu(n)=0$, $\varphi(n)=576$, $\sigma(n)=5040$ and $\tau(n)=24$. </p>
<p>EDIT: here's the code of the program I used in GAP.</p>
<pre><code># profile of n: [ mu(n), phi(n), sigma(n), tau(n) ]
vec := function(n)
  return [MoebiusMu(n), Phi(n), Sigma(n), Tau(n)];
end;

# dictionary keyed by profiles (second argument enables lookup)
dic:=NewDictionary([1,2,3,4], true);
for i in [2..10000] do
  v:=vec(i);
  # a hit means two different numbers share the same profile
  if (LookupDictionary(dic, v) <> fail) then Print(i," <=> ", LookupDictionary(dic, v), "\n"); fi;
  AddDictionary(dic, v, i);
od;
</code></pre>
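<p>For readers without GAP, the same counterexample can be verified with a few lines of self-contained Python (the helper names are mine):</p>
<pre><code>def factorize(n):
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def signature(n):
    # (mu(n), phi(n), sigma(n), tau(n)) from the prime factorization
    f = factorize(n)
    mu = 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)
    phi = sigma = tau = 1
    for p, e in f.items():
        phi *= p ** (e - 1) * (p - 1)
        sigma *= (p ** (e + 1) - 1) // (p - 1)
        tau *= e + 1
    return mu, phi, sigma, tau

print(signature(1836))  # (0, 576, 5040, 24)
print(signature(1824))  # (0, 576, 5040, 24)
</code></pre>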
| <p>First, congratulations on asking a number theory question which is quite unlike any I have seen before. It's certainly more fun to read about questions like this rather than computing $a^b$ modulo $n$. </p>
<p>Regarding the assignment you got: one can certainly find particular values of $a$, $b$, $c$ and $d$ such that there is exactly one $n$ solving the equations. For instance, if $p$ is a prime number then taking $a = -1$, $b = p-1$, $c = p+1$, $d = 2$ has the unique
solution $n = p$: $\tau(n) = 2$ forces $n$ to be prime and thus $\varphi(n) = n-1$. </p>
<p>Although I am a number theorist, I don't have an expert opinion on the general question. (I'm not really "the right kind" of number theorist to answer this question. You might ask Carl Pomerance, for instance.) I suppose the more reasonable guess is that there are values of $a,b,c,d$ which yield more than one solution. </p>
<p>Probably the first thing to do is what you're doing: search for counterexamples by computer. When you get tired of doing that, let us know how far you looked (and, of course, whether you've found any). The second thing to do is to come up with a heuristic on how often one might expect multiple solutions to be found...</p>
|
combinatorics | <p>Suppose that at the beginning there is a blank document, and a letter "a" is written in it. In the following steps, only the three functions of "select all", "copy" and "paste" can be used.</p>
<p>Find the minimum number of steps to reach
<strong>at least</strong> <span class="math-container">$100,000$</span> a's (each of the three operations of "select all", "copy" and "paste" is counted as one step). If the target number is not specified, and I want to get <strong>the exact amount</strong> of "a"s, is there a general formula?</p>
<p>This is a fascinating question. My friend and I discussed it for a long time that day, but to no avail.</p>
<p>What I started thinking about was –
If the three operations of "select all", "copy" and "paste" are roughly counted together as one step, then each step makes the number <span class="math-container">$\times2$</span>, so it is a geometric progression with a common ratio of 2.</p>
<p>Let <span class="math-container">$a_{n}≥100000 \: (n\in \mathbb{N})$</span>, where <span class="math-container">$a_{1}=1$</span>.</p>
<p>According to the general formula of geometric progression: <span class="math-container">$a_{n}=a_{1}\times q^{n-1}$</span></p>
<p>We can get: <span class="math-container">$n=18$</span>, i.e. <span class="math-container">$17$</span> doublings (since <span class="math-container">$2^{17}=131072\geq 100000$</span>)</p>
<p>So if the three operations of "select all", "copy" and "paste" are each counted as one step, there are a total of <span class="math-container">$17×3=51$</span> steps</p>
<p>But this ignores a problem: can we paste all the time?</p>
<p>So this seems to be an interesting optimization problem, and we need to find a strategy to minimize the number of steps from one "a" to one hundred thousand "a"s.</p>
<ol>
<li><p>Select all + copy + paste: These three operations double the number of "a"s. If there are currently <span class="math-container">$n$</span> "a"s, then there will be <span class="math-container">$2n$</span> "a"s after the operation.</p>
</li>
<li><p>Paste: This operation will add the number of "a"s equal to the clipboard content. If there are <span class="math-container">$k$</span> "a"s in the clipboard, then there will be <span class="math-container">$(n+k)$</span> "a"s after the operation.</p>
</li>
</ol>
<p>We define a function <span class="math-container">$f(n)$</span> that represents the minimum number of steps required to reach <span class="math-container">$n$</span> "a"s. Initially, we have one "a", so <span class="math-container">$f(1) = 0$</span>.</p>
<p>If we choose the doubling operation, <span class="math-container">$f(2n) = f(n) + 3$</span>.</p>
<p>If we choose the paste operation, then <span class="math-container">$f(n+k) = f(n) + 1$</span>, where <span class="math-container">$k$</span> is the number of "a"s in the clipboard.</p>
<p>Then I started to get confused, because I realized that every step seems to present an optimization choice, which looked complicated. Using a function comparison I got <span class="math-container">$n=14$</span> as the minimum value at one point, but then I realized that this only optimized a single step.</p>
| <p>S=select all<br />
C=copy<br />
P=paste</p>
<p>SS, SP, CS, PC or CC don't make sense, of course. So after an S there must be a C, after a C there must be a P, and after a P there must be a P or an S.</p>
<p>SC<span class="math-container">$k$</span>P is SC and <span class="math-container">$k$</span> pastes, for example SCPPPP=SC<span class="math-container">$4$</span>P.</p>
<p>SC<span class="math-container">$k$</span>P is <span class="math-container">$k+2$</span> steps and multiplies the number of characters by <span class="math-container">$k+1$</span>.</p>
<p>So, after <span class="math-container">$a_1$</span> SC<span class="math-container">$1$</span>P, <span class="math-container">$a_2$</span> SC<span class="math-container">$2$</span>P, etc, the number of characters is
<span class="math-container">$$\prod_{k=1}^\infty (k+1)^{a_k}$$</span>
and the number of steps is
<span class="math-container">$$\sum_{k=1}^\infty (k+2)a_k$$</span></p>
<p>Note that only a finite number of <span class="math-container">$a_k$</span> are positive.</p>
<p>Roughly, SC<span class="math-container">$k$</span>P multiplies the number of characters by <span class="math-container">$\sqrt[k+2]{k+1}$</span> for each step. The maximum of <span class="math-container">$\sqrt[k+2]{k+1}$</span> for integer <span class="math-container">$k$</span> is at <span class="math-container">$k=3$</span>, but <span class="math-container">$k=2$</span> is very close to it. (For real <span class="math-container">$k$</span> it's not <span class="math-container">$e$</span>, just in case you are guessing). So we'd rather choose SC<span class="math-container">$2$</span>P or SC<span class="math-container">$3$</span>P whenever it is possible.</p>
<p>With some trial and error I have found that
<span class="math-container">$$3^3\cdot 4^6=110592>10^5$$</span>
which yields <span class="math-container">$3\cdot 4+6\cdot 5=42$</span> steps.</p>
<p>On the other hand, to get exactly 100,000 characters, since <span class="math-container">$10^5=2\cdot 4^2\cdot 5^5$</span>, the number of steps is
<span class="math-container">$$3\cdot1+5\cdot 2+6\cdot 5=43$$</span></p>
<p>I don't know if <span class="math-container">$41$</span> or fewer steps is possible, but I don't think so.</p>
<p>EDIT<br />
To show an example to @badjohn, if you want to get exactly 100,001 characters, you should factorize <span class="math-container">$100001=11\cdot 9091$</span>, so the best way to get it is 1SC<span class="math-container">$10$</span>P and 1SC<span class="math-container">$9090$</span>P, that is, <span class="math-container">$12+9092=9104$</span> steps.</p>
<p>For 99,999 it is <span class="math-container">$99999=3^2\cdot 41\cdot 271$</span>. Since <span class="math-container">$3^2$</span> is relatively small, we try</p>
<ul>
<li>2 SC<span class="math-container">$2$</span>P + 1 SC<span class="math-container">$40$</span>P + 1 SC<span class="math-container">$270$</span>P <span class="math-container">$=8+42+272=322$</span> steps.</li>
<li>1 SC<span class="math-container">$8$</span>P + 1 SC<span class="math-container">$40$</span>P + 1 SC<span class="math-container">$270$</span>P <span class="math-container">$=10+42+272=324$</span> steps.</li>
</ul>
<p>EDIT2</p>
<p>The formula of the minimal steps to get <em>exactly</em> <span class="math-container">$n$</span> characters is as follows:</p>
<p>Factorize <span class="math-container">$n$</span> this way:
<span class="math-container">$$n=2^{\alpha_2}4^{\alpha_4}\prod_{p\text{ odd prime}}p^{\alpha_p},$$</span>
where <span class="math-container">$\alpha_2$</span> must be <span class="math-container">$0$</span> or <span class="math-container">$1$</span> and the remaining exponents satisfy <span class="math-container">$\alpha_k\ge 0$</span>.</p>
<p>Then the minimal number of steps is
<span class="math-container">$$\sum_{p=4\text{ or $p$ is prime}}\alpha_p(p+1)$$</span></p>
<hr />
<p><strong>Some notes:</strong></p>
<p><strong>1</strong>. Finding the maximum of <span class="math-container">$\sqrt[k+2]{k+1}$</span> with calculus involves a non-elementary equation that I solved with software. Fortunately, it is not essential; it's just a hint.</p>
<p><strong>2</strong>. The essential point of my reasoning is that the minimal number of steps <span class="math-container">$S(n)$</span> to get exactly <span class="math-container">$n$</span> characters depends only on some factorization of <span class="math-container">$n$</span>, one that involves <span class="math-container">$4$</span>, odd prime numbers, and <span class="math-container">$2$</span> only if it's needed.</p>
<p>The number of steps for some factorization (not necessarily with prime factors) <span class="math-container">$$\prod_{n}n^{\alpha_n}$$</span> is <span class="math-container">$$\sum_{n}(n+1)\alpha_n$$</span></p>
<p>To get the best factorization we can begin from the prime factorization and then see whether, by 'combining' some factors, we can get a better one.</p>
<p>Combining a factor <span class="math-container">$p^{\alpha}$</span> with a factor <span class="math-container">$q^{\alpha}$</span> (<span class="math-container">$p,q\ge 2$</span>, of course), we get the factor <span class="math-container">$(pq)^{\alpha}$</span>. The number of steps for those factors goes from
<span class="math-container">$$\alpha(p+q+2)$$</span>
to
<span class="math-container">$$\alpha(pq+1)$$</span>
so the question is if <span class="math-container">$p+q+2$</span> is greater or lesser than <span class="math-container">$pq+1$</span>. Combining gets lesser number of steps iff <span class="math-container">$p+q+2>pq+1$</span>.</p>
<p>So assume for example that <span class="math-container">$p+q+2>pq+1$</span>. The following inequalities are equivalent to this one:
<span class="math-container">$$p+q+1>pq$$</span>
<span class="math-container">$$p(q-1)<q+1$$</span>
Since <span class="math-container">$q-1>0$</span>,
<span class="math-container">$$p<\frac{q+1}{q-1}=1+\frac2{q-1}$$</span>
This is true only for <span class="math-container">$p=q=2$</span>, so the 'combining' gets better results only to make a <span class="math-container">$4$</span> from two <span class="math-container">$2$</span>'s.</p>
<p>That is, SC3P is better than two SC1P's. But repeating P instead of making a new SC is worse in every other situation. Or equal, as for SCPPSCP and SC5P. (That is, to multiply the number of characters by 6 you must make 7 steps, no matter how.)</p>
<p><strong>3</strong>. The problem is much, much harder if you want to get at least <span class="math-container">$n$</span> characters, because it involves the factorization of every number <span class="math-container">$\ge n$</span>.</p>
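<p>For what it's worth, the "exactly <span class="math-container">$n$</span>" counts above can be cross-checked with a small dynamic program over the divisors of <span class="math-container">$n$</span> (a Python sketch; it encodes only the fact that multiplying the length by a factor <span class="math-container">$d\ge 2$</span> costs <span class="math-container">$d+1$</span> steps):</p>
<pre><code>from functools import lru_cache

@lru_cache(maxsize=None)
def steps_exact(n):
    # minimal steps to reach exactly n characters: an SC(k)P block
    # multiplies the length by k+1 at a cost of k+2 steps, i.e. a
    # factor d >= 2 costs d + 1 steps
    if n == 1:
        return 0
    best = n + 1  # a single SC(n-1)P block
    d = 2
    while d * d <= n:
        if n % d == 0:
            best = min(best,
                       steps_exact(n // d) + d + 1,
                       steps_exact(d) + n // d + 1)
        d += 1
    return best

print(steps_exact(100000))  # 43
print(steps_exact(99999))   # 322
</code></pre>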
| <p>Elaborating on the answer by @ajotatxe, we can actually pin down the minimum number of steps needed.</p>
<p>As he mentioned, the only sensible moves are <span class="math-container">$M_k := \textrm{SC}k\textrm{P}$</span>, that is, "select all, copy and then paste <span class="math-container">$k$</span> times", where <span class="math-container">$k \ge 1$</span>.</p>
<p>Each time we apply <span class="math-container">$M_p$</span>, we multiply the number by <span class="math-container">$(p+1)$</span> and use <span class="math-container">$(p+2)$</span> steps. Let us call <span class="math-container">$C(M_p) = p+2$</span> the <em>cost</em> of the move and <span class="math-container">$U(M_p) = p+1$</span> the <em>utility</em> of the move. For a sequence of moves <span class="math-container">$\bar{M} = (M_{p_1}, \ldots, M_{p_k})$</span>, the cost and utility functions are respectively
<span class="math-container">$$ C(\bar{M} ) = \sum_{j=1}^k (p_j +2) , \ \ \ U(\bar{M}) = \prod_{j=1}^k (p_j+1) $$</span>
Let us say that a sequence of moves <span class="math-container">$\bar{M}$</span> is <em>worse</em> than another sequence <span class="math-container">$\bar{N}$</span>, denoted <span class="math-container">$\bar{M} \prec \bar{N}$</span>, if <span class="math-container">$C(\bar{M}) \ge C(\bar{N})$</span> and <span class="math-container">$U(\bar{M}) \le U(\bar{N})$</span>. In practice, we will almost always compare sequences with the same cost. A sequence of moves is called optimal if it is maximal with respect to <span class="math-container">$\prec$</span>. As an aside, note that <span class="math-container">$\prec$</span> is only a preorder; that is, we can have different sequences with equal utility and cost.</p>
<p>I claim that the only sensible moves are <span class="math-container">$M_1, M_2, M_3, M_4$</span>. I will show this fact by the following lemma.</p>
<p><strong>Lemma.</strong> If a sequence of moves <span class="math-container">$\bar{M}$</span> contains some <span class="math-container">$M_k$</span> with <span class="math-container">$k \ge 5$</span>, then there exists a sequence <span class="math-container">$\bar{N}$</span> with <span class="math-container">$\bar{M} \prec \bar{N}$</span> that contains no <span class="math-container">$M_k$</span> with <span class="math-container">$k \ge 5$</span>.</p>
<p><strong>Proof.</strong> Since composing moves is multiplicative in the utility and additive in the cost, it is enough to show that <span class="math-container">$M_k$</span> for <span class="math-container">$k\ge 5$</span> is worse than a sequence of moves only made of <span class="math-container">$M_1, M_2, M_3, M_4$</span>. For <span class="math-container">$k =5$</span> we have <span class="math-container">$M_5 \preceq (M_2, M_1)$</span> , while for <span class="math-container">$k \ge 6$</span> I claim that <span class="math-container">$M_k \prec (M_{k-5}, M_3)$</span>. Indeed, <span class="math-container">$k+1 \le 4k - 16 $</span> whenever <span class="math-container">$k \ge 17/3 \approx 5.67$</span>.</p>
<p>The second observation is that <span class="math-container">$M_3$</span> has the best utility-per-cost; this can be seen by comparing the factor <span class="math-container">$\sqrt[k+2]{k+1}$</span> for <span class="math-container">$k=1,2,3,4$</span>:
<span class="math-container">$$ \alpha_1 = \sqrt[3]{2} \approx 1.26, \ \ \ \alpha_2 = \sqrt[4]{3} \approx 1.316, \ \ \ \alpha_3 = \sqrt[5]{4} \approx 1.319, \ \ \ \alpha_4 =\sqrt[6]{5} \approx 1.308$$</span>
As a result, we have the following:</p>
<p><strong>Lemma.</strong> An optimal sequence contains <span class="math-container">$M_1$</span> and <span class="math-container">$M_4$</span> at most once and <span class="math-container">$M_2$</span> at most four times. Furthermore, <span class="math-container">$M_2$</span> and <span class="math-container">$M_4$</span> cannot appear together. In particular, all but 5 moves in an optimal sequence are <span class="math-container">$M_3$</span> moves.</p>
<p><strong>Proof.</strong> Suppose by contradiction that <span class="math-container">$M_2$</span> appears five times in an optimal sequence. Note that
<span class="math-container">$$ C(M_2, M_2, M_2, M_2, M_2) = 20 = C(M_3, M_3, M_3, M_3) $$</span>
The utility of <span class="math-container">$M_2$</span> five times is <span class="math-container">$U(M_2)^5 = \alpha_2^{20}$</span>, while the utility of <span class="math-container">$M_3$</span> repeated <span class="math-container">$4$</span> times is <span class="math-container">$\alpha_3^{20}$</span>, concluding the argument.</p>
<p>To prove that <span class="math-container">$M_4$</span> and <span class="math-container">$M_1$</span> can appear at most once, note that <span class="math-container">$(M_4, M_4) \prec (M_2, M_2, M_2)$</span> and <span class="math-container">$(M_1, M_1) \prec M_4$</span>. The last claim follows from <span class="math-container">$(M_2, M_4) \prec (M_3, M_3)$</span>.</p>
<p>Lastly, note that <span class="math-container">$(M_1, M_2, M_2) \prec (M_4, M_3)$</span> allows for only one <span class="math-container">$M_2$</span> when <span class="math-container">$M_1$</span> appears. As a result, the only possible optimal, non-<span class="math-container">$M_3$</span> moves are the following <span class="math-container">$7$</span> moves:
<span class="math-container">$$M_1, \ \ \ M_1 M_2, \ \ \ M_2, \ \ \ M_2 M_2, \ \ \ M_2 M_2M_2, \ \ \ M_2 M_2 M_2 M_2, \ \ \ M_4 $$</span>
It is possible to show, with the help of a simple program, that these are indeed optimal sequences of moves.</p>
<p>We are ready to show the final theorem:</p>
<p><strong>Theorem.</strong> The minimal number of steps <span class="math-container">$S(n)$</span> to copy&paste a text <span class="math-container">$n$</span> times with select-all, copy and paste moves is given by
<span class="math-container">$$ S(n) = \min_{i=1, \ldots, 8} 5 \lceil \log_4(n/u_i) \rceil+c_i $$</span>
where <span class="math-container">$(c_i, u_i)$</span> varies in the set
<span class="math-container">$$ I = \{ (0,1), (3,2), (7,6), (4,3), (8,9), (12,27), (16,81), (6,5) \} $$</span></p>
<p><strong>Proof.</strong> The proof is a direct consequence of the above argument. The set <span class="math-container">$I$</span> is the set of cost-utility coordinates for the 7 above move sequences, plus an eighth element representing the empty move (corresponding to only using <span class="math-container">$M_3$</span>). If we use a starting sequence with cost-utility <span class="math-container">$(c,u)$</span> and then apply <span class="math-container">$k$</span> times <span class="math-container">$M_3$</span>, we get a total cost-utility of <span class="math-container">$(c+5k, u \cdot 4^k)$</span>. Imposing <span class="math-container">$u \cdot 4^k \ge n$</span> we get <span class="math-container">$k \ge \log_4(n/u)$</span>, so that the smallest admissible integer is <span class="math-container">$k_* = \lceil \log_4(n/u) \rceil$</span> (interpreted as <span class="math-container">$0$</span> when <span class="math-container">$n\le u$</span>). Substituting into the cost we obtain the claimed formula.</p>
<p>As an application, for <span class="math-container">$n= 100,000$</span> the best value is obtained with starting sequence <span class="math-container">$M_2 M_2 M_2$</span>, and the associated number of steps is... <strong>42</strong>, the Answer to the Ultimate Question of Life, The Universe, and Everything!! My compliments to @ajotatxe for having guessed the answer by trial and error.</p>
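<p>The formula is easy to put into code (a Python sketch; the prefix list below is exactly the cost-utility set <span class="math-container">$I$</span>, and the loop counts the needed <span class="math-container">$M_3$</span> blocks with exact integer arithmetic, clamping at zero):</p>
<pre><code>PREFIXES = [(0, 1), (3, 2), (7, 6), (4, 3), (8, 9), (12, 27), (16, 81), (6, 5)]

def S(n):
    best = None
    for c, u in PREFIXES:
        k = 0                  # number of M3 blocks (cost 5, utility 4)
        while u < n:
            u, k = u * 4, k + 1
        best = c + 5 * k if best is None else min(best, c + 5 * k)
    return best

print(S(100000))  # 42
</code></pre>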
<p>To the OP: in hindsight, it's no surprise you and your friend had a hard time finding the right answer. Thanks for the very nice problem!</p>
|
differentiation | <p>Let <span class="math-container">$0<x<1$</span> and <span class="math-container">$f(x)=x^{x^{x^{x}}}$</span> then we have :</p>
<p>Claim :</p>
<p><span class="math-container">$$f''(x)\geq 0$$</span></p>
<p>My attempt as a sketch of partial proof :</p>
<p>We introduce the function (<span class="math-container">$0<a<1$</span>):</p>
<p><span class="math-container">$$g(x)=x^{x^{a^{a}}}$$</span></p>
<p>Second claim :
<span class="math-container">$$g''(x)\geq 0$$</span></p>
<p>We have :</p>
<p><span class="math-container">$g''(x)=x^{x^{a^{a}}+a^a-2}(a^{\left(2a\right)}\ln(x)+x^{a^{a}}+2a^{a}x^{a^{a}}\ln(x)-a^{a}\ln(x)+2a^{a}+a^{\left(2a\right)}x^{a^{a}}\ln^{2}(x)-1)$</span></p>
<p>We are interested in the inequality:</p>
<p><span class="math-container">$$(a^{\left(2a\right)}\ln(x)+x^{a^{a}}+2a^{a}x^{a^{a}}\ln(x)-a^{a}\ln(x)+2a^{a}+a^{\left(2a\right)}x^{a^{a}}\ln^{2}(x)-1)\geq 0$$</span></p>
<p>I'm stuck here.</p>
<hr />
<p>As noticed by Hans Engler we introduce the function :</p>
<p><span class="math-container">$$r(x)=x^{a^a}\ln(x)$$</span>
We have :</p>
<p><span class="math-container">$$r''(x)=x^{a^a - 2} ((a^a - 1) a^a \ln(x) + 2 a^a - 1)$$</span></p>
<p>The conclusion is straightforward: the function <span class="math-container">$\ln(g(x))$</span> is convex, which implies that <span class="math-container">$g(x)$</span> is also convex on <span class="math-container">$(0,1)$</span>.</p>
<p>Now, starting from the second claim and using Jensen's inequality, we have for <span class="math-container">$x,y,a\in(0,1)$</span>:</p>
<p><span class="math-container">$$x^{x^{a^{a}}}+y^{y^{a^{a}}}\geq 2\left(\frac{x+y}{2}\right)^{\left(\frac{x+y}{2}\right)^{a^{a}}}$$</span></p>
<p>Substituting <span class="math-container">$a=\frac{x+y}{2}$</span>, we obtain:</p>
<p><span class="math-container">$$x^{x^{\left(\frac{x+y}{2}\right)^{\left(\frac{x+y}{2}\right)}}}+y^{y^{\left(\frac{x+y}{2}\right)^{\left(\frac{x+y}{2}\right)}}}\geq 2\left(\frac{x+y}{2}\right)^{\left(\frac{x+y}{2}\right)^{\left(\frac{x+y}{2}\right)^{\left(\frac{x+y}{2}\right)}}}$$</span></p>
<p>Now the idea is to compare the two quantities :</p>
<p><span class="math-container">$$x^{x^{x^{x}}}+y^{y^{y^{y}}}\geq x^{x^{\left(\frac{x+y}{2}\right)^{\left(\frac{x+y}{2}\right)}}}+y^{y^{\left(\frac{x+y}{2}\right)^{\left(\frac{x+y}{2}\right)}}}$$</span></p>
<p>We split the problem in two, as:</p>
<p><span class="math-container">$$x^{x^{\left(\frac{x+y}{2}\right)^{\left(\frac{x+y}{2}\right)}}}\leq x^{x^{x^{x}}}$$</span></p>
<p>And :</p>
<p><span class="math-container">$$y^{y^{y^{y}}}\geq y^{y^{\left(\frac{x+y}{2}\right)^{\left(\frac{x+y}{2}\right)}}}$$</span></p>
<p>Unfortunately, this is not sufficient to show the convexity, because the intervals are disjoint.</p>
<hr />
<h2>A related result :</h2>
<p>It seems that the function:</p>
<p><span class="math-container">$r(x)=x^x\ln(x)=v(x)u(x)$</span> is increasing on <span class="math-container">$I=(0.1,e^{-1})$</span>, where <span class="math-container">$v(x)=x^x$</span> and <span class="math-container">$u(x)=\ln(x)$</span>. To see this, differentiate the product twice; term by term we have:
<span class="math-container">$$v''(x)u(x)\leq 0$$</span>
<span class="math-container">$$v'(x)u'(x)\leq 0$$</span>
<span class="math-container">$$v(x)u''(x)\leq 0$$</span></p>
<p>So the derivative <span class="math-container">$r'$</span> is decreasing on this interval <span class="math-container">$I$</span>, and since <span class="math-container">$r'(e^{-1})>0$</span>, the function <span class="math-container">$r$</span> is increasing on <span class="math-container">$I$</span>.</p>
<p>We deduce that <span class="math-container">$R(x)=e^{r(x)}$</span> is increasing. Furthermore, on <span class="math-container">$I$</span> the function <span class="math-container">$R(x)$</span> appears to be concave, but I do not have a proof of this yet.</p>
<p>We deduce that the function <span class="math-container">$R(x)^{R(x)}$</span> is convex on <span class="math-container">$I$</span>. To show it, write <span class="math-container">$R(x)^{R(x)}=n(m(x))$</span> with <span class="math-container">$m(x)=R(x)$</span> and <span class="math-container">$n(t)=t^t$</span>, differentiate twice using <span class="math-container">$(n(m(x)))''=n''(m(x))(m'(x))^2+m''(x)n'(m(x))$</span>, and note that on <span class="math-container">$I$</span>:</p>
<p><span class="math-container">$$n''(m(x))(m'(x))^2\geq 0$$</span></p>
<p>And :</p>
<p><span class="math-container">$$m''(x)n'(m(x))\geq 0$$</span></p>
<p>Because <span class="math-container">$x^x$</span> is convex and decreasing for <span class="math-container">$x\in I$</span>.</p>
<p>Conclusion:</p>
<p><span class="math-container">$$x^{x^{\left(x^{x}+x\right)}}$$</span> is convex on <span class="math-container">$I$</span></p>
<p>The same reasoning works with <span class="math-container">$x\ln(x)$</span>, which is convex and decreasing on <span class="math-container">$I$</span>.</p>
<p>Have a look at <a href="https://www.wolframalpha.com/input/?i=second+derivative+g%28x%29%2Fx%5Ex" rel="nofollow noreferrer">the second derivative divided by <span class="math-container">$x^x$</span></a>.</p>
<p>In the last link, everything is positive on <span class="math-container">$J=(0.25,e^{-1})$</span>, taking the function <span class="math-container">$g(x)=\ln\left(R(x)^{R(x)}\right)$</span>.</p>
<hr />
<p>Question:</p>
<p>How can we show the first claim? Is there a trick here?</p>
<p>P.S. Feel free to use my ideas.</p>
| <p>Proof:</p>
<p>First, we have that</p>
<p>(1) midpoint convex implies rational convex, and</p>
<p>(2) rational convex plus continuous implies convex.</p>
<p>See <a href="https://math.stackexchange.com/questions/83383/midpoint-convex-and-continuous-implies-convex">Midpoint-Convex and Continuous Implies Convex</a> for details on this.</p>
<p>So it suffices to prove that the function is midpoint convex. Let <span class="math-container">$x\in (0,1)$</span> and let <span class="math-container">$y\in (x,1)$</span>; then we want to show that</p>
<p><span class="math-container">$$f(\frac{x+y}{2}) \le \frac{f(x)+f(y)}{2}$$</span></p>
<p>That is, we want to show that</p>
<p><span class="math-container">$$\left(\frac{x+y}{2}\right)^{\left(\frac{x+y}{2}\right)^{\left(\frac{x+y}{2}\right)^{\left(\frac{x+y}{2}\right)}}} \le \frac{x^{x^{x^{x}}}+y^{y^{y^{y}}}}{2}$$</span></p>
<p>but this follows from the argument in your original post.</p>
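<p>(As a quick numerical spot check in Python - random sampling of course proves nothing, but it is reassuring:)</p>
<pre><code>import numpy as np

f = lambda x: x ** (x ** (x ** x))

rng = np.random.default_rng(1)
x = rng.uniform(1e-3, 1, 100000)
y = rng.uniform(1e-3, 1, 100000)
gap = (f(x) + f(y)) / 2 - f((x + y) / 2)
print(gap.min())  # nonnegative, up to floating-point round-off
</code></pre>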
| <p>Mathematica 12.3 does it in a moment:</p>
<pre><code>NMinimize[{D[x^x^x^x, {x, 2}], x > 0 && x < 1}, x]
</code></pre>
<blockquote>
<p><span class="math-container">$\{0.839082,\{x\to 0.669764\}\}$</span></p>
</blockquote>
<p>Since the minimum value of the second derivative on <span class="math-container">$(0,1)$</span> is positive, the function under consideration is convex on <span class="math-container">$(0,1)$</span>.</p>
<p>Addition. User @RiverLi has expressed doubts concerning the <code>NMinimize</code> result. Here are
additional arguments. First, as
<pre><code>D[x^x^x^x, {x, 2}] // Simplify
</code></pre>
<p><span class="math-container">$ x^{x^x+x^{x^x}-2} \left(x^{2 x} \log (x) \left(x \log ^2(x)+x \log (x)+1\right)^2+x^{x+1}
\log ^2(x)+x^{x+2} \log ^2(x) (\log (x)+1)^2+3 x^{x+1} \log (x)
(\log (x)+1)+2 x^x+x^x \log (x) (x+x \log (x)-1)+x^{x^x} \left(x^x \log (x)
\left(x \log ^2(x)+x \log (x)+1\right)+1\right)^2-1\right)$</span>
and</p>
<pre><code>D[x^x^x^x, {x, 3}] // Simplify
</code></pre>
<p><span class="math-container">$x^{x^x+x^{x^x}} \left(x^{3 x} \log (x) \left(\frac{1}{x}+\log ^2(x)+\log (x)\right)^3+x^{2 x-3} \left(x \log ^2(x)+x \log (x)+1\right)^2+x^{x-2} \left(\frac{1}{x}+\log ^2(x)+\log (x)\right) \left(x^{x+1} \log (x) (\log (x)+1)+x^x-1\right)+x^{2 x^x} \left(x^x \log (x) \left(\frac{1}{x}+\log ^2(x)+\log (x)\right)+\frac{1}{x}\right)^3+3 x^{x^x-3} \left(x^{x+1} \log ^3(x)+x^{x+1} \log ^2(x)+x^x \log (x)+1\right) \left(x^{2 x+2} \log ^5(x)+2 x^x+\left(x^x+4 x-1\right) x^x \log (x)+\left(2 x^x+1\right) x^{x+2} \log ^4(x)+\left(x^{x+1}+2 x^x+2 x\right) x^{x+1} \log ^3(x)+\left(2 x^x+x+5\right) x^{x+1} \log ^2(x)-1\right)+2 x^{x-3} \left(x^2 \log ^3(x)+2 x^2 \log ^2(x)+2 x+x (x+3) \log (x)-1\right)+3 x^{2 x-3} \log (x) \left(x \log ^2(x)+x \log (x)+1\right) \left(x^2 \log ^3(x)+2 x^2 \log ^2(x)+2 x+x (x+3) \log (x)-1\right)+\frac{\left(x^{x+1} \log (x) (\log (x)+1)+x^x-1\right)^2}{x^3}+\frac{x^{x+1} \log (x)+2 x^{x+1} (\log (x)+1)+x^{x+2} \log (x) (\log (x)+1)^2-x^x+1}{x^3}+x^{x-3} \log (x) \left(x^3 \log ^4(x)+3 x^3 \log ^3(x)+3 x^2+3 x^2 (x+2) \log ^2(x)+x \left(x^2+9 x-4\right) \log (x)+2\right)\right)$</span>
show that the second derivative is continuously differentiable on <span class="math-container">$(0,1]$</span>.
In particular, the second derivative is continuously differentiable on <span class="math-container">$[0.01,1]$</span>.</p>
<p>Second,</p>
<pre><code>Limit[D[x^x^x^x, {x, 2}], x -> 0, Direction -> "FromAbove"]
</code></pre>
<p><span class="math-container">$\infty$</span></p>
<p>and</p>
<pre><code>D[x^x^x^x, {x, 2}] /. x -> 0.01
</code></pre>
<p><span class="math-container">$77.923$</span></p>
<p>and</p>
<pre><code>D[x^x^x^x, {x, 2}] /. x -> 1
</code></pre>
<p><span class="math-container">$2$</span></p>
<p>Third, the command of Maple (here Maple is stronger than Mathematica)</p>
<pre><code>DirectSearch:-SolveEquations(diff(x^(x^(x^x)), x $ 3) = 0, {0 <= x, x <= 1}, AllSolutions);
</code></pre>
<p><span class="math-container">$$\left[\begin{array}{cccc}
2.58795803978585\times10^{-24} & \left[\begin{array}{c}
- 1.60871316268185183\times10^{-12}
\end{array}\right] & \left[x= 0.669764056702161\right] & 23
\end{array}\right] $$</span>
shows there is only one critical point of the second derivative on <span class="math-container">$[0.01,1]$</span>.
Combining the above with the value of the second derivative at <span class="math-container">$x=0.669764056702161$</span>, i.e. with <span class="math-container">$0.83908$</span>, and with the result of</p>
<pre><code>NMaximize[{D[x^x^x^x, {x,3}] // Simplify, x >= 0 && x <= 0.01}, x]
</code></pre>
<p><span class="math-container">$\{-4779.93,\{x\to 0.01\}\},$</span></p>
<p>we conclude that the second derivative takes its global minimum on <span class="math-container">$(0,1]$</span> at <span class="math-container">$x=0.669764056702161$</span>.</p>
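<p>The same numbers can be reproduced outside Mathematica, e.g. with Python's <code>mpmath</code> via numerical differentiation at the reported critical point:</p>
<pre><code>from mpmath import mp, mpf, diff

mp.dps = 30
f = lambda x: x ** (x ** (x ** x))
x0 = mpf("0.669764056702161")

print(diff(f, x0, 2))  # ~0.839082, the claimed minimal value of f''
print(diff(f, x0, 3))  # ~0, consistent with a critical point of f''
</code></pre>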
|
logic | <p>According to <a href="https://johncarlosbaez.wordpress.com/2016/04/02/computing-the-uncomputable/">here</a>, there is the "standard" model of Peano Arithmetic. This is defined as $0,1,2,...$ in the usual sense. What would be an example of a <em>nonstandard</em> model of Peano Arithmetic? What would a <em>nonstandard</em> amount of time be like?</p>
| <p>Peano arithmetic is a first-order theory, and therefore if it has an infinite model---and it has---then it has models of every cardinality.</p>
<p>Not only that: because it has a model which is pointwise definable (every element is definable), there are non-isomorphic countable models. This means that you can find models which are not the standard model already at the countable cardinality.</p>
<p>What do these models look like? Well, it's kinda hard to explain. They all have an initial segment that looks like the natural numbers. That much is easy to prove. Also not hard to show is that the rest of the model can be decomposed into $\Bbb Z$-chains. Namely, if $c$ is a non-standard number, then it has a predecessor (since Peano proves that every non-zero element is a successor). So we can define $f(k)=S^k(c)$ for $k\in\Bbb Z$, giving an order isomorphism between $\Bbb Z$ and the "chunk" of the model that $c$ lives in.</p>
<p>Harder to prove, but still not impossible, is that the countable models all look the same as far as the order goes: they all have an initial segment of $\Bbb N$, followed by $\Bbb{Q\times Z}$ ordered lexicographically.</p>
<p>To produce such models you can use three standard methods:</p>
<ol>
<li><p>Compactness. Add a constant $c$, require it to be larger than any numeral, by compactness this is a consistent theory so it has a model. This model cannot be the standard model, because it has an element larger than all the numerals.</p></li>
<li><p>Ultrapowers. Take a free ultrafilter $\mathcal U$ over $\Bbb N$, and consider the ultrapower $\Bbb{N^N}/\cal U$. Counting arguments will show you that this ultrapower has cardinality $2^{\aleph_0}$, so it is certainly not isomorphic to $\Bbb N$. If you prefer, you can use the fact that $\mathcal U$ is not a countably-complete ultrafilter, and therefore the ultrapower cannot be well-ordered, so without checking for cardinality it cannot be isomorphic to $\Bbb N$.</p></li>
<li><p>Incompleteness. We know that Peano is not a complete theory. Therefore there are statements which are true in $\Bbb N$, but Peano does not prove. Therefore the negation of such statement is consistent with the rest of the axioms of Peano, and must have a model. But this model cannot be isomorphic to $\Bbb N$. The benefit of this method is that it allows you to obtain very different theories of your models, whereas ultrapowers and compactness arguments tend to result in elementarily equivalent models.</p></li>
</ol>
| <p>It should be mentioned that one of the most concrete nonstandard models of PA was developed by Skolem in the 1930s in ZF (without the axiom of choice, unlike the constructions mentioned in the other <em>answer</em>). This is roughly in terms of definable functions on $\mathbb N$ ordered by their asymptotic growth; for details see <a href="http://dx.doi.org/10.1007/s10699-012-9316-5" rel="noreferrer">this 2013 publication in <em>Foundations of Science</em></a>, Section 3.2, page 272.</p>
|
differentiation | <p>Are there any approaches that allow to find a derivative of the <a href="http://mathworld.wolfram.com/MeijerG-Function.html">Meijer G-function</a> with respect to one of its parameters in a closed form (or at least numerically with a high precision and in reasonable time, with all found digits provably correct)? I am particularly interested in this case:
$$\mathcal{D}=\left.\partial_\alpha G_{2,3}^{2,1}\left(1\middle|\begin{array}c1,\alpha\\1,1,0\end{array}\right)\right|_{\alpha=1}$$</p>
| <p>Yes, it is possible in some cases. For example,
$$\begin{align}\mathcal{D}&={_2F_2}\left(\begin{array}c1,1\\2,2\end{array}\middle|-1\right)\\&=\gamma-\operatorname{Ei}(-1),\end{align}$$
where ${_pF_q}$ is the <a href="http://en.wikipedia.org/wiki/Generalized_hypergeometric_function">generalized hypergeometric function</a>, $\gamma$ is the <a href="http://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant">Euler–Mascheroni constant</a>, and $\operatorname{Ei}(z)$ is the <a href="http://en.wikipedia.org/wiki/Exponential_integral">exponential integral</a>. In case you need a numeric value,
$$\mathcal{D}\approx0.7965995992970531342836758655425240800732066293468318063837458...$$</p>
| <p>To prove the result stated in @Cleo's answer, we can use the <a href="https://functions.wolfram.com/HypergeometricFunctions/MeijerG/02/" rel="nofollow noreferrer">definition</a> of the Meijer G-function:
<span class="math-container">\begin{equation}
G_{2,3}^{2,1}\left(1\middle|\begin{array}c1,\alpha\\1,1,0\end{array}\right)=\frac{1}{2i\pi}\int_\mathcal L\frac{[\Gamma(1+s)]^2\Gamma(-s)}{\Gamma(\alpha+s)\Gamma(1-s)}\,ds
\end{equation}</span>
Here (case (i) of the above definition), <span class="math-container">$\mathcal L$</span> can be a straight line <span class="math-container">$(\gamma-i\infty,\gamma+i\infty)$</span> with <span class="math-container">$-1<\gamma<0$</span>.</p>
<p>The above expression can be simplified by the Gamma functional relation to express
<span class="math-container">\begin{equation}
G_{2,3}^{2,1}\left(1\middle|\begin{array}c1,\alpha\\1,1,0\end{array}\right)=-\frac{1}{2i\pi}\int_\mathcal L\frac{[\Gamma(1+s)]^2}{\Gamma(\alpha+s)}\,\frac{ds}{s}
\end{equation}</span>
By differentiation (assuming the validity),
<span class="math-container">\begin{equation}
\left.\partial_\alpha G_{2,3}^{2,1}\left(1\middle|\begin{array}c1,\alpha\\1,1,0\end{array}\right)\right|_{\alpha=1}=\frac{1}{2i\pi}\int_\mathcal L\Psi(s+1)\Gamma(s+1)\,\frac{ds}{s}
\end{equation}</span>
The poles of the integrand are situated at <span class="math-container">$s=0$</span> and at <span class="math-container">$s=-n-1$</span> with <span class="math-container">$n=0,1,2,\cdots$</span>; since <span class="math-container">$-1<\gamma<0$</span>, only the poles <span class="math-container">$s=-n-1$</span> lie to the left of the contour <span class="math-container">$\mathcal L$</span>.
The residue at the pole <span class="math-container">$s=-n-1$</span> is <span class="math-container">$\frac{(-1)^n}{(n+1)\Gamma(n+2)}$</span>. To evaluate the integral, we close the contour on the left side (the contribution of the left half-circle being asymptotically vanishing) to express
<span class="math-container">\begin{align}
\left.\partial_\alpha G_{2,3}^{2,1}\left(1\middle|\begin{array}c1,\alpha\\1,1,0\end{array}\right)\right|_{\alpha=1}&=\sum_{n=0}^\infty\frac{(-1)^n}{(n+1)\Gamma(n+2)}\\
&=-\sum_{k=1}^\infty\frac{(-1)^k}{k\,k!}
\end{align}</span>
By comparison with the series for the <a href="https://en.wikipedia.org/wiki/Exponential_integral#Convergent_series" rel="nofollow noreferrer">exponential integral</a>
<span class="math-container">\begin{align}
\left.\partial_\alpha G_{2,3}^{2,1}\left(1\middle|\begin{array}c1,\alpha\\1,1,0\end{array}\right)\right|_{\alpha=1}&=\gamma+E_1(1)\\
&=\gamma-\operatorname{Ei}(-1)
\end{align}</span></p>
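<p>As a numerical sanity check, the derivative can also be computed directly with Python's <code>mpmath</code> (assuming its <code>meijerg</code> routine converges at <span class="math-container">$z=1$</span> for these parameters):</p>
<pre><code>from mpmath import mp, meijerg, diff, euler, ei

mp.dps = 25
G = lambda a: meijerg([[1], [a]], [[1, 1], [0]], 1)

print(diff(G, 1))      # numerical derivative with respect to alpha at 1
print(euler - ei(-1))  # 0.7965995992970531...
</code></pre>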
|
logic | <p>a) There are mathematical statements, e.g. ones formulated in Peano arithmetic, which are known to be true but not provable. If they are not provable, how do we "prove" they are true? Is that not fishy? (I think Gödel does not address this exactly in this form. He says there is something which is true, but he does not say that we can know it is true.)</p>
<p>b) Suppose I add to Peano all those statements which can in principle be known to be true. Will I then have a statement which can be known to be true but not provable? I think Gödel does not address this exactly in this form. He says there will be something which is true, but he does not say that we can know it is true.</p>
| <p>The statements of Gödel's theorems are about (for simplicity) a certain formal theory, namely $PA$, known as Peano's arithmetic (actually they're more general but I'll stick to that). This theory contains axioms, such as $\forall x \forall y, x\times s(y) = x\times y + x$ and many others. </p>
<p>Now there is also a formal system that allows one to deduce theorems from these axioms; one such theorem would be $\forall x \forall y, x\times y = y\times x$. </p>
<p>There are also sentences $\phi$ that can be expressed in this language that cannot be proved or refuted in this theory. This is somewhat to be expected, indeed if I have no axioms for instance, then clearly I can't prove much beyond logical tautologies (although it's not so easy to see what one can prove without axioms, but that's another question), so we can expect that with too few axioms, some things are left undecided (why should we have found all the right axioms ?)</p>
<p>However we also have a "model" for these axioms, that is in a sense a universe in which these are true. Such a "universe" is $\mathbb{N}$. In this universe all axioms in $PA$ are true, and therefore all theorems of $PA$ are true as well. However, a statement $\phi$ that cannot be proved or refuted in $PA$ <em>has</em> a truth value in $\mathbb{N}$ : it is either true or false (which is not to be mistaken with "either provable or refutable"). The sentences that are true in $\mathbb{N}$ are sometimes called statements of "true arithmetic". </p>
<p>Since we work in a much more powerful theory than $PA$ (namely ZF) we can prove things about $\mathbb{N}$ that go beyond the theorems of $PA$. Obviously what we prove can't contradict the theorems of $PA$, but we can prove things that $PA$ can't. In particular it is not surprising that we can decide sentences that $PA$ can't: Gödel's first incompleteness theorem says that this is the case; there is a statement $\phi$ that is part of true arithmetic (it is true in $\mathbb{N}$, and for vulgarization purposes one may say it is <em>true</em>) but it is not provable from $PA$. In short, there are true but unprovable sentences. </p>
<p>Now if you add to Peano all statements of true arithmetic you obtain... true arithmetic ! Since the Peano axioms are true in $\mathbb{N}$, they are part of true arithmetic, so true arithmetic + $PA$ = true arithmetic. However, as each sentence is either true or false in $\mathbb{N}$, it means that this new theory (sometimes written $Th(\mathbb{N})$ for "theory of $\mathbb{N}$") decides every statement: every statement is provable or refutable from this, so there will be no more "true but unprovable statements". </p>
<p>This seems to contradict Gödel's theorem, but actually it doesn't, since $Th(\mathbb{N})$ doesn't satisfy the hypotheses of this theorem: indeed, it is not <em>recursively axiomatizable</em>, which means there is no algorithm that can decide whether a given sentence $\phi$ is an axiom. So it's a pretty "lousy" theory in the sense that we can't use it, contrary to $PA$. Gödel's theorem doesn't say that any theory about the integers has true but unprovable statements; it says that any such usable theory has true but unprovable statements.</p>
<p>Hope this makes things clearer</p>
| <p>Intuitively: consider the statement </p>
<blockquote>
<p>''this statement is not provable in the theory $T$''.</p>
</blockquote>
<p>This statement is true if it is not provable in $T$. </p>
<p>Gödel formally proved that such a statement can be expressed, and that neither it nor its negation is provable, in any suitable theory that contains Peano arithmetic. </p>
<p>And adding the statement as a new axiom gives a new theory in which we can define a new unprovable statement of the same kind.</p>
|
linear-algebra | <p>I am trying to understand how - exactly - I go about projecting a vector onto a subspace.</p>
<p>Now, I know enough about linear algebra to know about projections, dot products, spans, etc etc, so I am not sure if I am reading too much into this, or if this is something that I have missed.</p>
<p>For a class I am taking, the prof is saying that we take a vector, and 'simply project it onto a subspace' (where that subspace is formed from a set of orthogonal basis vectors).</p>
<p>Now, I know that a subspace is really, at the end of the day, just a set of vectors. (That satisfy properties <a href="http://en.wikipedia.org/wiki/Projection_%28linear_algebra%29">here</a>). I get that part - that its this set of vectors. So, how do I "project a vector on this subspace"?</p>
<p>Am I projecting my one vector (let's call it a[n]) onto ALL the vectors in this subspace? (What if there is an infinite number of them?)</p>
<p>For further context, the prof was saying that, say we found a set of basis vectors for a signal (let's call them b[n] and c[n]), then we would project a[n] onto its <a href="http://en.wikipedia.org/wiki/Signal_subspace">signal subspace</a>. We project a[n] onto the signal-subspace formed by b[n] and c[n]. Well, how is this done exactly?..</p>
<p>Thanks in advance, let me know if I can clarify anything!</p>
<p>P.S. I appreciate your help, and I would really like for the clarification to this problem to be somewhat 'concrete' - for example, something that I can show for myself over MATLAB. Analogues using 2-D or 3-D space so that I can visualize what is going on would be very much appreciated as well. </p>
<p>Thanks again.</p>
| <p>I will talk about orthogonal projection here.</p>
<p>When one projects a vector, say $v$, onto a subspace, you find the vector in the subspace which is "closest" to $v$. The simplest case is of course if $v$ is already in the subspace, then the projection of $v$ onto the subspace is $v$ itself.</p>
<p>Now, the simplest kind of subspace is a one dimensional subspace, say the subspace is $U = \operatorname{span}(u)$. Given an arbitrary vector $v$ not in $U$, we can project it onto $U$ by
$$v_{\| U} = \frac{\langle v , u \rangle}{\langle u , u \rangle} u$$
which will be a vector in $U$. There will be other vectors besides $v$ that have the same projection onto $U$.</p>
<p>Now, let's assume $U = \operatorname{span}(u_1, u_2, \dots, u_k)$ and, since you said so in your question, assume that the $u_i$ are orthogonal. For a vector $v$, you can project $v$ onto $U$ by
$$v_{\| U} = \sum_{i =1}^k \frac{\langle v, u_i\rangle}{\langle u_i, u_i \rangle} u_i = \frac{\langle v , u_1 \rangle}{\langle u_1 , u_1 \rangle} u_1 + \dots + \frac{\langle v , u_k \rangle}{\langle u_k , u_k \rangle} u_k.$$</p>
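<p>As a concrete sketch of this formula (in <em>Mathematica</em> here, though the same three lines translate directly to MATLAB; the vectors <code>u1</code>, <code>u2</code>, <code>v</code> are made-up examples):</p>

<pre><code>(* project v onto span(u1, u2), where u1 and u2 are orthogonal *)
u1 = {1, 1, 0};
u2 = {1, -1, 0}; (* u1.u2 == 0 *)
v = {3, 1, 5};
proj = (v.u1)/(u1.u1) u1 + (v.u2)/(u2.u2) u2
(* gives {3, 1, 0}; the leftover {0, 0, 5} is orthogonal to the subspace *)
</code></pre>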
| <p>Take a basis $\{v_1, \dots, v_n\}$ for the "signal subspace" $V$. Let's assume $V$ is finite dimensional for simplicity and practical purposes, but you can generalize to infinite dimensions. Let's also assume the basis is orthonormal.</p>
<p>The projection of your signal $f$ onto the subspace $V$ is just</p>
<p>$$\mathrm{proj}_V(f) = \sum_{i=1}^n \langle f, v_i \rangle v_i$$</p>
<p>and $f = \mathrm{proj}_V(f) + R(f)$, where $R(f)$ is the remainder, or orthogonal complement, which will be 0 if $f$ lies in the subspace $V$. </p>
<p>The $i$-th term of the sum, $\langle f, v_i\rangle$, is the projection of $f$ onto the subspace spanned by the $i$-th basis vector. (Note, if the $v_i$ are orthogonal, but not necessarily orthonormal, you must divide the $i$-th term by $\|v_i\|^2$.)</p>
|
geometry | <p>This is something that always annoys me when putting an A4 letter in a oblong envelope: one has to estimate where to put the creases when folding the letter. I normally start from the bottom and on eye estimate where to fold. Then I turn the letter over and fold bottom to top. Most of the time ending up with three different areas. There must be a way to do this exactly, without using any tools (ruler, etc.).</p>
| <p>Fold twice to obtain quarter markings at the paper bottom.
Fold along the line through the top corner and the third of these marks.
The vertical lines through the first two marks intersect this inclined line at thirds, which allows the final foldings.</p>
<p>(Photo by Ross Millikan below - if the image helped you, you can up-vote his too...)
<img src="https://i.sstatic.net/LZr8b.jpg" alt="Graphical representation of the folds"></p>
| <p>Here is a picture to go with Hagen von Eitzen's answer. The horizontal lines are the result of the first two folds. The diagonal line is the third fold. The heavy lines are the points at thirds for folding into the envelope.</p>
<p><img src="https://i.sstatic.net/LZr8b.jpg" alt="enter image description here"></p>
|
probability | <p>I'm struggling with the concept of conditional expectation. First of all, if you have a link to any explanation that goes beyond showing that it is a generalization of elementary intuitive concepts, please let me know. </p>
<p>Let me get more specific. Let <span class="math-container">$\left(\Omega,\mathcal{A},P\right)$</span> be a probability space and <span class="math-container">$X$</span> an integrable real random variable defined on <span class="math-container">$(\Omega,\mathcal{A},P)$</span>. Let <span class="math-container">$\mathcal{F}$</span> be a sub-<span class="math-container">$\sigma$</span>-algebra of <span class="math-container">$\mathcal{A}$</span>. Then <span class="math-container">$E[X|\mathcal{F}]$</span> is the a.s. unique random variable <span class="math-container">$Y$</span> such that <span class="math-container">$Y$</span> is <span class="math-container">$\mathcal{F}$</span>-measurable and for any <span class="math-container">$A\in\mathcal{F}$</span>, <span class="math-container">$E\left[X1_A\right]=E\left[Y1_A\right]$</span>.</p>
<p>The common interpretation seems to be: "<span class="math-container">$E[X|\mathcal{F}]$</span> is the expectation of <span class="math-container">$X$</span> given the information of <span class="math-container">$\mathcal{F}$</span>." I'm finding it hard to get any meaning from this sentence.</p>
<ol>
<li><p>In elementary probability theory, expectation is a real number. So the sentence above makes me think of a real number instead of a random variable. This is reinforced by <span class="math-container">$E[X|\mathcal{F}]$</span> sometimes being called "conditional expected value". Is there some canonical way of getting real numbers out of <span class="math-container">$E[X|\mathcal{F}]$</span> that can be interpreted as elementary expected values of something?</p></li>
<li><p>In what way does <span class="math-container">$\mathcal{F}$</span> provide information? To know that some event occurred, is something I would call information, and I have a clear picture of conditional expectation in this case. To me <span class="math-container">$\mathcal{F}$</span> is not a piece of information, but rather a "complete" set of pieces of information one could possibly acquire in some way. </p></li>
</ol>
<p>Maybe you say there is no real intuition behind this, <span class="math-container">$E[X|\mathcal{F}]$</span> is just what the definition says it is. But then, how does one see that a martingale is a model of a fair game? Surely, there must be some intuition behind that!</p>
<p>I hope you have got some impression of my misconceptions and can rectify them.</p>
| <p>Maybe this simple example will help. I use it when I teach
conditional expectation. </p>
<p>(1) The first step is to think of ${\mathbb E}(X)$ in a new way:
as the best estimate for the value of a random variable $X$ in the absence of any information.
To minimize the squared error
$${\mathbb E}[(X-e)^2]={\mathbb E}[X^2-2eX+e^2]={\mathbb E}(X^2)-2e{\mathbb E}(X)+e^2,$$
we differentiate to obtain $2e-2{\mathbb E}(X)$, which is zero at $e={\mathbb E}(X)$.</p>
<p>For example, if I throw a fair die and you have to
estimate its value $X$, according to the analysis above, your best bet is to guess ${\mathbb E}(X)=3.5$.
On specific rolls of the die, this will be an over-estimate or an under-estimate, but in the long run it minimizes the mean square error. </p>
<p>(2) What happens if you <em>do</em> have additional information?
Suppose that I tell you that $X$ is an even number.
How should you modify your estimate to take this new information into account?</p>
<p>The mental process may go something like this: "Hmmm, the possible values <em>were</em> $\lbrace 1,2,3,4,5,6\rbrace$
but we have eliminated $1,3$ and $5$, so the remaining possibilities are $\lbrace 2,4,6\rbrace$.
Since I have no other information, they should be considered equally likely and hence the revised expectation is $(2+4+6)/3=4$".</p>
<p>Similarly, if I were to tell you that $X$ is odd, your revised (conditional) expectation is 3. </p>
<p>(3) Now imagine that I will roll the die and I will tell you the parity of $X$; that is, I will
tell you whether the die comes up odd or even. You should now see that a single numerical response
cannot cover both cases. You would respond "3" if I tell you "$X$ is odd", while you would respond "4" if I tell you "$X$ is even".
A single numerical response is not enough because the particular piece of information that I will give you is <strong>itself random</strong>.
In fact, your response is necessarily a function of this particular piece of information.
Mathematically, this is reflected in the requirement that ${\mathbb E}(X\ |\ {\cal F})$ must be $\cal F$ measurable. </p>
<p>I think this covers point 1 in your question, and tells you why a single real number is not sufficient.
Also concerning point 2, you are correct in saying that the role of $\cal F$ in ${\mathbb E}(X\ |\ {\cal F})$
is not a single piece of information, but rather tells what possible specific pieces of (random) information may occur. </p>
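<p>A tiny <em>Mathematica</em> sketch of step (2) (the helper name <code>condExp</code> is made up):</p>

<pre><code>(* conditional expectation of a fair die given its parity *)
die = Range[6];
condExp[parity_] := Mean[Select[die, Mod[#, 2] == parity &]]
{condExp[1], condExp[0]} (* {3, 4}: one revised estimate per possible answer *)
</code></pre>

<p>That the output is a pair, one value for each possible answer to the parity question, is exactly the point of (3): the conditional expectation is a function of the (random) information, not a single number.</p>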
| <p>I think a good way to answer question 2 is as follows.</p>
<p>I am performing an experiment, whose outcome can be described by an element $\omega$ of some set $\Omega$. I am not going to tell you the outcome, but I will allow you to ask certain questions yes/no questions about it. (This is like "20 questions", but infinite sequences of questions will be allowed, so it's really "$\aleph_0$ questions".) We can associate a yes/no question with the set $A \subset \Omega$ of outcomes for which the answer is "yes". </p>
<p>Now, one way to describe some collection of "information" is to consider all the questions which could be answered with that information. (For example, the 2010 Encyclopedia Britannica is a collection of information; it can answer the questions "Is the dodo extinct?" and "Is the elephant extinct?" but not the question "Did Justin Bieber win a 2011 Grammy?") This, then, would be a set $\mathcal{F} \subset 2^\Omega$.</p>
<p>If I know the answer to a question $A$, then I also know the answer to its negation, which corresponds to the set $A^c$ (e.g. "Is the dodo not-extinct?"). So any information that is enough to answer question $A$ is also enough to answer question $A^c$. Thus $\mathcal{F}$ should be closed under taking complements. Likewise, if I know the answer to questions $A,B$, I also know the answer to their disjunction $A \cup B$ ("Are either the dodo or the elephant extinct?"), so $\mathcal{F}$ must also be closed under (finite) unions. Countable unions require more of a stretch, but imagine asking an infinite sequence of questions "converging" on a final question. ("Can elephants live to be 90? Can they live to be 99? Can they live to be 99.9?" In the end, I know whether elephants can live to be 100.)</p>
<p>I think this gives some insight into why a $\sigma$-field can be thought of as a collection of information.</p>
|
probability | <p>$N$ points are selected in a uniformly distributed random way in a disk of a unit radius. Let $L(N)$ denote the <a href="http://en.wikipedia.org/wiki/Expected_value">expected</a> length of the shortest <a href="http://en.wikipedia.org/wiki/Polygonal_chain">polygonal path</a> that visits each of the points at least once (the path need not be closed and may be self-intersecting).</p>
<ul>
<li>For what $N$ do we know the exact value of $L(N)$?</li>
<li>Is there a general formula for $L(N)$?</li>
<li>What is the asymptotic behavior of $L(N)$ as $N\to\infty$?</li>
<li>What are the answers to previous questions, if the disk is replaced with a ball?</li>
</ul>
| <p>Given: $(X,Y)$ is Uniformly distributed on a disc of unit radius. Then, the joint distribution of $n$ such points, $((X_1,Y_1), ..., (X_n,Y_n))$, each drawn independently, will have joint pdf:</p>
<p>$$f( (x_1,y_1), ..., (x_n,y_n) ) = \begin{cases}\pi^{-n} & \text{if } x_1^2 + y_1^2 < 1 \text{ and } \cdots \text{ and } x_n^2 + y_n^2 < 1 \\ 0 & \text{otherwise} \end{cases}$$ </p>
<p>Next, let $Z_n$ denote the Euclidean length of the shortest polygonal path that visits each point (at least) once. For any given $n$, it is possible to express $Z_n$ as an exact symbolic construct. This rapidly gets tiresome to do manually, so I have set up a function <code>PolygonPathMinDistance[points]</code> to automate same (I have provided the code at the bottom of this post). For example:</p>
<ul>
<li>Case $n = 2$:<br>
With 2 points, the shortest polygonal path $Z_2$ is, of course, ... :</li>
</ul>
<p><a href="https://i.sstatic.net/Ejl9g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ejl9g.png" alt="enter image description here"></a></p>
<p>and so the desired exact solution is: </p>
<p>$$E[Z_2] = E\left[\sqrt{\left(X_1-X_2\right){}^2+\left(Y_1-Y_2\right){}^2}\right] \approx 0.9054$$</p>
<p>where the random variables $(X_1, Y_1, X_2, Y_2)$ have joint pdf $f(\cdots)$ given above.</p>
<p>Although getting a closed form solution for the expectation integral has thus far proved elusive (perhaps transforming to polar coordinates might be a path worth pursuing - see below), the result is just a number, and we can find that number to arbitrary precision using numerical integration rather than symbolic integration. Doing so yields the result that: $E[Z_2] = 0.9054 ...$. </p>
<ul>
<li>Case $n = 3$:<br>
With 3 points, the shortest polygonal path $Z_3$ is:</li>
</ul>
<p><a href="https://i.sstatic.net/gqNKT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gqNKT.png" alt="enter image description here"></a></p>
<p>and similarly obtain: </p>
<p>$$E[Z_3] \approx 1.49 ...$$</p>
<ul>
<li>Case $n = 4$:<br>
With 4 points, the shortest polygonal path $Z_4$ is:</li>
</ul>
<p><a href="https://i.sstatic.net/G5Llf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/G5Llf.png" alt="enter image description here"></a></p>
<p>and similarly obtain: </p>
<p>$$E[Z_4] \approx 1.96 ...$$</p>
<p>... and so on. For the cases $n \ge 5$, the method still works fine, but numerical integration gets less reliable, and other methods, such as Monte Carlo simulation [<em>i.e.</em> actually generating $n$ pseudorandom points in the disc (say 100,000 times), evaluating the actual shortest path for each and every $n$-dimensional drawing, and calculating the sample mean] appear to work more reliably. The latter is, of course, no longer an exact methodology, but for larger $n$ it appears to become the more practical option.</p>
<p><strong>Summary Plots and Fitting</strong>
$$\color{red}{\text{Expected shortest polygon path connecting $n$ random points in a disc of unit radius }}$$</p>
<p><a href="https://i.sstatic.net/Ge0Lp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ge0Lp.png" alt="enter image description here"></a></p>
<p>... and with $n=50, 100$ and $150$ points added:</p>
<p><a href="https://i.sstatic.net/R8vbi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R8vbi.png" alt="enter image description here"></a></p>
<p>The fitted model here is simply:</p>
<p>$$a + b \frac{n-1}{\sqrt{n}}$$</p>
<p>with $a = -0.11$, $b = 1.388$.</p>
<p><strong>Fitting curves</strong></p>
<p>A number of 'natural' contenders present themselves, including models of form:</p>
<ul>
<li><p>$a + b n + c \sqrt{n}$ ... this performs neatly within sample, but as soon as one extends to larger values of $n$, the model performs poorly out of sample, and needs to be re-fitted given the extra data points. This suggests that the true model is not just square root $n$. </p></li>
<li><p>$a + b n^c$ ... this works very neatly within sample, and outside sample, but the optimal parameter values show some instability as more data values are added.</p></li>
<li><p>$a + b \frac{n-1}{\sqrt{n}}$ ... just 2 parameters, and works absolutely beautifully. The idea for this model was inspired by Ju'x posting. The parameter values are remarkably stable as more data points are added (they remain constant to 2 decimal places, irrespective of whether we use just $n = 2 ... 20$, or $n = 50$ is added, or $n=100$ is added, all the way up to $n = 300$), as shown here:</p></li>
</ul>
<p><a href="https://i.sstatic.net/pDX8j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pDX8j.png" alt="enter image description here"></a></p>
<p>The parameter values for $a$ and $b$ are the same as for the diagram above.</p>
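<p>The fitting step itself is a one-liner (a sketch: the data pairs below are just the $n=2,3,4$ estimates quoted above, whereas the actual fit used many more points):</p>

<pre><code>(* fit the 2-parameter model a + b (n - 1)/Sqrt[n] *)
data = {{2, 0.9054}, {3, 1.49}, {4, 1.96}};
FindFit[data, a + b (n - 1)/Sqrt[n], {a, b}, n]
</code></pre>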
<hr>
<p><strong>Code for the <code>PolygonPathMinDistance</code> function</strong></p>
<p>Here is some <em>Mathematica</em> code for the exact <code>PolygonPathMinDistance</code> function that calculates all of the possible relevant permutations involved in calculating the shortest polygon path, as an exact symbolic/algebraic construct:</p>
<pre><code>(* exact shortest polygonal path through the given points, by brute force
   over all orderings (a path and its reverse are counted once) *)
PolygonPathMinDistance[points_] := Module[{orderings, pointorderings, pathsToCheck, ww},
  orderings = Union[Permutations[Range[Length[points]]],
    SameTest -> (#1 == Reverse[#2] &)];
  pointorderings = Map[points[[#]] &, orderings];
  (* consecutive pairs of points along each candidate path *)
  pathsToCheck = Map[Partition[#, 2, 1] &, pointorderings];
  (* total Euclidean length of each path; the rule drops Abs for symbolic input *)
  Min[Map[Total[Map[EuclideanDistance @@ # &, #1] /. Abs[ww_] -> ww] &, pathsToCheck]]
]
</code></pre>
<p>. </p>
<p><strong>Alternative specification of problem in Polar Coordinates</strong></p>
<p>If points are Uniformly distributed on a disc of unit radius and independent, the joint distribution of $n$ points, $((X_1,Y_1), ..., (X_n,Y_n))$, with joint pdf $f(\cdots)$ is given above. Alternatively, consider the transformation to polar coordinates, $(X_i \to R_i \cos(\Theta_i)$, $Y_i \to R_i \sin(\Theta_i))$, for $i = 1$ to $n$. Then, the joint pdf is:</p>
<p>$$f( (r_1,\theta_1), ..., (r_n,\theta_n) ) = \frac{r_1 \times r_2 \times \cdots \times r_n}{\pi^{n}}$$</p>
<p>with domain of support, $R_i=r_i \in \{r_i:0<r_i<1\}$ and $\Theta_i=\theta_i \in \{\theta_i:0<\theta_i<2 \pi\}$. Then, the distance between any two random points, say $(X_i,Y_i)$ and $(X_j,Y_j)$ is given by the distance formula for polar coordinates:</p>
<p>$$\sqrt{\left(X_j-X_i\right){}^2+\left(Y_j-Y_i\right){}^2} = \sqrt{R_i^2 + R_j^2 -2 R_i R_j \cos \left(\Theta_i-\Theta_j\right)}$$</p>
<p>Taking expectations then yields the same results as above. For the $n = 2$ case, symbolic integration is a little easier, and one can obtain the result: </p>
<p>$$\begin{align*}\displaystyle E\left[ Z_2\right] &= E\left[ \sqrt{R_1^2 + R_2^2 -2 R_1 R_2 \cos \left(\Theta _1-\Theta _2\right)}\right]
\\ &= \frac{8}{\pi} \int_0^1\int_0^1 |r_1 - r_2| EllipticE[\frac{-4 r_1 r_2}{(r_1-r_2)^2}]r_1 r_2 dr_1\,dr_2 \\ &= \frac{8}{\pi} \frac{16}{45} \approx 0.9054 \text{(as above)}\end{align*}$$</p>
<p><strong>Extending from a disc to a ball</strong></p>
<p>There is no inherent reason the same methods cannot be extended to a ball ... just more dimensions (and more computational grunt required). </p>
| <p>There is a very elegant theorem published by Beardwood, Halton and Hammersley in 1959 that gives an almost-sure convergence for the length of the shortest path divided by $\sqrt{N}$. See this <a href="http://www.theoremoftheday.org/OR/BHH/TotDBHH.pdf" rel="noreferrer">poster</a> for a statement in a general setting.</p>
<p>Also, the convergence holds for expectations. In 1989, Rhee and Talagrand showed that the length of the shortest path is very tightly concentrated around its mean.</p>
<hr>
<p>Let us give a very elementary proof that there exist constants $c,C > 0$ such that
$$
c \sqrt{N} \leq L(N) \leq C \sqrt{N}.
$$
In order to simplify the presentation I will do the proof for a unit square instead of a disk, but it should work just the same.</p>
<p><strong>1. Upper bound (deterministic)</strong></p>
<p>Take the unit square to simplify and split it into $N$ small boxes of area $1/N$ and side length $\frac{1}{\sqrt{N}}$, according to the following figure (to be rigorous this is only possible if $N$ is a square, but by monotonicity it is sufficient to treat only this case).</p>
<p><img src="https://i.sstatic.net/HbwdD.png" alt="enter image description here"></p>
<p>In each box, consider an arbitrary polygonal line that joins each of the points contained in the box. Its length is at most $\sqrt{2/N}$ times the number of points in the box minus one.</p>
<p>Now link the small chains together (for instance, follow the red path): this is done by adding at most $N-1$ lines of length at most $\sqrt{\frac{5}{N}}$. Thus this procedure gives a full polygonal path that visits each of the points. If $B_i$ denotes the number of points in the $i$th box, the length of the total path is at most
$$
\sum_{i=1}^N \sqrt{\frac{2}{N}}(B_i-1)_+ + (N-1)\times \sqrt{\frac{5}{N}} \leq (\sqrt{2}+\sqrt{5})\times \frac{N-1}{\sqrt{N}}
$$
since $\sum_{i=1}^N B_i = N$. This gives an upper bound for the length of the shortest path, hence for its expectation. Of course, the constant $\sqrt{2}+\sqrt{5}$ is far from optimal and can easily be improved.</p>
<p><strong>2. Lower bound (probabilistic)</strong></p>
<p>Let us scale the $N$ previous boxes by a factor $\frac{1}{2}$, so that the area of each of them is now $\frac{1}{4N}$.</p>
<p><img src="https://i.sstatic.net/t6GaV.png" alt="Smaller boxes"></p>
<p>As before, let $B_i$ denote the number of points in the $i$th box. It follows a Binomial$(N,\frac{1}{4N})$ distribution. The shortest path should at least link together the nonempty boxes, and the cost of these links is at least (thanks to the triangle inequality) $\frac{1}{2\sqrt{N}}$ for each nonempty box except the first one. Using the linearity of expectations we see that $L(N)$ is bounded from below by
$$
\mathbb{E}\left[\frac{1}{2\sqrt{N}}\times \left(\sum_{i=1}^N \mathbf{1}_{B_i \neq 0}-1\right)\right] \geq \frac{\sqrt{N}}{2}\Pr(B_1 \neq 0) - \frac{1}{2\sqrt{N}}.
$$
Note finally that
$$
\Pr(B_i \neq 0) = 1-\left(1-\frac{1}{4N}\right)^N \geq 1-e^{-\frac{1}{4}} > 0.
$$</p>
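<p>One can also check the $\sqrt{N}$ growth empirically. A rough <em>Mathematica</em> sketch (the built-in <code>FindShortestTour</code> returns a <em>closed</em> tour, hence only an upper bound for the open path, but the scaling is the same):</p>

<pre><code>(* the ratios length/Sqrt[n] should come out roughly constant *)
lengths = Table[
   Mean[Table[First[FindShortestTour[RandomReal[1, {n, 2}]]], {20}]],
   {n, {25, 100, 400}}];
lengths/Sqrt[{25., 100., 400.}]
</code></pre>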
<p><strong>3. Extension to dimension $d\geq 3$</strong> : The proof readily extends to $c\cdot N^{\frac{d-1}{d}} \leq L(N) \leq C\cdot N^{\frac{d-1}{d}}$</p>
|
geometry | <p>Suppose I draw <span class="math-container">$20$</span> circles in the plane, all passing through the origin, but no two tangent at the origin. Also, except for the origin, no three circles pass through a common point. How many regions are created in the plane? </p>
<p><a href="https://i.sstatic.net/Izr5Q.png" rel="noreferrer"><img src="https://i.sstatic.net/Izr5Q.png" alt="enter image description here"></a></p>
| <p>A hint:</p>
<p>Move the origin to $\infty$ using the map $z\mapsto{1\over z}$. Then the circles become lines, no two of them parallel, and no three of them going through the same point. </p>
<p>Denote the number of regions created by $n$ lines by $a_n$, and find a recursion for the $a_n$.</p>
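<p>(Working out the hint, for completeness: the $n$-th line crosses the previous $n-1$ lines in $n-1$ points, which cut it into $n$ pieces, and each piece splits one region in two. Hence $a_n = a_{n-1} + n$ with $a_0 = 1$, so $a_n = 1 + \frac{n(n+1)}{2}$, and $n=20$ gives $211$ regions, matching the count in the other answer.)</p>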
| <p>Imagine the drawing of your $n$ circles as a plane graph. The intersections of the circles are the vertices, the arcs between the intersections are the edges. You have one central vertex of degree $2n$ and $\frac12 n(n-1)$ other vertices of degree $4$ (here we use the condition that no two circles are tangent at the origin and that, away from the origin, no three circles pass through a common point). Now we have
$$v=\frac12n(n-1)+1$$
vertices and
$$e=\frac12 \sum_i{\mathrm{degree}(v_i)} = \frac12\left(\frac12 n(n-1)\cdot 4+2n\right)=n^2$$
edges. Using Euler's polyhedral formula gives
$f=e-v+2=\frac12n(n+1)+1$ faces. For $n=20$ you got $211$ faces.</p>
|
number-theory | <p>Mathematica knows that the logarithm of $n$ is:</p>
<p>$$\log(n) = \lim\limits_{s \rightarrow 1} \zeta(s)\left(1 - \frac{1}{n^{(s - 1)}}\right)$$</p>
<p>The von Mangoldt function should then be:</p>
<p>$$\Lambda(n)=\lim\limits_{s \rightarrow 1} \zeta(s)\sum\limits_{d|n} \frac{\mu(d)}{d^{(s-1)}}.$$</p>
<p>Setting the first term of the von Mangoldt function $\Lambda(1)$ equal to the harmonic number $H_{\operatorname{scale}}$ where scale is equal to the scale of the Fourier transform matrix, one can calculate the Fourier transform of the von Mangoldt function with the Mathematica program at the link below:</p>
<p><a href="http://pastebin.com/ZNNYZ4F6" rel="noreferrer">http://pastebin.com/ZNNYZ4F6</a></p>
<p>In the program I studied the function within the limit for the von Mangoldt function, and made some small changes to the function itself:</p>
<p>$f(t)=\sum\limits_{n=1}^{n=k} \frac{1}{\log(k)} \frac{1}{n} \zeta(1/2+i \cdot t)\sum\limits_{d|n} \frac{\mu(d)}{d^{(1/2+i \cdot t-1)}}$
as $k$ goes to infinity.</p>
<p>(Edit 20.9.2013: The function $f(t)$ had "-1" in the argument for the zeta function.)</p>
<p>The plot of this function looks like this:</p>
<p><img src="https://i.sstatic.net/TAwEf.png" alt="function"> </p>
<p>While the plot of the Fourier transform of the von Mangoldt function with the program looks like this:</p>
<p><img src="https://i.sstatic.net/REY7h.png" alt="Fourier transform"></p>
<p>There are some similarities, but the Fourier transform converges faster towards smaller oscillations in between the spikes at the zeta zeros, and the scale factor is wrong.</p>
<p>Will the function $f(t)$ above eventually converge to the Fourier transform of the von Mangoldt function, or is it only yet another meaningless plot?</p>
<p>Now when I look at it, I think the spikes at the zeros come from the zeta function itself and the spectrum-like feature comes from the Möbius function which inverts the zeta function.</p>
<p>In the Fourier transform the von Mangoldt function has this form:</p>
<p>$$\log (\text{scale}) ,\log (2),\log (3),\log (2),\log (5),0,\log (7),\log (2),\log (3),0,\log (11),0,...,\Lambda(\text{scale})$$</p>
<p>$$scale = 1,2,3,4,5,6,7,8,9,10,...k$$</p>
<p>Or as latex:</p>
<p>$$\Lambda(n) = \begin{cases} \log q & \text{if }n=1, \\\log p & \text{if }n=p^k \text{ for some prime } p \text{ and integer } k \ge 1, \\ 0 & \text{otherwise.} \end{cases}$$</p>
<p>$$n=1,2,3,4,5,...q$$</p>
<pre><code>TableForm[Table[Table[If[n == 1, Log[q], MangoldtLambda[n]], {n, 1, q}],
{q, 1, 12}]]
</code></pre>
<hr>
<pre><code>scale = 50; (*scale = 5000 gives the plot below*)
Print["Counting to 60"]
Monitor[g1 =
ListLinePlot[
Table[Re[
Zeta[1/2 + I*k]*
Total[Table[
Total[MoebiusMu[Divisors[n]]/Divisors[n]^(1/2 + I*k - 1)]/(n*
k), {n, 1, scale}]]], {k, 0 + 1/1000, 60, N[1/6]}],
DataRange -> {0, 60}, PlotRange -> {-0.15, 1.5}], Floor[k]]
</code></pre>
<p>Dirichlet series:</p>
<p><img src="https://i.sstatic.net/SjS5c.jpg" alt="zeta zero spectrum 1"></p>
<hr>
<pre><code>Clear[f];
scale = 100000;
f = ConstantArray[0, scale];
f[[1]] = N@HarmonicNumber[scale];
Monitor[Do[
f[[i]] = N@MangoldtLambda[i] + f[[i - 1]], {i, 2, scale}], i]
xres = .002;
xlist = Exp[Range[0, Log[scale], xres]];
tmax = 60;
tres = .015;
Monitor[errList =
Table[(xlist^(1/2 + I k - 1).(f[[Floor[xlist]]] - xlist)), {k,
Range[0, 60, tres]}];, k]
ListLinePlot[Im[errList]/Length[xlist], DataRange -> {0, 60},
PlotRange -> {-.01, .15}]
</code></pre>
<p>Fourier transform:</p>
<p><img src="https://i.sstatic.net/QVWkQ.jpg" alt="zeta zero spectrum 2"></p>
<hr>
<p>Matrix inverse:</p>
<pre><code>Clear[n, k, t, A, nn];
nn = 50;
A = Table[
Table[If[Mod[n, k] == 0, 1/(n/k)^(1/2 + I*t - 1), 0], {k, 1, nn}], {n, 1,
nn}];
MatrixForm[A];
ListLinePlot[
Table[Total[
1/Table[n*t, {n, 1, nn}]*
Total[Transpose[Re[Inverse[A]*Zeta[1/2 + I*t]]]]], {t, 1/1000, 60,
N[1/6]}], DataRange -> {0, 60}, PlotRange -> {-0.15, 1.5}]
</code></pre>
<hr>
<pre><code>Clear[n, k, t, A, nn];
nnn = 12;
Show[Flatten[{Table[
ListLinePlot[
Table[Re[
Total[1/Table[n*t, {n, 1, nn}]*
Total[Transpose[
Inverse[
Table[Table[
If[Mod[n, k] == 0, N[1/(n/k)^(1/2 + I*t - 1)], 0], {k,
1, nn}], {n, 1, nn}]]*Zeta[1/2 + I*t]]]]], {t, 1/1000,
60, N[1/10]}], DataRange -> {0, 60},
PlotRange -> {-0.15, 1.5}], {nn, 1, nnn}],
Table[ListLinePlot[
Table[Re[
Total[1/Table[n*t, {n, 1, nn}]*
Total[Transpose[
Inverse[
Table[Table[
If[Mod[n, k] == 0, N[1/(n/k)^(1/2 + I*t - 1)], 0], {k,
1, nn}], {n, 1, nn}]]*Zeta[1/2 + I*t]]]]], {t, 1/1000,
60, N[1/10]}], DataRange -> {0, 60},
PlotRange -> {-0.15, 1.5}, PlotStyle -> Red], {nn, nnn, nnn}]}]]
</code></pre>
<p>12 first curves together or partial sums:</p>
<p><img src="https://i.sstatic.net/HG7Jy.jpg" alt="12 curves"></p>
<pre><code>Clear[n, k, t, A, nn, dd];
dd = 220;
Print["Counting to ", dd];
nn = 20;
A = Table[
Table[If[Mod[n, k] == 0, 1/(n/k)^(1/2 + I*t - 1), 0], {k, 1,
nn}], {n, 1, nn}];
Monitor[g1 =
ListLinePlot[
Table[Total[
1/Table[n*t, {n, 1, nn}]*
Total[Transpose[
Re[Inverse[
IdentityMatrix[nn] + (Inverse[A] - IdentityMatrix[nn])*
Zeta[1/2 + I*t]]]]]], {t, 1/1000, dd, N[1/100]}],
DataRange -> {0, dd}, PlotRange -> {-7, 7}];, Floor[t]];
mm = N[2*Pi/Log[2], 20];
g2 = Graphics[
Table[Style[Text[n, {mm*n, 1}], FontFamily -> "Times New Roman",
FontSize -> 14], {n, 1, 32}]];
Show[g1, g2, ImageSize -> Large];
</code></pre>
<p>Matrix Inverse of matrix inverse times zeta function (on critical line):</p>
<p><img src="https://i.sstatic.net/PVrdb.jpg" alt="matrix inverse of matrix inverse as function of t"></p>
<pre><code>Clear[n, k, t, A, nn, h];
nn = 60;
h = 2; (*h=2 gives log 2 operator, h=3 gives log 3 operator and so on*)
A = Table[
Table[If[Mod[n, k] == 0,
If[Mod[n/k, h] == 0, 1 - h, 1]/(n/k)^(1/2 + I*t - 1), 0], {k, 1,
nn}], {n, 1, nn}];
MatrixForm[A];
g1 = ListLinePlot[
Table[Total[
1/Table[n*t, {n, 1, nn}]*
Total[Transpose[Re[Inverse[A]*Zeta[1/2 + I*t]]]]], {t, 1/1000,
nn, N[1/6]}], DataRange -> {0, nn}, PlotRange -> {-3, 7}];
mm = N[2*Pi/Log[h], 12];
g2 = Graphics[
Table[Style[Text[n*2*Pi/Log[h], {mm*n, 1}],
FontFamily -> "Times New Roman", FontSize -> 14], {n, 1, 32}]];
Show[g1, g2, ImageSize -> Large]
</code></pre>
<p>Matrix inverse of Riemann zeta times log 2 operator:</p>
<p><img src="https://i.sstatic.net/5aaE9.jpg" alt="2 Pi div log2 spectrum"></p>
<p><a href="http://www.math.ucsb.edu/~stopple/zeta.html" rel="noreferrer">Jeffrey Stopple's code</a>:</p>
<pre><code>Show[Graphics[
RasterArray[
Table[Hue[
Mod[3 Pi/2 + Arg[Zeta[sigma + I t]], 2 Pi]/(2 Pi)], {t, -30,
30, .1}, {sigma, -30, 30, .1}]]], AspectRatio -> Automatic]
</code></pre>
<p>Normal or usual zeta:</p>
<p><img src="https://i.sstatic.net/hFtGB.jpg" alt="Normal zeta"> </p>
<pre><code>Show[Graphics[
RasterArray[
Table[Hue[
Mod[3 Pi/2 +
Arg[Sum[Zeta[sigma + I t]*
Total[1/Divisors[n]^(sigma + I t - 1)*
MoebiusMu[Divisors[n]]]/n, {n, 1, 30}]],
2 Pi]/(2 Pi)], {t, -30, 30, .1}, {sigma, -30, 30, .1}]]],
AspectRatio -> Automatic]
</code></pre>
<p>Spectral zeta (30-th partial sum):</p>
<p><img src="https://i.sstatic.net/2iXhr.jpg" alt="spectral zeta"></p>
<pre><code>Clear[n, k, t, A, nn, B];
nn = 60;
A = Table[
Table[If[Mod[n, k] == 0, 1/(n/k)^(1/2 + I*t - 1), 0], {k, 1,
nn}], {n, 1, nn}]; MatrixForm[A];
B = FourierDCT[
Table[Total[
1/Table[n, {n, 1, nn}]*
Total[Transpose[Re[Inverse[A]*Zeta[1/2 + I*t]]]]], {t, 1/1000,
600, N[1/6]}]];
g1 = ListLinePlot[B[[1 ;; 700]]*Table[Sqrt[n], {n, 1, 700}],
DataRange -> {0, 60}, PlotRange -> {-60, 600}];
mm = 11.35/Log[2];
g2 = Graphics[
Table[Style[Text[n, {mm*Log[n], 100 + 20*(-1)^n}],
FontFamily -> "Times New Roman", FontSize -> 14], {n, 1, 16}]];
Show[g1, g2, ImageSize -> Large]
</code></pre>
<p>Mobius function -> Dirichlet series -> Spectral Riemann zeta -> Fourier transform -> von Mangoldt function:</p>
<p><img src="https://i.sstatic.net/yj8Ef.jpg" alt="from Mobius via spectral Riemann zeta to von Mangoldt"></p>
<p>Larger von Mangoldt function plot, still with the wrong amplitude:
<a href="https://i.sstatic.net/02A1p.jpg" rel="noreferrer">https://i.sstatic.net/02A1p.jpg</a></p>
<pre><code>Clear[n, k, t, A, nn, B, g1, g2];
nn = 32;
A = Table[
Table[If[Mod[n, k] == 0, 1/(n/k)^(1/2 + I*t - 1), 0], {k, 1,
nn}], {n, 1, nn}];
MatrixForm[A];
B = FourierDCT[
Table[Total[
1/Table[n, {n, 1, nn}]*
Total[Transpose[Re[Inverse[A]*Zeta[1/2 + I*t]]]]], {t, 0, 2000,
N[1/6]}]];
g1 = ListLinePlot[B[[1 ;; 2000]], DataRange -> {0, 60},
PlotRange -> {-5, 50}];
2*N[Length[B]/1500, 12];
mm = 13.25/Log[2];
g2 = Graphics[
Table[Style[Text[n, {mm*Log[n], 7 + (-1)^n}],
FontFamily -> "Times New Roman", FontSize -> 14], {n, 1, 40}]];
Show[g1, g2, ImageSize -> Full]
</code></pre>
<p>Plot from program above: <a href="https://i.sstatic.net/r6mTJ.jpg" rel="noreferrer">https://i.sstatic.net/r6mTJ.jpg</a></p>
<p>Partial sums of zeta function, use this one:</p>
<pre><code>Clear[n, k, t, A, nn, B];
nn = 80;
mm = 11.35/Log[2];
A = Table[
Table[If[Mod[n, k] == 0, 1/(n/k)^(1/2 + I*t - 1), 0], {k, 1,
nn}], {n, 1, nn}];
MatrixForm[A];
B = Re[FourierDCT[
Monitor[Table[
Total[1/Table[
n, {n, 1, nn}]*(Total[
Transpose[Inverse[A]*Sum[1/j^(1/2 + I*t), {j, 1, nn}]]] -
1)], {t, 1/1000, 600, N[1/6]}], Floor[t]]]];
g1 = ListLinePlot[B[[1 ;; 700]], DataRange -> {0, 60/mm},
PlotRange -> {-30, 30}];
g2 = Graphics[
Table[Style[Text[n, {Log[n], 5 - (-1)^n}],
FontFamily -> "Times New Roman", FontSize -> 14], {n, 1, 32}]];
Show[g1, g2, ImageSize -> Full]
</code></pre>
<hr>
<p>Edit 17.1.2015:</p>
<pre><code>Clear[g1, g2, scale, xres, x, a, c, d, datapointsdisplayed];
scale = 1000000;
xres = .00001;
x = Exp[Range[0, Log[scale], xres]];
a = -FourierDCT[
Log[x]*FourierDST[
MangoldtLambda[Floor[x]]*(SawtoothWave[x] - 1)*(x)^(-1/2)]];
c = 62.357;
d = N[Im[ZetaZero[1]]];
datapointsdisplayed = 500000;
ymin = -1.5;
ymax = 3;
p = 0.013;
g1 = ListLinePlot[a[[1 ;; datapointsdisplayed]],
PlotRange -> {ymin, ymax},
DataRange -> {0, N[Im[ZetaZero[1]]]/c*datapointsdisplayed}];
Show[g1, Graphics[
Table[Style[Text[n, {22800*Log[n], -1/4*(-1)^n}],
FontFamily -> "Times New Roman", FontSize -> 14], {n, 1, 12}]],
ImageSize -> Large]
</code></pre>
<p><img src="https://i.sstatic.net/cvtCy.png" alt="MangoldLambdaTautology"></p>
<pre><code>Show[Graphics[
RasterArray[
Table[Hue[
Mod[3 Pi/2 +
Arg[Sum[Zeta[sigma - I t]*
Total[1/Divisors[n]^(sigma + I t)*MoebiusMu[Divisors[n]]]/
n, {n, 1, 30}]], 2 Pi]/(2 Pi)], {t, -30,
30, .1}, {sigma, -30, 30, .1}]]], AspectRatio -> Automatic]
</code></pre>
<p><img src="https://i.sstatic.net/t9qup.png" alt="spectral riemann"></p>
<p>The following is a relationship:</p>
<p>Let $\mu(n)$ be the Möbius function, then:</p>
<p>$$a(n) = \sum\limits_{d|n} d \cdot \mu(d)$$</p>
<p>$$T(n,k)=a(GCD(n,k))$$</p>
<p>$$T = \left( \begin{array}{ccccccc} +1&+1&+1&+1&+1&+1&+1&\cdots \\ +1&-1&+1&-1&+1&-1&+1 \\ +1&+1&-2&+1&+1&-2&+1 \\ +1&-1&+1&-1&+1&-1&+1 \\ +1&+1&+1&+1&-4&+1&+1 \\ +1&-1&-2&-1&+1&+2&+1 \\ +1&+1&+1&+1&+1&+1&-6 \\ \vdots&&&&&&&\ddots \end{array} \right)$$</p>
<p>$$\sum\limits_{k=1}^{\infty}\sum\limits_{n=1}^{\infty} \frac{T(n,k)}{n^c \cdot k^z} = \sum\limits_{n=1}^{\infty} \frac{\lim\limits_{s \rightarrow z} \zeta(s)\sum\limits_{d|n} \frac{\mu(d)}{d^{(s-1)}}}{n^c} = \frac{\zeta(z) \cdot \zeta(c)}{\zeta(c + z - 1)}$$</p>
<p>which is part of the limit:</p>
<p>$$\frac{\zeta '(s)}{\zeta (s)}=\lim_{c\to 1} \, \left(\zeta (c)-\frac{\zeta (c) \zeta (s)}{\zeta (c+s-1)}\right)$$</p>
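<p>A quick numeric sanity check of this limit (the sample values $s=3$ and $c=1+10^{-6}$ are arbitrary):</p>

<pre><code>s = 3.0; c = 1.000001;
{Zeta[c] - Zeta[c] Zeta[s]/Zeta[c + s - 1], Zeta'[s]/Zeta[s]}
(* the two entries agree to several digits *)
</code></pre>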
| <p>The Laplace transform of a function</p>
<p>$\sum _{i=1}^{\infty } a_i \delta (t-\log (i))$
where $\delta (t-\log (i))$ is the Dirac delta function (i.e. the unit impulse) at time $\log(i)$</p>
<p>is</p>
<p>$\int_0^{\infty } e^{-s t} \sum _{i=1}^{\infty } a_i \delta (t-\log (i)) \,
dt$</p>
<p>or</p>
<p>$\sum _{i=1}^{\infty } a_i i^{-s}$</p>
<p>Your $a_i$ are $\log(p)$ if $i = p^k$ for some prime $p$, and $0$ otherwise, so it is the Laplace transform (of the comb you have in mind), which is very closely related to the Fourier transform. You might find that $\sum _{i=1}^{\infty }\frac{1}{s} a_i i^{-s} $ gives smoother results (although it then becomes a sum of the $a_i$)</p>
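<p>As a small numeric check of this identification (the truncation point and the sample value $s=2$ are arbitrary): with $a_i = \Lambda(i)$ the transform is the Dirichlet series $\sum_i \Lambda(i)\, i^{-s} = -\zeta'(s)/\zeta(s)$, and indeed</p>

<pre><code>s = 2.0;
{Sum[MangoldtLambda[i] i^-s, {i, 1, 10000}], -Zeta'[s]/Zeta[s]}
(* the partial sum is already close to -Zeta'(2)/Zeta(2) *)
</code></pre>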
| <p>This is not a complete answer but I want to show that there is nothing mysterious going on here.</p>
<p>We want to prove that:</p>
<p>$$\text{Fourier Transform of } \Lambda(1)...\Lambda(k) \sim \sum\limits_{n=1}^{n=\infty} \frac{1}{n} \zeta(s)\sum\limits_{d|n}\frac{\mu(d)}{d^{(s-1)}}$$</p>
<p>The Dirichlet inverse of the Euler totient function is</p>
<p>$$a(n)=\sum\limits_{d|n} \mu(d)d$$</p>
<p>Construct the matrix $$T(n,k)=a(GCD(n,k))$$</p>
<p>which starts:</p>
<p>$$\displaystyle T = \begin{bmatrix} +1&+1&+1&+1&+1&+1&+1&\cdots \\ +1&-1&+1&-1&+1&-1&+1 \\ +1&+1&-2&+1&+1&-2&+1 \\ +1&-1&+1&-1&+1&-1&+1 \\ +1&+1&+1&+1&-4&+1&+1 \\ +1&-1&-2&-1&+1&+2&+1 \\ +1&+1&+1&+1&+1&+1&-6 \\ \vdots&&&&&&&\ddots \end{bmatrix}
$$</p>
<p>where GCD is the Greatest Common Divisor of row index $n$ and column index $k$.</p>
<p>joriki <a href="https://math.stackexchange.com/a/51708/8530">showed</a> that the von Mangoldt function is $$\Lambda(n)=\sum\limits_{k=1}^{k=\infty} \frac{T(n,k)}{k}$$</p>
<p>Then add this quote by Terence Tao from <a href="https://groups.yahoo.com/neo/groups/harmonicanalysis/conversations/messages/230" rel="nofollow noreferrer">here</a>, that I don't completely understand but I do almost see why it should be true:</p>
<p>Quote:" The Fourier transform in this context becomes (essentially) the Mellin transform, which is particularly important in analytic number theory. (For instance, the Riemann zeta function is essentially the Mellin transform of the Dirac comb on the natural numbers"</p>
<p>Now let us return to the matrix $T$:</p>
<p>First the von Mangoldt is expanded as:</p>
<p>$$\displaystyle \begin{bmatrix} +1/1&+1/1&+1/1&+1/1&+1/1&+1/1&+1/1&\cdots \\ +1/2&-1/2&+1/2&-1/2&+1/2&-1/2&+1/2 \\ +1/3&+1/3&-2/3&+1/3&+1/3&-2/3&+1/3 \\ +1/4&-1/4&+1/4&-1/4&+1/4&-1/4&+1/4 \\ +1/5&+1/5&+1/5&+1/5&-4/5&+1/5&+1/5 \\ +1/6&-1/6&-2/6&-1/6&+1/6&+2/6&+1/6 \\ +1/7&+1/7&+1/7&+1/7&+1/7&+1/7&-6/7 \\ \vdots&&&&&&&\ddots \end{bmatrix}$$</p>
<p><strong>Edit: 24.1.2016.</strong> From here on the variables $n$ and $k$ should be permuted but I don't know how to fix the rest of this answer right now.</p>
<p>Summing the columns first is equivalent to what was said earlier:
$$\Lambda(n)=\sum\limits_{k=1}^{k=\infty} \frac{T(n,k)}{k}$$</p>
<p>where:</p>
<p>$$\Lambda(n) = \begin{cases} \infty & \text{if }n=1, \\\log p & \text{if }n=p^k \text{ for some prime } p \text{ and integer } k \ge 1, \\ 0 & \text{otherwise.} \end{cases}$$</p>
<p>or as a sequence:</p>
<p>$$\infty ,\log (2),\log (3),\log (2),\log (5),0,\log (7),\log (2),\log (3),0,\log (11),0,...,\Lambda(\infty)$$</p>
<p>And now based on the quote above let us say that:
$$\text{Fourier Transform of } \Lambda(1)...\Lambda(k) = \sum\limits_{n=1}^{n=k}\frac{\Lambda(n)}{n^s}$$</p>
<p>Expanding this into matrix form we have the matrix:</p>
<p>$$\displaystyle \begin{bmatrix}
\frac{T(1,1)}{1 \cdot 1^s}&+\frac{T(1,2)}{1 \cdot 2^s}&+\frac{T(1,3)}{1 \cdot 3^s}+&\cdots&+\frac{T(1,k)}{1 \cdot k^s} \\
\frac{T(2,1)}{2 \cdot 1^s}&+\frac{T(2,2)}{2 \cdot 2^s}&+\frac{T(2,3)}{2 \cdot 3^s}+&\cdots&+\frac{T(2,k)}{2 \cdot k^s} \\
\frac{T(3,1)}{3 \cdot 1^s}&+\frac{T(3,2)}{3 \cdot 2^s}&+\frac{T(3,3)}{3 \cdot 3^s}+&\cdots&+\frac{T(3,k)}{3 \cdot k^s} \\
\vdots&\vdots&\vdots&\ddots&\vdots \\
\frac{T(n,1)}{n \cdot 1^s}&+\frac{T(n,2)}{n \cdot 2^s}&+\frac{T(n,3)}{n \cdot 3^s}+&\cdots&+\frac{T(n,k)}{n \cdot k^s} \end{bmatrix} = \begin{bmatrix} \frac{\zeta(s)}{1} \\ +\frac{\zeta(s)\sum\limits_{d|2} \frac{\mu(d)}{d^{(s-1)}}}{2} \\ +\frac{\zeta(s)\sum\limits_{d|3} \frac{\mu(d)}{d^{(s-1)}}}{3} \\ \vdots \\ +\frac{\zeta(s)\sum\limits_{d|n} \frac{\mu(d)}{d^{(s-1)}}}{n} \end{bmatrix}$$</p>
<p>On the right hand side we see that it sums to the right hand side of what we set out to prove, namely:</p>
<p>$$\text{Fourier Transform of } \Lambda(1)...\Lambda(k) \sim \sum\limits_{n=1}^{n=\infty} \frac{1}{n} \zeta(s)\sum\limits_{d|n}\frac{\mu(d)}{d^{(s-1)}}$$</p>
<p>Things that remain unclear are: What factor should the left hand side be multiplied with in order to have the same magnitude as the right hand side? And why in the Fourier transform does the first term of the von Mangoldt function appear to be $\log q$?</p>
<p>$$\Lambda(n) = \begin{cases} \log q & \text{if }n=1, \\\log p & \text{if }n=p^k \text{ for some prime } p \text{ and integer } k \ge 1, \\ 0 & \text{otherwise.} \end{cases}$$</p>
<p>$$n=1,2,3,4,5,...q$$</p>
<p>As a heuristic, $$\Lambda(n) = \log q \;\;\;\; \text{if }n=1$$ probably has to do with the fact that in the Fourier transform $q$ terms of $\Lambda$ are used and the first column in the square matrix $T(1..q,1..q)$ sums to a harmonic number.</p>
|
logic | <p>The hypothetical relation is $z = \mathrm{xor}\left(x,y\right)$, where $\mathrm{xor}$ stands for any bitwise operator such as XOR, AND, OR, NAND, etc. I see that these operations may be defined for integers <a href="https://math.stackexchange.com/questions/37877/is-there-any-mathematical-operation-on-integers-that-yields-the-same-result-as-d">trivially using binary-decimal conversion</a>.</p>
<p>In the same way, can't we perform bitwise arithmetic on real numbers? For example, the following is $\mathrm{xor}\left(1.5, 2.75\right)$:</p>
<pre><code> 01.01
xor 10.11
---------
= 11.10
</code></pre>
<p>The answer is 3.5.</p>
<p>What do the 3D plots of the binary bitwise operators look like, and what are some interesting mathematical properties? (e.g. gradient)</p>
<p>By the way, if you plot any of these using Sage, can I see the code? I couldn't get bitwise operations to work this way.</p>
| <p>Basically, it looks like this:</p>
<p><img src="https://i.sstatic.net/0QYnq.png" alt="3D plot of z = x xor y, producing a Sierpinski tetrahedron"><br>
<sup>(Image rendered in <a href="http://www.povray.org/" rel="noreferrer">POV-Ray</a> by the author, using a recursively constructed mesh, some area lights and lots of anti-aliasing.)</sup></p>
<p>In the picture, the blue square on the $x$-$y$ plane represents the unit square $[0,1]^2$, and the yellow shape is the graph $z = x \oplus y$ over this square, where $\oplus$ denotes bitwise $\rm xor$.</p>
<p>Note that this graph is discontinuous at a dense subset of the plane. In the 3D rendering above, no attempt has been made to accurately portray the precise value of $x \oplus y$ at the points of discontinuity, and indeed, it is not generally uniquely defined. That is because the discontinuities occur at points where $x$ or $y$ is a <a href="//en.wikipedia.org/wiki/Dyadic_rational" rel="noreferrer">dyadic fraction</a>, and therefore has two possible binary expansions (e.g. $\frac12 = 0.100000\dots_2 = 0.011111\dots_2$).</p>
<p>As can be seen from the picture, the graph is self-similar, in the sense that the full graph over $[0,1]^2$ consists of four scaled-down and translated copies of itself. Indeed, this self-similarity is evident from the properties of the $\oplus$ operation, namely that:</p>
<ul>
<li>$\displaystyle \frac x2 \oplus \frac y2 = \frac{x \oplus y}2$, and</li>
<li>$\displaystyle x \oplus \left(y \oplus \frac12\right) = \left(x \oplus \frac12\right) \oplus y = (x \oplus y) \oplus \frac12$.</li>
</ul>
<p>The first property implies that the graph of $x \oplus y$ over the bottom left quarter $[0,1/2]^2$ of the square $[0,1]^2$ is a scaled-down copy of the full graph, while the second property implies that the graphs of $x \oplus y$ in the other quarters are identical to the first quarter, except that the lower right and upper left ones are translated up by $\frac12$.</p>
<p>The resulting fractal shape is also known as the <a href="http://mathworld.wolfram.com/Tetrix.html" rel="noreferrer">Tetrix</a> or the Sierpinski tetrahedron, and is a 3D analogue of the 2-dimensional <a href="//en.wikipedia.org/wiki/Sierpinski_triangle" rel="noreferrer">Sierpinski triangle</a>, which is also closely linked with the $\rm xor$ operation — one way to construct approximations of the Sierpinski triangle is to compute $2^n$ rows of <a href="//en.wikipedia.org/wiki/Pascal%27s_triangle" rel="noreferrer">Pascal's triangle</a> using integer addition modulo $2$, which is equivalent to logical $\rm xor$.</p>
<p>It may be surprising to observe that this fully 3-dimensional fractal shape is indeed (at least approximately, ignoring the pesky multivaluedness issues at the discontinuities) the graph of a function in the $x$-$y$ plane. Yet, when viewed from above, each of the four sub-tetrahedra indeed precisely covers one quarter of the full unit square (and each of the 16 sub-sub-tetrahedra covers one quarter of a quarter, and so on...).</p>
| <p>Here's what XOR looks like. Black is $z=0$; white is $z=1$. The $x$ and $y$ axes run over $[0,1]$.</p>
<p><img src="https://i.sstatic.net/MbmDz.png" alt="XOR"></p>
<p>I used the (naive) formula</p>
<p>$ x \oplus' y = \left(\lfloor 2^{20}x\rfloor \oplus \lfloor 2^{20}y\rfloor\right)2^{-20} $</p>
<p>At least with this naive interpretation, the function does not seem continuous.</p>
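<p>For what it's worth, a sketch of the same naive construction in <em>Mathematica</em> (the question asked about Sage, but the idea is identical):</p>

<pre><code>(* truncate to 20 binary digits, xor the integer parts, scale back down *)
xorReal[x_, y_] := BitXor[Floor[2^20 x], Floor[2^20 y]]/2^20
DensityPlot[xorReal[x, y], {x, 0, 1}, {y, 0, 1},
 PlotPoints -> 200, ColorFunction -> GrayLevel]
</code></pre>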
<p>EDIT: using analogous formulas, here are </p>
<p>AND:
<img src="https://i.sstatic.net/sSyN2.png" alt="enter image description here"></p>
<p>OR:
<img src="https://i.sstatic.net/88W3i.png" alt="enter image description here"></p>
<p>They all look pretty similar, but the 0:1 ratio difference is noticable.</p>
<p>The XOR looks like a fractal, rather than an addition plot.</p>
|
logic | <p>I want to know the difference between ⊢ and ⊨.</p>
<p><a href="http://en.wikipedia.org/wiki/List_of_logic_symbols">http://en.wikipedia.org/wiki/List_of_logic_symbols</a></p>
<p>⊢ means ”provable”
But ⊨ is used exactly the same:</p>
<pre><code>A → B ⊢ ¬B → ¬A
A → B ⊨ ¬B → ¬A
</code></pre>
<p>Can you present a good example where they are different? Is it like the incompleteness theorem of recursive sets that there are sentences that are true i.e. ⊨ but do not have the property ⊢ i.e. provable?</p>
<p>Thanks for any insight</p>
| <p>$A \models B$ means that $B$ is true in every structure in which $A$ is true. $A\vdash B$ means $B$ can be proved using $A$ as the premises. (In both cases, $A$ is a not necessarily finite set of formulas and $B$ is a formula.)</p>
<p>First-order logic simultaneously enjoys the following properties: There is a system of proof for which</p>
<ul>
<li>If $A\vdash B$ then $A\models B$ (soundness)</li>
<li>If $A\models B$ then $A\vdash B$ (completeness)</li>
<li>There is a proof-checking algorithm (effectiveness). <br>(And it's fortunately quite a fast-running algorithm.)</li>
</ul>
<p>That last point is in stark contrast to this fact: There is no provability-checking algorithm. You can search for a proof of a first-order formula in such a systematic way that you'll find it if it exists, and you'll search forever if it doesn't. But if you've been searching for a million years and it hasn't turned up yet, you don't know whether the search will go on forever or end next week. These are results proved in the 1930s. The non-existence of an algorithm for deciding whether a formula is provable is called Church's theorem, after Alonzo Church.</p>
| <p>I learned that $\models$ stands for semantic entailment, while $\vdash$ stands for provability in a certain proof system.</p>
<p>More concretely: Given a set of formulas $\Gamma$ and a formula $\varphi$ in some logic (e.g., first-order logic), $\Gamma \models \varphi$ means that every model of $\Gamma$ is also a model of $\varphi$. On the other hand, fix a proof system (e.g., sequent calculus) for that logic. Then $\Gamma \vdash \varphi$ means that there is a proof using that proof system of $\varphi$ assuming the formulas in $\Gamma$.</p>
<p>The relationship between $\models$ and $\vdash$ actually describes two important properties of a proof calculus: If $\Gamma \vdash \varphi$ implies $\Gamma \models \varphi$, the proof system is <em>sound</em>, and if $\Gamma \models \varphi$ implies $\Gamma \vdash \varphi$, it is <em>complete</em>. For propositional and first-order logic, there are proof systems that are both sound and complete; this is not the case for some other logics. For example, second-order logic does not admit an effective sound and complete proof system (e.g., the set of rules for a sound and complete proof system would not be decidable).</p>
<p><strong>Edit:</strong> Not every proof calculus for propositional or first-order logic is both complete and sound. For example, consider the system with the following rule: $\vdash \varphi \vee \neg \varphi$ where $\varphi$ is an arbitrary formula. This is undoubtedly correct, but it's also incomplete, since one cannot derive trivially true statements like $\varphi \vdash \varphi$. On the other hand, the system $\Gamma \vdash \varphi$ for arbitrary formulae $\varphi$ and formula sets $\Gamma$ is complete, but obviously incorrect.</p>
|
geometry | <p>In a freshers' lecture on <strong>3-D geometry</strong>, our teacher said that <strong>3-D</strong> objects can be viewed as projections of <strong>4-D</strong> objects. How does this help us visualize <strong>4-D</strong> objects?
From what I found, we can at least see their <strong>3-D</strong> cross-sections. A <strong>tesseract</strong> hypercube would be a good example.<br>
Can we conclude that a 3-D cube is a shadow of a 4-D tesseract? </p>
<p>But, how can a shadow be <strong>3-D</strong>? Was the screen used for casting the shadow also <strong>3-D</strong>? If not, in what way is it different from the basic physics of shadows we learnt? </p>
<blockquote>
<p><strong>edit:</strong> The responses are pretty good for <strong>4th</strong>-dimensional analysis, but can we generalize this projection idea to <strong>n</strong> dimensions, i.e., will all <strong>n</strong>-dimensional objects have <strong>(n-1)</strong>-dimensional projections?<br>
This makes me think about <em>higher dimensions</em> discussed in <strong>string theory</strong>.<br>
What other areas of <strong>Mathematics</strong> will be helpful?</p>
</blockquote>
| <p>The animations below accompany an introductory talk on high-dimensional geometry.</p>
<p>Mathematically, the second was made by putting a "light source" at a point $(0, 0, 0, h)$ (with $h > 0$) and sending each point $(x, y, z, w)$ (with $w < h$) to $\frac{h}{h - w}(x, y, z)$.</p>
<p><a href="https://i.sstatic.net/0QOAA.gif" rel="noreferrer"><img src="https://i.sstatic.net/0QOAA.gif" alt="Shadow of a rotating cube"></a>
<a href="https://i.sstatic.net/M4gOo.gif" rel="noreferrer"><img src="https://i.sstatic.net/M4gOo.gif" alt="Shadow of a rotating hypercube"></a></p>
| <p>First things first: your brain is simply not made to visualize anything higher than three spatial dimensions geometrically. All we can do is use tricks and analogies, and of course, the vast power of abstract mathematics. The latter might help us understand the generalized properties of such objects, but it does not help us 3D-beings visualize them in a familiar way.</p>
<p>I will keep things simple and will talk more about the philosophy and visualization than about the math behind it. I will avoid using formulas as long as possible.</p>
<hr>
<h2>The analogy</h2>
<p>If you look at an image of a cube, the image is absolutely 2D, but you intuitively understand that the depicted object is some 3D-model. Also you intuitively understand its shape and positioning in space. How is your brain achieving this? There is obviously information lost during the projection into 2D. </p>
<p>Well, you (or your brain) have different techniques to reconstruct the 3D-nature of the object from this simplified representation. For example: you know things are smaller if they are farther away. You <em>know</em> that a <a href="https://upload.wikimedia.org/wikipedia/commons/7/78/Hexahedron.jpg" rel="noreferrer">cube</a> is composed of faces of the same size. However, any (perspectively correct) image shows some face (the back face, if visible) as smaller as the front face. Also it is no longer in the form of a square, but you still recognize it as such.</p>
<p>We use the same analogies for projecting 4D-objects into 3D space (and then further into 2D-space to make an image file out of it). Look at your favorite "picture" of a <a href="https://upload.wikimedia.org/wikipedia/commons/2/22/Hypercube.svg" rel="noreferrer">4D-cube</a>. You recognize that it is composed of several cubes. For example, you see a small cube inside, and a large cube outside. Actually a tesseract (technical term for 4D-cube) is composed of <em>eight identical cubes</em>. But good luck finding these cubes in this picture. They look as much like cubes as the squares look like squares in the 2D-depiction of a 3D-cube.</p>
<p>The small cube "inside" is the "back-cube" of the tesseract. It is the farthest away from you, hence the smallest. The "outer" cube is the front cube for the analogue reason. It might be hard to see, but there are "cubes" connecting the inner and outer cube. These cubes are distorted, and cannot be recognized as cubes. This is the problem of projections. A shape might not be kept.</p>
<p>Even more interesting: look at this picture of a <a href="https://upload.wikimedia.org/wikipedia/commons/5/55/8-cell-simple.gif" rel="noreferrer"><em>rotating tesseract</em></a>. It seems like the complete structure is shifting, nothing is rigid at all. But again, take the analogy of a rotating 3D-cube in a picture. During the process of rotation, the back face is becoming the front face, hence shifting its size. And during this process it even loses its shape as a square. All line lengths are changing. Just a flaw of the projection. You see the same for the rotating tesseract. The inner cube is becoming the outer one, becoming bigger because it is becoming the "front cube".</p>
<hr>
<h2>The projection</h2>
<p>I just talked about projections. But a projection is essentially the same as a shadow image on the wall. Light is cast on a 3D-object and blocked out by it, and you try to reconstruct its shape from the light and dark spots left by it on the wall behind it. Of course we cannot talk physically about a 4D-light source and light traveling through 4D-space and hitting a 3D-wall and casting a 3D-shadow. As I said, this is just pure analogy. We only know the physics of our 3D-world (4D-spacetime if you want, but then not all four dimensions are spatial; one of them is temporal).</p>
<p>So what are you projecting on? Well, you are projecting on our 3D-space. Hard to imagine, but our 3D-space is to 4D-space as a simple sheet of paper is to our space: flat and infinitely thin.</p>
<p>How crude these depictions are will become evident when making clear how many dimensions are lost in the process. Essentially you are projecting a 4D-object onto a 3D-space and then onto a 2D-image so we can show it on screen. Of course, our brain is clever enough to allow us to reconstruct the 3D-model from it. But essentially we go from 4D to 2D. This is like going from 3D to 1D, trying to understand a 3D-object by looking at its shadow cast on a thin thread. Good luck to 2D-creatures understanding 3D from this crude simplification.</p>
|
game-theory | <p>There are many games that, even though they include some random component (for example dice rolls or the dealing of cards), are games of skill. In other words, there is definite skill in how one can play the game taking into account the random component. Think of backgammon or poker as good examples of such games. </p>
<p>Moreover, novice players or outsiders might fail to recognise the skill involved and attribute wins purely to luck. As someone gains experience, they usually start to appreciate the skill involved, and concede that there is more to the game than just luck. They might even concede that there is "more" skill than luck. How do we quantify this? <strong>How "much" luck vs skill</strong>? People can have very subjective feelings about this balance. Recently, I was reading someone arguing that backgammon is $9/10$ luck, while another one was saying it's $6/10$. These numbers mean very little other than expressing gut feelings. </p>
<p><strong>Can we do better?</strong> Can we have an objective metric to give us a good sense of the skill vs luck component of a game.</p>
<p>I was thinking along these lines: Given a game and a lot of empirical data on matches between players with different skills, a metric could be:</p>
<blockquote>
<p>How many games on average do we need to have an overall win (positive win-loss balance) with probability $P$ (let's use $P=0.95$) between a top-level player and a novice?</p>
</blockquote>
<p>For the game of chess this metric would be $1$ (or very close to $1$). For the game of scissors-paper-rock it would be $\infty$.</p>
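<p>As a rough illustration of this metric (my own sketch, under the simplifying assumption that the stronger player wins each game independently with a fixed probability $p$), one can compute the smallest $n$ for which a strictly positive win-loss balance has probability at least $0.95$:</p>

<pre><code>from math import exp, lgamma, log

def p_positive_balance(n, p):
    """P(wins > losses) when wins ~ Binomial(n, p), computed in log space."""
    total = 0.0
    for k in range(n // 2 + 1, n + 1):
        log_pmf = (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                   + k * log(p) + (n - k) * log(1 - p))
        total += exp(log_pmf)
    return total

def games_needed(p, target=0.95, n_max=2000):
    for n in range(1, n_max + 1):
        if p_positive_balance(n, p) >= target:
            return n
    return None   # effectively "infinite", as for scissors-paper-rock

for p in (0.95, 0.75, 0.60, 0.55):
    print(f"per-game win probability {p}: about {games_needed(p)} games")
</code></pre>

<p>With $p$ close to $1$ this gives $1$ (the chess end of the scale), and it blows up as $p\to 1/2$ (the scissors-paper-rock end), so the metric at least behaves as intended at the extremes.</p>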
<p>This is an objective measure (we can calculate it based on empirical data) and it is intuitive. There is however an ambiguity in what top-level and novice players mean. Empirical data alone does not suffice to classify the players as novices or experts. For example, imagine that we have the results from 10,000 chess games between 20 chess grandmasters. Some will be better, some will be worse, but analysing the data with the criterion I defined, we will conclude that chess has a certain (significant) element of luck. Can we make this more robust? Also, given a set of empirical data (match outcomes) how do we know we have enough data?</p>
<p>What other properties do we want to include? Maybe a rating between $[0, 1]$, zero meaning no luck, and one meaning all luck, would be easier to talk about. </p>
<p>I am happy to hear completely different approaches too. </p>
| <p>A natural place to start, depending upon your mathematical background might be the modern cardinal generalizations of the voting literature. Consider the following toy model, and feel free to adapt any parts which don't seem appropriate to the problem at hand.</p>
<p><strong>Data</strong>: Say we have a fixed, finite population of players $N$ who we observe play. A single observation consists of the result of a single match between two players $\{i \succ j\}$ for some $i,j \in N$. Let the space of datasets $\mathcal{D}$ consist, for fixed $N$, of all finite collections of such observations. </p>
<p><strong>Solution Concept</strong>: Our goal here is twofold: mathematically we're going to try to get a vector in $\mathbb{R}^N$ whose $i^\textrm{th}$ component is the 'measure of player $i$'s caliber relative to all the others.' A player $i$ is better than player $j$ by our rule if the $i^\textrm{th}$ component of this score vector is higher than the $j^\textrm{th}$. We might also like the magnitude of the differences in these components to be increasing in some measure of pairwise skill difference (the score vector is <em>cardinal</em>, to use economics lingo). </p>
<blockquote>
<p><strong>Edit</strong>: This score vector should not be seen as a measure of luck versus skill. But what we will be able to do is use this to 'clean' our data of the component of it generated by differences in skill, leaving behind a residual which we may then interpret as 'the component of the results attributable to luck.' In particular, this 'luck component' lives in a normed space, and hence comes with a natural means of measuring its magnitude, which seems to be what you are after.</p>
</blockquote>
<p>Our approach is going to use a cardinal generalization of something known as the Borda count from voting theory.</p>
<p><strong>Aggregation</strong>: Our first step is to aggregate our dataset. Given a dataset, consider the following $N\times N$ matrix 'encoding' it. For all $i,j \in N$ define: </p>
<p>$$
D_{ij} = \frac{\textrm{Number of times $i$ has won over $j$}- \textrm{Number of times $j$ has won over $i$}}{\textrm{Number of times $i$ and $j$ have played}}
$$</p>
<p>if the denominator is non-zero, and $0$ else. The ${ij}^\textrm{th}$ element of this matrix encodes the relative frequency with which $i$ has beaten $j$. Moreover, $D_{ij} = -D_{ji}$. Thus we have identified this dataset with a skew-symmetric, real-valued $N\times N$ matrix.</p>
<p>An equivalent way of viewing this data is as a <em>flow</em> on a graph whose vertices correspond to the $N$ players (every flow on a graph has a representation as a skew-symmetric matrix and vice-versa). In this language, a natural candidate for a score vector is a <em>potential function</em>: a function from the vertices to $\mathbb{R}$ (i.e. a vector in $\mathbb{R}^N$) such that the value of the flow on any edge is given by its gradient. In other words, we ask if there exists some vector $s \in \mathbb{R}^N$ such that, for all $i,j \in N$:</p>
<p>$$
D_{ij} = s_i - s_j.
$$</p>
<p>This could very naturally be perceived as a metric of 'talent' given the dataset. If such a vector existed, it would denote that differences in skill could 'explain away all variation in the data.' However, for a given aggregate data matrix, such a potential function generally does not exist (as we would hope, in line with our interpretation). </p>
<blockquote>
<p><strong>Edit</strong>: It should be noted that the way we are aggregating the data (counting relative wins) will generally preclude such a function from existing, even when the game is 'totally determined by skill.' In such cases our $D$ matrix will take values exclusively in $\{-1, 0, 1\}$. Following the approach outlined below, one will get out a score function which rationalizes this ordering, but the residual will not necessarily be zero (it will generally take a specific form, of an $N$-cycle conservative flow that goes through each vertex). If one were to, say, make use of scores of games, the aggregated data would have a cardinal interpretation.</p>
</blockquote>
<p><strong>Construction of a score</strong>: The good news is that the mathematical tools exist to construct a scoring function that is, in a rigorous sense, the 'best fit for the data,' even if it is not a perfect match (think about how a data point cloud rarely falls perfectly on a line, but we nonetheless can find it instructive to find the line that is the best fit for it). Since this has been tagged a soft question, I'll just give a sketch of how and why such a score can be constructed. The space of all such aggregate data matrices actually admits a decomposition into the linear subspace of flows that admit a potential function, and its orthogonal complement. Formally, this is a combinatorial version of the Hodge decomposition from de Rham cohomology (but if those words mean nothing, don't worry about looking them up). Then, loosely speaking, in the spirit of classical regression, we can solve a least-squares minimization problem to orthogonally project our aggregate data matrix onto the linear subspace of flows that admit a potential function:</p>
<p>$$
\min_{s \in \mathbb{R}^N} \|D - (-\textrm{grad}(s))\|^2
$$
where $-\textrm{grad}(s)$ is an $N\times N$ matrix whose $ij^\textrm{th}$ element is $s_i - s_j$.</p>
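<p>Here is a minimal numerical sketch of this projection (my own illustration, not code from the references below), using a hypothetical logistic win model to generate the data and fitting the potential condition $D_{ij}=s_i-s_j$ from above; the residual is the part of the data that skill differences cannot explain:</p>

<pre><code>import numpy as np

rng = np.random.default_rng(0)
skill = np.array([2.0, 1.0, 0.5, -0.5, -3.0])   # hypothetical true skills
N = len(skill)

# Aggregate data matrix: D[i, j] = relative win frequency of i over j.
D = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1, N):
        p = 1 / (1 + np.exp(skill[j] - skill[i]))   # logistic win model
        w = rng.binomial(200, p)                    # 200 games per pair
        D[i, j] = (2 * w - 200) / 200
        D[j, i] = -D[i, j]

# Least squares: fit D[i, j] ~ s[i] - s[j], one equation per pair.
A, b = [], []
for i in range(N):
    for j in range(i + 1, N):
        row = np.zeros(N)
        row[i], row[j] = 1.0, -1.0
        A.append(row)
        b.append(D[i, j])
s, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
s -= s.mean()                        # scores are defined up to a shift

fitted = s[:, None] - s[None, :]     # the gradient flow of the score s
residual = D - fitted                # the "luck component" of the data
print("recovered order:", np.argsort(-s))   # expect [0 1 2 3 4]
print("luck fraction  :", np.linalg.norm(residual) / np.linalg.norm(D))
</code></pre>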
<p>If you're interested in seeing this approach used to construct a ranking for college football teams, see:</p>
<p><a href="http://www.ams.org/samplings/feature-column/fc-2012-12" rel="nofollow noreferrer">http://www.ams.org/samplings/feature-column/fc-2012-12</a></p>
<p>If you'd like to read a bit more about the machinery underlying this including some brief reading about connections to the mathematical voting literature, see:</p>
<p><a href="https://www.stat.uchicago.edu/~lekheng/meetings/mathofranking/ref/jiang-lim-yao-ye.pdf" rel="nofollow noreferrer">https://www.stat.uchicago.edu/~lekheng/meetings/mathofranking/ref/jiang-lim-yao-ye.pdf</a></p>
| <p>Found this: <a href="https://boardgames.stackexchange.com/questions/9697/how-to-measure-luck-vs-skill-in-games">How to Measure Luck vs Skill in Games?</a> at<br>
boardgames.stackexchange.com</p>
<p>The following links might also be helpful and/or of general interest:</p>
<p><a href="https://en.wikipedia.org/wiki/Kelly_criterion" rel="nofollow noreferrer">Kelly criterion</a><br>
<a href="https://en.wikipedia.org/wiki/Edward_O._Thorp" rel="nofollow noreferrer">Edward O. Thorp</a><br>
<a href="https://en.wikipedia.org/wiki/Daniel_Kahneman" rel="nofollow noreferrer">Daniel Kahneman</a><br></p>
<hr />
<p><a href="http://www.cfapubs.org/doi/pdf/10.2469/cp.v30.n3.1" rel="nofollow noreferrer">The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing</a><br>
<span class="math-container">$\;$</span> by Michael J. Mauboussin<br>
<a href="http://studentofvalue.com/notes-on-the-success-equation/" rel="nofollow noreferrer">NOTES ON THE SUCCESS EQUATION</a><br></p>
<blockquote>
<p>Estimated true skill = grand average + shrinkage factor × (observed average − grand average)</p>
</blockquote>
<p><a href="https://www.farnamstreetblog.com/2012/12/three-things-to-consider-in-order-to-make-an-effective-prediction/" rel="nofollow noreferrer">Michael Mauboussin: Three Things to Consider in Order To Make an Effective Prediction</a><br></p>
<hr />
<p><a href="https://en.wikipedia.org/wiki/Bayesian_inference" rel="nofollow noreferrer">Bayesian inference</a></p>
<hr />
<p>I hope the OP can forgive me for not focusing on his questions. Ten years ago I might have been more enthusiastic, but now all I could muster up was to provide links that anyone interested in games should find interesting.</p>
<p>So what is my problem? Two letters:</p>
<h1>AI</h1>
<p>Ten years ago it was thought that it would take AI 'bots' several decades to become #1 over humanity in two games:</p>
<p>Go (a game of perfect information)<br></p>
<p>OR</p>
<p>Poker (a game of imperfect information/randomness)<br></p>
<p>Today, it is a fait accompli.</p>
<p>The researchers matching their bots against human players in poker can claim victory whenever the out-performance is statistically significant.</p>
<p>GO is a different animal. Any win against a 10 dan professional player is 'off the charts' significant (throw that statistics book out the window).</p>
|
probability | <p>Some years ago I was interested in the following Markov chain
whose state space is the positive integers. The chain begins at state "1",
and from state "n" the chain next jumps to a state uniformly
selected from {n+1,n+2,...,2n}.</p>
<p>As time goes on, this chain goes to infinity, with occasional
large jumps. In any case, the chain is quite unlikely to hit any
particular large n.</p>
<p>If you define p(n) to be the probability that this chain
visits state "n", then p(n) goes to zero like c/n for some
constant c. In fact, </p>
<p>$$ np(n) \to c = {1\over 2\log(2)-1} = 2.588699. \tag1$$</p>
<p>In order to prove this convergence, I recast it as an analytic
problem. Using the Markov property, you can see that the sequence
satisfies: </p>
<p>$$ p(1)=1\quad\mbox{ and }\quad p(n)=\sum_{j=\lceil n/2\rceil}^{n-1} {p(j)\over j}\mbox{ for }n>1. \tag2$$</p>
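<p>For reference, (2) is cheap to iterate numerically (the sketch below is an added illustration, not part of the original post), and the values of $np(n)$ visibly approach the constant in (1):</p>

<pre><code>from math import log

N = 200_000
p = [0.0] * (N + 1)
S = [0.0] * (N + 1)        # prefix sums: S[k] = sum of p(j)/j for j <= k
p[1] = S[1] = 1.0
for n in range(2, N + 1):
    lo = -(-n // 2)        # ceil(n/2)
    p[n] = S[n - 1] - S[lo - 1]        # the recurrence (2)
    S[n] = S[n - 1] + p[n] / n

c = 1 / (2 * log(2) - 1)
for n in (10, 100, 1000, 10_000, 100_000):
    print(n, n * p[n])     # approaches c = 2.588699...
</code></pre>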
<p>For some weeks, using generating functions etc. I tried and failed to find
an analytic proof of the convergence in (1). Finally, at a conference
in 2003 Tom Mountford showed me a (non-trivial) probabilistic proof.</p>
<p>So the result is true, but since then I've
continued to wonder if I missed something obvious. Perhaps there is
a standard technique for showing that (2) implies (1).</p>
<p><strong>Question:</strong> Is there a direct (short?, analytic?) proof of (1)? </p>
<p>Perhaps someone who understands sequences better than I do could take a shot at this.</p>
<p><strong>Update:</strong> I'm digging through my old notes on this. I now remember that I had a proof (using generating functions) that <em>if</em> $\ np(n)$ converges, then the limit is $1\over{2\log (2)-1}$. It was the convergence that eluded me.</p>
<p>I also found some curiosities like: $\sum_{n=1}^\infty {p(n)\over n(2n+1)}={1\over 2}.$ </p>
<p><strong>Another update</strong>: Here is the conditional result mentioned above. </p>
<p>As in Qiaochu's answer, define $Q$ to be the generating function of $p(n)/n$, that is,
$Q(t)=\sum_{n=1}^\infty {p(n)\over n} t^n$ for $0\leq t<1$.
Differentiating gives
$$Q^\prime(t)=1+{Q(t)-Q(t^2)\over 1-t}.$$
This is slightly different from Qiaochu's expression because
$p(n)\neq \sum_{j=\lceil n/2\rceil}^{n-1} {p(j)\over j}$ when $n=1$,
so that $p(1)$ has to be treated separately.</p>
<p>Differentiating again and multiplying by $1-t$, we get
$$(1-t)Q^{\prime\prime}(t)=-1+2\left[Q^\prime(t)-t Q^\prime(t^2)\right],$$
that is,
$$(1-t)\sum_{j=0}^\infty (j+1) p(j+2) t^j = -1+2\left[\sum_{j=1}^\infty (jp(j)) {t^j-t^{2j}\over j}\right].$$</p>
<p>Assume that $\lim_n np(n)=c$ exists. Letting $t\to 1$ above, the left hand side gives $c$,
while the right hand side is $-1+2c\log(2)$ and hence $c={1\over 2\log(2)-1}$.</p>
<p>Note: $\sum_{j=1}^\infty {t^j-t^{2j}\over j}=\log(1+t).$ </p>
<p><strong>New update: (Sept. 2)</strong> </p>
<p>Here's an alternative proof of the conditional result that my colleague Terry Gannon
showed me in 2003.</p>
<p>Start with the sum $\sum_{n=2}^{2N}\ p(n)$, substitute the formula in the title,
exchange the variables $j$ and $n$, and rearrange to establish the identity:</p>
<p>$${1\over 2}=\sum_{j=N+1}^{2N} {j-N\over j}\ {p(j)}.$$</p>
<p>If $jp(j)\to c$, then
$1/2=\lim_{N\to\infty} \sum_{j=N+1}^{2N} {j-N\over j^2}\ c=(\log(2)-1/2)\ c,$ so that
$c={1\over 2\log(2)-1}$.</p>
<p><strong>New update: (Sept. 8)</strong> Despite the nice answers and interesting discussion below, I am still holding out for an (nice?, short?) analytic proof of convergence. Basic Tauberian theory is allowed :) </p>
<p><strong>New update: (Sept 13)</strong> I have posted a sketch of the probabilistic proof of convergence under "A fun and frustrating recurrence sequence" in the "Publications" section of my homepage.</p>
<p><strong>Final Update:</strong> (Sept 15th) The deadline is approaching, so I have decided to award the bounty to T..
Modulo the details(!), it seems that the probabilistic approach is the most likely to lead to a proof.</p>
<p>My sincere thanks to everyone who worked on the problem, including those who tried it but
didn't post anything. </p>
<p>In a sense, I <em>did</em> get an answer to my question: there doesn't seem to be an easy, or standard
proof to handle this particular sequence. </p>
| <p>Update: the following probabilistic argument I had posted earlier shows only that $p(1) + p(2) + \dots + p(n) = (c + o(1)) \log(n)$ and not, as originally claimed, the convergence $np(n) \to c$. Until a complete proof is available [edit: George has provided one in another answer] it is not clear whether $np(n)$ converges or has some oscillation, and at the moment there is evidence in both directions. Log-periodic or other slow oscillation is known to occur in some problems where the recursion accesses many previous terms. Actually, everything I can calculate about $np(n)$ is consistent with, and in some ways suggestive of, log-periodic fluctuations, with convergence being the special case where the bounds could somehow be strengthened and the fluctuation scale thus squeezed down to zero.</p>
<hr>
<p>$p(n) \sim c/n$ is [edit: only on average] equivalent to $p(1) + p(2) + \dots + p(n)$ being asymptotic to $c \log(n)$. The sum up to $p(n)$ is the expected time the walk spends in the interval [1,n]. For this quantity there is a simple probabilistic argument that explains (and can rigorously demonstrate) the asymptotics.</p>
<p>This Markov chain is a discrete approximation to a log-normal random walk. If $X$ is the position of the particle, $\log X$ behaves like a simple random walk with steps $\mu \pm \sigma$ where $\mu = 2 \log 2 - 1 = 1/c$ and $\sigma^2 = (1- \mu)^2/2-\mu^2$ (the variance of $\log U$ for $U$ uniform on $[1,2]$). This is true because <em>the Markov chain is bounded between two easily analyzed random walks with continuous steps</em>.</p>
<p>(Let X be the position of the particle and $n$ the number of steps; the walk starts at X=1, n=1.)</p>
<p>Lower bound walk $L$: at each step, multiply X by a uniform random number in [1,2] and replace n by (n+1). $\log L$ increases, on average, by $\int_1^2 log(t) dt = 2 \log(2) - 1$ at each step.</p>
<p>Upper bound walk $U$: at each step, jump from X to uniform random number in [X+1,2X+1] and replace n by (n+1). </p>
<p>$L$ and $U$ have means and variances that are the same within $O(1/n)$, where the $O()$ constants can be made explicit. Steps of $L$ are i.i.d. and steps of $U$ are independent and asymptotically identically distributed. Thus, the Central Limit theorem shows that $\log X$ after $n$ steps is approximately a Gaussian with mean $n\mu + O(\log n)$ and variance $n\sigma^2 + O(\log n)$. </p>
<p>The number of steps for the particle to escape the interval $[1,t]$ is therefore $({\log t})/\mu$ with fluctuations of size $A \sqrt{\log t}$ having probability that decays rapidly in A (bounded by $|A|^p \exp(-qA^2)$ for suitable constants). Thus, the sum p(1) + p(2) + ... p(n) is asymptotically equivalent to $(\log n)/(2\log (2)-1)$.</p>
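<p>A direct simulation (added sketch, not from the original answer) confirms the escape-time picture numerically:</p>

<pre><code>import random
from math import log

def steps_to_exceed(t):
    x, steps = 1, 0
    while x <= t:
        x = random.randint(x + 1, 2 * x)   # jump uniformly into {x+1,...,2x}
        steps += 1
    return steps

mu = 2 * log(2) - 1
t = 10 ** 9
trials = [steps_to_exceed(t) for _ in range(2000)]
print("mean steps to escape [1, t]:", sum(trials) / len(trials))
print("log(t) / mu                :", log(t) / mu)
</code></pre>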
<p>Maybe this is equivalent to the 2003 argument from the conference. If the goal is to get a proof from the generating function, it suggests that dividing by $(1-x)$ may be useful for smoothing the p(n)'s.</p>
| <p>After getting some insight by looking at some numerical calculations to see what $np(n)-c$ looks like for large $n$, I can now describe the convergence in some detail. First, the results of the simulations strongly suggested the following.</p>
<ul>
<li>for $n$ odd, $np(n)$ converges to $c$ from below at rate $O(1/n)$.</li>
<li>for $n$ a multiple of 4, $np(n)$ converges to $c$ from above at rate $O(1/n^2)$.</li>
<li>for $n$ a multiple of 2, but not 4, $np(n)$ converges to $c$ from below at rate $O(1/n^2)$.</li>
</ul>
<p>This suggests the following asymptotic form for $p(n)$.
$$
p(n)=\frac{c}{n}\left(1+\frac{a_1(n)}{n}+\frac{a_2(n)}{n^2}\right)+o\left(\frac{1}{n^3}\right)
$$
where $a_1(n)$ has period 2, with $a_1(0)=0$, $a_1(1) < 0$ and $a_2(n)$ has period 4 with $a_2(0) > 0$ and $a_2(2) < 0$. In fact, we can expand $p(n)$ to arbitrary order [Edit: Actually, not quite true. See below]
$$
\begin{array}
{}p(n)=c\sum_{r=0}^s\frac{a_r(n)}{n^{r+1}}+O(n^{-s-2})&&(1)
\end{array}
$$
and the term $a_r(n)$ is periodic in $n$, with period $2^r$. Here, I have normalized $a_0(n)$ to be equal to 1.</p>
<p>We can compute all of the coefficients in (1). As $p$ satisfies the recurrence relation
$$
\begin{array}
\displaystyle p(n+1)=(1+1/n)p(n)-1_{\lbrace n\textrm{ even}\rbrace}\frac2np(n/2) -1_{\lbrace n=1\rbrace}.&&(2)
\end{array}
$$
we can simply plug (1) into this, expand out the terms $(n+1)^{-r}=\sum_s\binom{-r}{s}n^{-r-s}$ on the left hand side, and compare coefficients of $1/n$.
$$
\begin{array}
\displaystyle a_r(k+1)-a_r(k)=a_{r-1}(k)-\sum_{s=0}^{r-1}\binom{-s-1}{r-s}a_{s}(k+1)-1_{\lbrace k\textrm{ even}\rbrace}2^{r+1}a_{r-1}(k/2).&&(3)
\end{array}
$$
Letting $\bar a_r$ be the average of $a_r(k)$ as $k$ varies, we can average (3) over $k$ to get a recurrence relation for $\bar a_r$. Alternatively, the function $f(n)=\sum_r\bar a_rn^{-r-1}$ must satisfy $f(n+1)=(1+1/n)f(n)-f(n/2)/n$ which is solved by $f(n)=1/(n+1)=\sum_{r\ge0}(-1)^rn^{-r-1}$, so we get $\bar a_r=(-1)^r$. Then, (3) can be applied iteratively to obtain $a_r(k+1)-a_r(k)$ in terms of $a_s(k)$ for $s < r$. Together with $\bar a_r$, this gives $a_r(k)$ and it can be seen that the period of $a_r(k)$ doubles at each step. Doing this gives $a_r\equiv(a_r(0),\ldots,a_r(2^r-1))$ as follows
$$
\begin{align}
& a_0=(1),\\\\
& a_1=(0,-2),\\\\
& a_2=(7,-3,-9,13)/2
\end{align}
$$
These agree with the numerical simulation mentioned above.</p>
<hr>
<p>Update: I tried another numerical simulation to check these asymptotics, by successively subtracting out the leading order terms. You can see it converges beautifully to the levels $a_0$, $a_1$, $a_2$ but, then...</p>
<p><img src="https://i.sstatic.net/ZDNzb.gif" alt="convergence to asymptotics"></p>
<p>... it seems that after the $a_2n^{-3}$ part, there is an oscillating term! I wasn't expecting that, but there is an asymptotic term of the form $cn^{-r}\sin(a\log n+\theta)$, where you can solve to leading order to obtain $r\approx3.54536$, $a\approx10.7539$.</p>
<hr>
<p>Update 2: I was re-thinking this question a few days ago, and it suddenly occurred to me how you can not only prove it using analytic methods, but give a full asymptotic expansion to arbitrary order. The idea involves some very cool maths! (if I may say so myself). Apologies that this answer has grown and grown, but I think it's worth it. It is a very interesting question and I've certainly learned a lot by thinking about it. The idea is that, instead of using a power series generating function as in Qiaochu's answer, you use a Dirichlet series which can be inverted with <a href="http://en.wikipedia.org/wiki/Perron%27s_formula" rel="nofollow noreferrer">Perron's formula</a>. First, the expansion is as follows,
$$
\begin{array}{ccc}
\displaystyle
p(n)=\sum_{\Re[r]+k\le \alpha}a_{r,k}(n)n^{-r-k}+o(n^{-\alpha}),&&(4)
\end{array}
$$
for any real $\alpha$. The sum is over nonnegative integers $k$ and complex numbers $r$ with real part at least 1 and satisfying $r+1=2^r$ (the leading term being $r=1$), and $a_{r,k}(n)$ is a periodic function of $n$, with period $2^k$. The reason for such exponents is that the difference equation (2) has the continuous-time limit $p^\prime(x)=p(x)/x-p(x/2)/x$ which has solutions $p(x)=x^{-r}$ for precisely such exponents. Splitting into real and imaginary parts $r=u+iv$, all solutions to $r+1=2^r$ lie on the line $(1+u)^2+v^2=4^u$ and, other than the leading term $u=1$, $v=0$, there is precisely one complex solution with imaginary part $2n\pi\le v\log2\le2n\pi+\pi/2$ (positive integer $n$) and, together with the complex conjugates, this lists all possible exponents $r$.
Then, $a_{r,k}(n)$ is determined (as a multiple of $a_{r,0}$) for $k > 0$ by substituting this expansion back into the difference equation as I did above. I arrived at this expansion after running the simulations plotted above (and T..'s mention of complex exponents of n helped). Then, the Dirichlet series idea nailed it.</p>
<p>Define the Dirichlet series with coefficients $p(n)/n$
$$
L(s)=\sum_{n=1}^\infty p(n)n^{-1-s},
$$
which converges absolutely for the real part of $s$ large enough (greater than 0, since $p$ is bounded by 1). It can be seen that $L(s)-1$ is of order $2^{-1-s}$ in the limit as the real part of $s$ goes to infinity. Multiplying (2) by $n^{-s}$, summing and expanding $n^{-s}$ in terms of $(n+1)^{-s}$ on the LHS gives the functional equation
$$
\begin{array}
\displaystyle
(s-1+2^{-s})L(s)=s+\sum_{k=1}^\infty(-1)^k\binom{-s}{k+1}(L(s+k)-1).&&(5)
\end{array}
$$
This extends $L$ to a meromorphic function on the whole complex plane with simple poles precisely at the points $-r$, where $r$ has real part at least one and satisfies $r+1 = 2^r$, and then at $-r-k$ for nonnegative integers $k$. The pole with largest real component is at $s = -1$ and has residue
$$
{\rm Res}(L,-1)={\rm Res}(s/(s-1+2^{-s}),-1)=\frac{1}{2\log2-1}.
$$
If we <em>define</em> $p^\prime(n)$ by the truncated expansion (4) for some suitably large $\alpha$, then it will satisfy the recurrence relation (2) up to an error term of size $O(n^{-\alpha-1})$ and its Dirichlet series will satisfy the functional equation (5) up to an error term which will be an analytic function over $\Re[s] > -\alpha$ (being a Dirichlet series with coefficients $o(n^{-\alpha-1})$). It follows that the Dirichlet series of $p^\prime$ has simple poles in the same places as that of $p$ on the domain $\Re[s] > -\alpha$ and, by choosing $a_{r,0}$ appropriately, it will have the same residues. Then, the Dirichlet series associated with $q(n)= p^\prime(n)-p(n)$ will be analytic on $\Re[s] > -\alpha$. We can use <a href="http://en.wikipedia.org/wiki/Perron%27s_formula" rel="nofollow noreferrer">Perron's formula</a> to show that $q(n)$ is of size $O(n^\beta)$ for any $\beta > -\alpha$ and, by making $\alpha$ as large as we like, this will prove the asymptotic expansion (4). Differentiated, Perron's formula gives
$$
dQ(x)/dx = \frac{1}{2\pi i}\int_{\beta-i\infty}^{\beta+i\infty}x^sL(s)\,ds
$$
where $Q(x)=\sum_{n\le x}q(n)$ and $\beta > -\alpha$. This expression is intended to be taken in the sense of distributions (i.e., multiply both sides by a smooth function with compact support in $(0,\infty)$ and integrate). If $\theta$ is a smooth function of compact support in $(0,\infty)$ then
$$
\begin{array}
\displaystyle\sum_{n=1}^\infty q(n)\theta(n/x)/x&\displaystyle=\int_0^\infty\theta(y)\dot Q(xy)\,dy\\\\
&\displaystyle=\frac{1}{2\pi i}\int_{\beta-i\infty}^{\beta+i\infty}x^sL(s)\int\theta(y)y^s\,dy\,ds=O(x^\beta)\ \ (6)
\end{array}
$$
We obtain the final bound, because by integration by parts, the integral of $\theta(y)y^s$ can be shown to go to zero faster than any power of $1/s$, so the integrand is indeed integrable and the $x^\beta$ term can be pulled outside.
This is enough to show that $q(n)$ is itself $O(n^\beta)$. Trying to finish this answer off without too much further detail, the argument is as follows. If $q(n)n^{-\beta}$ were unbounded then it would keep exceeding its previous maximum and, by the recurrence relation (2), it would take time at least <a href="http://en.wikipedia.org/wiki/Big-omega_notation" rel="nofollow noreferrer">Ω(n)</a> to get back close to zero. So, if $\theta$ has support in $[1,1+\epsilon]$ for small enough $\epsilon$, the integral $\int\theta(y)\dot Q(ny)\,dy$ will be of order $q(n)$ at such times and, as this happens infinitely often, it would contradict (6). Phew! I knew that this could be done, but it took some work! Possibly not as simple or direct as you were asking for, but Dirichlet series are quite standard (more commonly in analytic number theory, in my experience). However, maybe not really more difficult than the probabilistic method and you do get a whole lot more. This approach should also work for other types of recurrence relations and differential equations too.
<p>Finally, I added a much more detailed writeup on my blog, fleshing out some of the details which I skimmed over here. See <a href="http://almostsure.wordpress.com/2010/10/06/asymptotic-expansions-of-a-recurrence-relation/" rel="nofollow noreferrer">Asymptotic Expansions of a Recurrence Relation</a>.</p>
|
linear-algebra | <p>Suppose $A=uv^T$ where $u$ and $v$ are non-zero column vectors in ${\mathbb R}^n$, $n\geq 3$. $\lambda=0$ is an eigenvalue of $A$ since $A$ is not of full rank. $\lambda=v^Tu$ is also an eigenvalue of $A$ since
$$Au = (uv^T)u=u(v^Tu)=(v^Tu)u.$$
Here is my question:</p>
<blockquote>
<p>Are there any other eigenvalues of $A$?</p>
</blockquote>
<p>Added:</p>
<p>Thanks to Didier's comment and anon's answer, $A$ can not have other eigenvalues than $0$ and $v^Tu$. I would like to update the question:</p>
<blockquote>
<p>Can $A$ be diagonalizable?</p>
</blockquote>
| <p>We're assuming $v\ne 0$. The orthogonal complement of the linear subspace generated by $v$ (i.e. the set of all vectors orthogonal to $v$) is therefore $(n-1)$-dimensional. Let $\phi_1,\dots,\phi_{n-1}$ be a basis for this space. Then they are linearly independent and $uv^T \phi_i = (v\cdot\phi_i)u=0 $. Thus the the eigenvalue $0$ has multiplicity $n-1$, and there are no other eigenvalues besides it and $v\cdot u$.</p>
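<p>A quick numerical check of this (added sketch, not part of the original answer):</p>

<pre><code>import numpy as np

rng = np.random.default_rng(1)
n = 5
u = rng.standard_normal(n)
v = rng.standard_normal(n)
A = np.outer(u, v)                        # A = u v^T

print(np.round(np.linalg.eigvals(A), 6))  # n-1 zeros plus one other value
print("v^T u =", round(float(v @ u), 6))  # matches the nonzero eigenvalue
</code></pre>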
| <p>As to your last question, when is $A$ diagonalizable?</p>
<p>If $v^Tu\neq 0$, then from anon's answer you know the algebraic multiplicity of $\lambda$ is at least $n-1$, and from your previous work you know $\lambda=v^Tu\neq 0$ is an eigenvalue; together, that gives you at least $n$ eigenvalues (counting multiplicity); since the geometric and algebraic multiplicities of $\lambda=0$ are equal, and the other eigenvalue has algebraic multiplicity $1$, it follows that $A$ is diagonalizable in this case.</p>
<p>If $v^Tu=0$, on the other hand, then the above argument does not hold. But if $\mathbf{x}$ is nonzero, then you have $A\mathbf{x} = (uv^T)\mathbf{x} = u(v^T\mathbf{x}) = (v\cdot \mathbf{x})u$; if this is a multiple of $\mathbf{x}$, $(v\cdot\mathbf{x})u = \mu\mathbf{x}$, then either $\mu=0$, in which case $v\cdot\mathbf{x}=0$, so $\mathbf{x}$ is in the orthogonal complement of $v$; or else $\mu\neq 0$, in which case $v\cdot \mathbf{x} = v\cdot\left(\frac{v\cdot\mathbf{x}}{\mu}\right)u = \left(\frac{v\cdot\mathbf{x}}{\mu}\right)(v\cdot u) = 0$, and again $\mathbf{x}$ lies in the orthogonal complement of $v$; that is, the only eigenvectors lie in the orthogonal complement of $v$, and the only eigenvalue is $0$. This means the eigenspace is of dimension $n-1$, and therefore the geometric multiplicity of $0$ is strictly smaller than its algebraic multiplicity, so $A$ is not diagonalizable.</p>
<p>In summary, $A$ is diagonalizable if and only if $v^Tu\neq 0$, if and only if $u$ is not orthogonal to $v$. </p>
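<p>The borderline case can also be seen numerically (added sketch): pick $u$ orthogonal to $v$ and compare multiplicities.</p>

<pre><code>import numpy as np

v = np.array([1.0, 2.0, 3.0])
u = np.array([3.0, 0.0, -1.0])            # chosen so that v . u = 0
A = np.outer(u, v)

print("v . u =", v @ u)                           # 0
print("eigenvalues:", np.linalg.eigvals(A))       # all (numerically) zero
print("rank of A  :", np.linalg.matrix_rank(A))   # 1
print("A @ A == 0 :", np.allclose(A @ A, 0))      # True: A is nilpotent
# Geometric multiplicity of 0 is n - rank(A) = 2 < 3 = algebraic
# multiplicity, so A is not diagonalizable, matching the argument above.
</code></pre>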
|
combinatorics | <p>In <a href="https://math.stackexchange.com/questions/38350/n-lines-cannot-divide-a-plane-region-into-x-regions-finding-x-for-n">this</a> post it is mentioned that $n$ straight lines can divide the plane into a maximum number of $(n^{2}+n+2)/2$ different regions. </p>
<p>What happens if we use circles instead of lines? That is, what is the maximum number of regions into which n circles can divide the plane?</p>
<p>After some exploration it seems to me that in order to get maximum division the circles must intersect pairwise, with no two of them tangent, none of them being inside another and no three of them concurrent (That is no three intersecting at a point).</p>
<p>The answer seems to me to be affirmative, as the number I obtain is $n^{2}-n+2$ different regions. Is that correct?</p>
| <p>For the question stated in the title, the answer is yes, if more is interpreted as "more than or equal to".</p>
<p>Proof: let <span class="math-container">$\Lambda$</span> be a collection of lines, and let <span class="math-container">$P$</span> be the extended two plane (the Riemann sphere). Let <span class="math-container">$P_1$</span> be a connected component of <span class="math-container">$P\setminus \Lambda$</span>. Let <span class="math-container">$C$</span> be a small circle entirely contained in <span class="math-container">$P_1$</span>. Let <span class="math-container">$\Phi$</span> be the <a href="http://en.wikipedia.org/wiki/M%C3%B6bius_transformation" rel="nofollow noreferrer">conformal inversion</a> of <span class="math-container">$P$</span> about <span class="math-container">$C$</span>. Then by elementary properties of conformal inversion, <span class="math-container">$\Phi(\Lambda)$</span> is now a collection of circles in <span class="math-container">$P$</span>. The number of connected components of <span class="math-container">$P\setminus \Phi(\Lambda)$</span> is the same as the number of connected components of <span class="math-container">$P\setminus \Lambda$</span> since <span class="math-container">$\Phi$</span> is continuous. So this shows that <strong>for any collection of lines, one can find a collection of circles that divides the plane into at least the same number of regions</strong>.</p>
<p>Remark: via the conformal inversion, all the circles in <span class="math-container">$\Phi(\Lambda)$</span> thus constructed pass through the center of the circle <span class="math-container">$C$</span>. One can imagine that by perturbing one of the circles somewhat to reduce concurrency, one can increase the number of regions.</p>
<hr />
<p>Another way to think about it is that lines can be approximated by really, really large circles. So starting with a configuration of lines, you can replace the lines with really really large circles. Then in the finite region "close" to where all the intersections are, the number of regions formed is already the same as that coming from lines. But when the circles "curve back", additional intersections can happen and that can only introduce "new" regions.</p>
<hr />
<p>Lastly, <a href="http://mathworld.wolfram.com/PlaneDivisionbyCircles.html" rel="nofollow noreferrer">yes, the number you derived is correct</a>. See <a href="http://oeis.org/A014206" rel="nofollow noreferrer">also this OEIS entry</a>.</p>
| <p>One may deduce the formula $n^{2}-n+2$ as follows: Start with $m$ circles already drawn on the plane with no two of them tangent, none of them being inside another and no three of them concurrent. Then draw the $(m+1)$-st circle $C$ so that it does not violate the properties stated before and see how it helps increase the number of regions. Indeed, we can see that $C$ intersects each of the existing $m$ circles at two points. Therefore, $C$ is divided into $2m$ arcs, each of which divides in two a region formed previously by the first $m$ circles. But a circle divides the plane into two regions, and so we can count step by step ($m=1,2,\cdots, n$) the total number of regions obtained after drawing the $n$-th circle. That is,
$$
2+2(2-1)+2(3-1)+2(4-1)+\cdots+2(n-1)=n^{2}-n+2
$$</p>
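<p>(A two-line check of this count, added for convenience: the $m$-th circle contributes $2(m-1)$ new regions for $m\ge2$.)</p>

<pre><code>for n in range(1, 11):
    regions = 2 + sum(2 * (m - 1) for m in range(2, n + 1))
    assert regions == n * n - n + 2
    print(n, regions)   # 2, 4, 8, 14, 22, ...
</code></pre>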
<p>Since $n^{2}-n+2\ge (n^{2}+n+2)/2$ for $n\ge 1$ the answer is affirmative. </p>
<p>ADDENDUM: An easy way to see that the answer to my question is affirmative without finding a formula may be as follows: Suppose that $l_{n}$ is the maximum number of regions into which the plane $\mathbb{R}^{2}$ can be divided by $n$ lines, and that $c_{n}$ is the maximum number of regions into which the plane can be divided by $n$ circles. </p>
<p>Now, in the one-point compactification $\mathbb{R}^{2}\cup\{\infty\}$ of the plane, denoted by $S$ (a sphere), the $n$ lines become circles all intersecting at the point $\infty$. Therefore, these circles divide $S$ into at least $l_{n}$ regions. Now, if we pick a point $p$ in the complement in $S$ of the circles and take the stereographic projection through $p$ mapping onto the plane tangent to $S$ at the antipode of $p$, we obtain a plane which is divided by $n$ circles into at least $l_{n}$ regions. Therefore, $l_{n}\le c_{n}$.</p>
<p>Moreover, from this we can see that the plane and the sphere have equal maximum number of regions into which they can be divided by circles. </p>
|
matrices | <p>Given a square matrix,
is the transpose of the inverse equal to the inverse of the transpose?</p>
<p><span class="math-container">$$
(A^{-1})^T = (A^T)^{-1}
$$</span></p>
| <p>Is $(A^{-1})^T = (A^T)^{-1}$ you ask.</p>
<p>Well
$$\begin{align}
A^T(A^{-1})^T = (A^{-1}A)^{T} = I^T = I \\
(A^{-1})^TA^T = (AA^{-1})^{T} = I^T = I
\end{align}
$$</p>
<p>This proves that the inverse of $A^T$ is $(A^{-1})^T$. So the answer to your question is yes.</p>
<p>Here I have used that
$$
A^TB^T = (BA)^T.
$$
And we have used that the inverse of a matrix $A$ is exactly (by definition) the matrix $B$ such that $AB = BA = I$.</p>
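<p>A one-line numerical confirmation (added, not part of the original answer):</p>

<pre><code>import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))    # a random matrix is almost surely invertible
print(np.allclose(np.linalg.inv(A).T, np.linalg.inv(A.T)))   # True
</code></pre>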
| <p>Given that $A\in \mathbb{R}^{n\times n}$ is invertible, $(A^{-1})^T = (A^T)^{-1}$ holds.</p>
<p>Proof:</p>
<p>$A$ is invertible and $\textrm{rank }A = n = \textrm{rank }A^T,$
so $A^T$ is invertible as well. Conclusion:
$$(A^{-1})^T = (A^{-1})^TA^T(A^T)^{-1} = (AA^{-1})^T(A^T)^{-1} = \textrm{id}^T(A^T)^{-1} = (A^T)^{-1}.$$</p>
|
differentiation | <p>I'm sorry if I sound too ignorant, I don't have a high level of knowledge in math.</p>
<p>The function $f(z)=z^2$ (where $z$ is a complex number) has a derivative equal to $2z$. </p>
<p>I'm really confused about this. If we define the derivative of $f(z)$ as the limit as $h$ approaches $0$ (being $h$ a complex number) of $(f(z+h)-f(z))/h$, then clearly the derivative is $2z$, but what does this derivative represent??</p>
<p>Also, shouldn't we be able to represent a complex function in 4-dimensional space, since our input and output have 2 variables each ($z=x+iy$) and then we could take directional derivatives...right?</p>
<p>But if we define the derivative as above, it would be the same if we approach it from all directions. That's what's bothering me so much.</p>
<p>I would really appreciate any explanation. Thanks!</p>
| <blockquote>
<p>If we define the derivative of <span class="math-container">$f(z)$</span> as the limit as <span class="math-container">$h$</span> approaches <span class="math-container">$0$</span> (being <span class="math-container">$h$</span> a complex number) of <span class="math-container">$(f(z+h)−f(z))/h$</span>...</p>
</blockquote>
<p>That's precisely what we do.</p>
<blockquote>
<p>then clearly the derivative is <span class="math-container">$2z$</span>, but what does this derivative represent?? </p>
</blockquote>
<p>Well, it represents what we have defined: the limit of the incremental ratio, same as in the real case. Probably you're wondering if we can interpret this complex derivative geometrically or visually, as we interpret the (real) derivative as the slope of the tangent line... There is no such a simple pictorial interpretation.</p>
<blockquote>
<p>Also, shouldn't we be able to represent a complex function in 4-dimensional space, since our input and output have 2 variables each ... and then we could take directional derivatives...right?</p>
</blockquote>
<p>Indeed, you could consider each of the two components (real and imaginary) separately, both for the variable and for the function, and then you'd get four (real) derivatives. And, yes, it's natural to ask how these four derivatives are related to the complex derivative, and if there are some (necessary and/or sufficient) restrictions on them so that the complex derivative gives the same value no matter the "direction" (as one would want)... Behold the <a href="https://en.wikipedia.org/wiki/Cauchy%E2%80%93Riemann_equations" rel="nofollow noreferrer">Cauchy–Riemann equations</a> and <a href="https://en.wikipedia.org/wiki/Holomorphic_function" rel="nofollow noreferrer">holomorphic functions</a>.</p>
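<p>A quick numerical illustration (added sketch): the difference quotient of $f(z)=z^2$ is direction-independent, while that of $g(z)=\bar z^2$, which fails the Cauchy–Riemann equations, is not.</p>

<pre><code>import cmath

z0 = 1 + 2j
h = 1e-6
for theta in (0.0, cmath.pi / 4, cmath.pi / 2):
    d = h * cmath.exp(1j * theta)     # approach z0 from the angle theta
    fq = ((z0 + d) ** 2 - z0 ** 2) / d
    gq = ((z0 + d).conjugate() ** 2 - z0.conjugate() ** 2) / d
    print(f"theta={theta:.3f}  f quotient={fq:.4f}  g quotient={gq:.4f}")
# f's quotient is ~ 2*z0 = 2+4j from every direction; g's depends on theta.
</code></pre>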
| <p>A complex-valued function $f$ is <em>differentiable</em> if near an arbitrary point $z_{0}$, the function $f$ acts like multiplication by a complex number $f'(z_{0})$ in the sense that
$$
f(z_{0} + h) = f(z_{0}) + h\, f'(z_{0}) + o(|h|).
$$
Since multiplication by a non-zero complex number
$$
\alpha = |\alpha| \exp(i\theta)
$$
rotates the plane (about the origin) through an angle $\theta$ and scales the plane (about the origin) by a factor $|\alpha|$, a complex-differentiable function is <em>properly conformal</em> (preserves both orientation and angle) at each point where the derivative is non-zero.</p>
<p>You're correct that drawing the <em>graph</em> of a complex-valued function of one complex variable entails four-dimensional plotting. Drawing the <em>image of a Cartesian grid</em>, however, is perfectly feasible in the plane:</p>
<p><a href="https://i.sstatic.net/HnyFR.png" rel="noreferrer"><img src="https://i.sstatic.net/HnyFR.png" alt="A complex-differentiable map"></a>
<a href="https://i.sstatic.net/rjjlU.png" rel="noreferrer"><img src="https://i.sstatic.net/rjjlU.png" alt="A smooth, non-holomorphic map"></a></p>
<p>Each map above is defined by a pair of real cubic polynomials; the red curves are the images of vertical lines; the blue curves are images of horizontal lines. The maps differ in the sign of one component of the cubic term. One map is complex differentiable (or <em>holomorphic</em>), one is not.</p>
<p>The holomorphic map sends a square grid to an "infinitesimally square grid". (There is one critical point, whose image is visually apparent.) The derivative $f'(z_{0})$ at a point determines the size of the image (quasi-)square and the angle of rotation compared to the original Cartesian square.</p>
<p>The non-holomorphic map sends a square grid to a grid of curvilinear parallelograms that are not generally squares. This is the geometric manifestation of the directional derivatives at $z_{0}$ depending on the direction of approach to $z_{0}$: The Cartesian grid in the domain "gets stretched and rotated by differing amounts in different directions at a single point."</p>
|
probability | <p>On average, how many times must I roll a die until I get a $6$?</p>
<p>I got this question from a book called Fifty Challenging Problems in Probability. </p>
<p>The answer is $6$, and I understand the solution the book has given me. However, I want to know why the following logic does not work: The chance that we do not get a $6$ is $5/6$. In order to find the number of rolls needed, I want the probability of there being a $6$ in $n$ rolls to be $1/2$ in order to find the average. So I solve the equation $(5/6)^n=1/2$, which gives me $n=3.8$-ish. That number makes sense to me intuitively, whereas the number $6$ does not make sense intuitively. I feel like on average, I would need to roll about $3$-$4$ times to get a $6$. Sometimes, I will have to roll fewer than $3$-$4$ times, and sometimes I will have to roll more than $3$-$4$ times.</p>
<p>Please note that I am not asking how to solve this question, but what is wrong with my logic above.</p>
<p>Thank you!</p>
| <p>You can calculate the average this way also.</p>
<p>The probability of rolling your first <span class="math-container">$6$</span> on the <span class="math-container">$n$</span>-th roll is <span class="math-container">$$\left[1-\left(\frac{5}{6}\right)^n\right]-\left[1-\left(\frac{5}{6}\right)^{n-1}\right]=\left(\frac{5}{6}\right)^{n-1}-\left(\frac{5}{6}\right)^{n}$$</span></p>
<p>So the weighted average on the number of rolls would be
<span class="math-container">$$\sum_{n=1}^\infty \left(n\left[\left(\frac{5}{6}\right)^{n-1}-\left(\frac{5}{6}\right)^{n}\right]\right)=6$$</span></p>
<p>Again, as noted already, the difference between mean and median comes into play. The distribution has a long tail out to the right, pulling the mean up to <span class="math-container">$6$</span>.
<img src="https://i.sstatic.net/iItFE.jpg" alt="enter image description here" /></p>
<p>For those asking about this graph, it is the expression above, without the summation. It is not cumulative. (The cumulative graph would level off at <span class="math-container">$y=6$</span>). This graph is just <span class="math-container">$y=x\left[\left(\frac{5}{6}\right)^{x-1}-\left(\frac{5}{6}\right)^{x}\right]$</span></p>
<p>It's not a great graph, honestly, as it is kind of abstract in what it represents. But let's take <span class="math-container">$x=4$</span> as an example. There is about a <span class="math-container">$0.0965$</span> chance of getting the first roll of a <span class="math-container">$6$</span> on the <span class="math-container">$4$</span>th roll. And since we're after a weighted average, that is multiplied by <span class="math-container">$4$</span> to get the value at <span class="math-container">$x=4$</span>. It doesn't mean much except to illustrate why the mean number of throws to get the first <span class="math-container">$6$</span> is higher than around <span class="math-container">$3$</span> or <span class="math-container">$4.$</span></p>
<p>You can imagine an experiment with <span class="math-container">$100$</span> trials. About <span class="math-container">$17$</span> times it will only take <span class="math-container">$1$</span> throw(<span class="math-container">$17$</span> throws). About <span class="math-container">$14$</span> times it will take <span class="math-container">$2$</span> throws (<span class="math-container">$28$</span> throws). About <span class="math-container">$11$</span> times it will take <span class="math-container">$3$</span> throws(<span class="math-container">$33$</span> throws). About <span class="math-container">$9$</span> times it will take <span class="math-container">$4$</span> throws(<span class="math-container">$36$</span> throws) etc. Then you would add up ALL of those throws and divide by <span class="math-container">$100$</span> and get <span class="math-container">$\approx 6.$</span></p>
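<p>That experiment is easy to run (added sketch, scaled up to reduce noise). Note the mean is close to $6$ while the median number of rolls is only $4$, close to the question's $3.8$-ish intuition:</p>

<pre><code>import random
import statistics

rolls_needed = []
for _ in range(100_000):
    n = 1
    while random.randint(1, 6) != 6:
        n += 1
    rolls_needed.append(n)

print("mean  :", statistics.mean(rolls_needed))    # ~ 6
print("median:", statistics.median(rolls_needed))  # ~ 4
</code></pre>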
| <p>The distribution of the time of first success is the <a href="http://en.wikipedia.org/wiki/Geometric_distribution" rel="noreferrer">Geometric distribution</a>.</p>
<p>The distribution formula is:</p>
<p>$$P(X=n) = pq^{n-1}$$</p>
<p>where $q=1-p$.</p>
<p>It's very simple to explain this formula. Let's say that rolling a $6$ counts as a success. Then the probability of getting a success at the first try is</p>
<p>$$P(X=1) = p = pq^0= \frac{1}{6}$$</p>
<p>To get a success at the second try, we have to fail once and then get our 6:</p>
<p>$$P(X=2)=qp=pq^1=\frac{1}{6}\frac{5}{6}$$</p>
<p>and so on.</p>
<p>The expected value of this distribution answers this question: on average, how many tries do I need before getting my first success? The expected value for the Geometric distribution is:</p>
<p>$$E(X)=\displaystyle\sum^\infty_{n=1}npq^{n-1}=\frac{1}{p}$$</p>
<p>or, in our example, $6$.</p>
<p><strong>Edit:</strong> We are assuming multiple independent tries with the same probability, obviously.</p>
|
number-theory | <p>So I discovered the following formula by using the Taylor series for $\ln (x+1)$ $$x= \ln (x+1)+\frac{1}{2}\ln(x^2+1)-\frac{1}{3}\ln(x^3+1)+\frac{1}{2}\ln(x^4+1)-\frac{1}{5}\ln(x^5+1)-\frac{1}{6}\ln(x^6+1)$$$$-\frac{1}{7}\ln(x^7+1)+\frac{1}{2}\ln(x^8+1)-\frac{1}{10}\ln(x^{10}+1)-\frac{1}{11}\ln(x^{11}+1)-\frac{1}{6}\ln(x^{12}+1)$$$$-\frac{1}{13}\ln(x^{13}+1)-\frac{1}{14}\ln(x^{14}+1)+\frac{1}{15}\ln(x^{15}+1)+\frac{1}{2}\ln(x^{16}+1)+...$$
I then realized that this means that $$e^x=\prod_{n=1}^{\infty}(x^n+1)^{a_n},$$ where $a_1 =1$ and $a_n$ is defined by the recurrence relation $$a_n=\sum_{d|n \land d > 1} \frac{a_{\frac{n}{d}}(-1)^d}{d}$$ when $n > 1$.</p>
<p>So far, I've noticed the following properties about $a_n$:$$a_{2^m}=\frac{1}{2}$$$$a_p=-\frac{1}{p}$$$$a_{p^k}=0$$ for prime $p>2$, positive integer $m$, and integer $k>1$.</p>
<p>When I plugged in the denominators of $a_n$ into OEIS, the closest sequence I got was the sequence of <a href="https://oeis.org/A007947">integer radicals</a>.</p>
<blockquote>
<p><strong>I am interested in the following things:</strong></p>
<p>How can we prove my conjecture about $a_n$ (look below, under "EDIT")? For what values of $x$ does this product converge? Has this formula for $e^x$ been documented anywhere (I'm sure it has, but I'd like to read the paper)? How does this representation of $e^x$ relate to other representations?</p>
</blockquote>
<p>Thanks in advance!</p>
<p><strong>EDIT</strong></p>
<p>After plugging in more values for $a_n$, I realized that my original conjecture that $$\lvert a_n \rvert = \begin{cases}
\displaystyle\operatorname{rad}(n)^{-1}, & \mbox{if } n \neq p^k \\
0, & \mbox{if } n=p^k
\end{cases}$$ for prime $p>2$ and integer $k>1$, was incorrect when I found that $a_{18}=0$. However, I was able to formulate a new conjecture about $a_n$ based on my findings: $$ a_n = \frac{\mu \left( \operatorname{Od}(n) \right) }{\operatorname{rad}(n)} = \frac{\mu \left( \frac{n}{2^{\nu_2 (n)}} \right) }{\operatorname{rad}(n)} $$
where $\mu(n)$ is the <a href="https://en.wikipedia.org/wiki/M%C3%B6bius_function">Möbius function</a>, $\nu_p(n)$ is the <a href="https://en.wikipedia.org/wiki/P-adic_order"><em>p</em>-adic order</a> of $n$, $\operatorname{Od}(n)$ is the <a href="http://mathworld.wolfram.com/OddPart.html">odd part</a> of $n$, and $\operatorname{rad}(n)$ is the <a href="https://en.wikipedia.org/wiki/Radical_of_an_integer">radical</a> of $n$. I had a friend test values for this and it holds for all values up to at least $1024$. Unfortunately, though, I have no idea how to prove this conjecture.</p>
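<p>(Added note: the recurrence and the conjectured closed form are easy to compare with exact rational arithmetic, and the truncated product can also be checked against $e^x$ numerically for a small $x$; both checks pass.)</p>

<pre><code>from fractions import Fraction
from math import exp

def factorize(n):
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def mobius(n):
    f = factorize(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def rad(n):
    r = 1
    for prime in factorize(n):
        r *= prime
    return r

N = 1024
a = {1: Fraction(1)}
for n in range(2, N + 1):                     # the recurrence for a_n
    a[n] = sum(Fraction((-1) ** d, d) * a[n // d]
               for d in range(2, n + 1) if n % d == 0)

for n in range(1, N + 1):                     # compare with the conjecture
    odd = n
    while odd % 2 == 0:
        odd //= 2
    assert a[n] == Fraction(mobius(odd), rad(n))
print("conjecture holds up to", N)

x, prod = 0.3, 1.0                            # truncated product vs e^x
for n in range(1, N + 1):
    prod *= (x ** n + 1) ** float(a[n])
print(prod, "vs", exp(x))
</code></pre>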
<p><strong>NOTE</strong>
@ZhenhuaLiu has answered my biggest question by proving my conjecture. However, if you do have an answer to any of my other questions, feel free to leave an answer about it.</p>
| <p>We'll use $D(f(n),s)$ to denote the Dirichlet series of arithmetic function $f(n)$.</p>
<p>The recurrence relation obtained by marty cohen is $$\sum_{d|n}da_{d}(-1)^{\frac{n}{d}-1}=
\begin{cases}
1 &\mbox{if } n=1,\\
0&\mbox{if } n>1.
\end{cases}
$$
We can rewrite it as Dirichlet convolution
$$
na_{n}*(-1)^{n-1}=I,
$$
where $I$ is the multiplicative identity of Dirichlet convolution.</p>
<p>The convolution can be translated into the language of Dirichlet series as
$$
D(na_{n},s)D((-1)^{n-1},s)=1.
$$
On the other hand, we have
$$
D((-1)^{n-1},s)=\eta(s)=(1-2^{1-s})\zeta(s),
$$
where $\eta(s)$ is the Dirichlet eta function.</p>
<p>Thus we have
$$\begin{align*}
D(na_n,s)&=\frac{1}{1-2^{1-s}}\frac{1}{\zeta(s)}\\
&=\left(\sum_{n\ge 0}\frac{2^n}{(2^n)^s}\right)D(\mu(n),s).\\
\end{align*}.$$</p>
<p>Translating the product of Dirichlet series into the language of Dirichlet convolution, we have
$$\begin{align*}
a_n&=\frac{1}{n}\sum_{d|n}d[\log_{2}d\in \mathbb{N}]\mu(\frac{n}{d})\\
&=\frac{1}{n}\sum_{k=0}^{\nu_{2}(n)}2^k\mu(\frac{n}{2^k}),\\
\end{align*}
$$
where $[P]$ denotes the Iverson bracket.</p>
<p>If $\mu(\frac{n}{2^{\nu_{2}(n)}})=0$, clearly we have $$a_n=0=\frac{\mu(\frac{n}{2^{\nu_{2}(n)}})}{\mathrm{rad}(n)}.$$</p>
<p>If $\mu(\frac{n}{2^{\nu_{2}(n)}})\neq 0$, we have
$$
a_n=\frac{1}{n}\mu(n)=\frac{\mu(\frac{n}{2^{\nu_{2}(n)}})}{\mathrm{rad}(n)},$$
when $\nu_{2}(n)=0,$
and
$$\begin{align*}
a_n&=\frac{1}{n}\left(2^{\nu_{2}(n)}\mu(\frac{n}{2^{\nu_{2}(n)}})+2^{\nu_{2}(n)-1}\mu(2\frac{n}{2^{\nu_{2}(n)}})\right)\\
&=\frac{1}{n}\left(2^{\nu_{2}(n)}\mu(\frac{n}{2^{\nu_{2}(n)}})+\frac{1}{2}2^{\nu_{2}(n)}\mu(\frac{n}{2^{\nu_{2}(n)}})\mu(2)\right)\\
&=\frac{2^{\nu_{2}(n)}}{2n}\mu(\frac{n}{2^{\nu_{2}(n)}})\\
&=\frac{\mu(\frac{n}{2^{\nu_{2}(n)}})}{\mathrm{rad}(n)},\\
\end{align*}
$$
when $\nu_{2}(n)\ge 1.$</p>
<p>Combining the results from all cases, we've proved your conjecture $$a_n=\frac{\mu(\frac{n}{2^{\nu_{2}(n)}})}{\mathrm{rad}(n)}.$$</p>
| <p>Following Michael's suggestion,
if
$x
=\sum_{n=1}^{\infty} a_n \ln(x^n+1)
$,
then
$1
=\sum_{n=1}^{\infty} a_n \frac{nx^{n-1}}{x^n+1}
$,
so</p>
<p>$\begin{array}\\
1
&=\sum_{n=1}^{\infty} a_n \frac{nx^{n-1}}{x^n+1}\\
&=\sum_{n=1}^{\infty} a_n nx^{n-1}\sum_{m=0}^{\infty} (-1)^m x^{mn}\\
\text{or}\\
x
&=\sum_{n=1}^{\infty} a_n nx^{n}\sum_{m=0}^{\infty} (-1)^m x^{mn}\\
&=\sum_{n=1}^{\infty} a_n n\sum_{m=0}^{\infty} (-1)^m x^{mn+n}\\
&=\sum_{n=1}^{\infty} a_n n\sum_{m=0}^{\infty} (-1)^m x^{n(m+1)}\\
&=\sum_{n=1}^{\infty} a_n n\sum_{m=1}^{\infty} (-1)^{m-1} x^{nm}\\
&=\sum_{k=1}^{\infty}\sum_{d|k} a_d d (-1)^{k/d-1} x^{k}\\
&=\sum_{k=1}^{\infty}x^k\sum_{d|k} a_d d (-1)^{k/d-1}\\
\text{so that}\\
a_1
&=1\\
\text{and}\\
0
&=\sum_{d|k} a_d d (-1)^{k/d-1}
\qquad\text{for } k > 1\\
&=ka_k+\sum_{d|k, d<k} a_d d (-1)^{k/d-1}
\text{or}\\
k a_k
&=\sum_{d|k, d<k} a_d d (-1)^{k/d}\\
\text{or}\\
a_k
&=\dfrac1{k}\sum_{d|k, d<k} a_d d (-1)^{k/d}\\
&=\dfrac1{k}\left((-1)^k+\sum_{d|k, 1<d<k} a_d d (-1)^{k/d}\right)\\
\end{array}
$</p>
<p>In particular,
if $p$ is prime,
$a_p
=\dfrac{(-1)^p}{p}
$,
so
$a_2 = \frac12$
and
$a_p = \dfrac{-1}{p}$
if $p$ is an odd prime.</p>
<p>If
$k = 2^m$,</p>
<p>$\begin{array}\\
a_{2^m}
&=\dfrac1{2^m}\left(1+\sum_{d|2^m, 1<d<2^m} a_d d (-1)^{2^m/d}\right)\\
&=\dfrac1{2^m}\left(1+\sum_{j=1}^{m-1} a_{2^j} 2^j \right)\\
\end{array}
$</p>
<p>Therefore
$a_4
=\frac14(1+2a_2)
=\frac12
$.</p>
<p>If
$a_{2^j}
=\frac12
$
for
$1 < j < m$,
then</p>
<p>$\begin{array}\\
a_{2^m}
&=\dfrac1{2^m}\left(1+\sum_{j=1}^{m-1} \frac12 2^j \right)\\
&=\dfrac1{2^m}\left(1+\sum_{j=0}^{m-2} 2^j \right)\\
&=\dfrac1{2^m}\left(1+2^{m-1}-1 \right)\\
&=\frac12\\
\end{array}
$</p>
<p>for all $m$.</p>
<p>If
$k = p^2$,
$a_{p^2}
=\dfrac1{p^2}(-1+pa_p(-1)^p)
=\dfrac1{p^2}(-1+p\frac{-1}{p}(-1))
=0
$.</p>
<p>If
$k = p^m$
where $m > 2$,
suppose
$a_{p^j}
= 0
$
for
$1 < j < m$.
This is true for
$m=3$.</p>
<p>Then</p>
<p>$\begin{array}\\
a_{p^m}
&=\dfrac1{p^m}\left((-1)^{p^m}+\sum_{d|p^m, 1<d<p^m} a_d d (-1)^{p^m/d}\right)\\
&=\dfrac1{p^m}\left(-1-\sum_{j=1}^{m-1} a_{p^j} p^j \right)\\
&=\dfrac1{p^m}\left(-1-a_p p -\sum_{j=2}^{m-1} a_{p^j} p^j \right)\\
&=0\\
\end{array}
$</p>
<p>If
$k=2p$
where $p$ is an odd prime,</p>
<p>$\begin{array}\\
a_{2p}
&=\dfrac1{2p}\left(1+2a_2(-1)^p+a_pp(-1)^2\right)\\
&=\dfrac1{2p}\left(1-2\dfrac12+\dfrac{-1}{p}p\right)\\
&=\dfrac1{2p}\left(1-1-1\right)\\
&=\dfrac{-1}{2p}\\
\end{array}
$</p>
<p>If
$k=pq$
where $p$ and $q$ are distinct odd primes,
then
$a_{pq}
=\dfrac1{pq}\left((-1)^{pq}+a_pp(-1)^p+a_qq(-1)^q\right)
=\dfrac1{pq}\left(-1+1+1\right)
=\dfrac{1}{pq}
$.</p>
<p>I'll leave it at this.</p>
|
logic | <p>I'm currently trying to understand the concepts and theory behind some of the common proof verifiers out there, but am not quite sure on the exact nature and construction of the sort of systems/proof calculi they use. Are they essentially based on higher-order logics that use Henkin semantics, or is there something more to it? As I understand, extending Henkin semantics to higher-order logic does not render the formal system any less sound, though I am not too clear on that.</p>
<p>Though I'm mainly looking for a general answer with useful examples, here are a few specific questions:</p>
<ul>
<li>What exactly is the role of type theory in creating higher-order logics? Same goes with category theory/model theory, which I believe is an alternative.</li>
<li>Is extending a) natural deduction, b) sequent calculus, or c) some other formal system the best way to go for creating higher order logics?</li>
<li>Where does typed lambda calculus come into proof verification?</li>
<li>Are there any other approaches than higher order logic to proof verification?</li>
<li>What are the limitations/shortcomings of existing proof verification systems (see below)?</li>
</ul>
<p>The Wikipedia pages on proof verification programs such as <a href="http://en.wikipedia.org/wiki/HOL_Light">HOL Light</a> <a href="http://en.wikipedia.org/wiki/Calculus_of_constructions">Coq</a>, and <a href="http://us.metamath.org/">Metamath</a> give some idea, but these pages contain limited/unclear information, and there are rather few specific high-level resources elsewhere. There are so many variations on formal logics/systems used in proof theory that I'm not sure quite what the base ideas of these systems are - what is required or optimal and what is open to experimentation.</p>
<p>Perhaps a good way of answering this, certainly one I would appreciate, would be a brief guide (albeit with some technical detail/specifics) on how one might go about generating a complete proof calculus (proof verification system) from scratch? Any other information in the form of explanations and examples would be great too, however.</p>
| <p>I'll answer just part of your question: I think the other parts will become clearer based on this.</p>
<p>A proof verifier is essentially a program that takes one argument, a proof representation, and checks that this is properly constructed, and says OK if it is, and either fails silently otherwise, or highlights what is invalid otherwise.</p>
<p>In principle, the proof representation could just be a sequence of formulae in a Hilbert system: all logics (at least, first-orderisable logics) can be represented in such a way. You don't even need to say which rule is specified at each step, since it is decidable whether any formula follows by a rule application from earlier formulae.</p>
<p>In practice, though, the proof representations are more complex. Metamath is rather close to Hilbert systems, but has a rich set of rules. Coq and LF use (different) typed lambda calculi with definitions to represent the steps, which are computationally quite expensive to check (IIRC, both are PSPACE hard). And the proof verifier can do much more: Coq allows ML programs to be extracted from proofs.</p>
| <p>I don't think the people working in higher-order theorem proving really care about Henkin semantics or models in general, they mostly work with their proof calculi. As long as there are no contradictions or other counterintuitive theorems they are happy. The most important and most difficult theorem they prove is usually that their proof terms terminate, which IIRC can be viewed as a form of soundness.</p>
<p>Henkin semantics is most interesting for people trying to extend their first-order methods to higher-order logic, because it behaves essentially like models of first-order logic. Henkin semantics is somewhat weaker than what you would get with standard set-theoretic semantics, which by Gödels incompleteness theorem can't have a complete proof calculus. I think type theories should lie somewhere in between Henkin and standard semantics.</p>
<blockquote>
<p>Where does typed lambda calculus come into proof verification?</p>
</blockquote>
<p>To prove some implication <code>P(x) --> Q(x)</code> with some free variables <code>x</code> you need to map any proof of <code>P(x)</code> to a proof of <code>Q(x)</code>. Syntactically a map (a function) can be represented as a lambda term.</p>
<blockquote>
<p>Are there any other approaches than higher order logic to proof verification?</p>
</blockquote>
<p>You can also verify proofs in first-order or any other logic, but then you would lose much of the power of the logic. First-order logic is mostly interesting because it is possible to automatically find proofs, if they are not too complicated. The same applies even more to propositional logic.</p>
<blockquote>
<p>What are the limitations/shortcomings of existing proof verification systems (see below)?</p>
</blockquote>
<p>The more powerful the logic becomes the harder it becomes to construct proofs.</p>
<p>Since the systems are freely available I suggest you play with them, e.g. Isabelle and Coq for a start.</p>
|
probability | <p>Suppose $X$ is a real-valued random variable and let $P_X$ denote the distribution of $X$. Then
$$
E(|X-c|) = \int_\mathbb{R} |x-c| dP_X(x).
$$
<a href="http://en.wikipedia.org/wiki/Median#Medians_of_probability_distributions" rel="noreferrer">The medians</a> of $X$ are defined as any number $m \in \mathbb{R}$ such that $P(X \leq m) \geq \frac{1}{2}$ and $P(X \geq m) \geq \frac{1}{2}$.</p>
<p>Why do the medians solve
$$
\min_{c \in \mathbb{R}} E(|X-c|) \, ?
$$</p>
| <p>For <strong>every</strong> real valued random variable $X$,
$$
\mathrm E(|X-c|)=\int_{-\infty}^c\mathrm P(X\leqslant t)\,\mathrm dt+\int_c^{+\infty}\mathrm P(X\geqslant t)\,\mathrm dt
$$
hence the function $u:c\mapsto \mathrm E(|X-c|)$ is differentiable almost everywhere and, where $u'(c)$ exists, $u'(c)=\mathrm P(X\leqslant c)-\mathrm P(X\geqslant c)$. Hence $u'(c)\leqslant0$ if $c$ is smaller than every median, $u'(c)=0$ if $c$ is a median, and $u'(c)\geqslant0$ if $c$ is greater than every median.</p>
<p>The formula for $\mathrm E(|X-c|)$ is the integrated version of the relations $$(x-y)^+=\int_y^{+\infty}[t\leqslant x]\,\mathrm dt$$ and $|x-c|=((-x)-(-c))^++(x-c)^+$, which yield, for every $x$ and $c$,
$$
|x-c|=\int_{-\infty}^c[x\leqslant t]\,\mathrm dt+\int_c^{+\infty}[x\geqslant t]\,\mathrm dt
$$</p>
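<p>A quick numerical illustration of this (a sketch using a skewed distribution, so that mean and median differ): estimating $\mathrm E(|X-c|)$ over a grid of $c$ from a large sample puts the minimizer at the sample median.</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100_000)  # median = 2 ln 2 ~ 1.386, mean = 2

cs = np.linspace(0.0, 5.0, 1001)
cost = np.array([np.abs(x - c).mean() for c in cs])  # estimates E|X - c|

print("argmin of E|X-c| :", cs[cost.argmin()])  # ~ 1.39
print("sample median    :", np.median(x))       # ~ 1.39
</code></pre>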
| <p>Let $f$ be the pdf and let $J(c) = E(|X-c|)$. We want to minimize $J(c)$. Note that $E(|X-c|) = \int_{\mathbb{R}} |x-c| f(x) dx = \int_{-\infty}^{c} (c-x) f(x) dx + \int_c^{\infty} (x-c) f(x) dx.$</p>
<p>To find the minimum, set $\frac{dJ}{dc} = 0$. Hence, we get that,
$$\begin{align}
\frac{dJ}{dc} & = (c-x)f(x) | _{x=c} + \int_{-\infty}^{c} f(x) dx + (x-c)f(x) | _{x=c} - \int_c^{\infty} f(x) dx\\
& = \int_{-\infty}^{c} f(x) dx - \int_c^{\infty} f(x) dx = 0
\end{align}
$$</p>
<p>Hence, we get that $c$ is such that $$\int_{-\infty}^{c} f(x) dx = \int_c^{\infty} f(x) dx$$ i.e. $$P(X \leq c) = P(X > c).$$</p>
<p>However, we also know that $P(X \leq c) + P(X > c) = 1$. Hence, we get that $$P(X \leq c) = P(X > c) = \frac12.$$</p>
<p><strong>EDIT</strong></p>
<p>When $X$ doesn't have a density, you can instead make use of integration by parts. Assuming $E|X| < \infty$, so that the boundary terms vanish, we get that $$\displaystyle \int_{-\infty}^{c} (c-x)\, dP(x) = \int_{-\infty}^{c} P(x)\, dx.$$ Similarly, we also get that $$\displaystyle \int_{c}^{\infty} (x-c)\, dP(x) = \int_{c}^{\infty} \left(1 - P(x)\right) dx,$$ and the same minimization argument goes through with these expressions in place of the density integrals.</p>
|
combinatorics | <p>You are a student, assigned to work in the cafeteria today, and it is your duty to divide the available food between all students. The food today is a sausage of 1m length, and you need to cut it into as many pieces as students come for lunch, including yourself.</p>
<p>The problem is, the knife is operated by the rotating door through which the students enter, so every time a student comes in, the knife comes down and you place the cut. There is no way for you to know if more students will come or not, so after each cut, the sausage should be cut into pieces of approximately equal length. </p>
<p>So here is the question - is it possible to place the cuts in a manner that ensures the ratio of the largest and the smallest piece is always below 2?</p>
<p>And if so, what is the smallest possible ratio?</p>
<p>Example 1 (unit is cm):</p>
<ul>
<li>1st cut: 50 : 50 ratio: 1 </li>
<li>2nd cut: 50 : 25 : 25 ratio: 2 - bad</li>
</ul>
<p>Example 2</p>
<ul>
<li>1st cut: 40 : 60 ratio: 1.5</li>
<li>2nd cut: 40 : 30 : 30 ratio: 1.33</li>
<li>3rd cut: 20 : 20 : 30 : 30 ratio: 1.5</li>
<li>4th cut: 20 : 20 : 30 : 15 : 15 ratio: 2 - bad</li>
</ul>
<p>Sorry for the awful analogy, I think this is a math problem but I have no real idea how to formulate this in a proper mathematical way.</p>
| <p>TLDR: $a_n=\log_2(1+1/n)$ works, and is the only smooth solution.</p>
<p>This problem hints at a deeper mathematical question, as follows. As has been observed by Pongrácz, there is a great deal of possible variation in solutions to this problem. I would like to find a "best" solution, where the sequence of pieces is somehow as evenly distributed as possible, given the constraints.</p>
<p>Let us fix the following strategy: at stage $n$ there are $n$ pieces, of lengths $a_n,\dots,a_{2n-1}$, ordered in decreasing length. You cut $a_n$ into two pieces, forming $a_{2n}$ and $a_{2n+1}$. We have the following constraints:</p>
<p>$$a_1=1\qquad a_n=a_{2n}+a_{2n+1}\qquad a_n\ge a_{n+1}\qquad a_n<2a_{2n-1}$$</p>
<p>I would like to find a nice function $f(x)$ that interpolates all these $a_n$s (and possibly generalizes the relation $a_n=a_{2n}+a_{2n+1}$ as well).</p>
<p>First, it is clear that the only degree of freedom is in the choice of cut, which is to say if we take any sequence $b_n\in (1/2,1)$ then we can define $a_{2n}=a_nb_n$ and $a_{2n+1}=a_n(1-b_n)$, and this will completely define the sequence $a_n$.</p>
<p>Now we should expect that $a_n$ is asymptotic to $1/n$, since it drops by a factor of $2$ every time $n$ doubles. Thus one regularity condition we can impose is that $na_n$ converges. If we consider the "baseline solution" where every cut is at $1/2$, producing the sequence</p>
<p>$$1,\frac12,\frac12,\frac14,\frac14,\frac14,\frac14,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\dots$$
(which is not technically a solution because of the strict inequality, but is on the boundary of solutions), then we see that $na_n$ in fact does <em>not</em> tend to a limit - it varies between $1$ and $2$.</p>
<p>If we average this exponentially, by considering the function $g(x)=2^xa_{\lfloor 2^x\rfloor}$, then we get a function which gets closer and closer to being periodic with period $1$. That is, there is a function $h(x):[0,1]\to\Bbb R$ such that $g(x+n)\to h(x)$, and we need this function to be constant if we want $g(x)$ itself to have a limit.</p>
<p>There is a very direct relation between $h(x)$ and the $b_n$s. If we increase $b_1$ while leaving everything else the same, then $h(x)$ will be scaled up on $[0,\log_2 (3/2)]$ and scaled down on $[\log_2 (3/2),1]$. None of the other $b_i$'s control this left-right balance - they make $h(x)$ larger in some subregion of one or the other of these intervals only, but preserving $\int_0^{\log_2(3/2)}h(x)\,dx$ and $\int_{\log_2(3/2)}^1h(x)\,dx$.</p>
<p>Thus, to keep these balanced we should let $b_1=\log_2(3/2)$. More generally, each $b_n$ controls the balance of $h$ on the intervals $[\log_2(2n),\log_2(2n+1)]$ and $[\log_2(2n+1),\log_2(2n+2)]$ (reduced$\bmod 1$), so we must set them to
$$b_n=\frac{\log_2(2n+1)-\log_2(2n)}{\log_2(2n+2)-\log_2(2n)}=\frac{\log(1+1/2n)}{\log(1+1/n)}.$$</p>
<p>When we do this, a miracle occurs, and $a_n=\log_2(1+1/n)$ becomes analytically solvable:
\begin{align}
a_1&=\log_2(1+1/1)=1\\
a_{2n}+a_{2n+1}&=\log_2\Big(1+\frac1{2n}\Big)+\log_2\Big(1+\frac1{2n+1}\Big)\\
&=\log_2\left[\Big(1+\frac1{2n}\Big)\Big(1+\frac1{2n+1}\Big)\right]\\
&=\log_2\left[1+\frac{2n+(2n+1)+1}{2n(2n+1)}\right]\\
&=\log_2\left[1+\frac1n\right]=a_n.
\end{align}</p>
<p>As a bonus, we obviously have that the $a_n$ sequence is decreasing, and if $m<2n$, then
\begin{align}
2a_m&=2\log_2\Big(1+\frac1m\Big)=\log_2\Big(1+\frac1m\Big)^2=\log_2\Big(1+\frac2m+\frac1{m^2}\Big)\\
&\ge\log_2\Big(1+\frac2m\Big)>\log_2\Big(1+\frac2{2n}\Big)=a_n,
\end{align}</p>
<p>so this is indeed a proper solution, and we have also attained our smoothness goal — $na_n$ converges, to $\frac 1{\log 2}=\log_2e$. It is also worth noting that the difference between the largest and smallest piece has limit exactly $2$, which validates Henning Makholm's observation that you can't do better than $2$ in the limit.</p>
<p>It looks like this (rounded to the nearest hundredth, so the numbers may not add to 100 exactly; a short script checking these values follows the list):</p>
<ul>
<li>$58:42$, ratio = $1.41$</li>
<li>$42:32:26$, ratio = $1.58$</li>
<li>$32:26:22:19$, ratio = $1.67$</li>
<li>$26:22:19:17:15$, ratio = $1.73$</li>
<li>$22:19:17:15:14:13$, ratio = $1.77$</li>
</ul>
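<p>Here is the promised check, a few lines of Python that just evaluate the closed form: after $k$ cuts the pieces are $a_{k+1},\dots,a_{2k+1}$, they always sum to $1$, and the max/min ratios match the list above.</p>
<pre><code>from math import log2

def a(n):
    return log2(1 + 1 / n)  # the closed form for the piece lengths

for k in range(1, 6):  # after k cuts there are k + 1 pieces
    pieces = [a(n) for n in range(k + 1, 2 * k + 2)]
    print(k, "cuts: total =", round(sum(pieces), 9),
          " ratio =", round(max(pieces) / min(pieces), 3))
</code></pre>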
<p>If you are working with a sequence of points treated$\bmod 1$, where the intervals between the points are the "sausages", then this sequence of segments is generated by $p_n=\log_2(2n+1)\bmod 1$. The result is beautifully uniform but with a noticeable sweep edge:</p>
<p> <a href="https://i.sstatic.net/SCaaE.gif" rel="noreferrer"><img src="https://i.sstatic.net/SCaaE.gif" alt="sausages"></a></p>
<p>A more concrete optimality condition that picks this solution uniquely is the following: we require that for any fraction $0\le x\le 1$, the sausage at the $x$ position (give or take a sausage) in the list, sorted in decreasing order, should be at most $c(x)$ times smaller than the largest at all times. This solution achieves $c(x)=x+1$ for all $0\le x\le 1$, and no solution can do better than that (in the limit) for any $x$.</p>
| <p>YES, it is possible!</p>
<p>You mustn't cut a piece in half, because eventually you have to cut one of them, and then you violate the requirement.
So in fact, you must never have two equal parts.
Make the first cut so that the condition is not violated, say $60:40$. </p>
<p>From now on, assume that the ratio of biggest over smallest is strictly less than $2$ in a given round, and no two pieces are equal. (This holds for the $60:40$ cut.)
We construct a good cut that maintains this property.</p>
<p>So at the next turn, pick the biggest piece, and cut it in two non-equal pieces in an $a:b$ ratio, but very close to equal (so $a/b\approx 1$). All you have to make sure is that </p>
<ul>
<li>$a/b$ is so close to $1$ that the two new pieces are both smaller that the smallest piece in the last round. </li>
<li>$a/b$ is so close to $1$ that the smaller piece is bigger than half of the second biggest in the last round (which is going to become the biggest piece in this round). </li>
</ul>
<p>Then the condition is preserved.
For example, from $60:40$ you can move to $25:35:40$, then cut the forty to obtain $19:21:25:35$, etc.</p>
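<p>A sketch of this strategy in Python (the specific choice of $\varepsilon$ below is one possibility; any sufficiently small positive value satisfies both bullet points):</p>
<pre><code>def next_cut(pieces):
    pieces = sorted(pieces, reverse=True)
    p1, p2, pmin = pieces[0], pieces[1], pieces[-1]
    # eps must keep both new pieces below pmin and the smaller one above p2/2
    eps = min(pmin - p1 / 2, (p1 - p2) / 2) / 2
    return sorted(pieces[1:] + [p1 / 2 + eps, p1 / 2 - eps], reverse=True)

pieces = [0.6, 0.4]
for _ in range(5):
    pieces = next_cut(pieces)
    print([round(p, 4) for p in pieces],
          "ratio:", round(pieces[0] / pieces[-1], 3))
</code></pre>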
|
geometry | <p>In a freshers lecture on <strong>3-D geometry</strong>, our teacher said that <strong>3-D</strong> objects can be viewed as projections of <strong>4-D</strong> objects. How does this help us visualize <strong>4-D</strong> objects?
From searching, I found that we can at least see their <strong>3-D</strong> cross-sections. A <strong>Tesseract</strong> hypercube would be a good example.<br>
Can we conclude that a 3-D cube is a shadow of a 4-D tesseract? </p>
<p>But how can a shadow be <strong>3-D</strong>? Was the screen used for casting the shadow also <strong>3-D</strong>; and if so, in what way is it different from the basic physics of shadows we learned? </p>
<blockquote>
<p><strong>edit:</strong> The responses are pretty good for <strong>4</strong>-dimensional analysis, but can we generalize this projection idea to <strong>n</strong> dimensions, i.e., will all <strong>n</strong>-dimensional objects have <strong>(n-1)</strong>-dimensional projections?<br>
This makes me think about <em>higher dimensions</em> discussed in <strong>string theory</strong>.<br>
What other areas of <strong>Mathematics</strong> will be helpful?</p>
</blockquote>
| <p>The animations below accompany an introductory talk on high-dimensional geometry.</p>
<p>Mathematically, the second was made by putting a "light source" at a point $(0, 0, 0, h)$ (with $h > 0$) and sending each point $(x, y, z, w)$ (with $w < h$) to $\frac{h}{h - w}(x, y, z)$.</p>
<p><a href="https://i.sstatic.net/0QOAA.gif" rel="noreferrer"><img src="https://i.sstatic.net/0QOAA.gif" alt="Shadow of a rotating cube"></a>
<a href="https://i.sstatic.net/M4gOo.gif" rel="noreferrer"><img src="https://i.sstatic.net/M4gOo.gif" alt="Shadow of a rotating hypercube"></a></p>
| <p>First things first: your brain is simply not made to visualize anything higher than three spatial dimensions geometrically. All we can do is use tricks and analogies, and of course, the vast power of abstract mathematics. The latter might help us understand the generalized properties of such objects, but it does not help us 3D-beings visualize them in a familiar way.</p>
<p>I will keep things simple and will talk more about the philosophy and visualization than about the math behind it. I will avoid using formulas as long as possible.</p>
<hr>
<h2>The analogy</h2>
<p>If you look at an image of a cube, the image is absolutely 2D, but you intuitively understand that the depicted object is some 3D-model. Also you intuitively understand its shape and positioning in space. How is your brain achieving this? There is obviously information lost during the projection into 2D. </p>
<p>Well, you (or your brain) have different techniques to reconstruct the 3D-nature of the object from this simplified representation. For example: you know things are smaller if they are farther away. You <em>know</em> that a <a href="https://upload.wikimedia.org/wikipedia/commons/7/78/Hexahedron.jpg" rel="noreferrer">cube</a> is composed of faces of the same size. However, any (perspectively correct) image shows some face (the back face, if visible) as smaller as the front face. Also it is no longer in the form of a square, but you still recognize it as such.</p>
<p>We use the same analogies for projecting 4D-objects into 3D space (and then further into 2D-space to make an image file out of it). Look at your favorite "picture" of a <a href="https://upload.wikimedia.org/wikipedia/commons/2/22/Hypercube.svg" rel="noreferrer">4D-cube</a>. You recognize that it is composed of several cubes. For example, you see a small cube inside, and a large cube outside. Actually a tesseract (technical term for 4D-cube) is composed of <em>eight identical cubes</em>. But good luck finding these cubes in this picture. They look as much as a cube as a square looks like a square in the 2D-depiction of a 3D-cube.</p>
<p>The small cube "inside" is the "back-cube" of the tesseract. It is the farthest away from you, hence the smallest. The "outer" cube is the front cube for the analogue reason. It might be hard to see, but there are "cubes" connecting the inner and outer cube. These cubes are distorted, and cannot be recognized as cubes. This is the problem of projections. A shape might not be kept.</p>
<p>Even more interesting: look at this picture of a <a href="https://upload.wikimedia.org/wikipedia/commons/5/55/8-cell-simple.gif" rel="noreferrer"><em>rotating tesseract</em></a>. It seems like the complete structure is shifting, nothing is rigid at all. But again, take the analogy of a rotating 3D-cube in a picture. During the process of rotation, the back face is becoming the front face, hence shifting its size. And during this process it even loses its shape as a square. All line lengths are changing. Just a flaw of the projection. You see the same for the rotating tesseract. The inner cube is becoming the outer one, becoming bigger because it is becoming the "front cube".</p>
<hr>
<h2>The projection</h2>
<p>I just talked about projections. But a projection is essentially the same as a shadow image on the wall. Light is cast on a 3D-object and blocked out by it, and you try to reconstruct its shape from the light and dark spots left by it on the wall behind it. Of course we cannot talk physically about a 4D-light source and light traveling through 4D-space and hitting a 3D-wall and casting a 3D-shadow. As I said, this is just pure analogy. We only know the physics of our 3D-world (4D-spacetime if you want, but there are not four spatial dimensions there; one of them is temporal).</p>
<p>So what are you projecting on? Well, you are projecting on our 3D-space. Hard to imagine, but our 3D-space is as much to the 4D-space as a simple sheet of paper is to our space $-$ flat and infinitely thin.</p>
<p>How crude these depictions are becomes evident when you consider how many dimensions are lost in the process. Essentially you are projecting a 4D-object onto a 3D-space and then onto a 2D-image so we can show it on screen. Of course, our brain is clever enough to allow us to reconstruct the 3D-model from it. But essentially we go from 4D to 2D. This is like going from 3D to 1D, trying to understand a 3D-object by looking at its shadow cast on a thin thread. Good luck to 2D-creatures trying to understand 3D from this crude simplification.</p>
|
probability | <p>A teenage acquaintance of mine lamented:</p>
<blockquote>
<p>Every one of my friends is better friends with somebody else.</p>
</blockquote>
<p>Thanks to my knowledge of mathematics I could inform her that she's not alone and $e^{-1}\approx 37\%$ of all people could be expected to be in the same situation, which I'm sure cheered her up immensely.</p>
<p>This number assumes that friendships are distributed randomly, such that each person in a population of $n$ chooses a best friend at random. Then the probability that any given person is not anyone's best friend is $(1-\frac{1}{n-1})^{n-1}$, which tends to $e^{-1}$ for large $n$.</p>
<p>Afterwards I'm not sure this is actually the best way to analyze the claim. Perhaps instead we should imagine assigning a random "friendship strength" to each edge in the complete graph on $n$ vertices, in which case my friend's lament would be "every vertex I'm connected to has an edge with higher weight than my edge to it". This is not the same as "everyone choses a best friend at random", because it guarantees that there's at least one pair of people who're mutually best friends, namely the two ends of the edge with the highest weight.</p>
<p>(Of course, some people are not friends at all; we can handle that by assigning low weights to their mutual edges. As long as everyone has at least one actual friend, this won't change who are whose best friends).</p>
<p>(It doesn't matter which distribution the friendship weights are chosen by, as long as it's continuous -- because all that matters is the <em>relative</em> order between the weights. Equivalently, one may simply choose a random total order on the $n(n-1)/2$ edges in the complete graph).</p>
<p><strong>In this model, what is the probability that a given person is not anyone's best friend?</strong></p>
<p>By linearity of expectation, the probability of being <em>mutually</em> best friends with anyone is $\frac{n-1}{2n-3}\approx\frac 12$ (much better than in the earlier model), but that doesn't take into account the possibility that some poor soul has me as <em>their</em> best friend whereas I myself have other better friends. Linearity of expectation doesn't seem to help here -- it tells me that the <em>expected</em> number of people whose best friend I am is $1$, but not the probability of this number being $0$.</p>
<hr>
<p><em>(Edit: Several paragraphs of numerical results now moved to a significantly expanded answer)</em></p>
| <p>The probability for large $n$ is $e^{-1}$ in the friendship-strength model too. I can't even begin to <em>explain</em> why this is, but I have strong numerical evidence for it. More precisely, if $p(n)$ is the probability that someone in a population of $n$ isn't anyone's best friend, then it looks strongly like</p>
<p>$$ p(n) = \Bigl(1-\frac{1}{2n-7/3}\Bigr)^{2n-7/3} + O(n^{-3})$$
as $n$ tends to infinity.</p>
<p>The factor of $2$ may hint at some asymptotic connection between the two models, but it has to be subtle, because the best-friend relation certainly doesn't look the same in the two models -- as noted in the question, in the friendship-strength model we expect half of all people to be <em>mutually</em> best friends with someone, whereas in the model where everyone chooses a best friend independently, the <em>total</em> expected number of mutual friendships is only $\frac12$.</p>
<p>The offset $7/3$ was found experimentally, but there's good evidence that it is exact. If it means anything, it's a mystery to me what.</p>
<p><strong>How to compute the probability.</strong> Consider the complete graph on $n$ vertices, and assign random friendship weights to each edge. Imagine processing the edges in order from the strongest friendship towards the weakest. For each vertex/person, the <em>first time</em> we see an edge ending there will tell that person who their best friend is.</p>
<p>The graphs we build can become very complex, but for the purposes of counting we only need to distinguish three kinds of nodes:</p>
<ul>
<li><strong>W</strong> - Waiting people who don't yet know any of their friends. (That is, vertices that are not an endpoint of any edge processed yet).</li>
<li><strong>F</strong> - Friends, people who are friends with someone, but are not anyone's best friend <em>yet</em>. Perhaps one of the Waiting people will turn out to have them as their best friend.</li>
<li><strong>B</strong> - Best friends, who know they are someone's best friend.</li>
</ul>
<p>At each step in the processing of the graph, it can be described as a triple $(w,f,b)$ stating the number of each kind of node. We have $w+f+b=n$, and the starting state is $(n,0,0)$ with everyone still waiting.</p>
<ul>
<li>If we see a <strong>WW</strong> edge, two waiting people become mutually best friends, and we move to state $(w-2,f,b+2)$. There are $w(w-1)/2$ such edges.</li>
<li>If we see a <strong>WF</strong> edge, the <strong>F</strong> node is now someone's best friend and becomes a <strong>B</strong>, and the <strong>W</strong> node becomes <strong>F</strong>. The net effect is to move us to state $(w-1,f,b+1)$. There are $wf$ such edges.</li>
<li>If we see a <strong>WB</strong> edge, tne <strong>W</strong> node becomes <strong>F</strong>, but the <strong>B</strong> stays a <strong>B</strong> -- we don't care how <em>many</em> people's best friends one is, as long there is someone. We move to $(w-1,f+1,b)$, and there are $wb$ edges of this kind.</li>
<li>If we see a <strong>FF</strong> or <strong>FB</strong> or <strong>BB</strong> edge, it represents a friendship where both people already have better friends, so the state doesn't change.</li>
</ul>
<p>Thus, for each state, the next <strong>WW</strong> or <strong>WF</strong> or <strong>WB</strong> edge we see determine which state we move to, and since all edges are equally likely, the probabilities to move to the different successor states are $\frac{w-1}{2n-w-1}$ and$\frac{2f}{2n-w-1}$ and $\frac{2b}{2n-w-1}$, respectively.</p>
<p>Since $w$ decreases at every move between states, we can fill out a table of the probabilities that each state is ever visited simply by considering all possible states in order of decreasing $w$. When all edges have been seen we're in some state $(0,f,n-f)$, and summing over all these we can find the <em>expected</em> $f$ for a random weight assignment.</p>
<p>By linearity of expectation, the probability that any <em>given</em> node is <strong>F</strong> at the end must then be $\langle f\rangle/n$.</p>
<p>Since there are $O(n^2)$ states with $w+f+b=n$ and a constant amount of work for each state, this algorithm runs in time $O(n^2)$.</p>
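<p>For concreteness, here is one possible implementation in Python (the state encoding and exact rational arithmetic are implementation choices; it reproduces the exact values tabulated below):</p>
<pre><code>from fractions import Fraction

def p_not_best_friend(n):
    """Exact probability that a given person ends up as nobody's best friend."""
    if n < 2:
        return Fraction(1)
    prob = {(n, 0): Fraction(1)}  # state key (w, f); b = n - w - f
    for w in range(n, 0, -1):
        for f in [f for (ww, f) in prob if ww == w]:
            pr = prob.pop((w, f))
            b = n - w - f
            denom = 2 * n - w - 1
            if w >= 2:   # WW edge: two waiting people become mutual best friends
                key = (w - 2, f)
                prob[key] = prob.get(key, Fraction(0)) + pr * Fraction(w - 1, denom)
            if f > 0:    # WF edge: the F becomes a B, the W becomes an F
                key = (w - 1, f)
                prob[key] = prob.get(key, Fraction(0)) + pr * Fraction(2 * f, denom)
            if b > 0:    # WB edge: the W becomes an F
                key = (w - 1, f + 1)
                prob[key] = prob.get(key, Fraction(0)) + pr * Fraction(2 * b, denom)
    expected_f = sum(pr * f for (w, f), pr in prob.items())  # only w == 0 remains
    return expected_f / n

for n in range(1, 9):
    print(n, p_not_best_friend(n))  # 1, 0, 1/3, 1/3, 12/35, 47/135, ...
</code></pre>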
<p><strong>Numerical results.</strong> Here are exact results for $n$ up to 18:</p>
<pre><code> n    approx    exact
--------------------------------------------------
 1    100%      1/1
 2    0.00%     0/1
 3    33.33%    1/3
 4    33.33%    1/3
 5    34.29%    12/35
 6    34.81%    47/135
 7    35.16%    731/2079
 8    35.40%    1772/5005
 9    35.58%    20609/57915
10    35.72%    1119109/3132675
11    35.83%    511144/1426425
12    35.92%    75988111/211527855
13    36.00%    1478400533/4106936925
14    36.06%    63352450072/175685635125
15    36.11%    5929774129117/16419849744375
16    36.16%    18809879890171/52019187845625
17    36.20%    514568399840884/1421472473796375
18    36.24%    120770557736740451/333297887934886875
</code></pre>
<p>After this point, exact rational arithmetic with 64-bit denominators start overflowing. It does look like $p(n)$ tends towards $e^{-1}$. (As an aside, the sequence of numerators and denominators are both unknown to OEIS).</p>
<p>To get further, I switched to native machine floating point (Intel 80-bit) and got the $p(n)$ column in this table:</p>
<pre><code>    n   p(n)       A      B      C      D      E      F      G      H
---------------------------------------------------------------------
   10  .3572375  1.97+  4.65-  4.65-  4.65-  2.84+  3.74+  3.64+  4.82-
   20  .3629434  2.31+  5.68-  5.68-  5.68-  3.47+  4.67+  4.28+  5.87-
   40  .3654985  2.62+  6.65-  6.64-  6.64-  4.09+  5.59+  4.90+  6.84-
   80  .3667097  2.93+  7.59-  7.57-  7.57-  4.70+  6.49+  5.51+  7.77-
  100  .3669469  3.03+  7.89-  7.87-  7.86-  4.89+  6.79+  5.71+  8.07-
  200  .3674164  3.33+  8.83-  8.79-  8.77-  5.50+  7.69+  6.32+  8.99-
  400  .3676487  3.64+  9.79-  9.69-  9.65-  6.10+  8.60+  6.92+  9.90-
  800  .3677642  3.94+ 10.81- 10.60- 10.52-  6.70+  9.50+  7.52+ 10.80-
 1000  .3677873  4.04+ 11.17- 10.89- 10.80-  6.90+  9.79+  7.72+ 11.10-
 2000  .3678334  4.34+ 13.18- 11.80- 11.63-  7.50+ 10.69+  8.32+ 12.00-
 4000  .3678564  4.64+ 12.74+ 12.70- 12.41-  8.10+ 11.60+  8.92+ 12.90-
 8000  .3678679  4.94+ 13.15+ 13.60- 13.14-  8.70+ 12.50+  9.52+ 13.81-
10000  .3678702  5.04+ 13.31+ 13.89- 13.36-  8.90+ 12.79+  9.72+ 14.10-
20000  .3678748  5.34+ 13.86+ 14.80- 14.03-  9.50+ 13.69+ 10.32+ 15.00-
40000  .3678771  5.64+ 14.44+ 15.70- 14.67- 10.10+ 14.60+ 10.92+ 15.91-
</code></pre>
<p>The 8 other columns show how well $p(n)$ matches various attempts to model it. In each column I show $-\log_{10}|p(n)-f_i(n)|$ for some test function $f_i$ (that is, how many digits of agreement there are between $p(n)$ and $f_i(n)$), and the sign of the difference between $p$ and $f_i$.</p>
<ul>
<li>$f_{\tt A}(n)=e^{-1}$</li>
</ul>
<p>In the first column we compare to the constant $e^{-1}$. It is mainly there as evidence that $p(n)\to e^{-1}$. More precisely it looks like $p(n) = e^{-1} + O(n^{-1})$ -- whenever $n$ gets 10 times larger, another digit of $e^{-1}$ is produced.</p>
<ul>
<li>$f_{\tt C}(n)=\Bigl(1-\frac{1}{2n-7/3}\Bigr)^{2n-7/3}$</li>
</ul>
<p>I came across this function by comparing $p(n)$ to $(1-\frac{1}{n-1})^{n-1}$ (the probability in the choose-best-friends-independently model) and noticing that they almost matched between $n$ and $2n$. The offset $7/3$ was found by trial and error. With this value it looks like $f_{\tt C}(n)$ approximates $p(n)$ to about $O(n^{-3})$, since making $n$ 10 times larger gives <em>three</em> additional digits of agreement.</p>
<ul>
<li>$ f_{\tt B}=\Bigl(1-\frac{1}{2n-2.332}\Bigr)^{2n-2.332}$ and $f_{\tt D}=\Bigl(1-\frac{1}{2n-2.334}\Bigr)^{2n-2.334} $</li>
</ul>
<p>These columns provide evidence that $7/3$ in $f_{\tt C}$ is likely to be exact, since varying it just slightly in each direction gives clearly worse approximation to $p(n)$. These columns don't quite achieve three more digits of precision for each decade of $n$.</p>
<ul>
<li>$ f_{\tt E}(n)=e^{-1}\bigl(1-\frac14 n^{-1}\bigr)$ and $f_{\tt F}(n)=e^{-1}\bigl(1-\frac14 n^{-1} - \frac{11}{32} n^{-2}\bigr)$</li>
</ul>
<p>Two and three terms of the asymptotic expansion of $f_{\tt C}$ in $n$. $f_{\tt F}$ also improves cubically, but with a much larger error than $f_{\tt C}$. This seems to indicate that the specific structure of $f_{\tt C}$ is important for the fit, rather than just the first terms in its expansion.</p>
<ul>
<li>$ f_{\tt G}(n)=e^{-1}\bigl(1-\frac12 (2n-7/3)^{-1}\bigr)$ and $f_{\tt H}(n)=e^{-1}\bigl(1-\frac12 (2n-7/3)^{-1}-\frac{5}{24} (2n-7/3)^{-2}\bigr) $</li>
</ul>
<p>Here's a surprise! Expanding $f_{\tt C}$ in powers of $2n-7/3$ instead of powers of $n$ not only gives better approximations than $f_{\tt E}$ and $f_{\tt F}$, but also approximates $p(n)$ better than $f_{\tt C}$ itself does, by a factor of about $10^{0.2}\approx 1.6$. This seems to be even more mysterious than the fact that $f_{\tt C}$ matches.</p>
<p>At $n=40000$ the computation of $p(n)$ takes about a minute, and the comparisons begin to push the limit of computer floating point. The 15-16 digits of precision in some of the columns are barely even representable in double precision. Funnily enough, the calculation of $p(n)$ itself seems to be fairly robust compared to the approximations.</p>
| <p>Here's another take at this interesting problem.
Consider a group of $n+1$ persons $x_0,\dots,x_n$ with $x_0$ being myself.
Define the probability
$$
P_{n+1}(i)=P(\textrm{each of $x_1,\dots,x_i$ has me as his best friend}).
$$
Then we can compute the wanted probability using the <em>inclusion-exclusion formula</em> as follows:
\begin{eqnarray}
P_{n+1} &=& P(\textrm{I am nobody's best friend}) \\
&=& 1-P(\textrm{I am somebody's best friend}) \\
&=& \sum_{i=0}^n (-1)^i\binom{n}{i}P_{n+1}(i). \tag{$*$}
\end{eqnarray}</p>
<p>To compute $P_{n+1}(i)$, note that for the condition to be true,
it is necessary that of all friendships between one of $x_1,\dots,x_i$ and anybody,
the one with the highest weight is a friendship with me.
The probability of that being the case is
$$
\frac{i}{in-i(i-1)/2}=\frac{2}{2n-i+1}.
$$
Suppose, then, that that is the case and let this friendship be $(x_0, x_i)$.
Then I am certainly the best friend of $x_i$.
The probability that I am also the best friend of each of $x_1,\dots,x_{i-1}$ is unchanged.
So we can repeat the argument and get
$$
P_{n+1}(i)
=\frac{2}{2n}\cdot\frac{2}{2n-1}\cdots\frac{2}{2n-i+1}
=\frac{2^i(2n-i)!}{(2n)!}.
$$
Plugging this into $(*)$ gives a formula for $P_{n+1}$ that agrees with Henning's results.</p>
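<p>A few lines of Python evaluating this formula exactly (a quick check, using the ratio $P_{n+1}(i+1)/P_{n+1}(i)=2/(2n-i)$ to avoid large factorials) confirm the agreement:</p>
<pre><code>from fractions import Fraction
from math import comb

def P(n_plus_1):
    """P_{n+1} from the inclusion-exclusion formula (*)."""
    n = n_plus_1 - 1
    total = Fraction(0)
    term = Fraction(1)  # P_{n+1}(i) = 2^i (2n - i)! / (2n)!, starting at i = 0
    for i in range(n + 1):
        total += (-1) ** i * comb(n, i) * term
        if i < n:
            term *= Fraction(2, 2 * n - i)  # ratio P_{n+1}(i+1) / P_{n+1}(i)
    return total

print([P(m) for m in range(2, 8)])  # [0, 1/3, 1/3, 12/35, 47/135, 731/2079]
</code></pre>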
<p>To prove $P_{n+1}\to e^{-1}$ for $n\to\infty$, use that the $i$-th term of $(*)$ converges to, but is numerically smaller than, the $i$-th term of
$$
e^{-1}=\sum_{i=0}^\infty\frac{(-1)^i}{i!}.
$$</p>
<p>By the way, in the alternative model where each person chooses a best friend at random, we would instead have $P_{n+1}(i)=1/n^i$ and $P_{n+1}=(1-1/n)^n$.</p>
|
probability | <p>This problem arose in a different context at work, but I have translated it to pizza.</p>
<p>Suppose you have a circular pizza of radius $R$. Upon this disc, $n$ pepperoni will be distributed completely randomly. All pepperoni have the same radius $r$. </p>
<p>A pepperoni is "free" if it does not overlap any other pepperoni. </p>
<p>You are free to choose $n$.</p>
<p>Suppose you choose a small $n$. The chance that any given pepperoni is free is very large. But $n$ is small so the total number of free pepperoni is small. Suppose you choose a large $n$. The chance that any given pepperoni is free is small. But there are a lot of them.</p>
<p>Clearly, for a given $R$ and $r$, there is some optimal $n$ that maximizes the expected number of free pepperoni. How to find this optimum?</p>
<p><strong>Edit: picking the answer</strong></p>
<p>So it looks like leonbloy's answer given the best approximation in the cases I've looked at:</p>
<pre><code>r/R      n* by simulation   n_free (sim)   (R/2r)^2
0.1581   12                 4.5            10
0.1      29                 10.4           25
0.01     2550               929.7          2500
</code></pre>
<p>(There's only a few hundred trials in the r=0.01 sim, so 2550 might not be super accurate.)
So I'm going to pick it for the answer. I'd like to thank everyone for their contributions, this has been a great learning experience.</p>
<p>Here are a few pictures of a simulation for r/R = 0.1581, n=12:
<a href="https://i.sstatic.net/Rk63l.png"><img src="https://i.sstatic.net/Rk63l.png" alt="enter image description here"></a></p>
<p><strong>Edit after three answers posted:</strong></p>
<p>I wrote a little simulation. I'll paste the code below so it can be checked (edit: it's been fixed to correctly pick points randomly on a unit disc). I've looked at <s>two</s> three cases so far. First case, r = 0.1581, R = 1, which is roughly p = 0.1 by mzp's notation. At these parameters I got n* = 12 (free pepperoni = 4.52). Arthur's expression did not appear to be maximized here. leonbloy's answer would give 10. I also did r = 0.1, R = 1. I got n* = 29 (free pepperoni = 10.38) in this case. Arthur's expression was not maximized here and leonbloy's answer would give 25. Finally for r = 0.01 I get roughly n*=2400 as shown here:<a href="https://i.sstatic.net/4BzYE.jpg"><img src="https://i.sstatic.net/4BzYE.jpg" alt="enter image description here"></a></p>
<p>Here's my (ugly) code, now edited to properly pick random points on a disc:</p>
<pre><code>from __future__ import division, print_function
import numpy as np

# the radius of the pizza is fixed at 1
r = 0.1  # the radius of the pepperoni
n_to_try = [1, 5, 10, 20, 25, 27, 28, 29, 30, 31, 32, 33, 35]  # the number of pepperoni
trials = 10000  # the number of trials (each trial randomly places n pepperoni)

def one_trial():
    # place the pepperoni
    pepperoni_coords = []
    for i in range(n):
        theta = np.random.rand() * np.pi * 2  # a number between 0 and 2*pi
        a = np.random.rand()  # a number between 0 and 1
        coord_x = np.sqrt(a) * np.cos(theta)  # see http://mathworld.wolfram.com/DiskPointPicking.html
        coord_y = np.sqrt(a) * np.sin(theta)
        pepperoni_coords.append((coord_x, coord_y))
    # how many pepperoni are free?
    num_free_pepperoni = 0
    for i in range(n):  # for each pepperoni
        pepperoni_coords_copy = pepperoni_coords[:]  # copy the list so the orig is not changed
        this_pepperoni = pepperoni_coords_copy.pop(i)
        coord_x_1 = this_pepperoni[0]
        coord_y_1 = this_pepperoni[1]
        this_pepperoni_free = True
        for pep in pepperoni_coords_copy:  # check it against every other pepperoni
            coord_x_2 = pep[0]
            coord_y_2 = pep[1]
            distance = np.sqrt((coord_x_1 - coord_x_2)**2 + (coord_y_1 - coord_y_2)**2)
            if distance < 2 * r:
                this_pepperoni_free = False
                break
        if this_pepperoni_free:
            num_free_pepperoni += 1
    return num_free_pepperoni

for n in n_to_try:
    results = []
    for i in range(trials):
        results.append(one_trial())
    x = np.average(results)
    print("For pizza radius 1, pepperoni radius", r, ", and number of pepperoni", n, ":")
    print("Over", trials, "trials, the average number of free pepperoni was", x)
    print("Arthur's quantity:", x * ((((1 - r) / 1)**(x - 1) - (r / 1)) / ((1 - r) / 1)))
</code></pre>
| <p><em>Updated: see below (Update 3)</em></p>
<hr>
<p>Here's another approximation. Consider the center of the disks (pepperoni) as an homogeneous point process of density $\lambda = n/A$, where $A=\pi R^2$ is the pizza surface. Let $D$ be nearest neighbour distance from a given center. <a href="https://en.wikipedia.org/wiki/Nearest_neighbour_distribution#Poisson_point_process" rel="nofollow noreferrer">Then</a></p>
<p>$$P(D\le d) = 1- \exp(-\lambda \pi d^2)=1- \exp(-n \,d^2/R^2) \tag{1}$$</p>
<p>A pepperoni is free if $D > 2r$. Let $E$ be the expected number of free peperoni.</p>
<p>Then $$E= n\, P(D>2r) = n \exp (-n \,4 \, r^2/R^2) = n \exp(-n \, p)$$
where $p=(2r)^2/R^2$ (same notation as mzp's answer).</p>
<p>The maximum is attained for (ceil or floor of) $n^*=1/p=(R/2r)^2\tag{2}$ </p>
<p>Update 1: Formula $(1)$ could be corrected for the border effects, the area near the border would be computed as the intersection of two <a href="http://mathworld.wolfram.com/Circle-CircleIntersection.html" rel="nofollow noreferrer">circles</a>. It looks quite cumbersome, though.</p>
<p>Update 2: In the above, I assumed that the center of the pepperoni could be placed anywhere inside the pizza circle. If that's not the case, if the pepperoni must be fully inside the pizza, then $R$ should be replaced by the "effective radius" $R' = R-r$ </p>
<hr>
<p>Update 3: The Poisson approach is really not necessary. Here's an exact solution</p>
<p>Let $$t = \frac{R}{2r}$$</p>
<p>(Equivalently, think of $t$ as the pizza radius, and assume a pepperoni of radius $1/2$). Assume $t>1$. Let $g(x)$ be the area of a unit circle, at a distance $x$ from the origin, intersected with the circle of radius $t$. Then</p>
<p>$$g(x)=\begin{cases}\pi & {\rm if}& 0\le x \le t-1\\
h(x) & {\rm if}& t-1<x \le t \\
0 & {\rm elsewhere}
\end{cases}
\tag{3}$$
<a href="http://mathworld.wolfram.com/Circle-CircleIntersection.html" rel="nofollow noreferrer">where</a>
$$h(x)=\cos^{-1}\left(\frac{x^2+1-t^2}{2x}\right)+t^2
\cos^{-1}\left(\frac{x^2+t^2-1}{2xt}\right) -\frac{1}{2}\sqrt{[(x+t)^2-1][1-(t-x)^2]} \tag{4}$$</p>
<p>Here's a graph of $g(x)/\pi$ for $t=5$
<a href="https://i.sstatic.net/8FFCr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8FFCr.png" alt="enter image description here"></a></p>
<p>Let the random variable $e_i$ be $1$ if the $i$ pepperoni is free, $0$ otherwise.
Then</p>
<p>$$E[e_i \mid x] = \left(1- \frac{g(x)}{\pi \, t^2}\right)^{n-1} \tag{5}$$
(remaining pepperoni fall in the free area). And</p>
<p>$$E[e_i] =E[E(e_i \mid x)]= \int_{0}^t \frac{2}{t^2} x \left(1- \frac{g(x)}{\pi \, t^2}\right)^{n-1} dx = \\
=\left(1- \frac{1}{t^2}\right)^{n-1} \left(1- \frac{1}{t}\right)^2
+\frac{2}{t^2} \int_{t-1}^t x \left(1- \frac{h(x)}{\pi\, t^2}\right)^{n-1} dx
\tag{6}$$</p>
<p>The objective function (expected number of free pepperoni) is then given by:</p>
<p>$$J(n)=n E[e_i ] \tag{7} $$</p>
<p>This is exact... but (almost?) intractable. However, it can be evaluated numerically [**].</p>
<p>We can also take as approximation
$$g(x)=\pi$$ for $0\le x < t$ (neglect border effects) and then it gets simple:</p>
<p>$$E[e_i ] =E[e_i \mid x]= \left(1- \frac{1}{t^2}\right)^{n-1}$$</p>
<p>$$J(n)= n \left(1- \frac{1}{t^2}\right)^{n-1} \tag{8}$$</p>
<p>To maximize, we can write
$$\frac{J(n+1)}{J(n)}= \frac{n+1}{n} \left(1- \frac{1}{t^2}\right)=1 $$
which gives</p>
<p>$$ n^{*}= t^2-1 = \frac{R^2}{4 r^2}-1 \tag{9}$$
quite similar to $(2)$.</p>
<p>Notice that as $t \to \infty$, $J(n^{*})/n^{*} \to e^{-1}$, i.e. the proportion of free pepperoni (when using the optimal number) is around $36.7\%$. Also, the "total pepperoni area" is 1/4 of the pizza.</p>
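<p>A quick check of the approximations $(8)$ and $(9)$ against the simulated optima in the question (the agreement is of the right order; the remaining gap is attributable to the neglected border effects):</p>
<pre><code>for r in (0.1581, 0.1, 0.01):  # pizza radius R = 1, as in the question
    t2 = 1 / (2 * r) ** 2      # t^2
    n_star = round(t2 - 1)     # formula (9)
    J = n_star * (1 - 1 / t2) ** (n_star - 1)  # formula (8)
    print("r =", r, " n* =", n_star, " J(n*) = %.2f" % J)
# n* = 9, 24, 2499 with J(n*) ~ 3.9, 9.4, 920
# (simulated: 12, 29, 2550 and 4.5, 10.4, 929.7)
</code></pre>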
<p>[**] Some Maxima code to evaluate (numerically) the exact solution $(7)$:</p>
<pre><code>h(x,t) := acos((x^2+1-t^2)/(2*x))+t^2*acos((x^2-1+t^2)/(2*x*t))
-sqrt(((x+t)^2-1)*(1-(t-x)^2))/2 $
j(n,t) := n * ( (1-1/t)^2*(1-1/t^2)^(n-1)
+ (2/t^2) * quad_qag(x * (1-h(x,t)/(%pi*t^2))^(n-1),x,t-1,t ,3)[1]) $
j0(n,t) := n *(1-1/t^2)^(n-1)$
tt : 1/(2*0.1581) $
j(11,tt);
4.521719308511862
j(12,tt);
4.522913706608645
j(13,tt);
4.494540361913981
tt : 1/(2*0.1) $
j(27,tt);
10.37509984083333
j(28,tt);
10.37692859747294
j(29,tt);
10.36601271146961
fpprintprec: 4$
nn : makelist(n, n, 2, round(tt^2*1.4))$
jnn : makelist(j(n,tt),n,nn) $
j0nn : makelist(j0(n,tt),n,nn) $
plot2d([[discrete,nn,jnn],[discrete,nn,j0nn]],
[legend,"exact","approx"],[style,[linespoints,1,2]],[xlabel,"n"], [ylabel,"j(n)"],
[title,concat("t=",tt, " r/R=",1/(2*tt))],[y,2,tt^2*0.5]);
</code></pre>
<p>The first result agrees with the OP simulation, the second is close: I got $n^{*}=28$ instead of $29$. For the third case, I get $n^{*}=2529$ ($j(n)=929.1865331$)</p>
<p><a href="https://i.sstatic.net/IsybQ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IsybQ.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.sstatic.net/tko9U.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tko9U.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.sstatic.net/6Kmap.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6Kmap.jpg" alt="enter image description here"></a></p>
| <p>Let $a \equiv \pi (2r)^2$, $A \equiv \pi R^2$, and $p \equiv \frac{a}{A}$. Denote by $P_i^n$ the probability of having $i$ free pepperoni when $n$ are distributed randomly (according to a uniform distribution) over the pizza. Let $E_n$ denote the expected number of free pepperoni given $n$.</p>
<p>I will assume that the pepperoni can be placed on the pizza as long as their center lies inside it.</p>
<ul>
<li><p>$n=1$: </p>
<ul>
<li>$P_0^1 = 0$;</li>
<li>$P_1^1 = 1$;</li>
<li>$E_1= 0\cdot 0 +1\cdot 1 =1$.</li>
</ul></li>
<li><p>$n=2$: </p>
<ul>
<li>$P_0^2 = p$, that is, the probability of both pepperoni having their centers within a distance of less then $2r$, in which case they overlap;</li>
<li>$P_1^2 = 0$;</li>
<li>$P_2^2 = 1- p$;</li>
<li>$E_2=p\cdot 0 +0\cdot 1+(1-p)\cdot 2 = 2(1-p) $.</li>
</ul></li>
<li><p>$n=3$: </p>
<ul>
<li>$P_0^3 = p^2$;</li>
<li>$P_1^3 = C^3_2 p$, since there are $C^3_2$ combinations of how $2$ out of $3$ pepperoni could overlap;</li>
<li>$P_2^3 = 0$;</li>
<li>$P_3^3 = 1-p^2-C^3_2p$;</li>
<li>$E_3=p^2\cdot 0 +C^3_2 p\cdot 1+0\cdot 2 +(1-p^2-C^3_2p)\cdot 3 = 3(1-p^2)- 2C^3_2 p $.</li>
</ul></li>
<li><p>$n=4$: </p>
<ul>
<li>$P_0^4 = p^3$;</li>
<li>$P_1^4 = C^4_3 p^2$;</li>
<li>$P_2^4 = C^4_2 p$;</li>
<li>$P_3^4 = 0$;</li>
<li>$P_4^4 = 1-p^3-C^4_3p^2-C^4_2p$;</li>
<li>$E_4=p^3\cdot 0 +C^4_3 p^2\cdot 1+C^4_2 p \cdot 2+0\cdot 3 +(1-p^3-C^4_3p^2-C^4_2p)\cdot 4 \\
\;\;\;\;= 4(1-p^3)- 3C^4_3 p^2- 2C^4_2 p $.</li>
</ul></li>
<li><p>By induction, for $n\ge 2$:</p>
<blockquote>
<ul>
<li>$E_n = n(1-p^{n-1})- \sum_{j=1}^{n-2} (n-j)C^n_{n-j}p^{n-1-j}$.</li>
</ul>
</blockquote></li>
</ul>
<p>Hence the problem becomes that of solving</p>
<p>$$\max_{n \in\mathbb N} E_n$$</p>
<p>I was not able to solve this in general, but, for instance, if $p=0.1$ then </p>
<p>$$E_1 = 1, \; E_2 = 1.8, \; E_3 = 2.37, \; E_4 = 2.676, \; E_5 = 2.6795, \; E_6 = 2.3369, \; E_7 = 1.5991,\dots$$</p>
<p>So that the optimal number of pepperoni is $n^{*}=5$.</p>
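<p>A quick evaluation of this recursion for $p=0.1$ reproduces the values above (a sketch; it just transcribes the formula):</p>
<pre><code>from math import comb

def E(n, p):
    if n == 1:
        return 1.0
    return n * (1 - p ** (n - 1)) \
        - sum((n - j) * comb(n, n - j) * p ** (n - 1 - j) for j in range(1, n - 1))

for n in range(1, 8):
    print(n, round(E(n, 0.1), 4))  # 1.0, 1.8, 2.37, 2.676, 2.6795, 2.3369, 1.5991
</code></pre>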
|
logic | <p>Text below copied from <a href="https://math.stackexchange.com/questions/566/your-favourite-maths-puzzles">here</a> </p>
<blockquote>
<p>The Blue-Eyed Islander problem is one of my favorites. You can read
about it <a href="http://terrytao.wordpress.com/2008/02/05/the-blue-eyed-islanders-puzzle/" rel="noreferrer">here</a> on Terry Tao's website, along with some discussion.
I'll copy the problem here as well.</p>
<p>There is an island upon which a tribe resides. The tribe consists of
1000 people, with various eye colours. Yet, their religion forbids
them to know their own eye color, or even to discuss the topic; thus,
each resident can (and does) see the eye colors of all other
residents, but has no way of discovering his or her own (there are no
reflective surfaces). If a tribesperson does discover his or her own
eye color, then their religion compels them to commit ritual suicide
at noon the following day in the village square for all to witness.
All the tribespeople are highly logical and devout, and they all know
that each other is also highly logical and devout (and they all know
that they all know that each other is highly logical and devout, and
so forth).</p>
<p>[For the purposes of this logic puzzle, "highly logical" means that
any conclusion that can be logically deduced from the information and
observations available to an islander, will automatically be known to
that islander.]</p>
<p>Of the 1000 islanders, it turns out that 100 of them have blue eyes
and 900 of them have brown eyes, although the islanders are not
initially aware of these statistics (each of them can of course only
see 999 of the 1000 tribespeople).</p>
<p>One day, a blue-eyed foreigner visits to the island and wins the
complete trust of the tribe.</p>
<p>One evening, he addresses the entire tribe to thank them for their
hospitality.</p>
<p>However, not knowing the customs, the foreigner makes the mistake of
mentioning eye color in his address, remarking “how unusual it is to
see another blue-eyed person like myself in this region of the world”.</p>
<p>What effect, if anything, does this faux pas have on the tribe?</p>
</blockquote>
<p>The possible options are </p>
<p>Argument 1. The foreigner has no effect, because his comments do not tell the tribe anything that they do not already know (everyone in the tribe can already see that there are several blue-eyed people in their tribe). </p>
<p>Argument 2. 100 days after the address, all the blue eyed people commit suicide. This is proven as a special case of</p>
<p>Proposition. Suppose that the tribe had $n$ blue-eyed people for some positive integer $n$. Then $n$ days after the traveller’s address, all $n$ blue-eyed people commit suicide.</p>
<p>Proof: We induct on $n$. When $n=1$, the single blue-eyed person realizes that the traveler is referring to him or her, and thus commits suicide on the next day. Now suppose inductively that $n$ is larger than $1$. Each blue-eyed person will reason as follows: “If I am not blue-eyed, then there will only be $n-1$ blue-eyed people on this island, and so they will all commit suicide $n-1$ days after the traveler’s address”. But when $n-1$ days pass, none of the blue-eyed people do so (because at that stage they have no evidence that they themselves are blue-eyed). After nobody commits suicide on the $(n-1)^{st}$ day, each of the blue eyed people then realizes that they themselves must have blue eyes, and will then commit suicide on the $n^{th}$ day. </p>
<p>It seems like no one has found a suitable answer to this puzzle; the question seems to be, "which argument is valid?" </p>
<p>My question is...
Is there no solution to this puzzle? </p>
| <p>Argument 1 is clearly wrong.</p>
<p>Consider the island with only <em>two</em> blue-eyed people. The foreigner arrives and announces "how unusual it is to see another blue-eyed person like myself in this region of the world." The induction argument is now simple, and proceeds for only two steps; on the second day both islanders commit suicide. (I leave this as a crucial exercise for the reader.)</p>
<p>Now, what did the foreigner tell the islanders that they did not already know? Say that the blue-eyed islanders are $A$ and $B$. Each already knows that there are blue-eyed islanders, so this is <em>not</em> what they have learned from the foreigner. Each knows that there are blue-eyed islanders, but <em>neither</em> one knows that the other knows this. But when $A$ hears the foreigner announce the existence of blue-eyed islanders, he gains new knowledge: he now knows <em>that $B$ knows that there are blue-eyed islanders</em>. This is new; $A$ did not know this before the announcement. The information learned by $B$ is the same, but mutatis mutandis.</p>
<p>Analogously, in the case that there are three blue-eyed islanders, none learns from the foreigner that there are blue-eyed islanders; all three already knew this. And none learns from the foreigner that other islanders knew there were blue-eyed islanders; all three knew this as well. But each of the three does learn something new, namely that all the islanders now know that (all the islanders know that there are blue-eyed islanders). They did not know this before, and this new information makes the difference.</p>
<p>Apply this process 100 times and you will understand what new knowledge was gained by the hundred blue-eyed islanders in the puzzle.</p>
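<p>For the computationally minded, the induction can be transcribed into a few lines of Python (a sketch; it tracks which counts a blue-eyed islander still considers possible):</p>
<pre><code>def day_of_action(b):
    """Day on which all b blue-eyed islanders (b >= 1) deduce their eye color."""
    possible = {b - 1, b} - {0}  # a blue-eyed person sees b - 1 blue-eyed people;
                                 # the announcement rules out a count of 0
    day = 0
    while len(possible) > 1:
        day += 1                 # noon of day `day` passes with no suicide,
        possible.discard(day)    # so the true count cannot be `day`
    return day + 1               # certainty is reached; they act at the next noon

for b in (1, 2, 3, 100):
    print(b, "->", day_of_action(b))  # prints b -> b in each case
</code></pre>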
| <p>This isn't a solution to the puzzle, but it's too long to post as a comment. If one reads further in the post (second link), for clarification:</p>
<p>In response to a request for the solution shortly after the puzzle was posted, Terence Tao replied: </p>
<blockquote>
<p>I don’t want to spoil the puzzle for others, but the key to resolving the apparent contradiction is to understand the concept of “common knowledge”; see
<a href="http://en.wikipedia.org/wiki/Common_knowledge_%28logic%29" rel="noreferrer">http://en.wikipedia.org/wiki/Common_knowledge_%28logic%29</a></p>
</blockquote>
<p>Added much later, Terence Tao poses <em>this question</em>:</p>
<blockquote>
<p>[An interesting moral dilemma: the traveler can save 99 lives after his faux pas, by naming a specific blue-eyed person as the one he referred to, causing that unlucky soul to commit suicide the next day and sparing everyone else. Would it be ethical to do so?]</p>
</blockquote>
<p><em>Now that is truly a dilemma!</em></p>
<hr>
<p>Added: See also this <a href="http://www.math.dartmouth.edu/~pw/solutions.pdf" rel="noreferrer">alternate version of same problem</a> - and its solution, by Peter Winkler of Dartmouth (dedicated to Martin Gardner). See problem/solution $(10)$.</p>
|
geometry | <p>I am doing landscaping and sometimes I need to create circles or parts of circles that would put the centre of the circle in the neighbours' garden, or there are other obstructions that stop me from just sticking a peg in the ground and using a string. Without moving sheds, walls, fence panels, etc., and without needing to go in the neighbours' garden, how would I work out the circles or sections of circles?</p>
<p>UPDATE:
OK, I am getting it slowly. I like the template ideas, but yes, they would only be good for small curves. I am also looking at this: <a href="http://chestofbooks.com/crafts/mechanics/Cyclopaedia/Finding-Circular-Curve-When-Centre-Is-Inaccessible.html#.U75RGfldWJs" rel="nofollow noreferrer">Finding Circular Curve When Centre Is Inaccessible</a>
<img src="https://i.sstatic.net/5x9zo.jpg" alt="enter image description here"></p>
<p>I like this idea, but I have the concern that these templates would collide with house walls, boundary fences, and obstacles as the template turns.</p>
<p>Here is why the template idea has practicality issues.
<img src="https://i.sstatic.net/x72Q9.png" alt="enter image description here"></p>
<p>NOTE! Templates would not have the room to turn the distance needed.
Also, patios are often laid against the back wall of a house, so there would be a solid structure making the centre all the more inaccessible.</p>
<p>Also, here is an example path I once had to create; the approximate distance to cover was about 40ft. Sorry it's not exact or very good, but it was something like this (it was nearly ten years ago now).
<img src="https://i.sstatic.net/JYMqH.png" alt="enter image description here"></p>
<p>NEW UPDATE: It would appear this question has been answered, both mathematically and practically, with many options. It is hard to choose a single answer as there is more than one valid answer, both mathematically and practically. Thanks for all the contributions!</p>
| <p>Here is a suggestion (see diagram below). Suppose you want to set out a circular arc from $A$ to $C$. The point $B$ will be on the arc as long as the angle $ABC$ is fixed.</p>
<p>Now I'm sure you are more up on the practicalities than I am, but I guess you could saw an angle out of a piece of wood or something. If you then perhaps put pegs at $A$ and $C$, and keep moving the angle around so that the two sides always touch the pegs, then $B$ will move through various positions which are all on a circular arc.</p>
<p>If the angle you use is a right angle then the section you get will be a semicircle. If the angle is bigger than a right angle you will get less than a semicircle, and if it's smaller than a right angle you will get more than a semicircle. If you know one point that you want the curve to go through (apart from the ends) then you could use that to find the angle you need, otherwise you would probably have to do a bit of trial and error.</p>
<p>Good luck!</p>
<p><img src="https://i.sstatic.net/GYeLs.png" alt="enter image description here"></p>
| <p>I suggest a pointwise construction. In order to construct the circle of radius $r$ around the inaccessible point $A$, maybe you have enough space to first construct the circle of radius $\frac r2$ around an accessible point $B$ that is the midpoint of a line segment $AC$, where $C$ is also accessible. Once this has been done, you can repeatedly pick a point $P$ on this circle and construct $Q$ such that $P$ is the midpoint of $CQ$.</p>
<p><img src="https://i.sstatic.net/xfXPv.png" alt="construction"></p>
<p>Alternatively, assume that the boundary to the neighbour's garden is a straight line and that the reflection $O'$ of $O$ along this line is accessible. Use peg and string with $O'$ as center, but "reflect" the string at the border line by wrapping it around a (moving) stick there; you have to watch out that the angles at the stick are symmetric, or equivalently that the force acting on the stick as you pull is orthogonal to the border line ...</p>
|
game-theory | <p>I'm looking for a good book on Game Theory. I run a software company and from the little I've heard about Game Theory, it seems interesting and potentially useful.</p>
<p>I've looked on Amazon.com but wanted to ask the Mathematicians here.</p>
<p>One that looks good (despite the title:) is <a href="http://rads.stackoverflow.com/amzn/click/161564055X">The Complete Idiots Guide to Game Theory</a>. But please post that as an answer if you think it's good.</p>
<p>Please post one book per answer so folks can vote on each book.</p>
| <p>Not a book, but very nice online lectures by B. Polak: <a href="http://academicearth.org/courses/game-theory">http://academicearth.org/courses/game-theory</a>. I watched the first part (12 lectures). I had no knowledge in this area, but it's very easy to follow and the mathematics is at a very low level.</p>
| <p>Before purchasing a text, I'd recommend getting a brief overview of the field; see, for example, <a href="http://en.wikipedia.org/wiki/Game_theory" rel="nofollow"><strong>Wikipedia's entry on Game Theory</strong></a>. You'll then be in a better position, once you've identified which aspects of the field interest you most, to select an appropriate book or website, and you'll also be able to find references that specifically target that area of interest. </p>
<p>Also, if you scroll down to the bottom of the link above, you'll find a whole assortment of texts & references, historically important works in the field, articles, and websites, many with annotations regarding the level of difficulty and targeted audience.</p>
|
matrices | <p>Mariano mentioned somewhere that everyone should prove once in their life that every matrix is conjugate to its transpose.</p>
<p>I spent quite a bit of time on it now, and still could not prove it. At the risk of devaluing myself, might I ask someone else to show me a proof?</p>
| <p>This question has a nice answer using the theory of <a href="https://en.wikipedia.org/wiki/Structure_theorem_for_finitely_generated_modules_over_a_principal_ideal_domain">modules over a PID</a>. Clearly the <a href="https://en.wikipedia.org/wiki/Smith_normal_form">Smith normal forms</a> (over $K[X]$) of $XI_n-A$ and of $XI_n-A^T$ are the same (by symmetry). Therefore $A$ and $A^T$ have the same invariant factors, thus the same <a href="https://en.wikipedia.org/wiki/Rational_canonical_form">rational canonical form</a>*, and hence they are similar over$~K$.</p>
<p>*The Wikipedia article at the link badly needs rewriting.</p>
| <p>I had in mind an argument using the Jordan form, which reduces the question to single Jordan blocks, which can then be handled using Ted's method ---in the comments.</p>
<p>There is one subtle point: the matrix which conjugates a matrix $A\in M_n(k)$ to its transpose can be taken with coefficients in $k$, no matter what the field is. On the other hand, the Jordan canonical form exists only for algebraically closed fields (or, rather, fields which split the characteristic polynomial)</p>
<p>If $K$ is an algebraic closure of $k$, then we can use the above argument to find an invertible matrix $C\in M_n(K)$ such that $CA=A^tC$. Now, consider the equation $$XA=A^tX$$ in a matrix $X=(x_{ij})$ of unknowns; this is a linear equation, and <em>over $K$</em> it has non-zero solutions. Since the equation has coefficients in $k$, it follows that there are also non-zero solutions with coefficients in $k$. This solutions show $A$ and $A^t$ are conjugated, except for a detail: can you see how to assure that one of this non-zero solutions has non-zero determinant?</p>
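<p>Numerically, one can see both the solution space and the existence of an invertible solution in a few lines (a sketch using the identity $\operatorname{vec}(AXB)=(B^T\otimes A)\operatorname{vec}(X)$; a random combination of a basis of the solution space is almost surely invertible):</p>
<pre><code>import numpy as np
from scipy.linalg import null_space

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])  # a 2x2 Jordan block plus a 1x1 block

n = A.shape[0]
I = np.eye(n)
# vec(X A) = (A^T kron I) vec(X) and vec(A^T X) = (I kron A^T) vec(X), so
# the solutions of X A = A^T X form the null space of the matrix below.
M = np.kron(A.T, I) - np.kron(I, A.T)
B = null_space(M)

rng = np.random.default_rng(0)
for _ in range(20):  # a random combination of the basis is a.s. invertible
    X = (B @ rng.standard_normal(B.shape[1])).reshape(n, n, order='F')
    if abs(np.linalg.det(X)) > 1e-8:
        print("X A = A^T X:", np.allclose(X @ A, A.T @ X))
        break
</code></pre>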
|
geometry | <p>There is no question what topology is and what it's about: it's about topologies (= topological spaces), and that's it.</p>
<p>There is also no question what (universal) algebra is and what it's about. (Among other things, it's about algebras.)</p>
<p>But what is <em>geometry</em> and what is it about? Is there a thorough and generally agreed upon definition of a geometry (= geometric structure) comparable to the unequivocal definition of a topology or an algebra?</p>
| <p>Usually, geometry consists of an underlying topological space (a manifold, for example) and some <em>structure</em> on this space. The structure is an analogy of some tool – such as a ruler or compass – that enables you to see <em>more</em> than what the topology sees. It might be something that enables you, for example, to “measure angles and distances” (Riemannian geometry), or “just to measure angles” (Conformal geometry), or “to see what are lines and what are not lines” (Projective geometry), or some other, more abstract analog of a “ruler and compass”.</p>
<p>From what I have learned, the <em>Cartan geometry</em> – which defines geometry as a principal bundle over a manifold with some <a href="http://en.wikipedia.org/wiki/Cartan_connection" rel="noreferrer">Cartan connection</a> – generalizes both Kleinian and Riemannian geometry in some sense; one book where this is explained is <a href="http://rads.stackoverflow.com/amzn/click/0387947329" rel="noreferrer">Sharpe: <em>Cartan's generalization of Klein's Erlangen program</em></a>.</p>
<p>The reason why there is not a single universal definition, unlike in topology, is the immense history of geometry (2500 years, compared to 100 years of topology).</p>
| <p>According to Klein, geometry can be viewed as the action of a group on a space, be it smooth or finite. See <a href="http://en.wikipedia.org/wiki/Erlangen_program">this</a>. That is, a <em>geometry</em> on a set $X$ is a triple $(X,G,A)$, where $G$ is a group with action $A$ on $X$.</p>
|
probability | <p>I have been using Sebastian Thrun's course on AI and I have encountered a slightly difficult problem with probability theory.</p>
<p>He poses the following statement:</p>
<p>$$
P(R \mid H,S) = \frac{P(H \mid R,S) \; P(R \mid S)}{P(H \mid S)}
$$</p>
<p>I understand he used Bayes' Rule to get the RHS equation, but I fail to see how he did this. If somebody could provide a breakdown of the application of the rule in this problem, that would be great.</p>
| <p>Taking it one step at a time:
$$\begin{align}
\mathsf P(R\mid H, S) & = \frac{\mathsf P(R,H,S)}{\mathsf P(H, S)}
\\[1ex] & =\frac{\mathsf P(H\mid R,S)\,\mathsf P(R, S)}{\mathsf P(H, S)}
\\[1ex] & =\frac{\mathsf P(H\mid R,S)\,\mathsf P(R\mid S)\,\mathsf P(S)}{\mathsf P(H, S)}
\\[1ex] & =\frac{\mathsf P(H\mid R,S)\,\mathsf P(R\mid S)}{\mathsf P(H\mid S)}\frac{\mathsf P(S)}{\mathsf P(S)}
\\[1ex] & =\frac{\mathsf P(H\mid R,S)\;\mathsf P(R\mid S)}{\mathsf P(H\mid S)}
\end{align}$$</p>
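<p>If a numerical sanity check helps, here is a short sketch (in Python; the random joint distribution over three binary variables is just an arbitrary example) confirming that the two sides agree:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
p = rng.random((2, 2, 2))     # joint P(R=r, H=h, S=s) for binary R, H, S
p /= p.sum()                  # normalize into a probability distribution

r, h, s = 1, 1, 1             # check the identity at one concrete outcome

P_RHS = p[r, h, s]            # P(R, H, S)
P_HS  = p[:, h, s].sum()      # P(H, S)
P_RS  = p[r, :, s].sum()      # P(R, S)
P_S   = p[:, :, s].sum()      # P(S)

lhs = P_RHS / P_HS                                   # P(R | H, S)
rhs = (P_RHS / P_RS) * (P_RS / P_S) / (P_HS / P_S)   # P(H|R,S) P(R|S) / P(H|S)
print(lhs, rhs)               # equal up to floating-point rounding
</code></pre>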
| <p>You don't really need Bayes' Theorem. Just apply the definition of conditional probability in two ways. Firstly,</p>
<p>\begin{eqnarray*}
P(R\mid H,S) &=& \dfrac{P(R,H\mid S)}{P(H\mid S)} \\
&& \\
\therefore\quad P(R,H\mid S) &=& P(R\mid H,S)P(H\mid S).
\end{eqnarray*}</p>
<p>Secondly,</p>
<p>\begin{eqnarray*}
P(H\mid R,S) &=& \dfrac{P(R,H\mid S)}{P(R\mid S)} \\
&& \\
\therefore\quad P(R,H\mid S) &=& P(H\mid R,S)P(R\mid S).
\end{eqnarray*}</p>
<p>Combine these two to get the result.</p>
|
probability | <p>Suppose $X$ is a real-valued random variable and let $P_X$ denote the distribution of $X$. Then
$$
E(|X-c|) = \int_\mathbb{R} |x-c| dP_X(x).
$$
<a href="http://en.wikipedia.org/wiki/Median#Medians_of_probability_distributions" rel="noreferrer">The medians</a> of $X$ are defined as any number $m \in \mathbb{R}$ such that $P(X \leq m) \geq \frac{1}{2}$ and $P(X \geq m) \geq \frac{1}{2}$.</p>
<p>Why do the medians solve
$$
\min_{c \in \mathbb{R}} E(|X-c|) \, ?
$$</p>
| <p>For <strong>every</strong> real valued random variable $X$,
$$
\mathrm E(|X-c|)=\int_{-\infty}^c\mathrm P(X\leqslant t)\,\mathrm dt+\int_c^{+\infty}\mathrm P(X\geqslant t)\,\mathrm dt
$$
hence the function $u:c\mapsto \mathrm E(|X-c|)$ is differentiable almost everywhere and, where $u'(c)$ exists, $u'(c)=\mathrm P(X\leqslant c)-\mathrm P(X\geqslant c)$. Hence $u'(c)\leqslant0$ if $c$ is smaller than every median, $u'(c)=0$ if $c$ is a median, and $u'(c)\geqslant0$ if $c$ is greater than every median.</p>
<p>The formula for $\mathrm E(|X-c|)$ is the integrated version of the relations $$(x-y)^+=\int_y^{+\infty}[t\leqslant x]\,\mathrm dt$$ and $|x-c|=((-x)-(-c))^++(x-c)^+$, which yield, for every $x$ and $c$,
$$
|x-c|=\int_{-\infty}^c[x\leqslant t]\,\mathrm dt+\int_c^{+\infty}[x\geqslant t]\,\mathrm dt
$$</p>
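<p>As an empirical illustration (a Python sketch; the exponential sample is just an arbitrary skewed example, whose median is $\ln 2$), the empirical cost $c\mapsto\frac1n\sum_i|x_i-c|$ bottoms out at the sample median:</p>
<pre><code>import numpy as np

x = np.random.default_rng(1).exponential(size=100_000)  # skewed sample, median = ln 2
cs = np.linspace(0.0, 3.0, 601)
cost = [np.abs(x - c).mean() for c in cs]               # empirical E|X - c| on a grid

best = cs[int(np.argmin(cost))]
print(best, np.median(x), np.log(2))                    # all close to 0.693
</code></pre>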
| <p>Let $f$ be the pdf and let $J(c) = E(|X-c|)$. We want to minimize $J(c)$. Note that $E(|X-c|) = \int_{\mathbb{R}} |x-c| f(x) dx = \int_{-\infty}^{c} (c-x) f(x) dx + \int_c^{\infty} (x-c) f(x) dx.$</p>
<p>To find the minimum, set $\frac{dJ}{dc} = 0$. Differentiating under the integral sign (the two boundary terms vanish, since each is evaluated at $x=c$), we get that
$$\begin{align}
\frac{dJ}{dc} & = (c-x)f(x) | _{x=c} + \int_{-\infty}^{c} f(x) dx - (x-c)f(x) | _{x=c} - \int_c^{\infty} f(x) dx\\
& = \int_{-\infty}^{c} f(x) dx - \int_c^{\infty} f(x) dx = 0
\end{align}
$$</p>
<p>Hence, we get that $c$ is such that $$\int_{-\infty}^{c} f(x) dx = \int_c^{\infty} f(x) dx$$ i.e. $$P(X \leq c) = P(X > c).$$</p>
<p>However, we also know that $P(X \leq c) + P(X > c) = 1$. Hence, we get that $$P(X \leq c) = P(X > c) = \frac12.$$</p>
<p><strong>EDIT</strong></p>
<p>When $X$ doesn't have a density, all you need to do is to make use of integration by parts, writing $P$ for the distribution function. Since $E|X| < \infty$, the boundary terms vanish, and we get that $$\int_{-\infty}^{c} (c-x)\, dP(x) = \int_{-\infty}^{c} P(x)\, dx \qquad \text{and} \qquad \int_{c}^{\infty} (x-c)\, dP(x) = \int_{c}^{\infty} \bigl(1-P(x)\bigr)\, dx.$$ The same argument as above then goes through with these two expressions.</p>
|
probability | <p>I have a lot of questions regarding probability. Please forgive me if I have made mistakes.</p>
<ol>
<li><p>I actually tossed a coin 200 times. 54% of the time it landed on heads and 46% it landed on tails.</p>
<p>What is the reason that there is a fair chance of the coin landing on heads or tails? </p>
<p>Is it the randomness that causes this?</p>
<p>If so, then in a purely random experiment would the result be 50-50? </p>
<p>Even though probability only projects the likelihood of an event, why are the outcomes in favor of this projection?</p></li>
<li><p>If I eliminate all the external factors during a coin toss (air resistance is removed by tossing the coin in a vacuum chamber, the force used to flip the coin is fixed, etc.), will the experiment still be random? Or will I be able to predict the outcomes?</p></li>
</ol>
<p>If the outcomes are indeed predictable, will the experiment still be random if I add a single atom into the chamber? If not, at what point does it become random again?</p>
| <p><a href="https://math.stackexchange.com/a/3446324/376156">j4nd3r53n's answer</a> is great, and the starting point of mine.</p>
<blockquote>
<p>What is the reason that there is a fair chance of the coin landing on heads or tails?</p>
</blockquote>
<p>Probability can be thought of as a measure of uncertainty. When a person flips a coin in the real world, there are many, many factors that go into whether the coin flip will result in heads or tails: the force with which the flipper flips, the exact spot on the coin the force is centered, the exact vector of the force, the height difference between where it's flipped and stopped, deformations and imperfections in the coin, the wind, etc.... We say it's 50/50 because so many of those factors are wholly outside of the flipper's control and many that are theoretically within their control (eg., the force they use) are incredibly sensitive to tiny differences. Since humans out in the world can't choose which result they want, we have no information about which side will face up after the flip: we're completely uncertain, so neither choice is better than the other.</p>
<blockquote>
<p>If I eliminate all the external factors during a coin toss, ... will the experiment still be random?</p>
</blockquote>
<p>On the extreme other end, it is entirely possible to consider a machine that is designed to flip a coin just so so that it always lands heads-up, provided the coin can be placed into the mechanism correctly (eg., it might need to be heads-up to start, with the face angled just so). That machine could well flip the coin thousands of times and get heads each time. The difference here is that all of the factors that affect how the coin flies through the air are controlled, removing the uncertainty of the coin's trajectory. Or, more accurately, the variation now lives within the machine: whether the flipping arm malfunctioned or the case cracked or the landing pad wore down enough that the bounce is now "wrong". In removing sources of uncertainty, we've increased the probability of getting heads.</p>
<blockquote>
<p>Is it the randomness that causes [a fair chance of the coin landing heads or tails]?</p>
</blockquote>
<p>It's the other way 'round: the fair chance of the coin landing heads or tails is what results in the coin toss being random. </p>
<blockquote>
<p>... in a purely random experiment would the result be 50-50?</p>
</blockquote>
<p>In the real world, probably not: there's <a href="https://econ.ucsb.edu/~doug/240a/Coin%20Flip.htm" rel="noreferrer">some evidence</a> that heads will result about 51% of the time (partly due to the heads and tails sides not being evenly weighted, though there are other factors).</p>
<blockquote>
<p>Here are the broad strokes of their research:</p>
<ol>
<li><strong>If the coin is tossed and caught</strong>, it has about a 51% chance of landing on the same face it was launched. (If it starts out as heads, there's a 51% chance it will end as heads).</li>
<li><strong>If the coin is spun, rather than tossed</strong>, it can have a much-larger-than-50% chance of ending with the heavier side down. Spun coins can exhibit "huge bias" (some spun coins will fall tails-up 80% of the time).</li>
<li><strong>If the coin is tossed and allowed to clatter to the floor</strong>, this probably adds randomness.</li>
<li><strong>If the coin is tossed and allowed to clatter to the floor where it <em>spins</em></strong>, as will sometimes happen, the above spinning bias probably comes into play.</li>
<li><strong>A coin will land on its edge around 1 in 6000 throws</strong>, creating a <a href="http://en.wikipedia.org/wiki/Flipism" rel="noreferrer">flipistic singularity</a>.</li>
<li><strong>The same initial coin-flipping conditions produce the same coin flip result.</strong> That is, there's a certain amount of determinism to the coin flip.</li>
<li><strong>A more robust coin toss (more revolutions) decreases the bias.</strong></li>
</ol>
</blockquote>
<p>-- <a href="https://econ.ucsb.edu/~doug/240a/Coin%20Flip.htm" rel="noreferrer">source</a></p>
<blockquote>
<p>If the outcomes are indeed predictable, will the experiment be still random if I add a single atom into the chamber? If not at what point does it become random again?</p>
</blockquote>
<p>Basically, the result will be as random as the unknowns or un-predictable-s allow/force it to be. With the coin flipper machine, a single atom is almost certainly not going to affect the path of the coin, but thermal expansion of the coin or the flipping arm might; a well-timed power surge or failure to the flipping arm is certain to add uncertainty to the flip.</p>
<blockquote>
<p>I actually tossed a coin 200 times. 54% of the time it landed on heads and 46% it landed on tails.</p>
</blockquote>
<p>For some more theoretical/math-ey bits, <a href="https://en.wikipedia.org/wiki/Law_of_large_numbers" rel="noreferrer">Wikipedia's Law of Large Numbers page</a> has some good information and pointers. Extremely basically, it says that "the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer to the expected value as more trials are performed." Which is to say, were you to keep flipping the coin, you'd get closer and closer to that 51/49 ratio. 200 trials is in the realm of a large number of trials, but it's on the low end.</p>
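<p>A tiny simulation (a Python sketch; the $0.51$ bias is borrowed from the research quoted above) makes the point: the observed proportion of heads wanders quite a bit at $200$ flips and only settles near the true bias for much larger $n$.</p>
<pre><code>import random

random.seed(0)
p_heads = 0.51                          # slight bias, as in the quoted research

for n in (200, 10_000, 1_000_000):
    heads = sum(random.random() < p_heads for _ in range(n))
    print(n, heads / n)                 # drifts toward 0.51 as n grows
</code></pre>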
<p>This is not to be confused with the semi-serious <a href="https://en.wikipedia.org/wiki/Law_of_truly_large_numbers" rel="noreferrer">Law of Truly Large Numbers</a>, which states that "with a large enough number of samples, any outrageous (i.e. unlikely in any single sample) thing is likely to be observed" - flip the coin 1,000,000,000 times and getting 100 heads in a row becomes pretty likely.</p>
| <p>This is a great question!</p>
<p>The randomness we talk about when we talk about the coin is not out there in the world. It is in our minds. We say the coin toss is 50/50 because we can't do better. If we could, we wouldn't say that it was 50/50. This isn't a paradox because probability is a fact about how we perceive the world, not about the world itself.</p>
<p>I wonder if this quote from <a href="https://www.lesswrong.com/posts/f6ZLxEWaankRZ2Crv/probability-is-in-the-mind" rel="noreferrer">Probability is in the Mind</a> will be helpful:</p>
<blockquote>
<p>To make the coinflip experiment repeatable, as frequentists are wont to demand, we could build an automated coinflipper, and verify that the results were 50% heads and 50% tails. But maybe a robot with extra-sensitive eyes and a good grasp of physics, watching the autoflipper prepare to flip, could predict the coin's fall in advance—not with certainty, but with 90% accuracy. Then what would the real probability be?</p>
<p>There is no "real probability". The robot has one state of partial information. You have a different state of partial information. The coin itself has no mind, and doesn't assign a probability to anything; it just flips into the air, rotates a few times, bounces off some air molecules, and lands either heads or tails.</p>
</blockquote>
|
probability | <p>There are 100 ropes in a bag. In each step, two <strong>rope ends</strong> are picked at random, tied together and put back into a bag. The process is repeated until there are no free ends.<br />
What is the expected number of loops at the end of the process?</p>
| <p>Suppose you start with $n$ ropes. You pick two free ends and tie them together:</p>
<ul>
<li><p>If you happened to pick two ends of the same rope, you've added one additional loop (which you can set aside, since you'll never pick it from now on), and have $n-1$ ropes.</p></li>
<li><p>If you happened to pick ends of different ropes, you've added no loop, and effectively replaced the two ropes with a longer rope, so you have $n-1$ ropes in this case too.</p></li>
</ul>
<p>Of the $\binom{2n}{2}$ ways of choosing two ends, $n$ of them result in the first case, so the first case has probability $\frac{n}{2n(2n-1)/2} = 1/(2n-1)$. So the expected number of loops you <strong>add in the first step</strong>, when you start with $n$ ropes, is $$\left(\frac{1}{2n-1}\right)1 + \left(1-\frac{1}{2n-1}\right)0 = \frac{1}{2n-1}.$$</p>
<p>After this, you start over with $n-1$ ropes. Since what happens in the first step and later are independent (and expectation is linear anyway), the expected number of loops is $$ \frac{1}{2n-1} + \frac{1}{2n-3} + \dots + \frac{1}{3} + 1 = H_{2n} - \frac{H_n}{2}$$</p>
<p>In particular, for $n=100$, the answer is roughly $3.28$, which come to think of it seems surprisingly small for the number of loops!</p>
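<p>A quick Monte Carlo check (a Python sketch; the number of trials is arbitrary) agrees with this closed form for $n=100$:</p>
<pre><code>import random

def simulate(n_ropes):
    ends = [i for i in range(n_ropes) for _ in range(2)]  # two free ends per rope id
    loops = 0
    while ends:
        a = ends.pop(random.randrange(len(ends)))
        b = ends.pop(random.randrange(len(ends)))
        if a == b:
            loops += 1                  # tied both ends of one rope: a loop closes
        else:
            ends[ends.index(b)] = a     # merge ropes: relabel b's remaining end as a
    return loops

random.seed(0)
trials = 20_000
print(sum(simulate(100) for _ in range(trials)) / trials)  # about 3.28
</code></pre>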
| <p>This question is very old (over 10 years), but there is a simpler solution to explain, with the same general idea as the accepted answer.</p>
<p>Suppose, at a given moment, there are <span class="math-container">$n$</span> loose ends. Once you choose the first loose end, you have <span class="math-container">$n-1$</span> other loose ends to tie the rope to, each with probability <span class="math-container">$\frac{1}{n-1}$</span>. Note that only one of the loose ends is on the same rope as the first loose end.</p>
<p>When <span class="math-container">$n = 200$</span> (remember <span class="math-container">$n$</span> is the number of loose ends and there are 2 loose ends per rope), the answer is simply <span class="math-container">$\frac{1}{200-1}+\frac{1}{200-3}+\dots+\frac{1}{200-199}$</span>. We add these probabilities by linearity of expectation: each value of <span class="math-container">$n$</span> corresponds to a different tying step, and <span class="math-container">$\frac{1}{n-1}$</span> is the chance that that step closes a loop.</p>
|
geometry | <h2>The problem</h2>
<p>So recently in school, we had to do a task somewhat like this (roughly translated):</p>
<blockquote>
<p><em>Assign a system of linear equations to each drawing</em></p>
</blockquote>
<p>Then, there were some systems of three linear equations (SLEs), where each equation described a plane in coordinate form, and some sketches of three planes in some relation (e.g. parallel, or intersecting at 90° angles).</p>
<h2>My question</h2>
<p>For some reason, I immediately knew that these planes:</p>
<p><a href="https://i.sstatic.net/6luFl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6luFl.png" alt="enter image description here" /></a></p>
<p>belonged to this SLE:
<span class="math-container">$$ x_1 -3x_2 +2x_3 = -2 $$</span>
<span class="math-container">$$ x_1 +3x_2 -2x_3 = 5 $$</span>
<span class="math-container">$$-6x_2 + 4x_3 = 3$$</span></p>
<p>And it turned out to be true. In school, we proved this by determining the planes' intersecting lines and showing that they are parallel, but not identical.<br />
However, I believe that it must be possible to show that the planes are arranged like this without a lot of calculation, since I immediately saw/"felt" that the planes described in the SLE must be arranged in the way they are in the picture (like a triangle). I could also determine the same "shape" in a similar question, so I do not believe that it was just coincidence.</p>
<h2>What needs to be shown?</h2>
<p>So we must show that the three planes described by the SLE cut each other in a way that I do not really know how to describe. They do not intersect each other perpendicularly (at least they don't have to in order to be arranged in a triangle), but there is no point in which all three planes intersect. If you were to put a line in the center of the triangle, it would be parallel to all planes.</p>
<p>The three planes do not share one intersecting line as it would be in this case:</p>
<p><a href="https://i.sstatic.net/LQ5IY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LQ5IY.png" alt="enter image description here" /></a></p>
<p>(which was another drawing from the task, but is not relevant to this question except for that it has to be excluded)</p>
<h2>My thoughts</h2>
<p>If you were to look at the planes exactly from the direction in which the parallel line from the previous section leads, you would see something like this:</p>
<p><a href="https://i.sstatic.net/eMj2x.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eMj2x.png" alt="enter image description here" /></a></p>
<p>The red arrows represent the normal of each plane (they should be perpendicular to their planes). You can see that the normals somehow are part of one (new) plane. This is already given by the manner in which the planes intersect with each other (as I described before).
If you now were to align your coordinate system in such a way that the plane in which the normals lie is the <span class="math-container">$x_1 x_2$</span>-plane, each normal would have an <span class="math-container">$x_3$</span> value of <span class="math-container">$0$</span>. If you were now to further align the coordinate axes so that the <span class="math-container">$x_1$</span>-axis is identical to one of the normals (let's just choose the bottom one), the values of the normals would be somehow like this:</p>
<p><span class="math-container">$n_1=\begin{pmatrix}
a \\
0 \\
0
\end{pmatrix}$</span> for the bottom normal</p>
<p><span class="math-container">$n_2=\begin{pmatrix}
a \\
a \\
0
\end{pmatrix}$</span> for the upper right normal</p>
<p>and <span class="math-container">$n_3=\begin{pmatrix}
a \\
-a \\
0
\end{pmatrix}$</span> for the upper left normal</p>
<p>Of course, the planes do not have to be arranged in a way that the vectors line up so nicely that they are in one of the planes of our coordinate system.</p>
<p>However, in the SLE, I noticed the following:</p>
<p>-The three normals (we can simply read off the coefficients since the equations are in coordinate form) are <span class="math-container">$n_1=\begin{pmatrix}
1 \\
-3 \\
2
\end{pmatrix}$</span>, <span class="math-container">$n_2=\begin{pmatrix}
1 \\
3 \\
-2
\end{pmatrix}$</span> and <span class="math-container">$n_3=\begin{pmatrix}
0 \\
-6 \\
4
\end{pmatrix}$</span>.</p>
<p>As we can see, <span class="math-container">$n_1$</span> and <span class="math-container">$n_2$</span> have the same values for <span class="math-container">$x_1$</span> and that <span class="math-container">$x_2(n_1)=-x_2(n_2)$</span>; <span class="math-container">$x_3(n_1)=-x_3(n_2)$</span></p>
<p>Also, <span class="math-container">$n_3$</span> is somewhat similar in that its <span class="math-container">$x_2$</span> and <span class="math-container">$x_3$</span> values are the same as the <span class="math-container">$x_2$</span> and <span class="math-container">$x_3$</span> values of <span class="math-container">$n_1$</span>, but multiplied by the factor <span class="math-container">$2$</span>.</p>
<p>I also noticed that <span class="math-container">$n_3$</span> has no <span class="math-container">$x_1$</span> value (or, more accurately, the value is <span class="math-container">$0$</span>), while for <span class="math-container">$n_1$</span> and <span class="math-container">$n_2$</span>, the value for <span class="math-container">$x_1$</span> is identical (it is <span class="math-container">$1$</span>).</p>
<h2>Conclusion</h2>
<p>I feel like I am very close to a solution, I just don't know what to do with my thoughts/approaches regarding the normals of the planes.<br />
Any help would be greatly appreciated.</p>
<p><strong>How can I show that the three planes are arranged in this triangular-like shape by using their normals, i.e. without having to calculate the planes' intersection lines?</strong> (Probably we will need more than normals, but I believe that they are the starting point).</p>
<hr />
<p><strong>Update:</strong> I posted a <a href="https://math.stackexchange.com/questions/3827387/why-are-three-vectors-linearly-dependent-when-one-of-them-is-a-combination-of-th">new question</a> that is related to this problem, but is (at least in my opinion) not the same question.</p>
| <p>If you write your systems of equations as a matrix as follows:
<span class="math-container">$$A \vec{x} = \begin{bmatrix} 1 & -3 & 2 \\ 1 & 3 & -2 \\ 0 & -6 & 4 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -2 \\ 5 \\ 3\end{bmatrix} = \vec{b}$$</span>
then here is a (perhaps) quicker way to determine if the picture looks like the triangle. <strong>Note:</strong> I don't know how comfortable you are with basic linear algebra concepts, but you only need them to understand the proof of why this is correct. You can apply the method without any understanding of them.</p>
<blockquote>
<p><span class="math-container">$1$</span>. If all three normal vectors of the planes are multiples of the same vector, then you can immediately conclude you have three parallel
planes (and not the triangle).</p>
<p><span class="math-container">$2$</span>. If exactly two normal vectors are multiples of the same vector, then you can immediately conclude you don't have the triangle.
Instead, you have one plane that is cut by two parallel planes.</p>
<p><span class="math-container">$3$</span>. If none of the normal vectors are multiples of each other, then it's possible you have the triangle. As you noted, the normal vectors
must be in the same plane, i.e. linearly dependent, so it must follow
that <span class="math-container">$\det(A) = 0$</span>. If this isn't the case, then you can immediately
conclude that the planes intersect in one point.</p>
<p><span class="math-container">$4$</span>. If there is a solution, then <span class="math-container">$\vec{b}$</span> should be a linear combination of two linearly independent columns of <span class="math-container">$A$</span>. (This is because <span class="math-container">$A \vec{x}$</span> is just a linear combination of <span class="math-container">$A$</span>'s columns. If there is a
solution to <span class="math-container">$A \vec{x} = \vec{b}$</span> and <span class="math-container">$A$</span> has two linearly independent
columns, then <span class="math-container">$\vec{b}$</span> should be able to be written as a linear
combination of just those two columns.) Thus, if we replace a linearly
dependent column (i.e. one that can be expressed as a linear
combination of the others) of <span class="math-container">$A$</span> with the vector <span class="math-container">$\vec{b}$</span> to create
the matrix <span class="math-container">$A'$</span>, for there to be no solution (i.e. the "triangle"
configuration) it must be the case that <span class="math-container">$\det(A') \neq 0$</span>. If
<span class="math-container">$\det(A') = 0$</span>, then you can conclude you have three planes
intersecting in one line (the second picture you've posted).</p>
<p>Fortunately, choosing a linearly dependent column is easy. You
just need to make sure to a) replace a zero column with <span class="math-container">$\vec{b}$</span> if
<span class="math-container">$A$</span> has a zero column or b) if there are two columns that are (nonzero)
multiples of each other, then replace one of them with <span class="math-container">$\vec{b}$</span>. And
if none of a) or b) is the case, then you can choose any column.</p>
</blockquote>
<p><strong>Example:</strong> I'll work thru the steps above with the example you've written.</p>
<p><em>Steps <span class="math-container">$1$</span> and <span class="math-container">$2$</span>.</em> I can immediately notice that none of normal vectors of the planes are parallel. So we proceed to step <span class="math-container">$3$</span>.</p>
<p><em>Step <span class="math-container">$3$</span>.</em> We can calculate
<span class="math-container">$$\det(A) = (1)(12 - 12) - (-3)(4 - 0) + 2(-6 - 0) = 0$$</span>
so we proceed to step <span class="math-container">$4$</span>. Note that if you were able to observe that the third row of <span class="math-container">$A$</span> was a linear combination of the first and second row (the third row is simply the first row minus the second row) or that the third column was a multiple of the second column, you could immediately skip to step <span class="math-container">$4$</span>.</p>
<p><em>Step <span class="math-container">$4$</span>.</em> We can notice that none of the columns are zeroes (case a), but in fact the last two columns are multiples of each other. So case b) applies here, and we have to exchange one of the last two columns with <span class="math-container">$\vec{b}$</span> for the process to be correct. Let's replace the last column of <span class="math-container">$A$</span> with <span class="math-container">$\vec{b}$</span> to obtain <span class="math-container">$A'$</span>:
<span class="math-container">$$A' = \begin{bmatrix} 1 & -3 & -2 \\ 1 & 3 & 5 \\ 0 & -6 & 3 \end{bmatrix}$$</span>
and we can calculate
<span class="math-container">$$\det (A') = (1)(9 + 30) - (-3)(3 - 0) + (-2)(-6 - 0) = 39 + 9 + 12 = 60 \neq 0$$</span>
and hence we can conclude we have the "triangle" configuration.</p>
<p><strong>Conclusion:</strong> I think this method is somewhat easier than calculating the three intersection lines. It requires you to calculate two determinants of <span class="math-container">$3 \times 3$</span> matrices instead.</p>
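<p>For readers who want to verify the two determinants by machine, here is a short sketch (Python with NumPy) of exactly the computation carried out above:</p>
<pre><code>import numpy as np

A = np.array([[1.0, -3.0,  2.0],
              [1.0,  3.0, -2.0],
              [0.0, -6.0,  4.0]])
b = np.array([-2.0, 5.0, 3.0])

print(np.linalg.det(A))     # ~0: the normals are linearly dependent (step 3)

A2 = A.copy()
A2[:, 2] = b                # replace the linearly dependent third column with b (step 4)
print(np.linalg.det(A2))    # ~60, nonzero: no solution, i.e. the triangle configuration
</code></pre>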
| <p>The three normals <span class="math-container">$n_1, n_2, n_3$</span> all lie in a plane <span class="math-container">$P$</span> through the origin, because <span class="math-container">$n_1 - n_2 = n_3.$</span> The three given planes are orthogonal to <span class="math-container">$P.$</span> If their lines of intersection with <span class="math-container">$P$</span> were concurrent, the point of intersection of the lines would lie on all three planes. But if a point <span class="math-container">$x = (x_1, x_2, x_3)$</span> is common to the first two planes, then <span class="math-container">$x \cdot (n_1 - n_2) = x \cdot n_1 - x \cdot n_2 = -2 - 5 = -7,$</span> which contradicts the equation <span class="math-container">$x \cdot n_3 = 3$</span> of the third plane. Therefore the lines of intersection of the three given planes with <span class="math-container">$P$</span> are not concurrent. No two of them are parallel, because no two of <span class="math-container">$n_1, n_2, n_3$</span> are scalar multiples of each other. Therefore the lines of intersection of the given planes with <span class="math-container">$P$</span> intersect each other in three distinct points, forming a triangle in <span class="math-container">$P.$</span></p>
<p>(It seems to me that that is all that needs to be said, but I've a horrible feeling I'm missing something <span class="math-container">$\ldots$</span>)</p>
|
probability | <p>Everybody knows the famous <a href="http://en.wikipedia.org/wiki/Monty_Hall_problem">Monty Hall problem</a>; way too much ink has been spilled over it already. Let's take it as a given and consider the following variant of the problem that I thought up this morning.</p>
<p>Suppose Monty has three apples. Two of them have worms in them, and one doesn't. (For the purposes of this problem, let's assume that finding a worm in your apple is an <em>undesirable</em> outcome). He gives three "contestants" one apple each, then he picks one that he knows has a worm in his apple and instructs him to bite into it. The poor contestant does so, finds (half of) a worm in it, and runs off-stage in disgust.</p>
<p>Now consider the situations of the two remaining contestants. Each one has a classical Monty Hall problem facing him. From player A's perspective, one "door" has been "opened" and revealed to have a "goat"; using the same logic as before, he should choose to switch apples with player B.</p>
<p>The paradox is that player B can use the same logic to conclude that he should switch apples with player A. Therefore, each of the two remaining contestants agree that they should switch apples, and they'll both be better off! Of course, this can't be the case. Exactly one of them gets a worm no matter what.</p>
<p>Where is the flaw in the logic? Where does the analogy between this variant of the problem and the classical version break down?</p>
| <p>I completely agree with Henning Makholm: the important difference between this problem and the classic Monty Hall problem is not whether the apples are chosen by the players or assigned to them — in fact, that makes <em>absolutely no difference</em>, since they have no information to base any meaningful choice on at the point where the apples are given to them.</p>
<p>Rather, the key difference is that, in the classic Monty Hall problem, the player knows that Monty will never open the door they choose. Similarly, if one of the players in this problem knew that they wouldn't be asked to bite into the first apple, they'd be better off switching apples with the other remaining player. But of course, if the apples are assigned randomly, it's impossible for more than one of the players to (correctly) possess such knowledge: if two of the players knew they'd never be chosen to go first, and the third one got the wormless apple, Monty would have no way to pick a player with a wormy apple to go first.</p>
<p>Anyway, you don't really have to believe my reasoning above; as with the classic Monty Hall problem, we can simply enumerate the possible outcomes.
Of course, I'm making here a few assumptions which weren't <em>quite</em> explicitly stated by the OP, but which seem like reasonable interpretations of the problem statement and match the classic Monty Hall problem:</p>
<ul>
<li>Each of the players is equally likely to get the wormless apple.</li>
<li>Monty will <em>always</em> choose a player with a wormy apple to go first.</li>
<li>Of the two players with wormy apples, both are equally likely to be chosen to go first.</li>
<li>All the players know all of the above things in advance.</li>
</ul>
<p>Given these assumptions, there are six possible situations the might occur, with equal probability, at the point where the two remaining players are asked to switch:</p>
<ol>
<li>$A$ has the wormless apple, $B$ went first $\to$ $A$ and $C$ remain.</li>
<li>$A$ has the wormless apple, $C$ went first $\to$ $A$ and $B$ remain.</li>
<li>$B$ has the wormless apple, $C$ went first $\to$ $B$ and $A$ remain.</li>
<li>$B$ has the wormless apple, $A$ went first $\to$ $B$ and $C$ remain.</li>
<li>$C$ has the wormless apple, $A$ went first $\to$ $C$ and $B$ remain.</li>
<li>$C$ has the wormless apple, $B$ went first $\to$ $C$ and $A$ remain.</li>
</ol>
<p>From the list above, you can easily count that, for each player, there are four scenarios where they remain, and in two of those they have the wormless apple. So it makes no difference whether they switch or not.</p>
<p>But what if one player, say $A$, knew that they'd never be chosen to go first? Then, if $B$ or $C$ got the wormless apple, Monty would have to choose the other one of them to go first. Thus, scenarios 4 and 5 above become impossible, while 3 and 6 <em>become twice as likely</em>. Thus, if, say, $A$ and $B$ remain, they know that they have to be in scenarios 2 or 3 — and of those, the one in which $B$ has the wormless apple (3) is now twice as likely as the one in which $A$ has it (2), so $A$ should want to switch but $B$ should not.</p>
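<p>If you'd rather simulate than enumerate, here is a sketch (in Python, under exactly the four assumptions listed above): conditioned on still being in the game, a remaining player holds the wormless apple half the time, switch or not.</p>
<pre><code>import random

def p_wormless(switch, trials=200_000):
    good = total = 0
    while total < trials:
        wormless = random.randrange(3)                 # who got the clean apple
        wormy = [p for p in range(3) if p != wormless]
        first = random.choice(wormy)                   # Monty picks a wormy player
        if first == 0:
            continue                                   # player 0 was eliminated: skip
        other = next(p for p in (1, 2) if p != first)  # the other remaining player
        final = other if switch else 0
        good += (final == wormless)
        total += 1
    return good / total

random.seed(0)
print(p_wormless(switch=False))   # ~0.5
print(p_wormless(switch=True))    # ~0.5
</code></pre>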
| <p>I'm not sure Arturo's solution (in comments) is entirely right. It doesn't really matter who's making the random choice, as long as it is indeed random (which we're assuming it to be here).</p>
<p>I would point to the fact that in the game you describe, every player has -- at the time the apples where chosen -- a risk of getting to run from the stage in disgust rather than getting a chance to switch. Since this risk is dependent on whether the apple he was originally assigned contained a worm or not, once a player knows he's <em>not</em> running in disgust, he has information that the player in a classic Monty Hall game doesn't have, and that changes the odds for him.</p>
<p>(From the beginning, the chance that a player has an apple with a worm is 2/3. However, once a player finds himself being given the chance to switch, half of that probability has disappeared into the possibility that he would have been eliminated initially. So the remaining chance of having a worm is now only 1/2 <em>of the initial probability mass</em>, which is the same as the chance of having a good apple.)</p>
<p>This also points to the sometimes underappreciated fact that what is important in Monty Hall-type problems is <em>not</em> simply what has actually happened to you before you make your choice, but also your assumptions about what else <em>might</em> have happened to you but <em>didn't</em>. If the original game had been such that Monty had an option not to offer you a switch at all, the entire standard argument for switching collapses. (Imagine, for example, that Monty were a miser and only offered a switch to contestants that had picked the prize door initially.)</p>
|
number-theory | <blockquote>
<p><span class="math-container">$\pi$</span> Pi</p>
<p>Pi is an infinite, nonrepeating <span class="math-container">$($</span>sic<span class="math-container">$)$</span> decimal - meaning that
every possible number combination exists somewhere in pi. Converted
into ASCII text, somewhere in that infinite string of digits is the
name of every person you will ever love, the date, time and manner
of your death, and the answers to all the great questions of
the universe.</p>
</blockquote>
<p>Is this true? Does it make any sense?</p>
| <p>It is not true that an infinite, non-repeating decimal must contain ‘every possible number combination’. The decimal $0.011000111100000111111\dots$ is an easy counterexample. However, if the decimal expansion of $\pi$ contains every possible finite string of digits, which seems quite likely, then the rest of the statement is indeed correct. Of course, in that case it also contains numerical equivalents of every book that will <strong>never</strong> be written, among other things.</p>
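<p>For what it's worth, the property is easy to probe experimentally (a Python sketch assuming the <code>sympy</code> library; a finite search can of course only confirm presence, never absence):</p>
<pre><code>from sympy import pi, N

digits = str(N(pi, 100_000)).replace('.', '')   # "3" followed by ~100k decimal digits

for pattern in ["14159", "271828", "999999"]:
    pos = digits.find(pattern)
    print(pattern, pos if pos != -1 else "not in the first 100k digits")
# "999999" shows up at position 762, the so-called Feynman point
</code></pre>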
| <p>Let me summarize the things that have been said which are true and add one more thing.</p>
<ol>
<li>$\pi$ is not known to have this property, but it is expected to be true.</li>
<li>This property does not follow from the fact that the decimal expansion of $\pi$ is infinite and does not repeat.</li>
</ol>
<p>The one more thing is the following. The assertion that the answer to every question you could possibly want to ask is contained somewhere in the digits of $\pi$ may be true, but it's useless. Here is a string which may make this point clearer: just string together every possible sentence in English, first by length and then by alphabetical order. The resulting string contains the answer to every question you could possibly want to ask, but</p>
<ul>
<li>most of what it contains is garbage, </li>
<li>you have no way of knowing what is and isn't garbage <em>a priori</em>, and</li>
<li>the only way to refer to a part of the string that isn't garbage is to describe its position in the string, and the bits required to do this themselves constitute a (terrible) encoding of the string. So finding this location is exactly as hard as finding the string itself (that is, finding the answer to whatever question you wanted to ask).</li>
</ul>
<p>In other words, a string which contains everything contains nothing. Useful communication is useful because of what it does <em>not</em> contain.</p>
<p>You should keep all of the above in mind and then read Jorge Luis Borges' <em><a href="http://en.wikipedia.org/wiki/The_Library_of_Babel">The Library of Babel</a></em>. (A library which contains every book contains no books.) </p>
|
matrices | <p>If the matrix is positive definite, then all its eigenvalues are strictly positive. </p>
<p>Is the converse also true?<br>
That is, if the eigenvalues are strictly positive, then matrix is positive definite?<br>
Can you give example of $2 \times 2$ matrix with $2$ positive eigenvalues but is not positive definite?</p>
| <p>I think this is false. Let <span class="math-container">$A = \begin{pmatrix} 1 & -3 \\ 0 & 1 \end{pmatrix}$</span> be a 2x2 matrix, in the canonical basis of <span class="math-container">$\mathbb R^2$</span>. Then <span class="math-container">$A$</span> has the double eigenvalue <span class="math-container">$\lambda = 1$</span>. If <span class="math-container">$v=\begin{pmatrix}1\\1\end{pmatrix}$</span>, then <span class="math-container">$\langle v, Av \rangle < 0$</span>.</p>
<p>The point is that the matrix can have all its eigenvalues strictly positive, but it does not follow that it is positive definite.</p>
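<p>A quick machine check of this example (a small NumPy sketch; the last line anticipates the symmetrization point made in the other answer):</p>
<pre><code>import numpy as np

A = np.array([[1.0, -3.0],
              [0.0,  1.0]])
v = np.array([1.0, 1.0])

print(np.linalg.eigvals(A))              # [1. 1.]: both eigenvalues positive
print(v @ A @ v)                         # -1.0: the quadratic form is negative at v
print(np.linalg.eigvals((A + A.T) / 2))  # symmetric part has a negative eigenvalue
</code></pre>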
| <p>This question does a great job of illustrating the problem with thinking about these things in terms of coordinates. The thing that is positive-definite is not a matrix $M$ but the <em>quadratic form</em> $x \mapsto x^T M x$, which is a very different beast from the linear transformation $x \mapsto M x$. For one thing, the quadratic form does not depend on the antisymmetric part of $M$, so using an asymmetric matrix to define a quadratic form is redundant. And there is <em>no reason</em> that an asymmetric matrix and its symmetrization need to be at all related; in particular, they do not need to have the same eigenvalues. </p>
|
linear-algebra | <p>In the vector space of $f:\mathbb R \to \mathbb R$, how do I prove that functions $\sin(x)$ and $\cos(x)$ are linearly independent. By def., two elements of a vector space are linearly independent if $0 = a\cos(x) + b\sin(x)$ implies that $a=b=0$, but how can I formalize that? Giving $x$ different values? Thanks in advance.</p>
| <p><strong>Hint:</strong> If $a\cos(x)+b\sin(x)=0$ for all $x\in\mathbb{R}$, then it is in particular true for $x=0$ and $x=\frac{\pi}{2}$.</p>
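<p>Spelling the hint out numerically (a small NumPy sketch): the two sample points give a linear system in $a,b$ with an invertible coefficient matrix, so $a=b=0$ is the only solution.</p>
<pre><code>import numpy as np

xs = np.array([0.0, np.pi / 2])                 # the two sample points from the hint
M = np.column_stack([np.cos(xs), np.sin(xs)])   # row i is [cos(x_i), sin(x_i)]

print(M)                  # approximately the identity matrix
print(np.linalg.det(M))   # 1.0 != 0, so a*cos(x) + b*sin(x) = 0 forces a = b = 0
</code></pre>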
| <p>There are easier ways - eg take special values for $x$. The following technique is a sledgehammer in this case, but a useful one to have around.</p>
<p>Suppose you have $a$ and $b$ as required. Let $r=\sqrt{a^2+b^2}$ and $\phi = \arctan {\frac a b}$ (take $\phi=\frac {\pi} 2$ if $b=0$, and replace $r$ by $-r$ if $b<0$, so that $r\sin(\phi)=a$ and $r\cos(\phi)=b$). Then we have:
$$a \cos (x)+b\sin(x)=r\sin(\phi)\cos(x)+r\cos(\phi)\sin(x)=r\sin(x+\phi)$$</p>
<p>The last form is identically zero only if $r=0$, which immediately implies $a=b=0$ from the definition of $r$.</p>
|
logic | <p>One thing that has bothered me so far while learning about the Peano axioms is that the use of parentheses just comes out of nowhere and we automatically assume they are true in how they work.</p>
<p>For example the proof that $(a+b)+c = a+(b+c)$. Given some base case $(a+b)+0 = a+b = a+(b+0)$ we have for an inductive step:</p>
<p>$(a+b)+S(c) = S((a+b)+c) = S(a+(b+c)) = a + S(b+c) = a + (b+S(c))$</p>
<p>But for some reason this bothers me because we haven't really explained yet at all how the parentheses are meant to be treated in the first place. Do we need an additional axiom or definition for this? Is this mentioned anywhere? Or do we just sort of take them for granted?</p>
| <p>In <a href="https://en.wikipedia.org/wiki/Formal_language" rel="noreferrer">formal language theory</a> (most relevantly, context-free languages), there is the notion of an <a href="https://en.wikipedia.org/wiki/Abstract_syntax_tree" rel="noreferrer">abstract syntax tree</a>. A decent chunk of formal language theory is figuring out how to turns flat, linear strings of symbols into this more structured representation. This more structured, tree-like representation is generally what we are thinking of when think about well-formed formulas or terms. For example, when we consider doing induction over all formulas, it is a structural induction over abstract syntax trees, not strings.</p>
<p>For the relatively simple and regular languages often used for simple formal logics, it is relatively easy to describe the syntax with a <a href="https://en.wikipedia.org/wiki/Context-free_grammar" rel="noreferrer">context-free grammar</a> and/or describe an algorithm that will take a string and parse out the abstract syntax tree if possible. For something like $S(a+(b+c))$ this would produce a tree like:</p>
<pre><code> S
|
+
/ \
a +
/ \
b c
</code></pre>
<p>Of course, if I wanted to represent this tree in linear text, i.e. I wanted to serialize it, I would need to pick some textual representation, and in this case the original expression, <code>S(a+(b+c))</code>, would be one possible representation. Another would be Polish notation, <code>S+a+bc</code>. Another would be S-expressions, <code>(S (+ a (+ b c)))</code>. Another would be something like standard notation but eschewing infix operators, <code>S(+(a,+(b,c)))</code>.</p>
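<p>To make "trees first, strings second" concrete, here is a minimal sketch (Python 3.10+ for the <code>match</code> statement; the constructor names are my own invention) of terms as abstract syntax trees, together with one possible serializer:</p>
<pre><code>from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Succ:
    arg: object    # a term

@dataclass(frozen=True)
class Add:
    left: object   # a term
    right: object  # a term

# S(a + (b + c)): the parentheses only record which constructor wraps which
t = Succ(Add(Var("a"), Add(Var("b"), Var("c"))))

def to_sexpr(term):
    match term:
        case Var(name):  return name
        case Succ(arg):  return f"(S {to_sexpr(arg)})"
        case Add(l, r):  return f"(+ {to_sexpr(l)} {to_sexpr(r)})"

print(to_sexpr(t))   # (S (+ a (+ b c)))
</code></pre>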
<p>Ultimately, what I would strongly recommend is that you think of the abstract syntax trees as primary and the pieces of text you see in books as just serializations of these trees forced upon us by practical limitations. The need to concern ourselves with parsing at all in logic is, to a large extent, just an artifact of the medium we're using. From the perspective of abstract syntax trees, associativity looks like:</p>
<pre><code> + +
/ \ / \
+ c = a +
/ \ / \
a b b c
</code></pre>
<p>Humans are rather good at parsing languages, so it's not generally a barrier to understanding. It is obviously unnecessary to understand context-free grammars to confidently and correctly read most formal notation. If you are interested in parsing, by all means read up on it, but parsing isn't too important to logic so I would not get hung up on it. (Formal languages and formal logic do have a decent overlap in concepts and techniques though.) From a philosophical perspective, worrying about parsing formal languages (an impression some books give) while reading sentences in a massively more complicated natural language is a touch silly.</p>
| <p>Short version: you're right, there is a serious issue here, and it comes down to the exact rules we use to form terms. You've phrased it in terms of trivialization ("doesn't this reduce the proof of associativity to the reflexivity of "$=$"?"), but it's a more serious problem than that, even: it ultimately will allow us to prove that <em>every</em> (infix) binary operation is associative, which is clearly bonkers$^*$ (see my comments below the OP). So something needs to be resolved here.</p>
<p>Ultimately we want to <em>not consider expressions like "$x+y+z$" to be terms at all</em> (which makes sense, because until we've proved that $+$ is associative there's no reason to expect "$x+y+z$" to unambiguously refer to anything).</p>
<blockquote>
<p>So it's not that "$(a+b)+c=a+b+c$" is <em>false</em>, but rather that it's <em>grammatically incorrect</em>.</p>
</blockquote>
<p>Unfortunately some presentations gloss over this point in a way which does in fact cause a serious problem. I think the simplest approach is to outlaw infix notation entirely (so that "$x+y$" is a term in the "metalanguage," but not actually in the formal language we use). This has the drawback of being hard to read, but fairly quickly you'll reach the point where you're comfortable using informal infix notation as shorthand for the formal expressions in question.</p>
<hr>
<p>The syntax I present below is just one option of many:</p>
<p>Given a language $L$ consisting of constant symbols, function symbols (with arities), and relation symbols (with arities), we define the class of <strong>$L$-terms</strong> as follows:</p>
<ul>
<li><p>Every constant symbol in $L$ is a term. <em>(Note that there may be none of these.)</em></p></li>
<li><p>Every variable is a term. <em>(Here variables exist independently of the language $L$; in other words, variables are "logical symbols" like $\forall$, $\wedge$, $($, and $=$.)</em></p></li>
<li><p><strong>This is the key</strong>: if $t_1, ..., t_n$ are terms and $f$ is an $n$-ary function symbol in $L$, then $f(t_1, ..., t_n)$ is a term.</p></li>
<li><p>No string of symbols is a term unless it is required to be by the rules above.</p></li>
</ul>
<p>Note that this syntax <em>does not allow infix symbols</em>: the expression "$x+y$" is grammatically incorrect, and should be written "$+(x, y)$" instead. Note also that "$+$" can only take in as many inputs as its arity: so $+(x, y, z)$ is not a term, since $+$ is $2$-ary. Now the associativity of a binary operation $*$ is expressed as $$\forall x, y, z(*(x, *(y, z))=*(*(x, y), z)),$$ and the issue you raise doesn't crop up here at all since the relevant "ambiguous terms" aren't even well-formed expressions in our syntax.</p>
<p><em>Incidentally, note that the notation I've described above is redundant: <a href="https://en.wikipedia.org/wiki/Polish_notation" rel="noreferrer">we could write "$ft_1...t_n$" and this would be perfectly unambiguous</a>. But in my opinion this leads to a significant loss of "human readability;" why make your syntax minimal if it's harder to read? You'll pry my unnecessary commas and parentheses from my cold, dead hands!</em></p>
<hr>
<p>Now in some sense this can be viewed as dodging the point: <em>what if we do want to allow infix operations</em>? Here again the point is that we need to approach the term-formation rules carefully. My choice would be the following rules (here "$*$" is a binary infix operator):</p>
<ul>
<li>If $t_1, t_2$ are terms, then $(t_1*t_2)$ is a term.</li>
</ul>
<p>Note the addition of parentheses: this prevents (again!) the formation of an "ambiguous term" like $x*y*z$. It does have the drawback of adding "unnecessary parentheses" - e.g. "$x*y$" is not a term, but "$(x*y)$" is - but this winds up not being a problem, only an annoyance.</p>
<hr>
<p>$^*$Actually, there is a third fix: only allow infix binary operations which are associative! So a language consists of some constant symbols, function symbols, relation symbols, and infix binary operations; and an interpretation of a language has to interpret each infix binary operation symbol as an associative binary function on the domain. This then resolves the difficulties here, since associativity is baked into the choice of notation, and it means we can allow expressions like "$a*b*c$" without causing any problems; however, it seems extremely artificial to me (and goes against mathematical practice as well: we're perfectly happy writing "$x-y$" in the real world, after all), so I wouldn't consider it as anything more than a footnote.</p>
|
logic | <p>Can you explain it in a fathomable way at high school level?</p>
| <p>The problem with Gödel's incompleteness theorem is that it is <em>so</em> open to exploitation and misuse once you don't handle it completely right. You can prove and disprove the existence of god using this theorem, as well as the correctness of religion and its incorrectness against the correctness of science. The number of horrible arguments carried out in the name of Gödel's incompleteness theorem is so large that we can't even count them all.</p>
<p>But if I were to give the theorem in a nutshell I would say that if we have a list of axioms which we can enumerate with a computer, and these axioms are sufficient to develop the basic laws of arithmetics, then our list of axioms cannot be both consistent and complete.</p>
<p>In other words, if our axioms are consistent, then in every model of the axioms there is a statement which is true but not provable.</p>
<hr />
<p>Caveats:</p>
<ol>
<li>Enumerate with a computer means that we can write a program (on an ideal computer) which goes over the possible strings in our alphabet and identifies the axioms on our list. The fact that the computer is "ideal" and not bound by physical things like electricity, etc. is important. Moreover, if you are talking to high school kids who are less familiar with theoretical concepts like that, it might be better to talk about those concepts first, or find a way to avoid them.</li>
<li>By basic laws of arithmetic I mean the existence of the successor function, addition, and multiplication. The usual symbols may or may not be in the language in which our axioms are written. This is another fine point which a lot of people skip over, because it's usually clear to mathematicians (although not always), but for a high-school-level crowd it might be worth pointing out that we need to be able to define these operations and prove they behave normally to some extent, but we don't have to have them in our language.</li>
<li>It's very important to talk about what is "provable" and what is "true", the latter being within the context of a specific structure. That's something that professional mathematicians can make mistakes with, especially those less trained in the distinction between provable and true (while still doing classical mathematics, rather than some intuitionistic or constructive mathematics).</li>
</ol>
<hr />
<p>Now, I would probably try to keep it simple. <strong>VERY</strong> simple. I'd talk about the integers in their standard model, and I would explain the incompleteness theorem within the context of <em>this particular model</em>, with a very specified language including all the required symbols.</p>
<p>In this case we can simply say that if <span class="math-container">$T$</span> is a list of axioms in our language which we can enumerate by an ideal computer, and all the axioms of <span class="math-container">$T$</span> are true in <span class="math-container">$\Bbb N$</span>, then there is some statement <span class="math-container">$\varphi$</span> which is true in <span class="math-container">$\Bbb N$</span> but there is no proof of the statement <span class="math-container">$\varphi$</span> from <span class="math-container">$T$</span>.</p>
<p>(One can even simplify things better by requiring <span class="math-container">$T$</span> to be closed under logical consequences, in which case we really just say that <span class="math-container">$\varphi\notin T$</span>.)</p>
| <p><span class="math-container">${\underline{First}}$</span>: The statement of Gödel's theorem(s) requires some knowledge of what <em>axioms</em> and <em>proofs</em> are. To really get a good feeling for what this is, I believe that one would need to work with it. So it is difficult to give a simple description of Gödel's theorems here in any short way. One will almost certainly say something that isn't completely (pun intended) true.</p>
<p>I can recommend the book <em>Gödel's Proof</em> by <em>Nagel</em> and <em>Newman</em>. I can also recommend the book <em>Gödel's Theorem: An Incomplete Guide to Its Use and Abuse</em> by <em>Torkel Franzén</em>. Both of these books aren't too expensive and they are fairly easy to read. (Disclaimer: I haven't actually read all of Franzén's book).</p>
<p><span class="math-container">${\underline{Second}}: $</span>That said, this is how I would attempt to explain the whole thing. In high school and, hopefully, earlier you learn for example how to add, subtract, multiply, and divide numbers. You might have learned that numbers are good for counting things. If you have a business, you use numbers to keep track of sales and for that we need to be able to "use math" with the numbers.</p>
<p>Now, you have probably also by now learnt that there are rules in math. We know, for example, that $4 + 7 = 7 + 4$. The order doesn't matter when we add numbers. One can't, for example, divide by zero. And so it is not hard to understand how math is built up by all the rules for how you are allowed to manipulate numbers.</p>
<p>Now, mathematicians (and other people) have tried to figure out if it is possible to write down a list of <em>axioms</em> and some rules of logic that would "give us all" of mathematics. We in particular want the axioms to "contain" "basic arithmetic" (think of arithmetic as doing math with the natural numbers). An axiom is something that we take for granted. It is a bit like a definition in that we don't have to prove it. An axiom is true because we say so.</p>
<p>With the axioms we have (want) rules for how to deduce new things. From the axioms, we might be able to prove that something is true or false. (If you want to see something concrete, you could look up the "axioms" of what a <a href="http://en.wikipedia.org/wiki/Group_%28mathematics%29#Definition" rel="noreferrer">group is</a>. With these "axioms" one can write down a proof that the so-called <a href="http://en.wikipedia.org/wiki/Group_%28mathematics%29#Uniqueness_of_identity_element_and_inverses" rel="noreferrer">identity is unique</a>. So here you have an example where we start with a definition and prove that a statement is true).</p>
<p>Part of what we require is that a statement is either <em>true</em> or <em>false</em>. If I write the statement that <span class="math-container">$\pi = 22/7$</span>, then that is a false statement. If I write that <span class="math-container">$\sqrt{2}$</span> is irrational (as a real number), then that is a true statement. It might not be as obvious. So how would I <strong>prove</strong> that this is true. Well, using the rules of logic I can prove that from the axioms the statement follows.</p>
<p>Now a question: Since we are saying that all statements are either true or false, can we, given a statement, always arrive at a proof? That is, can we prove that any given statement is either true or false? If we (in theory) can either prove or disprove any given statement, we say that the system is <em>complete</em>. We would of course like a complete system.</p>
<p>Another question is: Can we <em>prove</em> that our system of axioms will never lead to a contradiction? That is, is it possible that we might have a statement that we can prove is true <em>and</em> false at the same time? If this does not happen we say that the system is <em>consistent</em>. Having a consistent system is of course essential. If one statement existed that is both true and false, then you can prove that all statements are true and false. That would be bad.</p>
<p>And it is with these two questions that the problem is located.</p>
<p>The answer is: It is impossible to write down an axiomatic system that, while being <em>consistent</em>, is also <em>complete</em>.</p>
<p>So if, in the "design" of the axiomatic system, we want to make sure that we can't have theorems that are true and false at the same time, then we will have statements that we can't write down a proof of. We will have statements/theorems where we can't decide whether the statement is true or false. In fact, we will have statements that we can prove cannot be decided from the axioms; the <a href="https://en.wikipedia.org/wiki/Continuum_hypothesis" rel="noreferrer">continuum hypothesis</a> is such a statement relative to the usual axioms of set theory.</p>
|
game-theory | <p>I'm thinking about a game theory problem related to factorization. Here it is:</p>
<p>Q: Two players A and B are playing the following factorization game.
At the start, the natural number $270000=2^4\times 3^3\times 5^4$ is written on a chalkboard.</p>
<p>On his turn, a player chooses one number $N$ from the chalkboard, erases it, and then writes down two new natural numbers $X$ and $Y$ satisfying</p>
<p>(1) $N=X\times Y$
(2) $\gcd(X,Y)>1$ (so they are <em>not</em> coprime)</p>
<p>A player loses if he cannot make such a move on his turn.</p>
<p>So, in this game, the possible states at the $k$-th turn are sequences of natural numbers $a_1,a_2,\dots,a_k$ with $a_i>1$ and $a_1\times a_2\times\dots\times a_k=270000$.</p>
<p>EXAMPLE of this game)<br>
A's turn-$2^{4}\times3^{3}\times5^{4}$<br>
B's turn-$2^2\times3\times5^2,2^2\times3^2\times5^2$<br>
A's turn-$2^2\times3\times5^2,2\times3\times5,2\times3\times5$<br>
B's turn-$2\times5,2\times3\times5,2\times3\times5,2\times3\times5$<br>
So B loses in above case.</p>
<p>Actually, in this game, the first player (A) has a winning strategy.<br>
What is this winning strategy for A?</p>
<p>-Attempted approach: I tried to determine the "winning positions" and "losing positions" for this combinatorial game, but classifying these positions was not so easy.</p>
<p>What is A's winning strategy? Thanks for any help in advance.</p>
| <p>This is nim in disguise. I suggest you represent a pile by a sequence of the exponents of the primes, so $270000$ would be represented by $(4,3,4)$. You can then sort the starting numbers in order, as $(4,4,3)$ would play exactly the same. A legal move is replacing a sequence with two sequences such that the corresponding positions of the new sequences sum to the number in the original sequence, and at least one position is greater than zero in both new sequences. For example, from $(4,3,4)$ you can move to $(2,3,4)+(2,0,0)$ or to $(3,2,1)+(1,1,3)$ or to $(4,1,2)+(0,2,2)$, but not to $(4,3,0)+(0,0,4)$. You are trying to find the nim values of various positions. </p>
<p>I would start with single position sequences, so the original number is a prime power. $(1)$ is a losing position, so it is $*0$. $(2)$ is clearly $*1$, as you have to move to $(1)+(1)$. $(3)$ is losing, so it is $*0$. $(4)$ is $*1$ because you can only move to $*0$. From $(5)$ you can only move to $*1$, so it is $*0$ and losing. In general, a single even pile is winning, as you can move to two equal piles of half the size and then mirror your opponent's play; every split of an even pile is odd+odd or even+even and hence $*0$, so an even pile is $*1$. Every split of an odd pile is odd+even and hence $*1$, so a single odd pile is $*0$ and losing.</p>
<p>I claim that as first player I can win any game of the form $(a,b)$ with $a\gt 1, b \gt 1$. The point is that an entry of $0$ in a sequence is equivalent to an entry of $1$ as neither can be divided and neither can provide the matching entry. I will move to either $(a,1)+(0,b-1)$ or to $(a-1,1)+(1,b-1)$, whichever makes the large numbers have the same parity. Then I either leave $*0+*0$ or $*1+*1$, both of which are $*0$ and losing for my opponent. </p>
<p>I believe a similar argument can be made for longer sequences, but have not fleshed it out. You can kill off one of the entries of the sequence by leaving $0$ or $1$ in that location in one of the daughter sequences and one of them will win.</p>
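<p>These nim-values are easy to check by brute force. The following is a minimal Python sketch (my own illustration, not part of the original argument): a number is stored as its tuple of prime exponents, a move splits the tuple componentwise into $x+y$ with some prime shared by both parts, and a board with several numbers is scored by XOR-ing the component values as usual.</p>

<pre><code>from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def grundy(v):
    """Grundy value of one number, given as a tuple of prime exponents."""
    moves = set()
    for x in product(*(range(e + 1) for e in v)):   # all splits v = x + y
        y = tuple(e - a for e, a in zip(v, x))
        if any(a >= 1 and b >= 1 for a, b in zip(x, y)):  # gcd(X, Y) > 1
            moves.add(grundy(x) ^ grundy(y))
    return min(set(range(len(moves) + 1)) - moves)  # mex

print([grundy((k,)) for k in range(1, 7)])  # prime powers: 0, 1, 0, 1, 0, 1
print(grundy((4, 3, 4)))   # 270000: a nonzero value means the first player wins
</code></pre>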
| <p>It seems from your question that you might only be interested in the case of $270000$. One winning move in $270000=2^{4}\times3^{3}\times5^{4}$ is for $A$ to split it into $1350,200$ $=\left(2^{1}\times3^{3}\times5^{2}\right),\left(2^{3}\times3^{0}\times5^{2}\right)$. After that, any splitting $B$ does in one of those factors can be mirrored (splitting a power of $3$ instead of $2$ or vice versa) in the other, and future moves by $B$ can be mirrored as well (the subfactor of $2$ in $1350$ doesn't affect anything since it can never be split). </p>
<p>For example, if $B$ moves in the $200$ component to $\left(2^{1}\times3^{0}\times5^{0}\right),\left(2^{2}\times3^{0}\times5^{2}\right)$ then $A$ can move in the $1350$ component to $\left(2^{1}\times3^{1}\times5^{0}\right),\left(2^{0}\times3^{2}\times5^{2}\right)$. Since $A$ can always mirror a move by $B$, $A$ won't be left in a position without a move.</p>
<p>There are other winning moves for $A$ like $27000,10$ and $7500,36$, but it is more difficult to prove that they are winning, since there is no longer a straightforward mirroring strategy to follow.</p>
<p>If you were asking about the general case of an arbitrary starting number, things get a bit tricky (and please clarify in your question). <a href="https://math.stackexchange.com/users/1827/ross-millikan">Ross Millikan</a>'s <a href="https://math.stackexchange.com/a/1673620/26369">answer</a> contains a strategy for the first player to win when they can in the one and two prime case, but this doesn't generalize well. In particular, $27000=2^{3}\times3^{3}\times5^{3}$ is a losing position (though this is not obvious). It may be essentially the only nontrivial losing position, but I have not quite proved this yet. (I've been working on typing up my results in the general case, but they will be several pages long and I'm not sure an MSE answer is the best venue.)</p>
|
geometry | <p>I'm thinking about a circle rolling along a parabola. Would this be a parametric representation?</p>
<p>$(t + A\sin (Bt) , Ct^2 + A\cos (Bt) )$</p>
<p>A gives us the radius of the circle, B changes the frequency of the rotations, C, of course, varies the parabola. Now, if I want the circle to "match up" with the parabola as if they were both made of non-stretchy rope, what should I choose for B?</p>
<p>My first guess is 1. But the arc length of a parabola from 0 to 1 is much less than the length from 1 to 2. And, as I examine the graphs, it seems like I might need to vary B in order to get the graph that I want. Take a look:</p>
<p><img src="https://i.sstatic.net/voj4f.jpg" alt="I played with the constants until it looked ALMOST like what I had in mind."></p>
<p>This makes me think that the graph my equation produces will always be wrong no matter what constants I choose. It should look like a cycloid:</p>
<p><img src="https://i.sstatic.net/k3u17.jpg" alt="Cycloid"></p>
<p>But bent to fit on a parabola. [I started this because I wanted to know if such a curve could be self-intersecting. (I think yes.) When I was a child my mom asked me to draw what would happen if a circle rolled along the tray of the blackboard with a point on the rim tracing a line ... like most young people, I drew self-intersecting loops and my young mind was amazed to see that they did not intersect!]</p>
<p>So, other than checking to see if this is even going in the right direction, I would like to know if there is a point where the curve shown (or any curve in the family I described) is most like a cycloid-- </p>
<p>Thanks.</p>
<p>"It would be really really hard to tell" is a totally acceptable answer, though it's my current answer, and I wonder if the folks here can make it a little better.</p>
| <p>(I had been meaning to blog about roulettes a while back, but since this question came up, I'll write about this topic here.)</p>
<p>I'll use the parametric representation</p>
<p>$$\begin{pmatrix}2at\\at^2\end{pmatrix}$$</p>
<p>for a parabola opening upwards, where $a$ is the focal length, or the length of the segment joining the parabola's vertex and focus. The arclength function corresponding to this parametrization is $s(t)=a(t\sqrt{1+t^2}+\mathrm{arsinh}(t))$.</p>
<p>user8268 gave a derivation for the "cycloidal" case, and Willie used unit-speed machinery, so I'll handle the generalization to the "trochoidal case", where the tracing point is not necessarily on the rolling circle's circumference.</p>
<p>Willie's comment shows how you should consider the notion of "rolling" in deriving the parametric equations: a rotation (about the wheel's center) followed by a rotation/translation. The first key is to consider that the amount of rotation needed for your "wheel" to roll should be equivalent to the arclength along the "base curve" (in your case, the parabola).</p>
<p>I'll start with a parametrization of a circle of radius $r$ tangent to the horizontal axis at the origin:</p>
<p>$$\begin{pmatrix}-r\sin\;u\\r-r\cos\;u\end{pmatrix}$$</p>
<p>This parametrization of the circle was designed such that a positive value of the parameter $u$ corresponds to a clockwise rotation of the wheel, and the origin corresponds to the parameter value $u=0$.</p>
<p>The arclength function for this circle is $ru$; for rolling this circle, we obtain the equivalence</p>
<p>$$ru=s(t)-s(c)$$</p>
<p>where $c$ is the parameter value corresponding to the point on the base curve where the rolling starts. Solving for $u$ and substituting the resulting expression into the circle equations yields</p>
<p>$$\begin{pmatrix}-r\sin\left(\frac{s(t)-s(c)}{r}\right)\\r-r\cos\left(\frac{s(t)-s(c)}{r}\right)\end{pmatrix}$$</p>
<p>So far, this is for the "cycloidal" case, where the tracing point is on the circumference. To obtain the "trochoidal" case, what is needed is to replace the $r$ multiplying the trigonometric functions with the quantity $hr$, the distance of the tracing point from the center of the rolling circle:</p>
<p>$$\begin{pmatrix}-hr\sin\left(\frac{s(t)-s(c)}{r}\right)\\r-hr\cos\left(\frac{s(t)-s(c)}{r}\right)\end{pmatrix}$$</p>
<p>At this point, I note that $r$ here can be a positive or a negative quantity. For your "parabolic trochoid", negative $r$ corresponds to the circle rolling outside the parabola and positive $r$ corresponds to rolling inside the parabola. $h=1$ is the "cycloidal" case; $h > 1$ is the "prolate" case (tracing point outside the rolling circle), and $0 < h < 1$ is the "curtate" case (tracing point within the rolling circle).</p>
<p>That only takes care of the rotation corresponding to "rolling"; to get the circle into the proper position, a further rotation and a translation have to be done. The further rotation needed is a rotation by the <a href="http://mathworld.wolfram.com/TangentialAngle.html">tangential angle</a> $\phi$, where for a parametrically-represented curve $(f(t)\quad g(t))^T$, $\tan\;\phi=\frac{g^\prime(t)}{f^\prime(t)}$. (In words: $\phi$ is the angle the tangent of the curve at a given $t$ value makes with the horizontal axis.)</p>
<p>We then substitute the expression for $\phi$ into the <em>anticlockwise</em> rotation matrix</p>
<p>$$\begin{pmatrix}\cos\;\phi&-\sin\;\phi\\\sin\;\phi&\cos\;\phi\end{pmatrix}$$</p>
<p>which yields</p>
<p>$$\begin{pmatrix}\frac{f^\prime(t)}{\sqrt{f^\prime(t)^2+g^\prime(t)^2}}&-\frac{g^\prime(t)}{\sqrt{f^\prime(t)^2+g^\prime(t)^2}}\\\frac{g^\prime(t)}{\sqrt{f^\prime(t)^2+g^\prime(t)^2}}&\frac{f^\prime(t)}{\sqrt{f^\prime(t)^2+g^\prime(t)^2}}\end{pmatrix}$$</p>
<p>For the parabola as I had parametrized it, the tangential angle rotation matrix is</p>
<p>$$\begin{pmatrix}\frac1{\sqrt{1+t^2}}&-\frac{t}{\sqrt{1+t^2}}\\\frac{t}{\sqrt{1+t^2}}&\frac1{\sqrt{1+t^2}}\end{pmatrix}$$</p>
<p>This rotation matrix can be multiplied with the "transformed circle" and then translated by the vector $(f(t)\quad g(t))^T$, finally resulting in the expression</p>
<p>$$\begin{pmatrix}f(t)\\g(t)\end{pmatrix}+\frac1{\sqrt{f^\prime(t)^2+g^\prime(t)^2}}\begin{pmatrix}f^\prime(t)&-g^\prime(t)\\g^\prime(t)&f^\prime(t)\end{pmatrix}\begin{pmatrix}-hr\sin\left(\frac{s(t)-s(c)}{r}\right)\\r-hr\cos\left(\frac{s(t)-s(c)}{r}\right)\end{pmatrix}$$</p>
<p>for a trochoidal curve. (What those last two transformations do, in words, is to rotate and shift the rolling circle appropriately such that the rolling circle touches an appropriate point on the base curve.)</p>
<p>Using this formula, the parametric equations for the "parabolic trochoid" (with starting point at the vertex, $c=0$) are</p>
<p>$$\begin{align*}x&=2at+\frac{r}{\sqrt{1+t^2}}\left(ht\cos\left(\frac{a}{r}\left(t\sqrt{1+t^2}+\mathrm{arsinh}(t)\right)\right)-t-h\sin\left(\frac{a}{r}\left(t\sqrt{1+t^2}+\mathrm{arsinh}(t)\right)\right)\right)\\y&=at^2-\frac{r}{\sqrt{1+t^2}}\left(h\cos\left(\frac{a}{r}\left(t\sqrt{1+t^2}+\mathrm{arsinh}(t)\right)\right)+ht\sin\left(\frac{a}{r}\left(t\sqrt{1+t^2}+\mathrm{arsinh}(t)\right)\right)-1\right)\end{align*}$$</p>
<p>A further generalization to a <em>space curve</em> can be made if the rolling circle is not coplanar to the parabola; I'll leave the derivation to the interested reader (hint: rotate the "transformed" rolling circle equation about the x-axis before applying the other transformations).</p>
<p>Now, for some plots:</p>
<p><img src="https://i.sstatic.net/ukYzm.png" alt="parabolic trochoids"></p>
<p>For this picture, I used a focal length $a=1$ and a radius $r=\frac34$ (negative for the "outer" ones and positive for the "inner" ones). The curtate, cycloidal, and prolate cases correspond to $h=\frac12,1,\frac32$.</p>
<hr>
<p>(added 5/2/2011)</p>
<p>I did promise to include animations and code, so here's a bunch of GIFs I had previously made in <em>Mathematica</em> 5.2:</p>
<p>Inner parabolic cycloid, $a=1,\;r=\frac34\;h=1$</p>
<p><img src="https://i.sstatic.net/RfcAB.gif" alt="inner parabolic cycloid"></p>
<p>Curtate inner parabolic trochoid, $a=1,\;r=\frac34\;h=\frac12$</p>
<p><img src="https://i.sstatic.net/vWh6M.gif" alt="curtate inner parabolic trochoid"></p>
<p>Prolate inner parabolic trochoid, $a=1,\;r=\frac34\;h=\frac32$</p>
<p><img src="https://i.sstatic.net/YjTZR.gif" alt="prolate inner parabolic trochoid"></p>
<p>Outer parabolic cycloid, $a=1,\;r=-\frac34\;h=1$</p>
<p><img src="https://i.sstatic.net/DhioB.gif" alt="outer parabolic cycloid"></p>
<p>Curtate outer parabolic trochoid, $a=1,\;r=-\frac34\;h=\frac12$</p>
<p><img src="https://i.sstatic.net/pJtsO.gif" alt="curtate outer parabolic trochoid"></p>
<p>Prolate outer parabolic trochoid, $a=1,\;r=-\frac34\;h=\frac32$</p>
<p><img src="https://i.sstatic.net/u5zSM.gif" alt="prolate outer parabolic trochoid"></p>
<p>The <em>Mathematica</em> code (unoptimized, sorry) is a bit too long to reproduce; those who want to experiment with parabolic trochoids can obtain a notebook from me upon request.</p>
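<p>In the meantime, the final parametric equations above are straightforward to plot numerically. Here is a rough Python/Matplotlib sketch of them (my own translation of the formulas with $c=0$, not a port of the notebook):</p>

<pre><code>import numpy as np
import matplotlib.pyplot as plt

def parabolic_trochoid(t, a=1.0, r=0.75, h=1.0):
    """x(t), y(t) from the parametric equations above, starting at the vertex."""
    s = a * (t * np.sqrt(1 + t**2) + np.arcsinh(t))   # arclength s(t)
    u = s / r                                         # rolling angle
    d = np.sqrt(1 + t**2)
    x = 2*a*t + (r/d) * (h*t*np.cos(u) - t - h*np.sin(u))
    y = a*t**2 - (r/d) * (h*np.cos(u) + h*t*np.sin(u) - 1)
    return x, y

t = np.linspace(-3, 3, 2000)
for h in (0.5, 1.0, 1.5):                     # curtate, cycloidal, prolate
    plt.plot(*parabolic_trochoid(t, r=-0.75, h=h), label=f"h = {h}")
plt.plot(2*t, t**2, "k--", label="parabola")  # the base curve, a = 1
plt.axis("equal"); plt.legend(); plt.show()
</code></pre>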
<p>As a final bonus, here is an animation of a <em>three-dimensional</em> generalization of the prolate parabolic trochoid:</p>
<p><img src="https://i.sstatic.net/uNQsd.gif" alt="3D prolate parabolic trochoid"></p>
| <p>If I understand the question correctly:</p>
<p>Your parabola is $p(t)=(t,Ct^2)$. Its velocity is $(1,2Ct)$; after normalization it is $v(t)=(1,2Ct)/\sqrt{1+(2Ct)^2}$, hence the unit normal vector is $n(t)=(-2Ct,1)/\sqrt{1+(2Ct)^2}$. The center of the circle is at $p(t)+An(t)$. The arc length of the parabola is $\int\sqrt{1+(2Ct)^2}\,dt= (2 C t \sqrt{4 C^2 t^2+1}+\sinh^{-1}(2 C t))/(4 C)=:a(t)$. The position of a marked point on the circle is $p(t)+An(t)+A\cos(a(t)-a(t_0))\,n(t)+A\sin(a(t)-a(t_0))\,v(t)$; that's the (rather complicated) curve you're looking for.</p>
<p><strong>edit:</strong> corrected a mistake found by Willie Wong</p>
|
game-theory | <p>You are allowed to roll a die up to six times. Anytime you stop, you get the dollar amount of the face value of your last roll.</p>
<p>Question: What is the best strategy?</p>
<p>According to my calculation, for the strategy 6,5,5,4,4, the expected value is $142/27\approx 5.26$, which I consider quite high. So this might be the best strategy.</p>
<p>Here, 6,5,5,4,4 means in the first roll you stop only when you get a 6; if you did not get a 6 in the first roll, then in the second roll you stop only when you roll a number 5 or higher (i.e. 5 or 6), etc.</p>
| <p>Just work backwards. At each stage, you accept a roll that is >= the expected gain from the later stages:<br>
Expected gain from 6th roll: 7/2<br>
Therefore strategy for 5th roll is: accept if >= 4<br>
Expected gain from 5th roll: (6 + 5 + 4)/6 + (7/2)(3/6) = 17/4<br>
Therefore strategy for 4th roll is: accept if >= 5<br>
Expected gain from 4th roll: (6 + 5)/6 + (17/4)(4/6) = 14/3<br>
Therefore strategy for 3rd roll is: accept if >= 5<br>
Expected gain from 3rd roll: (6 + 5)/6 + (14/3)(4/6) = 89/18<br>
Therefore strategy for 2nd roll is: accept if >= 5<br>
Expected gain from 2nd roll: (6 + 5)/6 + (89/18)(4/6) = 277/54<br>
Therefore strategy for 1st roll is: accept only if 6<br>
Expected gain from 1st roll: 6/6 + (277/54)(5/6) = 1709/324 </p>
<p>So your strategy is 6,5,5,5,4 for an expectation of $5.27469...</p>
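<p>The whole backward induction fits in a few lines of Python. This sketch (an illustration, using exact fractions) reproduces the numbers above:</p>

<pre><code>from fractions import Fraction

def die_game(n_rolls=6, faces=6):
    """Expected value of the game, working backwards from the last roll."""
    v = Fraction(faces + 1, 2)        # last roll: accept anything, E = 7/2
    for _ in range(n_rolls - 1):
        keep = [f for f in range(1, faces + 1) if f > v]   # accept iff f > v
        v = (sum(keep) + (faces - len(keep)) * v) / faces
    return v

print(die_game())   # 1709/324, about 5.2747
</code></pre>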
| <p>Let $X_n$ be your winnings in a game of length $n$ (in your case $n = 6$), if you are playing optimally. Here, "optimally" means that at roll $m$, you will accept if the value is greater than $\mathbb{E} X_{n-m}$, which is your expected winnings if you continued to play with this strategy. </p>
<p>Let $X \sim \mathrm{Unif}(1,2,3,4,5,6) $ (you can also insert any distribution you like here). Then $X_n$ can be defined as $X_1 = X$ and for $n \geq 2$,</p>
<p>$$ X_n = \begin{cases} X_{n-1}, \quad \mathrm{if} \quad X < \mathbb{E}X_{n-1} \\
X \quad \mathrm{if} \quad X \geq \mathbb{E}X_{n-1} \end{cases} $$
</p>
<p>So your decisions can be determined by computing $\mathbb{E} X_n$ for each $n$ recursively. For the dice case, $\mathbb{E} X_1 = \mathbb{E}X = 7/2$ (meaning on the fifth roll, accept if you get >7/2, or 4,5 or 6), and so,</p>
<p>$$\mathbb{E} X_2 = \mathbb{E} X_1 \mathrm{P}[X = 1,2,3] + \mathbb{E} [X | X \geq 4] \mathrm{P}[X = 4,5,6]$$
$$ = (7/2)(3/6) + (4 + 5 + 6)/3 (1/2) = 17/4 $$
</p>
<p>So on the fourth roll, accept if you get > 17/4, or 5 or 6, and so on (you need to round the answer up at each step, which makes it hard to give a closed form for $\mathbb{E} X_n$ unfortunately). </p>
|
geometry | <p>For <span class="math-container">$n = 2$</span>, I can visualize that the determinant of an <span class="math-container">$n \times n$</span> matrix is the area of the parallelogram spanned by the column vectors, by actually calculating the area from the coordinates. But how can one easily see that this holds in any dimension?</p>
| <p>If the column vectors are linearly dependent, both the determinant and the volume are zero.
So assume linear independence.
The determinant remains unchanged when adding multiples of one column to another. This corresponds to a skew translation of the parallelepiped, which does not affect its volume.
By a finite sequence of such operations, you can transform your matrix to diagonal form, where the relation between determinant (=product of diagonal entries) and volume of a "rectangle" (=product of side lengths) is apparent.</p>
| <p>Here is the same argument as Muphrid's, perhaps written in an elementary way.<br /><br /></p>
<p>Apply Gram-Schmidt orthogonalization to $\{v_{1},\ldots,v_{n}\}$, so that
\begin{eqnarray*}
v_{1} & = & v_{1}\\
v_{2} & = & c_{12}v_{1}+v_{2}^{\perp}\\
v_{3} & = & c_{13}v_{1}+c_{23}v_{2}+v_{3}^{\perp}\\
& \vdots
\end{eqnarray*}
where $v_{2}^{\perp}$ is orthogonal to $v_{1}$; and $v_{3}^{\perp}$
is orthogonal to $span\left\{ v_{1},v_{2}\right\} $, etc. </p>
<p>Since determinant is multilinear, anti-symmetric, then
\begin{eqnarray*}
\det\left(v_{1},v_{2},v_{3},\ldots,v_{n}\right) & = & \det\left(v_{1},c_{12}v_{1}+v_{2}^{\perp},c_{13}v_{1}+c_{23}v_{2}+v_{3}^{\perp},\ldots\right)\\
& = & \det\left(v_{1},v_{2}^{\perp},v_{3}^{\perp},\ldots,v_{n}^{\perp}\right)\\
& = & \mbox{signed volume}\left(v_{1},\ldots,v_{n}\right)
\end{eqnarray*}</p>
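<p>This is easy to test numerically. Here is a quick NumPy check (my own illustration): the QR factorization performs the orthogonalization above, and the diagonal entries of $R$ are, up to sign, the lengths $\|v_i^{\perp}\|$, so their product should match $|\det|$.</p>

<pre><code>import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((5, 5))        # columns span a parallelepiped in R^5

Q, R = np.linalg.qr(V)                 # Gram-Schmidt in matrix form: V = QR
volume = np.prod(np.abs(np.diag(R)))   # product of the orthogonal lengths

print(volume, abs(np.linalg.det(V)))   # the two numbers agree
</code></pre>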
|
probability | <p>Randomly break a stick (or a piece of dry spaghetti, etc.) in two places, forming three pieces. The probability that these three pieces can form a triangle is $\frac14$ (coordinatize the stick from $0$ to $1$, call the breaking points $x$ and $y$, consider the unit square of the coordinate plane, shade the areas that satisfy the triangle inequality <em>edit</em>: see comments on the question, below, for a better explanation of this).</p>
<p>The other day in class<sup>*</sup>, my professor was demonstrating how to do a Monte Carlo simulation of this problem on a calculator and wrote a program that, for each trial did the following:</p>
<ol>
<li>Pick a random number $x$ between $0$ and $1$. This is the first side length.</li>
<li>Pick a random number $y$ between $0$ and $1 - x$ (the remaning part of the stick). This is the second side length.</li>
<li>The third side length is $1 - x - y$.</li>
<li>Test if the three side lengths satisfy the triangle inequality (in all three permutations).</li>
</ol>
<p>He ran around $1000$ trials and was getting $0.19$, which he said was probably just random-chance error off $0.25$, but every time the program was run, no matter whose calculator we used, the result was around $0.19$.</p>
<p>What's wrong with the simulation method? What is the theoretical answer to the problem actually being simulated?</p>
<p>(<sup>*</sup> the other day was more than $10$ years ago)</p>
| <p>The three triangle inequalities are</p>
<p>\begin{align}
x + y &> 1-x-y \\
x + (1-x-y) &> y \\
y + (1-x-y) &> x \\
\end{align}</p>
<p>Your problem is that in picking the smaller number first from a uniform distribution, it's going to end up being bigger than it would if you had just picked two random numbers and taken the smaller one. (You'll end up with an average value of $1/2$ for the smaller instead of $1/3$ like you actually want.) Now when you pick $y$ on $[0, 1-x]$, you're making it smaller than it should be (ending up with average value of $1/4$). To understand this unequal distribution, we can substitute $y (1-x)$ for $y$ in the original inequalities and we'll see the proper distribution.</p>
<p><img src="https://i.sstatic.net/5iUej.png" alt=""></p>
<p>(Note that the $y$-axis of the graph doesn't really go from $0$ to $1$; instead the top represents the line $y=1-x$. I'm showing it as a square because that's how the probabilities you were calculating were being generated.) Now the probability you're measuring is the area of the strangely-shaped region on the left, which is
$$\int_0^{1/2}\frac1{2-2x}-\frac{2x-1}{2x-2}\,dx=\ln 2-\frac12\approx0.19314$$
I believe that's the answer you calculated.</p>
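<p>A quick simulation makes the difference visible. The sketch below (Python, my own illustration) runs both the correct model (two independent uniform break points) and the calculator program's sequential model:</p>

<pre><code>import random

def triangle(a, b, c):
    return a < b + c and b < a + c and c < a + b

def trial_uniform():        # break at two independent uniform points
    x, y = sorted((random.random(), random.random()))
    return triangle(x, y - x, 1 - y)

def trial_sequential():     # the program: y uniform on [0, 1 - x]
    a = random.random()
    b = random.uniform(0, 1 - a)
    return triangle(a, b, 1 - a - b)

n = 10**6
print(sum(trial_uniform() for _ in range(n)) / n)     # about 0.25
print(sum(trial_sequential() for _ in range(n)) / n)  # about 0.193
</code></pre>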
| <p>FYI: This question was included in a Martin Gardner 'Mathematical Games' article for Scientific American some years ago. He showed that there were 2 ways of randomly choosing the 2 'break' points:</p>
<ol>
<li>choose two random numbers from 0 to 1, or</li>
<li>choose one random number, break the stick at that point, then choose one of the two shortened sticks at random, and break it at a random point.</li>
</ol>
<p>The two methods give different answers. </p>
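<p>Gardner's second method is also easy to simulate. A short Python sketch (my own illustration) shows it lands near $0.193 \approx \ln 2 - \frac12$, different from the $\frac14$ of the first method:</p>

<pre><code>import random

def trial_gardner():
    """Break once, pick one of the two pieces at random, break it."""
    x = random.random()
    piece = x if random.random() < 0.5 else 1 - x
    a = random.uniform(0, piece)
    s = sorted((a, piece - a, 1 - piece))
    return s[2] < s[0] + s[1]    # longest piece shorter than the other two

n = 10**6
print(sum(trial_gardner() for _ in range(n)) / n)   # about 0.193
</code></pre>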
|
logic | <p>Is there such a logical thing as proof by example?</p>
<p>I know many times when I am working with algebraic manipulations, I do quick tests to see if I remembered the formula right.</p>
<p>This works and is completely logical for counterexamples. One specific counterexample disproves the general rule. One example might be whether $(a+b)^2 = a^2+b^2$. This is quickly disproven with most choices of a counterexample. </p>
<p>However, say I want to test something that is true like $\log_a(b) = \log_x(b)/\log_x(a)$. I can pick some points a and b and quickly prove it for one example. If I test a sufficient number of points, I can then rest assured that it does work in the general case. <strong>Not that it probably works, but that it does work assuming I pick sufficiently good points</strong>. (Although in practice, I have a vague idea of what makes a set of sufficiently good points and rely on that intuition/observation that it it should work)</p>
<p>Why is this thinking "it probably works" <strong>correct</strong>?</p>
<p>I've thought about it, and here's the best I can come up with, but I'd like to hear a better answer:</p>
<blockquote>
<p>If the equation is false (the two sides aren't equal), then there are
going to be constraints on what a and b can be. In this example it is
one equation and two unknowns. If I can test one point, see it fits
the equation, then test another point, see it fits the equation, and
test one more that doesn't "lie on the path formed by the other two
tested points", then I have proven it.</p>
</blockquote>
<p>I remember being told in school that this is not the same as proving the general case as I've only proved it for specific examples, but thinking about it some more now, I am almost sure it is a rigorous method to prove the general case provided you pick the right points and satisfy some sort of "not on the same path" requirement for the chosen points.</p>
<p>edit: Thank you for the great comments and answers. I was a little hesitant on posting this because of "how stupid a question it is" and getting a bunch of advice on why this won't work instead of a good discussion. I found the polynomial answer the most helpful to my original question of whether or not this method could be rigorous, but I found the link to the small numbers intuition quiz pretty awesome as well.</p>
<p>edit2: Oh I also originally tagged this as linear-algebra because the degrees of freedom nature when the hypothesis is not true. But I neglected to talk about that, so I can see why that was taken out. When a hypothesis is not true (ie polynomial LHS does not equal polynomial RHS), the variables can't be anything, and there exists a counter example to show this. By choosing points that slice these possibilities in the right way, it's proof that the hypothesis is true, at least for polynomials. The points have to be chosen so that there is no possible way the polynomial can meet all of them. If it still meets these points, the only possibility is that the polynomials are the same, proving the hypothesis by example. I would imagine there is a more general version of this, but it's probably harder than writing proofs the more straightforward way in a lot of cases. Maybe "by example" is asking to be stoned and fired. I think "brute force" was closer to what I was asking, but I didn't realize it initially.</p>
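<p>To make the polynomial case concrete: two single-variable polynomials of degree at most $d$ that agree at $d+1$ distinct points must be identical, since their difference would have more roots than its degree allows. So finitely many well-chosen test points really are a proof. A small Python sketch of the idea (my own illustration; exact integer arithmetic, so <code>==</code> is a genuine test; for an identity in two variables one would repeat this over enough fixed values of $b$):</p>

<pre><code>import random

def agree_on(p, q, points):
    return all(p(x) == q(x) for x in points)

d = 2                                    # degree bound of both sides
pts = random.sample(range(100), d + 1)   # d + 1 distinct test points

b = 7                                    # fix b; view both sides as polynomials in a
lhs = lambda a: (a + b) ** 2
rhs = lambda a: a * a + 2 * a * b + b * b
print(agree_on(lhs, rhs, pts))           # True, which already proves equality in a
</code></pre>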
| <p>In mathematics, "it probably works" is never a good reason to think something has been proven. There are certain patterns that hold for many small numbers - most of the numbers one would test - and then break after some obscenely large $M$ (see <a href="http://arxiv.org/pdf/1105.3943.pdf">here</a> for an example). If some equation or statement doesn't hold in general but holds for certain values, then yes, there will be constraints, but those constraints might be very hard or even impossible to quantify: say an equation holds for all composite numbers, but fails for all primes. Since we don't know a formula for the $n$th prime number, it would be very hard to test your "path" to see where this failed.</p>
<p>However, there is such a thing as a proof by example. We often want to show two structures, say $G$ and $H$, to be the same in some mathematical sense: for example, we might want to show $G$ and $H$ are <a href="http://en.wikipedia.org/wiki/Group_isomorphism">isomorphic as groups</a>. Then it would suffice to find an isomorphism between them! In general, if you want to show something exists, you can prove it by <em>finding it</em>!</p>
<p>But again, if you want to show something is true for all elements of a given set (say, you want to show $f(x) = g(x)$ for all $x\in\Bbb{R}$), then you have to employ a more general argument: no amount of case testing will prove your claim (unless you can actually test all the elements of the set explicitly: for example when the set is finite, or when you can apply mathematical induction).</p>
| <p>Yes. As pointed out in the comments by CEdgar:</p>
<p>Theorem: There exists an odd prime number.
Proof: 17 is an odd prime number.</p>
<p>Incidentally, this is also a proof by example that there are proofs by example.</p>
|