probability
<p>What is the most efficient way to simulate a 7-sided die with a 6-sided die? I've put some thought into it, but I'm not sure I'm getting anywhere.</p> <p>To create one 7-sided die roll we can use a rejection technique: 3 bits give a uniform value in 1-8, and we need a uniform value in 1-7, so we have to reject 1 outcome in 8, i.e. a 12.5% rejection probability.</p> <p>To create $n$ 7-sided die rolls we need $\lceil \log_2( 7^n ) \rceil$ bits. This means that our rejection probability is $p_r(n)=1-\frac{7^n}{2^{\lceil \log_2( 7^n ) \rceil}}$.</p> <p>It turns out that the rejection probability varies wildly, but for $n=26$ we get $p_r(26) = 1 - \frac{7^{26}}{2^{\lceil \log_2(7^{26}) \rceil}} = 1-\frac{7^{26}}{2^{73}} \approx 0.6\%$ rejection probability, which is quite good. This means that with good odds we can generate 26 7-sided die rolls out of 73 bits.</p> <p>Similarly, if we throw a fair die $n$ times we get a number in $0 \ldots (6^n-1)$, which gives us $\lfloor \log_2(6^{n}) \rfloor$ bits by rejecting everything at or above $2^{\lfloor \log_2( 6^{n} ) \rfloor}$. Consequently the rejection probability is $p_r(n)=1-\frac{2^{\lfloor \log_2( 6^{n} ) \rfloor}}{6^n}$.</p> <p>Again this varies wildly, but for $n = 53$ we get $p_r(53) = 1-\frac{2^{137}}{6^{53}} \approx 0.2\%$, which is excellent. As a result, we can roll the 6-sided die 53 times and get ~137 bits.</p> <p>This means that we get about $\frac{137}{53} \cdot \frac{26}{73} = 0.9207$ 7-sided die rolls per 6-sided die roll, which is close to the optimum $\frac{\log 6}{\log 7} \approx 0.9208$.</p> <p>Is there a way to reach the optimum? Is there a way to find the values of $n$ above that minimize the rejection probability? Is there relevant theory I could have a look at?</p> <p>P.S. 
Relevant Python expressions (each finds the $n &lt; 100$ with the smallest rejection percentage):</p> <pre><code>from math import ceil, floor, log

min([(i, round(1000*(1 - 7**i / 2**ceil(log(7**i, 2))))/10) for i in range(1, 100)], key=lambda x: x[1])
min([(i, round(1000*(1 - 2**floor(log(6**i, 2)) / 6**i))/10) for i in range(1, 100)], key=lambda x: x[1])
</code></pre> <p>P.S.2 Thanks to @Erick Wong for helping me get the question right with his great comments.</p> <p>Related question: <a href="https://math.stackexchange.com/questions/685395/is-there-a-way-to-simulate-any-n-sided-die-using-a-fixed-set-of-die-types-for">Is there a way to simulate any $n$-sided die using a fixed set of die types for all $n$?</a></p>
<p>Roll the D6 twice. Order the pairs $(1,1), (1,2), \ldots, (6,5)$ and associate them with the set $\{1,2,\ldots,35\}$. If one of these pairs is rolled, take the associated single value and reduce it mod $7$ (reading the residues as $1$ through $7$). So far you have a uniform distribution on $1$ through $7$. </p> <p>If the pair $(6,6)$ is rolled, you are required to start over. This procedure will probabilistically end at some point. Since it has a $\frac{35}{36}$ chance of not requiring a repeat, the expected number of iterations is only $\frac{36}{35}$. More specifically, the number of iterations of this 2-dice-toss process has a geometric distribution with $p=\frac{35}{36}$. So $P(\mbox{requires $n$ iterations})=\frac{35}{36}\left(\frac{1}{36}\right)^{n-1}$.</p> <hr> <p>Counting by die rolls instead of iterations, with this method, $$\{P(\mbox{exactly $n$ die rolls are needed})\}_{n=1,2,3,\ldots}=\left\{0,\frac{35}{36},0,\frac{35}{36}\left(\frac{1}{36}\right),0,\frac{35}{36}\left(\frac{1}{36}\right)^{2},0,\frac{35}{36}\left(\frac{1}{36}\right)^{3},\ldots\right\}$$</p> <p>The method that uses base-6 representations of real numbers (@Erick Wong's answer) has $$\{P(\mbox{exactly $n$ die rolls are needed})\}_{n=1,2,3,\ldots}=\left\{0,\frac{5}{6},\frac{5}{6}\left(\frac{1}{6}\right),\frac{5}{6}\left(\frac{1}{6}\right)^{2},\frac{5}{6}\left(\frac{1}{6}\right)^{3},\frac{5}{6}\left(\frac{1}{6}\right)^{4},\ldots\right\}$$</p> <p>Put another way, let $Q(n)=P(\mbox{at most $n$ die rolls are needed using this method})$ and $R(n)=P(\mbox{at most $n$ die rolls are needed using the base-6 method})$. 
Then we sum the above term by term, and for ease of comparison, I'll use common denominators:</p> <p>$$\begin{align} \{Q(n)\}_{n=1,2,\ldots} &amp;= \left\{0,\frac{45360}{36^3},\frac{45360}{36^3},\frac{46620}{36^3},\frac{46620}{36^3},\frac{46655}{36^3},\frac{46655}{36^3},\ldots\right\}\\ \{R(n)\}_{n=1,2,\ldots} &amp;= \left\{0,\frac{38880}{36^3},\frac{45360}{36^3},\frac{46440}{36^3},\frac{46620}{36^3},\frac{46650}{36^3},\frac{46655}{36^3},\ldots\right\}\\ \end{align}$$</p> <p>So on every other $n$, the base-6 method ties with this method, and otherwise this method is "winning".</p> <hr> <p>EDIT Ah, I first understood the question to be about simulating <em>one</em> D7 roll with $n$ D6 rolls, and minimizing $n$. Now I understand that the problem is about simulating $m$ D7 rolls with $n$ D6 rolls, and minimizing $n$.</p> <p>So alter this by keeping track of "wasted" random information. Here is the recursion in words. I am sure that it could be coded quite compactly in perl:</p> <p>Going into the first roll, we will have $6$ uniformly distributed outcomes (and so not enough to choose from $\{1,...,7\}$.) This is the initial condition for the recursion.</p> <p>Generally, going into the $k$th roll, we have some number $t$ of uniformly distributed outcomes to consider. Partition $\{1,\ldots,t\}$ into $$\{1,\ldots,7; 8,\ldots,14;\ldots,7a;7a+1,\ldots,t\}$$ where $a=t\operatorname{div}7$. Agree that if the outcome from the $k$th roll puts us in $\{1,\ldots,7a\}$, we will consider the result mod $7$ to be a D7 roll result. If the $k$th roll puts us in the last portion of the partition, we must re-roll.</p> <p>Now, either</p> <ul> <li><p>we succeeded with finding a D7 roll. Which portion of the partition were we in? We could have uniformly been in the 1st, 2nd, ..., or $a$th. The next roll therefore will give us $6a$ options to consider, and the recursion repeats.</p></li> <li><p>we did not find another D7 roll. 
So our value was among $7a+1, \ldots, t$, which is at most $6$ options. However many options it is, call it $b$ for now ($b=t\operatorname{mod}7$). Going into the next roll, we will have $6b$ options to consider, and the recursion repeats.</p></li> </ul>
<p>In the long run, just skip the binary conversion altogether and go with some form of arithmetic coding: use the $6$-dice rolls to generate a uniform base-$6$ real number in $[0,1]$ and then extract base-$7$ digits from that as they resolve. For instance:</p> <pre><code>int rand7() { static double a=0, width=7; // persistent state while ((int)(a+width) != (int)a) { width /= 6; a += (rand6()-1)*width; } int n = (int)a; a -= n; a *= 7; width *= 7; return (n+1); } </code></pre> <p>A test run of $10000$ outputs usually requires exactly $10861$ dice rolls, and occasionally needs one or two more. Note that the uniformity of this implementation is not exact (even if <code>rand6</code> is perfect) due to floating-point truncation, but should be pretty good overall.</p>
linear-algebra
<p>I see on Wikipedia that the product of two commuting symmetric positive definite matrices is also positive definite. Does the same result hold for the product of two positive semidefinite matrices?</p> <p>My proof of the positive definite case falls apart for the semidefinite case because of the possibility of division by zero...</p>
<p>You have to be careful about what you mean by "positive (semi-)definite" in the case of non-Hermitian matrices. In this case I think what you mean is that all eigenvalues are positive (or nonnegative). Your statement isn't true if "$A$ is positive definite" means $x^T A x &gt; 0$ for all nonzero real vectors $x$ (or equivalently $A + A^T$ is positive definite). For example, consider $$ A = \pmatrix{ 1 &amp; 2\cr 2 &amp; 5\cr},\ B = \pmatrix{1 &amp; -1\cr -1 &amp; 2\cr},\ AB = \pmatrix{-1 &amp; 3\cr -3 &amp; 8\cr},\ (1\ 0) A B \pmatrix{1\cr 0\cr} = -1$$</p> <p>Let $A$ and $B$ be positive semidefinite real symmetric matrices. Then $A$ has a positive semidefinite square root, which I'll write as $A^{1/2}$. Now $A^{1/2} B A^{1/2}$ is symmetric and positive semidefinite, and $AB = A^{1/2} (A^{1/2} B)$ and $A^{1/2} B A^{1/2}$ have the same nonzero eigenvalues.</p>
<p>The product of two symmetric PSD matrices is PSD iff the product is also symmetric. More generally, if $A$ and $B$ are PSD, then $AB$ is PSD iff $AB$ is normal, i.e., $(AB)^T AB = AB(AB)^T$.</p> <p>Reference: A.R. Meenakshi and C. Rajian, "On a product of positive semidefinite matrices", Linear Algebra and its Applications, Volume 295, Issues 1–3, 1 July 1999, Pages 3–6.</p>
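<p>The counterexample in the first answer is easy to check numerically. A dependency-free Python sketch (the $2\times 2$ helper functions are mine, purely for illustration):</p>

```python
def matmul(A, B):
    # 2x2 matrix product, written out to keep the example dependency-free
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def eig2(M):
    # eigenvalues of a 2x2 matrix from its characteristic polynomial
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = (tr * tr - 4 * det) ** 0.5
    return (tr - disc) / 2, (tr + disc) / 2

A = [[1, 2], [2, 5]]     # symmetric positive definite
B = [[1, -1], [-1, 2]]   # symmetric positive definite
AB = matmul(A, B)        # [[-1, 3], [-3, 8]]: not symmetric
```

<p>The product has $x^T(AB)x = -1$ at $x=(1,0)$, yet both eigenvalues are positive, consistent with the similarity to $A^{1/2} B A^{1/2}$.</p>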
logic
<p>Most of the systems mathematicians are interested in are consistent (and strong enough to express arithmetic), which means, by Gödel's incompleteness theorems, that there must be unprovable statements.</p> <p>I've seen a simple natural language statement here and elsewhere that's supposed to illustrate this: "I am not a provable statement.", which leads to a paradox if false and a logical disconnect if true (i.e. logic doesn't work to prove it by definition). Like this answer explains: <a href="https://math.stackexchange.com/a/453764/197692">https://math.stackexchange.com/a/453764/197692</a>.</p> <p>The natural language statement is simple enough for people to get why there's a problem here. But Gödel's incompleteness theorems show that similar statements exist within mathematical systems.</p> <p>My question then is: are there <em>simple</em> unprovable statements, ones that would seem intuitively true (or intuitively unprovable) to the layperson, to illustrate the same concept in, say, integer arithmetic or algebra?</p> <p>My understanding is that the continuum hypothesis is an example of an unprovable statement in Zermelo-Fraenkel set theory, but that's not really simple or intuitive.</p> <p>Can someone give a good example you can point to and say "That's what Gödel's incompleteness theorems are talking about"? Or is this just something that is fundamentally hard to show mathematically?</p> <p>Update: There are some fantastic answers here that are certainly accessible. It will be difficult to pick a "right" one.</p> <p>Originally I was hoping for something a high school student could understand, without having to explain axiomatic set theory, or Peano Arithmetic, or countable versus uncountable, or non-Euclidean geometry. 
But the impression I am getting is that in a sufficiently well developed mathematical system, mathematicians have plumbed its depths to the point where potentially unprovable statements either remain as conjectures, and are therefore hard to grasp by nature (because very smart people are stumped by them), or, once shown to be unprovable, become axiomatic in some new system or branch of systems.</p>
<p>Here's a nice example that I think is easier to understand than the usual examples of Goodstein's theorem, Paris-Harrington, etc. Take a countably infinite paint box; this means that it has one color of paint for each positive integer; we can therefore call the colors <span class="math-container">$C_1, C_2, $</span> and so on. Take the set of real numbers, and imagine that each real number is painted with one of the colors of paint.</p> <p>Now ask the question: Are there four real numbers <span class="math-container">$a,b,c,d$</span>, all painted the same color, and not all zero, such that <span class="math-container">$$a+b=c+d?$$</span></p> <p>It seems reasonable to imagine that the answer depends on how exactly the numbers have been colored. For example, if you were to color every real number with color <span class="math-container">$C_1$</span>, then obviously there are <span class="math-container">$a,b,c,d$</span> satisfying the two desiderata. But one can at least entertain the possibility that if the real numbers were colored in a sufficiently complicated way, there would not be four numbers of the same color with <span class="math-container">$a+b=c+d$</span>; perhaps a sufficiently clever painter could arrange that for any four numbers with <span class="math-container">$a+b=c+d$</span> there would always be at least one of a different color than the rest.</p> <p>So now you can ask the question: Must such <span class="math-container">$a,b,c,d$</span> exist <em>regardless</em> of how cleverly the numbers are actually colored?</p> <p>And the answer, proved by Erdős in 1943 is: yes, <em>if and only if the continuum hypothesis is false</em>, and is therefore independent of the usual foundational axioms for mathematics.</p> <hr> <p>The result is mentioned in </p> <ul> <li>Fox, Jacob “<a href="http://math.mit.edu/~fox/paper-foxrado.pdf" rel="noreferrer">An infinite color analogue of Rado's theorem</a>”, Journal of Combinatorial Theory Series A 
<strong>114</strong> (2007), 1456–1469.</li> </ul> <p>Fox says that the result I described follows from a more general result of Erdős and Kakutani, that the continuum hypothesis is equivalent to there being a countable coloring of the reals such that each monochromatic subset is linearly independent over <span class="math-container">$\Bbb Q$</span>, which is proved in:</p> <ul> <li>Erdős, P and S. Kakutani “<a href="http://projecteuclid.org/euclid.bams/1183505209" rel="noreferrer">On non-denumerable graphs</a>”, Bull. Amer. Math. Soc. <strong>49</strong> (1943) 457–461.</li> </ul> <p>A proof for the <span class="math-container">$a+b=c+d$</span> situation, originally proved by Erdős, is given in:</p> <ul> <li>Davies, R.O. “<a href="http://journals.cambridge.org/action/displayAbstract?fromPage=online&amp;aid=2073404" rel="noreferrer">Partitioning the plane into denumerably many sets without repeated distance</a>” Proc. Cambridge Philos. Soc. <strong>72</strong> (1972) 179–183.</li> </ul>
<p>Any statement which is not logically valid (read: always true) is unprovable. The statement $\exists x\exists y(x&gt;y)$ is not provable from the theory of linear orders, since it is false in the singleton order. On the other hand, it is not disprovable since any other order type would satisfy it.</p> <p>The statement $\exists x(x^2-2=0)$ is not provable from the field axioms, since $\Bbb Q$ thinks this is false, and $\Bbb C$ thinks it is true.</p> <p>The statement "$G$ is an Abelian group" is not provable since a given group $G$ could be Abelian or non-Abelian.</p> <p>The statements "$f\colon\Bbb{R\to R}$ is continuous/differentiable/continuously differentiable/smooth/analytic/a polynomial", and so on and so forth, are all unprovable, because, as before, given an arbitrary function we don't know anything about it. Even if we know it is continuous we can't know if it is continuously differentiable, or smooth, or anything else. So these are all additional assumptions we have to make.</p> <p>Of course, given a particular function, like $f(x)=e^x$, we can sit down and prove things about it, but the statement "$f$ is a continuous function" cannot be proved or disproved until further assumptions are added.</p> <p>And that's the point that I am trying to make here. Every statement which is not logically valid will be unprovable from some set of assumptions. But you ask for an intuitive statement, and that causes a problem.</p> <p>The problem with "intuitive statement" is that the more you work in mathematics, the more your intuition is decomposed and reconstructed according to the topic you work with. 
The continuum hypothesis is perfectly intuitive and simple for me; it is true that understanding <strong>how</strong> it can be unprovable is difficult, but the statement itself is not very difficult once you have cleared up basic notions like cardinality and power sets.</p> <p>Finally, let me just add that there are plenty of theories which are complete and consistent and we work with them. Some of them are even recursively enumerable. The incompleteness theorem gives us <em>three</em> conditions from which incompleteness follows; any two alone won't suffice. (1) Consistent, (2) Recursively enumerable, (3) Interprets arithmetic.</p> <p>There are complete theories which satisfy the first two, and there are complete theories which are consistent and interpret arithmetic, and of course any inconsistent theory is complete.</p>
logic
<p>I'm reading Behnke's <em>Fundamentals of mathematics</em>:</p> <blockquote> <p>If the number of axioms is finite, we can reduce the concept of a consequence to that of a tautology.</p> </blockquote> <p>I got curious about this: are there infinite sets of axioms? The only thing I could think of was the possible existence of unknown axioms, and perhaps some belief that the number of axioms is infinite.</p>
<p>In first order Peano axioms the principal of mathematical induction is not one axiom, but a &quot;template&quot; called an <a href="http://en.wikipedia.org/wiki/Axiom_schema" rel="nofollow noreferrer">axiom scheme</a>. For every possible expression (or &quot;predicate&quot;) with a free variable, <span class="math-container">$P(n)$</span>, we have the axiom:</p> <p><span class="math-container">$$(P(0) \land \left(\forall n: P(n)\implies P(n+1)\right))\implies \\\forall n: P(n)$$</span></p> <p>So, if <span class="math-container">$P(x)$</span> is the predicate, <span class="math-container">$x\cdot 0 = 1$</span> then we'd have the messy axiom:</p> <p><span class="math-container">$$(0\cdot 0=1 \land \left(\forall n: n\cdot 0 =1\implies (n+1)\cdot 0=1\right))\implies \\\forall n: n\cdot 0 = 1$$</span></p> <p>Our inclination is to think of this axiom scheme as a single axiom when preceded by &quot;<span class="math-container">$\forall P$</span>&quot;, but in first-order theory, there is only one &quot;type.&quot; In first-order number theory, that type is &quot;natural number.&quot; So there is no room in the language for the concept of <span class="math-container">$\forall P$</span>. 
<a href="http://en.wikipedia.org/wiki/Mathematical_induction#Axiom_of_induction" rel="nofollow noreferrer">In second order theory</a>, we can say <span class="math-container">$\forall P$</span>.</p> <p>In set theory, you have a similar rule, the <a href="http://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory#3._Axiom_schema_of_specification_.28also_called_the_axiom_schema_of_separation_or_of_restricted_comprehension.29" rel="nofollow noreferrer">&quot;axiom of specification&quot;</a> which lets you construct a set from any predicate, <span class="math-container">$P(x,y)$</span>, with two free variables:</p> <p><span class="math-container">$$\forall S:\exists T: \forall x: (x\in T\iff (x\in S\land P(x,S)))$$</span></p> <p>(The axiom lets you do more, but this is a simple case.)</p> <p>which essentially means that there exists a set:</p> <p><span class="math-container">$$\{x\in S: P(x,S)\}$$</span></p> <p>Again, there is no such object inside set theory as a &quot;predicate.&quot;</p> <p>For most human axiom systems, even when the axioms are infinite, we have a level of verifiability. We usually desire an ability to verify a proof using mechanistic means, and therefore, given any step in a proof, we desire the ability to verify the step in a finite amount of time.</p>
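<p>The "template" character of an axiom scheme can be made concrete: each predicate mechanically yields one axiom, so the axioms are infinite in number but each instance is verifiable in finite time. A small Python sketch (the rendering syntax below is my own, not any standard proof-assistant notation):</p>

```python
def induction_axiom(predicate):
    """Render one instance of the first-order induction scheme.

    `predicate` is a format string with one slot for its free variable,
    e.g. "{0}*0 = 1" for the predicate P(x): x*0 = 1.
    """
    p = predicate.format  # p("n") renders P(n) as a string
    return (f"({p('0')} and (forall n: {p('n')} implies {p('(n+1)')}))"
            f" implies (forall n: {p('n')})")

# the "messy axiom" above, for P(x): x*0 = 1
axiom = induction_axiom("{0}*0 = 1")
```

<p>Feeding in any other predicate string produces another axiom of the scheme, which is exactly why first-order PA has infinitely many axioms yet remains mechanically checkable.</p>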
<p>Many important theories, most significantly first-order Peano arithmetic, and ZFC, the most commonly used axiomatic set theory, have an infinite number of axioms. So does the theory of algebraically closed fields. </p>
probability
<p>Suppose we have two independent random variables <span class="math-container">$Y$</span> and <span class="math-container">$X$</span>, both being exponentially distributed with respective parameters <span class="math-container">$\mu$</span> and <span class="math-container">$\lambda$</span>.</p> <p>How can we calculate the pdf of <span class="math-container">$Y-X$</span>?</p>
<p>You can think of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> as waiting times for two independent things (say <span class="math-container">$A$</span> and <span class="math-container">$B$</span> respectively) to happen. Suppose we wait until the first of these happens. If it is <span class="math-container">$A$</span>, then (by the lack-of-memory property of the exponential distribution) the further waiting time until <span class="math-container">$B$</span> happens still has the same exponential distribution as <span class="math-container">$Y$</span>; if it is <span class="math-container">$B$</span>, the further waiting time until <span class="math-container">$A$</span> happens still has the same exponential distribution as <span class="math-container">$X$</span>. That says that the conditional distribution of <span class="math-container">$Y-X$</span> given <span class="math-container">$Y &gt; X$</span> is the distribution of <span class="math-container">$Y$</span>, and the conditional distribution of <span class="math-container">$Y-X$</span> given <span class="math-container">$Y &lt; X$</span> is the distribution of <span class="math-container">$-X$</span>.</p> <p>Let <span class="math-container">$Z=Y-X$</span>. We use a formula related to the law of total probability: <span class="math-container">$$f_Z(x) = f_{Z|Z&lt;0}(x)P(Z&lt;0) + f_{Z|Z\geq 0}(x)P(Z\geq 0)\,.$$</span></p> <p>Given that we know <span class="math-container">$P(Z&lt;0)= P(Y&lt;X) = \frac{\mu}{\mu+\lambda}$</span>, and correspondingly <span class="math-container">$P(Y&gt;X) = \frac{\lambda}{\mu+\lambda}$</span>, the above implies that the pdf for <span class="math-container">$Y-X$</span> is <span class="math-container">$$ f(x) = \frac{\lambda \mu}{\lambda+\mu} \cases{e^{\lambda x} &amp; if $x &lt; 0 $\cr e^{-\mu x} &amp; if $x \geq 0 \,.$\cr}$$</span></p>
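<p>The race-of-exponentials argument is easy to sanity-check by simulation; the rates below are arbitrary and the check is statistical, not exact:</p>

```python
import random

lam, mu = 1.3, 0.7                     # arbitrary rates for the check
random.seed(42)
N = 200_000
# expovariate(r) draws Exp(r); Z = Y - X with Y ~ Exp(mu), X ~ Exp(lam)
z = [random.expovariate(mu) - random.expovariate(lam) for _ in range(N)]

p_neg = sum(v < 0 for v in z) / N      # should be close to mu/(mu + lam)
mean = sum(z) / N                      # should be close to 1/mu - 1/lam
```

<p>Both empirical quantities match the mass split $P(Z&lt;0)=\frac{\mu}{\mu+\lambda}$ used above and the mean implied by the piecewise density.</p>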
<p>The right answer depends very much on what your mathematical background is. I will assume that you have seen some calculus of several variables, and not much beyond that, and I will work with $X$ and $Y$ as named in the question.</p> <p>The density function of $X$ is $\lambda e^{-\lambda x}$ (for $x \ge 0$), and $0$ elsewhere. There is a similar expression for the density function of $Y$. By independence, the <strong>joint</strong> density function of $X$ and $Y$ is $$\lambda\mu e^{-\lambda x}e^{-\mu y}$$ in the first quadrant, and $0$ elsewhere.</p> <p>Let $Z=Y-X$. We want to find the density function of $Z$. First we will find the cumulative distribution function $F_Z(z)$ of $Z$, that is, the probability that $Z\le z$.</p> <p>So we want the probability that $Y-X \le z$. The geometry is a little different when $z$ is positive than when $z$ is negative. I will do $z$ positive, and you can take care of negative $z$. </p> <p>Consider $z$ fixed and positive, and <strong>draw</strong> the line $y-x=z$. We want to find the probability that the ordered pair $(X,Y)$ ends up below that line or on it. The only relevant region is in the first quadrant. So let $D$ be the part of the first quadrant that lies below or on the line $y=x+z$. Then $$P(Z \le z)=\iint_D \lambda\mu e^{-\lambda x}e^{-\mu y}\,dx\,dy.$$</p> <p>We will evaluate this integral by using an iterated integral. First we will integrate with respect to $y$, and then with respect to $x$. Note that $y$ travels from $0$ to $x+z$, and then $x$ travels from $0$ to infinity. Thus $$P(Z\le z)=\int_0^\infty \lambda e^{-\lambda x}\left(\int_{y=0}^{x+z} \mu e^{-\mu y}\,dy\right)dx.$$</p> <p>The inner integral turns out to be $1-e^{-\mu(x+z)}$. So now we need to find $$\int_0^\infty \left(\lambda e^{-\lambda x}-\lambda e^{-\mu z} e^{-(\lambda+\mu)x}\right)dx.$$ The first part is easy: it is $1$. The second part is fairly routine. 
We end up with $$P(Z \le z)=1-\frac{\lambda}{\lambda+\mu}e^{-\mu z}.$$ For the density function $f_Z(z)$ of $Z$, differentiate the cumulative distribution function. We get $$f_Z(z)=\frac{\lambda\mu}{\lambda+\mu} e^{-\mu z} \quad\text{for $z \ge 0$.}$$ Please note that we only dealt with positive $z$. A very similar argument will get you $f_Z(z)$ at negative values of $z$. The main difference is that the final integration is from $x=-z$ on. </p>
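<p>The closed form can be checked against a direct numerical integration of the joint density over the region $y \le x+z$ (the parameters below are arbitrary, and the inner $y$-integral is used in its closed form $1-e^{-\mu(x+z)}$, as in the derivation):</p>

```python
from math import exp

lam, mu, z = 1.0, 2.0, 0.5     # arbitrary positive rates and a positive z

# closed-form CDF derived above: P(Z <= z) = 1 - lam/(lam+mu) * e^{-mu z}
closed = 1 - lam / (lam + mu) * exp(-mu * z)

# midpoint-rule integration of lam*e^{-lam x} * (1 - e^{-mu(x+z)}) over x >= 0
h, cutoff = 0.002, 15.0
num = sum(lam * exp(-lam * ((i + 0.5) * h))
          * (1 - exp(-mu * ((i + 0.5) * h + z))) * h
          for i in range(int(cutoff / h)))
```

<p>Truncating the outer integral at $x=15$ discards only about $e^{-15}$ of probability mass, so the numeric and closed-form values agree to several decimal places.</p>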
logic
<p>In mathematics the existence of a mathematical object is often proved by contradiction without showing how to construct the object.</p> <p>Does the existence of the object imply that it is at least possible to construct the object?</p> <p>Or are there mathematical objects that do exist but are impossible to construct? </p>
<p>Really the answer to this question will come down to the way we define the terms "existence" (and "construct"). Going philosophical for a moment, one may argue that constructibility is a priori required for existence; this, broadly speaking, is part of the impetus for <strong>intuitionism</strong> and <strong>constructivism</strong>, and related to the impetus for <strong>(ultra)finitism</strong>.$^1$ Incidentally, at least to some degree we can produce formal systems which capture this point of view <em>(although the philosophical stance should really be understood as</em> preceding <em>the formal systems which try to reflect them; I believe this was a point Brouwer and others made strenuously in the early history of intuitionism)</em>.</p> <p>A less philosophical take would be to interpret "existence" as simply "provable existence relative to some fixed theory" (say, ZFC, or ZFC + large cardinals). In this case it's clear what "exists" means, and the remaining weasel word is "construct." <strong>Computability theory</strong> can give us some results which may be relevant, depending on how we interpret this word: there are lots of objects we can define in a complicated way but which provably have no "concrete" definitions:</p> <ul> <li><p>The halting problem is not computable.</p></li> <li><p>Kleene's $\mathcal{O}$ - or, the set of indices for computable well-orderings - is not hyperarithmetic.</p></li> <li><p>A much deeper example: while we know that for all Turing degrees ${\bf a}$ there is a degree strictly between ${\bf a}$ and ${\bf a'}$ which is c.e. in $\bf a$, we can also show that there is no "uniform" way to produce such a degree in a precise sense.</p></li> </ul> <p>Going further up the ladder, ideas from <strong>inner model theory</strong> and <strong>descriptive set theory</strong> become relevant. 
For example:</p> <ul> <li><p>We can show in ZFC that there is a (Hamel) basis for $\mathbb{R}$ as a vector space over $\mathbb{Q}$; however, we can also show that no such basis is "nicely definable," in various precise senses (and we get stronger results along these lines as we add large cardinal axioms to ZFC). For example, no such basis can be Borel. </p></li> <li><p>Other examples of the same flavor: a nontrivial ultrafilter on $\mathbb{N}$; a well-ordering of $\mathbb{R}$; a Vitali (or Bernstein or Luzin) set, or indeed any non-measurable set (or set without the property of Baire, or without the perfect set property); ...</p></li> <li><p>On the other side of the aisle, the theory ZFC + a measurable cardinal proves that there is a set of natural numbers which is not "constructible" in a <a href="https://en.wikipedia.org/wiki/Constructible_universe" rel="noreferrer">precise set-theoretic sense</a> (basically, can be built just from "definable transfinite recursion" starting with the emptyset). Now the connection between $L$-flavored constructibility and the informal notion of a mathematical construction is tenuous at best, but this does in my opinion say that a measurable cardinal yields a hard-to-construct set of naturals in a precise sense.</p></li> </ul> <hr> <p>$^1$I don't actually hold these stances <a href="https://math.stackexchange.com/a/2757816/28111">except very rarely</a>, and so I'm really not the best person to comment on their motivations; please take this sentence with a grain of salt.</p>
<p>There exists a Hamel basis for the vector space $\Bbb{R}$ over $\Bbb{Q}$, but nobody has seen one so far. It is the axiom of choice which ensures existence.</p>
logic
<p>I wonder about the foundations of set theory and my question can be stated in some related forms:</p> <ul> <li><p>If we base Zermelo–Fraenkel set theory on first order logic, does that mean first order logic is not allowed to contain the notion of sets?</p></li> <li><p>The axioms of Zermelo–Fraenkel set theory seem to already expect the notion of a set to be defined. Is there a pre-definition of what we are dealing with? And where?</p></li> <li><p>In set theory, if a <em>function</em> is defined as a set using tuples, why or how does first order logic and the axioms of Zermelo–Fraenkel set theory contain parameter-dependent properties $\psi(u_1,u_2,q,\ldots)$, which are basically functions?</p></li> </ul>
<p>(1) This is actually not a problem in the form you have stated it -- the rules of what is a valid proof in first-order logic can be stated without any reference to sets, such as by speaking purely about operations on concrete strings of symbols, or by arithmetization with Gödel numbers. </p> <p>However, if you want to do <em>model theory</em> on your first-order theory you need sets. And even if you take the syntactical viewpoint and say that it is all just strings, that just pushes the fundamental problem down a level, because how can we then formalize reasoning about natural numbers (or symbol strings) if first-order logic itself "depends on" natural numbers (or symbol strings)?</p> <p>The answer to that is that this is just how it is -- the formalization of first-order logic is <em>not</em> really the ultimate basis for all of mathematics, but a <em>mathematical model</em> of mathematical reasoning itself. The model is not the thing, and mathematical reasoning is ultimately not really a formal theory, but something we do because we <strong>intuitively believe</strong> that it works.</p> <p>(2) This is a misunderstanding. In axiomatic set theory, the axioms themselves <em>are</em> the definition of the notion of a set: A set is whatever behaves like the axioms say sets behave.</p> <p>(3) What you quote is how functions usually are <em>modeled in set theory</em>. Again, the model is not the thing, and just because we can create a model of our abstract concept of functional relation in set theory, it doesn't mean that our abstract concept <em>an sich</em> is necessarily a creature of set theory. Logic has its own way of modeling functional relations, namely by writing down syntactic rules for how they must behave -- this is less expressive but sufficient for logic's needs, and is <em>no less valid</em> as a model of functional relations than the set-theoretic model is.</p>
<p>For those concerned with secure foundations for mathematics, the problem is basically this: In order to even state the axioms of ZF set theory, we need to have developed first-order logic; but in order to recognize conditions under which first-order logical formulas can be regarded as "true" or "valid", we need to give a model-theoretic semantics for first-order logic, and this in turn requires certain notions from set theory. At first glance, this seems troublingly circular.</p> <p>Obviously, this is an extremely subtle area. But as someone who has occasionally been troubled by these kinds of questions, let me offer a few personal thoughts, for whatever they may be worth. </p> <p>My own view goes something like this. Let's first distinguish carefully between logic and metalogic. </p> <p>In logic, which we regard as a pre-mathematical discipline, all we can do is give grammatical rules for constructing well-formed statements and derivation rules for constructing "proofs" (or perhaps, so as not to beg the question, we should call them "persuasive demonstrations"); these should be based purely on the grammatical forms of the various statements involved. There is, at this point, no technical or mathematical theory of how to assign meanings to these formulas; instead, we regard them merely as symbolic abbreviations for the underlying natural language utterances, with whatever informal psychological interpretations these already have attached to them. We "justify" the derivation rules informally by persuading ourselves through illustrative examples and informal semantical arguments that they embody a reasonable facsimile of some naturally occurring reasoning patterns we find in common use. What we don't do, at this level, is attempt to specify any formal semantics. So we can't yet prove or even make sense of the classical results from metalogic such as the soundness and/or completeness of a particular system of derivation rules. 
For us, "soundness" means only that each derivation rule represents a commonly accepted and intuitively legitimate pattern of linguistic inference; "completeness" means basically nothing. We think of first-order logic as merely a transcription into symbolic notation of certain relatively uncontroversial aspects of our pre-critical reasoning habits. We insist on this symbolic representation for two reasons: (i) regimentation, precisely because it severely limits expressive creativity, is a boon to precision; and (ii) without such a symbolic representation, there would be no hope of eventually formalizing anything.</p> <p>Now we take our first step into true formalization by setting down criteria (in a natural language) for what should constitute formal expression in the first place. This involves taking as primitives (i) the notion of a token (i.e., an abstract stand-in for a "symbol"), (ii) a binary relation of distinctness between tokens, (iii) the notion of a string, and (iv) a ternary relation of "comprisal" between strings, subject to certain axioms. The axioms governing these primitives would go something like this. First, each token is a string; these strings are not comprised of any other strings. By definition, two strings that are both tokens are distinct if the two tokens are distinct, and any string that is a token is distinct from any string that is not a token. Secondly, for any token t and any string S, there is a unique string U comprised of t and S (we can denote this U by 'tS'; this is merely a notation for the string U, just as 't' is merely a notation for the abstract token involved --- it should not be thought of as being the string itself). By definition, tS and t'S' are distinct strings if either t and t' are distinct tokens or S and S' are distinct strings (this is intuitively a legitimate recursive definition). Thirdly, nothing is a string except by virtue of the first two axioms. 
We can define the concatenation UV of strings U and V recursively: if U is a token t, define UV as the string tV; otherwise, U must be tS for some token t and string S, and we define UV as the string tW, where W is the (already defined) concatenation SV. This operation can be proven associative by induction. All this is part of intuitive, rather than formal, mathematics. There is simply no way to get mathematics off the ground without understanding that some portion of it is just part of the linguistic activity we routinely take part in as communicative animals. This said, it is worth the effort to formulate things so that this part of it becomes as small as possible.</p> <p>Toward that end, we can insist that any further language used in mathematics be formal (or at least capable of formal representation). This means requiring that some distinguishable symbol be set aside and agreed upon for each distinct token we propose to take for granted, and that everything asserted of the tokens can ultimately be formulated in terms of the notions of distinctness, strings, comprisal, and concatenation discussed above. Notice that there is no requirement to have a finite set of tokens, nor do we have recourse within a formal language to any notion of a set or a number at all.</p> <p>Coming back to our symbolic logic, we can now prove as a theorem of intuitive (informal) mathematics that, with the symbolic abbreviations (the connectives, quantifiers, etc.) mapped onto distinct tokens, we can formulate the grammatical and derivational rules of logic strictly in terms of strings and the associated machinery of formal language theory. This brings one foot of symbolic logic within the realm of formal mathematics, giving us a language in which to express further developments; but one foot must remain outside the formal world, connecting our symbolic language to the intuitions that gave birth to it. 
The point is that we have an excellent, though completely extra-mathematical, reason for using this particular formal language and its associated deduction system: namely, we believe it captures as accurately as possible the intuitive notions of correct reasoning that were originally only written in shorthand. This is the best we can hope to do; there must be a leap of faith at some point. </p> <p>With our technically uninterpreted symbolic/notational view of logic agreed upon (and not everyone comes along even this far, obviously -- the intuitionists, for example), we can proceed to formulate the axioms of ZF set theory in the usual way. This adds sets and membership to the primitive notions already taken for granted (tokens, strings, etc.). The motivating considerations for each new axiom, for instance the desire to avoid Russell's paradox, make sense within the intuitive understanding of first-order logic, and these considerations are at any rate never considered part of the formal development of set theory, on any approach to foundational questions.</p> <p>Once we have basic set theory, we can go back and complete our picture of formal reasoning by defining model-theoretic semantics for first-order logic (or indeed, higher-order logic) in terms of sets and set-theoretic ideas like inclusion and n-tuples. 
The purpose of this is twofold: first, to deepen our intuitive understanding of correct reasoning by embedding all of our isolated instincts about valid patterns of deduction within a single rigid structure, as well as to provide an independent check on those instincts; and secondly, to enable the development of the mathematical theory of logic, i.e., metalogic, where beautiful results about reasoning systems in their own right can be formulated and proven.</p> <p>Once we have a precise understanding of the semantics (and metatheory) of first-order logic, we can of course go back to the development of set theory and double-check that any logical deductions we used in proving theorems there were indeed genuinely valid in the formal sense (and they all turn out to be, of course). This doesn't count as a technical justification of one by means of the other, but only shows that the two together have a comforting sort of coherence. I think of their relation as somehow analogous to the way in which the two concepts of "term" and "formula" in logical grammar cannot be defined in isolation from one another, but must instead be defined by a simultaneous recursion. That's obviously just a vague analogy, but it's one I find helpful and comforting.</p> <p>I hope this made sense and was somewhat helpful. My apologies for being so wordy. Cheers. - Joseph</p>
differentiation
<p>I'm looking for a geometric interpretation of this theorem:</p> <p><img src="https://i.sstatic.net/xg9eV.png" alt="enter image description here"></p> <p>My book doesn't give any kind of explanation of it. Again, I'm <em>not</em> looking for a proof - I'm looking for a geometric interpretation.</p> <p>Thanks. </p>
<p>Inspired by Ted Shifrin's comment, here's an attempt at an intuitive viewpoint. I'm not sure how much this counts as a "geometric interpretation".</p> <p>Consider a tiny square $ABCD$ of side length $h$, with $AB$ along the $x$-axis and $AD$ along the $y$-axis.</p> <pre><code>D---C
|   |  h
A---B
  h
</code></pre> <p>Then $f_x(A)$ is approximately $\frac1h\big(f(B)-f(A)\big)$, and $f_x(D)$ is approximately $\frac1h\big(f(C)-f(D)\big)$. So, assuming by $f_{xy}$ we mean $\frac\partial{\partial y}\frac\partial{\partial x}f$, we have $$f_{xy}\approx\frac1h\big(f_x(D)-f_x(A)\big)\approx\frac1{h^2}\Big(\big(f(C)-f(D)\big)-\big(f(B)-f(A)\big)\Big).$$ Similarly, $$f_{yx}\approx\frac1{h^2}\Big(\big(f(C)-f(B)\big)-\big(f(D)-f(A)\big)\Big).$$ But those two things are the same: they both correspond to the "stencil" $$\frac1{h^2}\begin{bmatrix} -1 &amp; +1\\ +1 &amp; -1 \end{bmatrix}.$$</p>
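<p>A quick numerical check of the stencil argument (the function $f(x,y)=x^3y^2$ is a sample choice, not from the answer): both difference orderings rearrange to the same four corner values, so they agree, and both tend to $f_{xy}=6x^2y$ as $h\to 0$.</p>

```python
# Compare the two difference orderings on the corners of a tiny square
# of side h; f(x, y) = x**3 * y**2 is a sample choice with f_xy = 6*x**2*y.
def f(x, y):
    return x**3 * y**2

def mixed_xy(f, x, y, h=1e-4):
    # difference in x first (f_x at A and at D), then in y
    return ((f(x + h, y + h) - f(x, y + h)) - (f(x + h, y) - f(x, y))) / h**2

def mixed_yx(f, x, y, h=1e-4):
    # difference in y first (f_y at A and at B), then in x
    return ((f(x + h, y + h) - f(x + h, y)) - (f(x, y + h) - f(x, y))) / h**2

print(mixed_xy(f, 1.0, 2.0))  # ≈ 12 = 6 * 1**2 * 2
print(mixed_yx(f, 1.0, 2.0))  # same four-term stencil, so essentially identical
```

<p>Note that the two returns are literally rearrangements of the same four terms, which is the answer's point: the stencil does not care in which order the differences are taken.</p>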
<p>One thing to think about is that the derivative in one dimension describes tangent lines. That is, $f'$ is the function such that the following line, parametrized in $x$, is tangent to $f$ at $x_0$: $$y(x)=f(x_0)+f'(x_0)(x-x_0).$$ We could, in fact, go further, and <em>define</em> the derivative to be the linear function which most closely approximates $f$ near $x_0$. If we want to be really formal about it, we could say a function $y_0$ is a better approximation to $f$ near $x$ than $y_1$ if there exists some open interval around $x$ such that $|y_0(a)-f(a)|\leq |y_1(a)-f(a)|$ for every $a$ in the interval. The limit definition of the derivative ensures that the closest function under this definition is the $y(x)$ given above and it can be shown that my definition is equivalent to the limit definition.</p> <p>I only go through that formality so that we could define the second derivative to be the value such that $$y(x)=f(x_0)+f'(x_0)(x-x_0)+\frac{1}2f''(x_0)(x-x_0)^2$$ is the parabola which best approximates $f$ near $x_0$. This can be checked rigorously, if one desires.</p> <p>However, though the idea of the first derivative has an obvious extension to higher dimensions - i.e. what plane best approximates $f$ near $x_0$, it is not as obvious what a second derivative is supposed to represent. Clearly, it should somehow represent a quadratic function, except in two dimensions. The most sensible way I can think to define a quadratic function in higher dimension is to say that $z(x,y)$ is "quadratic" only when, for any $\alpha$, $\beta$, $x_0$ and $y_0$, the function of one variable $$t\mapsto z(\alpha t+x_0,\beta t+y_0)$$ is quadratic; that is, if we traverse $z$ across any line, it looks like a quadratic. The nice thing about this approach is that it can be done in a coordinate-free way. Essentially, we are talking about the best paraboloid or hyperbolic paraboloid approximation to $f$ as being the second-derivative. 
It is simple enough to show that any such function must be a linear combination of $1$, $x$, $y$, $x^2$, $y^2$, and importantly, $xy$. We need the coefficient of $xy$ in order to ensure that functions like $$z(x,y)=(x+y)^2=x^2+y^2+2xy$$ can be represented, as such functions should clearly be included in our new definition of quadratic, but can't be written just as a sum of $x^2$ and $y^2$ and lower order terms.</p> <p>However, we don't typically define the derivative to be a function, and here we have done just that. This isn't a problem in one dimension, because there's only the coefficient of $x^2$ to worry about, but in two dimensions, we have coefficients of three things - $x^2$, $xy$, and $y^2$. Happily, though, we have values $f_{xx}$, $f_{xy}$ and $f_{yy}$ to deal with the fact that these three terms exist. So, we can define a whole ton of derivatives when we say that the best approximating quadratic function must be the map $$z(x,y)=f(x_0,y_0)+f_x(x_0,y_0)(x-x_0)+f_y(x_0,y_0)(y-y_0)+\frac{1}2f_{xx}(x_0,y_0)(x-x_0)^2+f_{xy}(x_0,y_0)(x-x_0)(y-y_0)+\frac{1}2f_{yy}(x_0,y_0)(y-y_0)^2.$$ There are two things to note here:</p> <p>Firstly, that this is a well-defined notion <em>regardless of whether we name the coefficients or arguments</em>. The set of quadratic functions of two variables is well defined, regardless of how it can be written in a given form. Intuitively, this means that, given just the graph of the function, we can draw the surface based on local geometric properties of the graph of $f$. 
The existence of the surface is implied by the requirement that $f_{xy}$ and $f_{yx}$ be continuous.</p> <p>Secondly, that there are multiple ways to express the same function; we would expect that it does not matter if we use the term $f_{xy}(x_0,y_0)(x-x_0)(y-y_0)$ or $f_{yx}(x_0,y_0)(y-y_0)(x-x_0)$ because they should both describe the same feature of the surface - abstractly, they both give what the coefficient of $(x-x_0)(y-y_0)$ is for the given surface, and since the surface is well-defined without reference to derivatives, there is a definitive answer to what the coefficient of $(x-x_0)(y-y_0)$ is - and if both $f_{xy}$ and $f_{yx}$ are to answer it, they'd better be equal. (In particular, notice that $z(x,y)=xy$ is a hyperbolic paraboloid, which is zero on the $x$ and $y$ axes; the coefficient of $(x-x_0)(y-y_0)$ can be thought of, roughly, as a measure of how much the function "twists" about those axes, representing a change that affects neither axis, but does affect other points.)</p>
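<p>The "best approximating quadratic" view can be tested directly (numpy assumed; the function $e^x\sin y$ is a sample choice): fit a quadratic surface to sampled values of $f$ near a point by least squares, then read off the coefficient of $(x-x_0)(y-y_0)$. It matches the analytic $f_{xy}=f_{yx}$, with no derivative order ever specified.</p>

```python
import numpy as np

# Least-squares fit of z = c0 + c1*u + c2*v + c3*u^2 + c4*u*v + c5*v^2
# to f near (x0, y0), where u = x - x0, v = y - y0.  The recovered u*v
# coefficient c4 should equal f_xy(x0, y0) = f_yx(x0, y0).
def f(x, y):
    return np.exp(x) * np.sin(y)      # sample function; f_xy = exp(x)*cos(y)

x0, y0, h = 0.3, 0.7, 1e-3
u = np.linspace(-h, h, 9)
U, V = np.meshgrid(u, u)
A = np.column_stack([np.ones(U.size), U.ravel(), V.ravel(),
                     U.ravel()**2, U.ravel() * V.ravel(), V.ravel()**2])
z = f(x0 + U, y0 + V).ravel()
coef, *_ = np.linalg.lstsq(A, z, rcond=None)

print(coef[4])                        # ≈ f_xy(x0, y0)
print(np.exp(x0) * np.cos(y0))        # analytic value for comparison
```
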
linear-algebra
<p>Let <span class="math-container">$V$</span> be a vector space of finite dimension and let <span class="math-container">$T,S$</span> be diagonalizable linear transformations from <span class="math-container">$V$</span> to itself. I need to prove that if <span class="math-container">$TS=ST$</span> then every eigenspace <span class="math-container">$V_\lambda$</span> of <span class="math-container">$S$</span> is <span class="math-container">$T$</span>-invariant and the restriction of <span class="math-container">$T$</span> to <span class="math-container">$V_\lambda$</span> (<span class="math-container">$T:{V_{\lambda }}\rightarrow V_{\lambda }$</span>) is diagonalizable. In addition, I need to show that there's a basis <span class="math-container">$B$</span> of <span class="math-container">$V$</span> such that <span class="math-container">$[S]_{B}^{B}$</span>, <span class="math-container">$[T]_{B}^{B}$</span> are diagonal if and only if <span class="math-container">$TS=ST$</span>.</p> <p>Ok, so first let <span class="math-container">$v\in V_\lambda$</span>. From <span class="math-container">$TS=ST$</span> we get that <span class="math-container">$\lambda T(v)= S(T(v))$</span>, so <span class="math-container">$T(v)\in V_\lambda$</span> (it is either zero or an eigenvector of <span class="math-container">$S$</span> with eigenvalue <span class="math-container">$\lambda$</span>) and we get what we want. I want to use that in order to get the following claim, I just don't know how. One direction of the "iff" is obvious, the other one is more tricky to me.</p>
<p>This answer is basically the same as Paul Garrett's. --- First I'll state the question as follows.</p> <p>Let <span class="math-container">$V$</span> be a finite dimensional vector space over a field <span class="math-container">$K$</span>, and let <span class="math-container">$S$</span> and <span class="math-container">$T$</span> be diagonalizable endomorphisms of <span class="math-container">$V$</span>. We say that <span class="math-container">$S$</span> and <span class="math-container">$T$</span> are <strong>simultaneously</strong> diagonalizable if (and only if) there is a basis of <span class="math-container">$V$</span> which diagonalizes both. The theorem is</p> <blockquote> <p><span class="math-container">$S$</span> and <span class="math-container">$T$</span> are simultaneously diagonalizable if and only if they commute.</p> </blockquote> <p>If <span class="math-container">$S$</span> and <span class="math-container">$T$</span> are simultaneously diagonalizable, they clearly commute. For the converse, I'll just refer to Theorem 5.1 of <a href="https://kconrad.math.uconn.edu/blurbs/linmultialg/minpolyandappns.pdf" rel="nofollow noreferrer">The minimal polynomial and some applications</a> by Keith Conrad. [Harvey Peng pointed out in a comment that the link to Keith Conrad's text was broken. I hope the link will be restored, but in the meantime here is a link to the <a href="https://web.archive.org/web/20240410181748/https://kconrad.math.uconn.edu/blurbs/linmultialg/minpolyandappns.pdf" rel="nofollow noreferrer">Wayback Machine version</a>. Edit: original link just updated.]</p> <p><strong>EDIT.</strong> The key statement to prove the above theorem is Theorem 4.11 of Keith Conrad's text, which says:</p> <blockquote> <p>Let <span class="math-container">$A: V \to V$</span> be a linear operator. 
Then <span class="math-container">$A$</span> is diagonalizable if and only if its minimal polynomial in <span class="math-container">$F[T]$</span> splits in <span class="math-container">$F[T]$</span> and has distinct roots.</p> </blockquote> <p>[<span class="math-container">$F$</span> is the ground field, <span class="math-container">$T$</span> is an indeterminate, and <span class="math-container">$V$</span> is finite dimensional.]</p> <p>The key point to prove Theorem 4.11 is to check the equality <span class="math-container">$$V=E_{\lambda_1}+···+E_{\lambda_r},$$</span> where the <span class="math-container">$\lambda_i$</span> are the distinct eigenvalues and the <span class="math-container">$E_{\lambda_i}$</span> are the corresponding eigenspaces. One can prove this by using Lagrange's interpolation formula: put <span class="math-container">$$f:=\sum_{i=1}^r\ \prod_{j\not=i}\ \frac{T-\lambda_j}{\lambda_i-\lambda_j}\ \in F[T]$$</span> and observe that <span class="math-container">$f(A)$</span> is the identity of <span class="math-container">$V$</span>.</p>
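<p>A small numerical illustration of the theorem (numpy assumed; the matrices are a sample construction, not from the texts cited): build two commuting diagonalizable matrices by conjugating two diagonal matrices with the same invertible <span class="math-container">$P$</span>, then check that an eigenbasis of the one with distinct eigenvalues automatically diagonalizes the other.</p>

```python
import numpy as np

# Two commuting diagonalizable matrices sharing the eigenbasis given by P.
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 3))
A = P @ np.diag([1.0, 2.0, 3.0]) @ np.linalg.inv(P)   # distinct eigenvalues
B = P @ np.diag([5.0, 5.0, 7.0]) @ np.linalg.inv(P)   # repeated eigenvalue

assert np.allclose(A @ B, B @ A)                      # they commute

# A has distinct eigenvalues, so its eigenspaces are one-dimensional and
# B-stable; hence the eigenvector matrix Q of A must diagonalize B too.
w, Q = np.linalg.eig(A)
B_diag = np.linalg.inv(Q) @ B @ Q
assert np.allclose(B_diag, np.diag(np.diag(B_diag)), atol=1e-6)
print(np.round(np.diag(B_diag).real, 6))              # a permutation of 5, 5, 7
```

<p>Note the asymmetry: an eigenbasis of <span class="math-container">$B$</span> (repeated eigenvalue) would <em>not</em> automatically diagonalize <span class="math-container">$A$</span>, which is why the general proof works eigenspace by eigenspace.</p>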
<p>You've proven (from <span class="math-container">$ST=TS$</span>) that the <span class="math-container">$\lambda$</span>-eigenspace <span class="math-container">$V_\lambda$</span> of <span class="math-container">$T$</span> is <span class="math-container">$S$</span>-stable. The diagonalizability of <span class="math-container">$S$</span> on the whole space is equivalent to its minimal polynomial having no repeated factors. Its minimal poly on <span class="math-container">$V_\lambda$</span> divides that on the whole space, so is still repeated-factor-free, so <span class="math-container">$S$</span> is diagonalizable on that subspace. This gives an induction to prove the existence of a simultaneous basis of eigenvectors. Note that it need <em>not</em> be the case that every eigenvector of <span class="math-container">$T$</span> is an eigenvector of <span class="math-container">$S$</span>, because eigenspaces can be greater-than-one-dimensional.</p> <p><strong>Edit</strong>: Thanks Arturo M. Yes, over a not-necessarily algebraically closed field, one must say that "diagonalizable" is equivalent to having no repeated factor <em>and</em> splits into linear factors.</p> <p><strong>Edit 2</strong>: <span class="math-container">$V_\lambda$</span> being "S-stable" means that <span class="math-container">$SV_\lambda\subset V_\lambda$</span>, that is, <span class="math-container">$Sv\in V_\lambda$</span> for all <span class="math-container">$v\in V_\lambda$</span>. </p>
linear-algebra
<p>Quick question: Can I define some inner product on any arbitrary vector space such that it becomes an inner product space? If yes, how can I prove this? If no, what would be a counter example? Thanks a lot in advance.</p>
<p>I'm assuming the ground field is ${\mathbb R}$ or ${\mathbb C}$, because otherwise it's not clear what an "inner product space" is.</p> <p>Now any vector space $X$ over ${\mathbb R}$ or ${\mathbb C}$ has a so-called <em>Hamel basis</em>. This is a family $(e_\iota)_{\iota\in I}$ of vectors $e_\iota\in X$ such that any $x\in X$ can be written uniquely in the form $x=\sum_{\iota\in I} \xi_\iota\ e_\iota$, where only finitely many $\xi_\iota$ are $\ne 0$. Unfortunately you need the axiom of choice to obtain such a basis, if $X$ is not finitely generated.</p> <p>Defining $\langle x, y\rangle :=\sum_{\iota\in I} \xi_\iota\ \bar\eta_\iota$, where $y=\sum_{\iota\in I} \eta_\iota\ e_\iota$, gives a "scalar product" on $X$ (bilinear over ${\mathbb R}$, sesquilinear over ${\mathbb C}$) such that $\langle x, x\rangle&gt;0$ for any $x\ne0$. Note that in computing $\langle x,y\rangle$ no question of convergence arises. </p> <p>It follows that $\langle\ ,\ \rangle$ is an inner product on $X$, and adopting the norm $\|x\|^2:=\langle x,x\rangle$ turns $X$ into a metric space in the usual way.</p>
<p>How about vector spaces over <a href="http://en.wikipedia.org/wiki/Finite_field">finite fields</a>? Finite fields don't have an ordered subfield, and thus one cannot meaningfully define a positive-definite inner product on vector spaces over them.</p>
logic
<p><a href="http://en.wikipedia.org/wiki/G%C3%B6del%27s_ontological_proof" rel="noreferrer">Gödel's ontological proof</a> is a formal argument for God's existence by the mathematician Kurt Gödel.</p> <p>Can someone please explain what are the symbols in the proof and elaborate about its flow:</p> <p><span class="math-container">$$ \begin{array}{rl} \text{Ax. 1.} &amp; \left\{P(\varphi) \wedge \Box \; \forall x[\varphi(x) \to \psi(x)]\right\} \to P(\psi) \\ \text{Ax. 2.} &amp; P(\neg \varphi) \leftrightarrow \neg P(\varphi) \\ \text{Th. 1.} &amp; P(\varphi) \to \Diamond \; \exists x[\varphi(x)] \\ \text{Df. 1.} &amp; G(x) \iff \forall \varphi [P(\varphi) \to \varphi(x)] \\ \text{Ax. 3.} &amp; P(G) \\ \text{Th. 2.} &amp; \Diamond \; \exists x \; G(x) \\ \text{Df. 2.} &amp; \varphi \text{ ess } x \iff \varphi(x) \wedge \forall \psi \left\{\psi(x) \to \Box \; \forall y[\varphi(y) \to \psi(y)]\right\} \\ \text{Ax. 4.} &amp; P(\varphi) \to \Box \; P(\varphi) \\ \text{Th. 3.} &amp; G(x) \to G \text{ ess } x \\ \text{Df. 3.} &amp; E(x) \iff \forall \varphi[\varphi \text{ ess } x \to \Box \; \exists y \; \varphi(y)] \\ \text{Ax. 5.} &amp; P(E) \\ \text{Th. 4.} &amp; \Box \; \exists x \; G(x) \end{array} $$</span></p> <p><strong>Does it prove both existence and uniqueness?</strong></p> <p>Edit: these are <a href="http://plato.stanford.edu/entries/logic-modal/" rel="noreferrer">modal logic</a> symbols.</p>
<p>The modal operator $\square$ refers to necessity; its dual, $\lozenge$, refers to possibility. (A sentence is necessarily true iff it isn't possible for it to be false, and vice versa.) $P(\varphi)$ means that $\varphi$ is a positive (in the sense of "good") property; I'll just transcribe it as "$\varphi$ is good". I'll write out the argument colloquially, with the loss of precision that implies. In particular, the words "possible" and "necessary" are vague, and you need to understand modal logic somewhat to follow their precise usage in this argument.</p> <ul> <li>Axiom $1$: If $\varphi$ is good, and $\varphi$ forces $\psi$ (that is, it's necessarily true that anything with property $\varphi$ has property $\psi$), then $\psi$ is also good.</li> <li>Axiom $2$: For every property $\varphi$, exactly one of $\varphi$ and $\neg\varphi$ is good. (If $\neg\varphi$ is good, we may as well say that $\varphi$ is bad.)</li> <li>Theorem $1$ (Good Things Happen): If $\varphi$ is good, then it's possible that something exists with property $\varphi$.</li> </ul> <p>Proof of Theorem $1$: Suppose $\varphi$ were good, but necessarily nothing had property $\varphi$. Then property $\varphi$ would, vacuously, force every other property; in particular $\varphi$ would force $\neg\varphi$. 
By Axiom $1$, this would mean that $\neg\varphi$ was also good; but this would then contradict Axiom $2$.</p> <ul> <li>Definition $1$: We call a thing <em>godlike</em> when it has every good property.</li> <li>Axiom $3$: Being godlike is good.</li> <li>Theorem $2$ (No Atheism): It's possible that something godlike exists.</li> </ul> <p>Proof of Theorem $2$: This follows directly from Theorem $1$ applied to Axiom $3$.</p> <ul> <li>Definition $2$: We call property $\varphi$ the <em>essence</em> of a thing $x$ when (1) $x$ has property $\varphi$, and (2) property $\varphi$ forces every property of $x$.</li> <li>Axiom $4$: If $\varphi$ is good, then $\varphi$ is necessarily good.</li> <li>Theorem $3$ (God Has No Hair): If a thing is godlike, then being godlike is its essence.</li> </ul> <p>Proof of Theorem $3$: First note that if $x$ is godlike, it has all good properties (by definition) and no bad properties (by Axiom $2$). So any property that a godlike thing has is good, and is therefore necessarily good (by Axiom $4$), and is therefore necessarily possessed by anything godlike.</p> <ul> <li>Definition $3$: We call a thing <em>indispensable</em> when something with its essence (if it has an essence) must exist.</li> <li>Axiom $5$: Being indispensable is good.</li> <li>Theorem $4$ (Yes, Virginia): Something godlike necessarily exists.</li> </ul> <p>Proof of Theorem $4$: If something is godlike, it has every good property by definition. In particular, it's indispensable, since that's a good property (by Axiom $5$); so by definition something with its essence, which is just "being godlike" (by Theorem $3$), must exist. In other words, if something godlike exists, then it's necessary for something godlike to exist. But by Theorem $2$, it's possible that something godlike exists; so it's possible that it's necessary for something godlike to exist; and so it is, in fact, necessary for something godlike to exist. QED.</p> <p>Convinced?</p>
<p>The box is a modal operator (it is necessarily true that ...), with the diamond its dual (it is possibly true that ...). '$P(\varphi)$' holds when the property expressed by $\varphi$ is 'positive' (maybe better, is a perfection). Other novelties are defined. $G(x)$ says $x$ has all perfections (so is God). $\varphi$ ess $x$ says the property $\varphi$ is the essence of $x$. $E$ is the property of necessary existence (existence in virtue of your essence). [Don't blame me, I'm just reporting ....]</p> <p>You'll find out just a little more about Gödel's very strange apparent aberration here: <a href="http://plato.stanford.edu/entries/ontological-arguments/#GodOntArg" rel="nofollow noreferrer">http://plato.stanford.edu/entries/ontological-arguments/#GodOntArg</a> </p> <p>Robert Adams's introduction to Gödel's original note in <em>Kurt Gödel: Collected Works. Vol III, Unpublished Essays and Lectures</em> is well worth reading. </p> <p>Petr Hajek has an amazingly patient exploration of the current state of play in investigations of Gödel's Ontological Proof and its variants in Matthias Baaz et al. (eds) <em>Kurt Gödel and the Foundations of Mathematics</em> (CUP 2011). But I can't say that this changed my impression that this is little more than a curious side note in logic.</p>
matrices
<p>This is more a conceptual question than any other kind. As far as I know, one can define matrices over arbitrary fields, and so do linear algebra in different settings than in the typical freshman-year course. </p> <p>Now, how does the concept of eigenvalues translate when doing so? Of course, a matrix need not have any eigenvalues in a given field, that I know. But do the eigenvalues need to be numbers?</p> <p>There are examples of fields such as that of the rational functions. If we have a matrix over that field, can we have rational functions as eigenvalues?</p>
<p>Of course. The definition of an eigenvalue does not require that the field in question is that of the real or complex numbers. In fact, it doesn't even need to be a matrix. All you need is a vector space $V$ over a field $F$, and a linear mapping $$L: V\to V.$$</p> <p>Then, $\lambda\in F$ is an eigenvalue of $L$ if and only if there exists a nonzero element $v\in V$ such that $L(v)=\lambda v$.</p>
<p>Eigenvalues need to be elements of the field. The most common examples of fields contain objects that we usually call numbers, but this is not part of the definition of an eigenvalue. As a counterexample, consider the field $\mathbb R(x)$ of rational expressions in a variable $x$ with real coefficients. The $2\times 2$ matrix over that field</p> <p>$$\left(\begin{matrix} x &amp; 0 \\ 0 &amp; \frac1x \end{matrix}\right)$$</p> <p>has eigenvalues $x$ and $1/x$: not unknown numbers, but known elements of the field $\mathbb R(x).$</p>
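<p>Computer algebra makes the point concrete (this uses sympy, which is my choice of tool, not part of the answer): asking for the eigenvalues of the matrix above over $\mathbb R(x)$ returns elements of the field, not numbers.</p>

```python
import sympy as sp

# Eigenvalues over the field of rational functions R(x): the results
# are field elements such as x and 1/x, not numerical values.
x = sp.symbols('x')
M = sp.Matrix([[x, 0], [0, 1/x]])
print(M.eigenvals())   # eigenvalues x and 1/x, each with multiplicity 1
```
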
linear-algebra
<p>A method called "Robust PCA" solves the matrix decomposition problem</p> <p>$$L^*, S^* = \arg \min_{L, S} \|L\|_* + \|S\|_1 \quad \text{s.t. } L + S = X$$</p> <p>as a surrogate for the actual problem</p> <p>$$L^*, S^* = \arg \min_{L, S} rank(L) + \|S\|_0 \quad \text{s.t. } L + S = X,$$ i.e. the actual goal is to decompose the data matrix $X$ into a low-rank signal matrix $L$ and a sparse noise matrix $S$. In this context: <strong>why is the nuclear norm a good approximation for the rank of a matrix?</strong> I can think of matrices with low nuclear norm but high rank and vice-versa. Is there any intuition one can appeal to?</p>
<p>Why does <a href="http://en.wikipedia.org/wiki/Compressed_sensing">compressed sensing</a> work? Because the $\ell_1$ ball in high dimensions is extremely "pointy" -- the extreme values of a linear function on this ball are very likely to be attained on the faces of low dimensions, those that consist of sparse vectors. When applied to matrices, sparseness of the vector of singular values means low rank, as @mrig wrote before me. </p>
<p>To be accurate, it has been shown that the $\ell_1$ norm is the <em>convex envelope</em> of the $\| \cdot \|_0$ pseudo-norm while the nuclear norm is the <em>convex envelope</em> of the rank.</p> <p>As a reminder, the convex envelope is the tightest convex surrogate of a function. An important property is that a function and its convex envelope have the same <strong>global minimizer</strong>.</p>
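<p>A small numpy sketch of the connection: the nuclear norm is the $\ell_1$ norm of the singular-value vector, just as the rank is its $\ell_0$ "norm", and on the spectral-norm unit ball (where the envelope is taken) the nuclear norm lower-bounds the rank, as a convex envelope must.</p>

```python
import numpy as np

# Nuclear norm = l1 norm of the singular values; rank = their l0 "norm".
def nuclear(X):
    return np.linalg.svd(X, compute_uv=False).sum()

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 8))  # rank 3

s = np.linalg.svd(X, compute_uv=False)
rank = int(np.sum(s > 1e-10))
print(rank, nuclear(X))        # rank 3; nuclear norm = sum of 3 nonzero s.v.s

Xn = X / s[0]                  # rescale so the spectral norm is 1
print(nuclear(Xn) <= rank)     # True: on the unit ball, the envelope sits below the rank
```
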
number-theory
<p>Can you check if my proof is right?</p> <p>Theorem. $\forall x\geq8, x$ can be represented by $5a + 3b$ where $a,b \in \mathbb{N}$.</p> <p>Base case(s): $x=8 = 3\cdot1 + 5\cdot1 \quad \checkmark\\ x=9 = 3\cdot3 + 5\cdot0 \quad \checkmark\\ x=10 = 3\cdot0 + 5\cdot2 \quad \checkmark$</p> <p>Inductive step:</p> <p>$n \in \mathbb{N}\\a_1 = 8, a_n = a_1 + (x-1)\cdot3\\ b_1 = 9, b_n = b_1 + (x-1)\cdot3 = a_1 +1 + (x-1) \cdot 3\\ c_1 = 10, c_n = c_1 + (x-1)\cdot3 = b_1 + 1 + (x-1) \cdot 3\\ \\ S = \{x\in\mathbb{N}: x \in a_{x} \lor x \in b_{x} \lor x \in c_{x}\}$</p> <p>Basis stays true, because $8,9,10 \in S$</p> <p>Let's assume that $x \in S$. That means $x \in a_{n} \lor x \in b_{n} \lor x \in c_{n}$.</p> <p>If $x \in a_n$ then $x+1 \in b_x$,</p> <p>If $x \in b_x$ then $x+1 \in c_x$,</p> <p>If $x \in c_x$ then $x+1 \in a_x$.</p> <p>I can't prove that but it's obvious. What do you think about this?</p>
<p><strong>Proof by induction.</strong><br> For the base case $n=8$ we have $8=5+3$. <br> Suppose that the statement holds for $k$ where $k\geq 8$. We show that it holds for $k+1$.</p> <p>There are two cases.</p> <p>1) $k$ has a $5$ as a summand in its representation.</p> <p>2) $k$ has no $5$ as a summand in its representation.</p> <p><strong>For case 1</strong>, we delete "that $5$" in the sum representation of $k$ and replace it by two "$3$"s! This proves the statement for $k+1$.</p> <p><strong>For case 2</strong>, since $k\geq 8$ and the representation uses only $3$s, $k$ is a multiple of $3$ with $k\geq 9$, so $k$ has at least three "$3$"s in its sum representation. We remove these three $3$'s and replace them by two fives! We obtain a sum representation for $k+1$. This completes the proof.</p>
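<p>The induction is constructive, so it can be run as a small program (a sketch, not part of the answer): start from $8=5+3$ and repeatedly apply whichever of the two replacement cases applies.</p>

```python
# Constructive version of the induction: trade a 5 for two 3s (case 1),
# or, when no 5 is present, three 3s for two 5s (case 2).
def rep_induction(n):
    assert n >= 8
    fives, threes = 1, 1                      # 8 = 5*1 + 3*1
    for _ in range(8, n):                     # one induction step per unit
        if fives >= 1:                        # case 1: 5 -> 3 + 3
            fives, threes = fives - 1, threes + 2
        else:                                 # case 2: 3+3+3 -> 5 + 5
            fives, threes = fives + 2, threes - 3
    return fives, threes

for n in range(8, 30):
    a, b = rep_induction(n)
    assert 5 * a + 3 * b == n and a >= 0 and b >= 0

print(rep_induction(11))   # (1, 2): 11 = 5 + 3*2
```

<p>When no $5$ is present the count of $3$s is at least three (since $3\cdot\text{threes}\geq 8$ forces $\text{threes}\geq 3$), so case 2 never goes negative.</p>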
<p>I would avoid induction and use the elementary Euclidean division algorithm (Eda).</p> <p>Let $n\geq8$ be an integer. Then, by the Eda, there exist integers $q$ and $r$ such that $r\in\{0,1,2\}$ and $$n=3q+r.$$</p> <ul> <li><p>If $r=0$, we are done since $n=3q$.</p></li> <li><p>If $r=1$, then $q\geq3$ (because $n\geq8$). Hence $n=3(q-3)+10=3(q-3)+5\cdot2$.</p></li> <li><p>If $r=2$, then $q\geq2$ (because $n\geq8$). Hence $n=3(q-1)+5$.</p></li> </ul>
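<p>The case analysis translates directly into a short function (my sketch of the answer's construction, returning the counts $a$ of $5$s and $b$ of $3$s):</p>

```python
# Direct construction from the division algorithm: n = 3q + r, r in {0, 1, 2}.
def rep_division(n):
    assert n >= 8
    q, r = divmod(n, 3)
    if r == 0:
        return 0, q            # n = 3q
    if r == 1:
        return 2, q - 3        # n = 3(q-3) + 10, valid since q >= 3
    return 1, q - 1            # n = 3(q-1) + 5,  valid since q >= 2

for n in range(8, 200):
    a, b = rep_division(n)     # n = 5a + 3b
    assert 5 * a + 3 * b == n and a >= 0 and b >= 0

print(rep_division(8), rep_division(97))   # (1, 1) and (2, 29)
```
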
linear-algebra
<p>I'm trying to intuitively understand the difference between SVD and eigendecomposition.</p> <p>From my understanding, eigendecomposition seeks to describe a linear transformation as a sequence of three basic operations (<span class="math-container">$P^{-1}DP$</span>) on a vector:</p> <ol> <li>Rotation of the coordinate system (change of basis): <span class="math-container">$P$</span></li> <li>Independent scaling along each basis vector (of the rotated system): <span class="math-container">$D$</span></li> <li>De-rotation of the coordinate system (undo change of basis): <span class="math-container">$P^{-1}$</span></li> </ol> <p>But as far as I can see, SVD's goal is to do exactly the same thing, except that resulting decomposition is somehow different.</p> <p>What, then, is the <strong>conceptual</strong> difference between the two?</p> <p>For example:</p> <ul> <li>Is one of them more general than the other?</li> <li>Is either a special case of the other?</li> </ul> <p><strong>Note:</strong> I'm specifically looking for an <strong>intuitive</strong> explanation, <em><strong>not</strong></em> a mathematical one.<br /> Wikipedia is already excellent at explaining the <em>mathematical</em> relationship between the two decompositions (<a href="http://en.wikipedia.org/wiki/Singular_value_decomposition" rel="noreferrer"><em>&quot;The right-singular vectors of M are eigenvectors of <span class="math-container">$M^*M$</span>&quot;</em></a>, for example), but it completely fails to give me any intuitive understanding of what is going on intuitively.</p> <p>The best explanation I've found so far is <a href="http://www.ams.org/samplings/feature-column/fcarc-svd" rel="noreferrer">this one</a>, which is great, except it doesn't talk about eigendecompositions at all, which leaves me confused as to how SVD is any different from eigendecomposition in its goal.</p>
<p>Consider the eigendecomposition $A=P D P^{-1}$ and SVD $A=U \Sigma V^*$. Some key differences are as follows.</p> <ul> <li>The vectors in the eigendecomposition matrix $P$ are not necessarily orthogonal, so the change of basis isn't a simple rotation. On the other hand, the vectors in the matrices $U$ and $V$ in the SVD are orthonormal, so they do represent rotations (and possibly flips).</li> <li>In the SVD, the nondiagonal matrices $U$ and $V$ are not necessarily the inverse of one another. They are usually not related to each other at all. In the eigendecomposition the nondiagonal matrices $P$ and $P^{-1}$ are inverses of each other.</li> <li>In the SVD the entries in the diagonal matrix $\Sigma$ are all real and nonnegative. In the eigendecomposition, the entries of $D$ can be any complex number - negative, positive, imaginary, whatever.</li> <li>The SVD always exists for any sort of rectangular or square matrix, whereas the eigendecomposition can only exist for square matrices, and even among square matrices it sometimes doesn't exist.</li> </ul>
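These differences are easy to see numerically. A small sketch with NumPy, using the $90°$ rotation matrix because its eigenvalues are complex while its singular values are not:

```python
import numpy as np

# A real square matrix whose eigendecomposition is complex: a 90-degree rotation.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigvals, P = np.linalg.eig(A)
U, s, Vt = np.linalg.svd(A)

# Eigenvalues of a rotation are complex (+i and -i) ...
assert np.iscomplexobj(eigvals)
# ... but the singular values are real and nonnegative.
assert np.all(s >= 0) and not np.iscomplexobj(s)

# U and V are orthogonal (rotations/flips), unlike P in general.
assert np.allclose(U @ U.T, np.eye(2))
assert np.allclose(Vt @ Vt.T, np.eye(2))

# The SVD still reconstructs A exactly.
assert np.allclose(U @ np.diag(s) @ Vt, A)
```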
<p>I encourage you to see an <span class="math-container">$(m \times n)$</span> real-valued matrix <span class="math-container">$A$</span> as a bilinear operator between two spaces; intuitively, one space lies to the left (<span class="math-container">$R^m$</span>) and the other (<span class="math-container">$R^n$</span>) to the right of <span class="math-container">$A$</span>. "Bilinear" simply means that <span class="math-container">$A$</span> is linear in both directions (left to right or right to left). The operations <span class="math-container">$A$</span> can perform are limited to scaling, rotation, and reflection, and combinations of these; any other kind of operation is non-linear.</p> <p><span class="math-container">$A$</span> transforms vectors between the two spaces via multiplication:</p> <p><span class="math-container">$x^T A = y^T$</span> transforms left vector <span class="math-container">$x$</span> to right vector <span class="math-container">$y$</span>.</p> <p><span class="math-container">$x = A y$</span> transforms right vector <span class="math-container">$y$</span> to left vector <span class="math-container">$x$</span>.</p> <p>The point of decompositions of <span class="math-container">$A$</span> is to identify, or highlight, aspects of the action of <span class="math-container">$A$</span> as an operator. 
The <strong>eigendecomposition</strong> of <span class="math-container">$A$</span> clarifies what <span class="math-container">$A$</span> does by finding the eigenvalues and eigenvectors that satisfy the constraint</p> <p><span class="math-container">$A x = \lambda x$</span>.</p> <p>This constraint identifies vectors (directions) <span class="math-container">$x$</span> that are not rotated by <span class="math-container">$A$</span>, and the scalars <span class="math-container">$\lambda$</span> associated with each of those directions.</p> <p>The problem with eigendecomposition is that when the matrix isn't square, the left and right space are spaces of vectors of different sizes and therefore completely different spaces; there really isn't a sense in which <span class="math-container">$A$</span>'s action can be described as involving a "rotation", because the left and right spaces are not "oriented" relative to one another. There just isn't a way to generalize the notion of an eigendecomposition to a non-square matrix <span class="math-container">$A$</span>.</p> <p>Singular vectors provide a different way to identify vectors for which the action of <span class="math-container">$A$</span> is simple; one that <em>does</em> generalize to the case where the left and right spaces are different. 
A corresponding pair of singular vectors have a scalar <span class="math-container">$\sigma$</span> for which <span class="math-container">$A$</span> <em>scales</em> by the same amount, whether transforming from the left space to the right space or vice-versa:</p> <p><span class="math-container">$ x^T A = \sigma y^T$</span></p> <p><span class="math-container">$\sigma x = A y$</span>.</p> <p>Thus, eigendecomposition represents <span class="math-container">$A$</span> in terms of how it scales vectors it doesn't rotate, while <strong>singular value decomposition represents <span class="math-container">$A$</span> in terms of corresponding vectors that are scaled the same, whether moving from the left to the right space or vice-versa</strong>. When the left and right space are the same (i.e. when <span class="math-container">$A$</span> is square), singular value decomposition represents <span class="math-container">$A$</span> in terms of how it rotates and reflects vectors that <span class="math-container">$A$</span> and <span class="math-container">$A^T$</span> scale by the same amount.</p>
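This two-sided scaling property can be checked directly. A NumPy sketch on a random rectangular matrix (the seed and shape are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # a rectangular operator between R^5 and R^3

U, s, Vt = np.linalg.svd(A)

# For each singular triple (u, sigma, v): A v = sigma u and u^T A = sigma v^T,
# i.e. A scales the pair by the same amount in both directions.
for i in range(len(s)):
    u, sigma, v = U[:, i], s[i], Vt[i, :]
    assert np.allclose(A @ v, sigma * u)
    assert np.allclose(u @ A, sigma * v)
```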
matrices
<p>I am auditing a Linear Algebra class, and today we were taught about the rank of a matrix. The definition was given from the row point of view: </p> <blockquote> <p>"The rank of a matrix A is the number of non-zero rows in the reduced row-echelon form of A".</p> </blockquote> <p>The lecturer then explained that if the matrix <span class="math-container">$A$</span> has size <span class="math-container">$m \times n$</span>, then <span class="math-container">$rank(A) \leq m$</span> and <span class="math-container">$rank(A) \leq n$</span>. </p> <p>The way I had been taught about rank was that it was the smallest of </p> <ul> <li>the number of rows bringing new information</li> <li>the number of columns bringing new information. </li> </ul> <p>I don't see how that would change if we transposed the matrix, so I said in the lecture:</p> <p>"then the rank of a matrix is the same of its transpose, right?" </p> <p>And the lecturer said: </p> <p>"oh, not so fast! Hang on, I have to think about it". </p> <p>As the class has about 100 students and the lecturer was just substituting for the "normal" lecturer, he was probably a bit nervous, so he just went on with the lecture.</p> <p>I have tested "my theory" with one matrix and it works, but even if I tried with 100 matrices and it worked, I wouldn't have proven that it always works because there might be a case where it doesn't.</p> <p>So my question is first whether I am right, that is, whether the rank of a matrix is the same as the rank of its transpose, and second, if that is true, how can I prove it?</p> <p>Thanks :)</p>
<p>The answer is yes. This statement often goes under the name "row rank equals column rank". Knowing that, it is easy to search the internet for proofs.</p> <p>Also any reputable linear algebra text should prove this: it is indeed a rather important result.</p> <p>Finally, since you said that you had only a substitute lecturer, I won't castigate him, but this would be a distressing lacuna of knowledge for someone who is a regular linear algebra lecturer. </p>
<p>There are several simple proofs of this result. Unfortunately, most textbooks use a rather complicated approach using row reduced echelon forms. Please see some elegant proofs in the Wikipedia page (contributed by myself):</p> <p><a href="http://en.wikipedia.org/wiki/Rank_%28linear_algebra%29" rel="noreferrer">http://en.wikipedia.org/wiki/Rank_%28linear_algebra%29</a></p> <p>or the page on rank factorization:</p> <p><a href="http://en.wikipedia.org/wiki/Rank_factorization" rel="noreferrer">http://en.wikipedia.org/wiki/Rank_factorization</a></p> <p>Another of my favorites is the following:</p> <p>Define $\operatorname{rank}(A)$ to mean the column rank of A: $\operatorname{col rank}(A) = \dim \{Ax: x \in \mathbb{R}^n\}$. Let $A^{t}$ denote the transpose of A. First show that $A^{t}Ax = 0$ if and only if $Ax = 0$. This is standard linear algebra: one direction is trivial, the other follows from:</p> <p>$$A^{t}Ax=0 \implies x^{t}A^{t}Ax=0 \implies (Ax)^{t}(Ax) = 0 \implies Ax = 0$$</p> <p>Therefore, the columns of $A^{t}A$ satisfy the same linear relationships as the columns of $A$. It doesn't matter that they have different numbers of rows. They have the same number of columns and they have the same column rank. (This also follows from the rank+nullity theorem, if you have proved that independently, i.e. without assuming row rank = column rank.)</p> <p>Therefore, $\operatorname{col rank}(A) = \operatorname{col rank}(A^{t}A) \leq \operatorname{col rank}(A^{t})$. (This last inequality follows because each column of $A^{t}A$ is a linear combination of the columns of $A^{t}$. So, $\operatorname{col sp}(A^{t}A)$ is a subset of $\operatorname{col sp}(A^{t})$.) Now simply apply the argument to $A^{t}$ to get the reverse inequality, proving $\operatorname{col rank}(A) = \operatorname{col rank}(A^{t})$. Since $\operatorname{col rank}(A^{t})$ is the row rank of A, we are done.</p>
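The chain of equalities $\operatorname{col rank}(A) = \operatorname{col rank}(A^{t}A) = \operatorname{col rank}(A^{t})$ can be spot-checked numerically. A NumPy sketch on a deliberately rank-deficient rectangular matrix (the seed and shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
# A 5x7 matrix of rank 3: product of a 5x3 and a 3x7 Gaussian matrix.
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 7))

r = np.linalg.matrix_rank(A)
assert r == np.linalg.matrix_rank(A.T)      # row rank = column rank
assert r == np.linalg.matrix_rank(A.T @ A)  # col rank(A) = col rank(A^t A)
```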
linear-algebra
<p>Let $A$ be an $n\times m$ matrix. Prove that $\operatorname{rank} (A) = 1$ if and only if there exist column vectors $v \in \mathbb{R}^n$ and $w \in \mathbb{R}^m$ such that $A=vw^t$.</p> <hr> <p>Progress: I'm going back and forth between using the definitions of rank: $\operatorname{rank} (A) = \dim(\operatorname{col}(A)) = \dim(\operatorname{row}(A))$ or using the rank theorem that says $ \operatorname{rank}(A)+\operatorname{nullity}(A) = m$. So in the second case I have to prove that $\operatorname{nullity}(A)=m-1$.</p>
<p><strong>Hints:</strong> </p> <p>$A=\mathbf v\mathbf w^T\implies\operatorname{rank}A=1$ should be pretty easy to prove directly. Multiply a vector in $\mathbb R^m$ by $A$ and see what you get. </p> <p>For the other direction, think about what $A$ does to the basis vectors of $\mathbb R^m$ and what this means about the columns of $A$. </p> <hr> <p><strong>Solution</strong> </p> <p>Suppose $A=\mathbf v\mathbf w^T$. If $\mathbf u\in\mathbb R^m$, then $A\mathbf u=\mathbf v\mathbf w^T\mathbf u=(\mathbf u\cdot\mathbf w)\mathbf v$. Thus, $A$ maps every vector in $\mathbb R^m$ to a scalar multiple of $\mathbf v$, hence $\operatorname{rank}A=\dim\operatorname{im}A=1$. </p> <p>Now, assume $\operatorname{rank}A=1$. Then for all $\mathbf u\in\mathbb R^m$, $A\mathbf u=k\mathbf v$ for some fixed $\mathbf v\in\mathbb R^n$. In particular, this is true for the basis vectors of $\mathbb R^m$, so every column of $A$ is a multiple of $\mathbf v$. That is, $$ A=\pmatrix{w_1\mathbf v &amp; w_2\mathbf v &amp; \cdots &amp; w_m\mathbf v}=\mathbf v\pmatrix{w_1&amp;w_2&amp;\cdots&amp;w_m}=\mathbf v\mathbf w^T. $$</p>
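Both directions of the solution above are easy to sanity-check numerically. A NumPy sketch with arbitrary $\mathbf v \in \mathbb R^3$ and $\mathbf w \in \mathbb R^4$:

```python
import numpy as np

v = np.array([1.0, -2.0, 3.0])       # v in R^3
w = np.array([4.0, 0.0, 1.0, -1.0])  # w in R^4

A = np.outer(v, w)                   # A = v w^T, a 3x4 matrix
assert np.linalg.matrix_rank(A) == 1

# Every column of A is a multiple of v, as in the proof above.
for j in range(A.shape[1]):
    assert np.allclose(A[:, j], w[j] * v)
```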
<p>Suppose that $A$ has rank one. Then its image is one dimensional, so there is some nonzero $v$ that generates it. Moreover, for any other $w$, we can write $$Aw = \lambda(w)v$$</p> <p>for some scalar $\lambda(w)$ that depends linearly on $w$ by virtue of $A$ being linear and $v$ being a basis of the image of $A$. This then defines a nonzero functional $\mathbb R^m \longrightarrow \mathbb R$ which must be given by taking the dot product with some $w_0$; say $\lambda(w) =\langle w_0,w\rangle$. It follows then that </p> <p>$$ A(w) = \langle w_0,w\rangle v$$ for every $w$, or, what is the same, that $A= vw_0^t$. </p>
geometry
<p><a href="https://en.wikipedia.org/wiki/Antoine&#39;s_necklace" rel="noreferrer">Antoine's necklace</a> is an embedding of the <a href="https://en.wikipedia.org/wiki/Cantor_set" rel="noreferrer">Cantor set</a> in <span class="math-container">$\mathbb{R}^3$</span> constructed by taking a torus, replacing it with a necklace of smaller interlinked tori lying inside it, replacing each smaller torus with a necklace of interlinked tori lying inside it, and continuing the process <em>ad infinitum</em>; Antoine's necklace is the intersection of all iterations.</p> <p><a href="https://i.sstatic.net/yZA8N.png" rel="noreferrer"><img src="https://i.sstatic.net/yZA8N.png" alt="Antoine&#39;s necklace"></a></p> <p>A number of sources claim that this necklace '<em>cannot fall apart</em>' (e.g. <a href="https://blogs.scientificamerican.com/roots-of-unity/a-few-of-my-favorite-spaces-antoines-necklace/" rel="noreferrer">here</a>). Given that the necklace is totally disconnected this obviously has to be taken somewhat loosely, but I tried to figure out exactly what is meant by this. Most sources seem to point to <a href="https://www.maa.org/sites/default/files/pdf/upload_library/22/Polya/07468342.di020733.02p01166.pdf" rel="noreferrer">this paper</a> (which it must be noted contains some truly remarkable images, e.g. Figure 12). There the authors make the same point that Antoine's necklace 'cannot fall apart'. 
Nevertheless, all they seem to show in the paper is that it cannot be separated by a sphere (every sphere with a point of the necklace inside it and a point of the necklace outside it has a point of the necklace on it).</p> <p>It seems to me to be a reasonably trivial exercise to construct a geometrical object in <span class="math-container">$\mathbb{R}^3$</span> which cannot be separated by a sphere, and yet can still 'fall apart'.</p> <p>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <a href="https://i.sstatic.net/J5z3km.png" rel="noreferrer"><img src="https://i.sstatic.net/J5z3km.png" alt="Image1"></a></p> <p>In the spirit of the construction of Antoine's necklace, these two interlinked tori cannot be separated by a sphere (any sphere containing a point of one torus inside it will contain a point of that torus on its surface), but this seems to have no relation to the fact that they cannot fall apart - if we remove a segment of one of the tori the object still cannot be separated by a sphere, and yet can fall apart macroscopically.</p> <p>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <a href="https://i.sstatic.net/1MckZm.png" rel="noreferrer"><img src="https://i.sstatic.net/1MckZm.png" alt="Image2"></a></p> <p>The fact mentioned <a href="https://math.stackexchange.com/questions/1680081/projection-of-antoines-necklace">here</a> that the complement of the necklace is not simply connected, and the fact mentioned <a href="https://en.wikipedia.org/wiki/Antoine&#39;s_necklace" rel="noreferrer">here</a> that there are loops that cannot be unlinked from the necklace shouldn't impact whether it can be pulled apart either, as both are true of our broken rings</p> <p>My <em>question</em> is this: Is it possible to let me know either:</p> <ol> <li><p>How I have 
misunderstood separation by a sphere (so that it may still be relevant to an object being able to fall apart),</p></li> <li><p>What property Antoine's necklace does satisfy so that it cannot fall apart (if I have missed this), or</p></li> <li><p>What is actually meant when it is said to be unable to fall apart (if I have misunderstood this)</p></li> </ol>
<blockquote> <p>if we remove a segment of one of the tori the object still cannot be separated by a sphere, and yet can fall apart macroscopically.</p> </blockquote> <p>Well, sure it can. The key observation is that "sphere" means "homeomorphic image of a sphere" in this context.</p> <p>Turn the ring with the part removed (the "C") and pull the intact ring (the "O") through the missing piece. Then you have two separate components, the C and the O, which it is easy to see could be placed inside and outside of a sphere. Let's put the C on the inside and the O on the outside.</p> <p>Now, think about shrinking that sphere very tightly (but not touching) around the C, then putting the C back where it was. You've separated the two by the homeomorphic image of a sphere.</p> <p>Think about if we tried to do this with two Os, like your first figure. If the Os weren't interlocked, sure, it's easy, just like when the C and the O were separated. But if two Os are interlocked, there isn't any way to fit a sphere around one of them in the same way as you could with the C. And if you can't do it with only two Os, you certainly can't do it with Antoine's Necklace. Thus, it will stay together.</p>
<p>The necklace is a topological space <span class="math-container">$X$</span>, together with a natural embedding <span class="math-container">$i\colon X\to \Bbb R^3$</span>. Since the verb &quot;to fall apart&quot; describes a process, it may best be described by adding time as a variable. So we ask whether there is a homotopy <span class="math-container">$H\colon X\times[0,1]\to\Bbb R^3$</span> such that <span class="math-container">$H(\cdot,0)=i$</span> and <span class="math-container">$H(\cdot,1)$</span> is an embedding <span class="math-container">$X\to\Bbb R^3$</span> that is in an obvious fashion separated, say a non-empty part of (the image of) <span class="math-container">$X$</span> is in the <span class="math-container">$z&gt;0$</span> region, a non-empty part in the <span class="math-container">$z&lt;0$</span> region, but nothing in the <span class="math-container">$z=0$</span> hyperplane. Instead of a separating plane, a separating sphere would certainly also count (and in fact be slightly more general: two concentric spheres cannot fall apart in the first sense, but can in the second; both interpretations have a point as the inner sphere cannot really fall out, but it is not really linked to the outer sphere either ...)</p> <p>I am unaware whether the referenced authors took this homotopy-and-sphere approach and you overlooked the homotopy part, or perhaps whether they had some argument that the sphere property of <span class="math-container">$i$</span> is sufficient in the given situation (but not in general, as your examples show).</p>
probability
<p>$X \sim \mathcal{P}( \lambda) $ and $Y \sim \mathcal{P}( \mu)$ meaning that $X$ and $Y$ are Poisson distributions. What is the probability distribution law of $X + Y$. I know it is $X+Y \sim \mathcal{P}( \lambda + \mu)$ but I don't understand how to derive it.</p>
<p>This only holds if $X$ and $Y$ are independent, so we suppose this from now on. We have for $k \ge 0$: \begin{align*} P(X+ Y =k) &amp;= \sum_{i = 0}^k P(X+ Y = k, X = i)\\ &amp;= \sum_{i=0}^k P(Y = k-i , X =i)\\ &amp;= \sum_{i=0}^k P(Y = k-i)P(X=i)\\ &amp;= \sum_{i=0}^k e^{-\mu}\frac{\mu^{k-i}}{(k-i)!}e^{-\lambda}\frac{\lambda^i}{i!}\\ &amp;= e^{-(\mu + \lambda)}\frac 1{k!}\sum_{i=0}^k \frac{k!}{i!(k-i)!}\mu^{k-i}\lambda^i\\ &amp;= e^{-(\mu + \lambda)}\frac 1{k!}\sum_{i=0}^k \binom ki\mu^{k-i}\lambda^i\\ &amp;= \frac{(\mu + \lambda)^k}{k!} \cdot e^{-(\mu + \lambda)} \end{align*} Hence, $X+ Y \sim \mathcal P(\mu + \lambda)$.</p>
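The convolution sum above can be verified term by term in a few lines of Python (plain standard library; the parameter values and the 1e-12 tolerance are arbitrary choices):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return exp(-lam) * lam**k / factorial(k)

lam, mu = 2.0, 3.5
for k in range(20):
    # The convolution sum from the derivation above ...
    conv = sum(poisson_pmf(i, lam) * poisson_pmf(k - i, mu) for i in range(k + 1))
    # ... equals the Poisson(lam + mu) pmf.
    assert abs(conv - poisson_pmf(k, lam + mu)) < 1e-12
```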
<p>Another approach is to use characteristic functions. If $X\sim \mathrm{po}(\lambda)$, then the characteristic function of $X$ is (if this is unknown, just calculate it) $$ \varphi_X(t)=E[e^{itX}]=e^{\lambda(e^{it}-1)},\quad t\in\mathbb{R}. $$ Now suppose that $X$ and $Y$ are <em>independent</em> Poisson distributed random variables with parameters $\lambda$ and $\mu$ respectively. Then due to the independence we have that $$ \varphi_{X+Y}(t)=\varphi_X(t)\varphi_Y(t)=e^{\lambda(e^{it}-1)}e^{\mu(e^{it}-1)}=e^{(\mu+\lambda)(e^{it}-1)},\quad t\in\mathbb{R}. $$ As the characteristic function completely determines the distribution, we conclude that $X+Y\sim\mathrm{po}(\lambda+\mu)$.</p>
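The characteristic-function identity can also be checked numerically by summing $E[e^{itX}]$ directly from the pmf (a sketch; the truncation at 100 terms and the tolerances are arbitrary choices):

```python
import cmath
from math import exp, factorial

def phi(t, lam, terms=100):
    """Characteristic function of Poisson(lam), summed from its pmf."""
    return sum(exp(-lam) * lam**k / factorial(k) * cmath.exp(1j * t * k)
               for k in range(terms))

lam, mu = 2.0, 3.5
for t in (0.0, 0.5, 1.0, 2.0):
    # The series matches the closed form e^{lam(e^{it}-1)} ...
    assert abs(phi(t, lam) - cmath.exp(lam * (cmath.exp(1j * t) - 1))) < 1e-10
    # ... and the product of the two characteristic functions is the
    # characteristic function of a Poisson(lam + mu) variable.
    assert abs(phi(t, lam) * phi(t, mu)
               - cmath.exp((lam + mu) * (cmath.exp(1j * t) - 1))) < 1e-10
```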
linear-algebra
<blockquote> <p>If $A$ and $B$ are square matrices such that $AB = I$, where $I$ is the identity matrix, show that $BA = I$. </p> </blockquote> <p>I do not understand anything more than the following.</p> <ol> <li>Elementary row operations.</li> <li>Linear dependence.</li> <li>Row reduced forms and their relations with the original matrix.</li> </ol> <p>If the entries of the matrix are not from a mathematical structure which supports commutativity, what can we say about this problem?</p> <p><strong>P.S.</strong>: Please avoid using the transpose and/or inverse of a matrix.</p>
<p>Dilawar says in 2. that he knows linear dependence! So I will give a proof, similar to that of TheMachineCharmer, which uses linear independence.</p> <p>Suppose each matrix is $n$ by $n$. We consider our matrices to all be acting on some $n$-dimensional vector space with a chosen basis (hence isomorphism between linear transformations and $n$ by $n$ matrices).</p> <p>Then $AB$ has range equal to the full space, since $AB=I$. Thus the range of $B$ must also have dimension $n$. For if it did not, then a set of $n-1$ vectors would span the range of $B$, so the range of $AB$, which is the image under $A$ of the range of $B$, would also be spanned by a set of $n-1$ vectors, hence would have dimension less than $n$.</p> <p>Now note that $B=BI=B(AB)=(BA)B$. By the distributive law, $(I-BA)B=0$. Thus, since $B$ has full range, the matrix $I-BA$ gives $0$ on all vectors. But this means that it must be the $0$ matrix, so $I=BA$.</p>
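Over $\mathbb R$ this is easy to illustrate numerically. A NumPy sketch in which $B$ is obtained by solving $AB=I$ column by column (so nothing about $BA$ is assumed up front):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))       # almost surely invertible

# Find B with A B = I by solving A b_j = e_j for each column b_j.
B = np.linalg.solve(A, np.eye(4))

assert np.allclose(A @ B, np.eye(4))  # the hypothesis AB = I ...
assert np.allclose(B @ A, np.eye(4))  # ... forces BA = I, as the proof shows
```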
<p>We have the following general assertion:</p> <p><strong>Lemma.</strong> <em>Let <span class="math-container">$A$</span> be a finite-dimensional <a href="https://en.wikipedia.org/wiki/Algebra_over_a_field" rel="noreferrer"><span class="math-container">$K$</span>-algebra</a>, and <span class="math-container">$a,b \in A$</span>. If <span class="math-container">$ab=1$</span>, then <span class="math-container">$ba=1$</span>.</em></p> <p>For example, <span class="math-container">$A$</span> could be the algebra of <span class="math-container">$n \times n$</span> matrices over <span class="math-container">$K$</span>.</p> <p><strong>Proof.</strong> The sequence of subspaces <span class="math-container">$\cdots \subseteq b^{k+1} A \subseteq b^k A \subseteq \cdots \subseteq A$</span> must be stationary, since <span class="math-container">$A$</span> is finite-dimensional. Thus there is some <span class="math-container">$k$</span> with <span class="math-container">$b^{k+1} A = b^k A$</span>. So there is some <span class="math-container">$c \in A$</span> such that <span class="math-container">$b^k = b^{k+1} c$</span>. Now multiply with <span class="math-container">$a^k$</span> on the left to get <span class="math-container">$1=bc$</span>. Then <span class="math-container">$ba=ba1 = babc=b1c=bc=1$</span>. <span class="math-container">$\square$</span></p> <p>The proof also works in every left- or right-<a href="https://en.wikipedia.org/wiki/Artinian_ring" rel="noreferrer">Artinian ring</a> <span class="math-container">$A$</span>. In particular, the statement is true in every finite ring.</p> <p>Remark that we need in an essential way some <strong>finiteness condition</strong>. 
There is no purely algebraic manipulation with <span class="math-container">$a,b$</span> that shows <span class="math-container">$ab = 1 \Rightarrow ba=1$</span>.</p> <p>In fact, there is a <span class="math-container">$K$</span>-algebra with two elements <span class="math-container">$a,b$</span> such that <span class="math-container">$ab=1$</span>, but <span class="math-container">$ba \neq 1$</span>. Consider the left shift <span class="math-container">$a : K^{\mathbb{N}} \to K^{\mathbb{N}}$</span>, <span class="math-container">$a(x_0,x_1,\dotsc) := (x_1,x_2,\dotsc)$</span> and the right shift <span class="math-container">$b(x_0,x_1,\dotsc) := (0,x_0,x_1,\dotsc)$</span>. Then <span class="math-container">$a \circ b = \mathrm{id} \neq b \circ a$</span> holds in the <span class="math-container">$K$</span>-algebra <span class="math-container">$\mathrm{End}_K(K^{\mathbb{N}})$</span>.</p> <p>See <a href="https://math.stackexchange.com/questions/298791">SE/298791</a> for a proof of <span class="math-container">$AB=1 \Rightarrow BA=1$</span> for square matrices over a commutative ring.</p>
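The shift example above is concrete enough to run. A sketch modelling sequences as Python lists (a finite prefix is all that is needed to see that $b \circ a \neq \mathrm{id}$):

```python
# Left and right shift on sequences, represented as finite Python lists.
def left_shift(x):   # a: (x0, x1, ...) -> (x1, x2, ...)
    return x[1:]

def right_shift(x):  # b: (x0, x1, ...) -> (0, x0, x1, ...)
    return [0] + x

x = [1, 2, 3]
assert left_shift(right_shift(x)) == x          # a o b = identity
assert right_shift(left_shift(x)) != x          # b o a is NOT the identity
assert right_shift(left_shift(x)) == [0, 2, 3]  # the first coordinate is lost
```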
logic
<p>I remember hearing several times the advice that, we should avoid using a proof by contradiction, if it is simple to convert to a direct proof or a proof by contrapositive. Could you explain the reason? Do logicians think that proofs by contradiction are somewhat weaker than direct proofs?</p> <p>Is there any reason that one would still continue looking for a direct proof of some theorem, although a proof by contradiction has already been found? I don't mean improvements in terms of elegance or exposition, I am asking about logical reasons. For example, in the case of the "axiom of choice", there is obviously reason to look for a proof that does not use the axiom of choice. Is there a similar case for proofs by contradiction?</p>
<p>To <a href="https://mathoverflow.net/q/12342">this MathOverflow question</a>, I posted the following <a href="https://mathoverflow.net/a/12400">answer</a> (and there are several other interesting answers there):</p> <ul> <li><em>With good reason</em>, we mathematicians prefer a direct proof of an implication over a proof by contradiction, when such a proof is available. (all else being equal)</li> </ul> <p>What is the reason? The reason is the <em>fecundity</em> of the proof, meaning our ability to use the proof to make further mathematical conclusions. When we prove an implication (p implies q) directly, we assume p, and then make some intermediary conclusions r<sub>1</sub>, r<sub>2</sub>, before finally deducing q. Thus, our proof not only establishes that p implies q, but also, that p implies r<sub>1</sub> and r<sub>2</sub> and so on. Our proof has provided us with additional knowledge about the context of p, about what else must hold in any mathematical world where p holds. So we come to a fuller understanding of what is going on in the p worlds.</p> <p>Similarly, when we prove the contrapositive (&not;q implies &not;p) directly, we assume &not;q, make intermediary conclusions r<sub>1</sub>, r<sub>2</sub>, and then finally conclude &not;p. Thus, we have also established not only that &not;q implies &not;p, but also, that it implies r<sub>1</sub> and r<sub>2</sub> and so on. Thus, the proof tells us about what else must be true in worlds where q fails. Equivalently, since these additional implications can be stated as (&not;r<sub>1</sub> implies q), we learn about many different hypotheses that all imply q. </p> <p>These kinds of conclusions can increase the value of the proof, since we learn not only that (p implies q), but also we learn an entire context about what it is like in a mathematical situation where p holds (or where q fails, or about diverse situations leading to q). 
</p> <p>With reductio, in contrast, a proof of (p implies q) by contradiction seems to carry little of this extra value. We assume p and &not;q, and argue r<sub>1</sub>, r<sub>2</sub>, and so on, before arriving at a contradiction. The statements r<sub>1</sub> and r<sub>2</sub> are all deduced under the contradictory hypothesis that p and &not;q, which ultimately does not hold in any mathematical situation. The proof has provided extra knowledge about a nonexistent, contradictory land. (Useless!) So these intermediary statements do not seem to provide us with any greater knowledge about the p worlds or the q worlds, beyond the brute statement that (p implies q) alone.</p> <p>I believe that this is the reason that sometimes, when a mathematician completes a proof by contradiction, things can still seem unsettled beyond the brute implication, with less context and knowledge about what is going on than would be the case with a direct proof.</p> <p>For an example of a proof where we are led to false expectations in a proof by contradiction, consider Euclid's theorem that there are infinitely many primes. In a common proof by contradiction, one assumes that p<sub>1</sub>, ..., p<sub>n</sub> are <em>all</em> the primes. It follows that since none of them divide the product-plus-one p<sub>1</sub>...p<sub>n</sub>+1, that this product-plus-one is also prime. This contradicts that the list was exhaustive. Now, many beginners falsely expect after this argument that whenever p<sub>1</sub>, ..., p<sub>n</sub> are prime, then the product-plus-one is also prime. But of course, this isn't true, and this would be a misplaced instance of attempting to extract greater information from the proof, misplaced because this is a proof by contradiction, and that conclusion relied on the assumption that p<sub>1</sub>, ..., p<sub>n</sub> were <em>all</em> the primes. 
If one organizes the proof, however, as a direct argument showing that whenever p<sub>1</sub>, ..., p<sub>n</sub> are prime, then there is yet another prime not on the list, then one is led to the true conclusion, that p<sub>1</sub>...p<sub>n</sub>+1 has merely a prime divisor not on the original list. (And Michael Hardy mentions that indeed Euclid had made the direct argument.)</p>
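The misconception is easy to demonstrate concretely: the smallest counterexample uses the first six primes. A short Python check (with a naive trial-division primality test):

```python
def is_prime(n):
    """Naive trial-division primality test."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

primes = [2, 3, 5, 7, 11, 13]
n = 1
for p in primes:
    n *= p
n += 1  # product-plus-one

assert n == 30031
assert not is_prime(n)                    # product-plus-one need not be prime:
assert n == 59 * 509                      # here it factors as 59 * 509 ...
assert all(n % p != 0 for p in primes)    # ... but its prime divisors are new
```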
<p>Most logicians consider proofs by contradiction to be equally valid, however some people are <a href="http://en.wikipedia.org/wiki/Constructivism_%28mathematics%29">constructivists/intuitionists</a> and don't consider them valid. </p> <p>(<strong>Edit:</strong> This is not strictly true, as explained in comments. Only certain proofs by contradiction are problematic from the constructivist point of view, namely those that prove "A" by assuming "not A" and getting a contradiction. In my experience, this is usually exactly the situation that people have in mind when saying "proof by contradiction.")</p> <p>One possible reason that the constructivist point of view makes a certain amount of sense is that statements like the continuum hypothesis are independent of the axioms, so it's a bit weird to claim that it's either true or false, in a certain sense it's neither.</p> <p>Nonetheless constructivism is a relatively uncommon position among mathematicians/logicians. However, it's not considered totally nutty or beyond the pale. Fortunately, in practice most proofs by contradiction can be translated into constructivist terms and actual constructivists are rather adept at doing so. So the rest of us mostly don't bother worrying about this issue, figuring it's the constructivists problem.</p>
logic
<p><span class="math-container">$x,y$</span> are perpendicular if and only if <span class="math-container">$x\cdot y=0$</span>. Now, <span class="math-container">$||x+y||^2=(x+y)\cdot (x+y)=(x\cdot x)+(x\cdot y)+(y\cdot x)+(y\cdot y)$</span>. The middle two terms are zero if and only if <span class="math-container">$x,y$</span> are perpendicular. So, <span class="math-container">$||x+y||^2=(x\cdot x)+(y\cdot y)=||x||^2+||y||^2$</span> if and only if <span class="math-container">$x,y$</span> are perpendicular. (I copied <a href="https://math.stackexchange.com/questions/11509/how-to-prove-the-pythagoras-theorem-using-vectors">this</a>)</p> <p>I think this argument is circular because the property</p> <blockquote> <p><span class="math-container">$x\cdot y=0 $</span> implies <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are perpendicular</p> </blockquote> <p>comes from the Pythagorean theorem. </p> <p>Oh, it just came to mind that the property could be derived from the law of cosines. The law of cosines can be proved without the Pythagorean theorem, right, so the proof isn't circular?</p> <p><strong>Another question</strong>: If the property comes from the Pythagorean theorem or cosine law, then how does the dot product give a condition for orthogonality for higher dimensions?</p> <p><strong><em>Edit</em></strong>: The following quote by Poincare helped me regarding the question:</p> <blockquote> <p>Mathematics is the art of giving the same name to different things.</p> </blockquote>
<p>I think the question mixes two quite different concepts together: <em>proof</em> and <em>motivation.</em></p> <p>The <em>motivation</em> for defining the inner product, orthogonality, and length of vectors in <span class="math-container">$\mathbb R^n$</span> in the "usual" way (that is, <span class="math-container">$\langle x,y\rangle = x_1y_1 + x_2y_2 + \cdots + x_ny_n$</span>) is presumably at least in part that by doing this we will be able to establish a property of <span class="math-container">$\mathbb R_n$</span> corresponding to the familiar Pythagorean Theorem from synthetic plane geometry. The motivation is, indeed, circular in that we get the Pythagorean Theorem as one of the results of something we set up because we wanted the Pythagorean Theorem.</p> <p>But that's how many axiomatic systems are born. Someone wants to be able to work with mathematical objects in a certain way, so they come up with axioms and definitions that provide mathematical objects they can work with the way they wanted to. I would be surprised to learn that the classical axioms of Euclidean geometry (from which the original Pythagorean Theorem derives) were <em>not</em> created for the reason that they produced the kind of geometry that Euclid's contemporaries wanted to work with.</p> <p><em>Proof,</em> on the other hand, consists of starting with a given set of axioms and definitions (with emphasis on the word "given," that is, they have no prior basis other than that we want to believe them), and showing that a certain result necessarily follows from those axioms and definitions without relying on any other facts that did not derive from those axioms and definitions. In the proof of the "Pythagorean Theorem" in <span class="math-container">$\mathbb R^n,$</span> after the point at which the axioms were given, did any step of the proof rely on anything other than the stated axioms and definitions?</p> <p>The answer to that question would depend on how the axioms were stated. 
If there is an axiom that says <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are orthogonal if <span class="math-container">$\langle x,y\rangle = 0,$</span> then this fact does not <em>logically</em> "come from" the Pythagorean Theorem; it comes from the axioms.</p>
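As a quick sanity check of the identity being discussed, here is a minimal sketch in plain Python (the example vectors are my own, not from the posts above):

```python
def dot(x, y):
    # Standard dot product on R^n
    return sum(a * b for a, b in zip(x, y))

def norm_sq(x):
    # ||x||^2 = x . x
    return dot(x, x)

x = (1, 2, 0)
y = (-2, 1, 5)
assert dot(x, y) == 0  # x and y are perpendicular

# ||x + y||^2 = ||x||^2 + ||y||^2 exactly because the cross terms vanish
s = tuple(a + b for a, b in zip(x, y))
assert norm_sq(s) == norm_sq(x) + norm_sq(y)
```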
<p>Let's try this on a different vector space. Here's a nice one: Let <span class="math-container">$\mathscr L = C([0,1])$</span> be the set of all real continuous functions defined on the interval <span class="math-container">$I = [0,1]$</span>. If <span class="math-container">$f, g \in \mathscr L$</span> and <span class="math-container">$a,b \in \Bbb R$</span>, then <span class="math-container">$h(x) := af(x) + bg(x)$</span> defines another continuous function on <span class="math-container">$I$</span>, so <span class="math-container">$\scr L$</span> is indeed a vector space over <span class="math-container">$\Bbb R$</span>.</p> <p>Now I arbitrarly define <span class="math-container">$f \cdot g := \int_I f(x)g(x)dx$</span>, and quickly note that this operation is commutative and <span class="math-container">$(af + bg)\cdot h = a(f\cdot h) + b(g \cdot h)$</span>, and that <span class="math-container">$f \cdot f \ge 0$</span> and <span class="math-container">$f\cdot f = 0$</span> if and only if <span class="math-container">$f$</span> is the constant function <span class="math-container">$0$</span>.</p> <p>Thus we see that <span class="math-container">$f\cdot g$</span> acts as a dot product on <span class="math-container">$\scr L$</span>, and so we can define <span class="math-container">$$\|f\| := \sqrt{f\cdot f}$$</span> and call <span class="math-container">$\|f - g\|$</span> the "distance from <span class="math-container">$f$</span> to <span class="math-container">$g$</span>".</p> <p>By the Cauchy-Schwarz inequality <span class="math-container">$$\left(\int fg\right)^2 \le \int f^2\int g^2$$</span> and therefore <span class="math-container">$$|f\cdot g| \le \|f\|\|g\|$$</span></p> <p>Therefore, we can arbitrarily define for non-zero <span class="math-container">$f, g$</span> that <span class="math-container">$$\theta = \cos^{-1}\left(\frac{f\cdot g}{\|f\|\|g\|}\right)$$</span> and call <span class="math-container">$\theta$</span> the "angle between <span 
class="math-container">$f$</span> and <span class="math-container">$g$</span>", and define that <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are "perpendicular" when <span class="math-container">$\theta = \pi/2$</span>. Equivalently, <span class="math-container">$f$</span> is perpendicular to <span class="math-container">$g$</span> exactly when <span class="math-container">$f \cdot g = 0$</span>.</p> <p>And now we see that a Pythagorean-like theorem holds for <span class="math-container">$\scr L$</span>: <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are perpendicular exactly when <span class="math-container">$\|f - g\|^2 = \|f\|^2 + \|g\|^2$</span></p> <hr> <p>The point of this exercise? That the vector Pythagorean theorem is something different from the familiar Pythagorean theorem of geometry. The vector space <span class="math-container">$\scr L$</span> is not a plane, or space, or even <span class="math-container">$n$</span> dimensional space for any <span class="math-container">$n$</span>. It is in fact an infinite dimensional vector space. I did not rely on geometric intuition to develop this. At no point did the geometric Pythagorean theorem come into play.</p> <p>I did choose the definitions to follow a familiar pattern, but the point here is that I (or actually far more gifted mathematicians whose footsteps I'm aping) made those definitions by choice. They were not forced on me by the Pythagorean theorem, but rather were chosen by me exactly so that this vector Pythagorean theorem would be true.</p> <p>By making these definitions, I can now <em>start</em> applying those old geometric intuitions to this weird set of functions that beforehand was something too esoteric to handle.</p> <p>The vector Pythagorean theorem isn't a way to prove that old geometric result. 
It is a way to show that the old geometric result also has application in this entirely new and different arena of vector spaces.</p>
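The function-space inner product above can also be probed numerically. The following sketch (my own illustration; it approximates <span class="math-container">$\int_0^1 fg$</span> with a midpoint rule) checks that <span class="math-container">$\sin(2\pi t)$</span> and <span class="math-container">$\cos(2\pi t)$</span> are "perpendicular" in <span class="math-container">$\scr L$</span> and satisfy the vector Pythagorean theorem:

```python
import math

def inner(f, g, n=100_000):
    # Midpoint-rule approximation of the inner product f . g = integral of f(t)g(t) over [0,1]
    h = 1.0 / n
    return sum(f((k + 0.5) * h) * g((k + 0.5) * h) for k in range(n)) * h

f = lambda t: math.sin(2 * math.pi * t)
g = lambda t: math.cos(2 * math.pi * t)

# f . g = 0: the two functions are "perpendicular" in this inner product
assert abs(inner(f, g)) < 1e-9

# ||f - g||^2 = ||f||^2 + ||g||^2, the vector Pythagorean theorem
diff = lambda t: f(t) - g(t)
assert abs(inner(diff, diff) - (inner(f, f) + inner(g, g))) < 1e-6
```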
matrices
<p>How can I understand that $A^TA$ is invertible if $A$ has independent columns? I found a similar <a href="https://math.stackexchange.com/questions/1181271/if-ata-is-invertible-then-a-has-linearly-independent-column-vectors">question</a>, phrased the other way around, so I tried to use the theorem</p> <p>$$ \operatorname{rank}(A^TA) \le \min(\operatorname{rank}(A^T),\operatorname{rank}(A)) $$</p> <p>Given $\operatorname{rank}(A) = \operatorname{rank}(A^T) = n$ and $A^TA$ produces an $n\times n$ matrix, I can't seem to prove that $\operatorname{rank}(A^TA)$ is actually $n$.</p> <p>I also tried to look at the question another way with the matrices</p> <p>$$ A^TA = \begin{bmatrix}a_1^T \\ a_2^T \\ \vdots \\ a_n^T \end{bmatrix} \begin{bmatrix}a_1 &amp; a_2 &amp; \ldots &amp; a_n \end{bmatrix} = \begin{bmatrix}A^Ta_1 &amp; A^Ta_2 &amp; \ldots &amp; A^Ta_n\end{bmatrix} $$</p> <p>But I still can't seem to show that $A^TA$ is invertible. So, how should I get a better understanding of why $A^TA$ is invertible if $A$ has independent columns?</p>
<p>Consider the following: $$A^TAx=\mathbf 0$$ Here, $Ax$, an element in the range of $A$, is in the null space of $A^T$. However, the null space of $A^T$ and the range of $A$ are orthogonal complements, so $Ax=\mathbf 0$.</p> <p>If $A$ has linearly independent columns, then $Ax=\mathbf 0 \implies x=\mathbf 0$, so the null space of $A^TA=\{\mathbf 0\}$. Since $A^TA$ is a square matrix, this means $A^TA$ is invertible.</p>
<p>If $A $ is a real $m \times n $ matrix then $A $ and $A^T A $ have the same null space. Proof: $A^TA x =0\implies x^T A^T Ax =0 \implies (Ax)^TAx=0 \implies \|Ax\|^2 = 0 \implies Ax = 0 $. </p>
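Both answers hinge on the fact that $A^TAx=\mathbf 0$ forces $Ax=\mathbf 0$. A small numeric illustration (the example matrices are my own):

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def det2(M):
    # Determinant of a 2x2 matrix
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# A is 3x2 with independent columns, so A^T A (2x2) is invertible.
A = [[1, 0], [0, 1], [1, 1]]
assert det2(matmul(transpose(A), A)) != 0

# B has dependent columns (second column = 2 * first), so B^T B is singular.
B = [[1, 2], [2, 4], [3, 6]]
assert det2(matmul(transpose(B), B)) == 0
```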
linear-algebra
<p>I stumbled across the following problem and found it cute. </p> <p><strong>Problem:</strong> We are given that $19$ divides $23028$, $31882$, $86469$, $6327$, and $61902$. Show that $19$ divides the following determinant: </p> <p>$$\left| \begin{matrix} 2 &amp; 3&amp;0&amp;2&amp;8 \\ 3 &amp; 1&amp;8&amp;8&amp;2\\ 8&amp;6&amp;4&amp;6&amp;9\\ 0&amp;6&amp;3&amp;2&amp;7\\ 6&amp;1&amp;9&amp;0&amp;2 \end{matrix}\right|$$</p>
<p>Multiply the first column by $10^4$, the second by $10^3$, the third by $10^2$ and the fourth by $10$ - this will scale the value of the determinant by $10^{4+3+2+1}=10^{10}$, which is coprime to $19$. Now add the last four columns to the first one - this will not change the value of the determinant. Finally notice the first column now reads $23028, 31882, 86469, 6327$, and $61902$: each is a multiple of $19$ so we can factor a nineteen cleanly out of the determinant.</p>
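The claim is also easy to confirm by brute force; a short sketch (cofactor expansion is plenty fast for a single $5\times 5$ matrix):

```python
def det(M):
    # Laplace (cofactor) expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

M = [[2, 3, 0, 2, 8],
     [3, 1, 8, 8, 2],
     [8, 6, 4, 6, 9],
     [0, 6, 3, 2, 7],
     [6, 1, 9, 0, 2]]
assert det(M) % 19 == 0  # 19 divides the determinant, as claimed
```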
<p>If the determinant is $0$ it is obvious that $19|0$. Now suppose that the determinant is not $0$.</p> <p>$$\begin{align*} 2\cdot10^4+3\cdot10^3+0\cdot10^2+2\cdot10+8\cdot1&amp;=23028\\ 3\cdot10^4+1\cdot10^3+8\cdot10^2+8\cdot10+2\cdot1&amp;=31882\\ 8\cdot10^4+6\cdot10^3+4\cdot10^2+6\cdot10+9\cdot1&amp;=86469\\ 0\cdot10^4+6\cdot10^3+3\cdot10^2+2\cdot10+7\cdot1&amp;=06327\\ 6\cdot10^4+1\cdot10^3+9\cdot10^2+0\cdot10+2\cdot1&amp;=61902 \end{align*}$$</p> <p>By Cramer's rule</p> <p>$$1=\frac{\left|\begin{matrix} 2 &amp; 3 &amp; 0 &amp; 2 &amp; 23028 \\ 3 &amp; 1 &amp; 8 &amp; 8 &amp; 31882 \\ 8 &amp; 6 &amp; 4 &amp; 6 &amp; 86469 \\ 0 &amp; 6 &amp; 3 &amp; 2 &amp; 06327 \\ 6 &amp; 1 &amp; 9 &amp; 0 &amp; 61902 \end{matrix}\right|}{\left|\begin{matrix} 2 &amp; 3 &amp; 0 &amp; 2 &amp; 8 \\ 3 &amp; 1 &amp; 8 &amp; 8 &amp; 2 \\ 8 &amp; 6 &amp; 4 &amp; 6 &amp; 9 \\ 0 &amp; 6 &amp; 3 &amp; 2 &amp; 7 \\ 6 &amp; 1 &amp; 9 &amp; 0 &amp; 2\end{matrix}\right|}$$</p> <p>Then</p> <p>$$\left|\begin{matrix} 2 &amp; 3 &amp; 0 &amp; 2 &amp; 8 \\ 3 &amp; 1 &amp; 8 &amp; 8 &amp; 2 \\ 8 &amp; 6 &amp; 4 &amp; 6 &amp; 9 \\ 0 &amp; 6 &amp; 3 &amp; 2 &amp; 7 \\ 6 &amp; 1 &amp; 9 &amp; 0 &amp; 2\end{matrix}\right|=\left|\begin{matrix} 2 &amp; 3 &amp; 0 &amp; 2 &amp; 23028 \\ 3 &amp; 1 &amp; 8 &amp; 8 &amp; 31882 \\ 8 &amp; 6 &amp; 4 &amp; 6 &amp; 86469 \\ 0 &amp; 6 &amp; 3 &amp; 2 &amp; 06327 \\ 6 &amp; 1 &amp; 9 &amp; 0 &amp; 61902 \end{matrix}\right|$$</p> <p>But last determinant is obviously divisible by $19$.</p>
logic
<p>When I was studying, the mathematical analysis professor said something interesting when he was explaining the implication (logical operator), namely <span class="math-container">$(False \implies True) = True$</span>. He said something like (from my memory):</p> <blockquote> <p>You can derive any truth from a falsehood. <strong>If we accept a single falsehood as truth, then we can prove any single theorem we want.</strong></p> </blockquote> <p>If I understand it correctly, it means that if we assume that <span class="math-container">$2+2=5$</span>, then we can provide proof that <span class="math-container">$\pi = 3$</span>, or that <span class="math-container">$1 = 2$</span> or that <span class="math-container">$\sin^2 x + \cos^2 x \neq 1$</span>.</p> <p>Is the bold statement true? Is it possible to canonically prove it even though it assumes (temporarily) accepting falsehood as truth?</p> <hr /> <p>Just to clarify a little, my question is primarily about the professor’s statement and <em>not</em> about the principles of implication. I’m just curious, what is the scale of destructiveness (in terms of drawing otherwise logical conclusions) of accepting something false as a truth.</p> <p>It is also interesting to me whether this observation can be generalized, in terms of if we, e.g., falsely assume that dogs and cats are exactly the same animal, can we prove (with a series of otherwise logical conclusions) that, e.g., the Statue of Liberty is actually placed underwater or that the Moon is heavily populated by chipmunks (but that’s a bonus question, because it expands outside the field of mathematics, I guess.)</p>
<p>Strangely, this is pretty much correct. It is difficult to come up with a reasonable logic that doesn't permit this.</p> <p>Your arithmetic examples are easy to deal with. If <span class="math-container">$$2+2=5$$</span> then we can proceed:</p> <p><span class="math-container">$$\begin{align} 2+2 &amp;= 5 \\ 2 &amp; = 3 &amp;\text{subtract $2$}\\ 0 &amp; = 1 &amp;\text{subtract $2$ again}\\ 0 &amp; = \pi - 3 &amp;\text{multiply by $\pi-3$} \\ 3 &amp; = \pi &amp;\text{add $3$} \end{align}$$</span></p> <p>as you requested. (It's similarly easy to get <span class="math-container">$1=2$</span> or your other formulas.)</p> <p>But the problem goes deeper than arithmetic. The logical principle here is called the <a href="https://en.wikipedia.org/wiki/Principle_of_explosion" rel="noreferrer">principle of explosion</a> or sometimes “EFQ” for short, and it is very difficult to avoid.</p> <p>Suppose our logical system has an “or” operation <span class="math-container">$\lor$</span> where <span class="math-container">$$X\lor Y$$</span> means <span class="math-container">$$X \text{ is true, or } Y \text{ is true, or both.}$$</span> Then these rules both make perfect sense:</p> <ol> <li>If we have proved <span class="math-container">$X$</span>, we can conclude <span class="math-container">$X\lor Y$</span> for any <span class="math-container">$Y$</span> at all.</li> <li>If we have proved <span class="math-container">$X\lor Y$</span> and we know that <span class="math-container">$X$</span> is false, we can conclude <span class="math-container">$Y$</span>.</li> </ol> <p>If you accept these, then you should also accept EFQ. This is why: Suppose we have proved <span class="math-container">$X$</span>, which is false. By (1) we can conclude <span class="math-container">$X\lor Y$</span>, and since <span class="math-container">$X$</span> is false, by (2) we can conclude <span class="math-container">$Y$</span>. 
So after proving <span class="math-container">$X$</span>, which is false, we can conclude <span class="math-container">$Y$</span> for any <span class="math-container">$Y$</span> at all.</p> <p>If you don't like this result, you need to say which of (1) and (2) you will reject.</p> <p>(<a href="https://math.stackexchange.com/q/148210/25554">I asked about this previously</a>. It is hard to get rid of.)</p>
<p>Others have given good reasons why the statement that the professor presumably intended is correct.</p> <p>However, the actual statement</p> <blockquote> <p>If we accept a single falsehood as truth, then we can prove any single theorem we want</p> </blockquote> <p>is not quite correct. It is not enough to accept a falsehood as truth: we would need to accept something that is <em>provably</em> false.</p> <p>After all, suppose we have a true statement <span class="math-container">$S$</span> that we want to prove. One strategy for doing this is as follows: assume <span class="math-container">$\neg S$</span>, deduce that <span class="math-container">$0=1$</span>, conclude by contradiction that <span class="math-container">$S$</span> is true. The statement above claims that we can always carry out the middle step, i.e. it implies</p> <blockquote> <p>Any true statement can be proven by contradiction.</p> </blockquote> <p>But we know that, in any reasonable axiomatic system, there are <a href="https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems" rel="noreferrer">true statements that can't be proved</a>.</p>
linear-algebra
<p>Given an $n\times n$-matrix $A$ with integer entries, I would like to decide whether there is some $m\in\mathbb N$ such that $A^m$ is the identity matrix.</p> <p>I can solve this by regarding $A$ as a complex matrix and computing its Jordan normal form; equivalently, I can compute the eigenvalues and check whether they are roots of $1$ and whether their geometric and algebraic multiplicities coincide.</p> <p>Are there other ways to solve this problem, perhaps exploiting the fact that $A$ has integer entries? <strong>Edit:</strong> I am interested in conditions which are easy to verify for families of matrices in a proof.</p> <p><strong>Edit:</strong> Thanks to everyone for this wealth of answers. It will take me some time to read all of them carefully.</p>
<p>The following conditions on an $n$ by $n$ integer matrix $A$ are equivalent: </p> <p>(1) $A$ is invertible and of finite order. </p> <p>(2) The minimal polynomial of $A$ is a product of distinct cyclotomic polynomials. </p> <p>(3) The elementary divisors of $A$ are cyclotomic polynomials. </p>
<p>Answer amended in view of Rasmus's comment:</p> <p>I'm not sure how useful it is, but here's a remark. If $A$ has finite order, clearly $\{\|A^{m}\|: m \in \mathbb{N} \}$ is bounded (any matrix norm you care to choose will do). </p> <p>On the other hand, if the semisimple part (in its Jordan decomposition as a complex matrix) of $A$ does not have finite order, at least one of its eigenvalues has absolute value greater than $1$, so $\{ \|A^{m}\| :m \in \mathbb{N} \}$ is unbounded. </p> <p>(I am using the fact that all eigenvalues of $A$ are algebraic integers, and they are closed under algebraic conjugation: it is a very old theorem (and fun to prove) that if all algebraic conjugates of an algebraic integer $\alpha$ are of absolute value $1$, then $\alpha$ is a root of unity).</p> <p>On the other hand, if the semi-simple part of $A$ has finite order, but $A$ itself does not, then (a conjugate of) some power of $A,$ say $A^h$, (as a complex matrix) has a Jordan block of size greater than $1$ associated to the eigenvalue $1$. Then the entries of the powers of $A^h$ become arbitrarily large, and $\{ \| A^{m} \|: m \in \mathbb{N} \}$ is still unbounded.</p>
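The boundedness remark suggests a computational sketch (my own heuristic; the thresholds <code>max_pow</code> and <code>bound</code> are assumptions, though for small $n$ the order of a finite-order integer matrix is small, so a modest cap is safe in practice):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_finite_order(A, max_pow=1000, bound=10**6):
    """Return (True, m) if A^m = I for some m <= max_pow, else (False, None).
    Entries blowing up past `bound` is taken as evidence of infinite order,
    in the spirit of the unbounded-norm argument above."""
    n = len(A)
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    P = A
    for m in range(1, max_pow + 1):
        if P == I:
            return True, m
        if any(abs(e) > bound for row in P for e in row):
            return False, None
        P = mat_mul(P, A)
    return False, None

R = [[0, -1], [1, 0]]   # rotation by 90 degrees: order 4
assert is_finite_order(R) == (True, 4)

S = [[1, 1], [0, 1]]    # shear: a Jordan block at eigenvalue 1, infinite order
assert is_finite_order(S) == (False, None)
```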
matrices
<p>Suppose I have a $n\times n$ matrix $A$. Can I, by using only pre- and post-multiplication by permutation matrices, permute all the elements of $A$? That is, there should be no binding conditions, like $a_{11}$ will always be to the left of $a_{n1}$, etc.</p> <p>This seems to be intuitively obvious. What I think is that I can write the matrix as an $n^2$-dimensional vector, then I can permute all entries by multiplying by a suitable permutation matrix, and then re-form a matrix with the permuted vector.</p>
<p>It is not generally possible to do so.</p> <p>For a concrete example, we know that there can exist no permutation matrices <span class="math-container">$P,Q$</span> such that <span class="math-container">$$ P\pmatrix{1&amp;2\\2&amp;1}Q = \pmatrix{2&amp;1\\2&amp;1} $$</span> If such a <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> existed, then both matrices would necessarily have the same rank.</p>
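The counterexample can also be checked exhaustively; a sketch enumerating every product $PMQ$ (helper names are my own):

```python
from itertools import permutations

def apply_perms(M, row_p, col_p):
    # Permute the rows of M by row_p and the columns by col_p,
    # i.e. form P M Q for the corresponding permutation matrices.
    return tuple(tuple(M[r][c] for c in col_p) for r in row_p)

M = ((1, 2), (2, 1))
target = ((2, 1), (2, 1))

reachable = {apply_perms(M, rp, cp)
             for rp in permutations(range(2))
             for cp in permutations(range(2))}

assert target not in reachable  # no P, Q give the rank-1 target matrix
# Only 2 of the 4!/(2! 2!) = 6 distinct arrangements of the entries are reachable.
assert len(reachable) == 2
```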
<p>Let me add one more argument:</p> <p>For <span class="math-container">$n \ge 2$</span>:</p> <p>Suppose the entries in the <span class="math-container">$n \times n$</span> matrix <span class="math-container">$A$</span> are all distinct. Then there are <span class="math-container">$(n^2)!$</span> distinct permutations of <span class="math-container">$A$</span>.</p> <p>There are <span class="math-container">$n!$</span> row-permutations of <span class="math-container">$A$</span> (generated by premultiplication by various permutation matrices), and <span class="math-container">$n!$</span> col-permutations of <span class="math-container">$A$</span> (generated by post-multiplication by permutation matrices). If we consider <em>all</em> expressions of the form <span class="math-container">$$ RAC $$</span> where <span class="math-container">$R$</span> and <span class="math-container">$C$</span> each range independently over all <span class="math-container">$n!$</span> permutation matrices, we get at most <span class="math-container">$(n!)^2$</span> possible results. But for <span class="math-container">$n &gt; 1$</span>, we have <span class="math-container">\begin{align} (n!)^2 &amp;= [ n \cdot (n-1) \cdots 2 \cdot 1 ] [ n \cdot (n-1) \cdots 2 \cdot 1 ] \\ &amp;&lt; [ 2n \cdot (2n-1) \cdots (n+2) \cdot (n+1) ] [ n \cdot (n-1) \cdots 2 \cdot 1 ] \\ &amp;= (2n)! \\ &amp;\le (n^2)! \end{align}</span> because <span class="math-container">$2n \le n^2$</span> for <span class="math-container">$n \ge 2$</span>, and factorial is an increasing function on the positive integers. So the number of possible results of applying row- and col-permutations to <span class="math-container">$A$</span> is smaller than the number of possible permutations of the elements of <span class="math-container">$A$</span>. 
Hence there's some permutation of <span class="math-container">$A$</span> that does not appear in our list of all <span class="math-container">$RAC$</span> matrices.</p> <p>BTW, just to close this out: for <span class="math-container">$1 \times 1$</span> matrices, the answer is &quot;yes, all permutations can in fact be realized by row and column permutations.&quot; I suspect you knew that. :)</p> <p>PS: Following the comment by @Jack M, I want to make clear why it's OK to consider only things of the form <span class="math-container">$RAC$</span>. Why do we do the column permutations first, and then the rows? (Or vice-versa, if you read things the other way). What if interleaving row and column permutations does something funky? The answer is that if you do a bunch of row ops interleaved with a bunch of column ops, you get the same thing as if you do all the row-ops first, and then all the column ops afterwards (although the row ops have to come in the same order in this rearrangement, and similarly for the column ops). That requires a little proof, but nothing hairy.</p> <p>What if we do more than one row-permutation, say, two of them? Don't we have to look at <span class="math-container">$R_1 R_2 A C$</span> instead? Answer: <span class="math-container">$R_1 R_2$</span> will again be a permutation matrix, so we can really consider this as being <span class="math-container">$(R_1 R_2) A C$</span>, i.e. the matrix I've called <span class="math-container">$R$</span> is the product of any number of permutation matrices. And it will always be a matrix with one <span class="math-container">$1$</span> in each row and each column. So my counting of possible row-permutations is still valid.</p>
number-theory
<blockquote> <p>Let $x$ be a real number and let $n$ be a positive integer. It is known that both $x^n$ and $(x+1)^n$ are rational. Prove that $x$ is rational. </p> </blockquote> <p>What I have tried:</p> <p>Denote $x^n=r$ and $(x+1)^n=s$ with $r$, $s$ rationals. For each $k=0,1,\ldots, n−2$ expand $x^k\cdot(x+1)^n$ and replace $x^n$ by $r$. One thus obtains a linear system with $n−1$ equations with variables $x$, $x^2$, $x^3,\ldots x^{n−1}$. The matrix associated to this system has rational entries, and therefore if the solution is unique it must be rational (via Cramer's rule). This approach works fine if $n$ is small. The difficulty is to figure out what exactly happens in general. </p>
<p>Here is a proof which does not require Galois theory.</p> <p>Write $$f(z)=z^n-x^n\quad\hbox{and}\quad g(z)=(z+1)^n-(x+1)^n\ .$$ It is clear that these polynomials have rational coefficients and that $x$ is a root of each; therefore each is a multiple of the minimal polynomial of $x$, and every algebraic conjugate of $x$ is a root of both $f$ and $g$. However, if $f(z)=g(z)=0$ then we have $$|z|=|x|\quad\hbox{and}\quad |z+1|=|x+1|\ ;$$ this can be written as $$\def\c{\overline} z\c z=x\c x\ ,\quad z\c z+z+\c z+1=x\c x+x+\c x+1$$ which implies that $${\rm Re}(z)={\rm Re}(x)\ ,\quad {\rm Im}(z)=\pm{\rm Im}(x)=0\tag{$*$}$$ and so $z=x$. In other words, $f$ and $g$ have no common root except for $x$; so $x$ has no conjugates except for itself, and $x$ must be rational.</p> <p>As an alternative, the last part of the argument can be seen visually: the roots of $f$ lie on the circle with centre $0$ and radius $|x|$; the roots of $g$ lie on the circle with centre $-1$ and radius $|x+1|$; and from a diagram, these circles intersect only at $x$. Thus, again, $f$ and $g$ have no common root except for $x$.</p> <p><strong>Observe</strong> that the deduction in $(*)$ relies on the fact that $x$ is real: as mentioned in Micah's comment on the original question, the result need not be true if $x$ is not real.</p> <p><strong>Comment</strong>. A virtually identical argument proves the following: if $n$ is a positive integer, if $a$ is a non-zero rational and if $x^n$ and $(x+a)^n$ are both rational, then $x$ is rational.</p>
<p>Let $x$ be a real number such that $x^n$ and $(x+1)^n$ are rational. Without loss of generality, I assume that $x \neq 0$ and $n&gt;1$.</p> <p>Let $F = \mathbb Q(x, \zeta)$ be the subfield of $\mathbb C$ generated by $x$ and a primitive $n$-th root of unity $\zeta$. </p> <p>It is not difficult to see that $F/\mathbb Q$ is Galois; indeed, $F$ is the splitting field of the polynomial $$X^n - x^n \in \mathbb Q[X].$$ The conjugates of $x$ in $F$ are all of the form $\zeta^a x$ for some powers $\zeta^a$ of $\zeta$. Indeed, if $x'$ is another root of $X^n - x^n,$ in $F$, then</p> <p>$$(x/x')^n = x^n/x^n = 1$$</p> <p>so that $x/x'$ is an $n$-th root of unity, which is necessarily of the form $\zeta^a$ for some $a \in \mathbb Z/n\mathbb Z$.</p> <p>Assume now that $x$ is <strong>not</strong> rational. Then, since $F$ is Galois, there exists an automorphism $\sigma$ of $F$ such that $x^\sigma \neq x$. Choose any such automorphism. By the above remark, we can write $x^\sigma = \zeta^a x$, for some $a \not \equiv 0 \pmod n$. </p> <p>Since also $(1+x)^\sigma = 1+ x^\sigma \neq 1 + x$, and since $(1+x)^n$ is also rational, the same argument applied to $1+x$ shows that $(1+x)^\sigma = \zeta^b (1+x)$ for some $b \not \equiv 0 \pmod n$. And since $\sigma$ is a field automorphism it follows that</p> <p>$$\zeta^b (1+x) = (1+x)^\sigma = 1 + x^\sigma = 1 + \zeta^a x$$</p> <p>and therefore</p> <p>$$x(\zeta^b - \zeta^a) = 1 - \zeta^b.$$</p> <p>But $\zeta^b \neq 1$ because $b \not \equiv 0\pmod n$, so we may divide by $1-\zeta^b$ to get</p> <p>$$x^{-1} = \frac{\zeta^b - \zeta^a}{1-\zeta^b}.$$</p> <p>Now, since $x$ is real, this complex number is invariant under complex conjugation, hence</p> <p>$$x^{-1} = \frac{\zeta^b - \zeta^a}{1-\zeta^b} = \frac{\zeta^{-b} - \zeta^{-a}}{1-\zeta^{-b}} = \frac{1 - \zeta^{b-a}}{\zeta^b-1} = \zeta^{-a}\frac{\zeta^a - \zeta^b}{\zeta^b-1} = \zeta^{-a} x^{-1}.$$</p> <p>But this implies that $\zeta^a = 1$, which contradicts that $x^\sigma \neq x$. 
So we are done. $\qquad \blacksquare$</p> <p>The following stronger statement actually follows from the proof:</p> <blockquote> <p><em>For each $n$, there are finitely many non-rational complex numbers $x$ such that $x^n$ and $(x+1)^n$ are rational. These complex numbers belong to the cyclotomic field $\mathbb Q(\zeta_n)$, and none of them are real.</em> </p> </blockquote> <p>Indeed, there are finitely many choices for $a$ and $b$.</p> <p>In <a href="https://math.stackexchange.com/a/1087226/12507">this related answer,</a> Tenuous Puffin proves that there exist only $26$ real numbers having this property, allowing for any value of $n$.</p>
logic
<p>I remember hearing several times the advice that, we should avoid using a proof by contradiction, if it is simple to convert to a direct proof or a proof by contrapositive. Could you explain the reason? Do logicians think that proofs by contradiction are somewhat weaker than direct proofs?</p> <p>Is there any reason that one would still continue looking for a direct proof of some theorem, although a proof by contradiction has already been found? I don't mean improvements in terms of elegance or exposition, I am asking about logical reasons. For example, in the case of the "axiom of choice", there is obviously reason to look for a proof that does not use the axiom of choice. Is there a similar case for proofs by contradiction?</p>
<p>To <a href="https://mathoverflow.net/q/12342">this MathOverflow question</a>, I posted the following <a href="https://mathoverflow.net/a/12400">answer</a> (and there are several other interesting answers there):</p> <ul> <li><em>With good reason</em>, we mathematicians prefer a direct proof of an implication over a proof by contradiction, when such a proof is available. (all else being equal)</li> </ul> <p>What is the reason? The reason is the <em>fecundity</em> of the proof, meaning our ability to use the proof to make further mathematical conclusions. When we prove an implication (p implies q) directly, we assume p, and then make some intermediary conclusions r<sub>1</sub>, r<sub>2</sub>, before finally deducing q. Thus, our proof not only establishes that p implies q, but also, that p implies r<sub>1</sub> and r<sub>2</sub> and so on. Our proof has provided us with additional knowledge about the context of p, about what else must hold in any mathematical world where p holds. So we come to a fuller understanding of what is going on in the p worlds.</p> <p>Similarly, when we prove the contrapositive (&not;q implies &not;p) directly, we assume &not;q, make intermediary conclusions r<sub>1</sub>, r<sub>2</sub>, and then finally conclude &not;p. Thus, we have also established not only that &not;q implies &not;p, but also, that it implies r<sub>1</sub> and r<sub>2</sub> and so on. Thus, the proof tells us about what else must be true in worlds where q fails. Equivalently, since these additional implications can be stated as (&not;r<sub>1</sub> implies q), we learn about many different hypotheses that all imply q. </p> <p>These kinds of conclusions can increase the value of the proof, since we learn not only that (p implies q), but also we learn an entire context about what it is like in a mathematical situation where p holds (or where q fails, or about diverse situations leading to q). 
</p> <p>With reductio, in contrast, a proof of (p implies q) by contradiction seems to carry little of this extra value. We assume p and &not;q, and argue r<sub>1</sub>, r<sub>2</sub>, and so on, before arriving at a contradiction. The statements r<sub>1</sub> and r<sub>2</sub> are all deduced under the contradictory hypothesis that p and &not;q, which ultimately does not hold in any mathematical situation. The proof has provided extra knowledge about a nonexistent, contradictory land. (Useless!) So these intermediary statements do not seem to provide us with any greater knowledge about the p worlds or the q worlds, beyond the brute statement that (p implies q) alone.</p> <p>I believe that this is the reason that sometimes, when a mathematician completes a proof by contradiction, things can still seem unsettled beyond the brute implication, with less context and knowledge about what is going on than would be the case with a direct proof.</p> <p>For an example of a proof where we are led to false expectations in a proof by contradiction, consider Euclid's theorem that there are infinitely many primes. In a common proof by contradiction, one assumes that p<sub>1</sub>, ..., p<sub>n</sub> are <em>all</em> the primes. It follows that since none of them divide the product-plus-one p<sub>1</sub>...p<sub>n</sub>+1, that this product-plus-one is also prime. This contradicts that the list was exhaustive. Now, many beginners falsely expect after this argument that whenever p<sub>1</sub>, ..., p<sub>n</sub> are prime, then the product-plus-one is also prime. But of course, this isn't true, and this would be a misplaced instance of attempting to extract greater information from the proof, misplaced because this is a proof by contradiction, and that conclusion relied on the assumption that p<sub>1</sub>, ..., p<sub>n</sub> were <em>all</em> the primes. 
If one organizes the proof, however, as a direct argument showing that whenever p<sub>1</sub>, ..., p<sub>n</sub> are prime, then there is yet another prime not on the list, then one is led to the true conclusion, that p<sub>1</sub>...p<sub>n</sub>+1 has merely a prime divisor not on the original list. (And Michael Hardy mentions that indeed Euclid had made the direct argument.)</p>
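The direct form of Euclid's argument is easy to run mechanically: given any finite list of primes, any prime divisor of the product-plus-one is a new prime, even when the product-plus-one itself is composite. A sketch:

```python
def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def new_prime(primes):
    """Return a prime not in `primes`: any prime divisor of
    (product of the list) + 1 leaves remainder 1 when divided
    by each listed prime, so it cannot be on the list."""
    product = 1
    for p in primes:
        product *= p
    return smallest_prime_factor(product + 1)

assert new_prime([2, 3, 5, 7]) == 211          # 2*3*5*7 + 1 = 211 is itself prime
assert new_prime([2, 3, 5, 7, 11, 13]) == 59   # 30031 = 59 * 509 is composite
```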
<p>Most logicians consider proofs by contradiction to be equally valid; however, some people are <a href="http://en.wikipedia.org/wiki/Constructivism_%28mathematics%29">constructivists/intuitionists</a> and don't consider them valid. </p> <p>(<strong>Edit:</strong> This is not strictly true, as explained in comments. Only certain proofs by contradiction are problematic from the constructivist point of view, namely those that prove "A" by assuming "not A" and getting a contradiction. In my experience, this is usually exactly the situation that people have in mind when saying "proof by contradiction.")</p> <p>One possible reason that the constructivist point of view makes a certain amount of sense is that statements like the continuum hypothesis are independent of the axioms, so it's a bit weird to claim that it's either true or false; in a certain sense it's neither.</p> <p>Nonetheless, constructivism is a relatively uncommon position among mathematicians/logicians. However, it's not considered totally nutty or beyond the pale. Fortunately, in practice most proofs by contradiction can be translated into constructivist terms and actual constructivists are rather adept at doing so. So the rest of us mostly don't bother worrying about this issue, figuring it's the constructivists' problem.</p>
matrices
<p>We are allowed to use a calculator in our linear algebra exam. Luckily, my calculator can also do matrix calculations.</p> <p>Let's say there is a task like this:</p> <blockquote> <p>Calculate the rank of this matrix:</p> <p>$$M =\begin{pmatrix} 5 &amp; 6 &amp; 7\\ 12 &amp;4 &amp;9 \\ 1 &amp; 7 &amp; 4 \end{pmatrix}$$</p> </blockquote> <p>The problem with this matrix is that we cannot use the trick with multiples: no multiples are visible at first glance, so we cannot immediately say whether the row/column vectors are linearly dependent or independent. Gaussian elimination is also very time consuming (especially when we don't get a zero row and keep trying harder).</p> <p>Enough said, I took my calculator because we are allowed to use it, and it gives me the following result:</p> <p>$$M =\begin{pmatrix} 1 &amp; 0{,}3333 &amp; 0{,}75\\ 0 &amp;1 &amp;0{,}75 \\ 0 &amp; 0 &amp; 1 \end{pmatrix}$$</p> <p>I quickly see that $\operatorname{rank}(M) = 3$ since there is no row full of zeroes.</p> <p>Now my question is, how can I convince the teacher that I calculated it? If the task says "calculate" and I just write down the result, I don't think I will get all the points. What would you do?</p> <p>And please give me some advice, this is really time consuming in an exam.</p>
<p>There is a very nice trick for showing that such a matrix has full rank; it can be performed in a few seconds, without any calculator or any worry about "moral bending". The entries of $M$ are integers, so the determinant of $M$ is an integer, and $\det M \bmod 2 = \det(M \bmod 2)$. Since $M \pmod{2}$ has the following structure $$ \begin{pmatrix} 1 &amp; 0 &amp; 1 \\ 0 &amp; 0 &amp; 1 \\ 1 &amp; 1 &amp; 0\end{pmatrix} $$ it is trivial that $\det M$ is an odd integer. In particular, $\det M\neq 0$ and $\text{rank}(M)=3$.</p>
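<p>The parity argument is easy to check with exact integer arithmetic; a quick sketch (the helper <code>det3</code> is mine, not part of the answer):</p>

```python
def det3(m):
    # Cofactor expansion along the first row of a 3x3 integer matrix.
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

M = [[5, 6, 7], [12, 4, 9], [1, 7, 4]]
M_mod2 = [[x % 2 for x in row] for row in M]

print(det3(M))                        # 91, an odd integer
print(det3(M_mod2) % 2, det3(M) % 2)  # 1 1 -- the parities agree
```

<p>Since $\det M = 91$ is odd, it is nonzero and the rank is $3$, exactly as the mod-2 shortcut predicts.</p>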
<p>You're allowed to use your calculator. So, if it were me on the test, I'd write something like this:</p> <blockquote> <p>$$ \pmatrix{5&amp;6&amp;7\\12&amp;4&amp;9\\1&amp;7&amp;4} \overset{REF}{\to} \pmatrix{ 1 &amp; 0,3333 &amp; 0,75\\ 0 &amp;1 &amp;0,75 \\ 0 &amp; 0 &amp; 1 } $$ because the reduced form of $M$ has no zero rows, $M$ has rank $3$.</p> </blockquote> <p>REF here stands for row-echelon form.</p> <hr> <p><strong>Note:</strong> You should check with your professor whether or not this constitutes a sufficient answer. It may be the case that your professor wants any <em>matrix-calculations</em> to be done by hand. See Robert Israel's comment.</p> <p>If that's the case, then you should do row-reduction by hand. It only takes 3 row operations, though.</p>
logic
<p>In mathematics the existence of a mathematical object is often proved by contradiction without showing how to construct the object.</p> <p>Does the existence of the object imply that it is at least possible to construct the object?</p> <p>Or are there mathematical objects that do exist but are impossible to construct? </p>
<p>Really the answer to this question will come down to the way we define the terms "existence" (and "construct"). Going philosophical for a moment, one may argue that constructibility is a priori required for existence; this, broadly speaking, is part of the impetus for <strong>intuitionism</strong> and <strong>constructivism</strong>, and related to the impetus for <strong>(ultra)finitism</strong>.$^1$ Incidentally, at least to some degree we can produce formal systems which capture this point of view <em>(although the philosophical stance should really be understood as</em> preceding <em>the formal systems which try to reflect them; I believe this was a point Brouwer and others made strenuously in the early history of intuitionism)</em>.</p> <p>A less philosophical take would be to interpret "existence" as simply "provable existence relative to some fixed theory" (say, ZFC, or ZFC + large cardinals). In this case it's clear what "exists" means, and the remaining weasel word is "construct." <strong>Computability theory</strong> can give us some results which may be relevant, depending on how we interpret this word: there are lots of objects we can define in a complicated way but which provably have no "concrete" definitions:</p> <ul> <li><p>The halting problem is not computable.</p></li> <li><p>Kleene's $\mathcal{O}$ - or, the set of indices for computable well-orderings - is not hyperarithmetic.</p></li> <li><p>A much deeper example: while we know that for all Turing degrees ${\bf a}$ there is a degree strictly between ${\bf a}$ and ${\bf a'}$ which is c.e. in $\bf a$, we can also show that there is no "uniform" way to produce such a degree in a precise sense.</p></li> </ul> <p>Going further up the ladder, ideas from <strong>inner model theory</strong> and <strong>descriptive set theory</strong> become relevant. 
For example:</p> <ul> <li><p>We can show in ZFC that there is a (Hamel) basis for $\mathbb{R}$ as a vector space over $\mathbb{Q}$; however, we can also show that no such basis is "nicely definable," in various precise senses (and we get stronger results along these lines as we add large cardinal axioms to ZFC). For example, no such basis can be Borel. </p></li> <li><p>Other examples of the same flavor: a nontrivial ultrafilter on $\mathbb{N}$; a well-ordering of $\mathbb{R}$; a Vitali (or Bernstein or Luzin) set, or indeed any non-measurable set (or set without the property of Baire, or without the perfect set property); ...</p></li> <li><p>On the other side of the aisle, the theory ZFC + a measurable cardinal proves that there is a set of natural numbers which is not "constructible" in a <a href="https://en.wikipedia.org/wiki/Constructible_universe" rel="noreferrer">precise set-theoretic sense</a> (basically, can be built just from "definable transfinite recursion" starting with the emptyset). Now the connection between $L$-flavored constructibility and the informal notion of a mathematical construction is tenuous at best, but this does in my opinion say that a measurable cardinal yields a hard-to-construct set of naturals in a precise sense.</p></li> </ul> <hr> <p>$^1$I don't actually hold these stances <a href="https://math.stackexchange.com/a/2757816/28111">except very rarely</a>, and so I'm really not the best person to comment on their motivations; please take this sentence with a grain of salt.</p>
<p>There exists a Hamel basis for the vector space $\Bbb{R}$ over $\Bbb{Q}$, but nobody has seen one so far. It is the axiom of choice which ensures existence.</p>
probability
<p>My understanding right now is that an example of conditional independence would be:</p> <p>If two people live in the same city, the probability that person <strong>A</strong> gets home in time for dinner, and the probability that person <strong>B</strong> gets home in time for dinner are independent; that is, we wouldn't expect one to have an effect on the other. But if a snow storm hits the city and introduces a probability <strong>C</strong> that traffic will be at a stand still, you would expect that the probability of both <strong>A</strong> getting home in time for dinner and <strong>B</strong> getting home in time for dinner, would change.</p> <p>If this is a correct understanding, I guess I still don't understand what exactly conditional independence <em>is</em>, or what it does for us (why does it have a separate name, as opposed to just compounded probabilities), and if this isn't a correct understanding, could someone please provide an example with an explanation?</p>
<p>The scenario you describe provides a good example for conditional independence, though you haven't quite described it as such. As <a href="http://en.wikipedia.org/wiki/Conditional_independence" rel="noreferrer">the Wikipedia article</a> puts it, </p> <blockquote> <p>$R$ and $B$ are conditionally independent [given $Y$] if and only if, given knowledge of whether $Y$ occurs, knowledge of whether $R$ occurs provides no information on the likelihood of $B$ occurring, and knowledge of whether $B$ occurs provides no information on the likelihood of $R$ occurring.</p> </blockquote> <p>In this case, $R$ and $B$ are the events of persons <strong>A</strong> and <strong>B</strong> getting home in time for dinner, and $Y$ is the event of a snow storm hitting the city. Certainly the probabilities of $R$ and $B$ will depend on whether $Y$ occurs. However, just as it's plausible to assume that if these two people have nothing to do with each other their probabilities of getting home in time are independent, it's also plausible to assume that, while they will both have a lower probability of getting home in time if a snow storm hits, these lower probabilities will nevertheless still be independent of each other. That is, if you already know that a snow storm is raging and I tell you that person <strong>A</strong> is getting home late, that gives you no new information about whether person <strong>B</strong> is getting home late. You're getting information on that from the fact that there's a snow storm, but given that fact, the fact that <strong>A</strong> is getting home late doesn't make it more or less likely that <strong>B</strong> is getting home late, too. So conditional independence is the same as normal independence, but restricted to the case where you know that a certain condition is or isn't fulfilled. 
Not only can you not find out about <strong>A</strong> by finding out about <strong>B</strong> in general (normal independence), but you also can't do so under the condition that there's a snow storm (conditional independence).</p> <p>An example of events that are independent but not conditionally independent would be: You randomly sample two people <strong>A</strong> and <strong>B</strong> from a large population and consider the probabilities that they will get home in time. Without any further knowledge, you might plausibly assume that these probabilities are independent. Now you introduce event $Y$, which occurs if the two people live in the same neighbourhood (however that might be defined). If you know that $Y$ occurred and I tell you that <strong>A</strong> is getting home late, then that would tend to increase the probability that <strong>B</strong> is also getting home late, since they live in the same neighbourhood and any traffic-related causes of <strong>A</strong> getting home late might also delay <strong>B</strong>. So in this case the probabilities of <strong>A</strong> and <strong>B</strong> getting home in time are not conditionally independent given $Y$, since once you know that $Y$ occurred, you are able to gain information about the probability of <strong>B</strong> getting home in time by finding out whether <strong>A</strong> is getting home in time.</p> <p>Strictly speaking, this scenario only works if there's always the same amount of traffic delay in the city overall and it just moves to different neighbourhoods. If that's not the case, then it wouldn't be correct to assume independence between the two probabilities, since the fact that one of the two is getting home late would already make it somewhat likelier that there's heavy traffic in the city in general, even without knowing that they live in the same neighbourhood.</p> <p>To give a precise example: Say you roll a blue die and a red die. 
The two results are independent of each other. Now you tell me that the blue result isn't a $6$ and the red result isn't a $1$. You've given me new information, but that hasn't affected the independence of the results. By taking a look at the blue die, I can't gain any knowledge about the red die; after I look at the blue die I will still have a probability of $1/5$ for each number on the red die except $1$. So the probabilities for the results are conditionally independent given the information you've given me. But if instead you tell me that the sum of the two results is even, this allows me to learn a lot about the red die by looking at the blue die. For instance, if I see a $3$ on the blue die, the red die can only be $1$, $3$ or $5$. So in this case the probabilities for the results are not conditionally independent given this other information that you've given me. This also underscores that conditional independence is always relative to the given condition -- in this case, the results of the dice rolls are conditionally independent with respect to the event "the blue result is not $6$ and the red result is not $1$", but they're not conditionally independent with respect to the event "the sum of the results is even".</p>
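<p>The dice calculations above can be confirmed by exact enumeration of the 36 equally likely outcomes; a small sketch (the helper <code>prob</code> is mine, not from the answer):</p>

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # (blue, red), all 36 equally likely

def prob(event, given):
    # P(event | given) by exact enumeration over the 36 outcomes.
    cond = [o for o in outcomes if given(o)]
    return Fraction(sum(1 for o in cond if event(o)), len(cond))

# Condition 1: "blue is not 6 and red is not 1" -- the results stay independent.
c1 = lambda o: o[0] != 6 and o[1] != 1
p_red2 = prob(lambda o: o[1] == 2, c1)                                    # 1/5
p_red2_blue3 = prob(lambda o: o[1] == 2, lambda o: c1(o) and o[0] == 3)   # still 1/5

# Condition 2: "the sum is even" -- seeing the blue die now tells us about the red.
c2 = lambda o: (o[0] + o[1]) % 2 == 0
q_red1 = prob(lambda o: o[1] == 1, c2)                                    # 1/6
q_red1_blue3 = prob(lambda o: o[1] == 1, lambda o: c2(o) and o[0] == 3)   # 1/3

print(p_red2, p_red2_blue3)  # equal: conditionally independent
print(q_red1, q_red1_blue3)  # different: not conditionally independent
```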
<p>The example you've given (the snowstorm) is usually given as a case where you might <em>think</em> two events might be truly independent (since they take totally different routes home), i.e.</p> <p>$p(A|B)=p(A)$.</p> <p>However in this case they are not truly independent, they are "only" conditionally independent given the snowstorm i.e.</p> <p>$p(A|B,Z) = p(A|Z)$.</p> <p>A clearer example paraphrased from <a href="https://www.eecs.qmul.ac.uk/~norman/BBNs/Independence_and_conditional_independence.htm" rel="noreferrer">Norman Fenton's website</a>: if Alice (A) and Bob (B) both flip the same coin, but that coin might be biased, <em>we cannot say</em></p> <p>$p(A=H|B=H) = p(A=H)$</p> <p>(i.e. that they are independent) because if we see Bob flips heads, it is more likely to be biased towards heads, and hence the left probability should be higher. However if we denote Z as the event "the coin is biased towards heads", then</p> <p>$p(A=H|B=H,Z)=p(A=H|Z)$</p> <p>we can remove Bob from the equation because we know the coin is biased. Given the fact that the coin is biased, the two flips are conditionally independent.</p> <p>This is the common form of conditional independence, you have events that are not statistically independent, but they are conditionally independent.</p> <p>It is possible for something to be statistically independent and not conditionally independent. To borrow from <a href="https://en.wikipedia.org/wiki/Conditional_independence" rel="noreferrer">Wikipedia</a>: if $A$ and $B$ both take the value $0$ or $1$ with $0.5$ probability, and $C$ denotes the product of the values of $A$ and $B$ ($C=A\times B$), then $A$ and $B$ are independent:</p> <p>$p(A=0|B=0) = p(A=0) = 0.5$</p> <p>but they are not conditionally independent given $C$:</p> <p>$p(A=0|B=0,C=0) = 0.5 \neq \frac{2}{3} = p(A=0|C=0)$</p>
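<p>These numbers can be checked by enumerating the four equally likely $(A,B)$ pairs (a quick sketch; the helper <code>prob</code> is my own):</p>

```python
from fractions import Fraction
from itertools import product

# A and B are independent fair bits; C = A * B as in the example above.
outcomes = [(a, b, a * b) for a, b in product((0, 1), repeat=2)]  # each w.p. 1/4

def prob(event, given=lambda o: True):
    # P(event | given) by exact enumeration.
    cond = [o for o in outcomes if given(o)]
    return Fraction(sum(1 for o in cond if event(o)), len(cond))

p_a0 = prob(lambda o: o[0] == 0)                                           # 1/2
p_a0_b0 = prob(lambda o: o[0] == 0, lambda o: o[1] == 0)                   # 1/2
p_a0_c0 = prob(lambda o: o[0] == 0, lambda o: o[2] == 0)                   # 2/3
p_a0_b0c0 = prob(lambda o: o[0] == 0, lambda o: o[1] == 0 and o[2] == 0)   # 1/2

print(p_a0, p_a0_b0)       # equal: A and B are independent
print(p_a0_b0c0, p_a0_c0)  # 1/2 vs 2/3: not conditionally independent given C
```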
matrices
<p>I am looking for an intuitive explanation as to why/how row rank of a matrix = column rank. I've read the <a href="https://en.wikipedia.org/wiki/Rank_(linear_algebra)#Proofs_that_column_rank_=_row_rank" rel="noreferrer">proof on Wikipedia</a> and I understand the proof, but I don't &quot;get it&quot;. Can someone help me out with this ?</p> <p>I find it hard to wrap my head around the idea of how the column space and the row space is related at a fundamental level.</p>
<p>You can apply elementary row operations and elementary column operations to bring a matrix <span class="math-container">$A$</span> to a matrix that is in <strong>both</strong> row reduced echelon form and column reduced echelon form. In other words, there exist invertible matrices <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> (which are products of elementary matrices) such that <span class="math-container">$$PAQ=E:=\begin{pmatrix}I_k\\&amp;0_{(n-k)\times(n-k)}\end{pmatrix}.$$</span> As <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> are invertible, the maximum number of linearly independent rows in <span class="math-container">$A$</span> is equal to the maximum number of linearly independent rows in <span class="math-container">$E$</span>. That is, the row rank of <span class="math-container">$A$</span> is equal to the row rank of <span class="math-container">$E$</span>. Similarly for the column ranks. Now it is evident that the row rank and column rank of <span class="math-container">$E$</span> are identical (to <span class="math-container">$k$</span>). Hence the same holds for <span class="math-container">$A$</span>.</p>
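<p>As a numerical sanity check (not a proof; NumPy computes both ranks from the same singular values), one can compare the ranks of random low-rank matrices and of their transposes:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
for m, n in [(3, 5), (7, 4), (6, 6)]:
    # A product of an m x 2 and a 2 x n matrix has rank at most 2 by construction.
    A = rng.integers(-3, 4, (m, 2)) @ rng.integers(-3, 4, (2, n))
    row_rank = np.linalg.matrix_rank(A.T)  # rank seen from the rows of A
    col_rank = np.linalg.matrix_rank(A)    # rank seen from the columns of A
    print(m, n, row_rank, col_rank)
    assert row_rank == col_rank <= 2
```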
<p>This post is quite old, so my answer might come a bit late. If you are looking for an intuition (you want to "get it") rather than a demonstration (of which there are several), then here is my 5c.</p> <p>If you think of a matrix A in the context of solving a system of simultaneous equations, then the row-rank of the matrix is the number of independent equations, and the column-rank of the matrix is the number of independent parameters that you can estimate from the equation. That I think makes it a bit easier to see why they should be equal.</p> <p>Saad.</p>
linear-algebra
<blockquote> <p>Let <span class="math-container">$T : V\to V$</span> be a linear transformation such that <span class="math-container">$\dim\operatorname{Range}(T)=k\leq n$</span>, where <span class="math-container">$n=\dim V$</span>. Show that <span class="math-container">$T$</span> can have at most <span class="math-container">$k+1$</span> distinct eigenvalues.</p> </blockquote> <p>I can realize that the rank will correspond to the number of non-zero eigenvalues (counted up to multiplicity) and the nullity will correspond to the 0 eigenvalue (counted up to multiplicity), but I cannot design an analytical proof of this.</p> <p>Thanks for any help . </p>
<p>Since the nullity of $T$ is $n-k$, that means that the geometric multiplicity of $\lambda=0$ as an eigenvalue of $T$ is $n-k$; hence, the algebraic multiplicity must be at least $n-k$, which means that the characteristic polynomial of $T$ is of the form $x^{N}g(x)$, where $N$ is the algebraic multiplicity of $0$, hence $N\geq n-k$ (so $n-N\leq k$), and $\deg(g) =n-N$. Thus, $g$ has at most $n-N$ distinct roots, none of which are equal to $0$, and that means that the characteristic polynomial of $T$ has exactly: $$1 + \text{# distinct roots of }g \leq 1 + n-N \leq 1 + k$$ distinct eigenvalues.</p> <p>Note that in fact we can say a bit more: $T$ has at most $\min\{k+1,n\}$ distinct eigenvalues (the two bounds differ only when $k=n$, i.e. when the rank is $n$, in which case $0$ need not be an eigenvalue).</p>
<p>Here is an outline of one way to solve the problem.</p> <ul> <li>Eigenvectors for distinct eigenvalues are linearly independent. </li> <li>Eigenvectors for nonzero eigenvalues are in the range of $T$. </li> <li>The range of $T$ cannot contain a linearly independent set with more than $k$ vectors.</li> </ul>
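<p>A tiny numerical illustration of the bound (the example matrix is my own; it is diagonal so that the eigenvalues are exact):</p>

```python
import numpy as np

# Rank-3 operator on a 5-dimensional space, with nonzero eigenvalues 2, 5, 7.
T = np.diag([2.0, 5.0, 7.0, 0.0, 0.0])
k = np.linalg.matrix_rank(T)               # 3
distinct = len(set(np.linalg.eigvals(T)))  # {0.0, 2.0, 5.0, 7.0} -> 4
print(k, distinct)
assert distinct <= k + 1                   # the bound k + 1 is attained here
```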
matrices
<p>It is well known that for invertible matrices $A,B$ of the same size we have $$(AB)^{-1}=B^{-1}A^{-1} $$ and a nice way for me to remember this is the following sentence:</p> <blockquote> <p>The opposite of putting on socks and shoes is taking the shoes off, followed by taking the socks off.</p> </blockquote> <p>Now, a similar law holds for the transpose, namely:</p> <p>$$(AB)^T=B^TA^T $$</p> <p>for matrices $A,B$ such that the product $AB$ is defined. My question is: is there any intuitive reason as to why the order of the factors is reversed in this case? </p> <p>[Note that I'm aware of several proofs of this equality, and a proof is not what I'm after]</p> <p>Thank you!</p>
<p>One of my best college math professor always said:</p> <blockquote> <p>Make a drawing first.</p> </blockquote> <p><img src="https://i.sstatic.net/uGxff.gif" alt="enter image description here"></p> <p>Although, he couldn't have made this one on the blackboard.</p>
<p>By dualizing $AB: V_1\stackrel{B}{\longrightarrow} V_2\stackrel{A}{\longrightarrow}V_3$, we have $(AB)^T: V_3^*\stackrel{A^T}{\longrightarrow}V_2^*\stackrel{B^T}{\longrightarrow}V_1^*$. </p> <p>Edit: $V^*$ is the dual space $\text{Hom}(V, \mathbb{F})$, the vector space of linear transformations from $V$ to its ground field, and if $A: V_1\to V_2$ is a linear transformation, then $A^T: V_2^*\to V_1^*$ is its dual defined by $A^T(f)=f\circ A$. By abuse of notation, if $A$ is the matrix representation with respect to bases $\mathcal{B}_1$ of $V_1$ and $\mathcal{B}_2$ of $V_2$, then $A^T$ is the matrix representation of the dual map with respect to the dual bases $\mathcal{B}_1^*$ and $\mathcal{B}_2^*$.</p>
probability
<p>Let's define a sequence of numbers between 0 and 1. The first term, $r_1$, will be chosen <strong>uniformly randomly</strong> from $(0, 1)$, but now we iterate this process choosing $r_2$ from $(0, r_1)$, and so on, so $r_3\in(0, r_2)$, $r_4\in(0, r_3)$... The set of all possible sequences generated this way contains the sequence of the reciprocals of all natural numbers, whose sum diverges; but it also contains all geometric sequences in which all terms are less than 1, and those all have convergent sums. The question is: does $\sum_{n=1}^{\infty} r_n$ converge in general? (I think this is called <em>almost sure convergence</em>?) If so, what is the distribution of the limits of all convergent series from this family?</p>
<p>Let $(u_i)$ be a sequence of i.i.d. uniform(0,1) random variables. Then the sum you are interested in can be expressed as $$S_n=u_1+u_1u_2+u_1u_2u_3+\cdots +u_1u_2u_3\cdots u_n.$$ The sequence $(S_n)$ is non-decreasing and certainly converges, possibly to $+\infty$.</p> <p>On the other hand, taking expectations gives $$E(S_n)={1\over 2}+{1\over 2^2}+{1\over 2^3}+\cdots +{1\over 2^n},$$ so $\lim_n E(S_n)=1.$ Now by Fatou's lemma, $$E(S_\infty)\leq \liminf_n E(S_n)=1,$$ so that $S_\infty$ has finite expectation and so is finite almost surely.</p>
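<p>A quick Monte Carlo check of this answer (the truncation depth of 50 terms is my own choice; the neglected tail has expectation at most $2^{-50}$):</p>

```python
import numpy as np

rng = np.random.default_rng(42)
samples, depth = 50_000, 50           # products of 50 uniforms are ~2^-50 on average
u = rng.random((samples, depth))
S = np.cumprod(u, axis=1).sum(axis=1)  # u1 + u1*u2 + ... , truncated at `depth` terms

print(S.mean())  # close to 1, matching lim E(S_n) = 1
```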
<blockquote> <p>The probability $f(x)$ that the result is $\in(x,x+dx)$ is given by $$f(x) = \exp(-\gamma)\rho(x)$$ where $\rho$ is the <a href="https://en.wikipedia.org/wiki/Dickman_function" rel="noreferrer">Dickman function</a> as @Hurkyl <a href="https://math.stackexchange.com/questions/2130264/sum-of-random-decreasing-numbers-between-0-and-1-does-it-converge#comment4383202_2130701">pointed out below</a>. This follows from the delay differential equation for $f$, $$f^\prime(x) = -\frac{f(x-1)}{x}$$ with the conditions $$f(x) = f(1) \;\text{for}\; 0\le x \le1, \;\text{and}$$ $$\int\limits_0^\infty f(x)\,dx = 1.$$ A derivation follows.</p> </blockquote> <p>From the other answers, it looks like the probability is flat for results less than 1. Let us prove this first.</p> <p>Define $P(x,y)$ to be the probability that the final result lies in $(x,x+dx)$ if the first random number is chosen from the range $[0,y]$. What we want to find is $f(x) = P(x,1)$.</p> <p>Note that if the random range is changed to $[0,ay]$ the probability distribution gets stretched horizontally by $a$ (which means it has to compress vertically by $a$ as well). Hence $$P(x,y) = aP(ax,ay).$$</p> <p>We will use this to find $f(x)$ for $x&lt;1$.</p> <p>Note that if the first number chosen is greater than $x$ we can never get a sum less than or equal to $x$. Hence $f(x)$ is equal to the probability that the first number chosen is less than or equal to $x$ multiplied by the probability for the random range $[0,x]$. That is, $$f(x) = P(x,1) = p(r_1&lt;x)P(x,x)$$</p> <p>But $p(r_1&lt;x)$ is just $x$ and $P(x,x) = \frac{1}{x}P(1,1)$ as found above. 
Hence $$f(x) = f(1).$$</p> <p>The probability that the result is $x$ is constant for $x&lt;1$.</p> <p>Using this, we can now iteratively build up the probabilities for $x&gt;1$ in terms of $f(1)$.</p> <p>First, note that when $x&gt;1$ we have $$f(x) = P(x,1) = \int\limits_0^1 P(x-z,z) dz$$ We apply the compression again to obtain $$f(x) = \int\limits_0^1 \frac{1}{z} f(\frac{x}{z}-1) dz$$ Setting $\frac{x}{z}-1=t$, we get $$f(x) = \int\limits_{x-1}^\infty \frac{f(t)}{t+1} dt$$ This gives us the differential equation $$\frac{df(x)}{dx} = -\frac{f(x-1)}{x}$$ Since we know that $f(x)$ is a constant for $x&lt;1$, this is enough to solve the differential equation numerically for $x&gt;1$, modulo the constant (which can be retrieved by integration in the end). Unfortunately, the solution is essentially piecewise from $n$ to $n+1$ and it is impossible to find a single function that works everywhere.</p> <p>For example when $x\in[1,2]$, $$f(x) = f(1) \left[1-\log(x)\right]$$</p> <p>But the expression gets really ugly even for $x \in[2,3]$, requiring the logarithmic integral function $\rm{Li}$.</p> <p>Finally, as a sanity check, let us compare the random simulation results with $f(x)$ found using numerical integration. The probabilities have been normalised so that $f(0) = 1$.</p> <p><a href="https://i.sstatic.net/C86kr.png" rel="noreferrer"><img src="https://i.sstatic.net/C86kr.png" alt="Comparison of simulation with numerical integral and exact formula for $x\in[1,2]$"></a></p> <p>The match is near perfect. 
In particular, note how the analytical formula matches the numerical one exactly in the range $[1,2]$.</p> <p>Though we don't have a general analytic expression for $f(x)$, the differential equation can be used to show that the expectation value of $x$ is 1.</p> <p>Finally, note that the delay differential equation above is the same as that of the <a href="https://en.wikipedia.org/wiki/Dickman_function" rel="noreferrer">Dickman function</a> $\rho(x)$ and hence $f(x) = c \rho(x)$. Its properties have been studied. <a href="https://www.encyclopediaofmath.org/index.php/Dickman_function" rel="noreferrer">For example</a> the Laplace transform of the Dickman function is given by $$\mathcal L \rho(s) = \exp\left[\gamma-\rm{Ein}(s)\right].$$ This gives $$\int_0^\infty \rho(x) dx = \exp(\gamma).$$ Since we want $\int_0^\infty f(x) dx = 1,$ we obtain $$f(1) = \exp(-\gamma) \rho(1) = \exp(-\gamma) \approx 0.56145\ldots$$ That is, $$f(x) = \exp(-\gamma) \rho(x).$$ This completes the description of $f$.</p>
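<p>Since $f(x) = \exp(-\gamma)\rho(x)$ and $\rho \equiv 1$ on $[0,1]$, we should have $P(S \le 1) = \exp(-\gamma) \approx 0.5615$; a Monte Carlo sanity check (the truncation depth is my own choice):</p>

```python
import numpy as np

rng = np.random.default_rng(7)
samples, depth = 50_000, 50
u = rng.random((samples, depth))
S = np.cumprod(u, axis=1).sum(axis=1)  # truncated u1 + u1*u2 + u1*u2*u3 + ...

# The density is the constant exp(-gamma) on (0,1), so P(S <= 1) = exp(-gamma).
p_hat = (S <= 1.0).mean()
print(p_hat, np.exp(-np.euler_gamma))  # both about 0.5615
```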
differentiation
<p>Plotting the function <span class="math-container">$f(x)=x^{1/3}$</span> defined for any real number <span class="math-container">$x$</span> gives us: <a href="https://i.sstatic.net/zdDPO.png" rel="noreferrer"><img src="https://i.sstatic.net/zdDPO.png" alt="plot of function"></a></p> <p>Since <span class="math-container">$f$</span> is a function, any given <span class="math-container">$x$</span> value maps to a single <span class="math-container">$y$</span> value (and not more than one <span class="math-container">$y$</span> value, because that would mean it's not a function, as it would fail the vertical line test). This function also has a vertical tangent at <span class="math-container">$x=0$</span>. </p> <p>My question is: how can we have a function that also has a vertical tangent? To get a vertical tangent we need 2 vertically aligned points, which means that we are not working with a "proper" function, as it has multiple <span class="math-container">$y$</span> values mapping to a single <span class="math-container">$x$</span>. How is it possible for a "proper" function to have a vertical tangent?</p> <p>As I understand it, in the graph I pasted we cannot take the derivative at <span class="math-container">$x=0$</span> because the slope is vertical, hence we cannot see the instantaneous rate of change of <span class="math-container">$x$</span> to <span class="math-container">$y$</span>, as the <span class="math-container">$y$</span> value is not a single value (or is many values, whichever way you want to look at it). How is it possible to have a perfectly vertical slope on a function? In this case I can imagine a very steep curve at 0... but vertical?!? I can't wrap my mind around it. How can we get a vertical slope on a non-vertical function?</p>
<p>The tangent line is simply an ideal picture of what you would expect to see if you zoom in around the point.</p> <p><span class="math-container">$\hspace{8em}$</span> <a href="https://i.sstatic.net/ym3bk.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ym3bk.gif" alt="Tangent line" /></a></p> <p>Hence, the vertical tangent line to the graph <span class="math-container">$y = \sqrt[3]{x}$</span> at <span class="math-container">$(0,0)$</span> says nothing more than that the graph would look steeper and steeper as we zoom in further around <span class="math-container">$(0, 0)$</span>.</p> <p>We can also learn several things from this geometric intuition.</p> <p><strong>1.</strong> The line is never required to pass through two distinct points, as the idea of a tangent line itself does not impose such an extraneous condition.</p> <p>For instance, tangent lines pass through a single point even in many classical examples such as conic sections. On the other extreme, a tangent line can pass through infinitely many points of the original curve as in the example of the graph <span class="math-container">$y = \sin x$</span>.</p> <p><strong>2.</strong> Tangent line is purely a geometric notion, hence it should not depend on the coordinate system being used.</p> <p>On the contrary, identifying the curve as the graph of some function <span class="math-container">$f$</span> and differentiating it does depend on the coordinates system. In particular, it is not essential for <span class="math-container">$f$</span> to be differentiable in order to discuss a tangent line to the graph <span class="math-container">$y = f(x)$</span>, although it is a sufficient condition.</p> <p>OP's example serves as a perfect showcase of this. Differentiating the function <span class="math-container">$f(x) = \sqrt[3]{x}$</span> fails to detect the tangent line at <span class="math-container">$(0,0)$</span>, since it is not differentiable at this point. 
On the other hand, it perfectly makes sense to discuss the vertical tangent line to the <em>curve</em></p> <p><span class="math-container">$$ \mathcal{C} = \{(x, \sqrt[3]{x}) :x \in \mathbb{R} \} = \{(y^3, y) : y \in \mathbb{R} \}, $$</span></p> <p>and indeed the line <span class="math-container">$x = 0$</span> is the tangent line to <span class="math-container">$\mathcal{C}$</span> at <span class="math-container">$(0, 0)$</span>.</p>
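<p>The zooming picture can be made concrete with secant slopes through the origin (a small sketch of my own):</p>

```python
# Secant slope of f(x) = x**(1/3) through the origin: (f(h) - f(0)) / h = h**(-2/3),
# which grows without bound as h -> 0 -- the graph "looks steeper as we zoom in".
for h in (1e-2, 1e-4, 1e-6):
    slope = h ** (1 / 3) / h
    print(h, slope)
```

<p>No finite slope is ever attained at $0$, yet every secant slope is perfectly well defined; the vertical tangent is the limiting position of these secants.</p>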
<p>No, we don't need two vertical points. By the same reasoning, if the graph of a function <span class="math-container">$f$</span> had a horizontal tangent line somewhere, then there would have to be two points of the graph of <span class="math-container">$f$</span> with the same <span class="math-container">$y$</span> coordinate. However, the tangent at <span class="math-container">$0$</span> of <span class="math-container">$x\mapsto x^3$</span> (note that this is <em>not</em> the function that you mentioned) is horizontal, in spite of the fact that no two points of its graph have the same <span class="math-container">$y$</span> coordinate.</p>
linear-algebra
<p>I am reading the book &quot;Introduction to Linear Algebra&quot; by Gilbert Strang and couldn't help wondering about the advantages of LU decomposition over Gaussian elimination!</p> <p>For a system of linear equations in the form <span class="math-container">$Ax = b$</span>, one of the methods to solve for the unknowns is Gaussian elimination, where you form an upper triangular matrix <span class="math-container">$U$</span> by forward elimination and then figure out the unknowns by backward substitution. This serves the purpose of solving a system of linear equations. What is the necessity of LU decomposition, i.e. after finding <span class="math-container">$U$</span> by forward elimination, why do we go on to find <span class="math-container">$L$</span> (the lower triangular matrix) when we already have <span class="math-container">$U$</span> and could have done a backward substitution?</p>
<p>In many engineering applications, when you solve $Ax = b$, the matrix $A \in \mathbb{R}^{N \times N}$ remains unchanged, while the right hand side vector $b$ keeps changing.</p> <p>A typical example is when you are solving a partial differential equation for different forcing functions. For these different forcing functions, the meshing is usually kept the same. The matrix $A$ only depends on the mesh parameters and hence remains unchanged for the different forcing functions. However, the right hand side vector $b$ changes for each of the forcing function.</p> <p>Another example is when you are solving a time dependent problem, where the unknowns evolve with time. In this case again, if the time stepping is constant across different time instants, then again the matrix $A$ remains unchanged and the only the right hand side vector $b$ changes at each time step.</p> <p>The key idea behind solving using the $LU$ factorization (for that matter any factorization) is to decouple the factorization phase (usually computationally expensive) from the <em>actual</em> solving phase. The factorization phase only needs the matrix $A$, while the <em>actual</em> solving phase makes use of the factored form of $A$ and the right hand side to solve the linear system. Hence, once we have the factorization, we can make use of the factored form of $A$, to solve for different right hand sides at a relatively moderate computational cost.</p> <p>The cost of factorizing the matrix $A$ into $LU$ is $\mathcal{O}(N^3)$. Once you have this factorization, the cost of solving i.e. the cost of solving $LUx = b$ is just $\mathcal{O}(N^2)$, since the cost of solving a triangular system scales as $\mathcal{O}(N^2)$.</p> <p>(Note that to solve $LUx = b$, you first solve $Ly = b$ and then $Ux = y$. 
Solving $Ly = b$ and $Ux=y$ costs $\mathcal{O}(N^2).$)</p> <p>Hence, if you have '$r$' right hand side vectors $\{b_1,b_2,\ldots,b_r\}$, once you have the $LU$ factorization of the matrix $A$, the total cost to solve $$Ax_1 = b_1, Ax_2 = b_2 , \ldots, Ax_r = b_r$$ scales as $\mathcal{O}(N^3 + rN^2)$.</p> <p>On the other hand, if you do Gauss elimination separately for each right hand side vector $b_j$, then the total cost scales as $\mathcal{O}(rN^3)$, since each Gauss elimination independently costs $\mathcal{O}(N^3)$.</p> <p>However, typically when people say Gauss elimination, they usually refer to $LU$ decomposition and not to the method of solving each right hand side completely independently.</p>
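<p>A minimal NumPy sketch of the factor-once / solve-many pattern (no pivoting, for clarity only; production code should use a pivoted routine such as <code>scipy.linalg.lu_factor</code>):</p>

```python
import numpy as np

def lu_nopivot(A):
    """Doolittle LU without pivoting -- a sketch only. Assumes all leading
    principal minors of A are nonzero; real codes pivot for stability."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]   # the Gauss "multiplier"...
            U[i, :] -= L[i, j] * U[j, :]  # ...is exactly what elimination computes
    return L, U

def lu_solve(L, U, b):
    # Forward substitution Ly = b, then back substitution Ux = y: O(n^2) each.
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[4.0, 3.0, 1.0], [6.0, 3.0, 2.0], [2.0, 5.0, 7.0]])
L, U = lu_nopivot(A)              # O(n^3), done once
for b in (np.array([1.0, 0.0, 0.0]), np.array([2.0, 1.0, 3.0])):
    x = lu_solve(L, U, b)         # O(n^2) per right-hand side
    print(np.allclose(A @ x, b))  # True
```

<p>The factorization is computed once; each new right-hand side then costs only the two triangular solves.</p>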
<p>The computational cost of solving <span class="math-container">$x$</span> for <span class="math-container">$Ax = b$</span> via Gaussian elimination or <span class="math-container">$LU-$</span>decomposition is the same. Solving <span class="math-container">$Ax = b$</span> via Gaussian elimination with partial pivoting on the augmented matrix <span class="math-container">$[A | b]$</span> transforms <span class="math-container">$[A|b]$</span> into an upper-triangular system <span class="math-container">$[U | b^{'}]$</span>. Thereafter, backward substitution is performed to determine <span class="math-container">$x$</span>. Furthermore, in the <span class="math-container">$LU-$</span>decomposition, the matrix <span class="math-container">$L$</span> is nothing but the &quot;multipliers&quot; obtained during the Gauss elimination process. So there is no additional cost of computing <span class="math-container">$L$</span>.</p> <p>But consider a scenario where the right-hand side of <span class="math-container">$Ax = b$</span> keeps on changing and <span class="math-container">$A$</span> is fixed. That means you deal with the systems of equations <span class="math-container">$Ax = b_1, Ax=b_2, \ldots, Ax=b_k$</span>. In this case, <span class="math-container">$LU-$</span>decomposition works more efficiently than Gauss elimination, as you do not need to re-process all the augmented matrices. You just have to decompose <span class="math-container">$A$</span> into <span class="math-container">$LU$</span> once, at cost ~ <span class="math-container">$n^3$</span> (the same as a single Gaussian elimination).</p>
probability
<p>If $f(x)$ is a density function and $F(x)$ is a distribution function of a random variable $X$ then I understand that the expectation of x is often written as: </p> <p>$$E(X) = \int x f(x) dx$$</p> <p>where the bounds of integration are implicitly $-\infty$ and $\infty$. The idea of multiplying x by the probability of x and summing makes sense in the discrete case, and it's easy to see how it generalises to the continuous case. However, in Larry Wasserman's book <em>All of Statistics</em> he writes the expectation as follows:</p> <p>$$E(X) = \int x dF(x)$$</p> <p>I guess my calculus is a bit rusty, in that I'm not that familiar with the idea of integrating over functions of $x$ rather than just $x$.</p> <ul> <li>What does it mean to integrate over the distribution function?</li> <li>Is there an analogous process to repeated summing in the discrete case?</li> <li>Is there a visual analogy?</li> </ul> <p><strong>UPDATE:</strong> I just found the following extract from Wasserman's book (p.47):</p> <blockquote> <p>The notation $\int x d F(x)$ deserves some comment. We use it merely as a convenient unifying notation so that we don't have to write $\sum_x x f(x)$ for discrete random variables and $\int x f(x) dx$ for continuous random variables, but you should be aware that $\int x d F(x)$ has a precise meaning that is discussed in a real analysis course.</p> </blockquote> <p>Thus, I would be interested in any insights that could be shared about <strong>what is the precise meaning that would be discussed in a real analysis course?</strong></p>
<p>There are many definitions of the integral, including the Riemann integral, the Riemann-Stieltjes integral (which generalizes and expands upon the Riemann integral), and the Lebesgue integral (which is even more general.) If you're using the Riemann integral, then you can only integrate with respect to a variable (e.g. <span class="math-container">$x$</span>), and the notation <span class="math-container">$dF(x)$</span> isn't defined. </p> <p>The Riemann-Stieltjes integral generalizes the concept of the Riemann integral and allows for integration with respect to a cumulative distribution function that isn't continuous. </p> <p>The notation <span class="math-container">$\int_{a}^{b} g(x)dF(x)$</span> is roughly equivalent of <span class="math-container">$\int_{a}^{b} g(x) f(x) dx$</span> when <span class="math-container">$f(x)=F'(x)$</span>. However, if <span class="math-container">$F(x)$</span> is a function that isn't differentiable at all points, then you simply can't evaluate <span class="math-container">$\int_{a}^{b} g(x) f(x) dx$</span>, since <span class="math-container">$f(x)=F'(x)$</span> isn't defined. </p> <p>In probability theory, this situation occurs whenever you have a random variable with a discontinuous cumulative distribution function. For example, suppose <span class="math-container">$X$</span> is <span class="math-container">$0$</span> with probability <span class="math-container">$\frac{1}{2}$</span> and <span class="math-container">$1$</span> with probability <span class="math-container">$\frac{1}{2}$</span>. 
Then </p> <p><span class="math-container">$$ \begin{align} F(x) &amp;= 0 &amp; x &amp;&lt; 0 \\ F(x) &amp;= 1/2 &amp; 0 &amp;\leq x &lt; 1 \\ F(x) &amp;= 1 &amp; x &amp;\geq 1 \\ \end{align} $$</span></p> <p>Clearly, <span class="math-container">$F(x)$</span> doesn't have a derivative at <span class="math-container">$x=0$</span> or <span class="math-container">$x=1$</span>, so there isn't a probability density function <span class="math-container">$f(x)$</span> at those points.</p> <p>Now, suppose that we want to evaluate <span class="math-container">$E[X^3]$</span>. This can be written, using the Riemann-Stieltjes integral, as </p> <p><span class="math-container">$$E[X^3]=\int_{-\infty}^{\infty} x^3 dF(x).$$</span></p> <p>Note that because there isn't a probability density function <span class="math-container">$f(x)$</span>, we can't write this as </p> <p><span class="math-container">$$E[X^{3}]=\int_{-\infty}^{\infty} x^3 f(x) dx.$$</span></p> <p>However, we can use the fact that this random variable is discrete to evaluate the expected value as:</p> <p><span class="math-container">$$E[X^{3}]=(0)^{3}(1/2)+(1)^{3}(1/2)=1/2$$</span></p> <p>So, the short answer to your question is that you need to study alternative definitions of the integral, including the Riemann and Riemann-Stieltjes integrals.</p>
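<p>As a numerical sanity check (my own sketch, using sampling rather than the Stieltjes machinery itself), the expected value above can be confirmed by simulating the discrete random variable directly:</p>

```python
import random

random.seed(0)
n = 100_000
# X is 0 or 1, each with probability 1/2; F jumps by 1/2 at x=0 and x=1
samples = [random.choice([0, 1]) for _ in range(n)]

estimate = sum(x ** 3 for x in samples) / n   # Monte Carlo estimate of E[X^3]
exact = 0 ** 3 * 0.5 + 1 ** 3 * 0.5          # the Stieltjes sum over the two jumps
```

<p>The exact value is the sum of $x^3$ weighted by the jump sizes of $F$, which is precisely what the Riemann-Stieltjes integral reduces to for a discrete distribution.</p>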
<p>Another way to understand integration with respect to a distribution function is via the Lebesgue-Stieltjes measure. Let $F\!:\mathbb R\to\mathbb R$ be a distribution function (i.e. non-decreasing and right-continuous). Then there exists a unique measure $\mu_F$ on $(\mathbb{R},\mathcal{B}(\mathbb{R}))$ that satisfies $$ \mu_F((a,b])=F(b)-F(a) $$ for any choice of $a,b\in\mathbb R$ with $a&lt;b$. Actually there is a one-to-one correspondance between probability measures on $(\mathbb{R},\mathcal{B}(\mathbb{R}))$ and non-decreasing, right-continuous functions $F\!:\mathbb R\to\mathbb R$ satisfying $F(x)\to 1$ for $x\to\infty$ and $F(x)\to 0$ for $x\to-\infty$.</p> <p>Now, the integral $$ \int x\,\mathrm dF(x) $$ can be viewed as simply the integral $$ \int x\,\mu_F(\mathrm dx)\quad\text{or}\quad \int x \,\mathrm d\mu_F(x). $$</p> <p>Now if $X$ is a random variable having distribution function $F$, then the Lebesgue-Stieltjes measure is nothing but the distribution $P_X$ of $X$: $$ P_X((a,b])=P(X\in (a,b])=P(X\leq b)-P(X\leq a)=F(b)-F(a)=\mu_F((a,b]),\quad a&lt;b, $$ showing that $P_X=\mu_F$. In particular we see that $$ {\rm E}[X]=\int_\Omega X\,\mathrm dP=\int_\mathbb{R}x\,P_X(\mathrm dx)=\int_\mathbb{R}x\,\mu_F(\mathrm dx)=\int_\mathbb{R}x\,\mathrm dF(x). $$</p>
linear-algebra
<p>I understand that a vector space is a collection of vectors that can be added and scalar multiplied and satisfies the 8 axioms; however, I do not know what a vector is. </p> <p>I know in physics a vector is a geometric object that has a magnitude and a direction, and in computer science a vector is a container that holds elements and can expand or shrink, but in linear algebra the definition of a vector isn't too clear. </p> <p>As a result, what is a vector in Linear Algebra?</p>
<p>In modern mathematics, there's a tendency to define things in terms of <em>what they do</em> rather than in terms of <em>what they are</em>.</p> <p>As an example, suppose that I claim that there are objects called "pizkwats" that obey the following laws:</p> <ul> <li><span class="math-container">$\forall x. \forall y. \exists z. x + y = z$</span></li> <li><span class="math-container">$\exists x. x = 0$</span></li> <li><span class="math-container">$\forall x. x + 0 = 0 + x = x$</span></li> <li><span class="math-container">$\forall x. \forall y. \forall z. (x + y) + z = x + (y + z)$</span></li> <li><span class="math-container">$\forall x. x + x = 0$</span></li> </ul> <p>These rules specify what pizkwats <em>do</em> by saying what rules they obey, but they don't say anything about what pizkwats <em>are</em>. We can find all sorts of things that we could call pizkwats. For example, we could imagine that pizkwats are the numbers 0 and 1, with addition being done modulo 2. They could also be bitstrings of length 137, with "addition" meaning "bitwise XOR." Or they could be sets, with “addition” meaning “symmetric difference.” Each of these groups of objects obeys the rules for what pizkwats do, but none of them "are" pizkwats.</p> <p>The advantage of this approach is that we can prove results about pizkwats knowing purely how they behave rather than what they fundamentally are. For example, as a fun exercise, see if you can use the above rules to prove that</p> <blockquote> <p><span class="math-container">$\forall x. \forall y. x + y = y + x$</span>.</p> </blockquote> <p>This means that anything that "acts like a pizkwat" must support a commutative addition operator. Similarly, we could prove that</p> <blockquote> <p><span class="math-container">$\forall x. \forall y. 
(x + y = 0 \rightarrow x = y)$</span>.</p> </blockquote> <p>The advantage of setting things up this way is that any time we find something that "looks like a pizkwat" in the sense that it obeys the rules given above, we're guaranteed that it must have some other properties, namely, that it's commutative and that every element has its own and unique inverse. We could develop a whole elaborate theory about how pizkwats behave and what pizkwats do purely based on the rules of how they work, and since we specifically <em>never actually said what a pizkwat is</em>, anything that we find that looks like a pizkwat instantly falls into our theory.</p> <p>In your case, you're asking about what a vector is. In a sense, there is no single thing called "a vector," because a vector is just something that obeys a bunch of rules. But any time you find something that looks like a vector, you immediately get a bunch of interesting facts about it - you can ask questions about spans, about changing basis, etc. - regardless of whether that thing you're looking at is a vector in the classical sense (a list of numbers, or an arrow pointing somewhere) or a vector in a more abstract sense (say, a function acting as a vector in a "vector space" made of functions.)</p> <p>As a concluding remark, Grant Sanderson of 3blue1brown has <a href="https://youtu.be/TgKwz5Ikpc8" rel="noreferrer">an excellent video talking about what vectors are</a> that explores this in more depth.</p>
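<p>As a concrete illustration (my own sketch, not from the video): 3-bit strings under bitwise XOR satisfy every pizkwat law, and a brute-force check confirms that commutativity then holds, as the exercise claims:</p>

```python
# brute-force check over all 3-bit strings, with "addition" = bitwise XOR
S = range(8)

def add(x, y):
    return x ^ y

assert all(add(x, y) in S for x in S for y in S)        # closure
assert all(add(x, 0) == x == add(0, x) for x in S)      # 0 is an identity
assert all(add(add(x, y), z) == add(x, add(y, z))
           for x in S for y in S for z in S)            # associativity
assert all(add(x, x) == 0 for x in S)                   # x + x = 0
# derived, not assumed: commutativity follows from the axioms above
assert all(add(x, y) == add(y, x) for x in S for y in S)
```

<p>The same checks pass for addition modulo 2 on {0, 1} and for symmetric difference on sets, which is the point: anything obeying the rules inherits the derived properties.</p>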
<p>When I was 14, I was introduced to vectors in a freshman physics course (algebra based). We were told that it was a quantity with magnitude and direction. This is stuff like force, momentum, and electric field.</p> <p>Three years later in precalculus we thought of them as "points," but with arrows emanating from the origin to that point. Just another thing. This was the concept that stuck until I took linear algebra two more years later.</p> <p>But now in the abstract sense, vectors don't have to be these "arrows." They can be anything we want: functions, numbers, matrices, operators, whatever. When we build vector spaces (linear spaces in other texts), we just call the objects vectors - who cares what they look like? It's a name to an abstract object.</p> <p>For example, in $\mathbb{R}^n$ our vectors are ordered n-tuples. In $\mathcal{C}[a,b]$ our vectors are now functions - continuous functions on $[a, b]$ at that. In $L^2(\mathbb{R}$) our vectors are those functions for which</p> <p>$$ \int_{\mathbb{R}} | f |^2 &lt; \infty $$</p> <p>where the integral is taken in the Lebesgue sense.</p> <p>Vectors are whatever we take them to be in the appropriate context.</p>
probability
<p>Two people have to spend exactly 15 consecutive minutes in a bar on a given day, between 12:00 and 13:00. Assuming uniform arrival times, what is the probability they will meet?</p> <p>I am mainly interested to see how people would model this formally. I came up with the answer 50% (wrong!) based on the assumptions that:</p> <ul> <li>independent uniform arrival</li> <li>they will meet iff they actually overlap by some $\epsilon &gt; 0$ </li> <li>we can measure time continuously</li> </ul> <p>but my methods felt a little ad hoc to me, and I would like to learn to make it more formal.</p> <p>Also I'm curious whether people think the problem is formulated unambiguously. I added the assumption of independent arrival myself for instance, because I think without such an assumption the problem is not well defined.</p>
<p>This is a great question to answer graphically. First note that the two can't arrive after 12:45, since they have to spend at least 15 minutes in the bar. Second, note that they meet if their arrival times differ by less than 15 minutes.</p> <p>If we plot the arrival time of person 1 on the x axis, and person 2 on the y axis, then they meet if the point representing their arrival times is between the two blue lines in the figure below. So we just need to calculate that area, relative to the area of the whole box.</p> <p>The whole box clearly has area</p> <p>$$45 \times 45 = 2025$$</p> <p>The two triangles together have area</p> <p>$$2\times \tfrac{1}{2} \times 30 \times 30 = 900$$</p> <p>Therefore the probability of the friends meeting is</p> <p>$$(2025-900)/2025 = 1125/2025 = 5/9$$</p> <p><img src="https://i.sstatic.net/Yw8sL.png" alt="Graph"></p>
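<p>The 5/9 answer is also easy to confirm with a quick Monte Carlo sketch: draw both arrival times uniformly on [0, 45] and count a meeting whenever they differ by less than 15 minutes.</p>

```python
import random

random.seed(1)
trials = 200_000
# both arrive uniformly in [0, 45]; they meet iff arrivals differ by < 15 min
meet = sum(
    abs(random.uniform(0, 45) - random.uniform(0, 45)) < 15
    for _ in range(trials)
)
frequency = meet / trials   # should be close to 5/9 ≈ 0.5556
```
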
<p><img src="https://i.sstatic.net/zaxIv.png" alt="solution"></p> <p>$$ \text{chance they meet} = \frac{\text{green area}}{\text{green area + blue area}} = \frac{45^2 - 30^2}{45^2} $$</p>
linear-algebra
<p><a href="http://www.agnesscott.edu/lriddle/women/todd.htm" rel="noreferrer">Olga Tausky-Todd</a> had once said that</p> <blockquote> <p><strong>"If an assertion about matrices is false, there is usually a 2x2 matrix that reveals this."</strong></p> </blockquote> <p>There are, however, assertions about matrices that are true for $2\times2$ matrices but not for the larger ones. I came across one nice little <a href="https://math.stackexchange.com/questions/577163/how-prove-this-matrix-inequality-detb0">example</a> yesterday. Actually, every student who has studied first-year linear algebra should know that there are even assertions that are true for $3\times3$ matrices, but false for larger ones --- <a href="http://en.wikipedia.org/wiki/Rule_of_Sarrus" rel="noreferrer">the rule of Sarrus</a> is one obvious example; a <a href="https://math.stackexchange.com/questions/254731/schurs-complement-of-a-matrix-with-no-zero-entries">question</a> I answered last year provides another.</p> <p>So, here is my question. <em>What is your favourite assertion that is true for small matrices but not for larger ones?</em> Here, $1\times1$ matrices are ignored because they form special cases too easily (otherwise, Tausky-Todd would have not made the above comment). The assertions are preferrably simple enough to understand, but their disproofs for larger matrices can be advanced or difficult.</p>
<p>Any two rotation matrices commute.</p>
<p>I like this one: two matrices are similar (conjugate) if and only if they have the same minimal and characteristic polynomials and the same dimensions of eigenspaces corresponding to the same eigenvalue. This statement is true for all $n\times n$ matrices with $n\leq6$, but is false for $n\geq7$. </p>
linear-algebra
<p>What is the general way of finding the basis for intersection of two vector spaces in $\mathbb{R}^n$?</p> <p>Suppose I'm given the bases of two vector spaces U and W: $$ \mathrm{Base}(U)= \left\{ \left(1,1,0,-1\right), \left(0,1,3,1\right) \right\} $$ $$ \mathrm{Base}(W) =\left\{ \left(0,-1,-2,1\right), \left(1,2,2,-2\right) \right\} $$</p> <p>I already calculated $U+W$, and the dimension is $3$ meaning the dimension of $ U \cap W $ is $1$.</p> <p>The answer is supposedly obvious, one vector is the basis of $ U \cap W $ but how do I calculate it?</p>
<p>Assume $\textbf{v} \in U \cap W$. Then $\textbf{v} = a(1,1,0,-1)+b(0,1,3,1)$ and $\textbf{v} = x(0,-1,-2,1)+y(1,2,2,-2)$.</p> <p>Since $\textbf{v}-\textbf{v}=0$, then $a(1,1,0,-1)+b(0,1,3,1)-x(0,-1,-2,1)-y(1,2,2,-2)=0$. If we solve for $a, b, x$ and $y$, we obtain (up to a common scalar factor) the solution $x=1$, $y=1$, $a=1$, $b=0$,</p> <p>so $\textbf{v}=(1,1,0,-1)$.</p> <p>You can validate the result by simply adding $(0,-1,-2,1)$ and $(1,2,2,-2)$.</p>
<p>The comment of Annan with slight correction is one possibility of finding basis for the intersection space $ U \cap W $, the steps are as follow:</p> <p>1) Construct the matrix $ A=\begin{pmatrix}\mathrm{Base}(U) &amp; | &amp; -\mathrm{Base}(W)\end{pmatrix} $ and find the basis vectors $ \textbf{s}_i=\begin{pmatrix}\textbf{u}_i \\ \textbf{v}_i\end{pmatrix} $ of its nullspace.</p> <p>2) For each basis vector $ \textbf{s}_i $ construct the vector $ \textbf{w}_i=\mathrm{Base}(U)\textbf{u}_i=\mathrm{Base}(W)\textbf{v}_i $.</p> <p>3) The set $ \{ \textbf{w}_1,\ \textbf{w}_2,...,\ \textbf{w}_r \} $ constitute the basis for the intersection space $ span(\textbf{w}_1,\ \textbf{w}_2,...,\ \textbf{w}_r) $.</p>
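<p>A small sketch checking step 2 for the vectors in this question: with the nullspace coefficients found above ($a=1$, $b=0$, $x=1$, $y=1$), both linear combinations produce the same vector, which spans $U \cap W$:</p>

```python
u1, u2 = (1, 1, 0, -1), (0, 1, 3, 1)     # basis of U
w1, w2 = (0, -1, -2, 1), (1, 2, 2, -2)   # basis of W

def comb(c1, v1, c2, v2):
    """Return the linear combination c1*v1 + c2*v2, componentwise."""
    return tuple(c1 * a + c2 * b for a, b in zip(v1, v2))

a, b, x, y = 1, 0, 1, 1          # solves a*u1 + b*u2 - x*w1 - y*w2 = 0
v_from_U = comb(a, u1, b, u2)    # the vector expressed in the basis of U
v_from_W = comb(x, w1, y, w2)    # the same vector expressed in the basis of W
assert v_from_U == v_from_W == (1, 1, 0, -1)
```
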
linear-algebra
<p>How could we prove that the "The trace of an idempotent matrix equals the rank of the matrix"?</p> <p>This is another property that is used in my module without any proof, could anybody tell me how to prove this one?</p>
<p>Sorry to post a solution to such an old question, but "The trace of an idempotent matrix equals the rank of the matrix" is a very basic problem and every answer here uses the solution via <strong>eigenvalues</strong>. But there is another way which should be highlighted.</p> <p><strong>Solution:</strong></p> <p>Let $A_{n\times n}$ be an idempotent matrix. Using <a href="https://en.wikipedia.org/wiki/Rank_factorization" rel="noreferrer">Rank factorization</a>, we can write $A=B_{n\times r}C_{r\times n}$ where $B$ is of full column rank and $C$ is of full row rank; then $B$ has a left inverse and $C$ has a right inverse.</p> <p>Now, since $A^2=A$, we have $BCBC=BC$. Note that (multiplying on the left by a left inverse of $B$, then on the right by a right inverse of $C$), $$BCBC=BC\Rightarrow CBC=C\Rightarrow CB=I_{r\times r}$$</p> <p>Therefore $$\text{trace}(A)=\text{trace}(BC)=\text{trace}(CB)=\text{trace}(I_{r\times r})=r=\text{rank}(A)\space\space\space\blacksquare$$</p>
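<p>A quick numerical illustration (a sketch with a hand-picked idempotent 3×3 matrix of rank 2), checking trace = rank directly:</p>

```python
def matmul(A, B):
    """Plain matrix product of two lists-of-rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank(M, eps=1e-9):
    """Rank via Gauss-Jordan elimination on a copy of M."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if abs(M[i][c]) > eps), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]          # normalize pivot row
        for i in range(len(M)):
            if i != r:                              # eliminate column c elsewhere
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

P = [[2, -2, -4], [-1, 3, 4], [1, -2, -3]]
assert matmul(P, P) == P                 # P is idempotent: P^2 = P
trace = sum(P[i][i] for i in range(3))   # 2 + 3 + (-3) = 2
assert trace == rank(P)
```
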
<p>An idempotent has two possible eigenvalues, zero and one, and the multiplicity of one as an eigenvalue is precisely the rank. Therefore the trace, being the sum of the eigenvalues, <em>is</em> the rank (assuming your field contains $\mathbb Q$...)</p>
logic
<p>Sorry, but I don't think I can know, since it's a definition. Please tell me. I don't think that <span class="math-container">$0=\emptyset\,$</span>, since I distinguish between empty set and the value <span class="math-container">$0$</span>. Do all sets, even the empty set, have infinite emptiness e.g. do all sets, including the empty set, contain infinitely many empty sets?</p>
<p>There is only one empty set. It is a subset of every set, including itself. Each set only includes it once as a subset, not an infinite number of times.</p>
<blockquote> <p>Let $A$ and $B$ be sets. If every element $a\in A$ is also an element of $B$, then $A\subseteq B$.</p> </blockquote> <p>Flip that around and you get</p> <blockquote> <p>If $A\not\subseteq B$, then there exists some element $x\in A$ such that $x\notin B$.</p> </blockquote> <p>If $A$ is the empty set, there are no $x$s in $A$, so in particular there are no $x$s in $A$ that are not in $B$. Thus $A\not\subseteq B$ can't be true. Furthermore, note that we haven't used any property of $B$ in the previous line, so this applies to every set $B$, including $B=\emptyset$.</p> <p>(From a wider standpoint, you can think of the empty set as the set for which $x\in \emptyset\implies P$ is true for every statement $P$. For example, every $x$ in the empty set is orange; also, every $x$ in the emptyset is not orange. There is no contradiction in either of these statements because there are no $x$'s which could provide counterexamples.)</p>
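<p>As a side note (my own illustration), Python's built-in sets mirror exactly this behavior: the empty set is a subset of every set, including itself, and all "empty sets" are equal as values, there is only one:</p>

```python
empty = set()
assert empty <= {1, 2, 3}   # subset vacuously: no element of empty is missing
assert empty <= empty       # the empty set is a subset of itself
# no counterexample x in empty that is not in {1, 2} can exist
assert not any(x not in {1, 2} for x in empty)
assert set() == set()       # every empty set equals every other: it is unique
```
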
logic
<p><a href="http://math.gmu.edu/~eobrien/Venn4.html" rel="noreferrer">This page</a> gives a few examples of Venn diagrams for <span class="math-container">$4$</span> sets. Some examples:<br> <img src="https://i.sstatic.net/fHbmV.gif" alt="alt text"> <img src="https://i.sstatic.net/030jM.gif" alt="alt text"><br> Thinking about it for a little, it is impossible to partition the plane into the <span class="math-container">$16$</span> segments required for a complete <span class="math-container">$4$</span>-set Venn diagram using only circles as we could do for <span class="math-container">$&lt;4$</span> sets. Yet it is doable with ellipses or rectangles, so we don&#39;t require non-convex shapes as <a href="http://en.wikipedia.org/wiki/Venn_diagram#Edwards.27_Venn_diagrams" rel="noreferrer">Edwards</a> uses. </p> <p>So what properties of a shape determine its suitability for <span class="math-container">$n$</span>-set Venn diagrams? Specifically, why are circles not good enough for the case <span class="math-container">$n=4$</span>?</p>
<p>The short answer, from a <a href="http://www.ams.org/notices/200611/ea-wagon.pdf" rel="noreferrer">paper</a> by Frank Ruskey, Carla D. Savage, and Stan Wagon is as follows:</p> <blockquote> <p>... it is impossible to draw a Venn diagram with circles that will represent all the possible intersections of four (or more) sets. This is a simple consequence of the fact that circles can finitely intersect in at most two points and <a href="http://en.wikipedia.org/wiki/Planar_graph#Euler.27s_formula" rel="noreferrer">Euler’s relation</a> F − E + V = 2 for the number of faces, edges, and vertices in a plane graph.</p> </blockquote> <p>The same paper goes on in quite some detail about the process of creating Venn diagrams for higher values of <em>n</em>, especially for simple diagrams with rotational symmetry.</p> <p>For a simple summary, the best answer I could find was on <a href="http://wiki.answers.com/Q/How_do_you_solve_a_four_circle_venn_diagram" rel="noreferrer">WikiAnswers</a>:</p> <blockquote> <p>Two circles intersect in at most two points, and each intersection creates one new region. (Going clockwise around the circle, the curve from each intersection to the next divides an existing region into two.)</p> <p>Since the fourth circle intersects the first three in at most 6 places, it creates at most 6 new regions; that's 14 total, but you need 2^4 = 16 regions to represent all possible relationships between four sets.</p> <p>But you can create a Venn diagram for four sets with four ellipses, because two ellipses can intersect in more than two points.</p> </blockquote> <p>Both of these sources indicate that the critical property of a shape that would make it suitable or unsuitable for higher-order Venn diagrams is the number of possible intersections (and therefore, sub-regions) that can be made using two of the same shape.</p> <p>To illustrate further, consider some of the complex shapes used for <em>n</em>=5, <em>n</em>=7 and <em>n</em>=11 (from <a href="http://mathworld.wolfram.com/VennDiagram.html" rel="noreferrer">Wolfram Mathworld</a>):</p> <p><img src="https://i.sstatic.net/JnpeY.jpg" alt="Venn diagrams for n=5, 7 and 11" /></p> <p>The structure of these shapes is chosen such that they can intersect with each-other in as many different ways as required to produce the number of unique regions required for a given <em>n</em>.</p> <p>See also: <a href="http://www.brynmawr.edu/math/people/anmyers/PAPERS/Venn.pdf" rel="noreferrer">Are Venn Diagrams Limited to Three or Fewer Sets?</a></p>
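<p>The counting argument in the quote can be written out as a tiny sketch: the $k$-th circle meets the earlier ones in at most $2(k-1)$ points, so it adds at most $2(k-1)$ new regions (the <code>max(1, ...)</code> handles the first circle, which splits the plane into inside and outside):</p>

```python
def max_regions(n):
    """Maximum number of plane regions formed by n circles."""
    regions = 1                         # the empty plane is one region
    for k in range(1, n + 1):
        regions += max(1, 2 * (k - 1))  # k-th circle adds at most 2(k-1) regions
    return regions

# three circles can make the 2^3 = 8 regions a 3-set Venn diagram needs,
# but four circles top out at 14 < 2^4 = 16
print(max_regions(3), max_regions(4))
```

<p>This matches the quoted count: the fourth circle adds at most 6 regions, for 14 in total, short of the 16 required.</p>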
<p>To our surprise, we found that the standard proof that a rotationally symmetric <span class="math-container">$n$</span>-Venn diagram is impossible when <span class="math-container">$n$</span> is not prime is incorrect. So Peter Webb and I found and published a correct proof that addresses the error. The details are all discussed at the paper</p> <p>Stan Wagon and Peter Webb, <a href="https://www-users.cse.umn.edu/%7Ewebb/Publications/WebbWagonVennNote8.pdf" rel="nofollow noreferrer">Venn symmetry and prime numbers: A seductive proof revisited</a>, American Mathematical Monthly, August 2008, pp 645-648.</p> <p>We discovered all this after the long paper with Savage et al. cited in another answer.</p>
geometry
<p>I have a circle like so</p> <p><img src="https://i.sstatic.net/vNuOu.png" alt="circle with a given radius r, with angle \theta to the y-axis" /></p> <p>Given a rotation <strong>θ</strong> and a radius <strong>r</strong>, how do I find the coordinate (x,y)? Keep in mind, this rotation could be anywhere between 0 and 360 degrees.</p> <p>For example, I have a radius <strong>r</strong> of 12 and a rotation <strong>θ</strong> of 115 degrees. How would you find the point (x,y)?</p>
<p>From the picture, it seems that your circle has centre the origin, and radius <span class="math-container">$r$</span>. The rotation appears to be clockwise. And the question appears to be about where the point <span class="math-container">$(0,r)$</span> at the top of the circle ends up.</p> <p>The point <span class="math-container">$(0,r)$</span> ends up at <span class="math-container">$x=r\sin\theta$</span>, <span class="math-container">$y=r\cos\theta$</span>.</p> <p>In general, suppose that you are rotating about the origin clockwise through an angle <span class="math-container">$\theta$</span>. Then the point <span class="math-container">$(s,t)$</span> ends up at <span class="math-container">$(u,v)$</span> where <span class="math-container">$$u=s\cos\theta+t\sin\theta\qquad\text{and} \qquad v=-s\sin\theta+t\cos\theta.$$</span></p>
<p>With an angle of <strong>115°</strong> in a clockwise direction, you can find your point (x,y) as shown in your diagram with the following math:</p> <hr /> <p>Any point <span class="math-container">$(x,y)$</span> on the path of the circle is <span class="math-container">$x = r*sin(θ), y = r*cos(θ)$</span></p> <p>thus: <span class="math-container">$(x,y) = (12*sin(115), 12*cos(115))$</span></p> <p>So your point will <em>roughly</em> be <span class="math-container">$(10.876, -5.071)$</span> (assuming the top right quadrant is x+, y+)</p>
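<p>In code (assuming, as in the diagram, that $\theta$ is measured clockwise from the positive $y$-axis), the worked example above is:</p>

```python
import math

r, theta_deg = 12, 115
theta = math.radians(theta_deg)   # math.sin / math.cos expect radians
x = r * math.sin(theta)           # ≈ 10.876
y = r * math.cos(theta)           # ≈ -5.071
```
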
linear-algebra
<p>I'm trying to build an intuitive geometric picture about diagonalization. Let me show what I got so far.</p> <p>Eigenvector of some linear operator signifies a direction in which operator just ''works'' like a stretching, in other words, operator preserves the direction of its eigenvector. Corresponding eigenvalue is just a value which tells us for <em>how much</em> operator stretches the eigenvector (negative stretches = flipping in the opposite direction). When we limit ourselves to real vector spaces, it's intuitively clear that rotations don't preserve direction of any non-zero vector. Actually, I'm thinking about 2D and 3D spaces as I write, so I talk about ''rotations''... for n-dimensional spaces it would be better to talk about ''operators which act like rotations on some 2D subspace''.</p> <p>But, there are non-diagonalizable matrices that aren't rotations - all non-zero nilpotent matrices. My intuitive view of nilpotent matrices is that they ''gradually collapse all dimensions/gradually lose all the information'' (if we use them over and over again), so it's clear to me why they can't be diagonalizable.</p> <p>But, again, there are non-diagonalizable matrices that aren't rotations nor nilpotent, for an example:</p> <p>$$ \begin{pmatrix} 1 &amp; 1 \\ 0 &amp; 1 \end{pmatrix} $$</p> <p>So, what's the deal with them? Is there any kind of intuitive geometric reasoning that would help me grasp why there are matrices like this one? What's their characteristic that stops them from being diagonalizable?</p>
<p>I think a very useful notion here is the idea of a "<strong>generalized eigenvector</strong>".</p> <p>An <strong>eigenvector</strong> of a matrix $A$ is a vector $v$ with associated value $\lambda$ such that $$ (A-\lambda I)v=0 $$ A <strong>generalized eigenvector</strong>, on the other hand, is a vector $w$ with the same associated value such that $$ (A-\lambda I)^kw=0 $$ That is, $(A-\lambda I)$ is nilpotent on $w$. Or, in other words: $$ (A - \lambda I)^{k-1}w=v $$ for some eigenvector $v$ with the same associated value.</p> <hr> <p>Now, let's see how this definition helps us with a non-diagonalizable matrix such as $$ A = \pmatrix{ 2 &amp; 1\\ 0 &amp; 2 } $$ For this matrix, we have $\lambda=2$ as a unique eigenvalue, and $v=\pmatrix{1\\0}$ as the associated eigenvector, which I will let you verify. $w=\pmatrix{0\\1}$ is our generalized eigenvector. Notice that $$ (A - 2I) = \pmatrix{ 0 &amp; 1\\ 0 &amp; 0} $$ is a nilpotent matrix of order $2$. Note that $(A - 2I)v=0$, and $(A- 2I)w=v$ so that $(A-2I)^2w=0$. But what does this mean for what the matrix $A$ does? The behavior of $v$ is fairly obvious, but with $w$ we have $$ Aw = \pmatrix{1\\2}=2w + v $$ So $w$ behaves kind of like an eigenvector, but not really. In general, a generalized eigenvector, when acted upon by $A$, gives another vector in the generalized eigenspace.</p> <hr> <p>An important related notion is <a href="http://en.wikipedia.org/wiki/Jordan_normal_form">Jordan Normal Form</a>. That is, while we can't always diagonalize a matrix by finding a basis of eigenvectors, we can always put the matrix into Jordan normal form by finding a basis of generalized eigenvectors/eigenspaces.</p> <p>I hope that helps. I'd say that the most important thing to grasp from the idea of generalized eigenvectors is that every transformation can be related to the action of a nilpotent over some subspace. </p>
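<p>These relations are easy to verify directly; a minimal sketch for the matrix above:</p>

```python
A = [[2, 1], [0, 2]]
v, w = (1, 0), (0, 1)   # eigenvector and generalized eigenvector for lambda = 2

def apply(M, x):
    """Apply the 2x2 matrix M to the vector x."""
    return tuple(sum(M[i][j] * x[j] for j in range(2)) for i in range(2))

# N = A - 2I, the nilpotent part
N = [[A[i][j] - 2 * (i == j) for j in range(2)] for i in range(2)]

assert apply(N, v) == (0, 0)   # (A - 2I)v = 0: v is a true eigenvector
assert apply(N, w) == v        # (A - 2I)w = v, hence (A - 2I)^2 w = 0
assert apply(A, w) == (1, 2)   # Aw = 2w + v: w is shifted within the eigenspace
```
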
<p><strong>Edit:</strong> The algebra I speak of here is <em>not</em> actually the Grassmann numbers at all -- they are <span class="math-container">$\mathbb{R}[X]/(X^n)$</span>, whose generators <em>don't</em> satisfy the anticommutativity relation even though they satisfy all the nilpotency relations. The dual-number stuff for 2 by 2 is still correct, just ignore my use of the word "Grassmann".</p> <hr> <p>Non-diagonalisable 2 by 2 matrices can be diagonalised over the <a href="https://en.wikipedia.org/wiki/Dual_number" rel="nofollow noreferrer">dual numbers</a> -- and the "weird cases" like the Galilean transformation are not fundamentally different from the nilpotent matrices.</p> <p>The intuition here is that the Galilean transformation is sort of a "boundary case" between real-diagonalisability (skews) and complex-diagonalisability (rotations) (which you can sort of think in terms of discriminants). In the case of the Galilean transformation <span class="math-container">$\left[\begin{array}{*{20}{c}}{1}&amp;{v}\\{0}&amp;{1}\end{array}\right]$</span>, it's a small perturbation away from being diagonalisable, i.e. it sort of has "repeated eigenvectors" (you can visualise this with <a href="https://shadanan.github.io/MatVis/" rel="nofollow noreferrer">MatVis</a>). So one may imagine that the two eigenvectors are only an "epsilon" away, where <span class="math-container">$\varepsilon$</span> is the unit dual satisfying <span class="math-container">$\varepsilon^2=0$</span> (called the "soul"). Indeed, its characteristic polynomial is:</p> <p><span class="math-container">$$(\lambda-1)^2=0$$</span></p> <p>Whose solutions among the dual numbers are <span class="math-container">$\lambda=1+k\varepsilon$</span> for real <span class="math-container">$k$</span>. 
So one may "diagonalise" the Galilean transformation over the dual numbers as e.g.:</p> <p><span class="math-container">$$\left[\begin{array}{*{20}{c}}{1}&amp;{0}\\{0}&amp;{1+v\varepsilon}\end{array}\right]$$</span></p> <p>Granted this is not unique, this is formed from the change-of-basis matrix <span class="math-container">$\left[\begin{array}{*{20}{c}}{1}&amp;{1}\\{0}&amp;{\epsilon}\end{array}\right]$</span>, but any vector of the form <span class="math-container">$(1,k\varepsilon)$</span> is a valid eigenvector. You could, if you like, consider this a canonical or "principal value" of the diagonalisation, and in general each diagonalisation corresponds to a limit you can take of real/complex-diagonalisable transformations. Another way of thinking about this is that there is an entire eigenspace spanned by <span class="math-container">$(1,0)$</span> and <span class="math-container">$(1,\varepsilon)$</span> in that little gap of multiplicity. In this sense, the geometric multiplicity is forced to be equal to the algebraic multiplicity*.</p> <p>Then a nilpotent matrix with characteristic polynomial <span class="math-container">$\lambda^2=0$</span> has solutions <span class="math-container">$\lambda=k\varepsilon$</span>, and is simply diagonalised as:</p> <p><span class="math-container">$$\left[\begin{array}{*{20}{c}}{0}&amp;{0}\\{0}&amp;{\varepsilon}\end{array}\right]$$</span></p> <p>(Think about this.) Indeed, the resulting matrix has minimal polynomial <span class="math-container">$\lambda^2=0$</span>, and the eigenvectors are as before.</p> <hr> <p>What about higher dimensional matrices? Consider:</p> <p><span class="math-container">$$\left[ {\begin{array}{*{20}{c}}0&amp;v&amp;0\\0&amp;0&amp;w\\0&amp;0&amp;0\end{array}} \right]$$</span></p> <p>This is a nilpotent matrix <span class="math-container">$A$</span> satisfying <span class="math-container">$A^3=0$</span> (but not <span class="math-container">$A^2=0$</span>). 
The characteristic polynomial is <span class="math-container">$\lambda^3=0$</span>. Although <span class="math-container">$\varepsilon$</span> might seem like a sensible choice, it doesn't really do the trick -- if you try a diagonalisation of the form <span class="math-container">$\mathrm{diag}(0,v\varepsilon,w\varepsilon)$</span>, it has minimal polynomial <span class="math-container">$A^2=0$</span>, which is wrong. Indeed, you won't be able to find three linearly independent eigenvectors to diagonalise the matrix this way -- they'll all take the form <span class="math-container">$(a+b\varepsilon,0,0)$</span>.</p> <p>Instead, you need to consider a generalisation of the dual numbers, called the Grassmann numbers, with the soul satisfying <span class="math-container">$\epsilon^n=0$</span>. Then the diagonalisation takes for instance the form:</p> <p><span class="math-container">$$\left[ {\begin{array}{*{20}{c}}0&amp;0&amp;0\\0&amp;{v\epsilon}&amp;0\\0&amp;0&amp;{w\epsilon}\end{array}} \right]$$</span></p> <hr> <p>*Over the reals and complexes, when one defines algebraic multiplicity (as "the multiplicity of the corresponding factor in the characteristic polynomial"), there is a single eigenvalue corresponding to that factor. 
This is of course no longer true over the Grassmann numbers, because they are not a field, and <span class="math-container">$ab=0$</span> no longer implies "<span class="math-container">$a=0$</span> or <span class="math-container">$b=0$</span>".</p> <p>In general, if you want to prove things about these numbers, the way to formalise them is by constructing them as the quotient <span class="math-container">$\mathbb{R}[X]/(X^n)$</span>, so you actually have something clear to work with.</p> <p>(Perhaps relevant: <a href="https://math.stackexchange.com/questions/46078/grassmann-numbers-as-eigenvalues-of-nilpotent-operators">Grassmann numbers as eigenvalues of nilpotent operators?</a> -- discussing the fact that the Grassmann numbers are not a field).</p> <p>You might wonder if this sort of approach can be applicable to LTI differential equations with repeated roots -- after all, their characteristic matrices are exactly of this Grassmann form. As pointed out in the comments, however, this diagonalisation is still not via an invertible change-of-basis matrix, it's still only of the form <span class="math-container">$PD=AP$</span>, not <span class="math-container">$D=P^{-1}AP$</span>. I don't see any way to bypass this. See my posts <a href="https://thewindingnumber.blogspot.com/2019/02/all-matrices-can-be-diagonalised.html" rel="nofollow noreferrer">All matrices can be diagonalised</a> (a re-post of this answer) and <a href="https://thewindingnumber.blogspot.com/2018/03/repeated-roots-of-differential-equations.html" rel="nofollow noreferrer">Repeated roots of differential equations</a> for ideas, I guess.</p>
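<p>For concreteness, here is a small numerical sketch of the 2-by-2 dual-number story above (the <code>Dual</code> class and the matrix names are my own, purely illustrative). Since the change-of-basis matrix is not invertible over the duals (its determinant is the soul <span class="math-container">$\varepsilon$</span>), the check is of the form <span class="math-container">$AP=PD$</span> rather than <span class="math-container">$D=P^{-1}AP$</span>, as noted at the end of this answer:</p>

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dual:
    """Dual number re + ep*eps, where eps**2 == 0 (the 'soul')."""
    re: float
    ep: float

    def __add__(self, o):
        return Dual(self.re + o.re, self.ep + o.ep)

    def __mul__(self, o):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 == 0
        return Dual(self.re * o.re, self.re * o.ep + self.ep * o.re)

def matmul(A, B):
    n = len(A)
    return [[sum((A[i][k] * B[k][j] for k in range(n)), Dual(0.0, 0.0))
             for j in range(len(B[0]))] for i in range(n)]

v = 3.0
one, zero, eps = Dual(1.0, 0.0), Dual(0.0, 0.0), Dual(0.0, 1.0)
A = [[one, Dual(v, 0.0)], [zero, one]]   # Galilean boost
P = [[one, one], [zero, eps]]            # eigenvector columns (1,0) and (1,eps)
D = [[one, zero], [zero, Dual(1.0, v)]]  # diag(1, 1 + v*eps)
assert eps * eps == zero                 # the defining nilpotency relation
assert matmul(A, P) == matmul(P, D)      # A P = P D over the duals
```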
differentiation
<p>In a scientific paper, I've seen the following</p> <p><span class="math-container">$$\frac{\delta K^{-1}}{\delta p} = -K^{-1}\frac{\delta K}{\delta p}K^{-1}$$</span></p> <p>where <span class="math-container">$K$</span> is a <span class="math-container">$n \times n$</span> matrix that depends on <span class="math-container">$p$</span>. In my calculations I would have done the following</p> <p><span class="math-container">$$\frac{\delta K^{-1}}{\delta p} = -K^{-2}\frac{\delta K}{\delta p}=-K^{-T}K^{-1}\frac{\delta K}{\delta p}$$</span></p> <p>Is my calculation wrong?</p> <p>Note: I think <span class="math-container">$K$</span> is symmetric.</p>
<p>The major trouble in matrix calculus is that things no longer commute, but one tends to use formulae from scalar calculus like $(x(t)^{-1})'=-x(t)^{-2}x'(t)$, replacing $x$ with the matrix $K$. <strong>One has to be more careful here and pay attention to the order</strong>. The easiest way to get the derivative of the inverse is to differentiate the identity $I=KK^{-1}$ <em>respecting the order</em> $$ \underbrace{(I)'}_{=0}=(KK^{-1})'=K'K^{-1}+K(K^{-1})'. $$ Solving this equation with respect to $(K^{-1})'$ (again paying attention to the order (!)) will give $$ K(K^{-1})'=-K'K^{-1}\qquad\Rightarrow\qquad (K^{-1})'=-K^{-1}K'K^{-1}. $$</p>
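<p>A quick finite-difference sanity check of the order-respecting formula (an illustrative sketch; the matrices here are my own random example, not from the paper):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
B, C = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
K = lambda p: 5 * np.eye(3) + B + p * C       # so dK/dp = C; shift keeps K invertible
p, h = 0.7, 1e-6

# central difference approximation of d(K^{-1})/dp
numeric = (np.linalg.inv(K(p + h)) - np.linalg.inv(K(p - h))) / (2 * h)
Ki = np.linalg.inv(K(p))

assert np.allclose(numeric, -Ki @ C @ Ki, atol=1e-5)      # correct order works
assert not np.allclose(numeric, -Ki @ Ki @ C, atol=1e-5)  # naive scalar formula fails
```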
<p>Yes, your calculation is wrong: note that $K$ need not commute with $\frac{\partial K}{\partial p}$, hence you must apply the chain rule carefully. The derivative of $\def\inv{\mathrm{inv}}\inv \colon \def\G{\mathord{\rm GL}}\G_n \to \G_n$ is <strong>not</strong> given by $\inv'(A)B = -A^{-2}B$, but by $\inv'(A)B = -A^{-1}BA^{-1}$. To see that, note that for small enough $B$ we have \begin{align*} \inv(A + B) &amp;= (A + B)^{-1}\\ &amp;= (\def\I{\mathord{\rm Id}}\I + A^{-1}B)^{-1}A^{-1}\\ &amp;= \sum_k (-1)^k (A^{-1}B)^kA^{-1}\\ &amp;= A^{-1} - A^{-1}BA^{-1} + o(\|B\|) \end{align*} Hence, $\inv'(A)B= -A^{-1}BA^{-1}$, and therefore, by the chain rule $$ \partial_p (\inv \circ K) = (\inv'\circ K)\bigl(\partial_p K\bigr) = -K^{-1}(\partial_p K) K^{-1} $$</p>
linear-algebra
<p>Let $V$ and $W$ be vector spaces over a field $\mathbb{F}$ with $\text{dim }V \ge 2$. A <strong>line</strong> is a set of the form $\{ \mathbf{u} + t\mathbf{v} : t \in \mathbb{F} \}$. A map $f: V \to W$ <strong>preserves lines</strong> if the image of every line in $V$ is a line in $W$. A map <strong>fixes the origin</strong> if $f(0) = 0$.</p> <p>Is a function $f: V\to W$ that preserves lines and fixes the origin necessarily linear?</p>
<p>Consider $V=W=\Bbb F_2^k$. Then, since a subset is a line if and only if it contains (at most) two points, any bijective map that sends $0$ to $0$ does the trick. However, there are $(2^k-1)!$ such maps, while for $k\ge 3$ the number of bijective linear maps is smaller than that.</p>
<p>Let me give an example in the Euclidean plane. The function $f\colon\mathbb R^2\to\mathbb R$ given by $f(x,y)=x^3$ maps lines to lines. A vertical line $x=a$ is mapped to the line $a^3$ &mdash; points are lines by the OP's definition. Any other line is of the form $\{(t,a+bt);t\in\mathbb R\}$ for some $a,b\in\mathbb R$. The image of any such line is $\mathbb R$. Thus $f$ maps lines to lines, and it clearly fixes the origin. Non-linearity is evident.</p> <p>This $f$ can be promoted to a function $g\colon\mathbb R^2\to\mathbb R^2$ by letting $g(x,y)=(f(x,y),0)$. This inherits the desired properties and is a function between two-dimensional spaces.</p> <hr> <p><em>Below is a previous, erroneous answer. I left it here as a warning example. My actual answer is above.</em> I can delete this if it would be more appropriate.</p> <p>Let me give an example with infinite fields. The real line $\mathbb R$ is an infinite dimensional vector space over $\mathbb Q$. Lines &mdash; other than the origin &mdash; are translations of the rationals ($r+\mathbb Q$ for some $r$). Take the function $g\colon\mathbb R\to\mathbb R$, $$ g(x) = \begin{cases} x, &amp; x\in\mathbb Q\\ 0, &amp; x\notin\mathbb Q. \end{cases} $$ The image of the line $r+\mathbb Q$ is the line $\mathbb Q$ if $r\in\mathbb Q$ and $\{0\}$ if $r\notin\mathbb Q$. Therefore $g$ preserves lines and fixes the origin. But it is not linear: $5=g(5)\neq g(5-\pi)+g(\pi)=0$.</p> <p>This is not a valid example because I had misidentified lines. For example, $2+\pi\mathbb Q$ is a line but its image $\{0,2\}$ is not. A weaker statement is true: the image of every line is either a line or a set containing the origin and a non-zero rational number.</p>
logic
<p>I am studying entailment in classical first-order logic.</p> <p>The Truth Table we have been presented with for the statement $(p \Rightarrow q)\;$ (a.k.a. '$p$ implies $q$') is: $$\begin{array}{|c|c|c|} \hline p&amp;q&amp;p\Rightarrow q\\ \hline T&amp;T&amp;T\\ T&amp;F&amp;F\\ F&amp;T&amp;T\\ F&amp;F&amp;T\\\hline \end{array}$$</p> <p>I 'get' lines 1, 2, and 3, but I do not understand line 4. </p> <p>Why is the statement $(p \Rightarrow q)$ True if both p and q are False?</p> <p>We have also been told that $(p \Rightarrow q)$ is logically equivalent to $(~p || q)$ (that is $\lnot p \lor q$).</p> <p>Stemming from my lack of understanding of line 4 of the Truth Table, I do not understand why this equivalence is accurate.</p> <hr> <blockquote> <p><strong>Administrative note.</strong> You may experience being directed here even though your question was actually about line 3 of the truth table instead. In that case, see the companion question <a href="https://math.stackexchange.com/questions/70736/in-classical-logic-why-is-p-rightarrow-q-true-if-p-is-false-and-q-is-tr">In classical logic, why is $(p\Rightarrow q)$ True if $p$ is False and $q$ is True?</a> And even if your original worry was about line 4, it might be useful to skim the other question anyway; many of the answers to either question attempt to explain <em>both</em> lines.</p> </blockquote>
<p>Here is an example. Mathematicians claim that this is true: </p> <p><strong>If $x$ is a rational number, then $x^2$ is a rational number</strong> </p> <p>But let's consider some cases. Let $P$ be "$x$ is a rational number". Let $Q$ be "$x^2$ is a rational number".<br> When $x=3/2$ we have $P, Q$ both true, and $P \rightarrow Q$ of the form $T \rightarrow T$ is also true.<br> When $x=\pi$ we have $P,Q$ both false, and $P \rightarrow Q$ of the form $F \rightarrow F$ is true.<br> When $x=\sqrt{2}$ we have $P$ false and $Q$ true, so $P \rightarrow Q$ of the form $F \rightarrow T$ is again true. </p> <p>But the assertion in bold I made above means that we never <em>ever</em> get the case $T \rightarrow F$, no matter what number we put in for $x$.</p>
<p>Here are two explanations from the books on my shelf followed by my attempt. The first one is probably the easiest justification to agree with. The second one provides a different way to think about it.</p> <p>From Robert Stoll's “Set Theory and Logic” page 165:</p> <blockquote> <p>To understand the 4th line, consider the statement <span class="math-container">$(P \land Q) \to P$</span>. We expect this to be true regardless of the choice of <span class="math-container">$P$</span> and <span class="math-container">$Q$</span>. But if <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> are both false, then <span class="math-container">$P \land Q$</span> is false, and we are led to the conclusion that if both antecedent and consequent are false, a conditional is true.</p> </blockquote> <p>From Herbert Enderton's “A Mathematical Introduction to Logic” page 21:</p> <blockquote> <p>For example, we might translate the English sentence, ”If you're telling the truth then I'm a monkey's uncle,” by the formula <span class="math-container">$(V \to M)$</span>. We assign this formula the value <span class="math-container">$T$</span> whenever you are fibbing. In assigning the value <span class="math-container">$T$</span>, we are certainly not assigning any causal connection between your veracity and any simian features of my nephews or nieces. The sentence in question is a <em>conditional</em> statement. It makes an assertion about my relatives provided a certain <em>condition</em> — that you are telling the truth — is met. Whenever that condition fails, the statement is vacuously true.</p> <p>Very roughly, we can think of a conditional formula <span class="math-container">$(p \to q)$</span> as expressing a <em>promise</em> that if a certain condition is met (viz., that <span class="math-container">$p$</span> is true), then <span class="math-container">$q$</span> is true. 
If the condition <span class="math-container">$p$</span> turns out not to be met, then the promise stands unbroken, regardless of <span class="math-container">$q$</span>.</p> </blockquote> <p>That's why it's said to be “vacuously true”. That <span class="math-container">$(p \to q)$</span> is True when both <span class="math-container">$p$</span>, <span class="math-container">$q$</span> are False is different from saying the conclusion <span class="math-container">$q$</span> is True (which would be a contradiction). Rather, this is more like saying “we cannot show <span class="math-container">$(p \to q)$</span> to be false here” and Not False is True.</p>
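<p>One way to internalise this: the material conditional is just a boolean function, computable as <code>(not p) or q</code> — the equivalence mentioned in the question. A tiny sketch (purely illustrative) that prints the whole truth table:</p>

```python
from itertools import product

def implies(p, q):
    """Material conditional p -> q, i.e. (not p) or q."""
    return (not p) or q

for p, q in product([True, False], repeat=2):
    print(f"{p!s:5} {q!s:5} {implies(p, q)}")

assert implies(False, False)     # line 4 of the table: F -> F is True
assert not implies(True, False)  # the only False line is T -> F
```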
linear-algebra
<p>According to C.H. Edwards' <em>Advanced Calculus of Several Variables</em>: The dimension of the subspace <span class="math-container">$V$</span> is defined to be the minimal number of vectors required to generate <span class="math-container">$V$</span> (pp. 4).</p> <p>Then why does <span class="math-container">$\{\mathbf{0}\}$</span> have dimension zero instead of one? Shouldn't it be true that only the empty set has dimension zero?</p>
<p>A vector by itself doesn't have a dimension. A <em>subspace</em> has a dimension. Why $\{\mathbf{0}\}$ is considered as having dimension $0$? Because of consistency with all other situations. For instance $\mathbb{R}^3$ has dimension $3$ because we can find in it a linearly independent set with three elements, but no larger linearly independent set. This applies to vector spaces having a finite spanning set and so of subspaces thereof.</p> <p>What's the largest linearly independent set in $\{\mathbf{0}\}$? The only subsets in it are the empty set and the whole set. But any set containing the zero vector is linearly dependent; conversely, the empty set is certainly linearly independent (because you can't find a zero linear combination with non zero coefficients out of its elements). So the only linearly independent set in $\{\mathbf{0}\}$ is the empty set that has zero elements.</p>
<p>The span of the empty set is $\{\mathbf{0}\}$, because the sum over the empty set is the additive identity: in this case, the zero vector. So the empty set already generates $\{\mathbf{0}\}$, and the minimal number of vectors required to do so is zero.</p>
differentiation
<p>Let <span class="math-container">$f:\mathbb R^d \to \mathbb R^m$</span> be a map of class <span class="math-container">$C^1$</span>. That is, <span class="math-container">$f$</span> is continuous and its derivative exists and is also continuous. Why is <span class="math-container">$f$</span> locally Lipschitz?</p> <h3>Remark</h3> <p>Such <span class="math-container">$f$</span> will not be <em>globally</em> Lipschitz in general, as the one-dimensional example <span class="math-container">$f(x)=x^2$</span> shows: for this example, <span class="math-container">$|f(x+1)-f(x)| = |2x+1|$</span> is unbounded.</p>
<p>If $f:\Omega\to{\mathbb R}^m$ is continuously differentiable on the open set $\Omega\subset{\mathbb R}^d$, then for each point $p\in\Omega$ there is a convex neighborhood $U$ of $p$ such that all partial derivatives $f_{i.k}:={\partial f_i\over \partial x_k}$ are bounded by some constant $M&gt;0$ in $U$. Using Schwarz' inequality one then easily proves that $$\|df(x)\|\ \leq\sqrt{dm}\&gt;M=:L$$ for all $x\in U$. Now let $a$, $b$ be two arbitrary points in $U$ and consider the auxiliary function $$\phi(t):=f\bigl(a+t(b-a)\bigr)\qquad(0\leq t\leq1)$$ which computes the values of $f$ along the segment connecting $a$ and $b$. By means of the chain rule we obtain $$f(b)-f(a)=\phi(1)-\phi(0)=\int_0^1\phi&#39;(t)\&gt;dt=\int_0^1df\bigl(a+t(b-a)\bigr).(b-a)\&gt;dt\ .$$ Since all points $a+t(b-a)$ lie in $U$ one has $$\bigl|df\bigl(a+t(b-a)\bigr).(b-a)\bigr|\leq L\&gt;|b-a|\qquad(0\leq t\leq1)\&gt;;$$ therefore we get $$|f(b)-f(a)|\leq L\&gt;|b-a|\ .$$ This proves that $f$ is Lipschitz-continuous in $U$ with Lipschitz constant $L$.</p>
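<p>A small numerical illustration of the resulting bound $|f(b)-f(a)|\le L\,|b-a|$ with $L$ taken from the derivative (a one-dimensional sketch of my own, using $f(x)=x^2$ on $[1,3]$, matching the remark in the question):</p>

```python
import numpy as np

f = lambda x: x ** 2
a, b = 1.0, 3.0
xs = np.linspace(a, b, 300)
L = np.max(np.abs(2 * xs))       # sup of |f'| on the compact interval [a, b]

# largest difference quotient over all sampled pairs in [a, b]
X, Y = np.meshgrid(xs, xs)
mask = X != Y
quotients = np.abs(f(X[mask]) - f(Y[mask])) / np.abs(X[mask] - Y[mask])
assert quotients.max() <= L + 1e-12   # empirical Lipschitz constant is at most L
```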
<p>Maybe <a href="http://en.wikipedia.org/wiki/Mean_value_theorem">this</a> can help. The Lipschitz condition comes many times from the Mean Value Theorem. Search the link for the multivariable case. The fact that $f$ is $C^1$ helps you to see that when restricted to a compact set the differential is bounded. That's why you only have local Lipschitz condition.</p>
probability
<p>If we have a sequence of random variables $X_1,X_2,\ldots,X_n$ converges in distribution to $X$, i.e. $X_n \rightarrow_d X$, then is $$ \lim_{n \to \infty} E(X_n) = E(X) $$ correct?</p> <p>I know that converge in distribution implies $E(g(X_n)) \to E(g(X))$ when $g$ is a bounded continuous function. Can we apply this property here?</p>
<p>With your assumptions the best you can get is via Fatou's Lemma: $$\mathbb{E}[|X|]\leq \liminf_{n\to\infty}\mathbb{E}[|X_n|]$$ (where you used the continuous mapping theorem to get that $|X_n|\Rightarrow |X|$).</p> <p>For a "positive" answer to your question: you need the sequence $(X_n)$ to be uniformly integrable: $$\lim_{\alpha\to\infty} \sup_n \int_{|X_n|&gt;\alpha}|X_n|d\mathbb{P}= \lim_{\alpha\to\infty} \sup_n \mathbb{E} [|X_n|1_{|X_n|&gt;\alpha}]=0.$$ Then, one gets that $X$ is integrable and $\lim_{n\to\infty}\mathbb{E}[X_n]=\mathbb{E}[X]$.</p> <p>As a remark, to get uniform integrability of $(X_n)_n$ it suffices to have for example: $$\sup_n \mathbb{E}[|X_n|^{1+\varepsilon}]&lt;\infty,\quad \text{for some }\varepsilon&gt;0.$$</p>
<p>Try $\mathrm P(X_n=2^n)=1/n$, $\mathrm P(X_n=0)=1-1/n$.</p>
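<p>Concretely, in exact arithmetic (an illustrative sketch): $\mathrm P(X_n=0)\to 1$, so $X_n\Rightarrow 0$, yet $\mathrm E[X_n]=2^n/n\to\infty$ instead of converging to $\mathrm E[0]=0$.</p>

```python
from fractions import Fraction

def mean(n):          # E[X_n] = (2**n) * (1/n) + 0 * (1 - 1/n)
    return Fraction(2 ** n, n)

def p_zero(n):        # P(X_n = 0) = 1 - 1/n
    return 1 - Fraction(1, n)

# The distributions pile up at 0 ...
assert p_zero(10 ** 6) == Fraction(999999, 1000000)
# ... while the means blow up rather than converging to 0.
assert mean(5) < mean(10) < mean(20) < mean(40)
```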
number-theory
<p>Suppose you're trying to teach analysis to a stubborn algebraist who refuses to acknowledge the existence of any characteristic $0$ field other than $\mathbb{Q}$. How ugly are things going to get for him?</p> <p>The algebraist argues that the real numbers are a silly construction because any real number can be approximated to arbitrarily high precision by the rational numbers - i.e., given any real number $r$ and any $\epsilon&gt;0$, the set $\left\{x\in\mathbb{Q}:\left|x-r\right|&lt;\epsilon\right\}$ is nonempty, thus sating the mad gods of algebra.</p> <p>As @J.M. and @75064 pointed out to me in chat, we do start having some topology problems, for example that $f(x)=x^2$ and $g(x)=2$ are nonintersecting functions in $\mathbb{Q}$. They do, however, come <em>arbitrarily close</em> to intersecting, i.e. given any $\epsilon&gt;0$ there exist rational solutions to $\left|2-x^2\right|&lt;\epsilon$. The algebraist doesn't find this totally unsatisfying.</p> <blockquote> <p>Where is this guy <em>really</em> going to start running into trouble? Are there definitions in analysis which simply can't be reasonably formulated without leaving the rational numbers? Which concepts would be particularly difficult to understand without the rest of the reals?</p> </blockquote>
<p>What kind of <em>algebraist</em> &quot;refuses to acknowledge the existence of any characteristic <span class="math-container">$0$</span> field other than <span class="math-container">$\mathbb{Q}$</span>&quot;?? But there is a good question in here nevertheless: the basic definitions of limit, continuity, and differentiability all make sense for functions <span class="math-container">$f: \mathbb{Q} \rightarrow \mathbb{Q}$</span>. The real numbers are in many ways a much more complicated structure than <span class="math-container">$\mathbb{Q}$</span> (and in many other ways are much simpler, but never mind that here!), so it is natural to ask whether they are really necessary for calculus.</p> <p>Strangely, this question has gotten serious attention only relatively recently. For instance:</p> <p><span class="math-container">$\bullet$</span> <a href="https://books.google.com/books/about/A_companion_to_analysis.html?id=H3zGTvmtp74C" rel="noreferrer">Tom Korner's real analysis text</a> takes this question seriously and gives several examples of pathological behavior over <span class="math-container">$\mathbb{Q}$</span>.</p> <p><span class="math-container">$\bullet$</span> <a href="https://books.google.com/books/about/Introduction_to_Real_Analysis.html?id=ztGKKuDcisoC" rel="noreferrer">Michael Schramm's real analysis text</a> is unusually thorough and lucid in making logical connections between the main theorems of calculus (though there is one mistaken implication there). 
I found it to be a very appealing text because of this.</p> <p><span class="math-container">$\bullet$</span> <a href="http://alpha.math.uga.edu/%7Epete/2400full.pdf" rel="noreferrer">My honors calculus notes</a> often explain what goes wrong if you use an ordered field other than <span class="math-container">$\mathbb{R}$</span>.</p> <p><span class="math-container">$\bullet$</span> <span class="math-container">$\mathbb{R}$</span> is the unique ordered field in which <a href="http://alpha.math.uga.edu/%7Epete/instructors_guide_shorter.pdf" rel="noreferrer">real induction</a> is possible.</p> <p><span class="math-container">$\bullet$</span> The most comprehensive answers to your question can be found in two recent Monthly articles, <a href="https://faculty.uml.edu//jpropp/reverse.pdf" rel="noreferrer">by Jim Propp</a> and <a href="https://www.jstor.org/stable/10.4169/amer.math.monthly.120.02.099" rel="noreferrer" title="Toward a More Complete List of Completeness Axioms, The American Mathematical Monthly, Vol. 120, No. 2 (February 2013), pp. 99-114">by Holger Teismann</a>.</p> <p>But as the title of Teismann's article suggests, even the latter two articles do not complete the story.</p> <p><span class="math-container">$\bullet$</span> <a href="http://alpha.math.uga.edu/%7Epete/Clark-Diepeveen_PLUS.pdf" rel="noreferrer">Here is a short note</a> whose genesis was on this site which explains a further pathology of <span class="math-container">$\mathbb{Q}$</span>: there are absolutely convergent series which are not convergent.</p> <p><span class="math-container">$\bullet$</span> Only a few weeks ago Jim Propp wrote to tell me that <a href="https://en.wikipedia.org/wiki/Knaster%E2%80%93Tarski_theorem" rel="noreferrer">Tarski's Fixed Point Theorem</a> characterizes completeness in ordered fields and admits a nice proof using Real Induction. (I put it in my honors calculus notes.) So the fun continues...</p>
<p>This situation strikes me as about as worthwhile a use of your time as trying to reason with a student in a foreign language class who refuses to accept a grammatical construction or vocabulary word that is used every day by native speakers. Without the real numbers as a background against which constructions are made, most fundamental constructions in analysis break down, e.g., no coherent theory of power series as functions. And even algebraic functions have non-algebraic antiderivatives: if you are a fan of the function $1/x$ and you want to integrate it then you'd better be ready to accept logarithms. The theorem that a continuous function on a closed and bounded interval is uniformly continuous breaks down if you only work over the rational numbers: try $f(x) = 1/(x^2-2)$ on the <em>rational</em> closed interval $[1,2]$.</p> <p>Putting aside the issue of analysis, such a math student who continues with this attitude isn't going to get far even in algebra, considering the importance of algebraic numbers that are not rational numbers, even for solving problems that are posed only in the setting of the rational numbers. This student must have a very limited understanding of algebra. After all, what would this person say about constructions like ${\mathbf Q}[x]/(x^2-2)$?</p> <p>Back to analysis, if this person is a die hard algebraist then provide a definition of the real numbers that feels largely algebraic: the real numbers are the quotient ring $A/M$ where $A$ is the ring of Cauchy sequences in ${\mathbf Q}$ and $M$ is the ideal of sequences in ${\mathbf Q}$ that tend to $0$. This is a maximal ideal, so $A/M$ is a field, and by any of several ways one can show this is more than just the rational numbers in disguise (e.g., it contains a solution of $t^2 = 2$, or it's not countable). 
If this student refuses to believe $A/M$ is a new field of characteristic $0$ (though there are much easier ways to construct fields of characteristic $0$ besides the rationals), direct him to books that explain what fields are.</p>
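<p>The $f(x) = 1/(x^2-2)$ pathology above is easy to exhibit with exact rational arithmetic (an illustrative sketch of my own): along the continued-fraction convergents of $\sqrt 2$, all of which lie in the rational "closed interval" $[1,2]$, the function is defined everywhere yet unbounded.</p>

```python
from fractions import Fraction

def f(x):                        # defined at every rational x, since x*x != 2
    return 1 / (x * x - 2)

xs, x = [], Fraction(1)
for _ in range(8):
    x = 1 + 1 / (1 + x)          # next convergent of sqrt(2): 3/2, 7/5, 17/12, ...
    xs.append(x)

assert all(1 <= x <= 2 for x in xs)   # all inside the rational interval [1, 2]
growth = [abs(f(x)) for x in xs]
assert growth == sorted(growth)       # |f| keeps growing along the sequence ...
assert growth[-1] > 10 ** 5           # ... without bound
```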
probability
<p>It's a standard exercise to find the Fourier transform of the Gaussian $e^{-x^2}$ and show that it is equal to itself. Although it is computationally straightforward, this has always somewhat surprised me. My intuition for the Gaussian is as the integrand of normal distributions, and my intuition for Fourier transforms is as a means to extract frequencies from a function. They seem unrelated, save for their use of the exponential function.</p> <p>How should I understand this property of the Gaussian, or in general, eigenfunctions of the Fourier transform? The Hermite polynomials are eigenfunctions of the Fourier transform and play a central role in probability. Is this an instance of a deeper connection between probability and harmonic analysis?</p>
<p>The generalization of this phenomenon, from a probabilistic standpoint, is the Wiener-Askey Polynomial Chaos.</p> <p>In general, there is a connection between orthogonal polynomial families in the Askey scheme and probability distribution/mass functions.</p> <p>Orthogonality of these polynomials can be shown in an inner product space using a weighting function -- a weight function that typically happens to be, within a scale factor, the pdf/pmf of some distribution.</p> <p>In other words, we can use these orthogonal polynomials as a basis for a series expansion of a random variable:</p> <p>$$z = \sum_{i=0}^\infty z_i \Phi_i(\zeta).$$</p> <p>The random variable $\zeta$ belongs to a distribution we choose, and the orthogonal polynomial family to which $\Phi$ belongs follows from this choice.</p> <p>The <em>deterministic</em> coefficients $z_i$ can be computed easily by using Galerkin's method.</p> <p>So, yes. There is a very deep connection in this regard, and it is extremely powerful, particularly in engineering applications. Strangely, many mathematicians do not know this relationship!</p> <hr> <p>See also: <a href="http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA460654" rel="noreferrer">http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA460654</a> and the Cameron-Martin Theorem.</p>
<p>There's a simple reason why taking a Fourier-like transform of a Gaussian-like function yields another Gaussian-like function. Consider the property $$\mathcal{T}[f^\prime](\xi) \propto \xi \hat{f}(\xi)$$ of a transform $\mathcal{T}$. We will call an invertible transform $\mathcal{F}$ "Fourier-like" if both it and its inverse have this property.</p> <p>Define a "Gaussian-like" function as one with the form $$f(x) = A e^{a x^2}.$$ Functions with this form satisfy $$f^\prime(x) \propto x f(x).$$ Taking a Fourier-like transform of each side yields $$\xi \hat{f}(\xi) \propto \hat{f}^\prime(\xi).$$ This has the same form as the previous equation, so it is not surprising that its solutions have the Gaussian-like form $$\hat{f}(\xi) = B e^{b \xi^2}.$$</p>
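<p>The computational fact itself is cheap to check numerically. With the convention $\hat f(\xi)=\int f(x)e^{-i\xi x}\,dx$, one gets $\widehat{e^{-x^2}}(\xi)=\sqrt{\pi}\,e^{-\xi^2/4}$, i.e. a Gaussian again (whether it is <em>exactly</em> self-reciprocal depends on the normalisation convention). A sketch, using a plain Riemann sum since the Gaussian decay makes truncation negligible:</p>

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
g = np.exp(-x ** 2)

for xi in [0.0, 0.5, 1.0, 2.0, 3.0]:
    # Riemann sum for the integral of exp(-x**2) * exp(-1j*xi*x)
    val = np.sum(g * np.exp(-1j * xi * x)) * dx
    expected = np.sqrt(np.pi) * np.exp(-xi ** 2 / 4)
    assert abs(val - expected) < 1e-6
```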
geometry
<p>Having a slight parenting anxiety attack and I hate teaching my son something incorrect.</p> <p>Wiktionary tells me that a Hexagon is a polygon with $6$ sides and $6$ angles.</p> <p>Why the $6$ angle requirement? This has me confused.</p> <p>Would the shape below be also considered a hexagon?</p> <p><img src="https://i.sstatic.net/77S2Q.gif" alt="http://mathcentral.uregina.ca/QQ/database/QQ.09.10/h/emma2.1.gif"></p>
<p>Yes, it would still be considered a hexagon. The reason why we require "$6$ angles" is probably just because you don't want the $6$ sides to cross and create "more than $6$ angles". The hexagons you are probably thinking of (the ones with all angles and all sides equal) would be regular hexagons.</p> <p>Hope that helps.</p>
<p>Yes. It also has six angles, but one of them is greater than $180^{\circ}$. </p>
combinatorics
<p>The statement is simply that the sequence $\left(1+\frac{1}{n}\right)^n$ is increasing. </p> <p>Since the numbers $n^m$ have quite natural combinatorial interpretations, it makes me wonder if a combinatorial proof exists, but I haven't been able to find one.</p> <p>For example, if we let $S_{n,m}$ denote the set of function $\{1, \dots, n\} \to \{1, \dots, m\}$, then a proof would follow from the construction of an injection $S_{2n+1,n+1} \hookrightarrow S_{n, n} \times S_{n+1,n+2} $, or of a surjection going the other way.</p>
<p>I managed to come up with a combinatorial proof of the statement in t.b.'s comment. I would guess a similar argument solves the original problem. Anyway, let $n$ be a positive integer. We will prove that $$ n^{2n+1} &gt; (n-1)^n(n+1)^{n+1}.$$ After multiplying through by $n$ and performing some trivial (though slightly arcane) manipulations, we see this is equivalent to proving $$(n^2)^{n+1} &gt; (n^2-1)^{n+1} + (n+1) \cdot (n^2-1)^n.$$ Now for the combinatorics (see problem statement for notation). The LHS counts the functions in $S_{n+1,n^2}$. The RHS counts only the functions in $S_{n+1,n^2}$ which do <em>not</em> take the value $1$ (via the first term) or else take the value $1$ <em>exactly once</em> (via the second term). And, well, that's it!!</p> <hr> <p><strong>Added:</strong> Okay I think I figured out how to do the original problem in the same sort of way. The combinatorial step is a bit clumsier for some reason. Maybe someone else can see a better way? As before, let $n$ be a positive integer. We will prove that $$(n-1)^{n-1}(n+1)^n &gt; n^{2n-1}.$$ Multiplying through by $n-1$, this becomes $$(n^2 -1)^n &gt; (n^2)^n - n \cdot (n^2)^{n-1}$$ or, equivalently, $$(n^2 -1)^n + n \cdot (n^2)^{n-1} &gt; (n^2)^n.$$ This last bound can be proven combinatorially. On the RHS we have all the functions in $S_{n,n^2}$. The first term on the LHS counts the functions in $S_{n,n^2}$ which never take the value $1$. Let $S$ be the set of functions in $S_{n,n^2}$ which <em>do</em> take the value $1$. We need to prove $S$ has fewer than $n \cdot (n^2)^{n-1}$ elements. We can inject $S$ into $\{1,\ldots,n\} \times S_{n-1,n^2}$ by sending $f \in S$ to the pair $(x,f_x)$ where $x$ is the smallest member of $\{1,\ldots,n\}$ with $f(x) = 1$, and $f_x$ is obtained from $f$ by "skipping" over $x$ (in order to get a function with one less element in the domain).
In order to see the inequality is strict, note that (I think maybe assuming $n \geq 2$) $(n,1)$ is not in the range of this injection (here $1$ is the constant function). This completes the proof.</p>
<p>This isn't combinatorial, but my favorite proof is from N. S. Mendelsohn, "An application of a famous inequality", Amer. Math. Monthly 58 (1951), 563.</p> <p>The proof uses the arithmetic-geometric mean inequality (AGMI) in the form $(\frac{1}{n}\sum_{i=1}^n a_i)^n \ge \prod_{i=1}^n a_i $ with the inequality being strict if not all the $a_i$ are equal.</p> <p>Consider $n$ values of $1+1/n$ and 1 value of 1. By the AGMI, $((n+2)/(n+1))^{n+1} &gt; (1+1/n)^n$, or $(1+1/(n+1))^{n+1} &gt; (1+1/n)^n$.</p> <p>Consider $n$ values of $1-1/n$ and 1 value of 1. By the AGMI, $(n/(n+1))^{n+1} &gt; (1-1/n)^n$ or $(1+1/n)^{n+1} &lt; (1+1/(n-1))^n$.</p> <p>An interesting note is that this uses the version of the AGMI in which all but one of the values are the same, which can be shown to be implied by Bernoulli's inequality ($(1+x)^n \ge 1+nx$ for $x \ge 0$ and integer $n \ge 1$).</p>
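<p>Both monotonicity statements — $(1+1/n)^n$ increasing and $(1+1/n)^{n+1}$ decreasing — are cheap to verify in exact rational arithmetic (an illustrative sketch, not a proof):</p>

```python
from fractions import Fraction

def lower(n):   # (1 + 1/n)**n, the increasing sequence
    return (1 + Fraction(1, n)) ** n

def upper(n):   # (1 + 1/n)**(n + 1), the decreasing sequence
    return (1 + Fraction(1, n)) ** (n + 1)

for n in range(1, 25):
    # the two sequences squeeze toward e from below and above
    assert lower(n) < lower(n + 1) < upper(n + 1) < upper(n)
```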
differentiation
<p>I'm asked to solve this using calculus:</p> <blockquote> <p>Let $$ f(x) = ax^2 + bx +c .$$ If $ f(1) = 3 $, $f(2) = 7$, $f(3) = 13$, then find $a$, $b$, and $f(0)$.</p> </blockquote> <p>I know I can solve this by solving three equations simultaneously. I can also solve it by Gaussian or Gauss-Jordan elimination on the augmented matrix. But I'm wondering whether there is any other method to solve this.</p> <p>Solving by any method, it turns out that $a = b = c = 1$.</p>
<p>This problem has nothing to do with calculus. Knowing about the symmetries of the quadratic function one can proceed as follows:</p> <p>Make $(2,7)$ your origin. This amounts to introducing the function $$g(y):=f(2+y)-7\ .$$ Then $$g(y)=a'y^2+b'y+c', \qquad g(-1)=-4,\quad g(0)=0,\quad g(1)=6\ ,$$ and therefore $$c'=0,\qquad 2a'=g(1)+g(-1)=2,\qquad 2b'=g(1)-g(-1)=10\ .$$ It follows that $g(y)=y^2+5y$, so that $$f(x)=g(x-2)+7=(x-2)^2+5(x-2)+7=x^2+x+1\ .$$</p>
<p><strong>Hint</strong> </p> <p>We can use the following convenient geometric fact about the graphs of polynomials of degree $\leq 2$:</p> <p><strong>Lemma</strong> The slope $m$ of the secant line between points $(x_1, y_1)$ and $(x_2, y_2)$ on the graph of a polynomial function $f(x) := a x^2 + b x + c$ of degree $\leq 2$ is the slope of the tangent line to the graph of $f$ at the midpoint of the interval $[x_1, x_2]$, that is, $m = f'\left(\frac{x_1 + x_2}{2}\right)$.</p> <p><strong>Proof</strong> By definition, $$m := \frac{f(x_2) - f(x_1)}{x_2 - x_1} = \frac{(a x_2^2 + b x_2 + c) - (a x_1^2 + b x_1 + c)}{x_2 - x_1} = a (x_1 + x_2) + b,$$ but we can write this as $$2a\left(\frac{x_1 + x_2}{2}\right) + b = f'\left(\frac{x_1 + x_2}{2}\right) .$$ QED.</p> <p>This lemma immediately gives $$f'\left(\tfrac{3}{2}\right) = f(2) - f(1), \qquad f'(2) = \frac{f(3) - f(1)}{2}, \qquad f'\left(\tfrac{5}{2}\right) = f(3) - f(2) .$$ Now, $f'$ is itself linear, so it satisfies the same property, and hence we can recover the second derivative of $f$ at a convenient point.</p> <blockquote class="spoiler"> <p>Computing gives $$f''(2) = f'\left(\tfrac{5}{2}\right) - f'\left(\tfrac{3}{2}\right) = [f(3) - f(2)] - [f(2) - f(1)] = f(3) - 2 f(2) + f(1) .$$ We now have $f(2), f'(2), f''(2)$, which lets us recover $f(x)$ from its Taylor polynomial at $x = 2$: $$f(2) + f'(2) (x - 2) + \tfrac{1}{2} f''(2) (x - 2)^2 .$$ Substituting and simplifying gives $f(x) = x^2 + x + 1$.</p> </blockquote>
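<p>The lemma turns the three values into derivative data, so the whole computation fits in a few lines (an illustrative sketch mirroring the steps in the spoiler):</p>

```python
f1, f2, f3 = 3, 7, 13        # f(1), f(2), f(3)

fp2 = (f3 - f1) / 2          # f'(2): slope of the secant over [1, 3], by the lemma
fpp = f3 - 2 * f2 + f1       # f''(2) = f''(x) everywhere (second difference)

def f(x):
    # Taylor polynomial at x = 2; exact, since deg f <= 2
    return f2 + fp2 * (x - 2) + fpp / 2 * (x - 2) ** 2

assert [f(1), f(2), f(3)] == [3.0, 7.0, 13.0]   # reproduces the data
assert f(0) == 1.0                              # and a = fpp/2 = 1, b = fp2 - 2*fpp = 1
```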
differentiation
<p>Say I was trying to find the derivative of <span class="math-container">$x^2$</span> using differentiation from first principles. The usual argument would go something like this:</p> <blockquote> <p>If <span class="math-container">$f(x)=x^2$</span>, then <span class="math-container">\begin{align} f'(x) &amp;= \lim_{h \to 0}\frac{(x+h)^2-x^2}{h} \\ &amp;= \lim_{h \to 0}\frac{2hx+h^2}{h} \\ &amp;= \lim_{h \to 0} 2x+h \end{align}</span> As <span class="math-container">$h$</span> approaches <span class="math-container">$0$</span>, <span class="math-container">$2x+h$</span> approaches <span class="math-container">$2x$</span>, so <span class="math-container">$f'(x)=2x$</span>.</p> </blockquote> <p>Throughout this argument, I assumed that <span class="math-container">$$ \lim_{h \to 0}\frac{f(x+h)-f(x)}{h} $$</span> was actually a meaningful object—that the limit actually existed. I don't really understand what justifies this assumption. To me, sometimes the assumption that an object is well-defined can lead you to draw incorrect conclusions. For example, assuming that <span class="math-container">$\log(0)$</span> makes any sense, we can conclude that <span class="math-container">$$ \log(0)=\log(0)+\log(0) \implies \log(0)=0 \, . $$</span> So the <em>assumption</em> that <span class="math-container">$\log(0)$</span> represented anything meaningful led us to incorrectly conclude that it was equal to <span class="math-container">$0$</span>. Often, to prove that a limit exists, we manipulate it until we can write it in a familiar form. This can be seen in the proofs of the chain rule and product rule. But it often seems that that manipulation can only be justified <em>if</em> we know the limit exists in the first place! 
So what is really going on here?</p> <hr /> <p>For another example, the chain rule is often stated as:</p> <blockquote> <p>Suppose that <span class="math-container">$g$</span> is differentiable at <span class="math-container">$x$</span>, and <span class="math-container">$f$</span> is differentiable at <span class="math-container">$g(x)$</span>. Then, <strong><span class="math-container">$(f \circ g)$</span> is differentiable at <span class="math-container">$x$</span>,</strong> and <span class="math-container">$$ (f \circ g)'(x) = f'(g(x))g'(x) $$</span></p> </blockquote> <p>If the proof that <span class="math-container">$(f \circ g)$</span> is differentiable at <span class="math-container">$x$</span> simply amounts to computing the derivative using the limit definition, then again I feel unsatisfied. Doesn't this computation again make the assumption that <span class="math-container">$(f \circ g)'(x)$</span> makes sense in the first place?</p>
<p>You're correct that it doesn't really make sense to write <span class="math-container">$\lim\limits_{h\to 0}\frac{f(x+h)-f(x)}{h}$</span> unless we already know the limit exists, but it's really just a grammar issue. To be precise, you could first say that the difference quotient can be re-written <span class="math-container">$\frac{f(x+h)-f(x)}{h}=2x+h$</span>, and then use the fact that <span class="math-container">$\lim\limits_{h\to 0}x=x$</span> and <span class="math-container">$\lim\limits_{h\to 0}h=0$</span> as well as the constant-multiple law and the sum law for limits.</p> <p>Adding to the last sentence: most of the familiar properties of limits are written &quot;backwards&quot; like this. I.e., the &quot;limit sum law&quot; says <span class="math-container">$$\lim\limits_{x\to c}(f(x)+g(x))=\lim\limits_{x\to c}f(x)+\lim\limits_{x\to c}g(x)$$</span> <em>as long as <span class="math-container">$\lim\limits_{x\to c}f(x)$</span> and <span class="math-container">$\lim\limits_{x\to c}g(x)$</span> exist</em>. Of course, if they don't exist, then the equation we just wrote is meaningless, so really we should begin with that assertion.</p> <p>In practice, one can usually be a bit casual here, if for no other reason than to save word count. In an intro analysis class, though, you would probably want to be as careful as you reasonably can.</p>
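<p>(A small illustration of this ordering, using exact rational arithmetic from the standard library: the simplification of the difference quotient is a purely algebraic identity, valid for every $h \neq 0$, before any limit is mentioned.)</p>

```python
from fractions import Fraction

def f(x):
    return x * x

# For h != 0 the difference quotient is *identically* 2x + h --
# an algebraic fact about f, independent of any limit.
for x in (Fraction(1, 3), Fraction(-7, 2), Fraction(5)):
    for h in (Fraction(1, 10), Fraction(-1, 1000), Fraction(3, 7)):
        assert (f(x + h) - f(x)) / h == 2 * x + h

# Only after this rewriting do we invoke limit laws: lim_{h->0} (2x + h) = 2x.
```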
<p>The other answers are perfectly fine; here is just a perspective that can save your day in situations in which the existence of the limit is actually a critical point.</p> <p>The crucial definition is the one of limsup and liminf: these are always well defined, and all you have to know at the moment are the following two properties:</p> <ol> <li><span class="math-container">$\liminf_{x \to x_0} f(x) \le \limsup_{x\to x_0} f(x) $</span></li> <li>The limit of <span class="math-container">$f$</span> exists if and only if <span class="math-container">$\liminf_{x \to x_0} f(x) = \limsup_{x\to x_0} f(x) $</span>, and in this case the limit agrees with this value.</li> </ol> <p>Now imagine you do your computation twice: first you compute the liminf, then the limsup. In both computations, as soon as you arrive at something that actually has a limit (like <span class="math-container">$2x+h$</span>), because of property (2) you can forget about the inf/sup story and just compute the limit.</p> <p>Since with some manipulations you arrive at something that actually has a limit, both calculations will give the same result and, because of property (2) again, the limit exists and coincides with the value you just computed.</p> <p>Now this is not really the thing you should do if you are doing introductory analysis and you don't know liminf and limsup: the formal properties of these two are slightly different from the formal properties of lim, and you could end up with an error. But as long as you don't &quot;touch&quot; the limit, and you just make some manipulations inside the limit, the same argument will carry through: if you end up with a well-defined result, it is the limit :)</p>
matrices
<p>Can <span class="math-container">$\det(A + B)$</span> be expressed in terms of <span class="math-container">$\det(A), \det(B), n$</span>, where <span class="math-container">$A,B$</span> are <span class="math-container">$n\times n$</span> matrices?</p> <p>I made the edit to allow <span class="math-container">$n$</span> to be factored in.</p>
<p>When <span class="math-container">$n=2$</span>, and supposing <span class="math-container">$A$</span> is invertible, you can easily show that</p> <p><span class="math-container">$\det(A+B)=\det A+\det B+\det A\,\cdot \mathrm{Tr}(A^{-1}B)$</span>.</p> <hr /> <p>Let me give a general method to find the determinant of the sum of two matrices <span class="math-container">$A,B$</span> with <span class="math-container">$A$</span> invertible and symmetric (the following result might also apply to the non-symmetric case; I might verify that later...). I am a physicist, so I will use index notation, <span class="math-container">$A_{ij}$</span> and <span class="math-container">$B_{ij}$</span>, with <span class="math-container">$i,j=1,2,\cdots,n$</span>. Let <span class="math-container">$A^{ij}$</span> denote the inverse of <span class="math-container">$A_{ij}$</span>, such that <span class="math-container">$A^{il}A_{lj}=\delta^i_j=A_{jl}A^{li}$</span>. We can use <span class="math-container">$A_{ij}$</span> to lower indices, and its inverse to raise them. For example, <span class="math-container">$A^{il}B_{lj}=B^i{}_j$</span>. Here and in the following, the Einstein summation rule is assumed.</p> <p>Let <span class="math-container">$\epsilon^{i_1\cdots i_n}$</span> be the totally antisymmetric tensor, with <span class="math-container">$\epsilon^{1\cdots n}=1$</span>. Define a new tensor <span class="math-container">$\tilde\epsilon^{i_1\cdots i_n}=\epsilon^{i_1\cdots i_n}/\sqrt{|\det A|}$</span>. We can use <span class="math-container">$A_{ij}$</span> to lower the indices of <span class="math-container">$\tilde\epsilon^{i_1\cdots i_n}$</span>, and define <span class="math-container">$\tilde\epsilon_{i_1\cdots i_n}=A_{i_1j_1}\cdots A_{i_nj_n}\tilde\epsilon^{j_1\cdots j_n}$</span>. 
Then there is a useful property: <span class="math-container">$$ \tilde\epsilon_{i_1\cdots i_kl_{k+1}\cdots l_n}\tilde\epsilon^{j_1\cdots j_kl_{k+1}\cdots l_n}=(-1)^s\,k!\,(n-k)!\,\delta^{[j_1}_{i_1}\cdots\delta^{j_k]}_{i_k}, $$</span> where the square brackets <span class="math-container">$[]$</span> imply the antisymmetrization of the indices enclosed by them. <span class="math-container">$s$</span> is the number of negative elements of <span class="math-container">$A_{ij}$</span> after it has been diagonalized.</p> <p>So now the determinant of <span class="math-container">$A+B$</span> can be obtained in the following way <span class="math-container">$$ \det(A+B)=\frac{1}{n!}\epsilon^{i_1\cdots i_n}\epsilon^{j_1\cdots j_n}(A+B)_{i_1j_1}\cdots(A+B)_{i_nj_n} $$</span> <span class="math-container">$$ =\frac{(-1)^s\det A}{n!}\tilde\epsilon^{i_1\cdots i_n}\tilde\epsilon^{j_1\cdots j_n}\sum_{k=0}^n C_n^kA_{i_1j_1}\cdots A_{i_kj_k}B_{i_{k+1}j_{k+1}}\cdots B_{i_nj_n} $$</span> <span class="math-container">$$ =\frac{(-1)^s\det A}{n!}\sum_{k=0}^nC_n^k\tilde\epsilon^{i_1\cdots i_ki_{k+1}\cdots i_n}\tilde\epsilon^{j_1\cdots j_k}{}_{i_{k+1}\cdots i_n}B_{i_{k+1}j_{k+1}}\cdots B_{i_nj_n} $$</span> <span class="math-container">$$ =\frac{(-1)^s\det A}{n!}\sum_{k=0}^nC_n^k\tilde\epsilon^{i_1\cdots i_ki_{k+1}\cdots i_n}\tilde\epsilon_{j_1\cdots j_ki_{k+1}\cdots i_n}B_{i_{k+1}}{}^{j_{k+1}}\cdots B_{i_n}{}^{j_n} $$</span> <span class="math-container">$$ =\frac{\det A}{n!}\sum_{k=0}^nC_n^kk!(n-k)!B_{i_{k+1}}{}^{[i_{k+1}}\cdots B_{i_n}{}^{i_n]} $$</span> <span class="math-container">$$ =\det A\sum_{k=0}^nB_{i_{k+1}}{}^{[i_{k+1}}\cdots B_{i_n}{}^{i_n]} $$</span> <span class="math-container">$$ =\det A+\det A\sum_{k=1}^{n-1}B_{i_{k+1}}{}^{[i_{k+1}}\cdots B_{i_n}{}^{i_n]}+\det B. $$</span></p> <p>This reproduces the result for <span class="math-container">$n=2$</span>. 
An interesting result for physicists is when <span class="math-container">$n=4$</span>,</p> <p><span class="math-container">\begin{split} \det(A+B)=&amp;\det A+\det A\cdot\text{Tr}(A^{-1}B)+\frac{\det A}{2}\{[\text{Tr}(A^{-1}B)]^2-\text{Tr}(BA^{-1}BA^{-1})\}\\ &amp;+\frac{\det A}{6}\{[\text{Tr}(BA^{-1})]^3-3\text{Tr}(BA^{-1})\text{Tr}(BA^{-1}BA^{-1})+2\text{Tr}(BA^{-1}BA^{-1}BA^{-1})\}\\ &amp;+\det B. \end{split}</span></p>
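<p>(The $n=2$ identity above is easy to check with exact rational arithmetic. Here is a minimal plain-Python sketch; the <code>2x2</code> helper functions are made up for this illustration, not from any library:)</p>

```python
from fractions import Fraction

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inv2(M):
    d = det2(M)
    return [[ M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d,  M[0][0] / d]]

def mul2(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr2(M):
    return M[0][0] + M[1][1]

F = Fraction
A = [[F(2), F(1)], [F(1), F(3)]]   # invertible and symmetric
B = [[F(1), F(4)], [F(2), F(1)]]
S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

# det(A + B) = det A + det B + det A * Tr(A^{-1} B)
assert det2(S) == det2(A) + det2(B) + det2(A) * tr2(mul2(inv2(A), B))
```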
<p>When $n\ge2$, the answer is no. To illustrate, consider $$ A=I_n,\quad B_1=\pmatrix{1&amp;1\\ 0&amp;0}\oplus0,\quad B_2=\pmatrix{1&amp;1\\ 1&amp;1}\oplus0. $$ If $\det(A+B)=f\left(\det(A),\det(B),n\right)$ for some function $f$, you should get $\det(A+B_1)=f(1,0,n)=\det(A+B_2)$. But in fact, $\det(A+B_1)=2\ne3=\det(A+B_2)$ over any field.</p>
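<p>(The counterexample is easy to verify mechanically. A small plain-Python sketch with a naive Laplace-expansion determinant; the helper names are mine:)</p>

```python
def det(M):
    # Laplace expansion along the first row (fine for tiny matrices)
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

n = 3
A  = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # I_3
B1 = [[1, 1, 0], [0, 0, 0], [0, 0, 0]]
B2 = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]

add = lambda X, Y: [[X[i][j] + Y[i][j] for j in range(n)] for i in range(n)]

# Same inputs (det A, det B, n) for any hypothetical f, different outputs:
assert det(B1) == det(B2) == 0
assert det(add(A, B1)) == 2
assert det(add(A, B2)) == 3
```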
probability
<p>How should I understand the difference or relationship between binomial and Bernoulli distribution?</p>
<p>A Bernoulli random variable has two possible outcomes: $0$ or $1$. A binomial distribution is the sum of <strong>independent</strong> and <strong>identically</strong> distributed Bernoulli random variables.</p> <p>So, for example, say I have a coin, and, when tossed, the probability it lands heads is $p$. So the probability that it lands tails is $1-p$ (there are no other possible outcomes for the coin toss). If the coin lands heads, you win one dollar. If the coin lands tails, you win nothing.</p> <p>For a <em>single</em> coin toss, the probability you win one dollar is $p$. The random variable that represents your winnings after one coin toss is a Bernoulli random variable.</p> <p>Now, if you toss the coin $5$ times, your winnings could be any whole number of dollars from zero dollars to five dollars, inclusive. The probability that you win five dollars is $p^5$, because each coin toss is independent of the others, and for each coin toss the probability of heads is $p$.</p> <p>What is the probability that you win <em>exactly</em> three dollars in five tosses? That would require you to toss the coin five times, getting exactly three heads and two tails. This can be achieved with probability $\binom{5}{3} p^3 (1-p)^2$. And, in general, if there are $n$ Bernoulli trials, then the sum of those trials is binomially distributed with parameters $n$ and $p$.</p> <p>Note that a binomial random variable with parameter $n = 1$ is equivalent to a Bernoulli random variable, i.e. there is only one trial.</p>
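<p>(The "binomial = sum of i.i.d. Bernoullis" fact can be checked exactly: convolving the Bernoulli pmf with itself $n$ times reproduces the binomial formula. A sketch in plain Python; the helper names are mine:)</p>

```python
from fractions import Fraction
from math import comb

p = Fraction(1, 3)
bern = {0: 1 - p, 1: p}          # pmf of a single Bernoulli(p) trial

def convolve(d1, d2):
    # pmf of the sum of two independent integer-valued random variables
    out = {}
    for a, pa in d1.items():
        for b, pb in d2.items():
            out[a + b] = out.get(a + b, 0) + pa * pb
    return out

n = 5
pmf = {0: Fraction(1)}           # the sum of zero trials is 0
for _ in range(n):
    pmf = convolve(pmf, bern)

# Matches the Binomial(n, p) formula exactly:
for k in range(n + 1):
    assert pmf[k] == comb(n, k) * p ** k * (1 - p) ** (n - k)
```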
<p>All Bernoulli distributions are binomial distributions, but most binomial distributions are not Bernoulli distributions.</p> <p>If $$ X=\begin{cases} 1 &amp; \text{with probability }p, \\ 0 &amp; \text{with probability }1-p, \end{cases} $$ then the probability distribution of the random variable $X$ is a Bernoulli distribution.</p> <p>If $X=X_1+\cdots+X_n$ and each of $X_1,\ldots,X_n$ has a Bernoulli distribution with the same value of $p$ and they are independent, then $X$ has a binomial distribution, and the possible values of $X$ are $\{0,1,2,3,\ldots,n\}$. If $n=1$ then that binomial distribution is a Bernoulli distribution.</p>
geometry
<p>Coming from a physics background, my understanding of geometry (in a very generic sense) is that it involves taking a space and adding some extra structure to it. The extra structure takes some local data about the space as its input and outputs answers to local or global questions about the space + structure. We can use it to probe either the structure itself or the underlying space it lives on. For example, we can take a smooth manifold and add a Riemannian metric and a connection, and then we can ask about distances between points, curvature, geodesics, etc. In symplectic geometry, we take an even-dimensional manifold and add a symplectic form, and then we can ask about... well, honestly, I don't know. But I'm sure there is interesting stuff you can ask.</p> <p>Knowing very little about algebraic geometry, I am wondering what the &quot;geometry&quot; part is. I am assuming that the spaces in this case are algebraic varieties, but what is the extra structure that gets added? What sorts of questions can we answer with this extra structure that we couldn't answer without it?</p> <p>I have to guess that this is a little more complicated than just taking a manifold and adding a metric, otherwise I would expect to be able to find this explained in a relatively straightforward way somewhere. If it turns out the answer is &quot;it's hard to explain, and you just need to read an algebraic geometry text,&quot; then that's fine. In that case, it would be interesting to try to get a sense of <em>why</em> it's more complicated. (I have a guess, which is that varieties tend to be a lot less tame than manifolds, so you have to jump through more technical hoops to tack on extra stuff to them, but that's pure speculation.)</p>
<p>This is a big complicated question and many different kinds of answers could be given at many different levels of sophistication. The very short answer is that the geometry in algebraic geometry comes from considering only polynomial functions as the meaningful functions. Here is essentially the simplest nontrivial example I know of:</p> <p>Consider the intersection of the unit circle <span class="math-container">$\{ x^2 + y^2 = 1 \}$</span> with a vertical line <span class="math-container">$\{ x = c \}$</span>, for different values of the parameter <span class="math-container">$c$</span>. If <span class="math-container">$-1 &lt; c &lt; 1$</span> we get two intersection points. If <span class="math-container">$c &gt; 1$</span> or <span class="math-container">$c &lt; -1$</span> we get no (real) intersection points. But something special happens at <span class="math-container">$c = \pm 1$</span>: in this case the vertical lines <span class="math-container">$x = \pm 1$</span> are tangent to the circle. This tangency is invisible if we just consider the &quot;set-theoretic intersection&quot; of the circle and the line, which consists of a single point; for various reasons (e.g. to make <a href="https://en.wikipedia.org/wiki/B%C3%A9zout%27s_theorem" rel="noreferrer">Bezout's theorem</a> true) we'd like a way to formalize the intuition that this intersection has &quot;multiplicity two&quot; in some sense, and so is geometrically more interesting than just a single point.</p> <p>This can be done by taking what is called the <a href="https://en.wikipedia.org/wiki/Scheme-theoretic_intersection" rel="noreferrer">scheme-theoretic intersection</a>. This is a complicated name for a simple idea: instead of asking directly what the intersection is, we ask what the ring of <em>polynomial functions</em> on the intersection is. 
The ring of polynomial functions on the unit circle is the <a href="https://en.wikipedia.org/wiki/Quotient_ring" rel="noreferrer">quotient ring</a> <span class="math-container">$\mathbb{R}[x, y]/(x^2 + y^2 - 1)$</span>, while the ring of polynomial functions on the vertical line is the quotient ring <span class="math-container">$\mathbb{R}[x, y]/(x - c) \cong \mathbb{R}[y]$</span>. It turns out that the ring of polynomial functions on the intersection is the quotient by both of the defining polynomials, which gives, say at <span class="math-container">$x = 1$</span> to be concrete,</p> <p><span class="math-container">$$\mathbb{R}[x, y]/(x^2 + y^2 - 1, x - 1) \cong \mathbb{R}[y]/y^2.$$</span></p>
In other words it is saying, roughly speaking, that the intersection is &quot;two points infinitesimally close together, connected by an infinitesimally short vector.&quot;</p> <p>Adding nilpotents to geometry takes some getting used to but it turns out to be very useful; among other things it is possible to define tangent spaces in algebraic geometry this way (<a href="https://en.wikipedia.org/wiki/Zariski_tangent_space" rel="noreferrer">Zariski tangent spaces</a>), hence to define <a href="https://en.wikipedia.org/wiki/Lie_algebra" rel="noreferrer">Lie algebras</a> of <a href="https://en.wikipedia.org/wiki/Algebraic_group" rel="noreferrer">algebraic groups</a> in a purely algebraic way.</p> <p>So, this is one story you can tell about what kind of geometry algebraic geometry captures, and there are many others, for example the rich story of <a href="https://en.wikipedia.org/wiki/Arithmetic_geometry" rel="noreferrer">arithmetic geometry</a> and its applications to number theory. It's difficult to say anything remotely complete here because algebraic geometry is <em>absurdly</em> general and the sorts of geometry it is capable of capturing veer off in wildly different directions depending on what you're interested in.</p>
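<p>(One can play with the ring $\mathbb{R}[y]/y^2$ directly: its elements are "dual numbers" $a + b\varepsilon$ with $\varepsilon^2 = 0$, and arithmetic in it records exactly a value together with a first derivative, as described above. A toy Python sketch; the class is mine, written just for this illustration:)</p>

```python
class Dual:
    """Elements a + b*eps of R[y]/(y^2): eps is nilpotent, eps * eps == 0."""
    def __init__(self, a, b=0):
        self.a, self.b = a, b
    def __add__(self, o):
        return Dual(self.a + o.a, self.b + o.b)
    def __mul__(self, o):
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps  -- the eps^2 term dies
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    def __eq__(self, o):
        return (self.a, self.b) == (o.a, o.b)

eps = Dual(0, 1)
assert eps * eps == Dual(0, 0)      # a nontrivial nilpotent

# Evaluating a polynomial at 1 + eps records its value AND derivative at 1:
def f(t):                            # f(t) = t^3, say
    return t * t * t

val = f(Dual(1, 1))                  # f(1 + eps) = f(1) + f'(1) eps
assert (val.a, val.b) == (1, 3)
```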
<p>A classical manifold is a space that locally looks like <span class="math-container">$\mathbb{R}^n$</span>; or, via results like the Whitney embedding theorem, a suitably nice subspace of some <span class="math-container">$\mathbb{R}^N$</span>. If &quot;looks like&quot; involves some notion of smoothness, for example, then we can expand into differential geometry and talk about constructions like tangent spaces and differential forms. If we stick to just continuity, then we can still work with some constructions like homology and cohomology (just not, say, de Rham cohomology), and we can deal with more pathological spaces and functions between them.</p> <p>A natural question to ask, then, is what's so special about <span class="math-container">$\mathbb{R}^n$</span>? We can consider spaces that locally look like an arbitrary Banach space, for example. (I don't think this is a particularly popular approach, at least, at the undergrad/early grad school level, but Abraham, Marsden, and Ratiu works in this category.) The starting point of algebraic geometry is wanting to deal with spaces over an arbitrary commutative ring. It's not clear how continuity or smoothness should map over to this case, but at the very least polynomials make sense over an arbitrary ring, and we can look at the space like <span class="math-container">$V(f) = \{x\in k^n:\, f(x) = 0\}$</span> for a polynomial <span class="math-container">$f\in k[X_1, \dots, X_n]$</span>. But that's not exactly what we want either; for the important case of <span class="math-container">$k$</span> finite, for example, <span class="math-container">$V(f)$</span> is just a finite collection of points.</p> <p>The analogy that turns out to work is going in the opposite direction, and trying to generalize the idea of functions on a manifold. 
To that end, algebraic geometry works with locally ringed spaces, which are pairs <span class="math-container">$(X, \mathcal{O}_X)$</span> with <span class="math-container">$X$</span> a topological space and <span class="math-container">$\mathcal{O}_X$</span> a sheaf of rings on <span class="math-container">$X$</span> satisfying properties roughly analogous to what you'd expect for, say, smooth functions on a manifold. In rough terms, what you wind up with is a space that locally looks like the spectrum of a commutative ring--- but unlike the case of real manifolds, that ring can vary along the space. That's admittedly a vague analogy, and it takes a lot of technical results to even talk about the resulting object. But if you're familiar with vector bundles, for example, then consider Swan's Theorem: For a smooth, connected, closed manifold <span class="math-container">$X$</span>, the sections functor <span class="math-container">$\Gamma(\cdot)$</span> gives an equivalence between vector bundles over <span class="math-container">$X$</span> and f.g., projective modules over the ring <span class="math-container">$C^\infty(X)$</span>.</p> <p>So, what makes this algebraic thing we've constructed look geometric? Smoothness doesn't make sense outside of <span class="math-container">$\mathbb{R}^n$</span>, but if we're working with polynomials, they have a formal derivative that allows us to do roughly the same thing. More generally, a local ring <span class="math-container">$(R, \mathfrak{m})$</span> has a cotangent space <span class="math-container">$\mathfrak{m}/\mathfrak{m}^2$</span> that's roughly analogous to the cotangent space of a manifold; and with a bit of work, we can get something that at least has some of the formal properties one wants for a tangent or cotangent space. 
Even though the topology we're working with turns out to be much more complicated than the case of manifolds (the Zariski topology, for example, is generally non-Hausdorff), we still have a notion of cohomology (the simplest being Cech cohomology with a sheaf). There's a massive jump in abstraction and technical requirements compared with the more geometric case, but algebraic geometry turns out to be the right extension of more familiar geometry when dealing with things such as, say, number fields.</p>
logic
<p>Often, we find different proofs for certain theorems that, on the surface, seem to be very different but actually use the same fundamental ideas. For example, the topological proof of the infinitude of primes is quite different than the standard one, but the arguments are essentially the same, just using different vocabulary.</p> <p>So, my question is the following:</p> <blockquote> <p>Is there a rigorous notion of "proof isomorphism"?</p> </blockquote>
<p>The question of proof equivalence is quite an old one! In fact, David Hilbert considered adding it (or a similar one) to his celebrated <a href="http://en.wikipedia.org/wiki/Hilbert%27s_problems" rel="nofollow noreferrer">list of open problems</a>, but finally decided to leave it out, so it is sometimes referred to as <a href="http://en.wikipedia.org/wiki/Hilbert%27s_twenty-fourth_problem" rel="nofollow noreferrer">Hilbert's 24th problem</a>.</p> <p>There is a rather well-established field investigating proof equivalence, though definitely no clear answer to either your or Hilbert's question (and it is likely that this is quite out of reach). However, here are some various notions of proof equivalence of increasing strength.</p> <ol> <li><p>Equality w.r.t. <strong>variable re-naming</strong> (also called <span class="math-container">$\alpha$</span>-renaming). This is much too fine: clearly there are proofs that are &quot;morally&quot; the same, but that differ by more than variable re-namings.</p> </li> <li><p>Equality w.r.t. <strong>definition unfolding</strong>. This doesn't solve all of the above problems, but it's clear that if one proof involves a compact definition and the other does not, they should be seen to be the same.</p> </li> <li><p>Equality w.r.t. <a href="http://en.wikipedia.org/wiki/Cut-elimination_theorem" rel="nofollow noreferrer"><strong>cut elimination</strong></a>. This one is much more interesting! For two proofs <span class="math-container">$\Delta$</span>, <span class="math-container">$\Phi$</span>, we say that <span class="math-container">$\Delta\simeq_{\mathrm{cut}}\Phi$</span> if the two proofs are the same (modulo variable re-naming) <em>after having eliminated all cuts</em>. Now a lot of rather different proofs become quite similar, e.g. 
proofs in calculus involving abstract &quot;open/closed&quot; terminology can be reduced to simple proofs involving <span class="math-container">$\epsilon$</span>-<span class="math-container">$\delta$</span> notation. This <em>still</em> isn't satisfactory since, e.g., some hypotheses can be used in different orders, or some useless steps still remain. It's also not clear that this is not <em>too</em> coarse, since sometimes the use of crucial lemmas makes a proof <em>much simpler</em>, and cut-elimination makes all intermediate lemmas &quot;disappear&quot;.</p> </li> <li><p>Equality w.r.t. cut elimination with <strong>commutative cuts</strong>. See, for example, <a href="http://www.lama.univ-savoie.fr/%7Etypes09/slides/types09-slides-45.pdf" rel="nofollow noreferrer">these</a> really nice slides by Clement Houtmann. This might get closer to the &quot;right&quot; notion, though as you can see things start to get a bit subjective at this point. What does it mean to &quot;use the same idea&quot;?</p> </li> </ol> <p>As Bruno mentioned, there is a deep connection between proofs of propositions and certain programs in particular programming languages, so one may re-formulate the question as</p> <blockquote> <p>When are two programs the same?</p> </blockquote> <p>with very fruitful results. 
The conclusion should be that this is a very active area of research in proof theory, with connections to computer science and <a href="http://en.wikipedia.org/wiki/Categorical_logic" rel="nofollow noreferrer">categorical logic</a>.</p> <hr /> <p><strong>Addendum</strong></p> <p>Looking back at this answer a couple of years later, I feel like I should add a couple of more recent research directions around this idea of proof equivalence.</p> <ol> <li><p><a href="https://en.wikipedia.org/wiki/Focused_proof" rel="nofollow noreferrer">Focusing</a> is a proof system that refines the classical sequent presentation of derivations, trying to make irrelevant choices disappear by distinguishing between the &quot;invertible&quot; rules, which can be applied at any time without worry, and the &quot;positive&quot; rules, in which choices do matter. A pretty neat paper explaining the philosophy of this approach is <a href="https://www.sciencedirect.com/science/article/pii/S0168007208000080" rel="nofollow noreferrer">Zeilberger, On the unity of duality</a>.</p> </li> <li><p>Similarly, <a href="https://en.wikipedia.org/wiki/Proof_net" rel="nofollow noreferrer">proof nets</a> try to represent families of proofs in ways that quotient out irrelevant proof search distinctions. I know less about this, but it is certainly related to focusing.</p> </li> </ol>
<p>This <em>proof irrelevance</em> is one of the problems of classic foundations.</p> <p>In <a href="http://en.wikipedia.org/wiki/Type_theory" rel="nofollow">Type Theory</a>, however, we represent mathematical statements as types, which enables us to treat proofs as mathematical objects. This is because of a well-known isomorphism between types and propositions, a.k.a. the <strong>Curry-Howard Correspondence</strong>, which roughly says that to find a proof of a statement <em>A</em> is to find an inhabitant of this type: $$a:A$$ which, from the point of view of logic, can be read '<em>a</em> is a proof of the proposition <em>A</em>'. </p> <p>In this sense, to prove a proposition is to construct an inhabitant of a type, which means that every mathematical proof can be seen as an algorithm. This is related to the "constructive" (intuitionistic) conception of logic where (i) to prove a statement of the form "A and B" is to find a proof of A and a proof of B, (ii) to prove that A implies B is to find a function that converts a proof of A into a proof of B, (iii) every proof that something exists carries with it enough information to exhibit such an object, and so on. Hence equality of elements of a type (proofs) is treated intensionally.</p> <p>Now <a href="http://homotopytypetheory.org/" rel="nofollow">Homotopy Type Theory</a> thinks of types as "homotopical spaces", interpreting, as stated in the comments, the relation of identity $a=b$ between elements (proofs) of the same type (proposition) $a,b: A$ as homotopical equivalence, understood as a path from the point $a$ to the point $b$ in the space $A$. The HoTT book is available for free on the project website.</p>
differentiation
<p>I read somewhere that the gradient vector is defined only for scalar-valued functions and not for vector-valued functions. But the gradient of a vector is definitely defined (correct, right?), so why is the gradient vector of a vector-valued function not defined? Is my understanding incorrect? Is there not a contradiction?</p> <p>I would appreciate a clear clarification.</p> <p>Thank You.</p>
<p>$\def\R{{\bf R}}$It's partly an issue of naming. The gradient is most often defined for scalar fields, but the same idea exists for vector fields - it's called the <a href="http://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant">Jacobian</a>. </p> <p>Taking the gradient of a vector valued function is a perfectly sensible thing to do. You just don't usually call it the gradient.</p> <p>A neat way to think about the gradient is as a higher-order function (i.e. a function whose arguments or return values are functions). Specifically, the gradient operator takes a function between two vector spaces $U$ and $V$, and returns another function which, when evaluated at a point in $U$, gives a linear map between $U$ and $V$.</p> <hr> <p>We can look at an example to get intuition. Consider the scalar field $f:\R^2\to\R$ given by</p> <p>$$f(x,y) = x^2+y^2$$</p> <p>The gradient $g=\nabla f$ is the function on $\R^2$ given by </p> <p>$$g(x,y) = \left(2x, 2y\right)$$</p> <p>We can interpret $(2x,2y)$ as an element of the space of linear maps from $\R^2$ to $\R$. I will denote this space $L(\R^2,\R)$.</p> <p>Therefore $g=\nabla f$ is a function that takes an element of $\R^2$ and returns an element of $L(\R^2,\R)$. Schematically,</p> <p>$$g: \R^2 \to L(\R^2 ,\R)$$</p> <p>This means that $\nabla$ should be interpreted as a higher-order function</p> <p>$$\nabla : (\R^2 \to \R) \to (\R^2 \to L(\R^2, \R))$$</p> <hr> <p>There's nothing special about $\R^2$ and $\R$ here. The construction works for any vector spaces $U$ and $V$, giving</p> <p>$$\nabla : (U\to V) \to (U \to L(U,V))$$</p> <p>A good reference for this way of thinking about the gradient is Spivak's book <a href="http://www.amazon.co.uk/Calculus-On-Manifolds-Michael-Spivak/dp/0805390219/ref=sr_1_1?ie=UTF8&amp;qid=1351432069&amp;sr=8-1">Calculus on Manifolds</a>.</p>
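<p>(To make the "gradient generalizes to the Jacobian" point concrete, here is a minimal numerical sketch in plain Python using central differences; the helper function is my own, not from any particular library:)</p>

```python
def jacobian(f, x, h=1e-6):
    """Numerical Jacobian of f: R^n -> R^m at the point x."""
    n, m = len(x), len(f(x))
    J = [[0.0] * n for _ in range(m)]
    for k in range(n):
        xp = list(x); xp[k] += h
        xm = list(x); xm[k] -= h
        fp, fm = f(xp), f(xm)
        for i in range(m):
            J[i][k] = (fp[i] - fm[i]) / (2 * h)
    return J

# Scalar field f(x, y) = x^2 + y^2: the Jacobian is a 1x2 row -- the gradient.
grad = jacobian(lambda v: [v[0] ** 2 + v[1] ** 2], [1.0, 2.0])
assert all(abs(g - e) < 1e-4 for g, e in zip(grad[0], [2.0, 4.0]))

# Vector field F(x, y) = (x*y, x + y): the Jacobian is a 2x2 matrix.
J = jacobian(lambda v: [v[0] * v[1], v[0] + v[1]], [1.0, 2.0])
expected = [[2.0, 1.0], [1.0, 1.0]]
assert all(abs(J[i][j] - expected[i][j]) < 1e-4
           for i in range(2) for j in range(2))
```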
<p>Sure enough a vector valued function ${\bf f}$ can have a derivative, but this derivative does not have the "type" of a vector, unless the domain or the range of ${\bf f}$ is one-dimensional. The general setup is the following: Given a function $${\bf f}:\quad{\mathbb R}^n\to{\mathbb R}^m,\qquad {\bf x}\mapsto {\bf y}={\bf f}({\bf x})$$ and a point ${\bf p}$ in the domain of ${\bf f}$ the derivative of ${\bf f}$ at ${\bf p}$ is a linear map $d{\bf f}({\bf p})=:L$ that maps the tangent space $T_{\bf p}$ to the tangent space $T_{\bf q}$, where ${\bf q}:={{\bf f}({\bf p})}$. The matrix of $L$ with respect to the standard bases is the <em>Jacobian</em> of ${\bf f}$ at ${\bf p}$ and is given by $$\bigl[L\bigr]=\left[{\partial y_i\over\partial x_k}\right]_{1\leq i\leq m,\ 1\leq k\leq n}\ .$$ If $m=1$, i.e., if ${\bf f}$ is in a fact a scalar function, then the matrix $\bigl[L\bigr]$ has just one row (of length $n$): $$\bigl[L\bigr]=\bigl[{\partial f\over\partial x_1} \ {\partial f\over\partial x_2}\ \ldots\ {\partial f\over\partial x_n}\bigr]_{\bf p}\ .$$ The $n$ entries of this one-row matrix can be viewed as coordinates of a vector which is then called the <em>gradient</em> of $f$ at ${\bf p}$.</p>
geometry
<p>My headphone cables formed this knot:</p> <p><a href="https://i.sstatic.net/Q7s4H.png"><img src="https://i.sstatic.net/Q7s4H.png" alt="enter image description here"></a></p> <p>however I don't know much about knot theory and cannot tell what it is. In my opinion it isn't a figure-eight knot and certainly not a trefoil. Since it has $6$ crossings that doesn't leave many other candidates!</p> <p>What is this knot? How could one figure it out for similarly simple knots?</p>
<p>Arthur's answer is completely correct, but for the record I thought I would give a general answer for solving problems of this type using the <a href="http://www.math.uic.edu/t3m/SnapPy/index.html">SnapPy software package</a>. The following procedure can be used to recognize almost any prime knot with a small number of crossings, and takes about 10-15 minutes for a new user.</p> <p><strong>Step 1.</strong> Download and install the SnapPy software from the <a href="http://www.math.uic.edu/t3m/SnapPy/installing.html">SnapPy installation page</a>. This is very quick and easy, and works in Mac OS X, Windows, or Linux.</p> <p><strong>Step 2.</strong> Open the software and type:</p> <pre><code>M = Manifold() </code></pre> <p>to start the link editor. (Here "manifold" refers to the <a href="https://en.wikipedia.org/wiki/Knot_complement">knot complement</a>.)</p> <p><strong>Step 3.</strong> Draw the shape of the knot. Don't worry about crossings to start with: just draw a closed polygonal curve that traces the shape of the knot. Here is the shape that I traced: <a href="https://i.sstatic.net/WDNbk.png"><img src="https://i.sstatic.net/WDNbk.png" alt="enter image description here"></a></p> <p>If you make a mistake, choose "Clear" from the Tools menu to start over.</p> <p><strong>Step 4.</strong> After you draw the shape of the knot, you can click on the crossings with your mouse to change which strand is on top. Here is my version of the OP's knot:</p> <p><a href="https://i.sstatic.net/BrzcX.png"><img src="https://i.sstatic.net/BrzcX.png" alt="enter image description here"></a></p> <p><strong>Step 5.</strong> Go to the "Tools" menu and select "Send to SnapPy". 
My SnapPy shell now looks like this:</p> <p><a href="https://i.sstatic.net/Cd6w1.png"><img src="https://i.sstatic.net/Cd6w1.png" alt="enter image description here"></a></p> <p><strong>Step 6.</strong> Type</p> <pre><code>M.identify() </code></pre> <p>The software will give you various descriptions of the manifold, one of which will identify the prime knot using Alexander-Briggs notation. In this case, the output is</p> <pre><code>[5_1(0,0), K5a2(0,0)] </code></pre> <p>and the first entry means that it's the $5_1$ knot.</p>
<p>Here is a redrawing of your knot, with some colour added to it:</p> <p><a href="https://i.sstatic.net/d1RaP.png"><img src="https://i.sstatic.net/d1RaP.png" alt="enter image description here"></a></p> <p>Now, take the red part, and flip it down, and you get this knot:</p> <p><a href="https://i.sstatic.net/QXk4R.png"><img src="https://i.sstatic.net/QXk4R.png" alt="enter image description here"></a></p> <p>which has five crossings, and, if it weren't for my lousy Paint skills, would look an awful lot like the pentafoil knot $5_1$ (the edge of a Möbius strip with $5$ half-twists, or the $(5, 2)$ torus knot), if you just smooth out the edges a bit.</p>
logic
<p>Several times in my studies, I've come across Hilbert-style proof systems for various systems of logic, and when an author says, "<strong>Theorem:</strong> $\varphi$ is provable in system $\cal H$," or "<strong>Theorem:</strong> the following axiomatizations of $\cal H$ are equivalent: ...," I usually just take the author's word as an oracle instead of actually trying to construct a Hilbert-style proof (can you blame me?). However, I would like to change this habit and least be in a position where I <em>could</em> check these claims in principle.</p> <p>On the few occasions where I actually did try to construct a (not-so-short) Hilbert-style proof from scratch, I found it easier to first construct a proof in the corresponding natural deduction system, show that the natural deduction system and the Hilbert system were equivalent, and then try to deconstruct the natural deduction system into a Hilbert-style proof (in the style of Anderson and Belnap). The problem with that (apart from being tortuous) is that I would need the natural deduction system first, and it's not always obvious to me how to construct the natural deduction system given the axioms (sometimes it's not so bad; it's easy to see, for instance, that $(A \rightarrow (A \rightarrow B)) \rightarrow (A \rightarrow B)$ corresponds to contraction; but it's not always <em>that</em> easy...).</p> <p><strong>So I'm wondering</strong>: are there "standard tricks" for constructing Hilbert-style proofs floating around out there? Or are there tricks for constructing a corresponding natural deduction system given a set of Hilbert axioms? Or is it better to just accept proof-by-oracle?</p>
<p>Regarding "tricks for constructing a corresponding natural deduction system given a set of Hilbert axioms": Constructing natural deduction systems corresponding to axiomatic propositional or first-order systems isn't too hard when most of the axioms have fairly clear 'meanings', but I think it gets a bit trickier with nonclassical logics. <a href="http://www.ualberta.ca/~francisp/papers/PellHazenSubmittedv2.pdf">Pelletier &amp; Hazen's <em>Natural Deduction</em></a> gives a good overview of some different types of natural deduction systems. See, in particular, &sect;2.3, pp. 6–12, <em>The Beginnings of Natural Deduction: Jaśkowski and Gentzen (and Suppes) on Representing Natural Deduction Proofs</em>. I think that there are three types of natural deduction systems that should be considered (in order of increasing ease of translation from the natural deduction system to the axiomatic system): Gentzen-style; Fitch-style (Jaśkowski's first method); and Suppes-style (Jaśkowski's second method).</p> <h2>Gentzen-style Natural Deduction</h2> <p>Gentzen-style natural deduction uses proof trees composed of instances of inference rules. Inference rules typically look like this:</p> <p>$$ \begin{array}{c} A \quad B \\ \hline A \land B \end{array}\land I \qquad \begin{array}{c} A \\ \hline A \lor B \end{array}\lor I \qquad \begin{array}{c} [A] \\ \vdots \\ B \\ \hline A \to B \end{array}\to I $$</p> <p>A significant difference between axiomatic systems and Gentzen-style natural deduction is that intermediate deductions can be cited by any later line in an axiomatic system, but can only be used <em>once</em> in a proof tree. 
For instance, to prove $(Q \land P) \land P$ from $P \land Q$ requires three instances of the assumption $P \land Q$ in a proof tree:</p> <p>$$ \frac{\displaystyle \frac{\displaystyle \frac{[P \land Q]}{Q} \quad \frac{[P \land Q]}{P}}{Q \land P} \quad \frac{[P \land Q]}{P} }{ (Q \land P) \land P } $$</p> <p>There's no way to reuse the intermediate deduction of $P$ from $P \land Q$. A naïve translation of a proof tree into corresponding axiomatic deductions will probably be pretty verbose with lots of repeated work (but it would be easy to check for and eliminate redundant deductions in the axiomatic proof). A very nice benefit of these systems, however, is that it is very easy to determine where a rule can be applied, and whether a formula is "in-scope" for use as a premise. Fitch-style and Suppes-style systems are more complicated in this regard.</p> <h2>Fitch-style Natural Deduction Systems</h2> <p><a href="http://en.wikipedia.org/wiki/Fitch-style_calculus">Fitch-style natural deduction systems</a> for propositional logic have a type of subproof for conditional introduction. These capture "<em>Suppose</em> $\phi$. … $\psi$. Therefore (no longer supposing $\phi$), $\phi \to \psi$." Even in this simplest type of subproof, the natural deduction system has proof-construction rules about how lines in subproofs may be cited (e.g., a line outside of a subproof can't cite lines within the subproof). Still, unlike proof trees, some reusability is gained. 
For instance, in the Barwise &amp; Etchemendy's <em>Fitch</em> (from <em>Language, Proof, and Logic</em>), would simplify the proof by reusing the deduction of $P$ from $P \land Q$:</p> <ol> <li><ul> <li>$P \land Q$ Assume.</li> </ul></li> <li><ul> <li>$P$ by conjunction elimination with 1.</li> </ul></li> <li><ul> <li>$Q$ by conjunction elimination with with 1.</li> </ul></li> <li><ul> <li>$Q \land P$ by conjunction introduction with 2 and 3.</li> </ul></li> <li><ul> <li>$(Q \land P) \land P$ by conjunction introduction with 2 and 4.</li> </ul></li> </ol> <p>Some presentations allow for conditional introduction from <em>any</em> line top-level line within a subproof. In these presentations, not only can intermediate deductions be reused, but entire subproofs:</p> <ol> <li><ul> <li>$P \land Q$ Assume.</li> </ul></li> <li><ul> <li>$P$ by conjunction elimination with 1.</li> </ul></li> <li><ul> <li>$Q$ by conjunction elimination with with 1.</li> </ul></li> <li>$(P \land Q) \to P$ by conditional introduction with 1–3.</li> <li>$(P \land Q) \to Q$ by conditional introduction with 1–3.</li> </ol> <p>In the first-order case, not only are there subproofs for conditional introduction, but there are subproofs for introducing new 'temporary' individuals (e.g., generic instances for universal introduction, or witnesses for existential elimination). These subproofs require special rules about where the individual of concern may appear.</p> <p>Kenneth Konyndyk's <em>Introductory Modal Logic</em> gives Fitch-style natural deduction systems for <strong>T</strong>, <strong>S4</strong>, and <strong>S5</strong>. In addition to a condtional introduction, these have modal subproofs for necessity-introduction, and those subproofs require special rules for reiterating formulae into the subproof. For instance, in <strong>T</strong>, only a modal formula $\Box \phi$ can be reiterated into a subproof, and when it does, the $\Box$ is dropped. 
That is, when $\Box \phi$ is outside a subproof, $\phi$ can be reiterated in (but only through one 'layer' of subproof). In <strong>S4</strong>, $\Box\phi$ can still be reiterated into a modal subproof, but the $\Box$ need not be dropped. In <strong>S5</strong> both $\Box\phi$ and $\Diamond\phi$ can be reiterated into a modal subproof, and neither modality needs to be dropped. </p> <p>The point to all this is that in the propositional case, many Hilbert-style axioms correspond nicely to Fitch-style natural deduction rules, but it seems that the nicest cases are those for boolean connectives. E.g., </p> <p>$$ A \to \left( A \lor B \right) $$</p> <p>and</p> <p>$$ A \to \left( B \to (A \land B)\right) $$</p> <p>turn into "left disjunction introduction" and "conjunction introduction" pretty easily. However, more complicated axiom schemata (such as what would be used for universal introduction, or modal necessitation) that really require new types of subproofs for good natural deduction treatment are trickier to handle nicely.</p> <h2>Suppes-style Natural Deduction Systems</h2> <p>There are, of course, other formalizations of natural deduction than Fitch's. Some of these might make for easier translation from axiomatic systems. For instance, consider a proof of $(A \to (B \to (A \land B))) \land (B \to (A \to (A \land B)))$. 
In a Fitch-style proof, the left conjunct, $A \to \dots$, would have to be proved in a subproof assuming $A$ containing a subproof assuming $B$:</p> <ol> <li><ul> <li>Assume $A$.</li> </ul></li> <li><ul> <li><ul> <li>Assume $B$.</li> </ul></li> </ul></li> <li><ul> <li><ul> <li>$A \land B$ by conjunction introduction with 1 and 2.</li> </ul></li> </ul></li> <li><ul> <li>$B \to (A \land B)$ by conditional introduction with 2–3.</li> </ul></li> <li>$A \to (B \to (A \land B))$ by conditional introduction with 1–4.</li> </ol> <p>Then another five lines are needed to get $B \to (A \to (A \land B))$, and an eleventh for the final conjunction introduction. In Suppes's system, this is shorter (eight lines) because <em>any</em> in-scope assumption can be discharged by conditional introduction, so we can "get out of the subproofs" in different orders:</p> <ol> <li>{1} $A$ Assume.</li> <li>{2} $B$ Assume.</li> <li>{1,2} $A \land B$ $\land$-introduction with 1 and 2.</li> <li>{1} $B \to (A \land B)$ $\to$-introduction with 3.</li> <li>{} $A \to (B \to (A \land B))$ $\to$-introduction with 4.</li> <li>{2} $A \to (A \land B)$ $\to$-introduction with 3.</li> <li>{} $B \to (A \to (A \land B))$ $\to$-introduction with 6.</li> <li>{} $(A \to (B \to (A \land B))) \land (B \to (A \to (A \land B)))$ $\land$-introduction with 5 and 7.</li> </ol> <p>(Note: some implementations of Fitch's system allow this for conditional introduction as well. E.g., in <strong>Fitch</strong> from Barwise and Etchemendy's <em>Language, Proof and Logic</em> conditional introduction can cite a subproof that starts with an assumption $A$ and contains lines $B$ and $C$ to infer both $A \to B$ and $A \to C$.)</p> <p>To use this approach, each inference rule must also specify how the set of tracked assumptions for its conclusion is determined based on the premises of the rule. For most rules, the assumptions of a conclusion are just the union of the assumptions of the premises. 
Conditional introduction is the obvious exception. This approach also specifies that only lines with empty assumption sets are <em>theorems</em>.</p> <p>This "tracking" approach, though, can be used for other properties too. The same considerations apply: each rule must specify how the tracked properties of the conclusion are computed from the premises, and the proof system must define which sentences are theorems.</p> <p>For instance, in a system for first-order logic, the set of new individuals (for universal generalization or existential elimination) can be tracked, with most rules giving their conclusion the "union of the premises' individuals", with existential elimination and universal introduction the exceptions. Theorems are those sentences with an empty set of individuals and an empty set of assumptions.</p> <p>This approach works nicely for modal logics, too. A Suppes-style proof system for <strong>K</strong>, for instance, in addition to tracking assumptions, tracks a "modal context", which is a natural number or $\infty$. The modal context indicates how many "necessitation contexts" we're in (intuitively, how many times we should be able to apply necessity introduction to a formula). In terms of Kripke semantics, the modal context is how far removed from the designated world we are. Sentences without any assumptions have context $\infty$, corresponding to the $\vdash \phi / \vdash \Box\phi$ rule. The inference rules require that their premises have compatible modal contexts (i.e., all non-$\infty$ modal contexts are the same). The default modal propagation is that the context of the conclusion is the same as the minimum context of the premises. The exceptions are that $\Box$ introduction subtracts 1, and $\Box$ elimination adds 1. Theorems are those sentences that have no assumptions and modal context $\infty$. 
</p> <ol> <li>{1} (0) $\Box P$ Assume.</li> <li>{2} (0) $\Box (P \to Q)$ Assume.</li> <li>{1} (1) $P$ $\Box$-elimination with 1.</li> <li>{2} (1) $P \to Q$ $\Box$-elimination with 2.</li> <li>{1,2} (1) $Q$ $\to$-elimination with 3 and 4.</li> <li>{1,2} (0) $\Box Q$ $\Box$-introduction with 5.</li> <li>{2} (0) $\Box P \to \Box Q$ $\to$-introduction with 6.</li> <li>{} ($\infty$) $\Box(P \to Q) \to (\Box P \to \Box Q)$ $\to$-introduction with 7.</li> </ol> <p>In Suppes-style proof systems, the question is no longer about reiteration rules, but about property tracking and propagation rules. The purposes are similar, but in practice, certain kinds of axiomatic systems might be easier to translate into one kind or another.</p>
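<p>The assumption-tracking bookkeeping described above can be sketched in a few lines of code (a toy illustration only; representing each proof line as a pair of a formula and a frozenset of undischarged assumption indices is my own choice, not taken from any of the cited systems):</p>

```python
# Toy Suppes-style bookkeeping: a proof is a list of lines, each a pair
# (formula, frozenset of line numbers of its undischarged assumptions).
lines = []

def assume(formula):
    i = len(lines)
    lines.append((formula, frozenset([i])))   # an assumption depends on itself
    return i

def and_intro(i, j):
    (a, sa), (b, sb) = lines[i], lines[j]
    lines.append((("and", a, b), sa | sb))    # default rule: union of premises
    return len(lines) - 1

def imp_intro(assumption, i):
    a, sa = lines[assumption]
    b, sb = lines[i]
    lines.append((("imp", a, b), sb - sa))    # the exception: discharge
    return len(lines) - 1

# The eight-line example: both conditionals reuse the single A-and-B line,
# and we "get out of the subproofs" in either order.
a = assume("A")
b = assume("B")
ab = and_intro(a, b)                  # assumptions {A, B}
t1 = imp_intro(b, ab)                 # B -> (A and B), assumptions {A}
t2 = imp_intro(a, t1)                 # A -> (B -> (A and B)): a theorem
t3 = imp_intro(a, ab)                 # A -> (A and B), assumptions {B}
t4 = imp_intro(b, t3)                 # B -> (A -> (A and B)): a theorem
assert lines[t2][1] == frozenset() and lines[t4][1] == frozenset()
```

<p>Only the lines whose assumption set comes out empty count as theorems, exactly as the text specifies.</p>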
<p>It's a long list of postings, a lot of it in Polish notation, which is hard to read. Anyway, the systematic way to do it is to define a translation algorithm from derivations in ND to derivations in axiomatic logic. This is done in my recent "Elements of Logical Reasoning", section 3.6(c). ND is there written in SC notation, with the open assumptions displayed at the left of a turnstile. </p> <p>I don't know if anyone ever mastered fully axiomatic logic. Russell was very bad at it, and didn't even notice that one of his axioms was in fact a theorem! I needed the translation to be able to produce some of the more elaborate axiomatic derivations. Would be nice if someone implemented the algorithm. </p>
differentiation
<p>$R(t) \cdot R'(t) = 0$, which is what every source I can find tells me. Even though I understand the proof I don't understand the underlying concept. If $R(t)\cdot R'(t) = 0$, then $R'(t)$ is orthogonal to $R(t)$, right? </p> <p>But you use the same derivative to find the <em>tangent</em> of a curve. Then somehow if you differentiate the tangent itself, you get the <em>normal</em> to the curve.</p> <p>I really can't wrap my head around this. Could someone help me understand?</p>
<p>I'm going to assume that you mean to ask why the derivative of a <em>fixed</em> length vector is perpendicular to the vector itself. Here's the idea:</p> <p><span class="math-container">$$\vec{r}(t)\cdot\vec{r}'(t) = \frac{1}{2}\frac{d}{dt}(\vec{r}(t)\cdot\vec{r}(t)) = 0.$$</span></p> <p>If you want to think about <em>why</em> it is true, think about an object rotating on a circle around the origin. Draw some diagrams to figure out what the position and velocity vectors are. Position points outward radially if we imagine the center of the circle is the origin of the <span class="math-container">$xy$</span> plane. Velocity is the rate of change of position. If you consider the average velocity over some interval of time, it would be a secant vector (if you want to call it that). As you take a limit, the secant vector will become a vector tangent to the circle. See the images below. (Note that there are length considerations for <span class="math-container">$v_{\text{avg}}$</span> that I am ignoring, but that's not super important for this heuristic argument.)</p> <p><img src="https://i.sstatic.net/YaTmt.png" alt="enter image description here" /></p> <p><img src="https://i.sstatic.net/SFPfO.png" alt="enter image description here" /></p> <p><img src="https://i.sstatic.net/EWkHl.png" alt="enter image description here" /></p>
<p>The derivative of <span class="math-container">$\vec{v}$</span> is orthogonal to <span class="math-container">$\vec{v}$</span> <strong>if and only if</strong> <span class="math-container">$\vec{v}$</span> has constant magnitude. To see this:</p> <p><span class="math-container">$|\vec{v}|$</span> is constant <span class="math-container">$\iff|\vec{v}|^2$</span> is constant <span class="math-container">$\iff\sum v_\alpha^2$</span> is constant <span class="math-container">$\iff\frac{d}{dt}\sum v_\alpha^2=0$</span> <span class="math-container">$\iff\sum \frac{d}{dt}(v_\alpha^2)=0$</span> <span class="math-container">$\iff\sum 2v_\alpha\frac{dv_\alpha}{dt}=0$</span> <span class="math-container">$\iff\sum v_\alpha\frac{dv_\alpha}{dt}=0$</span> <span class="math-container">$\iff \vec{v}\cdot\frac{d\vec{v}}{dt}=0$</span></p> <p>You can close your eyes and picture a vector in space with tail at the origin, and head moving... it changes magnitude when and only when the direction it's moving in isn't orthogonal to itself!</p>
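<p>A quick numerical illustration of the circle picture from the answers above (a sketch of my own; the finite-difference derivative is an approximation, hence the loose tolerance):</p>

```python
import numpy as np

# Sample the circular motion r(t) = (cos t, sin t), which has constant
# magnitude, and differentiate numerically.
t = np.linspace(0, 2 * np.pi, 1000)
r = np.stack([np.cos(t), np.sin(t)], axis=1)
dr = np.gradient(r, t, axis=0)        # finite-difference derivative

# r . r' vanishes (up to discretization error, largest at the endpoints)
dots = np.einsum('ij,ij->i', r, dr)
assert np.max(np.abs(dots)) < 1e-2
```

<p>Replacing <code>r</code> by a curve of non-constant magnitude, e.g. $r(t) = (t\cos t, t\sin t)$, makes the dot products visibly nonzero, matching the "if and only if" in the second answer.</p>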
linear-algebra
<p>I know that in <span class="math-container">$A\textbf{x}=\lambda \textbf{x}$</span>, <span class="math-container">$\textbf{x}$</span> is the right eigenvector, while in <span class="math-container">$\textbf{y}A =\lambda \textbf{y}$</span>, <span class="math-container">$\textbf{y}$</span> is the left eigenvector.</p> <p>But what is the significance of left and right eigenvectors? How do they differ from each other geometrically?</p>
<p>The (right) eigenvectors for $A$ correspond to lines through the origin that are sent to themselves (or $\{0\}$) under the action $x\mapsto Ax$. The action $y\mapsto yA$ for row vectors corresponds to an action of $A$ on hyperplanes: each row vector $y$ defines a hyperplane $H$ given by $H=\{\text{column vectors }x: yx=0\}$. The action $y\mapsto yA$ sends the hyperplane $H$ defined by $y$ to a hyperplane $H'$ given by $H'=\{x: Ax\in H\}$. (This is because $(yA)x=0$ iff $y(Ax)=0$.) A left eigenvector for $A$, then, corresponds to a hyperplane fixed by this action.</p>
<p>The set of left eigenvectors and right eigenvectors together form what is known as a Dual Basis and Basis pair.</p> <p><a href="http://en.wikipedia.org/wiki/Dual_basis">http://en.wikipedia.org/wiki/Dual_basis</a></p> <p>In simpler terms, if you arrange the right eigenvectors as columns of a matrix B, and arrange the suitably normalized left eigenvectors as rows of a matrix C, then BC = I; in other words, B is the inverse of C.</p>
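<p>Here is a small NumPy sketch of this dual-basis relation (the example matrix is my own choice; I normalize the left eigenvectors by taking them as the rows of $V^{-1}$, which pairs them with the right-eigenvector columns of $V$):</p>

```python
import numpy as np

# A diagonalizable example matrix (my own choice)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Right eigenvectors are the columns of V: A V = V diag(w)
w, V = np.linalg.eig(A)

# The rows of V^{-1} are left eigenvectors: row i satisfies y A = w[i] y
C = np.linalg.inv(V)
for i in range(len(w)):
    assert np.allclose(C[i] @ A, w[i] * C[i])

# Dual-basis relation: C V = I, i.e. C is the inverse of V
assert np.allclose(C @ V, np.eye(2))
```

<p>With this normalization, pairing the $i$-th left eigenvector with the $j$-th right eigenvector gives $\delta_{ij}$, which is exactly the dual-basis property.</p>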
linear-algebra
<p>Consider square matrices over a field $K$. I don't think additional assumptions about $K$ like algebraically closed or characteristic $0$ are pertinent, but feel free to make them for comfort. For any such matrix $A$, the set $K[A]$ of polynomials in $A$ is a commutative subalgebra of $M_n(K)$; the question is whether for any pair of commuting matrices $X,Y$ at least one such commutative subalgebra can be found that contains both $X$ and $Y$.</p> <p>I was asking myself this in connection with frequently recurring requests to completely characterise commuting pairs of matrices, like <a href="https://math.stackexchange.com/questions/323011/find-all-a-b-such-that-ab-ba-o">this one</a>. While providing a useful characterisation seems impossible, a positive anwer to the current question would at least provide <em>some</em> answer.</p> <p>Note that in many rather likely situations one can in fact take $A$ to be one of the matrices $X,Y$, for instance when one of the matrices <a href="https://math.stackexchange.com/questions/65012/if-matrices-a-and-b-commute-a-with-distinct-eigenvalues-then-b-is-a-po">has distinct eigenvalues</a>, or more generally if its <a href="https://math.stackexchange.com/questions/57308/commuting-matrices">minimal polynomial has degree $n$</a> (so coincides with the characteristic polynomial). However this is not always possible, as can be easily seen for instance for diagonal matrices $X=\operatorname{diag}(0,0,1)$ and $Y=\operatorname{diag}(0,1,1)$. However in that case both will be polynomials in $A=\operatorname{diag}(x,y,z)$ for any <em>distinct</em> values $x,y,z$ (then $K[A]$ consists of all diagonal matrices); although in the example in <a href="https://math.stackexchange.com/a/83977/18880">this answer</a> the matrices are not both diagonalisable, an appropriate $A$ can be found there as well. 
</p> <p>I thought for some time that any maximal commutative subalgebra of $M_n(K)$ was of the form $K[A]$ (which would imply a positive answer) for some $A$ with minimal polynomial of degree$~n$, and that a positive answer to my question was in fact instrumental in proving this. However I was wrong on both counts: there exist (for $n\geq 4$) commutative subalgebras of dimension${}&gt;n$ (whereas $\dim_KK[A]\leq n$ for all $A\in M_n(K)$) as shown in <a href="https://mathoverflow.net/questions/29087/commutative-subalgebras-of-m-n/29089#29089">this MathOverflow answer</a>, and I was forced to correct <a href="https://math.stackexchange.com/a/301972/18880">an answer I gave here</a> in the light of this; however it seems (at least in the cases I looked at) that many (all?) pairs of matrices $X,Y$ in such a subalgebra still admit a matrix $A$ (which in general is <em>not in the subalgebra</em>) such that $X,Y\in K[A]$. This indicates that a positive answer to my question would not contradict the existence of such large commutative subalgebras: it would just mean that to obtain a maximal dimensional subalgebra containing $X,Y$ one should in general <em>avoid</em> throwing in an $A$ with $X,Y\in K[A]$. I do think these large subalgebras easily show that my question but for three commuting matrices has a negative answer.</p> <p>Finally I note that <a href="https://mathoverflow.net/questions/29087/commutative-subalgebras-of-m-n/30931#30931">this other answer</a> to the cited MO question mentions a result by Gerstenhaber that the dimension of the subalgebra generated by two commuting matrices in $M_n(K)$ cannot exceed$~n$. 
This unfortunately does not settle my question (if $X,Y$ would generate a subalgebra of dimension${}&gt;n$, it would have proved a negative answer); it just might be that the mentioned result is true because of the existence of $A$ (I don't have access to a proof right now, but given the formulation it seems unlikely that it was done this way).</p> <p>OK, I've tried to build up the suspense. Honesty demands that I say that I do know the answer to my question, since a colleague of mine provided a convincing one. I will however not give this answer right away, but post it once there has been some time for gathering answers here; who knows somebody will prove a different answer than the one I have (heaven forbid), or at least give the same answer with a different justification.</p>
<p>As promised I will answer my own question. The answer is negative, it can happen that $X$ and $Y$ cannot be written as polynomials of any one matrix $A\in M_n(K)$.</p> <p>Following the comment by Martin Brandenburg, this can even happen when $X$ and $Y$ are both diagonal, if the field $K$ is too small. Indeed if $K$ is a finite field of $q$ elements then for any <em>diagonalisable</em> matrix $A$ one has $\dim K[A]\leq q$ because $q$ limits the degree of split polynomials without multiple roots, and the minimal polynomial of $A$ must be of this kind. This means that when $n&gt;q$ the dimension $n$ of the subalgebra $D$ of all diagonal matrices is too large for it to be of the form $K[A]$ for one of its members. And as long as $n\leq q^2$ one can find diagonal matrices $X,Y$ that generate all of $D$, while ensuring that all matrices$~A$ commuting with both $X$ and $Y$ must be diagonal; this excludes finding such $A$ with $X,Y\in K[A]$. Indeed one can arbitrarily label each of the $n$ standard basis vectors with distinct elements of $K\times K$, and define $X,Y$ so that each such basis vector is a common eigenvector, with respective eigenvalues given by the two components of the label; one then easily realises each projection on the $1$-dimensional space generated by one of the standard basis vectors as a polynomial in $X,Y$, forcing any matrix commuting with both $X$ and $Y$ (and therefore with these projections) to be diagonal.</p> <p>But for examples valid in arbitrary field, it is better to focus attention on nilpotent matrices, avoiding the "regular" ones with minimal polynomial $X^n$ (and hence with a single Jordan block). 
The counterexample that I had in mind was the pair of $4\times4$ matrices each of Jordan type $(2,2)$: $$ X=\begin{pmatrix}0&amp;1&amp;0&amp;0\\0&amp;0&amp;0&amp;0\\0&amp;0&amp;0&amp;1\\0&amp;0&amp;0&amp;0\end{pmatrix} ,\qquad Y=\begin{pmatrix}0&amp;0&amp;1&amp;0\\0&amp;0&amp;0&amp;1\\0&amp;0&amp;0&amp;0\\0&amp;0&amp;0&amp;0\end{pmatrix} ,\qquad \text{which have } XY=YX=\begin{pmatrix}0&amp;0&amp;0&amp;1\\0&amp;0&amp;0&amp;0\\0&amp;0&amp;0&amp;0\\0&amp;0&amp;0&amp;0\end{pmatrix} $$ while $X^2=Y^2=0$. Then the algebra $K[X,Y]$ has dimension $4$, and is contained in the subalgebra of upper triangular matrices $A$ with identical diagonal entries and also $A_{2,3}=0$. For any $A$ in this subalgebra, and therefore in particular for $A\in K[X,Y]$, one has $\dim K[A]\leq3$, since $(A-\lambda I)^3=0$ where $\lambda$ is the common diagonal entry of $A$; in particular $K[A]\not\supseteq K[X,Y]$ for such$~A$. But one also has $\dim K[A]\leq4$ for <em>any</em> $A\in M_4(K)$, and this shows that one cannot have $K[A]\supseteq K[X,Y]$ <em>unless</em> $A\in K[X,Y]$, and one therefore cannot have it at all.</p> <p>The <a href="https://mathoverflow.net/questions/34314/when-is-an-algebra-of-commuting-matrices-contained-in-one-generated-by-a-single/35024#35024">answer in the MO thread</a> indicated in the other answer indicates that there are counterexamples even for $n=3$; indeed it suffices to strip either the first row and column or the last row and column off the matrices $X,Y$ given above. The argument is similar but even simpler for them. 
The resulting matrices generate the full subalgebra of matrices of the form<br> $$ \begin{pmatrix}\lambda&amp;a&amp;b\\0&amp;\lambda&amp;0\\0&amp;0&amp;\lambda \end{pmatrix} \quad \text{respectively of those of form}\quad \begin{pmatrix}\lambda&amp;0&amp;b\\0&amp;\lambda&amp;a\\0&amp;0&amp;\lambda \end{pmatrix}; $$ having dimension $3$ it can only be contained in $K[A]$ if $A$ is in the subalgebra, but then the minimal polynomial of $A$ has degree at most $2$ so the algebra it generates cannot be all of that subalgebra.</p> <p>These two subalgebras are analogues of the commutative subalgebra of dimension${}&gt;n$ that I mentioned in the question, but for the case $n=3$ where its dimension is only equal to $n$. If I had bothered to look at this more modest case rather than go directly for the "excessive" subalgebra for $n\geq4$ right away, then I might have found this example myself; I guess one should never neglect the small cases.</p>
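<p>For what it's worth, the algebraic facts behind the $4\times4$ counterexample are easy to check numerically (a sketch of my own; the rank computation just confirms that $I, X, Y, XY$ span a $4$-dimensional algebra):</p>

```python
import numpy as np

# The Jordan-type-(2,2) pair from the answer
X = np.zeros((4, 4), dtype=int)
X[0, 1] = X[2, 3] = 1
Y = np.zeros((4, 4), dtype=int)
Y[0, 2] = Y[1, 3] = 1

# XY = YX has a single 1 in the top-right corner, and both squares vanish
assert (X @ Y == Y @ X).all()
assert (X @ Y)[0, 3] == 1 and (X @ Y).sum() == 1
assert not (X @ X).any() and not (Y @ Y).any()

# I, X, Y, XY are linearly independent, so dim K[X, Y] = 4, while
# dim K[A] <= 3 for every A in the subalgebra described in the text
basis = np.stack([np.eye(4), X, Y, X @ Y]).reshape(4, -1)
assert np.linalg.matrix_rank(basis) == 4
```

<p>The dimension count is the whole point: no single $A$ can have $K[A]$ contain this $4$-dimensional algebra.</p>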
<p>The answer is given in the accepted answer of <a href="https://mathoverflow.net/questions/34314">MO/34314</a>. Don't click if you want to keep the suspense.</p>
probability
<p>Here is the situation: My friend and I are at an impasse. I believe I'm correct, but he's so damn stubborn he won't believe me. Also, I'm not the most articulate at explaining things. Hopefully some of you guys can help me explain this to him in a way he'll understand. Here is the problem:</p> <p>A DVD is either at his parent's house or his own. The probability that it's at his house is 30%. If the DVD is at his own house, there is a 90% chance it's on the porch, and a 10% chance it's in the living room. What is the % chance the DVD is on the porch?</p> <p>My friend says you take 90% of 30% which is 27 and that is the % chance it's on the porch. Is this correct? I don't believe so. </p> <p>I believe that regardless of where the DVD is, the chance of it being anywhere in his house is still 30% overall. Location inside his house won't change those odd because the porch and the living room are both part of the house. If there is a 90% chance it's on the porch, it doesn't change the overall odds of it being in that location.</p> <p>Now, if you rephrase the question and ask, "The DVD Is either at my parents, my porch, or my living room. What is the % chance it's on my porch?", the answer is 33%. If there are three places it could be, then there is 33.333% chance it's on the porch. Even if it's a 90% chance it's at his house, if there are only three places it can be, it remains the same.</p> <p>I think the correct way of answering the question is: There is a 30% chance the DVD is at my house. If it is at my house, there is a 90% chance it's on my porch. They are two separate odds and you can't take a percentage of the overall odds since the locations are inside the house. </p> <p>Is this correct or am I wrong? And regardless, please give me your explanation.</p>
<p>Your friend is correct and I'll give you an experiment you can try:</p> <p>Take two boxes labeled "my house" and "my friend's house" and in the first box put two bags labeled "porch" and "living room". You will also need a marble and a die. Take the die and roll it (we're going to approximate 70% ~ 2/3 here). If it's a 1, 2, 3 or 4, put your hand in the "my house" box but don't pick a bag yet. If it's a 5 or 6, put the marble anywhere in the "my friend's house" box. If your hand is in the "my house" box then you need to roll again to figure out which bag to put the marble in. If it's a 1, 2, 3, 4 or 5 (let's say 90% ~ 5/6) put it in the "porch" bag, and if it's a 6 put it in the "living room" bag. Repeat this exercise until you have a feel for how often it lands in "my porch." Record the trials and see what the odds are. You should be convinced now.</p> <p>Your reasoning is incorrect here:</p> <blockquote> <p>If there is a 90% chance it's on the porch, it doesn't change the overall odds of it being in that location.</p> </blockquote> <p>No, you changed things. There is <em>not</em> a 90% chance it's on the porch. <em>If</em> - if! - it is in your house, then (and only then!) there is a 90% chance it's on the porch. The magic of the 27% comes from the fact that mathematically - <em>and</em> experimentally, as I hope the above box/bag/marble exercise shows - we know how the "in your house" 30% and the "on the porch" 90% interact. Namely, they interact multiplicatively.</p> <p>How about this? There is a 30% chance you'll go to New York and a 100% chance you'll go to the Empire State Building if you go to New York (because why else would you go to New York? kidding...). Does this mean there's a 100% chance you'll go to the Empire State Building? Well, since you <em>have</em> to go to NY to go to the ESB, that would mean there's a 100% chance you'll go to New York - and now we're being contradictory! 
So this interpretation makes no sense and is never what we mean mathematically or in plain English.</p> <blockquote> <p>"The DVD Is either at my parents, my porch, or my living room. What is the % chance it's on my porch?", the answer is 33%.</p> </blockquote> <p>This is definitely wrong. I'm pretty sure you either have ebola or you don't have ebola. Up to you whether you need to call 911 and get yourself quarantined right away because there's a 50% chance you have ebola. Or perhaps the doctors gave your sick relative 6 months to live, but your relative <em>might</em> live a year or two years or three years or four years or five years, which means there's an 83% chance the doctor is wrong (5 wrong outcomes out of 6 "equally likely" ones).</p> <p>Now put 10 red m&amp;ms in a bag and 1 blue one. Grab one without looking. Since there are two possibilities, there's a 50% chance it's a blue one, right? So I'll bet you a dollar that it's red and you bet me a dollar that it's blue, and we'll see who's paying for lunch later.</p>
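<p>By the law of total probability, the porch chance is $0.30\times 0.90 = 0.27$, and a quick Monte Carlo run of the box-and-bag experiment confirms it. Here is a sketch in Python (my addition, not part of the original answer):</p>

```python
import random

random.seed(0)
TRIALS = 100_000
porch = 0
for _ in range(TRIALS):
    at_his_house = random.random() < 0.30        # 30% chance it's at his own house
    if at_his_house and random.random() < 0.90:  # given that, 90% chance it's on the porch
        porch += 1
print(porch / TRIALS)  # hovers around 0.27, i.e. 90% of 30%
```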
<p>The DVD is</p> <ul> <li>at his parents' house with a probability of $70\,\%$ ($=100\,\%-30\,\%$)</li> <li>in the porch of his house with $27\,\%$ ($=90\,\%$ of $30\,\%$)</li> <li>in the living room of his house with $3\,\%$ ($=10\,\%$ of $30\,\%$)</li> </ul> <p>Check: $70\,\% + 27\,\%+3\,\%=100\,\%$.</p>
geometry
<p>What software do you use to <strong>accurately draw geometry diagrams</strong>?</p>
<p>For geometry I've always used <a href="https://www.geogebra.org/graphing" rel="noreferrer">GeoGebra</a>, and I think it's pretty good.</p>
<p><a href="http://sourceforge.net/projects/pgf/" title="TikZ">TikZ</a> is a nice LaTeX package for easily drawing diagrams. Diagrams are made by putting code directly into the TeX document, eliminating the need for extra image files. The package is also very powerful and versatile; the <a href="http://www.ctan.org/tex-archive/graphics/pgf/base/doc/generic/pgf/pgfmanual.pdf">manual</a> contains a very detailed description of its features.</p>
matrices
<p>I've come across a paper that mentions the fact that matrices commute if and only if they share a common basis of eigenvectors. Where can I find a proof of this statement?</p>
<p>Suppose that $A$ and $B$ are $n\times n$ matrices, with complex entries say, that commute.<br> Then we decompose $\mathbb C^n$ as a direct sum of eigenspaces of $A$, say $\mathbb C^n = E_{\lambda_1} \oplus \cdots \oplus E_{\lambda_m}$, where $\lambda_1,\ldots, \lambda_m$ are the eigenvalues of $A$, and $E_{\lambda_i}$ is the eigenspace for $\lambda_i$. (Here $m \leq n$, but some eigenspaces could be of dimension bigger than one, so we need not have $m = n$.)</p> <p>Now one sees that since $B$ commutes with $A$, $B$ preserves each of the $E_{\lambda_i}$: If $A v = \lambda_i v, $ then $A (B v) = (AB)v = (BA)v = B(Av) = B(\lambda_i v) = \lambda_i Bv.$ </p> <p>Now we consider $B$ restricted to each $E_{\lambda_i}$ separately, and decompose each $E_{\lambda_i}$ into a sum of eigenspaces for $B$. Putting all these decompositions together, we get a decomposition of $\mathbb C^n$ into a direct sum of spaces, each of which is a simultaneous eigenspace for $A$ and $B$.</p> <p>NB: I am cheating here, in that $A$ and $B$ may not be diagonalizable (and then the statement of your question is not literally true), but in this case, if you replace "eigenspace" by "generalized eigenspace", the above argument goes through just as well.</p>
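<p>For a concrete check of the argument above, here is a small numerical experiment (a sketch using NumPy; the matrices are my own choice, not from the answer). Take a symmetric $A$ with distinct eigenvalues and let $B$ be a polynomial in $A$, so that $AB=BA$; every eigenvector of $A$ is then an eigenvector of $B$ as well:</p>

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
B = A @ A + 3 * A  # any polynomial in A commutes with A
assert np.allclose(A @ B, B @ A)

# A is symmetric with distinct eigenvalues, so eigh gives an orthonormal eigenbasis
eigvals, eigvecs = np.linalg.eigh(A)
for i in range(2):
    v = eigvecs[:, i]
    w = B @ v
    # w is a scalar multiple of v, i.e. v is an eigenvector of B too
    assert np.allclose(w, (v @ w) * v)
```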
<p>This is false in a sort of trivial way. The identity matrix $I$ commutes with every matrix and has eigenvector set all of the underlying vector space $V$, but no non-central matrix has this property.</p> <p>What is true is that two matrices which commute and are also <em>diagonalizable</em> are <a href="http://en.wikipedia.org/wiki/Diagonalizable_matrix#Simultaneous_diagonalization">simultaneously diagonalizable</a>. The proof is particularly simple if at least one of the two matrices has distinct eigenvalues.</p>
logic
<p>Popular mathematics folklore provides some simple tools enabling us compactly to describe some truly enormous numbers. For example, the number $10^{100}$ is commonly known as a <a href="http://en.wikipedia.org/wiki/Googol">googol</a>, and a <a href="http://en.wikipedia.org/wiki/Googolplex">googol plex</a> is $10^{10^{100}}$. For any number $x$, we have the common vernacular:</p> <ul> <li>$x$ <em>bang</em> is the factorial number $x!$</li> <li>$x$ <em>plex</em> is the exponential number $10^x$</li> <li>$x$ <em>stack</em> is the number obtained by iterated exponentiation (associated upwards) in a tower of height $x$, also denoted $10\uparrow\uparrow x$, $$10\uparrow\uparrow x = 10^{10^{10^{\cdot^{\cdot^{10}}}}}{\large\rbrace} x\text{ times}.$$</li> </ul> <p>Thus, a googol bang is $(10^{100})!$, and a googol stack is $10\uparrow\uparrow 10^{100}$. The vocabulary enables us to name larger numbers with ease:</p> <ul> <li>googol bang plex stack. (This is the exponential tower $10^{10^{\cdot^{\cdot^{^{10}}}}}$ of height $10^{(10^{100})!}$)</li> <li>googol stack bang stack bang</li> <li>googol bang bang stack plex stack</li> <li>and so on…</li> </ul> <p>Consider the collection of all numbers that can be named in this scheme, by a term starting with googol and having finitely many adjectival operands: bang, stack, plex, in any finite pattern, repetitions allowed. (For the purposes of this question, let us limit ourselves to these three operations and please accept the base 10 presumption of the stack and plex terminology simply as an artifact of its origin in popular mathematics.)</p> <p>My goal is to sort all such numbers nameable in this vocabulary by size.</p> <p>A few simple observations get us started. Once $x$ is large enough (about 20), then the factors of $x!$ above $10$ compensate for the few below $10$, and so we see that $10^x\lt x!$, or in other words, $x$ plex is less than $x$ bang. 
Similarly, $10^{10^{10^{\cdot^{\cdot^{10}}}}}{\large\rbrace}\,x\text{ times}$ is much larger than $x!$, since $10^y\gt (y+1)y$ for large $y$, and so for large values we have</p> <ul> <li>$x$ plex $\lt$ $x$ bang $\lt$ $x$ stack.</li> </ul> <p>In particular, the order for names having at most one adjective is:</p> <pre><code>googol
googol plex
googol bang
googol stack
</code></pre> <p>And more generally, replacing plex with bang or bang with stack in any of our names results in a strictly (and much) larger number.</p> <p>Continuing, since $x$ stack plex $= (x+1)$ stack, it follows that</p> <ul> <li>$x$ stack plex $\lt x$ plex stack.</li> </ul> <p>Similarly, for large values,</p> <ul> <li>$x$ plex bang $\lt x$ bang plex,</li> </ul> <p>because $(10^x)!\lt (10^x)^{10^x}=10^{x10^x}\lt 10^{x!}$. Also,</p> <ul> <li>$x$ stack bang $\lt x$ plex stack $\lt x$ bang stack,</li> </ul> <p>because $(10\uparrow\uparrow x)!\lt (10\uparrow\uparrow x)^{10\uparrow\uparrow x}\lt 10\uparrow\uparrow 2x\lt 10\uparrow\uparrow 10^x\lt 10\uparrow\uparrow x!$. It also appears to be true for large values that</p> <ul> <li>$x$ bang bang $\lt x$ stack.</li> </ul> <p>Indeed, one may subsume many more iterations of plex and bang into a single stack. Note also for large values that</p> <ul> <li>$x$ bang $\lt x$ plex plex</li> </ul> <p>since $x!\lt x^x$, and this is seen to be less than $10^{10^x}$ by taking logarithms.</p> <p>The observations above enable us to form the following order of all names using at most two adjectives.</p> <pre><code>googol
googol plex
googol bang
googol plex plex
googol plex bang
googol bang plex
googol bang bang
googol stack
googol stack plex
googol stack bang
googol plex stack
googol bang stack
googol stack stack
</code></pre> <p>My request is for any or all of the following:</p> <ol> <li><p>Expand the list above to include numbers named using more than two adjectives. 
(This will not be an end-extension of the current list, since googol plex plex plex and googol bang bang bang will still appear before googol stack.) If people post partial progress, we can assemble them into a master list later.</p></li> <li><p>Provide general comparison criteria that will assist such an on-going effort.</p></li> <li><p>Provide a complete comparison algorithm that works for any two expressions having the same number of adjectives.</p></li> <li><p>Provide a complete comparison algorithm that compares any two expressions.</p></li> </ol> <p>Of course, there is in principle a computable comparison procedure, since we may program a Turing machine to actually compute the two values and compare their size. What is desired, however, is a simple, feasible algorithm. For example, it would seem that we could hope for an algorithm that would compare any two names in polynomial time of the length of the names.</p>
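<p>The elementary comparisons asserted above can be sanity-checked with exact integer arithmetic at small scale. A sketch in Python (my addition; the base-2 tower is only there to keep the integers computable, since even tiny base-10 stacks are astronomically large):</p>

```python
from math import factorial

def stack(height, base=10):
    """base ↑↑ height by direct iteration -- exact big ints, so keep the result small."""
    v = 1
    for _ in range(height):
        v = base ** v
    return v

# "x plex < x bang" once x is large enough (about 20, as noted in the question):
assert 10**24 > factorial(24)   # not yet true at x = 24 ...
assert 10**25 < factorial(25)   # ... but true from x = 25 on

# "x stack plex = (x+1) stack": a plex on top of a stack adds one storey.
# Checked in base 2 so the numbers stay manageable:
assert 2 ** stack(3, base=2) == stack(4, base=2)
```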
<p>OK, let's attempt a sorting of the names having at most three operands. I'll make several observations, and then use them to assemble the order section by section, beginning with the part below googol stack.</p> <ul> <li><p>googol bang bang bang $\lt$ googol stack. It seems clear that we shall be able to iterated bangs many times before exceeding googol stack. Since googol bang bang bang is the largest three-operand name using only plex and bang, this means that all such names will interact only with each below googol stack.</p></li> <li><p>plex $\lt$ bang. This was established in the question.</p></li> <li><p>plex bang $\lt$ bang plex. This was established in the question, and it allows us to make many comparisons in terms involving only plex and bang, but not quite all of them.</p></li> <li><p>googol bang bang $\lt$ googol plex plex plex. This is because $g!!\lt (g^g)^{g^g}=g^{gg^g}=10^{100\cdot gg^g}$, which is less than $10^{10^{10^g}}$, since $100\cdot gg^g=10^{102\cdot 10^{100}}\lt 10^{10^g}$. Since googol bang bang is the largest two-operand name using only plex and bang and googol plex plex plex is the smallest three-operand name, this means that the two-operand names using only plex and bang will all come before all the three-operand names.</p></li> <li><p>googol plex bang bang $\lt$ googol bang plex plex. 
This is because $(10^g)!!\lt ((10^g)^{10^g})!=(10^{g10^g})!=(10^{10^{g+100}})!\lt (10^{10^{g+100}})^{10^{10^{g+100}}}=10^{10^{g+100}10^{10^{g+100}}}= 10^{10^{(g+100)10^{g+100}}}\lt 10^{10^{g!}}$.</p></li> </ul> <p>Combining the previous observations leads to the following order of the three-operand names below googol stack:</p> <pre><code>googol
googol plex
googol bang
googol plex plex
googol plex bang
googol bang plex
googol bang bang
googol plex plex plex
googol plex plex bang
googol plex bang plex
googol plex bang bang
googol bang plex plex
googol bang plex bang
googol bang bang plex
googol bang bang bang
googol stack
</code></pre> <p>Perhaps someone can generalize the methods into a general comparison algorithm for larger smallish terms using only plex and bang? This is related to the topic of the Velleman article linked to by J. M. in the comments.</p> <p>Meanwhile, let us now turn to the interaction with stack. Using the observations of the two-operand case in the question, we may continue as follows:</p> <pre><code>googol stack plex
googol stack bang
googol stack plex plex
googol stack plex bang
googol stack bang plex
googol stack bang bang
</code></pre> <p>Now we use the following fact:</p> <ul> <li>stack bang bang $\lt$ plex stack. This is established as in the question, since $(10\uparrow\uparrow x)!!\lt (10\uparrow\uparrow x)^{10\uparrow\uparrow x}!\lt$ $(10\uparrow\uparrow x)^{(10\uparrow\uparrow x)(10\uparrow\uparrow x)^{10\uparrow\uparrow x}}=$ $(10\uparrow\uparrow x)^{(10\uparrow\uparrow x)^{1+10\uparrow\uparrow x}}\lt 10\uparrow\uparrow 4x\lt 10\uparrow\uparrow 10^x$. In fact, it seems that we will be able to absorb many more iterated bangs after stack into plex stack.</li> </ul> <p>The order therefore continues with:</p> <pre><code>googol plex stack
googol plex stack plex
googol plex stack bang
</code></pre> <ul> <li>plex stack bang $\lt$ bang stack. 
To see this, observe that $(10\uparrow\uparrow 10^x)!\lt (10\uparrow\uparrow 10^x)^{10\uparrow\uparrow 10^x}\lt 10\uparrow\uparrow 2\cdot10^x$, since associating upwards is greater, and this is less than $10\uparrow\uparrow x!$. Again, we will be able to absorb many operands after plex stack into bang stack.</li> </ul> <p>The order therefore continues with:</p> <pre><code>googol bang stack
googol bang stack plex
googol bang stack bang
</code></pre> <ul> <li>bang stack bang $\lt$ plex plex stack. This is because $(10\uparrow\uparrow x!)!\lt (10\uparrow\uparrow x!)^{10\uparrow\uparrow x!}\lt 10\uparrow\uparrow 2x!\lt 10\uparrow\uparrow 10^{10^x}$.</li> </ul> <p>Thus, the order continues with:</p> <pre><code>googol plex plex stack
googol plex bang stack
googol bang plex stack
googol bang bang stack
</code></pre> <p>This last item is clearly less than googol stack stack, and so, using all the pairwise operations we already know, we continue with:</p> <pre><code>googol stack stack
googol stack stack plex
googol stack stack bang
googol stack plex stack
googol stack bang stack
googol plex stack stack
googol bang stack stack
googol stack stack stack
</code></pre> <p>Which seems to complete the list for three-operand names. If I have made any mistakes, please comment below.</p> <p>Meanwhile, this answer is just partial progress, since we have the four-operand names, which will fit into the hierarchy, and I don't think the observations above are fully sufficient for the four-operand comparisons, although many of them will now be settled by these criteria. And of course, I am nowhere near a general comparison algorithm.</p> <p>Sorry for the length of this answer. Please post comments if I've made any errors.</p>
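<p>Many of the pairwise inequalities used above are asymptotic, but they can be spot-checked in log space for a moderate stand-in for googol. A sketch in Python (my addition; $x=40$ is an arbitrary choice, and <code>lgamma</code> supplies Stirling's approximation for the huge factorials):</p>

```python
import math

def log10_fact(z):
    """log10(z!) for (possibly astronomically large) float z, via lgamma."""
    return math.lgamma(z + 1.0) / math.log(10.0)

x = 40.0  # modest stand-in for googol; the inequalities below are asymptotic

log_plex = x                    # log10 of x plex = 10^x
log_bang = log10_fact(x)        # log10 of x bang = x!
assert log_plex < log_bang      # x plex < x bang

log_plex_bang = log10_fact(10.0 ** x)  # log10 of (10^x)!
log_bang_plex = math.factorial(40)     # log10 of 10^(x!) is exactly x!
assert log_plex_bang < log_bang_plex   # x plex bang < x bang plex

log_plex_plex = 10.0 ** x              # log10 of x plex plex = 10^(10^x)
assert log_bang < log_plex_plex        # x bang < x plex plex
```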
<p>The following describes a comparison algorithm that will work for expressions where the number of terms is less than googol - 2.</p> <p>First, consider the situation with only bangs and plexes. To compare two numbers, first count the total number of bangs and plexes in each. If one has more than the other, that number is bigger. If the two numbers have the same number of bangs and plexes, compare the terms lexicographically, setting bang > plex. So googol plex bang plex plex > googol plex plex bang bang, since the first terms are equal, and the second term favors the first.</p> <p>To prove this, first note that x bang > x plex for $x \ge 25$. To show that the higher number of terms always wins, it suffices to show that googol plex$^{k+1} >$ googol bang$^k$. We will instead show that $x$ plex$^{k+1} > x$ bang $^k$ for $x \ge 100$. Set $x = 10^y$.</p> <p>$10^y$ bang $&lt; (10^y)^{10^y} = 10^{y*10^y} &lt; 10^{10^{10^y}} = 10^y$ plex plex</p> <p>$10^y$ bang bang $&lt; (10^{y*10^y})^{10^{y*10^y}} $</p> <p>$= 10^{10^{y*10^y + y + \log_{10} y}}$ </p> <p>$= 10^{10^{10^{y + \log_{10} y} (1 + \frac{y + \log_{10} y}{10^{y + \log_{10} y}})}}$ </p> <p>$&lt; 10^{10^{10^{y + \log_{10} y} (1 + \frac{y}{10^{y}})}}$</p> <p>(We use the fact that x/10^x is decreasing for large x.) 
</p> <p>$= 10^{10^{10^{y + \log_{10} y + \log_{10}(1 + \frac{y}{10^{y}})}}}$ </p> <p>$&lt; 10^{10^{10^{y + \log_{10} y + \frac{y}{10^{y}}}}}$ </p> <p>(We use the fact that ln(1+x) &lt; x, so log_10 (1+x) &lt; x)</p> <p>$&lt; 10^{10^{10^{2y}}} &lt; 10^{10^{10^{10^y}}} = 10^y$ plex plex plex</p> <p>$10^y$ bang bang bang &lt; $(10^{10^{10^{y + \log_{10} y + \frac{y}{10^y}}}})^{10^{10^{10^{y + \log_{10} y + \frac{y}{10^y}}}}} $</p> <p>$= 10^{10^{(10^{10^{y + \log_{10} y + \frac{y}{10^y}}} + 10^{y + \log_{10} y + \frac{y}{10^y}})}}$ </p> <p>$= 10^{10^{(10^{10^{y + \log_{10} y + \frac{y}{10^y}}}(1 + \frac{10^{y + \log_{10} y + \frac{y}{10^y}}}{10^{10^{y + \log_{10} y + \frac{y}{10^y}}}})}}$ </p> <p>$&lt; 10^{10^{(10^{10^{y + \log_{10} y + \frac{y}{10^y}}}(1 + \frac{10^{y }}{10^{10^{y}}})}}$ </p> <p>$= 10^{10^{10^{(10^{y + \log_{10} y + \frac{y}{10^y}} + \log_{10}(1+\frac{10^{y }}{10^{10^{y}}}))}}}$ </p> <p>$&lt; 10^{10^{10^{(10^{y + \log_{10} y + \frac{y}{10^y}} + \frac{10^{y }}{10^{10^{y}}})}}}$ </p> <p>$= 10^{10^{10^{(10^{y + \log_{10} y + \frac{y}{10^y}} (1 + \frac{10^{y }}{10^{10^{y}} * (10^{y + \log_{10} y + \frac{y}{10^y}})}))}}}$ </p> <p>$&lt; 10^{10^{10^{(10^{y + \log_{10} y + \frac{y}{10^y}} (1 + \frac{1}{10^{10^{y}} }))}}}$</p> <p>$= 10^{10^{10^{10^{y + \log_{10} y + \frac{y}{10^y} + \frac{1}{10^{10^{y}} }} }}}$ </p> <p>$&lt; 10^{10^{10^{10^{2y}}}} &lt; 10^{10^{10^{10^{10^y}}}} = 10^y$ plex plex plex plex</p> <p>We can see that the third bang added less than $\frac{1}{10^{10^y}}$ to the top exponent. Similarly, adding a fourth bang will add less than $\frac{1}{10^{10^{10^y}}}$, adding a fifth bang will add less than $\frac{1}{10^{10^{10^{10^y}}}}$, and so on. 
It's clear that all the fractions will add up to less than 1, so in general,</p> <p>$10^y$ bang$^{k} &lt; 10^{10^{10^{\cdot^{\cdot^{10^{y + \log_{10} y + 1}}}}}}{\large\rbrace} k+1\text{ 10's} &lt; 10^{10^{10^{\cdot^{\cdot^{10^{10^y}}}}}}{\large\rbrace} k+2\text{ 10's} = 10^y$ plex$^{k+1}$.</p> <p>Next, we have to show that the lexicographic order works. We will show that it works for all $x \ge 100$. Suppose our procedure failed; take two numbers with the fewest number of terms for which it fails, e.g. $x s_1 ... s_n$ and $x t_1 ... t_n$. It cannot be that $s_1$ and $t_1$ are both plex or both bang, since then $(x s_1) s_2 ... s_n$ and $(x s_1) t_2 ... t_n$ would be a failure of the procedure with one fewer term. So set $s_1 =$ bang and $t_1 =$ plex. Since our procedure tells us that $x s_1 ... s_n$ > $x t_1 ... t_n$, and our procedure fails, it must be that $x s_1 ... s_n$ &lt; $x t_1 ... t_n$. Then</p> <p>$x$ bang plex ... plex $&lt; x$ bang $s_2 ... s_n &lt; x$ plex $t_2 ... t_n &lt; x$ plex bang ... bang.</p> <p>So to show our procedure works, it suffices to show that x bang plex$^k$ > x plex bang$^k$. Set x = 10^y.</p> <p>$10^y$ bang > $(\frac{10^y}{e})^{10^y} &gt; (10^{y - \frac{1}{2}})^{10^y} = 10^{(y-\frac{1}{2})10^y}$</p> <p>$10^y$ bang plex$^k > 10^{10^{10^{\cdot^{\cdot^{10^{(y-\frac{1}{2})10^y}}}}}}{\large\rbrace} k+1\text{ 10's}$</p> <p>To determine $10^y$ plex bang$^k$, we can use our previous inequality for $10^y$ bang$^k$ and set $x = 10^y$ plex $= 10^{10^y}$, i.e. substitute $10^y$ for $y$. We get</p> <p>$10^y$ plex bang$^k &lt; 10^{10^{10^{\cdot^{\cdot^{10^{(10^y + \log_{10}(10^y) + 1}}}}}}{\large\rbrace} k+1\text{ 10's} = 10^{10^{10^{\cdot^{\cdot^{10^{10^y + y + 1}}}}}}{\large\rbrace} k+1\text{ 10's}$</p> <p>$&lt; 10^{10^{10^{\cdot^{\cdot^{10^{(y-\frac{1}{2})10^y}}}}}}{\large\rbrace} k+1\text{ 10&#39;s} &lt; 10^y$ bang plex$^k$.</p> <p>Okay, now for terms with stack. 
Given two expressions, first compare the number of times stack appears; the number in which stack appears more often is the winner. If stack appears n times for both expressions, then in each expression consider the n+1 groups of plexes and bangs separated by the n stacks. Compare the n+1 groups lexicographically, using the ordering we defined above for plexes and bangs. Whichever expression is greater denotes the larger number.</p> <p>Now, this procedure clearly does not work all the time, since a googol followed be a googol-2 plexes is greater than googol stack. However, I believe that if the number of terms in the expressions are less than googol-2, then the procedure is correct.</p> <p>First, observe that $x$ plex stack > $x$ stack plex and $x$ bang stack > $x$ stack bang, since </p> <p>$x$ stack plex $&lt; x$ stack bang $&lt; (10\uparrow\uparrow x)^{10\uparrow\uparrow x} &lt; 10\uparrow\uparrow (2x) &lt; x$ plex stack $&lt; x$ bang stack.</p> <p>Thus if googol $s_1 ... s_n$ is some expression with fewer stacks than googol $t_1 ... t_m$, we can move all the plexes and bangs in $s_1 ... s_n$ to the beginning. Let $s_1 ... s_i$ and $t_1 ... t_j$ be the initial bangs and plexes before the first stack. There will be less than googol-2 bangs and plexes, and </p> <p>googol bang$^{\text{googol}-3} &lt; 10^{10^{10^{\cdot^{\cdot^{10^{100 + \log_{10} 100 + 1}}}}}}{\large\rbrace} \text{googol-2 10's} &lt; 10^{10^{10^{\cdot^{\cdot^{10^{103}}}}}}{\large\rbrace} \text{googol-2 10's}$</p> <p>$ &lt; 10 \uparrow\uparrow $googol = googol stack</p> <p>and so googol $s_1 ... s_i$ will be less than googol $t_1 ... t_{j+1}$ ($t_{j+1}$ is a stack). $s_{i+1} ... s_n$ consists of $k$ stacks, and $t_{j+2} ... t_m$ consists of at least $k$ stacks and possibly some plexes and bangs. Thus googol $s_1 ... s_n$ will be less than googol $t_1 ... t_m$.</p> <p>Now consider $x S_1$ stack $S_2$ stack ... stack $S_n$ versus $x T_1$ stack $T_2$ stack ... 
stack $T_n$, where the $S_i$ and $T_i$ are sequences of plexes and bangs. Without loss of generality, we can assume that $S_1 &gt; T_1$ in our order. (If $S_1 = T_1$, we can consider ($x S_1$ stack) $S_2$ stack ... stack $S_n$ versus ($x T_1$ stack) $T_2$ stack ... stack $T_n$, and compare $S_2$ versus $T_2$, etc., until we get to an $S_i$ and $T_i$ that are different.) $x S_1$ stack $S_2$ stack ... stack $S_n$ is, at the minimum, $x S_1$ stack ... stack, while $x T_1$ stack $T_2$ stack ... stack $T_n$ is, at the maximum, $x T_1$ stack bang$^{\text{googol}-3}$ stack ... stack. So it is enough to show</p> <p>$x$ $S_1$ stack $&gt;$ $x$ $T_1$ stack bang$^{\text{googol}-3}$</p> <p>We have seen that $x$ bang$^k &lt; 10^{10^{10^{\cdot^{\cdot^{10^x}}}}}{\large\rbrace} k+1\text{ times}$ so $x$ bang$^{\text{googol}-3} &lt; 10^{10^{10^{\cdot^{\cdot^{10^x}}}}}{\large\rbrace} \text{googol-2 times}$, and $x T_1$ stack bang$^{\text{googol}-3} &lt; ((x T_1) +$ googol) stack. Thus we must show $x S_1 &gt; (x T_1) +$ googol.</p> <p>We can assume without loss of generality that the first term of $S_1$ and the first term of $T_1$ are different. (Otherwise set $x = x s_1 ... s_{i-1}$ where $i$ is the smallest number such that $s_i$ and $t_i$ are different.) We have seen above that it is enough to consider</p> <p>$x$ plex$^{k+1}$ versus $x$ bang$^k$</p> <p>$x$ bang plex$^k$ versus $x$ plex bang$^k$</p> <p>We have previously examined these two cases. In both cases, adding a googol to the smaller leads to the same inequality.</p> <hr> <p>What are the prospects for a general comparison algorithm, when the number of terms exceeds googol-3? The difficulty can be illustrated by considering the following two expressions:</p> <p>$x$ $S_1$ stack plex$^k$</p> <p>$x$ $T_1$ stack</p> <p>The two expressions are equal precisely when $k = x\,T_1 - x\,S_1$. 
So a general comparison algorithm must allow for the calculation of arbitrary expressions, which perhaps makes our endeavor pointless.</p> <p>In light of this, I believe the following general comparison algorithm is the best that can be done.</p> <p>We already have a general comparison algorithm for expressions with no appearances of stack. If stack appears in both expressions, let them be $x$ $S_1$ stack $S_2$ and $x$ $T_1$ stack $T_2$, where $S_2$ and $T_2$ have no appearances of stack. Replace $x$ $S_1$ stack with plex$^{(x S_1)}$, and $x$ $T_1$ stack with plex$^{(x T_1)}$, and do our previous comparison algorithm on the two new expressions. This clearly works because $x$ $S_1$ stack = $10^{10}$ plex$^{(x S_1 -2)}$ and $x$ $T_1$ stack = $10^{10}$ plex$^{(x T_1-2)}$.</p> <p>The remaining case is where one expression has stack and the other does not, i.e. googol $S_1$ stack $S_2$ versus googol $T$, where $S_2$ and $T$ have no appearances of stack. Let $s$ and $t$ be the number of terms in $S_2$ and $T$ respectively. Then googol $T$ is greater than googol $S_1$ stack $S_2$ iff $t \ge $ googol $S_1 + s - 2$.</p> <p>Indeed, if $t \ge $ googol $S_1 + s - 2$,</p> <p>googol $T \ge$ googol plex$^{\text{googol} S_1 + s - 2} = 10^{10^{10^{\cdot^{\cdot^{10^{100}}}}}}{\large\rbrace} $ googol $S_1$ $+s-1$ 10's $ &gt; 10^{10^{10^{\cdot^{\cdot^{10^{10} + 10 + 1}}}}}{\large\rbrace} $ googol $S_1 +s-1$ 10's > googol $S_1$ stack bang$^s$ </p> <p>$\ge$ googol $S_1$ stack $S_2$.</p> <p>If $t \le $ googol $S_1 + s - 3$,</p> <p>googol $T \le$ googol bang$^{\text{googol} S_1 + s - 3} &lt; 10^{10^{10^{\cdot^{\cdot^{10^{103}}}}}}{\large\rbrace} $ googol $S_1$ $+s-2$ 10's $ &lt; 10^{10^{10^{\cdot^{\cdot^{10^{10^{10}}}}}}}{\large\rbrace} $ googol $S_1 +s$ 10's = googol $S_1$ stack plex$^s$ </p> <p>$\le$ googol $S_1$ stack $S_2$.</p> <p>So the comparison algorithm, while not particularly clever, works. 
</p> <p>In one of the comments, someone raised the question of a polynomial algorithm (presumably as a function of the maximum number of terms). We can implement one as follows. Let n be the maximum number of terms. We use the following lemma.</p> <p>Lemma. For any two expressions googol $S$ and googol $T$, if googol $S$ > googol $T$, then googol $S$ > 2 googol $T$.</p> <p>This lemma is not too hard to verify, but for reasons of space I will not do so here.</p> <p>As before, we have a simple algorithm in O(n) time when both expressions do not contain stack. If exactly one of the expressions has a stack, we compute $x$ $S_1$ as above, but we stop if the calculation exceeds n. If the calculation finishes, then we can do the previous comparison in O(log n) time; if the calculation stops, then we know that $x$ $S_1$ stack $S_2$ is larger, again in O(log n) time.</p> <p>If both expressions have stack, then from our previous procedure we calculate both $x$ $S_1$ and $x$ $T_1$. Now we stop if either calculation exceeds $2m$, where $m = $ the maximum length of $S_2$ and $T_2$ (clearly, $2m &lt; 2n$). If the calculation finishes, then we can do our previous procedure in O(n) time. If the calculation stops prematurely, then the larger of $x$ $S_1$ or $x$ $T_1$ will determine the larger original expression. Indeed, if $y = x$ $S_1$ and $z = x$ $T_1$, and $y &gt; z$, then by the Lemma $y &gt; 2z$, so since $y &gt; 2m$, we have $y &gt; z+m$. In our procedure we replace $x$ $S_1$ stack by plex$^y$ and $x$ $T_1$ stack by plex$^z$; since $y$ is more than $m$ more than $z$, plex$^y$ $S_2$ will be longer than plex$^z$ $T_2$, so the first expression will be larger. So we apply our procedure to $x$ $S_1$ and $x$ $S_2$; this will reduce the sum of the lengths by at least $m+2$, having used O(log m) operations.</p> <p>So we wind up with an algorithm that takes O(n) operations.</p> <hr> <p>We could extend the notation to have suffixes that apply k -> 10^^^k (Pent), k -> 10^^^^k (Hex), etc. 
I believe the obvious extension of the above procedure will work, e.g. for expressions with plex, bang, stack, and pent, first count the number of pents; the expression with more pents will be the larger. Otherwise, compare the n+1 groups of plex bang and stack lexicographically by our previously defined procedure. So long as the number of terms is less than googol-2, this procedure should work.</p>
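<p>The comparison procedure described above (count stacks first, then compare the intervening plex/bang groups lexicographically, with more terms winning and bang &gt; plex) can be written out in a few lines. A sketch in Python (my transcription; like the procedure itself, it is only valid while term counts stay far below googol):</p>

```python
RANK = {"plex": 0, "bang": 1}

def cmp_pb(a, b):
    """Compare plex/bang-only suffix lists: more terms wins,
    ties broken lexicographically with bang > plex."""
    if len(a) != len(b):
        return 1 if len(a) > len(b) else -1
    for s, t in zip(a, b):
        if RANK[s] != RANK[t]:
            return 1 if RANK[s] > RANK[t] else -1
    return 0

def cmp_names(a, b):
    """Compare suffix lists that may also contain 'stack'."""
    if a.count("stack") != b.count("stack"):
        return 1 if a.count("stack") > b.count("stack") else -1

    def groups(x):  # plex/bang runs between consecutive stacks
        out, cur = [], []
        for t in x:
            if t == "stack":
                out.append(cur)
                cur = []
            else:
                cur.append(t)
        out.append(cur)
        return out

    for g, h in zip(groups(a), groups(b)):
        c = cmp_pb(g, h)
        if c != 0:
            return c
    return 0

# googol plex bang plex plex > googol plex plex bang bang (example from above)
assert cmp_names(["plex", "bang", "plex", "plex"],
                 ["plex", "plex", "bang", "bang"]) == 1
# googol bang bang < googol stack
assert cmp_names(["bang", "bang"], ["stack"]) == -1
```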
linear-algebra
<p>I am trying to understand why the method used in my linear algebra textbook to find the basis of the null space works. The textbook is 'Elementary Linear Algebra' by Anton.</p> <p>According to the textbook, the basis of the null space for the following matrix:</p> <p>$A=\left(\begin{array}{rrrrrr} 1 &amp; 3 &amp; -2 &amp; 0 &amp; 2 &amp; 0 \\ 2 &amp; 6 &amp; -5 &amp; -2 &amp; 4 &amp; -3 \\ 0 &amp; 0 &amp; 5 &amp; 10 &amp; 0 &amp; 15 \\ 2 &amp; 6 &amp; 0 &amp; 8 &amp; 4 &amp; 18 \end{array}\right) $</p> <p>is found by first finding the reduced row echelon form, which leads to the following:</p> <p>$(x_1,x_2,x_3,x_4,x_5,x_6)=(-3r-4s-2t,r,-2s,s,t,0)$</p> <p>or, alternatively as</p> <p>$(x_1,x_2,x_3,x_4,x_5,x_6)=r(-3,1,0,0,0,0)+s(-4,0,-2,1,0,0)+t(-2,0,0,0,1,0)$</p> <p>This shows that the vectors</p> <p>${\bf v_1}=(-3,1,0,0,0,0),\hspace{0.5in} {\bf v_2}=(-4,0,-2,1,0,0),\hspace{0.5in} {\bf v_3}=(-2,0,0,0,1,0)$</p> <p>span the solution space.</p> <p>It can be shown that for a homogenous linear system, this method always produces a basis for the solution space of the system.</p> <p><strong>Question</strong></p> <ol> <li><p>I don't understand why this method will always produce a basis for $Ax=0$. Could someone please explain to me why this method will always work? If it helps to explain, I already understand the process of finding the basis of a column space and row space. I also understand why elementary row operations do not alter the null space of a matrix.</p></li> <li><p>What specific properties of matrices or vector space that I need to be aware of in order to understand why this method works?</p></li> </ol>
<p>The null space of $A$ is the set of solutions to $A{\bf x}={\bf 0}$. To find this, you may take the augmented matrix $[A|0]$ and row reduce to an echelon form. Note that every entry in the rightmost column of this matrix will always be 0 in the row reduction steps. So, we may as well just row reduce $A$, and when finding solutions to $A{\bf x}={\bf 0}$, just keep in mind that the missing column is all 0's.</p> <p>Suppose after doing this, you obtain $$ \left[\matrix{1&amp;0&amp;0&amp;0&amp;-1 \cr 0&amp;0&amp;1&amp;1&amp;0 \cr 0&amp;0&amp;0&amp;0&amp;0 \cr 0&amp;0&amp;0&amp;0&amp;0 \cr }\right] $$</p> <p>Now, look at the columns that do not contain any of the leading row entries. These columns correspond to the free variables of the solution set to $A{\bf x}={\bf 0}$ Note that at this point, we know the dimension of the null space is 3, since there are three free variables. That the null space has dimension 3 (and thus the solution set to $A{\bf x}={\bf 0}$ has three free variables) could have also been obtained by knowing that the dimension of the column space is 2 from the rank-nullity theorem.</p> <p>The "free columns" in question are 2,4, and 5. We may assign any value to their corresponding variable. </p> <p>So, we set $x_2=a$, $x_4=b$, and $x_5=c$, where $a$, $b$, and $c$ are arbitrary.</p> <p>Now solve for $x_1$ and $x_3$:</p> <p>The second row tells us $x_3=-x_4=-b$ and the first row tells us $x_1=x_5=c$.</p> <p>So, the general solution to $A{\bf x}={\bf 0}$ is $$ {\bf x}=\left[\matrix{c\cr a\cr -b\cr b\cr c}\right] $$</p> <p>Let's pause for a second. 
We know:</p> <p>1) The null space of $A$ consists of all vectors of the form $\bf x $ above.</p> <p>2) The dimension of the null space is 3.</p> <p>3) We need three independent vectors for our basis for the null space.</p> <p>So what we can do is take $\bf x$ and split it up as follows:</p> <p>$$\eqalign{ {\bf x}=\left[\matrix{c\cr a\cr -b\cr b\cr c}\right] &amp;=\left[ \matrix{0\cr a\cr 0\cr 0\cr 0}\right]+ \left[\matrix{c\cr 0\cr 0\cr 0\cr c}\right]+ \left[\matrix{0\cr 0\cr -b\cr b\cr 0}\right]\cr &amp;= a\left[ \matrix{0\cr1\cr0\cr 0\cr 0}\right]+ c\left[ \matrix{1\cr 0\cr 0\cr 0\cr 1}\right]+ b\left[ \matrix{0\cr 0\cr -1\cr 1\cr 0}\right]\cr } $$ Each of the column vectors above are in the null space of $A$. Moreover, they are independent. Thus, they form a basis.</p> <p>I'm not sure that this answers your question. I did a bit of "hand waving" here. What I glossed over were the facts:</p> <p>1)The columns of the echelon form of $A$ that do not contain leading row entries correspond to the "free variables" to $A{\bf x}={\bf 0}$. If the number of these columns is $r$, then the dimension of the null space is $r$ (again, if you know the dimension of the column space, you can see that the dimension of the null space must be the number of these columns from the rank-nullity theorem).</p> <p>2) If you split up the general solution to $A{\bf x}={\bf 0}$ as done above, then these vectors will be independent (and span of course since you'll have $r$ of them). </p>
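<p>The whole procedure (row reduce, read off the free columns, then "split up" the general solution) is completely mechanical. Here is a self-contained sketch in Python with exact rational arithmetic (my own transcription of the textbook method, not code from the answer):</p>

```python
from fractions import Fraction

def nullspace_basis(A):
    """Basis of the null space of A via reduced row echelon form."""
    m = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(m), len(m[0])
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue  # no pivot here: this column belongs to a free variable
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]       # scale the pivot to 1
        for i in range(rows):                    # clear the rest of the column
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
        if r == rows:
            break
    free = [c for c in range(cols) if c not in pivots]
    basis = []
    for f in free:
        v = [Fraction(0)] * cols
        v[f] = Fraction(1)                       # set this free variable to 1
        for i, p in enumerate(pivots):           # back-substitute the pivot variables
            v[p] = -m[i][f]
        basis.append(v)
    return basis

A = [[1, 3, -2,  0, 2,  0],
     [2, 6, -5, -2, 4, -3],
     [0, 0,  5, 10, 0, 15],
     [2, 6,  0,  8, 4, 18]]
for v in nullspace_basis(A):
    print(v)  # recovers v1, v2, v3 from the question
```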
<p>There is a mechanical part in the accepted (+1) answer that may possibly need a more step-by-step explanation, namely the process behind</p> <blockquote> <p>what we can do is take $\mathbf x$ and split it up</p> </blockquote> <p>Reducing the matrix $A$ to the reduced row echelon (rref) form results in:</p> <p>$$A=\left(\begin{array}{rrrrrr} 1 &amp; 3 &amp; -2 &amp; 0 &amp; 2 &amp; 0 \\ 2 &amp; 6 &amp; -5 &amp; -2 &amp; 4 &amp; -3 \\ 0 &amp; 0 &amp; 5 &amp; 10 &amp; 0 &amp; 15 \\ 2 &amp; 6 &amp; 0 &amp; 8 &amp; 4 &amp; 18 \end{array}\right)\to\left(\begin{array}{rrrrrr} \bbox[5px,border:2px solid red]1 &amp; 3 &amp; 0 &amp; 4 &amp; 2 &amp; 0 \\ 0 &amp; 0 &amp; \bbox[5px,border:2px solid red]1 &amp; 2 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; \bbox[5px,border:2px solid red]1 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \end{array}\right)$$</p> <p>There are three pivot columns corresponding to the pivot $1$'s in red, and the rank-nullity theorem tells us that there are $n-r=3$ vectors in any basis of the $N(A)$, corresponding to the free variables.</p> <p>At this point it is worth remembering where all this comes from: the homogeneous system of linear equations $A\vec x = \vec 0,$ which has now been reduced to:</p> <p>$$\begin{align} 1x_1 + 3x_2+ 0x_3+4x_4+2x_5 +0x_6 &amp;=0\\ 0x_1 + 0x_2+ 1x_3+2x_4+0x_5 +0x_6 &amp;=0\\ 0x_1 + 0x_2+ 0x_3+0x_4+0x_5 +1x_6 &amp;=0\\ 0x_1 + 0x_2+ 0x_3+0x_4+0x_5 +0x_6 &amp;=0\\ \end{align}$$</p> <p>Expressing the pivot variables in terms of the free variables:</p> <p>$$\begin{align} 1x_1 &amp;= - 3\;\color{blue}{x_2} - 4\;\color{red}{x_4} - 2\;\color{magenta}{x_5} \\ 1x_3 &amp;= -2\;\color{red}{x_4}\\ 1x_6 &amp;=\;0\\ \end{align}$$</p> <p>immediately shows the way the basis vectors of the $N(A)$ will be filled in from their "skeleton" form simply indicating the column where the free variable in question is located (i.e. 
$x_2,x_4,x_5$):</p> <p>$$\left\{\begin{bmatrix}0\\\color{blue}1\\0\\0\\0\\0\end{bmatrix},\begin{bmatrix}0\\0\\0\\\color{red}1\\0\\0\end{bmatrix},\begin{bmatrix}0\\0\\0\\0\\\color{magenta}1\\0\end{bmatrix}\right\}\to \color{blue}{x_2}\,\begin{bmatrix}0\\1\\0\\0\\0\\0\end{bmatrix}+\color{red}{x_4}\,\begin{bmatrix}0\\0\\0\\1\\0\\0\end{bmatrix}+\color{magenta}{x_5}\,\begin{bmatrix}0\\0\\0\\0\\1\\0\end{bmatrix}$$</p> <p>to the final form:</p> <p>$$\color{blue}{x_2}\,\begin{bmatrix}-3\\1\\0\\0\\0\\0\end{bmatrix}+\color{red}{x_4}\,\begin{bmatrix}-4\\0\\-2\\1\\0\\0\end{bmatrix}+\color{magenta}{x_5}\,\begin{bmatrix}-2\\0\\0\\0\\1\\0\end{bmatrix}\to \text{basis }N(A)= \left\{\begin{bmatrix}-3\\\color{blue}1\\0\\0\\0\\0\end{bmatrix},\begin{bmatrix}-4\\0\\-2\\\color{red}1\\0\\0\end{bmatrix},\begin{bmatrix}-2\\0\\0\\0\\\color{magenta}1\\0\end{bmatrix}\right\}$$</p> <p>So it boils down to changing the signs of the entries in the rref, and keeping track of the free variables.</p> <hr> <pre><code>require("pracma")
A = matrix(c(1,3,-2,0,2,0,
             2,6,-5,-2,4,-3,
             0,0,5,10,0,15,
             2,6,0,8,4,18), nrow=4, byrow=T)
x2 = c(-3,1,0,0,0,0); A %*% x2   # zero vector
x4 = c(-4,0,-2,1,0,0); A %*% x4  # zero vector
x5 = c(-2,0,0,0,1,0); A %*% x5   # zero vector
</code></pre>
matrices
<p>Some days ago, I was thinking on a problem, which states that <span class="math-container">$$AB-BA=I$$</span> does not have a solution in <span class="math-container">$M_{n\times n}(\mathbb R)$</span> and <span class="math-container">$M_{n\times n}(\mathbb C)$</span>. (Here <span class="math-container">$M_{n\times n}(\mathbb F)$</span> denotes the set of all <span class="math-container">$n\times n$</span> matrices with entries from the field <span class="math-container">$\mathbb F$</span> and <span class="math-container">$I$</span> is the identity matrix.)</p> <p>Although I couldn't solve the problem, I came up with this problem:</p> <blockquote> <p>Does there exist a field <span class="math-container">$\mathbb F$</span> for which that equation <span class="math-container">$AB-BA=I$</span> has a solution in <span class="math-container">$M_{n\times n}(\mathbb F)$</span>?</p> </blockquote> <p>I'd really appreciate your help.</p>
<p>Let $k$ be a field. The first Weyl algebra $A_1(k)$ is the free associative $k$-algebra generated by two letters $x$ and $y$ subject to the relation $$xy-yx=1,$$ which is usually called the Heisenberg or Weyl commutation relation. This is an extremely important example of a non-commutative ring which appears in many places, from the algebraic theory of differential operators to quantum physics (the equation above <em>is</em> Heisenberg's indeterminacy principle, in a sense) to the pinnacles of Lie theory to combinatorics to pretty much anything else.</p> <p>For us right now, this algebra shows up because </p> <blockquote> <p>a left $A_1(k)$-module is essentially the same thing as a solution to the equation $PQ-QP=I$ with $P$ and $Q$ endomorphisms of a vector space. </p> </blockquote> <p>Indeed:</p> <ul> <li><p>if $M$ is a left $A_1(k)$-module then $M$ is in particular a $k$-vector space and there is a homomorphism of $k$-algebras $\phi_M:A_1(k)\to\hom_k(M,M)$ to the endomorphism algebra of $M$ viewed as a vector space. Since $x$ and $y$ generate the algebra $A_1(k)$, $\phi_M$ is completely determined by the two endomorphisms $P=\phi_M(x)$ and $Q=\phi_M(y)$; moreover, since $\phi_M$ is an algebra homomorphism, we have $PQ-QP=\phi_M(xy-yx)=\phi_M(1_{A_1(k)})=\mathrm{id}_M$. We thus see that $P$ and $Q$ are endomorphisms of the vector space $M$ which satisfy our desired relation.</p></li> <li><p>Conversely, if $M$ is a vector space and $P$, $Q:M\to M$ are two linear endomorphisms, then one can show more or less automatically that there is a unique algebra morphism $\phi_M:A_1(k)\to\hom_k(M,M)$ such that $\phi_M(x)=P$ and $\phi_M(y)=Q$.
This homomorphism turns $M$ into a left $A_1(k)$-module.</p></li> <li><p>These two constructions, one going from an $A_1(k)$-module to a pair $(P,Q)$ of endomorphisms of a vector space $M$ such that $PQ-QP=\mathrm{id}_M$, and the other going the other way, are mutually inverse.</p></li> </ul> <p>A conclusion we get from this is that your question </p> <blockquote> <p>for what fields $k$ do there exist $n\geq1$ and matrices $A$, $B\in M_n(k)$ such that $AB-BA=I$?</p> </blockquote> <p>is essentially equivalent to</p> <blockquote> <p>for what fields $k$ does $A_1(k)$ have finite dimensional modules?</p> </blockquote> <p>Now, it is very easy to see that $A_1(k)$ is an infinite dimensional algebra, and that in fact the set $\{x^iy^j:i,j\geq0\}$ of monomials is a $k$-basis. Two of the key properties of $A_1(k)$ are the following:</p> <blockquote> <p><strong>Theorem.</strong> If $k$ is a field of characteristic zero, then $A_1(k)$ is a simple algebra—that is, $A_1(k)$ does not have any non-zero proper bilateral ideals. Its center is trivial: it is simply the $1$-dimensional subspace spanned by the unit element.</p> </blockquote> <p>An immediate corollary of this is the following</p> <blockquote> <p><strong>Proposition.</strong> If $k$ is a field of characteristic zero, then $A_1(k)$ does not have any non-zero finite dimensional modules. Equivalently, there do not exist $n\geq1$ and a pair of matrices $P$, $Q\in M_n(k)$ such that $PQ-QP=I$.</p> </blockquote> <p><em>Proof.</em> Suppose $M$ is a finite dimensional $A_1(k)$-module. Then we have an algebra homomorphism $\phi:A_1(k)\to\hom_k(M,M)$ such that $\phi(a)(m)=am$ for all $a\in A_1(k)$ and all $m\in M$. Since $A_1(k)$ is infinite dimensional and $\hom_k(M,M)$ is finite dimensional (because $M$ is finite dimensional!) the kernel $I=\ker\phi$ cannot be zero —in fact, it must have finite codimension. Now $I$ is a bilateral ideal, so the theorem implies that it must be equal to $A_1(k)$.
But then $M$ must be zero dimensional, for $1\in A_1(k)$ acts on it at the same time as the identity and as zero. $\Box$</p> <p>This proposition can also be proved by taking traces, as everyone else has observed on this page, but the fact that $A_1(k)$ is simple is an immensely more powerful piece of knowledge (there are examples of algebras which do not have finite dimensional modules and which are not simple, by the way :) )</p> <p><em>Now let us suppose that $k$ is of characteristic $p&gt;0$.</em> What changes in terms of the algebra? The most significant change is </p> <blockquote> <p><strong>Observation.</strong> The algebra $A_1(k)$ is not simple. Its center $Z$ is generated by the elements $x^p$ and $y^p$, which are algebraically independent, so that $Z$ is in fact isomorphic to a polynomial ring in two variables. We can write $Z=k[x^p,y^p]$.</p> </blockquote> <p>In fact, once we notice that $x^p$ and $y^p$ are central elements —and this is proved by a straightforward computation— it is easy to write down non-trivial bilateral ideals. For example, $(x^p)$ works; the key point in showing this is the fact that since $x^p$ is central, the <em>left</em> ideal which it generates coincides with the <em>bilateral</em> ideal, and it is very easy to see that the <em>left</em> ideal is proper and non-zero.</p> <p>Moreover, a little playing with this will give us the following. Not only does $A_1(k)$ have bilateral ideals: it has bilateral ideals of <em>finite codimension</em>. For example, the ideal $(x^p,y^p)$ is easily seen to have codimension $p^2$; more generally, we can pick two scalars $a$, $b\in k$ and consider the ideal $I_{a,b}=(x^p-a,y^p-b)$, which has the same codimension $p^2$. Now this got rid of the obstruction to finding finite-dimensional modules that we had in the characteristic zero case, so we can hope for finite dimensional modules now!</p> <p>More: this actually gives us a method to produce pairs of matrices satisfying the Heisenberg relation.
We can just pick a proper bilateral ideal $I\subseteq A_1(k)$ of finite codimension, consider the finite dimensional $k$-algebra $B=A_1(k)/I$ and look for finitely generated $B$-modules: every such module provides us with a finite dimensional $A_1(k)$-module and the observations above produce from it pairs of matrices which are related in the way we want.</p> <p>So let us do this explicitly in the simplest case: let us suppose that $k$ is algebraically closed, let $a$, $b\in k$ and let $I=I_{a,b}=(x^p-a,y^p-b)$. The algebra $B=A_1(k)/I$ has dimension $p^2$, with $\{x^iy^j:0\leq i,j&lt;p\}$ as a basis. The exact same proof that the Weyl algebra is simple when the ground field is of characteristic zero proves that $B$ is simple, and in the same way the same proof that proves that the center of the Weyl algebra is trivial in characteristic zero shows that the center of $B$ is $k$; going from $A_1(k)$ to $B$ we have modded out the obstruction to carrying out these proofs. In other words, the algebra $B$ is what's called a (finite dimensional) central simple algebra. Wedderburn's theorem now implies that in fact $B\cong M_p(k)$, as this is the only semisimple algebra of dimension $p^2$ with trivial center. A consequence of this is that there is a unique (up to isomorphism) simple $B$-module $S$, of dimension $p$, and that all other finite dimensional $B$-modules are direct sums of copies of $S$. </p> <p>Now, since $k$ is algebraically closed (much less would suffice) there is an $\alpha\in k$ such that $\alpha^p=a$.
Let $V=k^p$ and consider the $p\times p$-matrices $$Q=\begin{pmatrix}0&amp;&amp;&amp;&amp;b\\1&amp;0\\&amp;1&amp;0\\&amp;&amp;1&amp;0\\&amp;&amp;&amp;\ddots&amp;\ddots\end{pmatrix}$$ which is all zeroes except for $1$s in the first subdiagonal and a $b$ on the top right corner, and $$P=\begin{pmatrix}-\alpha&amp;1\\&amp;-\alpha&amp;2\\&amp;&amp;-\alpha&amp;3\\&amp;&amp;&amp;\ddots&amp;\ddots\\&amp;&amp;&amp;&amp;-\alpha&amp;p-1\\&amp;&amp;&amp;&amp;&amp;-\alpha\end{pmatrix}.$$ One can show that $P^p=aI$, $Q^p=bI$ and that $PQ-QP=I$, so they provide us with a morphism of algebras $B\to\hom_k(k^p,k^p)$, that is, they turn $k^p$ into a $B$-module. It <em>must</em> be isomorphic to $S$, because the two have the same dimension and there is only one module of that dimension; this determines <em>all</em> finite dimensional modules, which are direct sums of copies of $S$, as we said above.</p> <p>This generalizes the example Henning gave, and in fact one can show that <em>all</em> $p$-dimensional $A_1(k)$-modules arise in this way, from quotients by ideals of the form $I_{a,b}$. Doing direct sums for various choices of $a$ and $b$, this gives us lots of finite dimensional $A_1(k)$-modules and, from them, lots of pairs of matrices satisfying the Heisenberg relation. I think we obtain in this way all the semisimple finite dimensional $A_1(k)$-modules but I would need to think a bit before claiming it for certain.</p> <p>Of course, this only deals with the simplest case. The algebra $A_1(k)$ has non-semisimple finite-dimensional quotients, which are rather complicated (and I think there are plenty of wild algebras among them...) so one can get many, many more examples of modules and of pairs of matrices.</p>
<p>As noted in the comments, this is impossible in characteristic 0.</p> <p>But $M_{2\times 2}(\mathbb F_2)$ contains the example $\pmatrix{0&amp;1\\0&amp;0}, \pmatrix{0&amp;1\\1&amp;0}$.</p> <p>In general, in characteristic $p$, we can use the $p\times p$ matrices $$\pmatrix{0&amp;1\\&amp;0&amp;2\\&amp;&amp;\ddots&amp;\ddots\\&amp;&amp;&amp;0&amp;p-1\\&amp;&amp;&amp;&amp;0}, \pmatrix{0\\1&amp;0\\&amp;\ddots&amp;\ddots\\&amp;&amp;1&amp;0\\&amp;&amp;&amp;1&amp;0}$$ which works even over general unital rings of finite characteristic.</p>
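<p>The characteristic-$p$ pair above can be checked numerically by doing the arithmetic modulo $p$ (a sketch; $p=5$ is an arbitrary prime):</p>

```python
import numpy as np

p = 5  # an arbitrary prime; all arithmetic is reduced mod p
A = np.zeros((p, p), dtype=int)
B = np.zeros((p, p), dtype=int)
for i in range(p - 1):
    A[i, i + 1] = i + 1  # superdiagonal entries 1, 2, ..., p-1
    B[i + 1, i] = 1      # subdiagonal of 1s

C = (A @ B - B @ A) % p
assert (C == np.eye(p, dtype=int)).all()  # AB - BA = I holds mod p
```

<p>Over the integers the commutator works out to $\operatorname{diag}(1,\dots,1,1-p)$, which reduces to the identity exactly in characteristic $p$.</p>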
differentiation
<p>So I was exploring some math the other day... and I came across the following neat identity:</p> <p>Given $y$ is a function of $x$ ($y(x)$) and $$ y = 1 + \frac{\mathrm{d}}{\mathrm{d}x} \left(1 + \frac{\mathrm{d}}{\mathrm{d}x} \left(1 + \frac{\mathrm{d}}{\mathrm{d}x} \left(1 + \frac{\mathrm{d}}{\mathrm{d}x} \left(1 + \frac{\mathrm{d}}{\mathrm{d}x} \left( \cdots \right) \right) \right) \right) \right) \text{ (repeated differential)} $$</p> <p>then we can solve this equation as follows: $$ y - 1 = \frac{\mathrm{d}}{\mathrm{d}x} \left(1 + \frac{\mathrm{d}}{\mathrm{d}x} \left( \cdots \right) \right) \iff \int y - 1 \, \mathrm{d} x = 1 + \frac{\mathrm{d}}{\mathrm{d}x} \left( 1 + \frac{\mathrm{d}}{\mathrm{d}x} \left( \cdots \right) \right) $$ $$ \implies \int y - 1 \, \mathrm{d} x = y \iff y - 1 = \frac{\mathrm{d} y }{ \mathrm{d} x} $$</p> <p>So</p> <p>$$ \ln \left( y - 1 \right) = x + C \iff y = Ce^x + 1 $$</p> <p>This problem reminded me a lot of nested radical expressions such as: $$ x = 1 + \sqrt{1 + \sqrt{ 1 + \sqrt{ \cdots }}} \iff x - 1 = \sqrt{1 + \sqrt{ 1 + \sqrt{ \cdots }}} $$ $$ \implies (x - 1)^2 = x \iff x^2 - 3x + 1 = 0 $$</p> <p>and so</p> <p>$$ x = \frac{3}{2} + \frac{\sqrt{5}}{2} $$</p> <p>This reminded of the Ramanujan nested radical which is:</p> <p>$$ x = 0 + \sqrt{ 1 + 2 \sqrt{ 1 + 3 \sqrt{1 + 4 \sqrt{ \cdots }}}} $$</p> <p>whose solution cannot be done by simple series manipulations but requires knowledge of general formula found by algebraically manipulating the binomial theorem...</p> <p>This made me curious...</p> <p>say $y$ is a function of $x$ ($y(x)$) and</p> <p>$$ y = 0 + \frac{\mathrm{d}}{\mathrm{d}x} \left(1 + 2\frac{\mathrm{d}}{\mathrm{d}x} \left(1 + 3\frac{\mathrm{d}}{\mathrm{d}x} \left(1 + 4\frac{\mathrm{d}}{\mathrm{d}x} \left(1 + 5\frac{\mathrm{d}}{\mathrm{d}x} \left( \cdots \right) \right) \right) \right) \right) $$</p> <p>What would the solution come out to be?</p>
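<p>Both closed forms above can be checked symbolically (a sketch using sympy):</p>

```python
import sympy as sp

x, C = sp.symbols('x C')
y = C * sp.exp(x) + 1

# the fixed-point equation collapsed to y - 1 = dy/dx
assert sp.simplify((y - 1) - sp.diff(y, x)) == 0

# the nested-radical value satisfies x^2 - 3x + 1 = 0
r = sp.Rational(3, 2) + sp.sqrt(5) / 2
assert sp.simplify(r**2 - 3*r + 1) == 0
```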
<p>If the operator is meant to differentiate what follows, then we have $$y(x) = \lim_{n \to \infty} n! \cdot y^{(n)}(x) \qquad,\qquad \forall\ x \in X$$</p> <p>since the derivative of a constant is always $0$, and the derivative of a sum is the sum of derivatives. However, if multiplication is meant, with the “last” term of the nested product presumably being none other than <i>y(x)</i>, then we have $$y(x) = \sum_{n=1}^\infty n! \cdot y^{(n)}(x) \qquad,\qquad \forall\ x \in X$$</p> <p>where $y : X \to Y$ ; either way, since $$\lim_{n \to \infty}n! = \infty$$ then, in order for the function to converge $\forall\ x \in X$ , we must have $$\lim_{n \to \infty} y^{(n)}(x) = 0 \quad,\quad \forall\ x \in X \quad\implies\quad y(x)\ =\ P_m(x)\ =\ \sum_{k=0}^m a_k \cdot x^k \quad,\quad m \in \mathbb{N}$$</p> <p>since the $N$<sup>th</sup> nested integral of $0$ is nothing other than a polynomial function of degree $N-1$.</p>
<p>After quite a bit of time, I thought about this some more!</p> <p>If we cut off our original expression at a finite depth $n$, we have (as pointed out by Aryabhata)</p> <p>$$y = n!\frac{d^n y}{dx^n} $$</p> <p>Solutions to this include $$ y= C_1 e^{\omega_1 x\sqrt[n]{n!}} + C_2 e^{\omega_2 x\sqrt[n]{n!}} + \cdots + C_n e^{\omega_n x\sqrt[n]{n!}} $$</p> <p>for all possible $n$th roots of unity $\omega_1,\dots,\omega_n$.</p> <p>Now consider Stirling's approximation of $n!$, which states</p> <p>$$n! \le e n^{n + \frac{1}{2}} e^{-n} $$</p> <p>(Note that for $n \ge 0$ these functions are both greater than or equal to 1.) Hence</p> <p>$$ |\sqrt[n]{n!}| \le |\sqrt[n]{e n^{n + \frac{1}{2}} e^{-n}} | $$</p> <p>which yields:</p> <p>$$ |\sqrt[n]{n!}| \le |{e^{\frac{1}{n}} n^{1 + \frac{1}{2n}} e^{-1}} | $$</p> <p>and therefore:</p> <p>$$ |\sqrt[n]{n!}| = O(n) $$</p> <p>As $n$ tends to infinity, so does this.</p> <p>But before I throw out any hope for this, I would like to point out that as $n$ gets larger, we essentially obtain a basis for functions in terms of complex exponentials. I'm curious if there is a way to pick the constants $C_i$ such that for a given $n$, you can model the function as closely as possible, and determine then in the limit what the $C_i$ need to be (probably very degenerate piecewise constant functions).</p> <p>If there is such a scheme, it may very well be that ANY function satisfies this differential equation, assuming the set of exponentials gets closer and closer to forming a basis for all functions (which in the limit as $n$ goes to infinity means it does indeed form a basis). The tricky part is defining "how well" a basis fits, i.e. if it doesn't cover every case, how do we say that it covers more cases, or relatively more cases, than its predecessor.</p>
game-theory
<p>Let's say we have a cable of unit length, which is damaged at one unknown point, the location of which is uniformly distributed. You are allowed to cut the cable at any point, and after a cut, you'd know which piece is damaged and which is not. You can do this as many times as you want, and you want to maximize the <em>expected length of the biggest undamaged piece</em> after all the cutting. What is the best you can do? The strategy need not be deterministic.</p> <p>I currently have a lower bound of $\frac12$ and upper bound of $\frac34$. The lower bound comes from just cutting the cable in half once.</p> <p>For the upper bound, notice that if the fault is at $\frac12 \pm x$, then you cannot do better than $\frac12 +x$. So taking the expected value, $$ \int_0^{\frac12} \left(\frac12+x\right)2 dx = \frac34$$</p> <p>I initially thought that an optimal strategy would have to be applied recursively to the damaged piece, but now I'm no longer convinced of this. If you've already obtained an undamaged piece of length $l$, then there is no point in cutting a damaged piece into two pieces of length $\leq l$.</p> <p>Any reference to an existing treatment is also welcome.</p>
<p>An &quot;algorithm&quot; (though it might possibly run indefinitely, depending on our strategy) necessarily goes like this</p> <blockquote> <p><strong>Step 0.</strong> Let <span class="math-container">$k\leftarrow1$</span>.</p> <p><strong>Step 1.</strong> <em>[Now we have <span class="math-container">$k$</span> parts of cable, i.e. subintervals of <span class="math-container">$[0,1]$</span>, exactly one of which is defective]</em> If the defective part is strictly longer than all good parts, go to step 2. Otherwise there exists some good part at least as long as the defective part and further subdivisions cannot improve the result; pick a longest good part as result and terminate.</p> <p><strong>Step 2.</strong> <em>[Now the defective part <span class="math-container">$[a,b]$</span> is strictly longer than all other parts]</em> Pick a suitable point of cutting the defective part, i.e. pick some <span class="math-container">$x_k\in(a,b)$</span>, thus replacing <span class="math-container">$[a,b]$</span> with <span class="math-container">$[a,x_k]$</span> and <span class="math-container">$[x_k,b]$</span>. <em>[The probability that <span class="math-container">$x_k$</span> equals the point of defect is zero and can be ignored]</em> Let <span class="math-container">$k\leftarrow k+1$</span> and go to step 1.</p> </blockquote> <p>Thus a strategy consists of a sequence of cutting points <span class="math-container">$x_i\in[0,1]$</span> for all <span class="math-container">$i\in N$</span> with <span class="math-container">$N=\mathbb N$</span> or <span class="math-container">$N=\{1,2,\ldots n\}$</span> (i.e. the sequence might be finite or infinite). We may assume wlog. that <span class="math-container">$N$</span> is not unnecessarily big, i.e. if step 2 will never be entered with some value <span class="math-container">$k$</span>, no matter where the actual defect is in the cable, we can as well assume that <span class="math-container">$N\subseteq\{1,\ldots,k-1\}$</span>.
In other words, if <span class="math-container">$k\in N$</span>, then there exists a point <span class="math-container">$x\in[0,1]$</span> such that a defect at <span class="math-container">$x$</span> will make the algorithm above make use of the cutting point <span class="math-container">$x_k$</span>.</p> <p>By symmetry, we may assume in step 2 that <span class="math-container">$$\tag1x_k-a\ge b-x_k,$$</span> i.e. the new left part is always at least as long as the new right part. Then we can show by induction, that the interval <span class="math-container">$[a,b]$</span> in step 2 is always <span class="math-container">$[0,x_{k-1}]$</span> (formally letting <span class="math-container">$x_0=1$</span>). In fact this is trivially true when <span class="math-container">$k=1$</span>. If the claim is true for some <span class="math-container">$k$</span>, then in step 2 interval <span class="math-container">$[0,x_{k-1}]$</span> is divided into <span class="math-container">$[0,x_k]$</span> and <span class="math-container">$[x_k,x_{k-1}]$</span>. By <span class="math-container">$(1)$</span> the interval <span class="math-container">$[0,x_k]$</span> is at least as long as <span class="math-container">$[x_k,x_{k-1}]$</span> (i.e. after <span class="math-container">$k\leftarrow k+1$</span>, interval <span class="math-container">$[0,x_{k-1}]$</span> is at least as long as <span class="math-container">$[x_{k-1},x_{k-2}]$</span>). Thus in step 1, we either terminate or continue with step 2 and <span class="math-container">$[a,b]=[0,x_{k-1}]$</span> as was to be shown.</p> <p>In other words, the sequence <span class="math-container">$(x_i)_{i\in N}$</span> is strictly decreasing and more precisely <span class="math-container">$$\tag2 \frac {x_{k-1}}2\le x_k&lt;x_{k-1}\qquad\text{for all }k\in N.$$</span> As the sequence is also bounded from below, <span class="math-container">$L:=\max\{\,x_{k-1}-x_{k}\mid k\in N\,\}$</span> exists. 
Note that we may assume <span class="math-container">$x_k\ge L$</span> for all <span class="math-container">$k\in N$</span> as otherwise by <span class="math-container">$(2)$</span> neither of the pieces <span class="math-container">$[0,x_k]$</span> or <span class="math-container">$[x_k,x_{k-1}]$</span> can be an improvement over <span class="math-container">$L$</span>.</p> <p>Let <span class="math-container">$f(x)$</span> denote the length of longest good cable obtained by employing the strategy <span class="math-container">$(x_n)_{n\in N}$</span> if the actual point of defect is at <span class="math-container">$x\in[0,1]$</span>. If <span class="math-container">$x_k&lt; x&lt; x_{k-1}$</span> for some <span class="math-container">$k\in N$</span>, then we will stop after performing the cut at <span class="math-container">$x_k$</span> and the longest part will be <span class="math-container">$[0,x_k]$</span>, so <span class="math-container">$f(x)=x_k$</span> for <span class="math-container">$x\in(x_k,x_{k-1})$</span>. If <span class="math-container">$x&lt;x_k$</span> for all <span class="math-container">$k\in N$</span>, then <span class="math-container">$f(x)=L$</span>. The expected length is simply <span class="math-container">$\int_0^1f(x)\,\mathrm dx$</span>. The graph of <span class="math-container">$f$</span> is bounded from above by the diagonal, except for the triangle on the lower left with vertices <span class="math-container">$(0,0) (0,L), (L,L)$</span>. On the other hand, a triangle of the same size is &quot;missing&quot; over an interval <span class="math-container">$[x_k,x_{k-1}]$</span> with <span class="math-container">$x_{k-1}-x_k=L$</span>. 
<img src="https://i.sstatic.net/IDpWG.png" alt="Proof without words of (3) for an example function f" /></p> <p>We conclude that <span class="math-container">$$\tag3 \int_0^1f(x)\,\mathrm dx\le \int_0^1x\,\mathrm dx=\frac12.$$</span></p> <p>The estimate <span class="math-container">$(3)$</span> also applies to mixed strategies (i.e. a strategy <span class="math-container">$(x_n)_{n\in N}$</span> is picked randomly according to some distribution). On the other hand, the bound in <span class="math-container">$(3)$</span> is sharp: a strategy that does indeed lead to <span class="math-container">$\frac12$</span> as expected value is given by <span class="math-container">$N=\{1\}$</span>, <span class="math-container">$x_1=\frac12$</span> (as already noted by the OP).</p>
<p>A strategy consists of snipping off pieces of length $\ell_1,\ell_2,\ldots$, subject to the condition $\ell_1+\ell_2+\cdots = 1-M$, where $M=\max\{\ell_1,\ell_2,\ldots\}$, and stopping if and when the cut-off piece is found to contain the bad spot. The probability that the bad spot lies in the $i$th snippet is $\ell_i$, and the probability it lies in none of them is $M$. Therefore the expected longest piece when the process ends is</p> <p>$$\ell_1(\ell_2+\ell_3+\cdots+M)+\ell_2(\ell_3+\ell_4+\cdots+M)+\cdots+M^2$$</p> <p>It's easy to see that this sum is unchanged if any $\ell_i$ and $\ell_{i+1}$ are interchanged:</p> <p>$$\begin{align} &amp;\cdots+\ell_{i-1}(\ell_i+\ell_{i+1}+\ell_{i+2}+\cdots+M)+\ell_i(\ell_{i+1}+\ell_{i+2}+\cdots+M)+\ell_{i+1}(\ell_{i+2}+\cdots+M)+\cdots\\ &amp;=\cdots+\ell_{i-1}(\ell_{i+1}+\ell_i+\ell_{i+2}+\cdots+M)+\ell_{i+1}(\ell_i+\ell_{i+2}+\cdots+M)+\ell_i(\ell_{i+2}+\cdots+M)+\cdots\\ \end{align}$$</p> <p>Consequently, any strategy is equivalent to one in which $\ell_1\ge\ell_2\ge\ell_3\ge\cdots$, for which $M=\ell_1$. But this leaves us in the case that kaine earlier analyzed: As soon as you are guaranteed a piece of length $M$ and plan to make all subsequent cuts no longer than $M$, you can only improve the final result by making the subsequent cuts infinitesimally short. So the best you can get, as kaine showed, is an expected value of $1/2$.</p>
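<p>The expected-value formula above can be coded directly, confirming both the swap-invariance and the $1/2$ bound on a few arbitrary example strategies (a sketch):</p>

```python
def expected_longest(snips):
    """Expected length of the longest good piece, for snip lengths
    l1, l2, ... summing to 1 - M, where M = max(snips)."""
    M = max(snips)
    assert abs(sum(snips) - (1 - M)) < 1e-12  # the snips must sum to 1 - M
    total = M * M                             # bad spot lies in none of the snips
    for i, l in enumerate(snips):
        total += l * (sum(snips[i + 1:]) + M)
    return total

# the single middle cut achieves exactly 1/2
assert abs(expected_longest([0.5]) - 0.5) < 1e-9
# swapping adjacent snips leaves the expectation unchanged, and it stays below 1/2
assert abs(expected_longest([0.4, 0.2]) - expected_longest([0.2, 0.4])) < 1e-9
assert expected_longest([0.4, 0.2]) < 0.5
```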
linear-algebra
<p>The rotation matrix $$\pmatrix{ \cos \theta &amp; \sin \theta \\ -\sin \theta &amp; \cos \theta}$$ has complex eigenvalues $\{e^{\pm i\theta}\}$ corresponding to eigenvectors $\pmatrix{1 \\i}$ and $\pmatrix{1 \\ -i}$. The real eigenvector of a 3d rotation matrix has a natural interpretation as the axis of rotation. Is there a nice geometric interpretation of the eigenvectors of the $2 \times 2$ matrix?</p>
<p><a href="https://math.stackexchange.com/a/241399/3820">Tom Oldfield's answer</a> is great, but you asked for a geometric interpretation so I made some pictures.</p> <p>The pictures will use what I called a "phased bar chart", which shows complex values as bars that have been rotated. Each bar corresponds to a vector component, with length showing magnitude and direction showing phase. An example:</p> <p><img src="https://i.sstatic.net/8YsvT.png" alt="Example phased bar chart"></p> <p>The important property we care about is that scaling a vector corresponds to the chart scaling or rotating. Other transformations cause it to distort, so we can use it to recognize eigenvectors based on the lack of distortions. (I go into more depth in <a href="http://twistedoakstudios.com/blog/Post7254_visualizing-the-eigenvectors-of-a-rotation" rel="noreferrer">this blog post</a>.)</p> <p>So here's what it looks like when we rotate <code>&lt;0, 1&gt;</code> and <code>&lt;i, 0&gt;</code>:</p> <p><img src="https://i.sstatic.net/Uauy6.gif" alt="Rotating 0, 1"> <img src="https://i.sstatic.net/nYUhr.gif" alt="Rotating i, 0"></p> <p>Those diagrams are not just scaling/rotating. So <code>&lt;0, 1&gt;</code> and <code>&lt;i, 0&gt;</code> are not eigenvectors.</p> <p>However, they do incorporate horizontal and vertical sinusoidal motion. Any guesses what happens when we put them together?</p> <p>Trying <code>&lt;1, i&gt;</code> and <code>&lt;1, -i&gt;</code>:</p> <p><img src="https://i.sstatic.net/t80MU.gif" alt="Rotating 1, i"> <img src="https://i.sstatic.net/OObMZ.gif" alt="Rotation 1, -i"></p> <p>There you have it. The phased bar charts of the rotated eigenvectors are being rotated (corresponding to the components being phased) as the vector is turned. Other vectors get distorting charts when you turn them, so they aren't eigenvectors.</p>
<p>Lovely question!</p> <p>There is a kind of intuitive way to view the eigenvalues and eigenvectors, and it ties in with geometric ideas as well (without resorting to four dimensions!). </p> <p>The matrix is unitary (more specifically, it is real so it is called orthogonal) and so there is an orthogonal basis of eigenvectors. Here, as you noted, it is $\pmatrix{1 \\i}$ and $\pmatrix{1 \\ -i}$, let us call them $v_1$ and $v_2$, that form a basis of $\mathbb{C^2}$, and so we can write any element of $\mathbb{R^2}$ in terms of $v_1$ and $v_2$ as well, since $\mathbb{R^2}$ is a subset of $\mathbb{C^2}$. (And we normally think of rotations as occurring in $\mathbb{R^2}$! Please note that $\mathbb{C^2}$ is a two-dimensional vector space with components in $\mathbb{C}$ and need not be considered as four-dimensional, with components in $\mathbb{R}$.)</p> <p>We can then represent any vector in $\mathbb{R^2}$ uniquely as a linear combination of these two vectors $x = \lambda_1 v_1 + \lambda_2v_2$, with $\lambda_i \in \mathbb{C}$. So if we call the linear map that the matrix represents $R$</p> <p>$$R(x) = R(\lambda_1 v_1 + \lambda_2v_2) = \lambda_1 R(v_1) + \lambda_2R(v_2) = e^{i\theta}\lambda_1 (v_1) + e^{-i\theta}\lambda_2(v_2) $$</p> <p>In other words, when working in the basis $\{v_1,v_2\}$: $$R \pmatrix{\lambda_1 \\\lambda_2} = \pmatrix{e^{i\theta}\lambda_1 \\ e^{-i\theta}\lambda_2}$$</p> <p>And we know that multiplying a complex number by $e^{i\theta}$ is an anticlockwise rotation by theta. So the rotation of a vector when represented by the basis $\{v_1,v_2\}$ is the same as just rotating the individual components of the vector in the complex plane!</p>
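<p>A small numerical check of these eigenpairs (a sketch; $\theta = 0.7$ is an arbitrary angle):</p>

```python
import numpy as np

theta = 0.7  # an arbitrary rotation angle
R = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

v1 = np.array([1, 1j])   # eigenvector for e^{+i*theta}
v2 = np.array([1, -1j])  # eigenvector for e^{-i*theta}

assert np.allclose(R @ v1, np.exp(1j * theta) * v1)
assert np.allclose(R @ v2, np.exp(-1j * theta) * v2)
```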
probability
<p>TL;DR:</p> <ul> <li><p>is a stopping time some sort of event, or is it a point in discrete time, or something else entirely</p></li> <li><p>what is an example of something which is not a stopping time?</p></li> <li><p>is my understanding of the concepts and definitions below correct?</p></li> </ul> <hr> <p>I am having difficulty understanding what a stopping time is.</p> <p>The definition I am provided with is as follows: A random time $τ$ is called a stopping time if for any $n$, one can decide whether the event $\{τ ≤ n\}$ (and hence the complementary event $\{τ &gt; n\}$) has occurred by observing the first $n$ variables $X_1, X_2, . . . , X_n$.</p> <p>We are then given an example: Time of ruin is a stopping time.</p> <p>$τ = \min\{n : X_n = 0\}$. $\{τ &gt; n\} = \{X_1 &gt; 0, X_2 &gt; 0, . . . , X_n &gt; 0\}$.</p> <p>I don't quite understand what this is supposed to tell us.</p> <blockquote> <p>random time $τ$ is called a stopping time if for any $n$, one can decide whether the event $\{τ ≤ n\}$ has occurred by observing the first $n$ variables $X_1, X_2, . . . , X_n$</p> </blockquote> <p>When they say the event $\{ τ \leq n\}$ they are referring to some specific time, are they not? e.g. $τ = 1$ or maybe $τ = 4$ as long as $τ \leq n$</p> <p>Is this correct?</p> <p>So then if we know the event $ \{τ \leq n\}$ has or has not occurred, we can conclude whether the complementary event $\{τ &gt; n\}$ has occurred.</p> <p>If this is fine so far, then I have issues with the example.</p> <blockquote> <p>Time of ruin is a stopping time, $τ = \min\{n : X_n = 0\}$</p> </blockquote> <p>Firstly, time of ruin to me means a point where you have $0$ or a negative balance of some sort of asset (For a gambler, no more money to gamble with, for a business owner, no more cash to pay expenses or obligations) - is this correct?</p> <p>In that case Time of ruin should occur when $X_n \leq 0$, correct?</p> <p>IF that is fine, then continuing, what does</p> <p>$τ = \min\{n : X_n = 0\}$ mean?
This is not the same as $τ = \min\{n,X_n\}$, is it? What is it trying to say? I read it as, the minimum of $n$, such that $X_n = 0$</p> <p>So it's saying $τ$ is the first point at which we are ruined?</p> <p>Is my understanding all correct? Can someone provide me with an example of what is NOT a stopping time? Does a "stopping time" refer to a type of event?</p>
<p>You have some event, which you typically don't know when occurs, but that can/will occur some time in the future. The time that this event occurs is random, and it is a stopping time if, at any point in time, you know whether the event has occurred or not.</p> <p>A few quick examples.</p> <p>1) Your own (a stopping time): Let $\tau$ denote the time that I'm ruined (i.e. when I have no money left). At any time, I know whether I am ruined or not. For instance, I am not ruined right now. I don't know when ruin occurs, or if it will occur at all, but if it does, I will know.</p> <p>2) Parking (not a stopping time): Suppose I am driving along a very long road, and that I'm looking for the parking spot which is furthest towards the other end of the road (call this "the last parking spot"). I pass by available spots along the way, but at any time, I never know if I have passed the last free parking spot. Why? I could just have passed some empty spot, but I cannot see if there are more empty spots later on, and I wouldn't know if the spot that I just passed was the last one or not.</p> <p>3) My birthday this year (a stopping time): This is a deterministic stopping time. At any time, I know whether or not my birthday has occurred this year. In fact, I know exactly when my birthday occurs, which makes this a non-typical stopping time in the sense that it is deterministic. </p>
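<p>The ruin-time example can be made concrete (a sketch; the paths below are arbitrary illustrations): deciding whether $\{\tau \le n\}$ has occurred requires scanning only $X_1,\dots,X_n$, which is exactly what makes $\tau$ a stopping time.</p>

```python
def ruin_time(path):
    """tau = min{n : X_n == 0}; decided by looking at the path one step at a time."""
    for n, x in enumerate(path, start=1):
        if x == 0:
            return n   # {tau <= n} has occurred
    return None        # ruin not observed within the observed steps

print(ruin_time([2, 1, 0, 1]))  # 3: the first time the balance hits 0
print(ruin_time([1, 2, 3]))     # None: not ruined (yet)
```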
<p>You seem to understand the concept pretty well. Just like you, I would have said that the time of ruin is $τ = \min\{n : X_n \leq 0\}$ instead of $τ = \min\{n : X_n = 0\}$. But in this precise example, the time of ruin is the first time that you have exactly 0.</p> <p>The concept of stopping time is closely related to that of filtration of a stochastic process. In other words, $\tau $ is a stopping time if the event $\lbrace\tau \leq n\rbrace$ is measurable, with respect to the filtration you're using, which is usually $\mathcal{F}_n=\sigma(X_0,\dots,X_n)$.</p>
matrices
<p>For a lower triangular matrix, the inverse should be easy to find, because that's the idea of the LU decomposition, am I right? For many lower or upper triangular matrices, I can often just flip the signs to get the inverse. For example: $$\begin{bmatrix} 1 &amp; 0 &amp; 0\\ 0 &amp; 1 &amp; 0\\ -1.5 &amp; 0 &amp; 1 \end{bmatrix}^{-1}= \begin{bmatrix} 1 &amp; 0 &amp; 0\\ 0 &amp; 1 &amp; 0\\ 1.5 &amp; 0 &amp; 1 \end{bmatrix}$$ I just flipped from -1.5 to 1.5 and I got the inverse.</p> <p>But this apparently doesn't work all the time. Say in this matrix: $$\begin{bmatrix} 1 &amp; 0 &amp; 0\\ -2 &amp; 1 &amp; 0\\ 3.5 &amp; -2.5 &amp; 1 \end{bmatrix}^{-1}\neq \begin{bmatrix} 1 &amp; 0 &amp; 0\\ 2 &amp; 1 &amp; 0\\ -3.5 &amp; 2.5 &amp; 1 \end{bmatrix}$$ By flipping the signs, the inverse is wrong. But if I go through the whole tedious process of Gauss-Jordan elimination, I get the correct inverse: $\begin{bmatrix} 1 &amp; 0 &amp; 0\\ -2 &amp; 1 &amp; 0\\ 3.5 &amp; -2.5 &amp; 1 \end{bmatrix}^{-1}= \begin{bmatrix} 1 &amp; 0 &amp; 0\\ 2 &amp; 1 &amp; 0\\ 1.5 &amp; 2.5 &amp; 1 \end{bmatrix}$ So it looks like some entries can just have their signs flipped, but not others.</p> <p>This is kind of weird, because I thought the whole point of getting the lower and upper triangular matrices is to avoid the tedious process of Gauss-Jordan elimination and to get the inverse quickly by flipping signs. Maybe I have missed something here. How can I get the inverse of a lower or upper triangular matrix quickly?</p>
<p>Ziyuang's answer handles the case where <span class="math-container">$N^2=0$</span>, but it can be generalized as follows. A triangular <span class="math-container">$n\times n$</span> matrix <span class="math-container">$T$</span> with 1s on the diagonal can be written in the form <span class="math-container">$T=I+N$</span>. Here <span class="math-container">$N$</span> is the strictly triangular part (with zeros on the diagonal), and it always satisfies the relation <span class="math-container">$N^{n}=0$</span>. Therefore we can use the polynomial factorization <span class="math-container">$1-x^n=(1-x)(1+x+x^2+\cdots +x^{n-1})$</span> with <span class="math-container">$x=-N$</span> to get the matrix relation <span class="math-container">$$ (I+N)(I-N+N^2-N^3+\cdots+(-1)^{n-1}N^{n-1})=I + (-1)^{n-1}N^n=I $$</span> telling us that <span class="math-container">$(I+N)^{-1}=I+\sum_{k=1}^{n-1}(-1)^kN^k$</span>.</p> <p>Yet another way of looking at this is to notice that it also is an instance of a geometric series <span class="math-container">$1+q+q^2+q^3+\cdots =1/(1-q)$</span> with <span class="math-container">$q=-N$</span>. The series converges for the unusual reason that powers of <span class="math-container">$q$</span> are all zero from some point on. The same formula can be used to good effect elsewhere in algebra, too. For example, in a residue class ring like <span class="math-container">$\mathbf{Z}/2^n\mathbf{Z}$</span> all the even numbers are nilpotent, so computing the modular inverse of an odd number can be done with this formula. </p>
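<p>For small matrices this series inverse is easy to carry out by hand or in code. A pure-Python sketch (the helper names are mine), checked against the $3\times 3$ example from the question:</p>

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inv_unit_lower(T):
    """Invert T = I + N (1s on the diagonal) as I - N + N^2 - ... +- N^(n-1)."""
    n = len(T)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    N = [[T[i][j] - I[i][j] for j in range(n)] for i in range(n)]
    result = [row[:] for row in I]
    power = [row[:] for row in I]
    sign = 1
    for _ in range(n - 1):
        power = matmul(power, N)          # N, N^2, ..., N^(n-1); N^n = 0
        sign = -sign
        result = [[result[i][j] + sign * power[i][j] for j in range(n)]
                  for i in range(n)]
    return result

T = [[1, 0, 0], [-2, 1, 0], [3.5, -2.5, 1]]
Tinv = inv_unit_lower(T)   # [[1, 0, 0], [2, 1, 0], [1.5, 2.5, 1]]
```

<p>Only $n-1$ matrix multiplications are needed, and no elimination or pivoting at all.</p>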
<p>In the case of a lower triangular matrix with arbitrary non-zero diagonal entries, you just need to factor it as $T = D(I+N)$, where $D$ is a diagonal matrix and $N$ is again a strictly lower triangular matrix. Then $T^{-1} = (I+N)^{-1}D^{-1}$, and everything said about the inverse in the previous answers applies unchanged.</p>
linear-algebra
<p>I wrote an answer to <a href="https://math.stackexchange.com/questions/854154/when-is-r-a-1-rt-invertible/854160#854160">this</a> question based on determinants, but subsequently deleted it because the OP is interested in non-square matrices, which effectively blocks the use of determinants and thereby undermined the entire answer. However, it can be salvaged if there exists a function $\det$ defined on <strong>all</strong> real-valued matrices (not just the square ones) having the following properties.</p> <ol> <li>$\det$ is real-valued</li> <li>$\det$ has its usual value for square matrices</li> <li>$\det(AB)$ always equals $\det(A)\det(B)$ whenever the product $AB$ is defined.</li> <li>$\det(A) \neq 0$ iff $\det(A^\top) \neq 0$</li> </ol> <p>Does such a function exist?</p>
<p>Such a function cannot exist. Let $A = \begin{pmatrix} 1 &amp; 0 \\ 0 &amp; 1 \\ 0 &amp; 0\end{pmatrix}$ and $B = \begin{pmatrix} 1 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 0 \end{pmatrix}$. Then, since both $AB$ and $BA$ are square, if there existed a function $D$ with the properties 1-3 stated there would hold \begin{align} \begin{split} 1 &amp;= \det \begin{pmatrix} 1 &amp; 0 \\ 0 &amp; 1 \end{pmatrix} = \det(BA) = D(BA) = D(B)D(A) \\ &amp;= D(A)D(B) = D(AB) = \det(AB) = \det \begin{pmatrix} 1 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 0 \end{pmatrix} = 0. \end{split} \end{align}</p>
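<p>The clash is easy to check numerically; a quick sketch of mine (the tiny <code>det</code> and <code>matmul</code> helpers are my own) using the two matrices above:</p>

```python
def det(M):
    """Determinant by Laplace expansion along the first row (fine for tiny matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def matmul(A, B):
    """Multiply an m x k matrix by a k x n matrix."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 0], [0, 1], [0, 0]]      # 3 x 2
B = [[1, 0, 0], [0, 1, 0]]        # 2 x 3
# det(BA) = 1 but det(AB) = 0, so no multiplicative extension can exist
print(det(matmul(B, A)), det(matmul(A, B)))
```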
<p>This extension of determinants has all 4 properties if A is a square matrix (except it loses the sign), and retains some attributes of determinants otherwise.</p> <p>If A has more rows than columns, then <span class="math-container">$$|A|^2=|A^{T}A|$$</span> If A has more columns than rows, then <span class="math-container">$$|A|^2=|AA^{T}|$$</span></p> <p>This has a valid and useful geometric interpretation. If you have a transformation <span class="math-container">$A$</span>, this still measures how the transformation scales volumes, up to the smaller of the dimensions of the input and output space.</p> <p>You may take this to be the absolute value of the determinant. It's always non-negative because, when looking at a space embedded in a higher dimensional space, a positive area can become a negative area when looked at from behind.</p>
logic
<p>By the fundamental theorem of calculus I mean the following.</p> <p><strong>Theorem:</strong> Let $B$ be a Banach space and $f : [a, b] \to B$ be a continuously differentiable function (this means that we can write $f(x + h) = f(x) + h f'(x) + o(|h|)$ for some continuous function $f' : [a, b] \to B$). Then</p> <p>$$\int_a^b f'(t) \, dt = f(b) - f(a).$$</p> <p>(This integral can be defined in any reasonable way, e.g. one can use the Bochner integral or a Riemann sum.)</p> <p>This theorem can be proven from Hahn-Banach, which allows you to reduce to the case $B = \mathbb{R}$. However, Hahn-Banach is independent of ZF.</p> <p>Recently I tried to prove this theorem without Hahn-Banach and found that I couldn't do it. The standard proof in the case $B = \mathbb{R}$ relies on the mean value theorem, which is not applicable here. I can only prove it (I think) under stronger hypotheses, e.g. $f'$ continuously differentiable or Lipschitz.</p> <p>So I am curious whether this theorem is even true in the absence of Hahn-Banach. It is likely that I am just missing some nice argument involving uniform continuity, but if I'm not, that would be good to know.</p>
<p>I believe that one of the standard proofs works.</p> <ol> <li><p>Let $F(x) := \intop_a^x f^\prime (t) dt$. Then $F$ is differentiable and its derivative is $f^\prime$ due to a standard estimate that has nothing to do with AC.</p></li> <li><p>$(F-f)^\prime = 0$, hence it is constant. This boils down to the one-dimensional case: just consider $g := \Vert F-f-F(a)+f(a) \Vert$. It is a real-valued function with zero derivative, and $g(a)=0$, so we can use the usual "one-dimensional" mean value theorem.</p></li> </ol>
<p>Claim: Let $g:[a,b]\to B$ be differentiable, with $g'(t)=0$ for all $t\in[a,b]$. Then $g(t)$ is a constant.</p> <p>Proof: Fix $\epsilon&gt;0$. For each $t\in[a,b]$, we can find $\delta_t&gt;0$ such that $0&lt;|h|&lt;\delta_t\Rightarrow\|g(t+h)-g(t)\|&lt;\epsilon|h|$. The open intervals $(t-\delta_t,t+\delta_t)$ cover $[a,b]$, so there is a finite subcover $\{(t_i-\delta_{t_i},t_i+\delta_{t_i}):1\leq i\leq N\}$. We may choose our labeling so that $t_1&lt;t_2&lt;\ldots&lt;t_N$. Now, we should be able to find points $x_0,\ldots,x_N$ with $x_0=a&lt;t_1&lt;x_1&lt;t_2&lt;\ldots&lt;t_N&lt;x_N=b$, satisfying $|x_i-t_i|&lt;\delta_{t_i}$ and $|x_{i-1}-t_i|&lt;\delta_{t_i}$ for $1\leq i\leq N$. Now, \begin{eqnarray*} \|g(b)-g(a)\|&amp;=&amp;\left\|\sum_{i=1}^N(g(x_i)-g(t_{i}))-(g(t_i)-g(x_{i-1}))\right\|\\ &amp;&lt;&amp;\sum_{i=1}^N \epsilon (x_i-t_i)+\epsilon(t_i-x_{i-1})\\ &amp;=&amp;\epsilon(b-a) \end{eqnarray*} Since $\epsilon$ is arbitrary, we have $g(b)=g(a)$. This argument works on any subinterval, so $g(t)$ is constant.</p> <p>We can apply the above with $g(x):=\int_a^xf'(t)\,dt-(f(x)-f(a))$. I believe it can be checked directly that $g'(x)=0$ for all $x$, and that $g(a)=0$, so that $g(x)$ must be identically 0.</p>
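<p>The statement itself is easy to sanity-check numerically, say for $B=\Bbb{R}^2$ with a midpoint Riemann sum (an illustration of mine, not part of the proof):</p>

```python
import math

# f : [0, 1] -> R^2 with f(t) = (t^2, sin t), so f'(t) = (2t, cos t)
def fprime(t):
    return (2 * t, math.cos(t))

a, b, m = 0.0, 1.0, 100_000
h = (b - a) / m
# midpoint Riemann sum of f' over [a, b], computed componentwise
integral = tuple(
    h * sum(fprime(a + (i + 0.5) * h)[c] for i in range(m)) for c in (0, 1)
)
# compare with f(b) - f(a) = (1 - 0, sin 1 - sin 0)
error = max(abs(integral[0] - 1.0), abs(integral[1] - math.sin(1.0)))
```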
logic
<p>Given the set of standard axioms (I'm not asking for proof of those), do we know for sure that a proof exists for all unproven theorems? For example, I believe the Goldbach Conjecture is not proven even though we "consider" it true.</p> <p>Phrased another way, have we <em>proven</em> that if a mathematical statement is true, a proof of it exists? That, therefore, anything that is true can be proven, and anything that cannot be proven is not true? Or, is there a counterexample to that statement?</p> <p>If it hasn't been proven either way, do we have a strong idea one way or the other? Is it generally thought that some theorems can have no proof, or not?</p>
<p>Relatively recent discoveries yield a number of so-called 'natural independence' results that provide much more natural examples of independence than does Gödel's example based upon the liar paradox (or other syntactic diagonalizations). As an example of such results, I'll sketch a simple example due to Goodstein of a concrete number theoretic theorem whose proof is independent of formal number theory PA <a href="https://en.wikipedia.org/wiki/Peano_axioms#First-order_theory_of_arithmetic" rel="nofollow noreferrer">(Peano Arithmetic)</a> (following [Sim]).</p> <p>Let <span class="math-container">$\,b\ge 2\,$</span> be a positive integer. Any nonnegative integer <span class="math-container">$n$</span> can be written uniquely in base <span class="math-container">$b$</span> <span class="math-container">$$\smash{n\, =\, c_1 b^{\large n_1} +\, \cdots + c_k b^{\large n_k}} $$</span></p> <p>where <span class="math-container">$\,k \ge 0,\,$</span> and <span class="math-container">$\, 0 &lt; c_i &lt; b,\,$</span> and <span class="math-container">$\, n_1 &gt; \ldots &gt; n_k \ge 0,\,$</span> for <span class="math-container">$\,i = 1, \ldots, k.$</span></p> <p>For example the base <span class="math-container">$\,2\,$</span> representation of <span class="math-container">$\,266\,$</span> is <span class="math-container">$$266 = 2^8 + 2^3 + 2$$</span></p> <p>We may extend this by writing each of the exponents <span class="math-container">$\,n_1,\ldots,n_k\,$</span> in base <span class="math-container">$\,b\,$</span> notation, then doing the same for each of the exponents in the resulting representations, <span class="math-container">$\ldots,\,$</span> until the process stops. This yields the so-called 'hereditary base <span class="math-container">$\,b\,$</span> representation of <span class="math-container">$\,n$</span>'. 
For example the hereditary base <span class="math-container">$2$</span> representation of <span class="math-container">$\,266\,$</span> is <span class="math-container">$${266 = 2^{\large 2^{2+1}}\! + 2^{2+1} + 2} $$</span></p> <p>Let <span class="math-container">$\,B_{\,b}(n)$</span> be the nonnegative integer which results if we take the hereditary base <span class="math-container">$\,b\,$</span> representation of <span class="math-container">$\,n\,$</span> and then syntactically replace each <span class="math-container">$\,b\,$</span> by <span class="math-container">$\,b+1,\,$</span> i.e. <span class="math-container">$\,B_{\,b}\,$</span> is a base change operator that 'Bumps the Base' from <span class="math-container">$\,b\,$</span> up to <span class="math-container">$\,b+1.\,$</span> For example bumping the base from <span class="math-container">$\,2\,$</span> to <span class="math-container">$\,3\,$</span> in the prior equation yields <span class="math-container">$${B_{2}(266) = 3^{\large 3^{3+1}}\! + 3^{3+1} + 3\quad\ \ \ }$$</span></p> <p>Consider a sequence of integers obtained by repeatedly applying the operation: bump the base then subtract one from the result. For example, iteratively applying this operation to <span class="math-container">$\,266\,$</span> yields <span class="math-container">$$\begin{eqnarray} 266_0 &amp;=&amp;\ 2^{\large 2^{2+1}}\! + 2^{2+1} + 2\\ 266_1 &amp;=&amp;\ 3^{\large 3^{3+1}}\! + 3^{3+1} + 3 - 1\ =\ B_2(266_0) - 1 \\ ~ \ &amp;=&amp;\ 3^{\large 3^{3+1}}\! + 3^{3+1} + 2 \\ 266_2 &amp;=&amp;\ 4^{\large 4^{4+1}}\! + 4^{4+1} + 1\qquad\! =\ B_3(266_1) - 1 \\ 266_3 &amp;=&amp;\ 5^{\large5^{5+1}}\! + 5^{5+1}\phantom{ + 2}\qquad\ =\ B_4(266_2) - 1 \\ 266_4 &amp;=&amp;\ 6^{\large 6^{6+1}}\! + \color{#0a0}{6^{6+1}\! - 1} \\ ~ \ &amp;&amp;\ \textrm{using}\quad \color{#0a0}{6^7\ -\,\ 1}\ =\ \color{#c00}{5555555}\, \textrm{ in base } 6 \\ ~ \ &amp;=&amp;\ 6^{\large 6^{6+1}}\! 
+ \color{#c00}5\cdot 6^6 + \color{#c00}5\cdot 6^5 + \,\cdots + \color{#c00}5\cdot 6 + \color{#c00}5 \\ 266_5 &amp;=&amp;\ 7^{\large 7^{7+1}}\! + 5\cdot 7^7 + 5\cdot 7^5 +\, \cdots + 5\cdot 7 + 4 \\ &amp;\vdots &amp; \\ 266_{k+1} &amp;=&amp; \ \qquad\quad\ \cdots\qquad\quad\ = \ B_{k+2}(266_k) - 1 \\ \end{eqnarray}$$</span></p> <p>In general, if we start this procedure at the integer <span class="math-container">$\,n\,$</span> then we obtain what is known as the <em>Goodstein sequence</em> starting at <span class="math-container">$\,n.$</span></p> <p>More precisely, for each nonnegative integer <span class="math-container">$\,n\,$</span> we recursively define a sequence of nonnegative integers <span class="math-container">$\,n_0,\, n_1,\, \ldots ,\, n_k,\ldots\,$</span> by <span class="math-container">$$\begin{eqnarray} n_0\ &amp;:=&amp;\ n \\ n_{k+1}\ &amp;:=&amp;\ \begin{cases} B_{k+2}(n_k) - 1 &amp;\mbox{if }\ n_k &gt; 0 \\ \,0 &amp;\mbox{if }\ n_k = 0 \end{cases} \\ \end{eqnarray}$$</span></p> <p>If we examine the above Goodstein sequence for <span class="math-container">$\,266\,$</span> numerically we find that the sequence initially increases extremely rapidly:</p> <p><span class="math-container">$$\begin{eqnarray} 2^{\large 2^{2+1}}\!+2^{2+1}+2\ &amp;\sim&amp;\ 2^{\large 2^3} &amp;\sim&amp;\, 3\cdot 10^2 \\ 3^{\large 3^{3+1}}\!+3^{3+1}+2\ &amp;\sim&amp;\ 3^{\large 3^4} &amp;\sim&amp;\, 4\cdot 10^{38} \\ 4^{\large 4^{4+1}}\!+4^{4+1}+1\ &amp;\sim&amp;\ 4^{\large 4^5} &amp;\sim&amp;\, 3\cdot 10^{616} \\ 5^{\large 5^{5+1}}\!+5^{5+1}\ \ \phantom{+ 2} \ &amp;\sim&amp;\ 5^{\large 5^6} &amp;\sim&amp;\, 3\cdot 10^{10921} \\ 6^{\large 6^{6+1}}\!+5\cdot 6^{6}\quad\!+5\cdot 6^5\ \:+\cdots +5\cdot 6\ \ +5\ &amp;\sim&amp;\ 6^{\large 6^7} &amp;\sim&amp;\, 4\cdot 10^{217832} \\ 7^{\large 7^{7+1}}\!+5\cdot 7^{7}\quad\!+5\cdot 7^5\ \:+\cdots +5\cdot 7\ \ +4\ &amp;\sim&amp;\ 7^{\large 7^8} &amp;\sim&amp;\, 1\cdot 10^{4871822} \\ 8^{\large 8^{8+1}}\!+5\cdot 8^{8}\quad\!+5\cdot 
8^5\ \: +\cdots +5\cdot 8\ \ +3\ &amp;\sim&amp;\ 8^{\large 8^9} &amp;\sim&amp;\, 2\cdot 10^{121210686} \\ 9^{\large 9^{9+1}}\!+5\cdot 9^{9}\quad\!+5\cdot 9^5\ \: +\cdots +5\cdot 9\ \ +2\ &amp;\sim&amp;\ 9^{\large 9^{10}} &amp;\sim&amp;\, 5\cdot 10^{3327237896} \\ 10^{\large 10^{10+1}}\!\!\!+5\cdot 10^{10}\!+5\cdot 10^5\!+\cdots +5\cdot 10+1\ &amp;\sim&amp;\ 10^{\large 10^{11}}\!\!\!\! &amp;\sim&amp;\, 1\cdot 10^{100000000000} \\ \end{eqnarray}$$</span></p> <p>Nevertheless, despite numerical first impressions, one can prove that this sequence converges to <span class="math-container">$\,0.\,$</span> In other words, <span class="math-container">$\,266_k = 0\,$</span> for all sufficiently large <span class="math-container">$\,k.\,$</span> This surprising result is due to Goodstein <span class="math-container">$(1944)$</span> who actually proved the same result for <em>all</em> Goodstein sequences:</p> <p><strong>Goodstein's Theorem</strong> <span class="math-container">$\ $</span> For all <span class="math-container">$\,n\,$</span> there exists <span class="math-container">$\,k\,$</span> such that <span class="math-container">$\,n_k = 0.\,$</span> In other words, every Goodstein sequence converges to <span class="math-container">$\,0.$</span></p> <p>The secret underlying Goodstein's theorem is that hereditary expression of <span class="math-container">$\,n\,$</span> in base <span class="math-container">$\,b\,$</span> mimics an ordinal notation for all ordinals less than <a href="https://en.wikipedia.org/wiki/Epsilon_numbers_%28mathematics%29" rel="nofollow noreferrer">epsilon nought</a> <span class="math-container">$\,\varepsilon_0 = \omega^{\large \omega^{\omega^{\Large\cdot^{\cdot^\cdot}}}}\!\!\! =\, \sup \{ \omega,\, \omega^{\omega}\!,\, \omega^{\large \omega^{\omega}}\!,\, \omega^{\large \omega^{\omega^\omega}}\!,\, \dots\, \}$</span>. For such ordinals, the base bumping operation leaves the ordinal fixed, but subtraction of one decreases the ordinal. 
But these ordinals are well-ordered, which allows us to conclude that a Goodstein sequence eventually converges to zero. Goodstein actually proved his theorem for a general increasing base-bumping function <span class="math-container">$\,f:\Bbb N\to \Bbb N\,$</span> (vs. <span class="math-container">$\,f(b)=b+1\,$</span> above). He proved that convergence of all such <span class="math-container">$f$</span>-Goodstein sequences is equivalent to transfinite induction below <span class="math-container">$\,\epsilon_0.$</span></p> <p>One of the primary measures of strength for a system of logic is the size of the largest ordinal for which transfinite induction holds. It is a classical result of Gentzen that the consistency of PA (Peano Arithmetic, or formal number theory) can be proved by transfinite induction on ordinals below <span class="math-container">$\,\epsilon_0.\,$</span> But we know from Godel's second incompleteness theorem that the consistency of PA cannot be proved in PA. It follows that neither can Goodstein's theorem be proved in PA. 
Thus we have an example of a very simple concrete number theoretical statement in PA whose proof is nonetheless independent of PA.</p> <p>Another way to see that Goodstein's theorem cannot be proved in PA is to note that the sequence takes too long to terminate, e.g.</p> <p><span class="math-container">$$ 4_k\,\text{ first reaches}\,\ 0\ \,\text{for }\, k\, =\, 3\cdot(2^{402653211}\!-1)\,\sim\, 10^{121210695}$$</span></p> <p>In general, if 'for all <span class="math-container">$\,n\,$</span> there exists <span class="math-container">$\,k\,$</span> such that <span class="math-container">$\,P(n,k)$</span>' is provable, then it must be witnessed by a provably computable choice function <span class="math-container">$\,F\!:\, $</span> 'for all <span class="math-container">$\,n\!:\ P(n,F(n)).\,$</span>' But the problem is that <span class="math-container">$\,F(n)\,$</span> grows too rapidly to be provably computable in PA, see [Smo] <span class="math-container">$1980$</span> for details.</p> <p>Goodstein's theorem was one of the first examples of so-called 'natural independence phenomena', which are considered by most logicians to be more natural than the metamathematical incompleteness results first discovered by Gödel. Other finite combinatorial examples were discovered around the same time, e.g. a finite form of Ramsey's theorem, and a finite form of Kruskal's tree theorem, see [KiP], [Smo] and [Gal]. [Kip] presents the Hercules vs. Hydra game, which provides an elementary example of a finite combinatorial tree theorem (a more graphical tree-theoretic form of Goodstein's sequence).</p> <p>Kruskal's tree theorem plays a fundamental role in computer science because it is one of the main tools for showing that certain orderings on trees are well-founded. These orderings play a crucial role in proving the termination of rewrite rules and the correctness of the Knuth-Bendix equational completion procedures. 
See [Gal] for a survey of results in this area.</p> <p>See the references below for further details, especially Smorynski's papers. Start with Rucker's book if you know no logic, then move on to Smorynski's papers, and then the others, which are original research papers. For more recent work, see the references cited in Gallier, especially to Friedman's school of 'Reverse Mathematics', and see [JSL].</p> <p><strong>References</strong></p> <p>[Gal] Gallier, Jean. <a href="ftp://ftp.cis.upenn.edu/pub/papers/gallier/kruskal1.pdf" rel="nofollow noreferrer">What's so special about Kruskal's theorem and the ordinal <span class="math-container">$\Gamma_0$</span>?</a><br /> A survey of some results in proof theory,<br /> Ann. Pure and Applied Logic, 53 (1991) 199-260.</p> <p>[HFR] Harrington, L.A. et.al. (editors)<br /> <a href="https://www.sciencedirect.com/bookseries/studies-in-logic-and-the-foundations-of-mathematics/vol/117" rel="nofollow noreferrer">Harvey Friedman's Research on the Foundations of Mathematics,</a> Elsevier 1985.</p> <p>[KiP] Kirby, Laurie, and Paris, Jeff. <a href="https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.107.3303" rel="nofollow noreferrer">Accessible independence results for Peano arithmetic,</a><br /> <em>Bull. London Math. Soc.,</em> 14 (1982), 285-293.</p> <p>[JSL] <a href="https://web.archive.org/web/20140910164006/http://projecteuclid.org/DPubS?service=UI&amp;version=1.0&amp;verb=Display&amp;handle=euclid.jsl/1183742622" rel="nofollow noreferrer">The Journal of Symbolic Logic,* v. 53, no. 2, 1988</a>, <a href="https://www.jstor.org/stable/i339588" rel="nofollow noreferrer">jstor</a>, <a href="https://www.cambridge.org/core/journals/journal-of-symbolic-logic/issue/D570C6A9732ADFA7511C249AF3EF01AD" rel="nofollow noreferrer">cambridge.org</a><br /> This issue contains papers from the Symposium &quot;Hilbert's Program Sixty Years Later&quot;.</p> <p>[Kol] Kolata, Gina. 
<a href="https://www.science.org/doi/abs/10.1126/science.218.4574.779" rel="nofollow noreferrer">Does Goedel's Theorem Matter to Mathematics?</a><br /> <em>Science</em> 218 11/19/1982, 779-780; reprinted in [HFR]</p> <p>[Ruc] Rucker, Rudy. <a href="https://rads.stackoverflow.com/amzn/click/com/0691121273" rel="nofollow noreferrer" rel="nofollow noreferrer">Infinity and The Mind,</a> 1995, Princeton Univ. Press.</p> <p>[Sim] Simpson, Stephen G. <a href="https://web.archive.org/web/20210507012644/https://groups.google.com/forum/message/raw?msg=sci.math/KQ4Weqk4TmE/LE_Wfsk00H4J" rel="nofollow noreferrer">Unprovable theorems and fast-growing functions,</a><br /> <em>Contemporary Math.</em> 65 1987, 359-394.</p> <p>[Smo] Smorynski, Craig. (all three articles are reprinted in [HFR])<br /> Some rapidly growing functions, <em>Math. Intell.,</em> 2 1980, 149-154.<br /> The Varieties of Arboreal Experience, <em>Math. Intell.,</em> 4 1982, 182-188.<br /> &quot;Big&quot; News from Archimedes to Friedman, <em>Notices AMS,</em> 30 1983, 251-256.</p> <p>[Spe] Spencer, Joel. <a href="https://www.maa.org/sites/default/files/pdf/upload_library/22/Ford/Spencer669-675.pdf" rel="nofollow noreferrer">Large numbers and unprovable theorems,</a><br /> <em>Amer. Math. Monthly,</em> Dec 1983, 669-675.</p>
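<p>The base-bumping operator $B_b$ is straightforward to implement, and for very small seeds the convergence to $0$ can be watched directly. An illustrative sketch (the function names are mine; per the numbers above, do <em>not</em> try the seed $4$, let alone $266$):</p>

```python
def bump(n, b, c):
    """Rewrite n from hereditary base b to hereditary base c."""
    if n == 0:
        return 0
    total, exp = 0, 0
    while n:
        n, digit = divmod(n, b)
        if digit:
            total += digit * c ** bump(exp, b, c)   # bump the exponent too
        exp += 1
    return total

def goodstein(n):
    """Goodstein sequence n_0, n_1, ... until it first hits 0."""
    seq, base = [n], 2
    while n > 0:
        n = bump(n, base, base + 1) - 1
        base += 1
        seq.append(n)
    return seq

print(goodstein(3))   # [3, 3, 3, 2, 1, 0]
```

<p>Checking one step against the worked example: <code>bump(266, 2, 3)</code> gives $3^{3^4}+3^4+3$, which is $266_1+1$ as computed above.</p>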
<p>Gödel was able to construct a statement that says "this statement is not provable."</p> <p>The proof is something like this. First create an enumeration scheme of written documents. Then create a statement in number theory "$P(x,y,z)$", which means "if $x$ is interpreted as a computer program, and we input the value $y$, then the value $z$ is the output." (This part was quite hard, but intuitively you can see it could be done.)</p> <p>Then write a computer program that checks proofs. Creating proofs is undecidable, and it is hard to create a program to do that. But a program to check a proof can be created. Let's suppose this program becomes the literal number $n$ in our enumeration scheme. Then we can create a statement in number theory "$Q(x)$"${}={}$"$\exists y:P(n,\text{cat}(x,y),1)$". Here $\text{cat}(x,y)$ concatenates a written statement in number theory $x$ with its proof $y$. So $Q(x)$ says "$x$ is provable."</p> <p>Now construct in number theory a formula $S(x,y)$, which means take the statement enumerated by $x$, and whenever you see the symbol $x$ in it, substitute it with the literal number represented by $y$.</p> <p>Now consider the statement "$T(x)$"${}={}$"$\text{not} \ Q(S(x,x))$". Let's suppose this enumerates as the number $m$.</p> <p>Then "$T(m)$" is a statement in number theory that says "this statement is not provable."</p> <p>Now suppose "$T(m)$" is provable. Then it is true. But if it is true, then it is not provable (because that is what the statement says).</p> <p>So "$T(m)$" is clearly not provable. Hence it is true.</p> <p>I know I am missing some important technical issues. I'll answer them as best I can when they are asked. But that is the rough outline of the proof of Gödel's incompleteness theorem.</p>
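<p>The substitution step $S(x,x)$, i.e. feeding a formula its own code, is the same device that lets a program print its own source (a quine). A minimal Python illustration of that fixed-point trick (my own, not part of Gödel's construction):</p>

```python
# `template` plays the role of a statement with one free variable x;
# template.format(template) computes S(x, x): substitute the statement's
# own code for x.
template = 'template = {!r}; output = template.format(template)'
output = template.format(template)

# Executing the produced text reproduces it exactly: a fixed point of
# self-substitution, just like the self-referential statement T(m).
scope = {}
exec(output, scope)
assert scope['output'] == output
```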
differentiation
<p>Let <span class="math-container">$f:\Bbb{R}\to\Bbb{R}$</span> be a function given by</p> <p><span class="math-container">$$f(x)=\begin{cases} \exp\left(\frac{1}{x^2-1}\right) &amp; \text{if }\vert x\vert\lt 1\\ 0 &amp; \text{if }\vert x\vert\geqslant 1 \end{cases}$$</span></p> <p>I would like to prove that <span class="math-container">$f\in C^\infty$</span>, that is, <span class="math-container">$f\in C^k$</span> for all <span class="math-container">$k\in \mathbb{N}$</span>. I think that it can be done by induction on <span class="math-container">$k$</span>. If <span class="math-container">$\vert x\vert\gt1$</span>, the problem is trivial. On other points, the base case is the simplest and the only that I'm be able to do. Can someone help me?</p> <p>Thanks.</p>
<p>Do it for <span class="math-container">$$f(x)=\begin{cases}\exp\left(-\frac 1 x\right)&amp;x&gt;0\\ 0&amp;x\leq 0\end{cases}$$</span></p> <p>Note that everywhere <em>but</em> in the origin, <span class="math-container">$f$</span> is infinitely differentiable. Moreover, for <span class="math-container">$x&gt;0$</span> </p> <p><span class="math-container">$$\eqalign{ f'\left( x \right) &amp;= \frac{1}{{{x^2}}}f\left( x \right) \cr f''\left( x \right) &amp;= \left( {\frac{1}{{{x^4}}} - \frac{2}{{{x^3}}}} \right)f\left( x \right) \cr f'''\left( x \right)&amp;= \left( {\frac{1}{{{x^6}}} - \frac{6}{{{x^5}}} + \frac{6}{{{x^4}}}} \right)f\left( x \right)\cr &amp;\&amp;c \cr} $$</span></p> <p>You can thus prove inductively that for <span class="math-container">$x&gt;0$</span>, <span class="math-container">$$f^{(k)}(x)=P_{2k}(x^{-1})f(x)$$</span> where <span class="math-container">$P_{2k}$</span> is a polynomial of degree <span class="math-container">$2k$</span>.</p> <p>As <span class="math-container">$x\to 0^+$</span> this amounts to looking at <span class="math-container">$$\lim_{x\to +\infty}P(x)\exp(-x)=0$$</span> for any polynomial <span class="math-container">$P$</span>. </p> <p>So, for any <span class="math-container">$k$</span>, the limit as <span class="math-container">$x\to 0$</span> of the derivative is <span class="math-container">$0$</span>. Now we use a slightly underrated theorem</p> <blockquote> <p><strong>Theorem</strong> (Spivak) Suppose <span class="math-container">$f$</span> is continuous at <span class="math-container">$x=a$</span>, that <span class="math-container">$f'(x)$</span> exists for all <span class="math-container">$x$</span> in a neighborhood of <span class="math-container">$a$</span>. Suppose moreover that <span class="math-container">$$\lim_{x\to a}f'(x)$$</span> exists. 
Then <span class="math-container">$f'(a)$</span> exists and <span class="math-container">$$f'(a)=\lim_{x\to a}f'(x)$$</span></p> </blockquote> <p><strong>Proof</strong> By definition, <span class="math-container">$$f'(a)=\lim_{h\to 0 }\frac{f(a+h)-f(a)}h$$</span></p> <p>Consider <span class="math-container">$h&gt;0$</span>. For <span class="math-container">$h$</span> sufficiently small, <span class="math-container">$f$</span> will be continuous over <span class="math-container">$[a,a+h]$</span>, and differentiable over <span class="math-container">$(a,a+h)$</span>. Thus, by Lagrange's mean value theorem, we can find <span class="math-container">$a&lt;\alpha_h&lt;a+h$</span> such that <span class="math-container">$$\frac{f(a+h)-f(a)}h=f'(\alpha_h)$$</span></p> <p>As <span class="math-container">$h\to 0^+$</span>, <span class="math-container">$\alpha_h\to a$</span>, and since the limit exists, <span class="math-container">$$f'_+(a)=\lim_{h\to 0^+}\frac{f(a+h)-f(a)}h=\lim_{h\to 0^+}f'(\alpha_h)=\lim_{x\to a}f'(x)$$</span> The case <span class="math-container">$h&lt;0$</span> is analogous. <span class="math-container">$\blacktriangle$</span>.</p> <p>The above lets you conclude that indeed <span class="math-container">$f^{(k)}(0)=0$</span> for all <span class="math-container">$k$</span>, whence <span class="math-container">$f$</span> is <span class="math-container">$C^k$</span> for any <span class="math-container">$k$</span>. Now, note your function is <span class="math-container">$$g(x)=f(1-x^2)$$</span></p>
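<p>Numerically, the decay that kills every $P_{2k}(x^{-1})$ factor is easy to observe; a small check of my own (not part of the proof):</p>

```python
import math

def f(x):
    """exp(-1/x) for x > 0, and 0 otherwise."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

# Every derivative near 0+ looks like P(1/x) * f(x); even a large power
# of 1/x is crushed by exp(-1/x) as x -> 0+.
for k in (2, 6, 10):
    values = [x ** (-k) * f(x) for x in (0.1, 0.05, 0.02, 0.01)]
    assert values == sorted(values, reverse=True)   # decreasing toward 0
    assert values[-1] < 1e-20
```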
<p>Taking derivatives, you get expressions of the form $p(x)e^{\frac{1}{x^2 - 1}}$, where $p(x)$ is a rational function that blows up at $x = \pm 1$. What you need now is to prove the continuity of these derivatives. This, as anon suggests, turns out to be a very simple problem if you use the fact that the exponential decays faster than any rational function blows up. Thus, approaching from inside the interval, $\lim_{|x| \to 1^-}p(x)e^{\frac{1}{x^2 - 1}} = 0$ for every such $p$. This means that even at $x = \pm 1$, the critical points where the two branches need to match, every derivative of the function is continuous (in other words, $\lim_{x \to \pm 1^-} f^{(k)}(x) = \lim_{x \to \pm 1^+} f^{(k)}(x)$, and this is exactly the continuity condition you were looking for).</p> <p>I hope it helps :D</p>
number-theory
<p>The identity</p> <p>$\displaystyle (n+1) \text{lcm} \left( {n \choose 0}, {n \choose 1}, ... {n \choose n} \right) = \text{lcm}(1, 2, ... n+1)$</p> <p>is probably not well-known. The only way I know how to prove it is by using <a href="http://planetmath.org/kummerstheorem" rel="noreferrer">Kummer's theorem</a> that the power of $p$ dividing ${a+b \choose a}$ is the number of carries needed to add $a$ and $b$ in base $p$. Is there a more direct proof, e.g., by showing that each side divides the other?</p>
<p>Consider <em><a href="http://en.wikipedia.org/wiki/Leibniz_harmonic_triangle">Leibniz harmonic triangle</a></em> — a table that is like &laquo;Pascal triangle reversed&raquo;: on it's sides lie numbers $\frac{1}{n}$ and each number is the sum of two beneath it (see the <a href="http://upload.wikimedia.org/math/5/8/1/581098eb5e9213bf6c66e932ed218e08.png">picture</a>).</p> <p>One can easily proove by induction that m-th number in n-th row of Leibniz triangle is $\frac{1}{(n+1)\binom{n}{m}}$. So LHS of our identity is just lcd of fractions in n-th row of the triangle.</p> <p>But it's not hard to see that any such number is an integer linear combination of fractions on triangle's sides (i.e. $1/1,1/2,\dots,1/n$) — and vice versa. So LHS is equal to $lcd(1/1,\dots,1/n)$ — and that is exactly RHS.</p>
<p>More generally, for $0 \leq k \leq n$, there is an identity</p> <p>$(n+1) {\rm lcm} ({n \choose 0}, {n \choose 1}, \dots {n \choose k}) = {\rm lcm} (n+1,n,n-1, \dots n+1-k)$. </p> <p>This is simply the fact that any integer linear combination of $f(x), \Delta f(x), \Delta^2 f(x), \dots \Delta^k f(x)$ is an integer linear combination of $f(x), f(x-1), f(x-2), \dots f(x-k)$ where $\Delta$ is the difference operator, $f(x) = 1/x$, and $x = (n+1)$.</p>
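<p>Both the identity in the question and this generalization are easy to verify by brute force for small $n$; a quick sketch of mine:</p>

```python
from math import comb, gcd
from functools import reduce

def lcm(nums):
    """Least common multiple of an iterable of positive integers."""
    return reduce(lambda a, b: a * b // gcd(a, b), nums, 1)

for n in range(1, 30):
    for k in range(n + 1):
        lhs = (n + 1) * lcm([comb(n, j) for j in range(k + 1)])
        rhs = lcm(range(n + 1 - k, n + 2))   # lcm(n+1, n, ..., n+1-k)
        assert lhs == rhs
# k = n recovers the identity in the question: lcm(1, 2, ..., n+1) on the right
```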
logic
<p>As a physicist trying to understand the foundations of modern mathematics (in particular Model Theory) $-$ I have a hard time coping with the border between syntax and semantics. I believe a lot would become clearer for me, if I stated what I think Gödel's Completeness Theorem is about (after studying various materials including Wikipedia it seems redundant to me) and someone knowledgeable would clarify my misconceptions. So here it goes:</p> <p>As I understand, if we have a set $U$ with a particular structure (functions, relations etc.) we can interpret it (through a particular signature, e.g. the group signature $\{ e,\cdot \}$ ), as a model $\mathfrak{A}$ for a certain mathematical theory $\mathcal{T}$ (a theory being a set of axioms and its consequences). The theory is satisfied by $\mathfrak{A}$ only if $U$'s structure satisfies the axioms.</p> <p>Enter Gödel's theorem: For every first order theory $\mathcal{T}$ :</p> <p>$$\left( \exists \textrm{model } \mathfrak{A}: \mathfrak{A} \models \mathcal{T} \right) \iff \mathcal{T} \textrm{ is consistent}$$ So I'm confused. Isn't $\mathcal{T}$ being consistent a natural requirement which implies that a set $U$ with a corresponding structure always exists (because of ZFC set theory's freedom in constructing sets as we please without any concerns regarding what constitutes the set)? And doesn't that in turn always allow us to create a model $\mathfrak{A}$ with an interpretation of the signature of the theory $\mathcal{T}$ in terms of $U$'s structure?</p> <p>Where am I making mistakes? What concepts do I need to understand better in order to be able to properly comprehend this theorem and what model theory is and is not about? Please help!</p>
<p>It may help to look at things from a more general perspective. Presentations that focus on just first-order logic may obscure the fact that specific choices are implicit in the definitions of first-order logic; the general perspective highlights these choices. I want to write this up in detail, as a reference.</p> <h3>General "logics"</h3> <p>We define a particular type of general "logic" with negation. This definition is intended to be very general. In particular, it accommodates much broader types of "syntax" and "semantics" than first-order logic. </p> <p>A general "logic" will consist of:</p> <ul> <li><p>A set of "sentences" $L$. These do not have to be sentences in the sense of first-order logic, they can be any set of objects.</p></li> <li><p>A function $N: L \to L$ that assigns to each $x \in L$ a "negation" or "denial" $N(x)$.</p></li> <li><p>A set of "deductive rules", which are given as a closure operation on the powerset of $L$. So we have a function $c: 2^L \to 2^L$ such that</p> <ol> <li><p>$S \subseteq c(S)$ for each $S \subseteq L$</p></li> <li><p>$c(c(S)) = c(S)$ for each $S \subseteq L$</p></li> <li><p>If $S \subseteq S'$ then $c(S) \subseteq c(S')$. </p></li> </ol></li> <li><p>A set of "models" $M$. These do not have to be structures in the sense of first-order logic. The only assumption is that each $m \in M$ comes with a set $v_m \subseteq L$ of sentences that are "satisfied" (in some sense) by $M$: </p> <ol> <li><p>If $S \subseteq L$ and $x \in v_m$ for each $x \in S$ then $y \in v_m $ for each $y \in c(S)$</p></li> <li><p>There is no $m \in M$ and $x \in L$ with $x \in v_m$ and $N(x) \in v_m$</p></li> </ol></li> </ul> <p>The exact nature of the "sentences", "deductive rules", and "models", and the definition of a model "satisfying" a sentence are irrelevant, as long as they satisfy the axioms listed above. These axioms are compatible with both classical and intuitionistic logic. 
They are also compatible with infinitary logics such as $L_{\omega_1, \omega}$, with modal logics, and other logical systems.</p> <p>The main restriction in a general "logic" is that we have included a notion of negation or denial in the definition of a general "logic" so that we can talk about consistency.</p> <ul> <li><p>We say that a set $S \subseteq L$ is <strong>syntactically consistent</strong> if there is no $x \in L$ such that $x$ and $N(x)$ are both in $c(S)$.</p></li> <li><p>We say $S$ is <strong>semantically consistent</strong> if there is an $m \in M$ such that $x \in v_m$ for all $x \in S$. </p></li> </ul> <p>The definition of a general "logic" is designed to imply that each semantically consistent theory is syntactically consistent. </p> <h3>First-order logic as a general logic</h3> <p>To see how the definition of a general "logic" works, here is how to view first-order logic in any fixed signature as a general "logic". Fix a signature $\sigma$.</p> <ul> <li><p>$L$ will be the set of all $\sigma$-sentences. </p></li> <li><p>$N$ will take a sentence $x$ and return $\lnot x$, the canonical negation of $x$. </p></li> <li><p>$c$ will take $S \subseteq L$ and return the set of all $\sigma$-sentences provable from $S$. </p></li> <li><p>$M$ will be the set of <em>all</em> $\sigma$-structures. For each $m \in M$, $v_m$ is given by the usual Tarski definition of truth.</p></li> </ul> <p>With these definitions, syntactic consistency and semantic consistency in the general sense match up with syntactic consistency and semantic consistency as usually defined for first-order logic.</p> <h3>The completeness theorem</h3> <p>Gödel's completeness theorem simply says that, if we treat first-order logic in a fixed signature as a general "logic" (as above) then syntactic consistency is equivalent to semantic consistency. 
</p> <p>The benefit of the general perspective is that we can see how things could go wrong if we change just one part of the interpretation of first-order logic with signature $\sigma$ as a general "logic":</p> <ol> <li><p>If we were to replace $c$ with a weaker operator, syntactic consistency may not imply semantic consistency. For example, we could take $c(S) = S$ for all $S$. Then there would be syntactically consistent theories that have no model. In practical terms, making $c$ weaker means removing deduction rules.</p></li> <li><p>If we were to replace $M$ with a smaller class of models, syntactic consistency may not imply semantic consistency. For example, if we take $M$ to be just the set of <em>finite</em> $\sigma$-structures, there are syntactically consistent theories that have no model. In practical terms, making $M$ smaller means excluding some structures from consideration.</p></li> <li><p>If we were to replace $c$ with a stronger closure operator, semantic consistency may not imply syntactic consistency. For example, we could take $c(S) = L$ for all $S$. Then there would be semantically consistent theories that are syntactically inconsistent. In practical terms, making $c$ stronger means adding new deduction rules.</p></li> </ol> <p>On the other hand, some changes would preserve the equivalence of syntactic and semantic consistency. For example, if we take $M$ to be just the set of <em>finite or countable</em> $\sigma$-structures, we can still prove the corresponding completeness theorem for first-order logic. In this sense, the choice of $M$ to be the set of <em>all</em> $\sigma$-structures is arbitrary.</p> <h3>Other completeness theorems</h3> <p>We say that the "completeness theorem" for a general "logic" is the theorem that syntactic consistency is equivalent to semantic consistency in that logic.</p> <ul> <li><p>There is a natural completeness theorem for intuitionistic first-order logic. 
Here we let $c$ be the closure operator derived from any of the usual deductive systems for intuitionistic logic, and let $M$ be the set of Kripke models. </p></li> <li><p>There is a completeness theorem for second-order logic (in a fixed signature) with Henkin semantics. Here we let $c$ be the closure operator derived from the usual deductive system for second-order logic, and let $M$ be the set of Henkin models. On the other hand, if we let $M$ be the set of all "full" models, the corresponding completeness theorem fails, because this class of models is too small.</p></li> <li><p>There are similar completeness theorems for propositional and first-order modal logics using Kripke frames.</p></li> </ul> <p>In each of those three cases, the historical development began with a deductive system, and the corresponding set of models was identified later. But, in other cases, we may begin with a set of models and look for a deductive system (including, in this sense, a set of axioms) that leads to a generalized completeness theorem. This is related to a common problem in model theory, which is to determine whether a given class of structures is "axiomatizable".</p>
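<p>As a toy illustration of the definitions above, here is a hypothetical finite instance: the sentences of $L$ are classical propositional formulas over two variables, the models $M$ are the four truth assignments, and the closure operator $c$ is taken to be semantic consequence (which, by the completeness theorem for classical propositional logic, coincides with deductive closure). In this instance syntactic and semantic consistency visibly agree:</p>

```python
from itertools import product

# "Models" M: the four truth assignments to two variables p, q.
# "Sentences" L: Boolean functions of (p, q).
ASSIGNMENTS = list(product([False, True], repeat=2))

def c(S):
    # Membership test for the closure of S: a sentence is in c(S) iff it
    # holds in every model of S. (If S has no model, c(S) contains every
    # sentence, including falsum: an inconsistent set "proves" everything.)
    models_of_S = [m for m in ASSIGNMENTS if all(s(*m) for s in S)]
    return lambda x: all(x(*m) for m in models_of_S)

def syntactically_consistent(S):
    # No sentence x with both x and N(x) in c(S); equivalently, no falsum.
    falsum = lambda p, q: False
    return not c(S)(falsum)

def semantically_consistent(S):
    # Some model m satisfies every sentence of S.
    return any(all(s(*m) for s in S) for m in ASSIGNMENTS)

p = lambda p_, q_: p_
q = lambda p_, q_: q_
p_implies_q = lambda p_, q_: (not p_) or q_
not_p = lambda p_, q_: not p_

S1 = [p, p_implies_q]   # consistent; note that q lands in c(S1)
S2 = [p, not_p]         # inconsistent
print(syntactically_consistent(S1), semantically_consistent(S1))  # True True
print(syntactically_consistent(S2), semantically_consistent(S2))  # False False
print(c(S1)(q))  # True
```

<p>Note how $c(S_2)$ contains every sentence: this is exactly why syntactic consistency is phrased via the negation map $N$.</p>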
<p>The usual form of the completeness theorem is this: $ T \models \phi \implies T \vdash\phi$, or that if $\phi$ is true in all models $\mathcal{M} \models T$, then there is a proof of $\phi$ from $T$. This is a non-trivial statement: structures and models are about sets with operations and relations that satisfy sentences, while proofs ignore the underlying sets and just give rules for deriving new sentences from old. </p> <p>If you go to second order logic, this is no longer true. We can have a theory $PA$, which only has one model $\mathbb N \models PA$, but there are sentences $PA \models \phi$ with $PA \not\vdash \phi$ ("true but not provable" sentences). This follows from the incompleteness theorem which says that truth in the particular model $\mathbb N$ cannot be pinned down by proofs. The way first order logic avoids this is by the fact that a first order theory can't pin down only one model $\mathbb N \models PA$. It has to also admit non-standard models (this follows from Löwenheim-Skolem).</p> <p>This theorem, along with the soundness theorem $T\vdash \phi \implies T\models \phi$, gives a strong correspondence between the syntax and semantics of first order logic.</p> <p>Your main confusion is that consistency here is a syntactic notion, so it doesn't directly have anything to do with models. The correspondence between the usual form of the completeness theorem and your form is by using a contradiction in place of $\phi$ and taking the contrapositive. So if $T \not \vdash \bot$ ($T$ is consistent), then $T \not \models \bot$. That is, if $T$ is consistent, then there exists a model $\mathcal M \models T$ such that $\mathcal M \not \models \bot \iff \mathcal M \models \top$, but that's a tautology, so we just get that there exists a model of $T$. </p>
logic
<p>In logic, a semantics is said to be compact iff if every finite subset of a set of sentences has a model, then so to does the entire set. </p> <p>Most logic texts either don't explain the terminology, or allude to the topological property of compactness. I see an analogy as, given a topological space X and a subset of it S, S is compact iff for every open cover of S, there is a finite subcover of S. But, it doesn't seem strong enough to justify the terminology. </p> <p>Is there more to the choice of the terminology in logic than this analogy?</p>
<p>The Compactness Theorem is equivalent to the compactness of the <a href="http://en.wikipedia.org/wiki/Stone%27s_representation_theorem_for_Boolean_algebras" rel="noreferrer">Stone space</a> of the <a href="http://en.wikipedia.org/wiki/Lindenbaum%E2%80%93Tarski_algebra" rel="noreferrer">Lindenbaum–Tarski algebra</a> of the first-order language <span class="math-container">$L$</span>. (This is also the space of <a href="http://en.wikipedia.org/wiki/Type_%28model_theory%29" rel="noreferrer"><span class="math-container">$0$</span>-types</a> over the empty theory.)</p> <p>A point in the Stone space <span class="math-container">$S_L$</span> is a complete theory <span class="math-container">$T$</span> in the language <span class="math-container">$L$</span>. That is, <span class="math-container">$T$</span> is a set of sentences of <span class="math-container">$L$</span> which is closed under logical deduction and contains exactly one of <span class="math-container">$\sigma$</span> or <span class="math-container">$\lnot\sigma$</span> for every sentence <span class="math-container">$\sigma$</span> of the language. The topology on the set of types has for basis the open sets <span class="math-container">$U(\sigma) = \{T:\sigma\in T\}$</span> for every sentence <span class="math-container">$\sigma$</span> of <span class="math-container">$L$</span>. Note that these are all clopen sets since <span class="math-container">$U(\lnot\sigma)$</span> is complementary to <span class="math-container">$U(\sigma)$</span>.</p> <p>To see how the Compactness Theorem implies the compactness of <span class="math-container">$S_L$</span>, suppose the basic open sets <span class="math-container">$U(\sigma_i)$</span>, <span class="math-container">$i\in I$</span>, form a cover of <span class="math-container">$S_L$</span>. This means that every complete theory <span class="math-container">$T$</span> contains at least one of the sentences <span class="math-container">$\sigma_i$</span>. 
I claim that this cover has a finite subcover. If not, then the set <span class="math-container">$\{\lnot\sigma_i:i\in I\}$</span> is finitely consistent. By the Compactness Theorem, the set is consistent and hence (by Zorn's Lemma) is contained in a maximally consistent set <span class="math-container">$T$</span>. This theory <span class="math-container">$T$</span> is a point of the Stone space which is not contained in any <span class="math-container">$U(\sigma_i)$</span>, which contradicts our hypothesis that the <span class="math-container">$U(\sigma_i)$</span>, <span class="math-container">$i\in I$</span>, form a cover of the space.</p> <p>To see how the compactness of <span class="math-container">$S_L$</span> implies the Compactness Theorem, suppose that <span class="math-container">$\{\sigma_i:i\in I\}$</span> is an inconsistent set of sentences in <span class="math-container">$L$</span>. Then <span class="math-container">$U(\lnot\sigma_i),i\in I$</span> forms a cover of <span class="math-container">$S_L$</span>. This cover has a finite subcover, which corresponds to a finite inconsistent subset of <span class="math-container">$\{\sigma_i:i\in I\}$</span>. Therefore, every inconsistent set has a finite inconsistent subset, which is the contrapositive of the Compactness Theorem.</p>
<p>The analogy for the compactness theorem for propositional calculus is as follows. Let $p_i $ be propositional variables; together, they take values in the product space $2^{\mathbb{N}}$. Suppose we have a collection of statements $S_t$ in these Boolean variables such that every finite subset is satisfiable. Then I claim that we can prove that they are all simultaneously satisfiable by using a compactness argument.</p> <p>Let $F$ be a finite set. Then the set of all truth assignments (this is a subset of $2^{\mathbb{N}}$) which satisfy $S_t$ for $t \in F$ is a closed set $V_F$ of assignments satisfying the sentences in $F$. The intersection of any finitely many of the $V_F$ is nonempty, so by the finite intersection property, the intersection of all of them is nonempty (since the product space is compact), whence any truth assignment in this intersection satisfies all the statements.</p> <p>I don't know how this works in predicate logic.</p>
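<p>Here is a finite toy version of that argument (a sketch with hypothetical statements; with finitely many variables everything is trivially decidable, and compactness of $2^{\mathbb{N}}$ is exactly what extends the finite-intersection-property step to infinitely many variables and statements):</p>

```python
from itertools import combinations, product

# Assignments to three propositional variables stand in for points of the
# product space; each statement S_t determines the set V_t of assignments
# satisfying it (a closed set in 2^N).
SPACE = set(product([0, 1], repeat=3))

statements = [
    lambda a: a[0] == 1,               # p0
    lambda a: a[0] == 0 or a[1] == 1,  # p0 implies p1
    lambda a: a[1] == 0 or a[2] == 1,  # p1 implies p2
]
V = [{m for m in SPACE if s(m)} for s in statements]

# Every finite subfamily of the V_t has nonempty intersection ...
finite_intersections_nonempty = all(
    set.intersection(*(V[i] for i in idx))
    for r in range(1, len(V) + 1)
    for idx in combinations(range(len(V)), r)
)
# ... and so (trivially here; by compactness in general) the whole family does.
common = set.intersection(*V)
print(finite_intersections_nonempty, common)  # True {(1, 1, 1)}
```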
geometry
<p><em>Context:</em> I'm taking a course in geometry (we see affine, projective, inversive, etc., geometries) in which our basic structure is a vector space, usually $\mathbb{R}^2$. It is very convenient, and also very useful, since I can then use geometry whenever I have a vector space at hand. </p> <p>However, some of that structure is superfluous, and I'm afraid that we can prove things that are not true in the more modest axiomatic geometry (say in axiomatic Euclidean geometry versus the similar geometry in $\mathbb{R}^3$).</p> <p>My questions are thus, in the context of plane geometry in particular:</p> <ol> <li><p><em>Can we deduce, from some axiomatic geometries, an algebraic structure?</em></p></li> <li><p><em>Are some axiomatic geometries equivalent, in some way, to their more algebraic counterparts?</em></p></li> </ol> <p>(Note that by « more algebraic » geometry, I mean geometry in a vector space. The « more algebraic » counterpart of axiomatic Euclidean geometry would be geometry in $\mathbb{R}^2$ with the usual lines and points, and where we might restrict in some way the figures that we can build.)</p> <p>I think it is useful to know when the two approaches intersect, first to be able to use the more powerful tools of algebra while doing axiomatic geometry, and second to aim for greater generality.</p> <p>Another use for this type of consideration could be in the modelling of geometry in a computer (for example in an application like GeoGebra). Even though exact symbolic calculations are possible, an axiomatic formulation could be of use and maybe more economical, or otherwise we might prefer to do calculations rather than keep track of the axiomatic formulation. One of the two approaches is probably better for the computer, thus the need to be able to switch between them.</p>
<p>Hilbert's <a href="https://en.wikipedia.org/wiki/Hilbert%27s_axioms"><em>Foundations of Geometry</em></a> did more or less precisely what you are asking for. Starting from an extension of Euclid's Axioms, Hilbert proves that any model of the axioms is isomorphic to $\mathbb{R}^2$ with the usual definition of line. </p> <p>Later, Tarski gave a first-order axiomatization of plane geometry. Because of built in restrictions in the first-order approach, one cannot get isomorphism with the natural geometry of $\mathbb{R}^2$. But one can get isomorphism to the natural geometry of $F^2$, for some real-closed field $F$.</p>
<p>I'll be repeating some stuff that has already been said, but I have my own spin on it.</p> <blockquote> <p>Can we deduce, from some axiomatic geometries, an algebraic structure?</p> </blockquote> <p>Yes. As you can find in Hartshorne's book, or in Hilbert's <em>Foundations</em>, the idea is an "algebra of segments" for any ordered Desarguesian plane in which you construct an Archimedean, ordered division ring $D$ with certain geometric operations such that the points and lines in $D\times D$ are coordinatized in exactly the way we're taught for $\Bbb R \times \Bbb R$. So, they are a natural starting point for ordered geometry. </p> <p>Now, every Archimedean, ordered division ring embeds in $\Bbb R$. Since $\Bbb R$ is also an Archimedean, ordered field, you can see it is the maximal such field for such a geometry. In fact, needing the reals to coordinatize a plane is equivalent to a very strong "completeness" of lines in that geometry. <strong>This completeness/maximality property makes it very attractive to study geometry with it.</strong> </p> <p>Ordered geometry is great, but reexamination of the ideas shows that division rings in general are exactly what you need to coordinatize Desarguesian planes. They imbue their lines with exactly the translation and scaling behavior one would expect in a geometry according to the Erlangen program.</p> <blockquote> <p>Are some axiomatic geometries equivalent, in some way, to their more algebraic counterparts?</p> </blockquote> <p>Let me continue briefly along the lines above. The amazing theorem that a Desarguesian plane is Pappian iff the division ring is commutative has already been mentioned. There are also theorems about the equivalence of certain types of fields and constructibility criteria in the geometry. </p> <p>Desarguesian projective planes enjoy the same coordinatization theorem with division rings. Hyperbolic planes require one more unique axiom before they can be coordinatized by a division ring. 
The division ring you get is an analogue of the "algebra of segments" called the "field of ends". <strong>So as far as these theorems go, you have a really strong connection between synthetic geometry and analytic geometry. Geometry with vector spaces over division ring captures a large part, but not the whole of synthetic geometries.</strong></p> <p>In fact, there are even more general coordinatization theorems for projective planes using 'ternary rings'. As you add more special properties to the projective plane, the ternary ring gets closer to being a division ring.</p> <h2>More posts you may like:</h2> <p><a href="https://math.stackexchange.com/q/1066176/29335">Why are every structures I study based on Real number?</a></p> <p><a href="https://math.stackexchange.com/q/875959/29335">Geometries (Euclidean and Projective)</a></p> <p><a href="https://math.stackexchange.com/q/907781/29335">Main theorem of Pythagorean plane</a></p> <p><a href="https://math.stackexchange.com/q/733874/29335">which axiom(s) are behind the Pythagorean Theorem</a></p>
probability
<blockquote> <p>You play a game where a fair coin is flipped. You win 1 if it shows heads and lose 1 if it shows tails. You start with 5 and decide to play until you either have 20 or go broke. What is the probability that you will go broke?</p> </blockquote>
<p>You can use symmetry here - Starting at $5$, it is equally likely to get to $0$ first or to $10$ first. Now, if you get to $10$ first, then it is equally likely to get to $0$ first or to $20$ first.</p> <p>What does that mean for the probability of getting to $0$ before getting to $20$?</p>
<p>It is a fair game, so your expected value at the end has to be $5$ like you started. You must have $\frac 34$ chance to go broke and $\frac 14$ chance to end with $20$.</p>
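<p>Both arguments give $P(\text{broke}) = 1 - \frac{5}{20} = \frac{3}{4}$; a quick Monte Carlo sketch agrees with this:</p>

```python
import random

def ruin_probability(start=5, target=20, trials=50_000, seed=1):
    # Monte Carlo estimate of P(reach 0 before `target`) for a fair +-1 walk.
    rng = random.Random(seed)
    broke = 0
    for _ in range(trials):
        x = start
        while 0 < x < target:
            x += 1 if rng.random() < 0.5 else -1
        broke += (x == 0)
    return broke / trials

print(ruin_probability())  # close to 1 - 5/20 = 0.75
```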
logic
<p>maybe this question doesn't make sense at all. I don't know exactly the meaning of all these concepts, except the internal language of a topos (and searching on the literature is not helping at all). </p> <p>However, vaguely speaking by a logic I mean a pair $(\Sigma, \vdash)$ where $\Sigma$ is a signature (it has the types, functionals and relational symbols) and a consequence operator $\vdash$. </p> <p>By an internal logic in a category I mean viewing each object as a type and each morphism as a term. </p> <p>So what's the difference between a logic, an internal logic (language) of a category, an internal logic (language) of a topos and a type theory? Furthermore why the interchanging in the literature between the word "logic" and "language" when dealing with the internal properties of a category? Moreover, how higher order logic, modal logic and fuzzy logic suit in these concepts stated above?</p> <p>Thanks in advance.</p>
<p>The internal logic of a topos is an instance of the internal logic of a category (since toposes are special kinds of categories). The internal logic of toposes (instead of an arbitrary category) can also be interpreted with the Kripke-Joyal semantics. (<strong>Update almost ten years later:</strong> A form of the Kripke-Joyal semantics exists also for general categories which are not toposes.) For more on this, check part D of Johnstone's <em>Elephant</em> and chapter VI of Mac Lane's and Moerdijk's <em>Sheaves in Geometry and Logic</em>, <a href="http://www.mathematik.tu-darmstadt.de/%7Estreicher/CTCL.pdf" rel="nofollow noreferrer">lecture notes by Thomas Streicher</a>, and of course the nLab articles on these matters.</p> <p>I don't know the term &quot;internal logic of a type theory&quot;. But check the (very accessible) introduction of the <a href="http://homotopytypetheory.org/book/" rel="nofollow noreferrer">HoTT book</a> on how type theory and logic are related.</p> <p>The terms &quot;internal logic&quot; and &quot;internal language&quot; are often used synonymously. Personally, I prefer &quot;internal language&quot;, since this stresses that one can use it not only to <em>reason</em> internally, but also to <em>construct objects and morphisms</em> internally.</p> <p>The internal language of a topos <span class="math-container">$\mathcal{E}$</span> is higher-order in the sense that, because of the existence of a subobject classifier, every object <span class="math-container">$X \in \mathcal{E}$</span> has an associated <a href="http://ncatlab.org/nlab/show/power+object" rel="nofollow noreferrer">power object</a> <span class="math-container">$\mathcal{P}(X) \in \mathcal{E}$</span> which one can quantify over in the internal language. 
(In the special case <span class="math-container">$\mathcal{E} = \mathrm{Set}$</span>, the internal language is really the same as the usual mathematical language and <span class="math-container">$\mathcal{P}(X)$</span> is simply the power set of <span class="math-container">$X$</span>.) In an arbitrary category, power objects need not exist, such that their internal language is (at best) first-order. Generally, richer categorical properties allow you to interpret greater fragments of first-order logic, this is neatly explained in Johnstone's part D.</p> <p>Any <a href="http://ncatlab.org/nlab/show/Lawvere-Tierney+topology" rel="nofollow noreferrer">Lawvere-Tierney topology</a> in a topos gives rise to a modal operator in its associated internal language. (These operators can have concrete geometric meanings, for instance &quot;on a dense open set it holds that&quot; or &quot;on an open neighbourhood of a point <span class="math-container">$x$</span> it holds that&quot;.)</p> <p>I don't know of a direct relationship between fuzzy logic and the internal language of toposes, see <a href="https://math.stackexchange.com/questions/55957/fuzzy-logic-and-topos-theory">an older question here</a>.</p>
<p>Your definition of logic is pretty much correct. A logic contains both the <em>language</em> which the signature <span class="math-container">$\Sigma$</span> generates and the deductive system defined by <span class="math-container">$\vdash$</span>.</p> <p>A <em>type theory</em> is a logic with different sorts of individuals (called "types") and constructions that generate new types from existing ones, like product and arrow types.</p> <p>An <em>internal logic</em> is a type theory derived from a category and the <em>internal language</em> is the language part of that logic. Specifically, the atomic sorts of the internal language are the objects of the category. Since a topos is a specific category of categories, the internal logic of a topos is the derived type theory.</p> <p>The modalities of modal logic can sometimes be related to operators on subobjects in a category, but only if they preserve logical equivalence: <span class="math-container">$\alpha\iff\beta$</span> should imply <span class="math-container">$\Box\alpha \iff \Box \beta$</span>. Otherwise, the induced function of subobjects is not well-defined.</p> <p>Fuzzy logic is misnamed. It is more like a model theory than a logic.</p>
probability
<blockquote> <p>If a $1$ meter rope is cut at two uniformly randomly chosen points (to give three pieces), what is the average length of the smallest piece?</p> </blockquote> <p>I got this question as a mathematical puzzle from a friend. It looks similar to MathOverflow question <a href="https://mathoverflow.net/q/2014/">If you break a stick at two points chosen uniformly, the probability the three resulting sticks form a triangle is 1/4.</a></p> <p>However, in this case, I have to find the expected length of the smallest segment. The two points where the rope is cut are selected uniformly at random. </p> <p>I tried simulating it and I got an average value of $0.1114$. I suspect the answer is $1/9$ but I don't have any rigorous math to back it up.</p> <p>How do I solve this problem?</p>
<p>Update: The current version of this answer is more intuitive (IMHO) than the previous one. See also <a href="https://math.stackexchange.com/questions/14190/average-length-of-the-longest-segment/14194#14194">this similar answer</a> to a similar question.</p> <p>The generalization of this problem to <span class="math-container">$n$</span> pieces is discussed extensively in David and Nagaraja's <em><a href="http://books.google.com/books?id=bdhzFXg6xFkC&amp;printsec=frontcover&amp;dq=david+and+nagaraja+order+statistics&amp;source=bl&amp;ots=OaK_k-BbTf&amp;sig=vAtEAURNfZTe3NDV9WodQZ-3ip4&amp;hl=en&amp;ei=DfsDTdiXPML88AafsKnqAg&amp;sa=X&amp;oi=book_result&amp;ct=result&amp;resnum=1&amp;ved=0CBgQ6AEwAA#v=onepage&amp;q&amp;f=false" rel="noreferrer">Order Statistics</a></em> (pp. 133-135, and p. 153). </p> <p>If <span class="math-container">$X_1, X_2, \ldots, X_{n-1}$</span> denote the positions on the rope where the cuts are made, let <span class="math-container">$V_i = X_i - X_{i-1}$</span>, where <span class="math-container">$X_0 = 0$</span> and <span class="math-container">$X_n = 1$</span>. So the <span class="math-container">$V_i$</span>'s are the lengths of the pieces of rope. 
<BR><BR> The key idea is that the probability that any particular <span class="math-container">$k$</span> of the <span class="math-container">$V_i$</span>'s simultaneously have lengths longer than <span class="math-container">$c_1, c_2, \ldots, c_k$</span>, respectively (where <span class="math-container">$\sum_{i=1}^k c_i \leq 1$</span>), is <span class="math-container">$$(1-c_1-c_2-\ldots-c_k)^{n-1}.$$</span> This is proved formally in David and Nagaraja's <em><a href="http://books.google.com/books?id=bdhzFXg6xFkC&amp;pg=PA133&amp;lpg=PA133&amp;dq=david+and+nagaraja+order+statistics+division+of+random+interval&amp;source=bl&amp;ots=OaK_l7B9-j&amp;sig=por_GsntbxBDII72xTWISQ9Keas&amp;hl=en&amp;ei=urkGTevVIZTmsQOM_7HsBw&amp;sa=X&amp;oi=book_result&amp;ct=result&amp;resnum=1&amp;ved=0CBMQ6AEwAA#v=onepage&amp;q&amp;f=false" rel="noreferrer">Order Statistics</a></em>, p. 135. Intuitively, the idea is that in order to have pieces of size at least <span class="math-container">$c_1, c_2, \ldots, c_k$</span>, all <span class="math-container">$n-1$</span> of the cuts have to occur in intervals of the rope of total length <span class="math-container">$1 - c_1 - c_2 - \ldots - c_k$</span>. For example, <span class="math-container">$P(V_1 &gt; c_1)$</span> is the probability that all <span class="math-container">$n-1$</span> cuts occur in the interval <span class="math-container">$(c_1, 1]$</span>, which, since the cuts are randomly distributed in <span class="math-container">$[0,1]$</span>, is <span class="math-container">$(1-c_1)^{n-1}$</span>. 
<BR><BR> If <span class="math-container">$V_{(1)}$</span> denotes the shortest piece of rope, then for <span class="math-container">$x \leq \frac{1}{n}$</span>, (following Raskolnikov's comment) <span class="math-container">$$P(V_{(1)} &gt; x) = P(V_1 &gt; x, V_2 &gt; x, \ldots, V_n &gt; x) = (1 - nx)^{n-1}.$$</span></p> <p>Therefore, <span class="math-container">$$E[V_{(1)}] = \int_0^{\infty} P(V_{(1)} &gt; x) dx = \int_0^{1/n} (1-nx)^{n-1} dx = \frac{1}{n^2}.$$</span></p> <p>David and Nagaraja also give the formula Yuval Filmus mentions (as Problem 6.4.2):</p> <p><span class="math-container">$$E[V_{(r)}] = \frac{1}{n} \sum_{j=1}^r \frac{1}{n-j+1}.$$</span></p>
<p>My approach is maybe more naive than the others posted.</p> <p>Break the unit interval at $x$ and $y$ where $x &lt; y$. Our lengths are then $x$, $y - x$, and $1 - y$. It's not hard to show that they all have probability $1/3$ of being the shortest. In any case, our joint PDF is given by $f(x,y) = 6$ (since $x$ and $y$ remain uniform random variables on $1/6$th of the square $[0,1] \times [0,1]$). Each triangle in the diagram below corresponds to the domain of the PDF for one of the three cases.</p> <p><img src="https://i.sstatic.net/obVJs.png" alt="Each triangle corresponds to which length is shortest"></p> <p>I'll take care of the case when $x$ is shortest, that is, $x \leq y - x$ and $x \leq 1 - y$. This is the leftmost triangle. Since we're assuming $x$ is least, we are looking for $$E[x] = \int_0^{1/3} \int_{2x}^{1 - x} 6x \;dy \;dx = 1/9$$</p> <p>The cases when $y - x$ and $1 - y$ are shortest are similar.</p>
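<p>Both derivations match a direct simulation like the one mentioned in the question; a minimal Monte Carlo sketch:</p>

```python
import random

def avg_smallest_piece(n_pieces=3, trials=200_000, seed=0):
    # Cut [0, 1] at (n_pieces - 1) uniform points; average the smallest piece.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        cuts = sorted(rng.random() for _ in range(n_pieces - 1))
        pieces = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
        total += min(pieces)
    return total / trials

print(avg_smallest_piece(3))  # close to 1/9, i.e. 1/n^2 with n = 3
print(avg_smallest_piece(4))  # close to 1/16
```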
linear-algebra
<p>After looking in my book for a couple of hours, I'm still confused about what it means for a $(n\times n)$-matrix $A$ to have a determinant equal to zero, $\det(A)=0$.</p> <p>I hope someone can explain this to me in plain English.</p>
<p>For an $n\times n$ matrix, each of the following is equivalent to the condition of the matrix having determinant $0$:</p> <ul> <li><p>The columns of the matrix are dependent vectors in $\mathbb R^n$</p></li> <li><p>The rows of the matrix are dependent vectors in $\mathbb R^n$</p></li> <li><p>The matrix is not invertible. </p></li> <li><p>The volume of the parallelepiped determined by the column vectors of the matrix is $0$.</p></li> <li><p>The volume of the parallelepiped determined by the row vectors of the matrix is $0$.</p></li> <li><p>The system of homogeneous linear equations represented by the matrix has a non-trivial solution. </p></li> <li><p>The determinant of the linear transformation determined by the matrix is $0$. </p></li> <li><p>The free coefficient in the characteristic polynomial of the matrix is $0$. </p></li> </ul> <p>Depending on the definition of the determinant you saw, proving each equivalence can be more or less hard.</p>
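<p>These equivalences are easy to poke at numerically. A minimal sketch with a hand-rolled $3 \times 3$ determinant (pure Python; the particular matrices are arbitrary choices): rows satisfying a linear relation force the determinant to $0$, while an invertible matrix gives a nonzero value.</p>

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

dependent = [[1, 2, 3],
             [4, 5, 6],
             [5, 7, 9]]   # row 3 = row 1 + row 2, so the rows are dependent
invertible = [[2, 0, 1],
              [1, 3, 0],
              [0, 1, 1]]

print(det3(dependent))    # 0
print(det3(invertible))   # 7, nonzero: this matrix is invertible
```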
<p>For me, this is the most intuitive video on the web that explains determinants, and everyone who wants a deep and visual understanding of this topic should watch it:</p> <p><a href="https://www.youtube.com/watch?v=Ip3X9LOh2dk" rel="noreferrer">The determinant by 3Blue1Brown</a></p> <p>The whole playlist is available at this link:</p> <p><a href="https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab" rel="noreferrer">Essence of linear algebra by 3Blue1Brown</a></p> <p>The crucial part of the series is "Linear transformations and matrices". If you understand that well, everything else will be a piece of cake. Literally: plain English + visual.</p>
logic
<h2>Uncomputable functions: Intro</h2> <p>For the last month I have been going down the rabbit hole of googology (the mathematical study of large numbers) in my free time. I am still trying to wrap my head around the seeming paradox of the existence of natural numbers that are <strong>well-defined</strong> but <strong>uncomputable</strong> (in the sense that it has been proven that they can never be calculated by a human / a Turing machine). Let me give two of the most famous examples:</p> <p><strong>Busy beaver function <span class="math-container">$\Sigma(n,m)$</span></strong></p> <p><span class="math-container">$\Sigma(n,m)$</span> &quot;is defined as the maximum number of non-blank symbols that can be written (in the finished tape) with an <span class="math-container">$n$</span>-state, <span class="math-container">$m$</span>-color halting Turing machine starting from a blank tape before halting.&quot; It has been shown that <span class="math-container">$\Sigma$</span> grows faster than all computable functions and, thus, is uncomputable. Calculating <span class="math-container">$\Sigma$</span> for sufficiently large inputs would require an oracle Turing machine as it would literally be a solution to the halting problem. Thus, it is uncomputable, although the formulation of <span class="math-container">$\Sigma$</span> in set theory is precise and clear. 
<a href="https://googology.fandom.com/wiki/Busy_beaver_function" rel="noreferrer">More details here.</a></p> <p><strong>Rayo's number <span class="math-container">$\text{Rayo}\left(10^{100}\right)$</span></strong></p> <p>Rayo's number was the record holder in the googology community for a long time and it is defined as &quot;the smallest positive integer bigger than any finite positive integer named by an expression in the language of first-order set theory with googol symbols or less.&quot; It is defined in the language of an (unspecified) second-order set theory <a href="https://googology.fandom.com/wiki/Rayo%27s_number#Definition" rel="noreferrer">here</a>. (Its well-definedness is thus a bit controversial, but it would outgrow <span class="math-container">$\Sigma$</span> by a huge margin if resolved.)</p> <h2>My mathematical / existential questions</h2> <ul> <li><p>Does a number like <span class="math-container">$x=\Sigma\left(10^{100},10^{100}\right)$</span> &quot;exist&quot; in set theory in the same sense as the number <span class="math-container">$4$</span>? Does it even make sense to include it in a mathematical operation like <span class="math-container">$(x$</span> mod <span class="math-container">$4)$</span> or <span class="math-container">$x^x$</span> if we cannot even write it down in a decimal expansion?</p> </li> <li><p>I am well aware of Gödel's incompleteness theorems and the existence of unprovable statements like the continuum hypothesis, which can neither be proven nor disproven by ZFC axioms in any finite number of steps. Is there some parallel between that and the existence of numbers that cannot be computed in any finite amount of time?</p> </li> <li><p>Is there some version of mathematics or system of axioms which resolves this problem? (i.e. where well-definedness of an object is equivalent to computability?)</p> </li> </ul> <p>I would be very happy if anyone could answer or point me in the right direction.</p>
<p>Replying to your three mathematical/existential questions in order:</p> <ul> <li>Yes. There exists a TM that, when started on a blank tape, eventually halts (after finitely many steps) with the exact decimal expansion of <span class="math-container">$x=\Sigma\left(10^{100},10^{100}\right)$</span> on the tape. The same is true for <span class="math-container">$x\bmod 4$</span> or <span class="math-container">$x^x$</span> <em>or any other natural number</em>.</li> <li>There does not exist a natural number &quot;that cannot be computed in any finite amount of time&quot;.</li> <li>There is no such problem for natural numbers.</li> </ul> <p>On the other hand, <em>provability</em> is another kettle o' fish: <a href="https://math.stackexchange.com/q/3854667/16397">There is no natural number <span class="math-container">$n$</span> such that <strong>ZFC proves</strong> <span class="math-container">$\ BB(7918)=n,$</span></a> and more recently we have that <a href="https://www.scottaaronson.com/papers/bb.pdf" rel="noreferrer">for any <span class="math-container">$m\ge 748,$</span> there is no natural number <span class="math-container">$n$</span> such that ZFC proves <span class="math-container">$BB(m)=n.$</span></a> (Here <span class="math-container">$BB$</span> is the Busy Beaver function w.r.t. the number of steps taken before halting.)</p> <hr /> <p><strong>NB</strong>: As your questions (and your entire Intro) seemed to be about <strong>natural numbers</strong>, that is the context of my replies above. The situation is quite different in the larger context of <strong>real numbers</strong>. Note that every natural number has a <em>finite</em> representation, which is the basic reason it is computable. In contrast, a real number typically requires an <em>infinite</em> representation, which opens the <em>possibility</em> of not being computable. (It turns out that almost all reals are not computable.)</p>
<p>My view is that questions such as these arise from a failure to distinguish between intension (a description of a thing, an arithmetic expression, program source code, etc.) and extension (the thing described, the result of evaluating the expression, the observable behaviour of the compiled program, etc.). Maybe the following &quot;theorem&quot; will throw the difference into sharp relief.</p> <p><strong>Theorem.</strong> Every natural number is computable.</p> <p><em>Proof.</em> We must show that for every natural number <span class="math-container">$n$</span>, there is a Turing machine whose output is (say) a unary representation of <span class="math-container">$n$</span>. We proceed by induction. There is certainly a Turing machine whose output is empty, i.e. a representation of <span class="math-container">$0$</span>. If we have a Turing machine whose output is a unary representation of <span class="math-container">$n$</span> it is clear that we can modify it to output just one more unit, so as to obtain a unary representation of <span class="math-container">$n + 1$</span>. Hence every natural number is indeed computable. ◼</p> <p>There is nothing wrong with the proof above, but nonetheless you may feel unable to accept the conclusion. The only way out is to conclude that there is a problem with the interpretation of the statement of the theorem. What this is actually proving is that every natural-number-in-extension is computable. This proof has nothing to do with natural-numbers-in-intension.</p> <p>In order to say anything mathematically rigorous about natural-numbers-in-intension we must first choose a mathematical model for &quot;describing natural numbers&quot;. Unfortunately, there is no canonical choice, and different choices have different expressive power. If you choose to define &quot;description of a natural number&quot; as &quot;Turing machine that computes it&quot;, then tautologically every natural-number-in-intension is computable. 
But you could also choose to define it as &quot;formula <span class="math-container">$\phi$</span> in the language of set theory with at least one free variable <span class="math-container">$n$</span> such that ZFC proves <span class="math-container">$\exists ! n \in \mathbb{N} . \phi$</span>&quot;. In this case it is true that not every natural-number-in-intension is computable – or, to put it another way, there is no procedure (computable or otherwise!) that will convert the formula <span class="math-container">$\phi$</span> into a Turing machine that (ZFC proves) computes the unique natural number that <span class="math-container">$\phi$</span> describes.</p>
probability
<p>In one of his interviews, <a href="https://www.youtube.com/shorts/-qvC0ISkp1k" rel="noreferrer">Clip Link</a>, Neil DeGrasse Tyson discusses a coin toss experiment. It goes something like this:</p> <ol> <li>Line up 1000 people, each given a coin, to be flipped simultaneously</li> <li>Ask each one to flip; if <strong>heads</strong>, the person can continue</li> <li>If the person gets <strong>tails</strong> they are out</li> <li>The game continues until <strong>1*</strong> person remains</li> </ol> <p>He says the &quot;winner&quot; should not feel too surprised or lucky because there would be another winner if we re-ran the experiment! This leads him to talk about our place in the Universe.</p> <p>I realised, however, that there need <strong>not be a winner at all</strong>, and that the winner should feel lucky and be surprised! (Because the last, say, three people can all flip tails)</p> <p>Then, I ran an experiment by writing a program with the following parameters:</p> <ol> <li><strong>Bias of the coin</strong>: 0.0001 - 0.8999 (8999 values)</li> <li><strong>Number of people</strong>: 10000</li> <li><strong>Number of times experiment run per Bias</strong>: 1000</li> </ol> <p>I plotted the <strong>Probability of 1 Winner</strong> vs <strong>Bias</strong></p> <p><a href="https://i.sstatic.net/iVUuXvZj.png" rel="noreferrer"><img src="https://i.sstatic.net/iVUuXvZj.png" alt="enter image description here" /></a></p> <p>The plot was interesting: a <strong>zig-zag</strong> for low bias (for heads) and a smooth curve after <strong>p = 0.2</strong>. 
(Also, there is a 73% chance of a single winner for a fair coin).</p> <p><strong>Is there an analytic expression for the function <span class="math-container">$$f(p) = (\textrm{probability of $1$ winner with a coin of bias $p$}) \textbf{?}$$</span></strong></p> <p>I tried doing something and got here: <span class="math-container">$$ f(p)=P\left(\sum_{i=0}^{\text{end}} X_i=N-1\right) $$</span> where <span class="math-container">$X_i=\operatorname{Binomial}\left(N-\sum_{j=0}^{i-1} X_j, p\right)$</span> and <span class="math-container">$X_0=\operatorname{Binomial}(N, p)$</span></p>
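<p>For reference, the experiment described above can be sketched in a few lines of Python (the trial count and seed are arbitrary choices; a run counts as having a unique winner when exactly one player remains just before everyone is eliminated):</p>

```python
import random

def unique_winner(n, p, rng):
    """One run: n players flip a p-biased coin each round; heads stay in.
    Returns True if the last nonzero group of survivors had exactly one player."""
    alive = n
    prev = alive
    while alive > 0:
        prev = alive
        alive = sum(rng.random() < p for _ in range(alive))
    return prev == 1

rng = random.Random(0)
trials = 1500
f_est = sum(unique_winner(1000, 0.5, rng) for _ in range(trials)) / trials
print(f_est)   # about 0.72 for a fair coin, matching the ~73% observed above
```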
<p>It is known (to a nonempty set of humans) that when <span class="math-container">$p=\frac12$</span>, there is no limiting probability. Presumably the analysis can be (might have been) extended to other values of <span class="math-container">$p$</span>. Even more surprisingly, the reason I know this is because it ends up having an application in number theory! In any case, a reference for this limit's nonexistence is <a href="https://www.kurims.kyoto-u.ac.jp/%7Ekyodo/kokyuroku/contents/pdf/1274-9.pdf" rel="noreferrer">Primitive roots: a survey</a> by Li and Pomerance (see the section &quot;The source of the oscillation&quot; starting on page 79). As the number of coins increases to infinity, the probability of winning (when <span class="math-container">$p=\frac12$</span>) oscillates between about <span class="math-container">$0.72134039$</span> and about <span class="math-container">$0.72135465$</span>, a difference of about <span class="math-container">$1.4\times10^{-5}$</span>.</p>
<p>There is a pretty simple formula for the probability of a unique winner, although it involves an infinite sum. To derive the formula, suppose that there are <span class="math-container">$n$</span> people, and that you continue tossing until everyone is out, since they all got tails. Then you want the probability that at the last time before everyone was out, there was just one person left. If <span class="math-container">$p$</span> is the probability that the coin-flip is heads, to find the probability that this happens after <span class="math-container">$k+1$</span> steps with just person <span class="math-container">$i$</span> left, you can multiply the probability that person <span class="math-container">$i$</span> survives for <span class="math-container">$k$</span> steps, which is <span class="math-container">$p^k$</span>, by the probability that he is out on the <span class="math-container">$k+1^{\rm st}$</span> step, which is <span class="math-container">$1-p$</span>. You also have to multiply this by the probability that none of the other <span class="math-container">$n-1$</span> people survive for <span class="math-container">$k$</span> steps, which is <span class="math-container">$1-p^k$</span> for each of them. 
Multiplying these probabilities together gives <span class="math-container">$$ p^k (1-p) (1-p^k)^{n-1} $$</span> and summing over the <span class="math-container">$n$</span> possible choices for <span class="math-container">$i$</span> and all <span class="math-container">$k$</span> gives <span class="math-container">$$ f(p)= (1-p)\sum_{k\ge 1} n p^k (1 - p^k)^{n-1}.\qquad\qquad(*) $$</span> (I am assuming that <span class="math-container">$n&gt;1$</span>, so <span class="math-container">$k=0$</span> is impossible.)</p> <p>Now, the summand in (*) can be approximated by <span class="math-container">$n p^k \exp(-n p^k)$</span>, so if <span class="math-container">$n=p^{-L-\epsilon}$</span>, <span class="math-container">$L\ge 0$</span> large and integral, <span class="math-container">$0\le \epsilon \le1$</span>, <span class="math-container">$f(p)$</span> will be about <span class="math-container">$$ (1-p) \sum_{j\ge 1-L} p^{j-\epsilon} \exp(-p^{j-\epsilon}) $$</span> and we can further approximate this by summing over all integers: if <span class="math-container">$L$</span> becomes large and <span class="math-container">$\epsilon$</span> approaches some <span class="math-container">$0\le \delta \le 1$</span>, <span class="math-container">$f(p)$</span> will approach <span class="math-container">$$ g(\delta):=(1-p)\sum_{j\in\Bbb Z} p^{j-\delta} \exp(-p^{j-\delta}). $$</span> The average of this over <span class="math-container">$\delta$</span> has the simple formula <span class="math-container">$$ \int_0^1 g(\delta) d\delta = (1-p)\int_{\Bbb R} p^x \exp(-p^x) dx = -\frac{1-p}{\log p}, $$</span> which is <span class="math-container">$1/(2 \log 2)\approx 0.72134752$</span> if <span class="math-container">$p=\frac 1 2$</span>, but as others have pointed out, <span class="math-container">$g(\delta)$</span> oscillates, so the large-<span class="math-container">$n$</span> limit for <span class="math-container">$f(p)$</span> will not exist. 
You can expand <span class="math-container">$g$</span> in Fourier series to get <span class="math-container">$$ g(\delta)=-\frac{1-p}{\log p}\left(1+2\sum_{n\ge 1} \Re\left(e^{2\pi i n \delta} \,\,\Gamma(1 + \frac{2\pi i n}{\log p})\right) \right). $$</span> Since <span class="math-container">$\Gamma(1+ri)$</span> falls off exponentially as <span class="math-container">$|r|$</span> becomes large, the peak-to-peak amplitude of the largest oscillation will be <span class="math-container">$$ h(p):=-\frac{4(1-p)}{\log p} \left|\Gamma(1+\frac{2\pi i}{\log p})\right|, $$</span> which, as has already been pointed out, is <span class="math-container">$\approx 1.426\cdot 10^{-5}$</span> for <span class="math-container">$p=1/2$</span>. For some smaller <span class="math-container">$p$</span> it will be larger, although for very small <span class="math-container">$p$</span>, it will decrease as the value of the gamma function approaches 1 and <span class="math-container">$|\log p|$</span> increases. This doesn't mean that the overall oscillation disappears, though, since other terms in the Fourier series will become significant. To illustrate this, here are some graphs of <span class="math-container">$g(\delta)$</span>. From top to bottom, the <span class="math-container">$p$</span> values are <span class="math-container">$0.9$</span>, <span class="math-container">$0.5$</span>, <span class="math-container">$0.2$</span>, <span class="math-container">$0.1$</span>, <span class="math-container">$10^{-3}$</span>, <span class="math-container">$10^{-6}$</span>, <span class="math-container">$10^{-12}$</span>, and <span class="math-container">$10^{-24}$</span>. As <span class="math-container">$p$</span> becomes small, <span class="math-container">$$ g(0)=(1-p)\sum_{j\in\Bbb Z} p^{j} \exp(-p^{j}) \ \ \qquad {\rm approaches} \ \ \qquad p^0 \exp(-p^0) = \frac 1 e. 
$$</span></p> <p><a href="https://i.sstatic.net/KX5RNjGy.png" rel="noreferrer"><img src="https://i.sstatic.net/KX5RNjGy.png" alt="Graphs of g(delta)" /></a></p> <p><strong>References</strong></p> <ul> <li><a href="https://research.tue.nl/en/publications/on-the-number-of-maxima-in-a-discrete-sample" rel="noreferrer">&quot;On the number of maxima in a discrete sample&quot;, J. J. A. M. Brands, F. W. Steutel, and R. J. G. Wilms, Memorandum COSOR 92-16, 1992, Technische Universiteit Eindhoven.</a></li> <li><a href="https://doi.org/10.1214/aoap/1177005360" rel="noreferrer">&quot;The Asymptotic Probability of a Tie for First Place&quot;, Bennett Eisenberg, Gilbert Stengle, and Gilbert Strang, <em>The Annals of Applied Probability</em> <strong>3</strong>, #3 (August 1993), pp. 731 - 745.</a></li> <li><a href="https://www.jstor.org/stable/2325134" rel="noreferrer">Problem E3436, &quot;Tossing Coins Until All Show Heads&quot;, solution, Lennart Råde, Peter Griffin, O. P. Lossers, <em>The American Mathematical Monthly</em>, <strong>101</strong>, #1 (January 1994), pp. 78-80.</a></li> </ul>
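<p>Formula (*) above is also easy to evaluate numerically (the truncation point of the sum is an arbitrary choice; the tail decays geometrically). For <span class="math-container">$p=\tfrac12$</span> and <span class="math-container">$n=1000$</span> the value sits within the tiny oscillation band around <span class="math-container">$1/(2\log 2)$</span>:</p>

```python
from math import log

def f(p, n, kmax=500):
    """Probability of a unique winner via (*): (1-p) * sum_k n p^k (1-p^k)^(n-1)."""
    return (1 - p) * sum(n * p**k * (1 - p**k)**(n - 1) for k in range(1, kmax + 1))

val = f(0.5, 1000)
avg = -(1 - 0.5) / log(0.5)   # the delta-averaged value 1/(2 log 2) ≈ 0.721348
print(val, avg)
```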
linear-algebra
<p>In my opinion both are almost the same. However, there are some differences: any two elements can be multiplied in a field, but this is not allowed in a vector space, where only scalar multiplication is allowed and the scalars come from the field.</p> <p>Could anyone give me at least one example where a field and a vector space are the same? Every field is a vector space, but not every vector space is a field. I need an example of a vector space that is also a field.</p> <p>Thanks in advance. (I'm not from a mathematical background.)</p>
<p>It is true that vector spaces and fields both have operations we often call multiplication, but these operations are fundamentally different, and, like you say, we sometimes call the operation on vector spaces <em>scalar multiplication</em> for emphasis.</p> <p>The operations on a field <span class="math-container">$\mathbb{F}$</span> are</p> <ul> <li><span class="math-container">$+$</span>: <span class="math-container">$\mathbb{F} \times \mathbb{F} \to \mathbb{F}$</span></li> <li><span class="math-container">$\times$</span>: <span class="math-container">$\mathbb{F} \times \mathbb{F} \to \mathbb{F}$</span></li> </ul> <p>The operations on a vector space <span class="math-container">$\mathbb{V}$</span> over a field <span class="math-container">$\mathbb{F}$</span> are</p> <ul> <li><span class="math-container">$+$</span>: <span class="math-container">$\mathbb{V} \times \mathbb{V} \to \mathbb{V}$</span></li> <li><span class="math-container">$\,\cdot\,$</span>: <span class="math-container">$\mathbb{F} \times \mathbb{V} \to \mathbb{V}$</span></li> </ul> <p>One of the field axioms says that any nonzero element <span class="math-container">$c \in \mathbb{F}$</span> has a multiplicative inverse, namely an element <span class="math-container">$c^{-1} \in \mathbb{F}$</span> such that <span class="math-container">$c \times c^{-1} = 1 = c^{-1} \times c$</span>. 
There is no corresponding property among the vector space axioms.</p> <p>It's an important example---and possibly the source of the confusion between these objects---that any field <span class="math-container">$\mathbb{F}$</span> is a vector space over itself, and in this special case the operations <span class="math-container">$\cdot$</span> and <span class="math-container">$\times$</span> coincide.</p> <p>On the other hand, for any field <span class="math-container">$\mathbb{F}$</span>, the Cartesian product <span class="math-container">$\mathbb{F}^n := \mathbb{F} \times \cdots \times \mathbb{F}$</span> has a natural vector space structure over <span class="math-container">$\mathbb{F}$</span>, but for <span class="math-container">$n &gt; 1$</span> it does not in general have a <em>natural</em> multiplication rule satisfying the field axioms, and hence does not have a natural field structure.</p> <p><strong>Remark</strong> As @hardmath points out in the below comments, one can often realize a finite-dimensional vector space <span class="math-container">$\mathbb{F}^n$</span> over a field <span class="math-container">$\mathbb{F}$</span> as a field in its own right <em>if</em> one makes additional choices. If <span class="math-container">$f$</span> is a polynomial irreducible over <span class="math-container">$\mathbb{F}$</span>, say with <span class="math-container">$n := \deg f$</span>, then we can form the set <span class="math-container">$$\mathbb{F}[x] / \langle f(x) \rangle$$</span> over <span class="math-container">$\mathbb{F}$</span>: This just means that we consider the vector space of polynomials with coefficients in <span class="math-container">$\mathbb{F}$</span> and declare two polynomials to be equivalent if their difference is some multiple of <span class="math-container">$f$</span>. 
Now, polynomial addition and multiplication determine operations <span class="math-container">$+$</span> and <span class="math-container">$\times$</span> on this set, and it turns out that because <span class="math-container">$f$</span> is irreducible, these operations give the set the structure of a field. If we denote by <span class="math-container">$\alpha$</span> the image of <span class="math-container">$x$</span> under the map <span class="math-container">$\mathbb{F}[x] \to \mathbb{F}[x] / \langle f(x) \rangle$</span> (since we identify <span class="math-container">$f$</span> with <span class="math-container">$0$</span>, we can think of <span class="math-container">$\alpha$</span> as a root of <span class="math-container">$f$</span>), then by construction <span class="math-container">$\{1, \alpha, \alpha^2, \ldots, \alpha^{n - 1}\}$</span> is a basis of (the underlying vector space of) <span class="math-container">$\mathbb{F}[x] / \langle f \rangle$</span>; in particular, we can identify the span of <span class="math-container">$1$</span> with <span class="math-container">$\Bbb F$</span>, which we may hence regard as a subfield of <span class="math-container">$\mathbb{F}[x] / \langle f(x) \rangle$</span>; we thus call the latter a <em>field extension</em> of <span class="math-container">$\Bbb F$</span>. 
In particular, this basis defines a vector space isomorphism <span class="math-container">$$\mathbb{F}^n \to \mathbb{F}[x] / \langle f(x) \rangle, \qquad (p_0, \ldots, p_{n - 1}) \mapsto p_0 + p_1 \alpha + \ldots + p_{n - 1} \alpha^{n - 1}.$$</span> Since <span class="math-container">$\alpha$</span> depends on <span class="math-container">$f$</span>, this isomorphism <em>does</em> depend on a choice of irreducible polynomial <span class="math-container">$f$</span> of degree <span class="math-container">$n$</span>, so the field structure defined on <span class="math-container">$\mathbb{F}^n$</span> by declaring the vector space isomorphism to be a field isomorphism is not natural.</p> <p><strong>Example</strong> Taking <span class="math-container">$\Bbb F := \mathbb{R}$</span> and <span class="math-container">$f(x) := x^2 + 1 \in \mathbb{R}[x]$</span> gives a field <span class="math-container">$$\mathbb{C} := \mathbb{R}[x] / \langle x^2 + 1 \rangle.$$</span> In this case, the image of <span class="math-container">$x$</span> under the canonical quotient map <span class="math-container">$\mathbb{R}[x] \to \mathbb{R}[x] / \langle x^2 + 1 \rangle$</span> is usually denoted <span class="math-container">$i$</span>, and this field is exactly the complex numbers, which we have realized as a (real) vector space of dimension <span class="math-container">$2$</span> over <span class="math-container">$\mathbb{R}$</span> with basis <span class="math-container">$\{1, i\}$</span>.</p>
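<p>As a tiny concrete instance of the construction above, take <span class="math-container">$\mathbb{F} = \mathbb{F}_2$</span> and <span class="math-container">$f(x) = x^2 + x + 1$</span>, which is irreducible over <span class="math-container">$\mathbb{F}_2$</span>. The vector space <span class="math-container">$\mathbb{F}_2^2$</span> then acquires the structure of the four-element field <span class="math-container">$GF(4)$</span>. A sketch with elements stored as coefficient pairs <code>(a0, a1)</code> for <span class="math-container">$a_0 + a_1\alpha$</span>, where <span class="math-container">$\alpha^2 = \alpha + 1$</span>:</p>

```python
def mul(u, v):
    """Multiply a0+a1*alpha by b0+b1*alpha in F_2[x]/(x^2+x+1), using alpha^2 = alpha+1."""
    a0, a1 = u
    b0, b1 = v
    c0 = (a0*b0 + a1*b1) % 2           # constant term (a1*b1*alpha^2 contributes 1)
    c1 = (a0*b1 + a1*b0 + a1*b1) % 2   # alpha term (a1*b1*alpha^2 contributes alpha)
    return (c0, c1)

elements = [(a0, a1) for a0 in (0, 1) for a1 in (0, 1)]
one = (1, 0)

# Every nonzero element has a multiplicative inverse -- the field axiom that
# the bare vector space structure on F_2^2 does not provide.
for u in elements:
    if u != (0, 0):
        assert any(mul(u, v) == one for v in elements)

print(mul((0, 1), (0, 1)))   # alpha * alpha = (1, 1), i.e. 1 + alpha
```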
<p>A <em>field</em> is an algebraic structure allowing the four basic operations $+$, $-$, $\cdot$, and $:\,$, such that the usual rules of algebra hold, e.g., $(x+y)\cdot z=(x\cdot z) +(y\cdot z)$, etcetera, and division by $0$ is forbidden. The elements of a given field should be considered as "numbers". The systems ${\mathbb Q}$, ${\mathbb R}$, and ${\mathbb C}$ are fields, but there are many others, e.g., the field ${\mathbb F}_2$ consisting only of the two elements $0$, $1$ and satisfying (apart from the obvious relations) $1+1=0$.</p> <p>A <em>vector space</em> $X$ is in the first place an "additive structure" satisfying the rules we associate with such structures, e.g., $a+({-a})=0$, etc. In addition any vector space has associated with it a certain field $F$, the <em>field of scalars</em> for that vector space. The elements $x$, $y\in X$ cannot only be added and subtracted, but they can be as well <em>scaled</em> by "numbers" $\lambda\in F$. The vector $x$ scaled by the factor $\lambda$ is denoted by $\lambda x$. This scaling satisfies the laws we are accustomed to from the scaling of vectors in ${\mathbb R}^3$: $$\lambda(x+y)=\lambda x+\lambda y,\qquad (\lambda+\mu)x=\lambda x+\mu x\ .$$</p> <p>Asking "What is the difference between a vector space and a field" is similar to asking "What is the difference between tension and charge" in electrodynamics. In both cases the simple answer would be: "They are different notions making sense in the same discipline".</p>
differentiation
<p>For instance, the absolute value function is defined and continuous on the whole real line, but its derivative behaves like a step function with a jump-discontinuity.</p> <p>For some nice functions, though, such as $e^x$ or $\sin(x)$, the derivatives of course are no "worse" than the original function.</p> <p>Can I say something that is <em>typical</em> of the derivative? Is it typically not as nice as the original function?</p>
<p>Yes, that is completely right. And conversely, integration makes functions nicer. </p> <p>One way of measuring how "nice" a function is, is by how many derivatives it has. We say a function $f\in C^k$ if it is $k$ times continuously differentiable. The more times differentiable it is, the nicer a function is. It is "smoother". So if a function is $k$ times differentiable then its derivative is $k-1$ times differentiable. A function is "as nice" as its derivative if and only if it is smooth (infinitely differentiable). These are functions like $\sin(x), e^x$, polynomials, etc. </p> <p>Conversely, integration makes things nicer. For example, integrating even a non-continuous function results in a continuous function: <a href="https://math.stackexchange.com/questions/429769/is-an-integral-always-continuous">Is an integral always continuous?</a></p>
<p>The concept you're talking about is called <em>smoothness</em> (<a href="https://en.wikipedia.org/wiki/Smoothness" rel="nofollow noreferrer">Wikipedia</a>, <a href="http://mathworld.wolfram.com/SmoothFunction.html" rel="nofollow noreferrer">MathWorld</a>).</p> <p>Functions like $e^x$ and $\sin(x)$ and polynomials are called <em>"smooth"</em> because their first and all higher derivatives are continuous. Smooth functions have derivatives all the way down, so they're as nice as their derivatives.</p> <p>But functions like $\operatorname{abs}(x)$ and $\operatorname{sgn}(x)$ aren't smooth since there are discontinuities in either them or their derivatives. They're nicer than their derivatives.</p> <p>A function is in class $C^k$ if its derivatives up to, and including, $k$ are continuous. So the number of levels of nice-ness will depend on $k$. Think about how integrating $\operatorname{sgn}(x)$ gives you $\operatorname{abs}(x)$, which in turn gives $\tfrac{1}{2}\operatorname{sgn}(x)x^2$, and so on. As <strong>Zachary Selk</strong> points out, you can make functions nicer by integrating them.</p> <p>In fact, for most functions, it's more likely that they can be integrated than differentiated. Not only is being "nice" a rare trait, being differentiable at all is too.</p>
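<p>The "integration makes things nicer" point is easy to see numerically: a midpoint-rule antiderivative of the discontinuous $\operatorname{sgn}$ function reproduces the continuous $\operatorname{abs}$ (the step count is an arbitrary choice):</p>

```python
def sgn(t):
    return (t > 0) - (t < 0)

def antideriv(x, steps=10_000):
    """Midpoint-rule approximation of the integral of sgn from 0 to x."""
    h = x / steps
    return sum(sgn((i + 0.5) * h) for i in range(steps)) * h

for x in (-1.0, -0.3, 0.5, 2.0):
    print(x, antideriv(x))   # matches abs(x): the integral of sgn is continuous
```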
logic
<p>I know that there are a lot of unsolved conjectures, but it could be possible for them to be independent of ZFC (see <a href="https://math.stackexchange.com/questions/864149/could-it-be-that-goldbach-conjecture-is-undecidable">Could it be that Goldbach conjecture is undecidable?</a> for example).</p> <p>I was wondering if there is some conjecture for which we have proved that either a proof of it or a proof of its negation exists, but we just haven't found that proof yet.</p> <p>Could such a proof of decidability even exist, or would the only way of proving that a statement is decidable be actually proving or disproving it?</p>
<p>There are plenty of problems in mathematics which can be done by a finite computation in principle, but for which the computation is too large to actually carry out and we don't know of any shortcut to let us find the answer without (more or less) computing it by brute force. For instance, the value of the <a href="https://en.wikipedia.org/wiki/Ramsey%27s_theorem#Ramsey_numbers" rel="noreferrer">Ramsey number</a> $R(5,5)$ is unknown, even though it is known that it must be one of the numbers $43,44,45,46,47,$ or $48$. We know that a proof of which number it actually is exists, since you can in principle find the answer by an exhaustive search of a finite (but very very large) number of cases.</p> <p>Another example is solving the game of chess. We know that with perfect play, one of the following must be true of chess: White can force a win, Black can force a win, or both players can force a draw. A proof of one of the cases must exist, since you can just examine all possible sequences of moves (there are some technicalities about repeated positions but they don't end up mattering).</p> <p>In fact, every example must be of this form, in the following sense. Suppose you have a statement $P$ and you know that either a proof of $P$ exists or a proof of $\neg P$ exists (in some fixed formal system). Then there is an algorithm which you can carry out to determine whether $P$ or $\neg P$ is true (assuming your formal system is <em>sound</em>, meaning that it can only prove true statements): one by one, list out all possible proofs in your formal system and check whether they are a proof of $P$ or a proof of $\neg P$. Eventually you will find a proof of one of them, since a proof of one exists. So this is a computation which you know is guaranteed to be finite in length which you know will solve the problem.</p>
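<p>The "finite but enormous computation" flavour can be seen in miniature with the smallest nontrivial Ramsey number, $R(3,3) = 6$: a brute-force sketch checking every 2-coloring shows that $K_5$ admits a coloring with no monochromatic triangle while $K_6$ does not. (The analogous search for $R(5,5)$ is astronomically larger, which is exactly the problem.)</p>

```python
from itertools import combinations

def every_coloring_has_mono_triangle(n):
    """Brute-force over all 2-colorings of the edges of K_n."""
    edges = list(combinations(range(n), 2))
    idx = {e: i for i, e in enumerate(edges)}
    triangles = list(combinations(range(n), 3))
    for mask in range(1 << len(edges)):
        color = lambda a, b: (mask >> idx[(a, b)]) & 1
        if not any(color(a, b) == color(a, c) == color(b, c)
                   for a, b, c in triangles):
            return False   # found a coloring with no monochromatic triangle
    return True

print(every_coloring_has_mono_triangle(5))   # False: K_5 can avoid mono triangles
print(every_coloring_has_mono_triangle(6))   # True: K_6 cannot, so R(3,3) = 6
```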
<p><a href="https://math.stackexchange.com/a/2273573/26369">Eric Wofsey's answer</a> is certainly more comprehensive, but I thought I'd provide an example where the assurance that there's a proof of $P$ or $\neg P$ is not of the form "there's a straightforward, if tedious, algorithm to check things," as there is in the case of Ramsey Theory or of who wins a game like Chess.</p> <p>The question is that of the "<a href="https://en.wikipedia.org/wiki/Kissing_number_problem" rel="noreferrer">kissing number</a>" for dimension 5 (or anything higher except 8 or 24). In 1d, two intervals of the same length can touch a third without overlapping. In 2d, 6 circles of a given radius can touch a 7th. We don't know the highest number of 5d spheres of a fixed radius that can all touch the same one, but the answer is between 40 and 44, inclusive. Since we're dealing with real numbers, I'd argue there's no obvious algorithm like with Ramsey Theory: we can't just test all of the possible centers of spheres.</p> <p>However, because this kind of question can be translated into a <a href="https://en.wikipedia.org/wiki/Kissing_number_problem#Mathematical_statement" rel="noreferrer">simple statement about real numbers</a> (think about using dot products to deal with the angles, for instance), it only depends on what things are true in any <a href="https://en.wikipedia.org/wiki/Real_closed_field#Definitions" rel="noreferrer">"real-closed field"</a> (a field that acts like the real numbers for many algebraic purposes).</p> <p>The thing is, <a href="https://en.wikipedia.org/wiki/Real_closed_field#Model_theory:_decidability_and_quantifier_elimination" rel="noreferrer">the theory of real-closed fields is complete and decidable</a>! 
<em>Everything</em> you can phrase in the language of real-closed fields can in theory be proven true or false with an algorithm, and the algorithms involved are even relatively efficient.</p> <p>This example is not my own, and is discussed in a number of places, including <a href="https://www.andrew.cmu.edu/user/avigad/Talks/australia4.pdf" rel="noreferrer">these slides on "Formal Methods in Analysis"</a> by <a href="https://www.andrew.cmu.edu/user/avigad/" rel="noreferrer">Jeremy Avigad</a>.</p>
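<p>As a toy version of how such tangency questions reduce to checkable algebraic conditions, here is a sketch (my own, for the easy 2d case, and not the real-closed-field decision procedure) of why the kissing number of circles is 6: the centers of unit circles tangent to a central unit circle sit on a circle of radius 2, and two of them are disjoint exactly when their angular separation is at least 60 degrees.</p>

```python
import math

# Unit circles tangent to a central unit circle have their centres on a
# circle of radius 2.  Two outer circles are non-overlapping iff their
# centres are at distance >= 2, i.e. their angular separation is >= 60
# degrees -- so at most 360/60 = 6 of them fit, and 6 do fit exactly.
def fits(k):
    """Can k >= 2 pairwise-disjoint unit circles all touch a central one?"""
    sep = 2 * math.pi / k                   # best case: equal angular spacing
    neighbour_dist = 4 * math.sin(sep / 2)  # chord between adjacent centres
    return neighbour_dist >= 2 - 1e-12      # tolerance: k = 6 gives exactly 2

print(fits(6), fits(7))  # True False
```

<p>The analogous condition in 5 dimensions is again a finite system of polynomial inequalities in the coordinates of the centers, which is what places it within the decidable theory of real-closed fields.</p>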
geometry
<p>The volume of a cone with height <span class="math-container">$h$</span> and radius <span class="math-container">$r$</span> is <span class="math-container">$\frac{1}{3} \pi r^2 h$</span>, which is exactly one third the volume of the smallest cylinder that it fits inside.</p> <p>This can be proved easily by considering a cone as a <a href="http://en.wikipedia.org/wiki/Solid_of_revolution" rel="noreferrer">solid of revolution</a>, but I would like to know if it can be proved or at least visually demonstrated without using calculus.</p>
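<p>(For reference, the solid-of-revolution computation is the one-line integral <span class="math-container">$$V=\int_0^h \pi\left(\frac{r\,x}{h}\right)^2\,dx=\frac{\pi r^2}{h^2}\cdot\frac{h^3}{3}=\frac13 \pi r^2 h,$$</span> so what I am really after is a calculus-free route to the factor <span class="math-container">$\frac13$</span>.)</p>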
<p><img src="https://i.sstatic.net/ibgrF.gif" alt="alt text"><br> A visual demonstration for the case of a pyramid with a square base. As <a href="https://math.stackexchange.com/questions/623/why-is-the-volume-of-a-cone-one-third-of-the-volume-of-a-cylinder/635#635">Grigory states</a>, <a href="http://en.wikipedia.org/wiki/Cavalieris_principle" rel="noreferrer">Cavalieri&#39;s principle</a> can be used to get the formula for the volume of a cone. We just need the base of the square pyramid to have side length <span class="math-container">$r\sqrt\pi$</span>. Such a pyramid has volume <span class="math-container">$\frac13 \cdot h \cdot \pi \cdot r^2.$</span><br> <img src="https://i.sstatic.net/Y3IH1.png" alt="alt text"><br> Then the area of the base is clearly the same. The cross-sectional area at distance <span class="math-container">$a$</span> from the peak is a simple matter of similar triangles: the radius of the cone&#39;s cross section will be <span class="math-container">$\frac ah \cdot r$</span>. The side length of the square pyramid&#39;s cross section will be <span class="math-container">$\frac ah \cdot r\sqrt\pi.$</span><br> Once again, we see that the areas must be equal. So by Cavalieri&#39;s principle, the cone and square pyramid must have the same volume: <span class="math-container">$\frac13\cdot h \cdot \pi \cdot r^2$</span>.</p>
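<p>The slice-by-slice argument is easy to sanity-check numerically. A small sketch (my own, with an arbitrary radius and height): the circular and square cross sections agree at every height, and summing thin slices recovers <span class="math-container">$\frac13 \pi r^2 h$</span>:</p>

```python
import math

def slice_areas(a, r, h):
    """Cross-sectional areas at distance a from the apex: the cone of
    radius r vs. the square pyramid with base side r*sqrt(pi)."""
    cone = math.pi * (a / h * r) ** 2
    side = a / h * (r * math.sqrt(math.pi))
    return cone, side * side

r, h = 2.0, 5.0  # arbitrary radius and height
# Cavalieri: the two slice areas agree at every height ...
assert all(math.isclose(*slice_areas(k / 10 * h, r, h)) for k in range(11))
# ... so summing thin slices gives the same volume, namely (1/3) pi r^2 h.
n = 100_000
vol = sum(slice_areas((i + 0.5) / n * h, r, h)[0] * h / n for i in range(n))
print(math.isclose(vol, math.pi * r**2 * h / 3, rel_tol=1e-6))  # True
```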
<p>One can cut a cube into 3 pyramids with square bases, so for such pyramids the volume is indeed <span class="math-container">$\frac13 hS$</span>, where <span class="math-container">$h$</span> is the height and <span class="math-container">$S$</span> the base area. And then one uses <a href="http://en.wikipedia.org/wiki/Cavalieris_principle">Cavalieri's principle</a> to prove that the volume of any cone is <span class="math-container">$\frac13 hS$</span>.</p>
linear-algebra
<p>Let <span class="math-container">$A$</span> be a positive-definite real matrix in the sense that <span class="math-container">$x^T A x &gt; 0$</span> for every nonzero real vector <span class="math-container">$x$</span>. I don't require <span class="math-container">$A$</span> to be symmetric.</p> <p>Does it follow that <span class="math-container">$\mathrm{det}(A) &gt; 0$</span>?</p>
<p>Here is an eigenvalue-less proof that if <span class="math-container">$x^T A x &gt; 0$</span> for each nonzero real vector <span class="math-container">$x$</span>, then <span class="math-container">$\det A &gt; 0$</span>.</p> <p>Consider the function <span class="math-container">$f(t) = \det \left(t \cdot I + (1-t) \cdot A\right)$</span> defined on the segment <span class="math-container">$[0, 1]$</span>. Clearly, <span class="math-container">$f(0) = \det A$</span> and <span class="math-container">$f(1) = 1$</span>. Note that <span class="math-container">$f$</span> is continuous. If we manage to prove that <span class="math-container">$f(t) \neq 0$</span> for every <span class="math-container">$t \in [0, 1]$</span>, then it will imply that <span class="math-container">$f(0)$</span> and <span class="math-container">$f(1)$</span> have the same sign (by the intermediate value theorem), and the proof will be complete.</p> <p>So, it remains to show that <span class="math-container">$f(t) \neq 0$</span> whenever <span class="math-container">$t \in [0, 1]$</span>. But this is easy. If <span class="math-container">$t \in [0, 1]$</span> and <span class="math-container">$x$</span> is a nonzero real vector, then <span class="math-container">$$ x^T (tI + (1-t)A) x = t \cdot x^T x + (1-t) \cdot x^T A x &gt; 0, $$</span> which implies that <span class="math-container">$tI + (1-t)A$</span> is not singular, which means that its determinant is nonzero, hence <span class="math-container">$f(t) \neq 0$</span>. Done.</p> <p><em>PS:</em> The proof is essentially topological. 
We have shown that there is a path from <span class="math-container">$A$</span> to <span class="math-container">$I$</span> in the space of all invertible matrices, which implies that <span class="math-container">$\det A$</span> and <span class="math-container">$\det I$</span> can be connected by a path in <span class="math-container">$\mathbb{R} \setminus 0$</span>, which means that <span class="math-container">$\det A &gt; 0$</span>. One could use the same technique to prove other similar facts. For instance, this comes to mind: if <span class="math-container">$S^2 = \{(x, y, z) \mid x^2 + y^2 + z^2 = 1\}$</span> is the unit sphere, and <span class="math-container">$f: S^2 \to S^2$</span> is a continuous map such that <span class="math-container">$(v, f(v)) &gt; 0$</span> for every <span class="math-container">$v \in S^2$</span>, then <span class="math-container">$f$</span> has <a href="http://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping#From_Sn_to_Sn" rel="noreferrer">degree</a> <span class="math-container">$1$</span>.</p>
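<p>A tiny numerical illustration of this path argument (my own sketch, with a made-up <span class="math-container">$2 \times 2$</span> matrix): <span class="math-container">$f(t)$</span> stays positive along the whole segment, and the determinant comes out positive:</p>

```python
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = m
    return a * d - b * c

# Non-symmetric, yet x^T A x = x1^2 + x2^2 > 0 for every nonzero x,
# because the skew off-diagonal terms cancel.
A = [[1.0, 1.0], [-1.0, 1.0]]

def f(t):
    """det(t*I + (1-t)*A): the path from det(A) at t=0 to det(I)=1 at t=1."""
    return det2([[t + (1 - t) * A[0][0], (1 - t) * A[0][1]],
                 [(1 - t) * A[1][0],     t + (1 - t) * A[1][1]]])

samples = [f(k / 100) for k in range(101)]
print(min(samples) > 0, det2(A))  # True 2.0
```

<p>For this particular matrix one can even compute <span class="math-container">$f(t) = 1 + (1-t)^2$</span> by hand, which is visibly nonzero on the whole segment.</p>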
<p>Let $A\in\mathbb{R}^{n\times n}$ such that $x^TAx\geq 0$ for any $x\in\mathbb{R}^n$. </p> <ul> <li><p><em>The real eigenvalues of $A$ are non-negative.</em><br> This follows simply from the fact that to a real eigenvalue $\lambda$ of $A$, you can choose a real eigenvector $x\neq 0$. Hence, $0\leq x^TAx=\lambda x^Tx$ implies that $\lambda\geq 0$.</p></li> <li><p><em>The complex eigenvalues of $A$ have non-negative real parts.</em><br> Assume that $\lambda\in\mathbb{C}$, $\lambda=\xi+i\eta$, $\xi,\eta\in\mathbb{R}$ is an eigenvalue of $A$ with an associated eigenvector $x=u+iv$, $u,v\in\mathbb{R}^n$. From $Ax=\lambda x$, we obtain by equating real and imaginary part of the equality the following relations: $$ Au=\xi u-\eta v, \quad Av=\eta u+\xi v. $$ Premultiplying the first by $u^T$ and the second by $v^T$ gives $$ 0\leq u^TAu=\xi u^Tu-\eta u^Tv, \quad 0\leq v^TAv =\eta v^Tu+\xi v^Tv. $$ Since $u^Tv=v^Tu$, we get by summing the two together that $$ 0\leq u^TAu+v^TAv=\xi(u^Tu+v^Tv). $$ As before, this implies that $\xi\geq 0$ and hence $\mathrm{Re}(\lambda)\geq 0$.</p></li> </ul> <p>To summarize, we have the following statement (including the case with a strict inequality, which follows easily from the proofs above):</p> <blockquote> <p>Let $A\in\mathbb{R}^{n\times n}$ such that $x^TAx\geq 0$ for all $x\in\mathbb{R}^{n}$ ($x^TAx&gt;0$ for all non-zero $x\in\mathbb{R}^n$). Then the eigenvalues of $A$ have non-negative (positive) real parts.</p> </blockquote> <p>Now since the determinant of $A$ is the product of the eigenvalues of $A$, it is:</p> <ul> <li>non-negative if $x^TAx\geq 0$ for all $x\in\mathbb{R}^{n}$,</li> <li>positive if $x^TAx&gt;0$ for all non-zero $x\in\mathbb{R}^{n}$.</li> </ul> <p><strong>SLIGHTLY LONGER NOTE:</strong> </p> <p>When determining the sign of the determinant, we do not need to care much about the complex eigenvalues and avoid thus the second item above about the real parts of the complex spectrum. 
Assume that $\lambda_1,\ldots,\lambda_k\in\mathbb{R}$ and $\lambda_{k+1},\bar{\lambda}_{k+1},\ldots,\lambda_{p},\bar{\lambda}_{p}\in\mathbb{C}\setminus\mathbb{R}$ are the $n$ eigenvalues of $A$ (such that $k+2(p-k)=n$). The determinant of $A$ is given by $$\tag{1} \det(A)=\left(\prod_{i=1}^k\lambda_i\right)\left(\prod_{i=k+1}^p\lambda_i\bar{\lambda}_i\right)=\left(\prod_{i=1}^k\lambda_i\right)\underbrace{\left(\prod_{i=k+1}^p|\lambda_i|^2\right)}_{\geq 0} $$ The determinant is hence equal to the product of the real eigenvalues times something non-negative. </p> <p>Hence for the case $x^TAx\geq 0$ for all real $x$, one just needs to show that the <em>real</em> eigenvalues of $A$ are non-negative (the first item above) in order to arrive at the conclusion that $\det(A)\geq 0$. </p> <p>For the case $x^TAx&gt;0$ for all nonzero real $x$, the analog of the first item above shows that the real eigenvalues are positive and we just need to show that the non-negative term in (1) is actually positive. This can be done simply by showing that $A$ is non-singular, which implies that there is no zero eigenvalue (real or complex). Hence assume that $x^TAx&gt;0$ for all nonzero real $x$ and $A$ is singular. Therefore, there exists $y\in\mathbb{R}^n$, $y\neq 0$, such that $Ay=0$. But $0&lt;y^TAy=y^T0=0$ (contradiction). Consequently, $\det(A)&gt;0$.</p>
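<p>A quick numerical check of (1) on a made-up non-symmetric example (my own sketch): the eigenvalues form a conjugate pair with positive real parts, and their product equals the (positive) determinant:</p>

```python
import cmath

# A made-up non-symmetric example: x^T A x = x1^2 + (x1 + x2)^2 > 0
# for every nonzero real x.
A = [[2.0, 3.0], [-1.0, 1.0]]

# Eigenvalues from the characteristic polynomial
# lambda^2 - tr(A)*lambda + det(A) = 0.
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)      # here tr^2 - 4*det = -11 < 0
lam, lam_bar = (tr + disc) / 2, (tr - disc) / 2

print(lam.real > 0 and lam_bar.real > 0)  # True: positive real parts
print(abs(lam * lam_bar - det) < 1e-12)   # True: the pair contributes |lam|^2 = det = 5
```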