Q | A | meta
---|---|---
Isoperimetric inequalities of a group How do you translate isoperimetric inequalities of a group into the language of Riemann integrals of functions of the form $f\colon \mathbb{R}\rightarrow G$, where $G$ is a metric group, so that being $\delta$-hyperbolic in the sense of Gromov is expressible via Riemann integration?
In other words, how do you define "being a $\delta$-hyperbolic group" by using integrals in metric groups?
(Note: I am not interested in the "Riemann" part, so you are free to take commutative groups with Lebesgue integration, etc.)
| You can do this using metric currents in the sense of Ambrosio-Kirchheim. This is a rather new development of geometric measure theory, triggered by Gromov and really worked out only in the last decade. I should warn you that this is rather technical stuff and nothing for the faint-hearted.
Urs Lang has a set of nice lecture notes, where you can find most of the relevant references, see here.
My friend Stefan Wenger has done quite a bit of work on Gromov hyperbolic spaces and isoperimetric inequalities; his Inventiones paper Gromov hyperbolic spaces and the sharp isoperimetric constant seems most relevant. You can find a link to the published paper and his other work on his home page; the arXiv preprint is here.
I should add that I actually prefer to prove that a linear (or subquadratic) isoperimetric inequality implies $\delta$-hyperbolicity using a coarse notion of area (see e.g. Bridson-Haefliger's book) or using Dehn functions, the latter can be found in Bridson's beautiful paper The geometry of the word problem.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/27731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Why do we restrict the definition of Lebesgue Integrability? The function $f(x) = \sin(x)/x$ is Riemann Integrable from $0$ to $\infty$, but it is not Lebesgue Integrable on that same interval. (Note, it is not absolutely Riemann Integrable.)
Why is it we restrict our definition of Lebesgue Integrability to absolutely integrable? Wouldn't it be better to extend our definition to include ALL cases where Riemann Integrability holds, and use the current definition as a corollary for when the improper integral is absolutely integrable?
| I'll add an additional answer based on a response from my measure theory professor, as it may be of use.
Essentially, his response was that the purpose of Lebesgue Integration is to make the set of integrable functions complete. (Recall that we can form a sequence of Riemann Integrable functions that converge to a function that is not Riemann Integrable.) As was noted by @Carl Mummert in his answer, the things of interest in Lebesgue Theory are convergence theorems (which tie into this notion of completeness), not a theory of integrals for specific classes of functions.
As such, it isn't that the definition of Lebesgue Integrability isn't as broad as it might be, but that it is broad enough to ensure completeness.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/27775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "46",
"answer_count": 8,
"answer_id": 7
} |
Calculate the unknown I've kind of forgotten the name of the following statements:
9x = 11
10x = 9y
What are they called? And how do you solve them?
| The statements you have written are called algebraic equations, in one and two variables respectively (namely $x \text{ and } y$). Together, the two equations form what is known as a linear system of equations with two unknown variables (those variables being $x \text{ and } y$). There are a variety of ways to solve these types of problems, but I am going to assume that you have learned to use substitution or the process of elimination.
For our example we will solve the system of linear equations by substitution first:
$$
\begin{align}
\begin{array}{cc}
{9x} = 11 ~~~~~~~~~ (1) \\
{10x} = 9y ~~~~~~~(2) \\
\end{array}
\end{align}
$$
Solving for $x$ in our first equation ($1$), we divide both sides of the equation by $9$. The left-hand side reduces to $x$ because $\dfrac{9}{9} = 1$, and the right-hand side becomes $\dfrac{11}{9}$.
So we get that $x = \dfrac{11}{9}$.
Plugging in $x = \dfrac{11}{9}$ into our second equation ($2$) we get:
$10x = 9y$
$\Rightarrow 10\left(\dfrac{11}{9}\right) = 9y$
$\Rightarrow \dfrac{110}{9} = 9y$ $~~~~~~~~~$ (divide both sides of the equation by $9$)
On the right-hand side, $\dfrac{9y}{9}$ reduces to $y$; on the left-hand side we get $\dfrac{\;\dfrac{110}{9}\;}{9}$. When we have such a fraction, we multiply the top fraction (numerator) by the reciprocal of the bottom (denominator), which leads us to $\dfrac{110}{9}\cdot\dfrac{1}{9}=\dfrac{110}{81}$. So now we know both of our unknown variables $x \text{ and}~ y$, which are
$$
{x} = \dfrac{11}{9}
$$
$$
{y} = \dfrac{110}{81}
$$
Our second method of solving is to use the process of elimination:
To do this, we are going to multiply equations ($1$) and ($2$) each by some number, so that a variable cancels out when we add the two equations, as follows:
$$
\begin{align}
\begin{array}{ll}
{9x} = 11 ~~~~~~~~~~~~~\text{ multiply by }~~~(-10)\\
{10x} = 9y ~~~~~~~~~~~\text{ multiply by }~~~~~~~(9) \\
\end{array}
\end{align}
$$
Leading us to:
$$
\begin{align}
\begin{array}{cc}
{-90x} = -110 \\
{90x} = 81y \\
\end{array}
\end{align}
$$
Adding these two equations together gives:
$\Rightarrow ~~~~~~ 0 = 81y - 110$
$\Rightarrow ~~ 110 = 81y$
$\Rightarrow ~~~~~~ y = \dfrac{110}{81}$
Now plugging $y$ back into our second equation ($2$), we will have:
$10x = 9y$
$\Rightarrow ~~ 10x = 9\cdot\dfrac{110}{81}$
$\Rightarrow ~~ 10x = \dfrac{110}{9}$
$\Rightarrow ~~~~~~ x = \dfrac{11}{9}$
Hence,
$$
{x} = \dfrac{11}{9}
$$
$$
{y} = \dfrac{110}{81}
$$
This shows that either strategy (substitution or elimination) works to solve this linear system of equations.
So, all in all, we have solved the linear system of equations, and we can see that it has a unique solution.
I hope this explains the concept of solving these types of problems fairly well. Let me know if there is something I explained here that you still do not quite understand.
Good luck!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/27828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Death process (stochastics)
From what I understand, the question is asking me to find $P(X(t) = n \mid X(0) = N)$. I know that with a linear death rate this probability is $\binom{N}{n}\left(e^{-\alpha t}\right)^{n}\left(1 - e^{-\alpha t}\right)^{N-n}$, but I don't think this is true for a constant death rate. Any help on how to approach this would be great! Also, I see in the final solution that there are two answers: one for $n = 1, 2, \ldots, N$ and another one for $n = 0$. Why is this the case? And how would I go about finding both cases? Thanks!
| Some hints:
To find $P_n(t) = Pr(X(t)=n|X(0)=N)$ you need to find the probability that exactly $N-n$ deaths have happened by time $t$. This looks Poisson with a suitable parameter.
The reason $n=0$ has a different form is that there is an unstated assumption that the population cannot fall below $0$, despite the literal implications of an indefinitely constant death rate, so you are looking for the probability that at least $N$ deaths have happened by time $t$.
One point to note: what is constant here is the expected number of deaths per unit of time. Other people use "constant death rate" to mean a constant expected number of deaths per unit of time per unit of population, i.e. what you describe as linear.
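For concreteness, here is a sketch of where these hints lead, assuming the deaths form a Poisson process with rate $\alpha$ (my own summary, not part of the original hints):
$$P_n(t) = e^{-\alpha t}\,\frac{(\alpha t)^{N-n}}{(N-n)!} \quad (n=1,\dots,N), \qquad P_0(t) = 1 - \sum_{k=0}^{N-1} e^{-\alpha t}\,\frac{(\alpha t)^k}{k!},$$
the $n=0$ case collecting all the probability of "at least $N$ deaths".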
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/27930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
the symbol for translation, transformation, or conversion What is the symbol for demonstrating syntactic conversion (transformation or translation)? For example, I want to show a calculation sequence, from $ \neg ( A \wedge B ) $ to $ \neg A \vee \neg B $. Now I just use $ \vdash $: $ \neg ( A \wedge B ) \vdash \neg A \vee \neg B $. Is there a suitable symbol to replace $ \vdash $?
Thank you.
Kejia
| You could use either $\Leftrightarrow$ (\Leftrightarrow) or $\equiv$ (\equiv), both meaning that the statements are logically equivalent.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/27995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
expected area of a triangle determined by randomly placed points Three points are placed at independently and at random in a unit square. What is the expected value of the area of the triangle formed by the three points?
| Here's a Perl script that confirms the answer Shai linked to via a Monte Carlo approach.
#!/usr/bin/perl -w
$numTrials = 1000000 ;
# Euclidean distance between the stored points with indices $point1 and $point2
sub distance {
my $point1 = $_[0] ;
my $point2 = $_[1] ;
return sqrt(($x[$point1]-$x[$point2])**2 + ($y[$point1]-$y[$point2])**2) ;
}
# Heron's formula: triangle area from the three side lengths
sub heron {
my $a = $legLength[$_[0]] ;
my $b = $legLength[$_[1]] ;
my $c = $legLength[$_[2]] ;
my $s = ( $a + $b + $c ) / 2 ;
return sqrt( $s * ( $s - $a ) * ( $s - $b ) * ( $s - $c ) ) ;
}
# Draw three uniform random points in the unit square and return the triangle's area
sub doAtriangle() {
for ( my $j = 0; $j <= 2 ; $j++ ) {
$x[$j] = rand(1) ;
$y[$j] = rand(1) ;
}
$legLength[0] = distance(0,1) ;
$legLength[1] = distance(1,2) ;
$legLength[2] = distance(2,0) ;
return heron(0,1,2) ;
}
for ( $i = 0 ; $i < $numTrials ; $i++ ) {
$sum += doAtriangle() ;
}
print $sum/$numTrials . "\n" ; # should approach 11/144 = 0.0763...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 3
} |
Finding the optimum supply quantity when there is uncertainty in forecast This is actually a quiz that will be needed in a real-life food stall! I need to decide how much stock to supply for my pumpkin soup stall. I sell each cup of soup for $5$ dollars, and let's say my ingredient cost is $1$ dollar per cup. Therefore the cost of under-forecasting is $4$ dollars (lost profit) per unit, while the cost of over-forecasting is $1$ dollar per unit.
My forecast isn't so simple, however. I'm guessing that the most probable number of sales is $150$ units, but I'm very unsure, so there's a normal distribution behind this prediction with a standard deviation of $30$ units.
This is harder than I expected.
Intuitively I would prepare ingredients for $180$ units, at which point I'd guess that the likely opportunity costs that would come with understocking would roughly meet the likely costs of overstocking. But given this is such a common dilemma, I thought that someone must be able to find a precise solution, and would then hopefully be able to explain it in layman's terms.
| This appears to be an instance of the Newsvendor Model. If the probability distribution of your demand is known, a (more-or-less) closed-form solution exists in terms of your costs and profit.
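For this particular stall, a minimal sketch of that computation (assuming the standard critical-fractile formula with underage cost $4$ and overage cost $1$, and the normal demand forecast from the question; the variable names are mine):

from statistics import NormalDist

underage = 4.0                            # profit lost per unit of unmet demand
overage = 1.0                             # ingredient cost per unsold unit
ratio = underage / (underage + overage)   # critical fractile = 0.8

demand = NormalDist(mu=150, sigma=30)
q = demand.inv_cdf(ratio)                 # optimal stocking quantity
print(round(q))                           # about 175, close to the intuitive guess of 180

So the intuition of preparing for roughly $180$ units was not far off the newsvendor answer.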
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
exponential equation $$\sqrt{(5+2\sqrt6)^x}+\sqrt{(5-2\sqrt6)^x}=10$$
So I have squared both sides and got:
$$(5-2\sqrt6)^x+(5+2\sqrt6)^x+2\sqrt{1^x}=100$$
$$(5-2\sqrt6)^x+(5+2\sqrt6)^x+2=100$$
I don't know what to do now
| You don't have to square the equation in the first place.
Let $y = \sqrt{(5+2\sqrt{6})^x}$; since $(5+2\sqrt{6})(5-2\sqrt{6}) = 25 - 24 = 1$, we then have $\frac{1}{y} = \sqrt{(5-2\sqrt{6})^x}$. Hence you have $y + \frac{1}{y} = 10$, i.e. $y^2 + 1 = 10y$, i.e. $y^2-10y+1 = 0$.
Hence, $(y-5)^2 =24 \Rightarrow y = 5 \pm 2 \sqrt{6}$.
Hence, $$\sqrt{(5+2\sqrt{6})^x} = 5 \pm 2\sqrt{6} \Rightarrow x = \pm 2$$
(If you plug in $x = \pm 2$, you will get $5+2\sqrt{6} + 5-2\sqrt{6} $ which is nothing but $10$)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 7,
"answer_id": 0
} |
A first order sentence such that the finite Spectrum of that sentence is the prime numbers The finite spectrum of a theory $T$ is the set of natural numbers such that there exists a model of that size. That is $Fs(T):= \{n \in \mathbb{N} | \exists \mathcal{M}\models T : |\mathcal{M}| =n\}$ . What I am asking for is a finitely axiomatized $T$ such that $Fs(T)$ is the set of prime numbers.
In other words in what specific language $L$, and what specific $L$-sentence $\phi$ has the property that $Fs(\{\phi\})$ is the set of prime numbers?
| This exists due to very general results, namely that the set of primes is rudimentary. See this excellent survey on spectra: Durand et al. Fifty Years of the Spectrum Problem: Survey and New Results. The same holds true for all known "natural" number theoretic functions. Indeed, the authors remark in section 4.2 that "we are not aware of a natural number-theoretic function that is provably not rudimentary".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 2
} |
How to simplify trigonometric inequality? $| 3 ^ { \tan ( \pi x ) } - 3 ^ { 1 - \tan ( \pi x ) } | \geq 2$
| Let $y=3^{\tan(\pi x)}$ so that $3^{1-\tan(\pi x)}=\frac{3^1}{3^{\tan(\pi x)}}=\frac{3}{y}$. Now, your inequality becomes $|y-\frac{3}{y}|\ge 2$.
This can be solved with the boundary algorithm (sometimes called the test-point method): solve the corresponding equation $|y-\frac{3}{y}|=2$, plot those solutions on a number line, plot any values of $y$ for which part(s) of the equation are undefined (e.g. $y=0$), and test a value in each resulting interval to see if that interval satisfies the inequality you're solving.
Once you've got a solution for $y$, go back and use that to solve for $x$.
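For instance, a sketch of the $y$-step by direct algebra (my own route, using that $y = 3^{\tan(\pi x)} > 0$, so one may multiply through by $y$):
$$\left|y-\frac{3}{y}\right| \ge 2 \iff |y^2-3| \ge 2y \iff y^2-2y-3 \ge 0 \ \text{ or } \ y^2+2y-3 \le 0,$$
which for $y>0$ gives $y \ge 3$ or $0 < y \le 1$, i.e. $\tan(\pi x) \ge 1$ or $\tan(\pi x) \le 0$.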
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
If f is surjective and g is injective, what is $f\circ g$ and $g\circ f$? Say I have $f=x^2$ (surjective) and $g=e^x$ (injective), what would $f\circ g$ and $g\circ f$ be? (injective or surjective?)
Both $f$ and $g : \mathbb{R} \to \mathbb{R}$.
I've graphed these out using Maple but I don't know how to write the proof, please help me!
| When you write $x$ in $f(x)=x^2$, it is a "dummy variable" in that you can put in anything in the proper range (here presumably the real numbers). So $f(g(x))=(g(x))^2$. Then you can expand the right side by inserting what you know about $g(x)$. Getting $g(f(x))$ is similar. Then for the injective/surjective part you could look at this question
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Is Lagrange's theorem the most basic result in finite group theory? Motivated by this question, can one prove that the order of an element in a finite group divides the order of the group without using Lagrange's theorem? (Or, equivalently, that the order of the group is an exponent for every element in the group?)
The simplest proof I can think of uses the coset proof of Lagrange's theorem in disguise and goes like this: take $a \in G$ and consider the map $f\colon G \to G$ given by $f(x)=ax$. Consider now the orbits of $f$, that is, the sets $\mathcal{O}(x)=\{ x, f(x), f(f(x)), \dots \}$. Now all orbits have the same number of elements and $|\mathcal{O}(e)| = o(a)$. Hence $o(a)$ divides $|G|$.
This proof has perhaps some pedagogical value in introductory courses because it can be generalized in a natural way to non-cyclic subgroups by introducing cosets, leading to the canonical proof of Lagrange's theorem.
Has anyone seen a different approach to this result that avoids using Lagrange's theorem? Or is Lagrange's theorem really the most basic result in finite group theory?
| I am late...
Here is a proposal, probably not far from Ihf's answer.
For $a \in G$ of order $p$,
define the binary relation
$x\cal Ry$ : $\exists k\in \mathbb{N} ; k<p$ such that $y=a^kx$
$\cal R$ is an equivalence relation on $G$ and sets up a partition of $G$.
A class is defined by $C_x=\left\{a^kx|k=0,1,\ldots,p-1 \right\}$
All the classes have $p$ elements, then $n=|G|$ is a multiple of $p$ ; $(n=pm)$ and $a^n=a^{pm}=e$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "85",
"answer_count": 2,
"answer_id": 1
} |
Incremental calculation of inverse of a matrix Does there exist a fast way to calculate the inverse of an $N \times N$ matrix, if we know the inverse of the $(N-1) \times (N-1)$ sub-matrix?
For example, if $A$ is a $1000 \times 1000$ invertible matrix for which the inverse is known, and $B$ is a $1001 \times 1001$ matrix obtained by adding a new row and column to $A$, what is the best approach for calculating inverse of $B$?
| Blockwise inverse
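A minimal numerical sketch of the blockwise idea, assuming $B$ is $A$ bordered by a column $u$, a row $v^{T}$ and a corner $d$, with the Schur complement $s = d - v^{T}A^{-1}u$ nonzero (the function name and the NumPy check are my own illustration):

import numpy as np

def bordered_inverse(A_inv, u, v, d):
    # B = [[A, u], [v^T, d]]; given A^{-1}, build B^{-1} with O(n^2) extra work.
    Au = A_inv @ u                       # A^{-1} u
    vA = v @ A_inv                       # v^T A^{-1}
    s = d - v @ Au                       # Schur complement, assumed nonzero
    top_left = A_inv + np.outer(Au, vA) / s
    corner = np.array([[1.0 / s]])
    return np.block([[top_left,          -Au[:, None] / s],
                     [-vA[None, :] / s,  corner]])

# quick check against a direct inverse
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)); u = rng.normal(size=4); v = rng.normal(size=4)
B = np.block([[A, u[:, None]], [v[None, :], np.array([[2.0]])]])
assert np.allclose(bordered_inverse(np.linalg.inv(A), u, v, 2.0), np.linalg.inv(B))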
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
} |
Simple 4-cycle permutation I call a 4-cycle permutation simple if I can write it as $(i,i+1,i+2,i+3)$ so $(2,3,4,5)$ is a simple 4-cycle but $(1,3,4,5)$ is not. I want to write $(1,2,3,5)$ as a product of simple 4-cycles. So this is what I did:
$$
(1,2,3,5)=(1,2)(1,3)(1,5)
$$
but
$$\begin{align}
(1,3)&=(2,3)(1,2)(2,3)\\
(1,5)&=(4,5)(3,4)(2,3)(1,2)(2,3)(3,4)(4,5)
\end{align}$$
So now
$$(1,2,3,5)=(1,2)(2,3)(1,2)(2,3)(4,5)(3,4)(2,3)(1,2)(2,3)(3,4)(4,5)$$
Can you please give me a hint on how I can express
$$(1,2)(2,3)(1,2)(2,3)(4,5)(3,4)(2,3)(1,2)(2,3)(3,4)(4,5)$$
as a product of simple 4-cycles.
Note: We do permutation multiplication from left to right.
| I'm strictly shooting from the hip (i.e. this is just instinct), but it might help if you consider the following:
(1) The 4th powers of 4-cycles are unity. I.e. $(1234)^4 = e$ where $e$ is the identity.
(2) This means that inverses exist. I.e. $(1234)^3 (1234)^1 = e$.
(3) And this gives a suggestion for a way of walking around the 4-cycles.
If you need another hint, come back and ask again tomorrow?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
kernel maximal ideal If I have a homomorphism $\phi : R \rightarrow S$ between integral domains, how can I show that if the kernel is non-zero then it is a maximal ideal in R?
| The kernel of a ring homomorphism $f\colon R\to S$ is maximal if and only if $f(R)$ is a simple ring (has no proper nontrivial two-sided ideals). In the case of a commutative ring with identity $1\neq 0$, a ring is simple if and only if it is a field.
So a homomorphism $\phi\colon R\to S$ between integral domains (which necessarily have $1\neq 0$) which sends $1$ to $1$ has kernel equal to a maximal ideal if and only if the image is a field. This is not true in general (as Tobias's example shows), but may be the case. E.g., $f\colon\mathbb{Q}[x]\to\mathbb{Q}[x]$ given by $f(p(x)) = p(0)$ has image equal to $\mathbb{Q}$, so the kernel, $(x)$, is maximal.
Since the image will always be an integral domain (being a subring of an integral domain that contains $1$), the kernel is always a prime ideal; if the morphism is not injective, then the kernel is always a proper nontrivial prime ideal.
In some cases, this may imply maximality. For example, if the ring $R$ has Krull dimension $1$ (this includes PIDs that are not fields), or if $R$ is an Artin ring (has the descending chain condition), then the conclusion you want will follow.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Hints for proving some algebraic integer rings are Euclidean In my book - Algebraic Number Theory and Fermat's Last Theorem Ian Stewart, David Tall - there is the exercise:
* Prove that the ring of integers of $\mathbb{Q}(\zeta_5)$ (the 5th cyclotomic ring) is Euclidean.
I can prove that the integers of $\mathbb{Q}(\sqrt{-1})$ are Euclidean by a geometric argument but this doesn't work for the first problem since pentagons aren't in a lattice.
If anyone could give me a hint, thank you.
| HINT $\ \ $ It is norm Euclidean, i.e. the absolute value of the norm serves as a Euclidean function. For a nice survey see Lemmermeyer: The Euclidean algorithm in algebraic number fields.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Bijection between an open and a closed interval Recently, I answered to this problem:
Given $a<b\in \mathbb{R}$, find explicitly a bijection $f(x)$ from
$]a,b[$ to $[a,b]$.
using an "iterative construction" (see below the rule).
My question is: is it possible to solve the problem finding a less exotic function?
I mean: I know such a bijection cannot be monotone, nor globally continuous; but my $f(x)$ has a lot of jumps... Hence, can one do without so many discontinuities?
W.l.o.g. assume $a=-1$ and $b=1$ (the general case can be handled by translation and rescaling).
Let:
(1) $X_0:=]-1,-\frac{1}{2}] \cup [\frac{1}{2} ,1[$, and
(2) $f_0(x):=\begin{cases} -x-\frac{3}{2} &\text{, if } -1<x\leq -\frac{1}{2} \\ -x+\frac{3}{2} &\text{, if } \frac{1}{2}\leq x<1 \\ 0 &\text{, otherwise} \end{cases}$,
so that the graph of $f_0(x)$ is made of two segments (parallel to the line $y=x$) and one segment laying on the $x$ axis; then define by induction:
(3) $X_{n+1}:=\frac{1}{2} X_n$, and
(4) $f_{n+1}(x):= \frac{1}{2} f_n(2 x)$
for $n\in \mathbb{N}$ (hence $X_n=\frac{1}{2^n} X_0$ and $f_n=\frac{1}{2^n} f_0(2^n x)$).
Then the function $f:]-1,1[\to \mathbb{R}$:
(5) $f(x):=\sum_{n=0}^{+\infty} f_n(x)$
is a bijection from $]-1,1[$ to $[-1,1]$.
Proof: i. First of all, note that $\{ X_n\}_{n\in \mathbb{N}}$ is a pairwise disjoint covering of $]-1,1[\setminus \{ 0\}$. Moreover the range of each $f_n(x)$ is $f_n(]-1,1[)=[-\frac{1}{2^n}, -\frac{1}{2^{n+1}}[\cup \{ 0\} \cup ]\frac{1}{2^{n+1}}, \frac{1}{2^n}]$.
ii. Let $x\in ]-1,1[$. If $x=0$, then $f(x)=0$ by (5). If $x\neq 0$, then there exists only one $\nu\in \mathbb{N}$ s.t. $x\in X_\nu$, hence $f(x)=f_\nu (x)$. Therefore $f(x)$ is well defined.
iii. By i and ii, $f(x)\lesseqgtr 0$ for $x\lesseqgtr 0$ and the range of $f(x)$ is:
$f(]-1,1[)=\bigcup_{n\in \mathbb{N}} f_n(]-1,1[) =[-1,1]$,
therefore $f(x)$ is surjective.
iv. On the other hand, if $x\neq y \in ]-1,1[$, then: if there exists $\nu \in \mathbb{N}$ s.t. $x,y\in X_\nu$, then $f(x)=f_\nu (x)\neq f_\nu (y)=f(y)$ (for $f_\nu (x)$ restricted to $X_\nu$ is injective); if $x\in X_\nu$ and $y\in X_\mu$, then $f(x)=f_\nu (x)\neq f_\mu(y)=f(y)$ (for the restriction of $f_\nu (x)$ to $X_\nu$ and of $f_\mu(x)$ to $X_\mu$ have disjoint ranges); finally if $x=0\neq y$, then $f(x)=0\neq f(y)$ (because of ii).
Therefore $f(x)$ is injective, hence a bijection between $]-1,1[$ and $[-1,1]$. $\square$
| Define a bijection $f:(-1,1)\rightarrow[-1,1]$ as follows: $f(x)=2x$ if $|x|=2^{-k}$ for some $k\in\mathbb{N}$; otherwise $f(x)=x$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 4,
"answer_id": 2
} |
Primary ideals of Noetherian rings which are not irreducible It is known that all prime ideals are irreducible (meaning that they cannot be written as an finite intersection of ideals properly containing them). While for Noetherian rings an irreducible ideal is always primary, the converse fails in general. In a recent problem set I was asked to provide an example of a primary ideal of a Noetherian ring which is not irreducible. The example I came up with is the ring $\mathbb{Z}_{p^2}[\eta]$ where $p$ is prime and $\eta$ is a nilpotent element of order $n > 2$, which has the $(p,\eta)$-primary ideal $(p)\cap (\eta) = (p\eta)$.
But this got me thinking: how severe is the failure of primary ideals to be irreducible in Noetherian rings?
In particular, are primary ideals of a Noetherian domain irreducible, or is a stronger condition on the ring required? I'd love to see suitably strong criteria for all primary ideals of a Noetherian ring to be irreducible, or examples of primary ideals of "well-behaved" rings which are not irreducible.
| The answer to the question "are primary ideals of a Noetherian domain irreducible?" is "no". For example, take the domain $R=K[x,y]$, polynomials over a field $K$. The ideal $I=(x^2,xy,y^2)$ is $(x,y)$-primary but reducible, because $I=(x,y^2)\cap (y,x^2)$. Since $R$ is a Noetherian domain, we have a counterexample.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28620",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 3,
"answer_id": 2
} |
Fast growing function Aside from the power, gamma, exponential functions are there any other very fast growing functions in (semi-) regular use?
| Knuth's up-arrow notation grows pretty fast. Basically, it goes as follows:
$$a\uparrow b=a^b$$
$$a\uparrow\uparrow b=\underbrace{a^{a^{\cdot^{\cdot^{a}}}}}_{b\text{ copies of }a}$$
$$a\uparrow\uparrow\uparrow b=\underbrace{a\uparrow\uparrow(a\uparrow\uparrow (\dots(a\uparrow\uparrow a)\dots))}_{b\text{ copies of }a}$$
etc.; you get the main idea. Its use? It's a good way to represent large numbers, the most famous of which is Graham's number, an upper bound for a problem in Ramsey theory.
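A minimal recursive sketch of the up-arrow operation in Python (assuming the usual conventions $a\uparrow^1 b = a^b$ and $a\uparrow^n 0 = 1$; only tiny arguments terminate in practice):

def up(a, n, b):
    # Knuth's a "up-arrow n times" b; feasible only for very small arguments.
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(2, 2, 4))  # 2↑↑4 = 2^(2^(2^2)) = 65536
print(up(3, 3, 2))  # 3↑↑↑2 = 3↑↑3 = 3^27 = 7625597484987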
Possibly just as famous as the above, we have the Ackermann function. Its original use was to show that not all total computable functions are primitive recursive, and it eventually found its way into things like computability theory and computer science.
Within the realm of googology, arguably one of the most useful functions is the fast-growing hierarchy. It is useful because it is simple and can easily be used to rank most functions. For example, it can not only rank addition, multiplication, and exponentiation against each other, but easily extends much further, providing non-trivial and decent lower bounds even for the TREE sequence:
$$\text{TREE}(n)\ge f_{\vartheta(\Omega^\omega\omega)}(n)$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Decomposition of $\Bbb R^n$ as a union of countably many disjoint closed balls and a null set This is a problem in Frank Jones's Lebesgue integration on Euclidean space (p.57),
$$\mathbb{R}^n = N \cup \bigcup_{k=1}^\infty \overline{B}_k$$
where $\lambda(N)=0$, and the closed balls are disjoint.
Could anyone give some hints?
| This is an idea which I cannot see if it ends up working or not. Maybe someone can?
Consider the set $\mathcal S$ of all families of closed balls in $\mathbb R^n$ which are pairwise disjoint. Ordered by inclusion, this poset satisfies the hypothesis in Zorn's lemma, so there exist maximal elements $S\in\mathcal S$.
One could hope for $S$ to be a candidate...
Now: is $\mathbb R^n\setminus\bigcup_{B\in S}B$ a null set? I don't see how to prove this...
Notice, though, that every decomposition of the kind gylns wants does give a maximal element in $\mathcal S$.
Later: This idea does not work: Mike has explicitly constructed a counterexample in the comments below.
Is there a way to fix this? I mean: can one select a smaller set $\mathcal S$ such that the union of its maximal elements has null complement?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 5,
"answer_id": 4
} |
Limit Comparison Test Using the limit comparison test, how do I test the convergence of the sum of $$\frac{1+2^{(n+1)}}{1+3^{(n+1)}}?$$
| The limit comparison test is a good substitute for the comparison test when the inequalities are difficult to establish; essentially, if you have a feel that the series in front of you is "essentially proportional" to another series whose convergence you know, then you can try to use the limit comparison test for it.
Here, for large $n$ it should be clear that $2^{n+1}+1$ is "essentially" just $2^{n+1}$; and $3^{n+1}+1$ is "essentially" the same as $3^{n+1}$. So the fraction will be "essentially", for large $n$, about the same as $\frac{2^{n+1}}{3^{n+1}}$. So this suggests using limit comparison to compare
$$\sum \frac{1+2^{n+1}}{1+3^{n+1}}$$
with
$$\sum\frac{2^{n+1}}{3^{n+1}} = \sum\left(\frac{2}{3}\right)^{n+1}.$$
The latter is a geometric series, so it should be straightforward to determine whether it converges or not.
So let $a_n = \frac{1+2^{n+1}}{1+3^{n+1}}$, and $b_n = \left(\frac{2}{3}\right)^{n+1}$, and compute
$$\lim_{n\to\infty}\frac{a_n}{b_n}.$$
If this limit exists and is positive (greater than $0$), then both series converge or both series diverge.
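A quick numerical sketch of that ratio, using exact fractions in Python (the sampled values of $n$ are my own choice):

from fractions import Fraction

def a(n): return Fraction(1 + 2**(n + 1), 1 + 3**(n + 1))
def b(n): return Fraction(2, 3)**(n + 1)

for n in (1, 5, 10, 50):
    print(n, float(a(n) / b(n)))   # the ratio tends to 1

Since the limit is $1 > 0$ and the geometric series converges, the original series converges as well.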
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do I equidistantly discretize a continuous function? Let's say I have a piecewise function that's continuous in an interval [1,7]:
$f(x) = \left\{
\begin{array}{lr}
-5x + 6 & : x \in [1,2)\\
-4x + 14 & : x \in [2,3)\\
-0.25x + 2.75 & : x \in [3,7]
\end{array}
\right.$
* How would I get a discretization of that function that consists of 10 equidistant points? [I'm interested in method(s), not calculations for that example.]
* More generally, how would I discretize functions with any number of parameters to get n equidistant points (e.g., to represent the surface of a sphere with 100 equidistant points)?
| In one dimension, you could try to parametrize the curve $C(t)=\{x(t),y(t)\}$ in such a way that the "velocity" is constant, and then quantize the parameter $t$. That is feasible, but it would lead you to equidistant points along the curve, and that's not, apparently, what you are after.
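If equal spacing along the curve does happen to be acceptable, here is a minimal NumPy sketch of that one-dimensional idea for a densely sampled curve (the function name and the interpolation shortcut are my own):

import numpy as np

def equidistant_on_curve(xs, ys, n):
    # Return n points equally spaced by arc length along the polyline (xs, ys).
    seg = np.hypot(np.diff(xs), np.diff(ys))       # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])    # cumulative arc length
    targets = np.linspace(0.0, s[-1], n)           # equally spaced arc lengths
    return np.interp(targets, s, xs), np.interp(targets, s, ys)

# e.g. sample the question's piecewise function densely first
xs = np.linspace(1.0, 7.0, 2001)
ys = np.piecewise(xs, [xs < 2, (xs >= 2) & (xs < 3), xs >= 3],
                  [lambda x: -5*x + 6, lambda x: -4*x + 14, lambda x: -0.25*x + 2.75])
px, py = equidistant_on_curve(xs, ys, 10)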
Otherwise, the problem seems awkward, only tractable by some iterative algorithm.
In more dimensions it's worse; it even becomes ill-defined: what would "n equidistant points" mean?
There are many iterative algorithms, some related to vector quantization or clustering, some inspired by physical systems. For example, you could throw N random initial points over your domain, and move them according to some "energy" function that increases at short distances: in that way, the points would try to get as far from the others as they can, and that "low-energy" configuration would correspond, more or less conceptually, to the "equidistant points" you are envisioning.
If you additionally want to impose some organization/ordering/topology on the points (for example, in 2D you could want them to form some distorted mesh, with each point having four 'neighbours': north, east, south, west), then you should take a look at Self-Organizing Maps (Kohonen).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28962",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Counting number of sequences under cyclic permutation If I have say $2$ A matrices, $2$ B matrices and $1$ C matrix then I would like to know how many distinct traces of products of all of them can be formed.
Assume that $A$, $B$ and $C$ don't commute with each other.
Like $AABBC$, $CAABB$ and $BCAAB$ are distinct products which will have the same trace but $ABABC$ has a different trace.
I would like to know how to count the number of products up to having the same trace.
* Though I don't need it at this point, I would be curious to know how the problem might get complicated if one assumes that one or more pairs of the matrices commute.
| If you want to count the number of sequences with a fixed number of different types of letters up to cyclic permutation (words that are not guaranteed to have the same trace), then a complete answer is provided by the Polya(-Redfield) enumeration theorem, which is a corollary of Burnside's lemma.
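As a sketch of the counting side only (brute force, checked against Burnside's lemma: the word $AABBC$ has $5!/(2!\,2!\,1!)=30$ distinct arrangements, and since $5$ is prime and the word is not constant, no nontrivial rotation fixes any of them, giving $30/5=6$ classes):

from itertools import permutations

def count_up_to_rotation(word):
    # Count distinct arrangements of `word` up to cyclic rotation (brute force).
    def canonical(s):
        return min(s[i:] + s[:i] for i in range(len(s)))  # least rotation
    return len({canonical(''.join(p)) for p in permutations(word)})

print(count_up_to_rotation('AABBC'))   # 6, matching Burnside's lemma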
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Understanding proof of Farkas Lemma I've attached an image of my book (Theorem 4.4.1 is at the bottom of the image). I need help understanding what this book is saying.
In the first sentence on p.113:
"If (I) holds, then the primal is
feasible, and its optimal objective is
obviously zero",
They are talking about the scalar value resulting from taking the dot product of the zero vector and $x$, right? That's obviously zero. Because if they're talking about $x$ itself, then it makes no sense. Okay.
Next sentence:
"By applying the weak duality result
(Theorem 4.4.1), we have that any dual
feasible vector $u$ (that is, one
which satisfies $A'u \ge 0$) must have
$b'u \ge 0$."
I don't understand this sentence. How is the weak duality result being applied? I can see that $Ax = b, x \ge 0$ matches up with $Ax \ge b, x \ge 0$, but I don't see where $b'u \ge 0$ comes from. I would think that the only thing you could conclude from Theorem 4.4.1 is that $b'u \le 0$ since p = 0 in that problem.
Thanks in advance.
| I'll try a very short answer to the question of why a dual feasible vector $u$ must satisfy $b'\cdot u\geq 0$ when the primal LP has a feasible solution. Observe that for any $p\geq 0$, the vector $pu$ is still dual feasible; as such, $b'\cdot u<0$ would make the dual objective unbounded from below, because $pb'\cdot u$ could be made as low as desired. But the dual objective can be unbounded from below only when the primal LP has no feasible solution (weak duality is enough to verify this).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Why determinant of a 2 by 2 matrix is the area of a parallelogram? Let $A=\begin{bmatrix}a & b\\ c & d\end{bmatrix}$.
How could we show that $ad-bc$ is the area of a parallelogram with vertices $(0, 0),\ (a, b),\ (c, d),\ (a+c, b+d)$?
Are the areas of the following parallelograms the same?
$(1)$ parallelogram with vertices $(0, 0),\ (a, b),\ (c, d),\ (a+c, b+d)$.
$(2)$ parallelogram with vertices $(0, 0),\ (a, c),\ (b, d),\ (a+b, c+d)$.
$(3)$ parallelogram with vertices $(0, 0),\ (a, b),\ (c, d),\ (a+d, b+c)$.
$(4)$ parallelogram with vertices $(0, 0),\ (a, c),\ (b, d),\ (a+d, b+c)$.
Thank you very much.
| The oriented area $A(u,v)$ of the parallelogram spanned by vectors $u,v$ is bilinear (e.g. $A(u+v,w)=A(u,w)+A(v,w)$ can be seen by adding and removing a triangle) and skew-symmetric. Hence $A(ae_1+be_2,ce_1+de_2)=(ad-bc)A(e_1,e_2)=ad-bc$. (The same works for oriented volumes in any dimension.)
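Spelling out that last computation (a short sketch using bilinearity, $A(e_i,e_i)=0$ and $A(e_2,e_1)=-A(e_1,e_2)$):
$$A(ae_1+be_2,\,ce_1+de_2)=ac\,A(e_1,e_1)+ad\,A(e_1,e_2)+bc\,A(e_2,e_1)+bd\,A(e_2,e_2)=(ad-bc)\,A(e_1,e_2).$$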
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29128",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "80",
"answer_count": 11,
"answer_id": 3
} |
Integral $\int{\sqrt{25 - x^2}dx}$ I'm trying to find $\int{\sqrt{25 - x^2} dx}$
Now I know that $\int{\frac{dx}{\sqrt{25 - x^2}}}$ would have been $\arcsin{\frac{x}{5}} + C$, but this integral I'm asking about has the rooted term in the numerator.
What are some techniques to evaluate this indefinite integral?
| Oh man, this wasn't my idea, but I found it as a hint in Schaum's Calculus 5e.
If evaluated as a definite integral from $x=0$ to $x=5$, one can consider $ \int{ \sqrt{ 25 - x^2 } dx } $ as the area under a quarter circle of radius 5.
So
$ \int_0^5{ \sqrt{ 25 - x^2 } dx } $
$ = \frac{1}{4} \pi r^2 $
$ = \frac{ 25 \pi }{4} $
This won't work if you integrate only part of the circle, however.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
} |
Proving $\frac{1}{2}(n+1)<\frac{n^{n+\frac{1}{2}}}{e^{n-1}}$ without induction I want to show that $\displaystyle\frac{1}{2}(n+1)<\frac{n^{n+\frac{1}{2}}}{e^{n-1}}$. But except induction, I do not know how I could prove this?
| This is equivalent to showing $$n < 2 \frac{n^{n+\frac{1}{2}}}{e^{n-1}}-1 = 2\left(\frac{n^{n}n^{1/2}e}{e^n}\right)-1 $$
Let $$f(n) = 2\left(\frac{n^{n}n^{1/2}e}{e^n}\right)-n-1$$
Then show $f(n) > 0$ (e.g. find the critical points and use the first derivative test).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Find 2d point given 2d point and transformation matrix I have a 3x1 matrix (a column vector) representing a point in space:
[x]
[y]
[1]
And a 3x3 matrix representing an affine 2d transformation matrix
[a][b][c]
[d][e][f]
[g][h][i]
How do I multiply the matrices so that I am given the matrix?
[newX]
[newY]
[w]
| Basic matrix multiplication works here.
$$ \begin{bmatrix} c_x \\ c_y \\ c_w \end{bmatrix} = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} $$
$$ c_x = a\, x + b\, y + c $$
$$ c_y = d\, x + e\, y + f$$
$$ c_w = g\, x + h\, y + i$$
with your new point at $(x,y) = (c_x/c_w, c_y/c_w)$
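A tiny NumPy sketch of exactly this computation (the function name is mine):

import numpy as np

def apply_affine(M, x, y):
    # Apply the 3x3 homogeneous matrix M to the point (x, y).
    cx, cy, cw = M @ np.array([x, y, 1.0])
    return cx / cw, cy / cw          # divide out the homogeneous coordinate

# e.g. a pure translation by (2, 3)
M = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
print(apply_affine(M, 1.0, 1.0))     # (3.0, 4.0)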
You will find examples here.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
two examples in analysis I want to ask for two examples in the following cases:
1) Given a bounded sequence $\{a_n\}$, $$\lim_{n\to \infty}{(a_{n+1}-a_n)}=0$$ but $\{a_n\}$ diverges.
2) A function $f(x)$ defined on the real line whose Taylor series converges at a point $x_0$ but does not equal $f(x_0)$.
Thanks for your help.
Edit
in 2), I was thinking of the Taylor series of the function $f$ at the point $x_0$.
| For 1):
$$
1,\frac12,0,\frac13,\frac23,1,\frac34,\frac24,\frac14,0,\frac15,\frac25,\frac35,\frac45,1,\frac56,\frac46,\frac36,\frac26,\frac16,0,\frac17,\dots
$$
I leave it to you to find an explicit formula for $a_n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
} |
How to prove that eigenvectors from different eigenvalues are linearly independent How can I prove that if I have $n$ eigenvectors from different eigenvalues, they are all linearly independent?
| Consider a matrix $A$ with two distinct eigenvalues $\lambda_1$ and $\lambda_2$. First note that they cannot share an eigenvector: if $Ax = \lambda_1x$ and $Ax = \lambda_2x$ for some non-zero vector $x$, then $(\lambda_1- \lambda_2)x=\bf{0}$. Since $\lambda_1$ and $\lambda_2$ are scalars, this can only happen if $x = \bf{0}$ (which is trivial) or $\lambda_1 =\lambda_2$.
Thus we can safely assume that, given two eigenvalue/eigenvector pairs $(\lambda_1, {\bf x_1})$ and $(\lambda_2 , {\bf x_2})$, there cannot exist another pair $(\lambda_3 , {\bf x_3})$ such that ${\bf x_3} = k{\bf x_1}$ or ${\bf x_3} = k{\bf x_2}$ for any scalar $k$. Now suppose ${\bf x_3} = k_1{\bf x_1}+k_2{\bf x_2}$ for some scalars $k_1,k_2$.
Now,
$$ A{\bf x_3}=\lambda_3{\bf x_3} \\ $$
$$ {\bf x_3} = k_1{\bf x_1} + k_2{\bf x_2} \:\:\: \dots(1)\\ $$
$$ \Rightarrow A{\bf x_3}=\lambda_3k_1{\bf x_1} + \lambda_3k_2{\bf x_2}\\$$
but $\:\: {\bf x_1}=\frac{1}{\lambda_1}A{\bf x_1}$ and ${\bf x_2}=\frac{1}{\lambda_2}A{\bf x_2}$ (assuming $\lambda_1,\lambda_2\neq 0$). Substituting in the above equation we get,
$$A{\bf x_3}=\frac{\lambda_3k_1}{\lambda_1}A{\bf x_1} + \frac{\lambda_3k_2}{\lambda_2}A{\bf x_2} \\$$
$$\Rightarrow {\bf x_3}=\frac{\lambda_3k_1}{\lambda_1}{\bf x_1} + \frac{\lambda_3k_2}{\lambda_2}{\bf x_2} \:\:\: \dots (2)$$
Comparing coefficients in equations $(1)$ and $(2)$, we get $\lambda_3 = \lambda_1$ and $\lambda_3 = \lambda_2$, which implies $\lambda_1 = \lambda_2 = \lambda_3$; but according to our assumption they were all distinct! (Contradiction.)
NOTE: This argument generalizes in exactly the same fashion to any number of eigenvectors. Also, it is clear that if ${\bf x_3}$ cannot be a linear combination of two vectors, then it cannot be a linear combination of any $n >2$ vectors (try to prove it!).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "114",
"answer_count": 8,
"answer_id": 7
} |
Recursion theory text, alternative to Soare I want/need to learn some recursion theory, roughly equivalent to parts A and B of Soare's text. This covers "basic graduate material", up to Post's problem, oracle constructions, and the finite injury priority method. Is there an alternate, more beginner friendly version of this material in another text? (At a roughly senior undergraduate or beginning graduate level of difficulty, although I wouldn't necessarily mind easier.)
For example, one text I've seen is S. Barry Cooper's book, Computability Theory. Is there anybody that has read both this and Soare, and could tell me about their similarities/differences?
| A popular introductory textbook that covers both computability and complexity is Michael Sipser's "Introduction to the Theory of Computation". The computability section concludes with a nice chapter on advanced computability issues.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 7,
"answer_id": 5
} |
Self-Contained Proof that $\sum\limits_{n=1}^{\infty} \frac1{n^p}$ Converges for $p > 1$ To prove the convergence of the p-series
$$\sum_{n=1}^{\infty} \frac1{n^p}$$
for $p > 1$, one typically appeals to either the Integral Test or the Cauchy Condensation Test.
I am wondering if there is a self-contained proof that this series converges which does not rely on either test.
I suspect that any proof would have to use the ideas behind one of these two tests.
| For every $p>1$, one can reduce the problem to the convergence of some geometric series. Then, either one takes the latter for granted or one proves it by an elementary argument. More details below.
Let $N(k)$ denote the integer part of $a^k$, where the real number $a>1$ is defined by $a^{p-1}=2$. The sum of $1/n^p$ over $n$ between $N(k)$ and $N(k+1)$ is at most the number of terms $N(k+1)-N(k)$ times the greatest term $1/N(k)^p$. This contribution behaves like
$$
(a^{k+1}-a^k)/a^{kp}\sim(a-1)a^{k(1-p)}=(a-1)2^{-k}.
$$
The geometric series of general term $2^{-k}$ converges hence the original series converges.
There is a variant of this, where one considers the slabs of integers bounded by the powers of $2$. The contribution of slab $k$ is then at most the number of terms $2^k$ times the greatest term $1/2^{kp}$. This reduces the problem to the convergence of the geometric series of general term $b^k$, where $b=2^{1-p}<1$.
Finally, as everybody knows, a simple proof of the convergence of the geometric series of general term $b^k$, for every $b$ in $(0,1)$ say, is to note that the partial sums $s_k$ are such that $s_0=1$ and $s_{k+1}=1+bs_k$, and to show by induction on $k$ that $s_k\le1/(1-b)$ for every $k\ge0$.
Edit: The variant above shows, using $b=2^{1-p}$, that the sum of the series is at most $1/(1-b)=2^p/(2^p-2)$. But the contribution of slab $k$ is also at least the number of terms $2^k$ times the smallest term $1/2^{(k+1)p}$, hence the sum of the series is at least $1/(2^p-2)$. Finally, the sum of the series is between $1/(2^p-2)$ and $2^p/(2^p-2)$.
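A quick numerical sanity check of those bounds in Python (the truncation at $10^6$ terms is my own choice and slightly undershoots the true sum):

def partial_sum(p, terms=10**6):
    return sum(1.0 / n**p for n in range(1, terms + 1))

for p in (1.5, 2.0, 3.0):
    lo, hi = 1 / (2**p - 2), 2**p / (2**p - 2)
    print(p, lo, partial_sum(p), hi)   # lo <= sum <= hi in each case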
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29450",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "184",
"answer_count": 9,
"answer_id": 7
} |
How to prove $u \in \operatorname{span}\{v_{1},\dots,v_{n}\}$ The vectors $v_{1},\dots,v_{n}$ form a linearly independent set.
Addition of vector $u$ makes this set linearly dependent.
How to prove that $u \in span\{v_{1}\ldots,v_{n}\}$.
I was able to prove the reverse.
| There are $a_0, \dots, a_n$ not all zero, such that
$$a_0u + a_1v_1 + \dots a_nv_n = 0.$$
If $a_0 = 0$, then $a_1v_1 + \dots + a_nv_n = 0$ with the $a_i$ not all zero, contradicting the linear independence of $v_1, \dots, v_n$.
So clearly $a_0 \neq 0$, and therefore we get
$$u = -\frac{a_1}{a_0}v_1 - \dots - \frac{a_n}{a_0}v_n $$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is the set of all finite sequences of letters of Latin alphabet countable/uncountable? How to prove either? Today in Coding/Cryptography class, we were talking about basic definitions, and the professor mentioned that for a set $A=\left \{ \left. a, b, \dots, z \right \} \right.$ (the alphabet) we can define a set $A^{*}=\left \{ \left. a, ksdjf, blocks, coffee, maskdj, \dots, asdlkajsdksjfs \right \} \right.$ (words) as a set that consists of all finite sequences of the elements/letters from our $A$/alphabet. My question is, is this set $A^{*}$ countably or uncountably infinite? Does it matter how many letters there are in your alphabet? If it was, say, $A=\left \{ \left. a \right \} \right.$, then the words in $A^{*}$ would be of form $a, aa, aaa, \dots$ which, I think, would allow a bijection $\mathbb{N} \to A^{*}$ where an integer would signify the number of a's in a word. Can something analogous be done with an alphabet that consists of 26 letters (Latin alphabet), or can countability/uncountability be proved otherwise? And as mentioned before, I am wondering if the number of elements in the alphabet matters, or if all it does is change the formula for a bijection.
P.S. Now that I think of it, maybe we could biject from $\underset{n}{\underbrace{\mathbb{N}\times\mathbb{N}\times\mathbb{N}\times\dots\times\mathbb{N}}}$ to some set of words $A^{*}$ whose alphabet $A$ has $n$ elements? Thanks!
| Suppose the 26 letters are "digits" in base-26. Any finite string of letters can be thought of as a unique positive integer in base-26. The positive integers are countable, so your set of strings must be as well.
(Actually, to be precise: letting the symbols in your set be non-zero digits establishes a bijection between your set and a subset of the natural numbers, so the cardinality of your set is less than or equal to that of the natural numbers; letting the symbols in your set be digits, including a "zero" digit, establishes a surjection from your set onto the natural numbers [leading zeros cause multiple digit strings to hit the same natural number], so the cardinality of your set is greater than or equal to that of the natural numbers; so since the cardinality of your set is ≥ and ≤ that of the natural numbers, they are equal in cardinality, so your set is countable.)
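To make this concrete, a small Python sketch of the bijective base-26 encoding (helper names are my own):

import string

ALPHABET = string.ascii_lowercase      # 26 "digits", with no zero digit

def encode(word):
    # Bijective base-26: map a nonempty word to a unique positive integer.
    n = 0
    for ch in word:
        n = n * 26 + (ALPHABET.index(ch) + 1)
    return n

def decode(n):
    chars = []
    while n > 0:
        n, r = divmod(n - 1, 26)
        chars.append(ALPHABET[r])
    return ''.join(reversed(chars))

assert decode(encode('coffee')) == 'coffee'   # round trip: the map is a bijection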
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
} |
Why is $\arctan(x)=x-x^3/3+x^5/5-x^7/7+\dots$? Why is $\arctan(x)=x-x^3/3+x^5/5-x^7/7+\dots$?
Can someone point me to a proof, or explain if it's a simple answer?
What I'm looking for is the point where it becomes understood that trigonometric functions and pi can be expressed as series. A lot of the information I find when looking for that seems to point back to arctan.
| Many of the functions you encounter on a regular basis are analytic functions, meaning that they can be written as a Taylor series or Taylor expansion. A Taylor series of $f(x)$ is an infinite series of the form $\sum_{i=0}^\infty a_ix^i$ which satisfies $f(x) = \sum_{i=0}^\infty a_ix^i$ wherever the series converges. Trigonometric functions are examples of analytic functions, and the series you are asking about is the Taylor series of $\operatorname{arctan}(x)$ about $0$ (the meaning of this is explained in the link). You can read more about Taylor series here.
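For this particular series, one standard route is the following sketch (valid for $|x|<1$; convergence at the endpoints needs a separate argument): since
$$\frac{d}{dx}\arctan(x)=\frac{1}{1+x^{2}}=\sum_{n=0}^{\infty}(-1)^{n}x^{2n},$$
integrating term by term from $0$ to $x$ gives
$$\arctan(x)=\sum_{n=0}^{\infty}\frac{(-1)^{n}x^{2n+1}}{2n+1}=x-\frac{x^{3}}{3}+\frac{x^{5}}{5}-\cdots,$$
and taking $x\to 1$ recovers the famous $\frac{\pi}{4}=1-\frac{1}{3}+\frac{1}{5}-\cdots$, one point where $\pi$ meets a series.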
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41",
"answer_count": 10,
"answer_id": 4
} |
Largest eigenvalue of a real symmetric matrix If $\lambda$ is the largest eigenvalue of a real symmetric $n \times n$ matrix $H$, how can I show that: $$\forall v \in \mathbb{R^n}, ||v||=1 \implies v^tHv\leq \lambda$$
Thank you.
| Step 1: All Real Symmetric Matrices can be diagonalized in the form:
$
H = Q\Lambda Q^T
$
So, $ {\bf v}^TH{\bf v} = {\bf v}^TQ\Lambda Q^T{\bf v} $
Step 2: Define transformed vector: $ {\bf y} = Q^T{\bf v} $.
So, $ {\bf v}^TH{\bf v} = {\bf y}^T\Lambda {\bf y} $
Step 3: Expand
$ {\bf y}^T\Lambda {\bf y} = \lambda_{\max}y_1^2 + \lambda_{2}y_2^2 + \cdots + \lambda_{\min}y_N^2 $
\begin{eqnarray}
\lambda_{\max}y_1^2 + \lambda_{2}y_2^2 + \cdots + \lambda_{\min}y_N^2& \le & \lambda_{\max}y_1^2 + \lambda_{\max}y_2^2 + \cdots + \lambda_{\max}y_N^2 \\
& & =\lambda_{\max}(y_1^2 +y_2^2 + \cdots y_N^2) \\
& & =\lambda_{\max} {\bf y}^T{\bf y} \\
\implies {\bf y}^T\Lambda {\bf y} & \le & \lambda_{\max} {\bf y}^T{\bf y}
\end{eqnarray}
Step 4: Since $Q^{-1} = Q^T$, we have $QQ^T = I$
\begin{eqnarray}
{\bf y}^T{\bf y} &= &{\bf v}^TQQ^T{\bf v} = {\bf v}^T{\bf v}
\end{eqnarray}
Step 5: Putting it all back together
\begin{eqnarray}
{\bf y}^T\Lambda {\bf y} & \le & \lambda_{\max} {\bf y}^T{\bf y} \\
{\bf v}^TH{\bf v} & \le & \lambda_{\max}{\bf v}^T{\bf v}
\end{eqnarray}
By definition, $ {\bf v}^T{\bf v} = \|{\bf v}\|^2 $ and by definition $\|{\bf v}\| = 1$
\begin{eqnarray}
{\bf v}^TH{\bf v} & \le & \lambda_{\max}
\end{eqnarray}
Boom!
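A quick numerical illustration of the inequality (a random instance; NumPy only, my own check):

import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(5, 5))
H = (M + M.T) / 2                       # a random real symmetric matrix
lam_max = np.linalg.eigvalsh(H).max()   # largest eigenvalue

v = rng.normal(size=5)
v /= np.linalg.norm(v)                  # unit vector
assert v @ H @ v <= lam_max + 1e-12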
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 1
} |
Entire one-to-one functions are linear Can we prove that every entire one-to-one function is linear?
| I'll give the "usual" proof.
Note that by Little Picard, $f$ misses at most one point; but it is a homeomorphism onto its image, and the plane minus a point is not simply connected. Thus $f$ is onto $\mathbb{C}$, and hence bijective. Then $f$ has a holomorphic inverse, which is enough to imply $f$ is proper, that is, the pre-image of a compact set is compact. This in turn implies
$$ \lim_{z\rightarrow\infty} f(z)=\infty,$$
and thus if we define $f(\infty)=\infty$, $f$ becomes a Möbius transformation of the Riemann sphere. So $f$ has the form
$f(z) = \frac{az+b}{cz+d},$
and it is easy to see that if $f$ is entire on $\mathbb{C}$, then $c=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "66",
"answer_count": 6,
"answer_id": 5
} |
Sum of random 0 and 1 Let $x_i$ be uniformly distributed random variables in the interval $[0,a]$, with $a>0$.
Let $f(x)$ be equal to $1$ if $x=0$, and $0$ otherwise.
Let $$S(a)=\sum_{n=1}^\infty f(x_n)$$
What is $S(1)$?
What is $\lim_{a\to 0^{+}} S(a)$?
| First note the following equality of events:
$$
\big \{ \sum\nolimits_{n = 1}^\infty {f(X_n )} > 0 \big \} = \cup _{n = 1}^\infty \{ f(X_n ) = 1\}.
$$
Now, for each $n$,
$$
{\rm P}[f(X_n) = 1] = {\rm P}[X_n = 0] = 0.
$$
Hence,
$$
{\rm P}\big[\sum\nolimits_{n = 1}^\infty {f(X_n )} > 0\big] = {\rm P}[ \cup _{n = 1}^\infty \{ f(X_n ) = 1\} ] \le \sum\nolimits_{n = 1}^\infty {{\rm P}[f(X_n ) = 1]} = 0.
$$
So, for any $a>0$, ${\rm P}[S(a)>0] = 0$, hence $S(a)=0$ with probability $1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Is there a generalisation of the distribution ratio From the theory of numbers we have the
Proposition:
If $\mathfrak{a}$ and $\mathfrak{b}$ are mutually prime, then the density of primes congruent to $\mathfrak{b}$ modulo $\mathfrak{a}$ in the set of all primes is the reciprocal of $\phi (\mathfrak{a})$ where $\phi$ denotes the Euler function.
And it can be shown that every polynomial with integer coefficients has infinitely many prime divisors, where by a prime divisor of a polynomial we understand a prime which divides the value of that polynomial at some integer $\mathfrak{n}$; it is natural then for one to ask whether or not there is a result similar to the density theorem above, i.e. the
Conjecture:
If $\mathfrak{f}$ belongs to $\mathbb{Z} [x]$, then the density of primes which divide some values of $\mathfrak{f}$ at integers in the set of all primes is uniquely determined by its coefficients somehow.
I haven't heard of any result like this, and it would be desirable to have some sources to search, if any exist; in any case, thanks for paying attention.
| Of course, your "conjecture" is trivially true, because the set of prime divisors of $f(n)$ as $n$ ranges over the integers is determined by $f$. I guess what you meant to ask was "does this set have a density and can it be computed".
The answer to both questions is yes, and the relevant result is called Chebotarev's density theorem. First, note that if $f=g\cdot h$, then the set of prime divisors of $f(n),\;n\in\mathbb{Z}$ is the union of the corresponding sets for $g$ and $h$. So we may without loss of generality consider irreducible $f$. For such an $f$ of degree $n$, let $G$ be the Galois group of its splitting field, thought of as a subgroup of $S_n$, and let $C$ be the subset of $G$ consisting of all fixed-point-free elements. Then the set of primes that divide some $f(n)$ has a natural density, and it is equal to $1-|C|/|G|$; the complementary set, of density $|C|/|G|$, is the set of primes modulo which $f$ does not attain a linear factor. Similar results hold for any given splitting pattern of $f$ modulo $p$, as the Wikipedia page explains.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Evaluating $\int P(\sin x, \cos x) \text{d}x$ Suppose $\displaystyle P(x,y)$ a polynomial in the variables $x,y$.
For example, $\displaystyle x^4$ or $\displaystyle x^3y^2 + 3xy + 1$.
Is there a general method which allows us to evaluate the indefinite integral
$$ \int P(\sin x, \cos x) \text{d} x$$
What about the case when $\displaystyle P(x,y)$ is a rational function (i.e. a ratio of two polynomials)?
Example of a rational function: $\displaystyle \frac{x^2y + y^3}{x+y}$.
This is being asked in an effort to cut down on duplicates, see here: Coping with *abstract* duplicate questions.
and here: List of Generalizations of Common Questions.
| Here are some other substitutions that you can try on a rational function of trigonometric functions; in France they are known as the Bioche rules.
Let $P(\sin t,\cos t)=f(t)$, where $P(x,y)$ is a rational function. Let $\omega(t)=f(t)dt$.
* If $\omega(-t)=\omega(t)$, then $u(t)=\cos t$ might be a good substitution.
For example: $$\int \frac{\sin^3t}{2+\cos t}\,dt=-\int \frac{(1-\cos^2t)(-\sin t)}{2+\cos t}\,dt=-\int\frac{1-u^2}{2+u}\,du=\int\frac{u^2-1}{2+u}\,du=\int \left(u-2+\frac{3}{u+2}\right)du=\frac{u^2}{2}-2u+3\ln|u+2|+C$$
* If $\omega(\pi-t)=\omega(t)$, then $u(t)=\sin t$ might be a good substitution.
For example: $$\int \frac{dt}{\cos t}=\int \frac{\cos t}{\cos^2 t}\,dt=\int \frac{\cos t}{1-\sin^2 t}\,dt=\int \frac{du}{1-u^2}=\frac{1}{2}\int\left(\frac{1}{1+u}+\frac{1}{1-u}\right)du=\frac{1}{2}\big(\ln|1+u|-\ln|1-u|\big)+C$$
* If $\omega(\pi+t)=\omega(t)$, then $u(t)=\tan t$ might be a good substitution.
For example: $$\int\frac{dt}{1+\cos^2 t}=\int \frac{1}{1+\frac{1}{\cos^2 t}}\,\frac{dt}{\cos^2 t}=\int \frac{1}{1+\frac{\cos^2t+\sin^2t}{\cos^2 t}}\,\frac{dt}{\cos^2 t}=\int \frac{1}{2+\tan^2t}\,\frac{dt}{\cos^2 t}=\int\frac{du}{2+u^2}=\frac{1}{\sqrt2}\arctan\frac{u}{\sqrt2}+C$$
*If two of the previous relations are verified (in this case the three relations are verified), then $u=\cos(2t)$ might be a good substitution.
If none of these work, you can use the Weierstrass substitution presented in a previous answer.
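For what it's worth, the first example can be checked mechanically; here is a small sympy sketch (assuming sympy's `integrate` handles the integrand, which it should for a rational function of $\sin$ and $\cos$):

```python
from sympy import symbols, sin, cos, integrate, simplify

t = symbols('t')
integrand = sin(t)**3 / (2 + cos(t))
F = integrate(integrand, t)
print(simplify(F.diff(t) - integrand))  # 0, so F is an antiderivative
```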
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "54",
"answer_count": 3,
"answer_id": 1
} |
Limits: How to evaluate $\lim\limits_{x\rightarrow \infty}\sqrt[n]{x^{n}+a_{n-1}x^{n-1}+\cdots+a_{0}}-x$ This is being asked in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions, and here: List of abstract duplicates.
What methods can be used to evaluate the limit $$\lim_{x\rightarrow\infty} \sqrt[n]{x^{n}+a_{n-1}x^{n-1}+\cdots+a_{0}}-x.$$
In other words, if I am given a polynomial $P(x)=x^n + a_{n-1}x^{n-1} +\cdots +a_1 x+ a_0$, how would I find $$\lim_{x\rightarrow\infty} P(x)^{1/n}-x.$$
For example, how would I evaluate limits such as $$\lim_{x\rightarrow\infty} \sqrt{x^2 +x+1}-x$$ or $$\lim_{x\rightarrow\infty} \sqrt[5]{x^5 +x^3 +99x+101}-x.$$
| Here is one method to evaluate
$$\lim_{x\rightarrow\infty}\sqrt[n]{x^{n}+a_{n-1}x^{n-1}+\cdots+a_{0}}-x.$$
Let $Q(x)=a_{n-1}x^{n-1}+\cdots+a_{0}$ for notational convenience, and notice $\frac{Q(x)}{x^{n-1}}\rightarrow a_{n-1}$ and $\frac{Q(x)}{x^{n}}\rightarrow0$ as $x\rightarrow\infty$. The crux is the factorization
$$y^{n}-z^{n}=(y-z)\left(y^{n-1}+y^{n-2}z+\cdots+yz^{n-2}+z^{n-1}\right).$$
Setting $y=\sqrt[n]{x^{n}+Q(x)}$ and $z=x$ we find
$$\left(\sqrt[n]{x^{n}+Q(x)}-x\right)=\frac{Q(x)}{\left(\left(\sqrt[n]{x^{n}+Q(x)}\right)^{n-1}+\left(\sqrt[n]{x^{n}+Q(x)}\right)^{n-2}x+\cdots+x^{n-1}\right)}.$$
Dividing both numerator and denominator by $x^{n-1}$ yields
$$\sqrt[n]{x^{n}+a_{n-1}x^{n-1}+\cdots+a_{0}}-x=\frac{Q(x)/x^{n-1}}{\left(\left(\sqrt[n]{1+\frac{Q(x)}{x^{n}}}\right)^{n-1}+\left(\sqrt[n]{1+\frac{Q(x)}{x^{n}}}\right)^{n-2}+\cdots+1\right)}.$$
As $x\rightarrow\infty$, $\frac{Q(x)}{x^{n}}\rightarrow0$ so that each term in the denominator converges to $1$. Since there are $n$ terms we find $$\lim_{x\rightarrow\infty}\left(\left(\sqrt[n]{1+\frac{Q(x)}{x^{n}}}\right)^{n-1}+\left(\sqrt[n]{1+\frac{Q(x)}{x^{n}}}\right)^{n-2}+\cdots+1\right)=n$$ by the addition formula for limits. As the numerator converges to $a_{n-1}$ we see by the quotient property of limits that $$\lim_{x\rightarrow\infty}\sqrt[n]{x^{n}+a_{n-1}x^{n-1}+\cdots+a_{0}}-x=\frac{a_{n-1}}{n}$$ and the proof is finished.
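A numeric sanity check of the result (standard library only; the sample points are arbitrary). The square-root example should approach $a_1/2=1/2$, while the quintic example should approach $a_4/5=0$, since its $x^4$ coefficient vanishes:

```python
for x in [1e2, 1e4, 1e6]:
    print((x**2 + x + 1) ** 0.5 - x,
          (x**5 + x**3 + 99 * x + 101) ** (1 / 5) - x)
```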
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "65",
"answer_count": 6,
"answer_id": 3
} |
Are the singular values of the transpose equal to those of the original matrix? It is well known that eigenvalues for real symmetric matrices are the same for matrices $A$ and its transpose $A^\dagger$.
This made me wonder:
Can I say the same about the singular values of a rectangular matrix? So basically, are the eigenvalues of $B B^\dagger$ the same as those of $B^\dagger B$?
| Both eigenvalues and singular values are invariant to matrix transpose no matter a matrix is square or rectangular.
The definition of eigenvalues of $A$ (must be square) is the $\lambda$ makes
$$\det(\lambda I-A)=0$$
For $A^T$, $\det(\lambda I-A^T)=0$ is equivalent to $\det(\lambda I-A)=0$ since the determinant is invariant under matrix transpose. However, transposing does change the eigenvectors.
It can also be demonstrated using Singular Value Decomposition. A matrix $A$ no matter square or rectangular can be decomposed as
$$A=U\Sigma V^T$$
Its transpose can be decomposed as $A^T=V \Sigma^T U^T$.
The transpose changes the singular vectors, but the singular values are preserved.
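A quick numerical confirmation with numpy (the random matrix and seed are my choices):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 5))
print(np.linalg.svd(B, compute_uv=False))    # singular values of B
print(np.linalg.svd(B.T, compute_uv=False))  # the same values for B^T
```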
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30072",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 1,
"answer_id": 0
} |
when does derivative of a function coincide with derivative of Fourier series? Example:
for the function $$f(x)=x^{3}(1-x)^{3}=\sum f_{s}\exp(2\pi isx)$$ the Fourier series of its fourth derivative differs from the term-by-term derivative of its Fourier series:
$$f^{(4)}(x)=-360x^{2}+360x-72=\sum g_{s}\exp(2\pi i s x)$$
with $g_{s}\neq(2\pi s)^{4}f_{s}$
Related question:
When is $$\sum_s\left|f_{s}\right|^{2}s^{2p}<\infty$$ equivalent to $f\in C^{(p)}[0,1]$? (for periodic $f$)
| An advertisement for the utility and aptness of Sobolev theory is the perfect connection between $L^2$ "growth conditions" on Fourier coefficients, and $L^2$ notions of differentiability, mediated by Sobolev's lemma that says ${1\over 2}+k+\epsilon$ $L^2$ differentiability of a function on the circle implies $C^k$-ness. Yes, there is a "loss". However, the basis of this computation is very robust, and generalizes to many other interesting situations.
That is, rather than asking directly for a comparison of $C^k$ properties and convergence of Fourier series (etc.), I'd recommend seizing $L^2$ convergence, extending this to Sobolev theory for both differentiable and not-so-differentiable functions, and to many distributions (at least compactly-supported), and only returning to the "classical" notions of differentiability when strictly necessary.
I know this is a bit avant-garde, but all my experience recommends it. A supposedly readable account of the issue in the simplest possible case, the circle, is at functions on circles .
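To make the question's example concrete, here is a numpy sketch (the grid size and the claim in the comments are mine) comparing $(2\pi s)^4 f_s$ with the coefficients $g_s$ of the pointwise fourth derivative; they disagree because $f'''$ jumps by $12$ at the integers, so the distributional fourth derivative picks up delta terms:

```python
import numpy as np

N = 4096
x = np.arange(N) / N
f = x**3 * (1 - x)**3
g = -360 * x**2 + 360 * x - 72  # the pointwise fourth derivative

fs = np.fft.fft(f) / N  # approximates f_s for small s
gs = np.fft.fft(g) / N

for s in [1, 2, 3]:
    print(s, (2 * np.pi * s)**4 * fs[s].real, gs[s].real)
    # the columns differ by 12, the size of the jump of f''' at integers
```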
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Show $\Sigma_{n\leq X} 1/\phi(n) \sim \log(X)*\Sigma_{k=1}^\infty \mu(k)^2/(k*\phi(k))$ I would like to show that as X approaches infinity,
$$\sum_{n\leq X} \frac{1}{\phi(n)} \sim \log(X)\cdot\sum_{k=1}^{\infty} \frac{\mu(k)^2}{k\cdot\phi(k)}.$$
I have already proven
$$\sum_{n\leq X} \frac{1}{\phi(n)} = \sum_{k\leq X} \left[\frac{\mu(k)^2}{k\cdot\phi(k)}\cdot\sum_{t\leq X/k} \frac1t\right].$$
I recognize that $\sum_{t\leq X/k} 1/t$ is the $\lfloor X/k\rfloor$-th harmonic number and that harmonic numbers can be approximated by $\ln(X/k)$.
Wolfram Alpha says that
$$\sum_{k=1}^{\infty} \frac{\mu(k)^2}{k\cdot\phi(k)} = \frac{315\zeta(3)}{2\pi^4}.$$
I have a feeling an epsilon-delta proof might be appropriate here, but I'm not sure where to proceed.
| Found an answer. $\sum_{t\leq X/k} 1/t = \log(X/k) + \gamma + O(k/X)$. When you multiply it out, the sum can be split into four terms:
$$ \sum_{n\leq X} \frac{1}{\phi(n)} = \log(X)\sum_{k\leq X} \frac{\mu(k)^2}{k\,\phi(k)} - \sum_{k\leq X} \frac{\mu(k)^2\log(k)}{k\,\phi(k)} $$
$$ + \gamma\sum_{k\leq X} \frac{\mu(k)^2}{k\,\phi(k)} + \sum_{k\leq X} \frac{\mu(k)^2}{k\,\phi(k)}\,O\!\left(\frac{k}{X}\right) $$
I had also previously shown that $\sum_{k\leq X} \frac{\mu(k)^2}{k\,\phi(k)}$ and $\sum_{k\leq X} \frac{\mu(k)^2\log(k)}{k\,\phi(k)}$ converge, so as $X$ approaches infinity the second and third terms tend to constants and the fourth term tends to $0$, leaving the first term dominant. Thus we can conclude that
$$ \lim_{X\to \infty} \frac{\sum_{n\leq X} 1/\phi(n)}{\log(X)\sum_{k=1}^{\infty} \mu(k)^2/(k\,\phi(k))} = 1 $$
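An empirical check of the final asymptotic (a sketch; it uses sympy for $\phi$ and $\zeta(3)$, and the cutoff $X$ is arbitrary):

```python
from math import log, pi
from sympy import totient, zeta

X = 20000
s = sum(1 / int(totient(n)) for n in range(1, X + 1))
A = 315 * float(zeta(3)) / (2 * pi**4)
print(s / (A * log(X)))  # slowly approaches 1; s - A*log(X) tends to a constant
```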
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
problem in set theory QN1. By a logical argument, verify that $\{a\}$ is not open for any real number $a$.
I guess the set is not open since there is no open interval about $a$ contained in it;
instead the set is closed, since its complement is a union of open intervals and hence open.
Am I correct?
QN2. Give two counter-examples showing that the image of the intersection of two sets is not always equal to the intersection of their images.
| Hint for Q2:
Must the image of a point in the intersection of two sets be in the intersection of the images of the two sets?
Must a point in the intersection of the images of two sets be the image of a point in the intersection of the two sets?
If the answers to these questions are different, then can you construct a counter-example?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Finding invertible polynomials in polynomial ring $\mathbb{Z}_{n}[x]$
Is there a method to find the units in $\mathbb{Z}_{n}[x]$?
For instance, take $\mathbb{Z}_{4}$. How do we find all invertible polynomials in $\mathbb{Z}_{4}[x]$? Clearly $2x+1$ is one. What about the others? Is there any method?
| HINT $\rm\ r_0 + r_1\ x +\:\cdots\: + r_n\ x^n\ $ is a unit in $\rm\:R[x]\:\ \iff\ r_0\:$ is a unit and $\rm\: r_i\: $ is nilpotent for $\rm\ i\ge 1\:.$
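A concrete illustration of the criterion in $\mathbb{Z}_4[x]$ (a Python sketch; polynomials are coefficient lists mod $4$, lowest degree first): the constant term $1$ is a unit, the coefficient $2$ of $x$ is nilpotent, and indeed $(1+2x)^2=1$.

```python
def polymul(f, g, n=4):
    # multiply two polynomials with coefficients in Z_n
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % n
    return h

print(polymul([1, 2], [1, 2]))  # [1, 0, 0], i.e. (1 + 2x)^2 = 1 in Z_4[x]
```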
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
An 'opposite' or 'dual' group? Let $(G, \cdot)$ be a group. Define $(G, *)$ as a group with the same underlying set and an operation $$a * b := b \cdot a.$$ What do you call such a group? What is the usual notation for it? I tried searching for 'dual group' and 'opposite group' with no results. It seems that this group is all you need to dispose of having to explicitly define right action, but there is no mention of such a group in wiki's 'Group action'. Am I mistaken and it's not actually a group?
| A group is a groupoid (meaning a category all of whose arrows are invertible) with one object. If $\star$ denotes the object, group elements correspond to arrows $\star\to \star$, and multiplication $gh$ corresponds to composition $\star\xrightarrow{h} \star\xrightarrow{g} \star$. The opposite category of this category is the opposite group of this group: domain and codomain are reversed, so the old $gh$ is the new $hg$.
Edit: Qiaochu Yuan was faster than me.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30480",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Continuous Collatz Conjecture Has anyone studied the real function
$$ f(x) = \frac{ 2 + 7x - ( 2 + 5x )\cos{\pi x}}{4}$$ (and $f(f(x))$ and $f(f(f(x)))$ and so
on) with respect to the Collatz conjecture?
It does what Collatz does on integers, and is defined smoothly on all
the reals.
I looked at $$\frac{ \overbrace{ f(f(\cdots(f(x)))) }^{\text{$n$ times}} }{x}$$ briefly, and it appears to have bounds independent of $n$.
Of course, the function is very wiggly, so Mathematica's graph is
probably not $100\%$ accurate.
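For reference, a quick check (standard library; floating point, so expect tiny rounding) that $f$ reproduces the Collatz map on integers:

```python
from math import cos, pi

def f(x):
    return (2 + 7 * x - (2 + 5 * x) * cos(pi * x)) / 4

for n in range(1, 8):
    print(n, f(n))  # n/2 for even n, 3n + 1 for odd n
```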
| Yes people have studied that, as well as extending it to the complex plane, e.g.:
Chamberland, Marc (1996). "A continuous extension of the 3x + 1 problem to the real line". Dynam. Contin. Discrete Impuls Systems. 2 (4): 495–509.
Letherman, Simon; Schleicher, Dierk; Wood, Reg (1999). "The (3n + 1)-Problem and Holomorphic Dynamics". Experimental Mathematics. 8 (3): 241–252.
Cached version at CiteSeerX
You'll also find something about that in the corresponding wikipedia article, including the Collatz fractal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 3,
"answer_id": 1
} |
Complicated Logic Proof involving Tautology and Law of Excluded Middle I'm having great difficulty solving the following problem, and even figuring out where to start with the proof.
$$
\neg A\lor\neg(\neg B\land(\neg A\lor B))
$$
Please see the following examples of how to do proofs; I would appreciate guidance using the same tools and the same line-numbering conventions as in those examples.
(Two sample proofs, the second illustrating the law of excluded middle, were attached as images in the original post.)
| The "complicated" formula :
$¬A∨¬(¬B∧(¬A∨B))$
can be re-written, due to the equivalence between $P \rightarrow Q$ and $\lnot P \lor Q$, as :
$A \rightarrow \lnot ((A \rightarrow B) \land \lnot B)$.
But $P \rightarrow Q$ is also equivalent to $\lnot (P \land \lnot Q)$; so the formula is simply :
$A \rightarrow ((A \rightarrow B) \rightarrow B)$.
Now, a Natural Deduction proof is quite easy :
(1) $A$ - assumed
(2) $A \rightarrow B$ --- assumed
(3) $B$ --- from (1) and (2) by $\rightarrow$-elimination
(4) $(A \rightarrow B) \rightarrow B$ --- from (2) and (3) by $\rightarrow$-introduction
(5) $A \rightarrow ((A \rightarrow B) \rightarrow B)$ --- from (1) and (4) by $\rightarrow$-introduction.
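Independently of the deduction, a brute-force truth table (standard library) confirms the formula is a tautology:

```python
from itertools import product

for A, B in product([False, True], repeat=2):
    value = (not A) or not ((not B) and ((not A) or B))
    print(A, B, value)  # True in all four rows
```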
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
How does a region of integration change when making a rotation change of variables? Suppose I have a p-dimensional integral:
$$\int_{0}^{\infty}\int_{0}^{\infty}\dots \int_{0}^{\infty}f(x_1,x_2,\dots,x_p)dx_1dx_2\dots dx_p$$
And then I make a rotation + translation transform:
$$W=A^{T}(X-b)$$
Question: How will the region of integration $X>0$ change in the $W$ space?
Can assume $A$ is a matrix of eigenvectors of a real symmetric positive definite matrix if this makes the answer easier.
| Maybe you can put
$$\int_{0}^{\infty}\int_{0}^{\infty}\dots \int_{0}^{\infty}f(x_1,x_2,\dots,x_p)dx_1dx_2\dots dx_p$$
$$ = \frac{1}{2^p} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\dots \int_{-\infty}^{\infty}f(|x_1|,|x_2|,\dots,|x_p|)dx_1dx_2\dots dx_p.$$
Then, if the $f$ functions are even, you can rotate away.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30645",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Find the missing term The sequence $10000, 121, 100, 31, 24, n, 20$ represents a number $x$ with respect to different bases. What is the missing number, $n$?
This is from my elementary computer aptitude paper. Is there any way to solve this quickly?
| If the base in the last term is $b$, the number is $2b$. The missing term is then in base $b-1$. Expressed in base $b-1$ the integer is $2(b-1)+2=22$ The third to last term shows that $b-2 \ge 5$ so we are safe from a carry.
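A brute-force confirmation (standard library; the helper `to_base` is mine): the sequence is $16$ written in bases $2$ through $8$, so the missing base-$7$ term is $22$.

```python
def to_base(x, b):
    digits = []
    while x:
        digits.append(x % b)
        x //= b
    return ''.join(map(str, digits[::-1])) or '0'

print([to_base(16, b) for b in range(2, 9)])
# ['10000', '121', '100', '31', '24', '22', '20']
```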
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
How can I evaluate $\sum_{n=0}^\infty(n+1)x^n$? How can I evaluate
$$\sum_{n=1}^\infty\frac{2n}{3^{n+1}}$$?
I know the answer thanks to Wolfram Alpha, but I'm more concerned with how I can derive that answer. It cites tests to prove that it is convergent, but my class has never learned these before. So I feel that there must be a simpler method.
In general, how can I evaluate $$\sum_{n=0}^\infty (n+1)x^n?$$
| Note that $n+1$ is the number ways to choose $n$ items of $2$ types (repetitions allowed but order is ignored), so that $n+1=\left(\!\binom2n\!\right)=(-1)^n\binom{-2}n$. (This uses the notation $\left(\!\binom mn\!\right)$ for the number of ways to choose $n$ items of $m$ types with repetition, a number equal to $\binom{m+n-1}n=(-1)^n\binom{-m}n$ by the usual definiton of binomial coefficients with general upper index.) Now recognise the binomial formula for exponent $-2$ in
$$
\sum_{n\geq0}(n+1)x^n=\sum_{n\geq0}(-1)^n\tbinom{-2}nx^n
=\sum_{n\geq0}\tbinom{-2}n(-x)^n=(1-x)^{-2}.
$$
This is valid as formal power series in$~x$, and also gives an identity for convergent power series whenever $|x|<1$.
There is a nice graphic way to understand this identity. The terms of the square of the formal power series $\frac1{1-x}=\sum_{i\geq0}x^i$ can be arranged into an infinite matrix, with at position $(i,j)$ (with $i,j\geq0$) the term$~x^{i+j}$ . Now for given $n$ the terms $x^n$ occur on the $n+1$ positions with $i+j=n$ (an anti-diagonal) and grouping like terms results in the series $\sum_{n\geq0}(n+1)x^n$.
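A numeric check of both series (standard library; truncation at 200 terms is arbitrary):

```python
print(sum(2 * n / 3**(n + 1) for n in range(1, 200)))   # 0.5
print(sum((n + 1) * (1 / 3)**n for n in range(200)),
      1 / (1 - 1 / 3)**2)                               # both 2.25
```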
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "438",
"answer_count": 23,
"answer_id": 6
} |
Prove that $a_{n}=0$ for all $n$, if $\sum a_{kn}=0$ for all $k\geq 1$ Let $\sum a_{n}$ be an absolutely convergent series such that $$\sum a_{kn}=0$$ for all $k\geq 1$. Help me prove that $a_{n}=0$ for all $n$.
Thank you!
| Let $A(p_1, \dots, p_m)$ be the set of all positive integers that are divisible by at least one of the first $m$ primes $p_1, \dots, p_m$.
I claim that
$$S(m) = \sum_{n \in N \setminus A(p_1, \dots, p_m)} x_n = 0.$$
If this is true, then
$$S(m) = x_1 + \sum_{n \in B} x_n = 0$$
where $B \subseteq \{p_{m+1}, p_{m+1}+1, \dots\}$.
That is,
$$|x_1| = \left| \sum_{n \in B} x_n \right| \leq \sum_{n \geq p_{m+1}}|x_n|.$$
Because $\sum |x_n|$ converges, if we let $m \to \infty$, we get $x_1=0$.
Next we realize that for fixed $k$ the sequence $x_{kn}$ satisfies the same conditions as $x_n$. Hence the same reasoning can be applied as above to conclude that $x_{1 \cdot k} = x_k = 0$ for every $k$, so the entire sequence is identically zero.
It remains to show
$$S(m) = \sum_{n \in N \setminus A(p_1, \dots, p_m)} x_n = 0.$$
Because $\sum_n x_n = 0$, this is equivalent to showing
$$\sum_{n \in A(p_1, \dots, p_m)} x_n = 0.$$
Using the inclusion–exclusion formula,
$$\sum_{n \in A(p_1, \dots, p_m)} x_n = \sum_{i=1}^m \left( (-1)^{i-1} \sum_{\substack{I \subseteq \{p_1, \dots, p_m\} \\ |I|=i}} \ \sum_{n \in A(I)} x_n \right),$$
where $A(I)$ denotes the set of positive multiples of $k=\prod_{p \in I} p$. Each innermost sum is thus of the form $\sum_{n\ge1} x_{nk}$, which is zero by assumption. Hence the entire expression is zero. This proves the claim about $S(m)$, which then yields the initial claim as above.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 4,
"answer_id": 3
} |
Show that $\mathbb{R}^2$ is not homeomorphic to $\mathbb{R}^2 \setminus\{(0,0)\}$ Show that $\mathbb{R}^2$ is not homeomorphic to $\mathbb{R}^2 \setminus \{(0,0)\} $.
| Here's a slightly different way of looking at it that avoids fundamental groups (although has its own messy details to check). One of the spaces, upon removing a compact set, can be separated into two connected components with noncompact closure. The other can't.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
} |
All natural solutions of $2x^2-1=y^{15}$ How can I find all positive integers $x$ and $y$ such that $2x^2-1=y^{15}$?
PS. See here.
| I was working on Byron's suggestion before he made it but it took me a while because I'm not a real number theorist. And, I used Sage. I consider $2y^2 = x^3 + 1$ and I want to put this in a form where the coefficient of $y^2$ and $x^3$ are 1 so I multiply both sides by 1/2 first and then use the transform $(x,y) \mapsto (X/2,Y/4)$ to get $Y^2 = X^3 + 8$. Now, Sage can find all integral points and they are $(-2, 0)$, $(1, \pm3)$, $(2, \pm4)$, and $(46, \pm312)$. This is good enough because note that any integer solution to the original $2y^2 = x^3 + 1$ will be mapped to an integer solution here. So, now you just need to consider these 7 solutions. Our map is clearly invertible by $(X, Y) \mapsto (2x, 4y)$, so we have a one-to-one correspondence between points and everything makes sense. All we need to do is map those 7 points backward to see what they were on $2y^2 = x^3 + 1$. They were $(-1, 0)$, $(1/2, \pm 3/4)$, $(1, \pm1)$, and $(23, \pm78)$. Thus, we have all integer solutions to $2y^2 = x^3 + 1$, 3 are obvious and the other two, $(23, \pm78)$ can not correspond to solutions of $2y^2 = x^{15} + 1$ as Byron explained.
By the way, the elliptic curve has rank 1 which means it has an infinite number of solutions over the rationals. But, whether or not any of these correspond to nontrivial solutions of the original $2y^2 = x^{15} + 1$, I have no idea.
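For readers without Sage, a brute-force scan (standard library; the search window is arbitrary) recovers the same small integral points on $Y^2=X^3+8$:

```python
from math import isqrt

for X in range(-2, 1001):
    t = X**3 + 8
    if isqrt(t)**2 == t:
        print(X, isqrt(t))  # (-2, 0), (1, 3), (2, 4), (46, 312)
```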
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
How to compare randomness of two sets of data? Given two sets of random numbers, is it possible to say that one set of random numbers has a greater degree of randomness when compared to the other?
Or one set of numbers is more random when compared to the other?
EDIT:
Consider this situation:
A hacker needs to know the target address where a heap/library/base of the executable is located. Once he knows the address he can take advantage of it and compromise the system.
Previously, the location was fixed across all computers and so it was easy for the hackers to hack the computer.
There are two software S1 and S2. S1 generates a random number where the heap/library/base of the executable is located. So now, it is difficult for the hacker to predict the location.
Between S1 and S2, both of which have random number generators, which one is better? Can we compare based on the random numbers generated by each software?
| There are randomness tests. Some tests are powerful enough that they will distinguish a human-generated sequence of 100 heads and tails from 100 tosses of a coin with high probability. For example, the distribution of streaks tends to change radically if you reverse every other coin in the human-generated sequence, while it stays the same for the coin flips. Some default random number generators in compilers will pass simple tests for randomness while failing more subtle tests.
There are other possible (but I think less likely) interpretations of your question. If you meant something else, such as the deviations of a random variable from the mean, please clarify.
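A toy illustration of a streak-based test (standard library; not a real randomness battery, just a sketch): simulated coin flips typically contain longer runs than a "human-like" regular sequence.

```python
import random
from itertools import groupby

def longest_run(seq):
    return max(len(list(g)) for _, g in groupby(seq))

random.seed(0)
coin = [random.randint(0, 1) for _ in range(100)]
human = [i // 2 % 2 for i in range(100)]  # 0,0,1,1,0,0,... short runs only
print(longest_run(coin), longest_run(human))
```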
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Is a graph simple, given the number of vertices and the degree sequence? Does there exist a simple graph with five vertices of the following degrees?
(a) 3,3,3,3,2
I know that the answer is no, however I do not know how to explain this.
(b) 1,2,3,4,3
No, as the sum of the degrees of an undirected graph is even.
(c) 1,2,3,4,4
Again, I believe the answer is no however I don't know the rule to explain why.
(d) 2,2,2,1,1
Same as above.
What method should I use to work out whether a graph is simple, given the number of vertices and the degree sequence?
| The answer to both (a) and (d) is that such graphs do in fact exist. It is not hard to find them.
The answer for (c) is that there cannot be such a graph: since there are two vertices with degree 4, each must be connected to all other vertices, so every other vertex has degree at least 2. Therefore the vertex of degree one is an impossibility.
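For completeness, the Erdős–Gallai criterion decides all four cases mechanically (a Python sketch of the standard test, which is not spelled out in the answer above):

```python
def erdos_gallai(deg):
    # a sequence is the degree sequence of a simple graph iff the sum is
    # even and the Erdos-Gallai inequalities hold for every k
    deg = sorted(deg, reverse=True)
    if sum(deg) % 2:
        return False
    n = len(deg)
    for k in range(1, n + 1):
        lhs = sum(deg[:k])
        rhs = k * (k - 1) + sum(min(d, k) for d in deg[k:])
        if lhs > rhs:
            return False
    return True

for seq in [[3,3,3,3,2], [1,2,3,4,3], [1,2,3,4,4], [2,2,2,1,1]]:
    print(seq, erdos_gallai(seq))  # True, False, False, True
```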
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 0
} |
Tips for understanding the unit circle I am having trouble grasping some of the concepts regarding the unit circle. I think I have the basics down but I do not have an intuitive sense of what is going on. Is memorizing the radian measurements and their corresponding points the only way to master this? What are some ways one can memorize the unit circle?
Edit for clarification: I think my trouble arises by the use of $\pi$ instead of degrees. We started graphing today and the use of numbers on the number line were again being referred to with $\pi$. Why is this?
| It is probably useful to memorize a table like this:
\begin{align}
\theta & & \sin\theta & & \cos \theta
\\
0 & & \frac{\sqrt{0}}{2} = 0 & & \frac{\sqrt{4}}{2} = 1
\\
\frac{\pi}{6} & & \frac{\sqrt{1}}{2} = \frac{1}{2} & & \frac{\sqrt{3}}{2}
\\
\frac{\pi}{4} & & \frac{\sqrt{2}}{2} & & \frac{\sqrt{2}}{2}
\\
\frac{\pi}{3} & & \frac{\sqrt{3}}{2} & & \frac{\sqrt{1}}{2} = \frac{1}{2}
\\
\frac{\pi}{2} & & \frac{\sqrt{4}}{2} = 1 & & \frac{\sqrt{0}}{2} = 0
\end{align}
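A quick numeric confirmation of the $\sqrt{k}/2$ pattern for $\sin$ (standard library):

```python
from math import sin, sqrt, pi

for k, theta in enumerate([0, pi / 6, pi / 4, pi / 3, pi / 2]):
    print(round(sin(theta), 6), round(sqrt(k) / 2, 6))  # columns agree
```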
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 7,
"answer_id": 3
} |
How to calculate Jacobi Symbol $\left(\dfrac{27}{101}\right)$? How to calculate Jacobi Symbol $\left(\dfrac{27}{101}\right)$?
The book solution
$$\left(\dfrac{27}{101}\right) = \left(\dfrac{3}{101}\right)^3 = \left(\dfrac{101}{3}\right)^3 = (-1)^3 = -1$$
My solution
$$\left(\dfrac{27}{101}\right) = \left(\dfrac{101}{27}\right) = \left(\dfrac{20}{27}\right) = \left(\dfrac{2^2}{27}\right) \cdot \left(\dfrac{5}{27}\right)$$
$$= (-1) \cdot \left(\dfrac{27}{5}\right) = (-1) \cdot \left(\dfrac{2}{5}\right) = (-1) \cdot (-1) = 1.$$
Whenever I encounter $\left(\dfrac{2^b}{p}\right)$, I use the formula
$$(-1)^{\frac{p^2 - 1}{8}}$$
I guess mine was wrong, but I couldn't figure out where? Any idea?
Thank you,
| I think it's better to first make sure that the number in the lower position is a prime, since there are examples, if I remember rightly, where the Jacobi symbol is $1$ but the corresponding quadratic congruence is not solvable. In addition, as already mentioned, you cannot say that $\left(\dfrac{2^b}{p}\right) = (-1)^{(p^2 -1)/8}$: that formula applies only to $\left(\dfrac{2}{p}\right)$, and $\left(\dfrac{2^2}{27}\right)$, being the symbol of a perfect square, equals $+1$. This kind of mistake can be avoided by knowing the quadratic reciprocity law well. Thanks.
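The two key facts can be checked directly with sympy (a sketch; `jacobi_symbol` lives in `sympy.ntheory`): the book's value is $-1$, and the square factor contributes $\left(\frac{4}{27}\right)=+1$, not $-1$.

```python
from sympy.ntheory import jacobi_symbol

print(jacobi_symbol(27, 101))  # -1
print(jacobi_symbol(4, 27))    # +1: (2^2/27) is a square, never -1
```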
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
How to prove a number $a$ is quadratic residue modulo $n$? In general, to show that $a$ is quadratic residue modulo $n$? What do I have to show? I'm always struggling with proving a number $a$ is quadratic residue or non-quadratic residue.
For example,
If $n = 2^{\alpha}m$, where $m$ is odd, and $(a, n) = 1$.
Prove that $a$ is quadratic residue modulo $n$ iff the following are satisfied:
If $\alpha = 2$ then $a \equiv 1 \pmod{4}$.
I just want to know what do I need to show in general, because I want to solve this problem on my own. Any suggestion would be greatly appreciated.
Thank you.
| The correct statement is as below. Note that the special case you mention follows from the fact that $\rm\ a = b^2\ (mod\ 4\:m)\ \Rightarrow\ a = b^2\ (mod\ 4)\:,\:$ but $1$ is the only odd square $\rm\:(mod\ 4)\:,\ $ so $\rm\ a\equiv 1\ (mod\ 4)\:$
THEOREM $\ $ Let $\rm\ a,\:n\:$ be integers, with $\rm\:a\:$ coprime to $\rm\:n\ =\ 2^e \:p_1^{e_1}\cdots p_k^{e_k}\:,\ \ p_i\:$ primes.
$\rm\quad\quad \ x^2\ =\ a\ \ (mod\ n)\ $ is solvable for $\rm\:x\:$
$\rm\quad\quad \: \iff\ \ \: a^{(p_i\ -\ 1)/2} \ \ \equiv\ \ 1\ \ (mod\ p_i)\quad\quad\ \ $ for all $\rm\ i\le k$
$\quad\quad\ $ and $\rm\quad\ \ e>1 \:\Rightarrow\: a\equiv 1\ \ (mod\ 2^{2+\delta}\:),\ \ \ \delta = 1\ \ if\ \ e\ge 3\ \ else\ \ \delta = 0$
Proof: See Ireland and Rosen, A Classical Introduction to Modern Number Theory, Proposition 5.1.1 p.50.
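A brute-force check of the $\alpha=2$ case (standard library; the modulus $n=36$ is an arbitrary example of the form $4m$ with $m$ odd):

```python
from math import gcd

def is_qr(a, n):
    return any(pow(x, 2, n) == a % n for x in range(n))

n = 4 * 9
qrs = [a for a in range(n) if gcd(a, n) == 1 and is_qr(a, n)]
print(sorted({a % 4 for a in qrs}))  # [1]: every such QR is 1 mod 4
```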
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Find equation of quadratic when given tangents? I know the equations of 4 lines which are tangents to a quadratic:
$y=2x-10$
$y=x-4$
$y=-x-4$
$y=-2x-10$
If I know that all of these equations are tangents, how do I find the equation of the quadratic?
Normally I would be told where the tangents touch the curve, but that info isn't given.
Thanks!
| As the four lines are symmetric under $x\mapsto -x$, the quadratic has no linear term in $x$. So I would put $y^2=ax^2+b$ (any linear term in $y$ can be absorbed into a vertical shift) or $y=ax^2+b$ to get the parabolas. Then calculate what $a$ and $b$ need to be to make them tangent. Because we incorporated the symmetry, you only have two tangent lines to impose, which gives two equations for $a, b$.
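A sympy sketch carrying this out for the parabola case (the variable names are mine; a zero discriminant encodes tangency, and symmetry takes care of the mirrored pair of lines):

```python
from sympy import symbols, solve, discriminant

x, a, b = symbols('x a b')
eqs = [discriminant(a * x**2 + b - (2 * x - 10), x),
       discriminant(a * x**2 + b - (x - 4), x)]
print(solve(eqs, [a, b]))  # should report a = 1/8, b = -2, i.e. y = x^2/8 - 2
```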
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Invariant Subspace of Two Linear Involutions I'd love some help with this practice qualifier problem:
If $A$ and $B$ are two linear operators on a finite dimensional complex vector space $V$ such that $A^2=B^2=I$ then show that $V$ has a one or two dimensional subspace invariant under $A$ and $B$.
Thanks!
| Arturo's answer can be condensed to the following:
Let $U_1$, $\ldots$, $U_4$ be the eigenspaces of $A$ and $B$. Leaving the simple cases aside, we may assume that $U_i\oplus U_j=V$ for all $i\ne j$. We have to produce four nonzero vectors $x_i\in U_i$ that lie in a two-dimensional plane.
For $i\ne j$ denote by $P_{ij}:V\to V$ the projection along $U_i$ onto $U_j$; whence $P_{ij}+P_{ji}=I$. The map $T:=P_{41}\circ P_{32}$ maps $V$ to $U_1$, so it leaves $U_1$ invariant. It follows that $T$ has an eigenvector $x_1\in U_1$ with an eigenvalue $\lambda\in{\mathbb C}$. Now put $$\eqalign{x_2&:=P_{32}x_1\in U_2\ ,\cr x_3&:=x_1-x_2=P_{23} x_1\in U_3\ ,\cr x_4&:=x_2-\lambda x_1=x_2- P_{41}P_{32}x_1=x_2-P_{41}x_2=P_{14}x_2\in U_4\ .\cr}$$
It is easily checked that all four $x_i$ are nonzero.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
Finding best response function with probabilities (BR) given a normal-matrix representation of the game We are given players 1, 2 and their respective strategies (U, M, D for player 1, L, C, R for player 2) and the corresponding pay-offs through the following table:
$\begin{matrix}
1|2 & L & C & R\\
U & 10, 0 & 0, 10 & 3, 3 \\
M & 2,10 & 10, 2 & 6, 4\\
D & 3, 3 & 4, 6 & 6, 6
\end{matrix}$
Player 1 holds a belief that player 2 will play each of his/her strategies with frequency $\frac{1}{3}$, in other words $\alpha_2$=($\frac{1}{3}, \frac{1}{3}, \frac{1}{3}$). Given this, I need to find the best response $BR_1(\alpha_2)$ for player 1. I am wondering how to do this mathematically. I have an intuitive way, and am not sure if it is correct. I think that if player 2 chooses $L$, player 1 is better off choosing $U$, and if player 2 chooses $C$ or $R$ player 1 is better off choosing $M$, so the best response for player 1 given his/her beliefs about player 2 would be $(\frac{1}{3}, \frac{2}{3}, 0)$, but I do not know if this is correct and how to get an answer mathematically (though I think it could involve derivatives which I would have to take to see what probability value would maximize the pay-off, just can't think of a function).
| I think Carl already gave the right answer. Even though mixed strategies may look better than pure ones, actually they are not.
Suppose that player 1 choose a mixed strategy $\alpha_1=(a,b,c)$. Then the probability of each scenario is given by
$$\begin{matrix}
1|2 & L & C & R\\
U & a/3 & a/3 & a/3 \\
M & b/3 & b/3 & b/3 \\
D & c/3 & c/3 & c/3
\end{matrix}
$$
If you consider the payoff of player 1 in each of the 9 possible scenarios and compute the expected payoff, that will be $13a/3+18b/3+13c/3$. The maximum is achieved at $(a,b,c)=(0,1,0)$ which is $18/3$. Of course, the best response for player 1 is a pure strategy --- always choosing $M$.
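The computation in one line each (standard library):

```python
payoffs = {'U': [10, 0, 3], 'M': [2, 10, 6], 'D': [3, 4, 6]}
for row, p in payoffs.items():
    print(row, sum(p) / 3)  # U: 13/3, M: 18/3, D: 13/3, so M is best
```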
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Every subgroup $H$ of a free abelian group $F$ is free abelian I am working through a proof that every subgroup $H$ of a free abelian group $F$ is free abelian (for finite rank)
For the inductive step, let $\{ x_1, \ldots, x_n \}$ be a basis of $F$, let $F_n = \langle x_1,\ldots,x_{n-1} \rangle$, and let $H_n = H \cap F_n$. By induction $H_n$ is free abelian of rank $\le n-1$. Now $$H/H_n = H/(H \cap F_n) \simeq (H+F_n)/F_n \subset F/F_n \simeq \mathbb{Z}$$
The isomorphism I can't see is $$H/(H \cap F_n) \simeq (H+F_n)/F_n.$$ I guess there is a way to get this from the first isomorphism theorem, but I am having a hard time seeing it
| What you need is the Third Isomorphism Theorem: given a group $G,$ a normal subgroup $K$ of $G$ and a subgroup $H$ of $G$ we have that
$$
HK/K \cong H/H\cap K.
$$
You rightly guessed that the proof uses the Fundamental Isomorphism Theorem. The homomorphism $f : HK \to H/H \cap K$ defined via
$$
f(hk) = h (H\cap K)
$$
where $h \in H$ and $k \in K$ is a surjective one and its kernel is $K.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
number of ordered partitions of integer How to evaluate the number of ordered partitions of the positive integer $ 5 $?
Thanks!
| Counting in binary, the runs of consecutive equal bits form the partitions. Complementary bit strings give the same runs, so half suffice, and there are $2^{n-1}$ ordered partitions of $n$. As is to be expected, this gives the same results as the gaps method, but in a different order. (The tables below illustrate $n=4$.)
Groups
0000 4
0001 3,1
0010 2,1,1
0011 2,2
0100 1,1,2
0101 1,1,1,1
0110 1,2,1
0111 1,3
Gaps
000 4
001 3,1
010 2,2
011 2,1,1
100 1,3
101 1,2,1
110 1,1,2
111 1,1,1,1
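The gaps method translates directly into an enumeration (a Python sketch; the generator is mine):

```python
from itertools import product

def compositions(n):
    # each bit pattern over the n-1 gaps of 1+1+...+1 cuts it into parts
    for gaps in product([0, 1], repeat=n - 1):
        parts, run = [], 1
        for g in gaps:
            if g:
                parts.append(run)
                run = 1
            else:
                run += 1
        parts.append(run)
        yield parts

print(sum(1 for _ in compositions(5)))  # 16 = 2^(5-1)
print(list(compositions(4)))            # matches the gaps table above
```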
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 0
} |
Looking for a function $f$ such that $f(i)=2(f(i-1)+f(\lceil i/2\rceil))$ I'm looking for a solution $f$ to the difference equation $$f(i)=2(f(i-1)+f(\lceil i/2\rceil))$$ with $f(2)=4$. Very grateful for any ideas.
PS. I've tried plugging the initial values into "Sloane" (the OEIS), but it doesn't seem to recognize the sequence.
| Since $f$ is always positive, $f(i) \gt 2f(i-1)$ and so by induction $f(n) \gt 2^n$. $f(\lceil i/2\rceil)$ is then exponentially smaller than $f(i)$, so $2^n$ is the dominant term. Divide out $f$ by the exponential and define $g(n) = 2^{-n}f(n)$; then $g(i) = g(i-1) + 2^{1-\lfloor i/2\rfloor}g(\lceil i/2\rceil)$, with $g(2)=1$. It's easy to see by induction that $g(n) \lt n$ and in fact that $g(n) = O(n^\epsilon)$ for any $\epsilon$ (the differences for a series of $\Theta(n^\epsilon)$ are $\Theta(n^{\epsilon-1}) = {n^\epsilon\over n}$, which is eventually larger than $n^{\epsilon}\over2^\epsilon2^{n/2}$ for any $\epsilon$). In fact, the form of the series loosely suggests that $g(n)$ may be some exponentially damped constant, roughly $C+\Theta(2^{-kn})$ for some $k$ and $C$; you might try from that perspective...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31620",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
When is the product of two quotient maps a quotient map? It is not true in general that the product of two quotient maps is a quotient maps (I don't know any examples though).
Are any weaker statements true? For example, if $X, Y, Z$ are spaces and $f : X \to Y$ is a quotient map, is it true that $ f \times {\rm id} : X \times Z \to Y \times Z$ is a quotient map?
| In the category of compactly generated spaces, I think that the product of quotient maps is (always) a quotient map.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31697",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29",
"answer_count": 6,
"answer_id": 1
} |
Having trouble verifying absolute/conditional convergence for these series Greetings,
I'm having trouble applying the tests for convergence on these series; I can never seem to wrap my head around how to determine if they're absolutely convergent, conditionally convergent or divergent.
a) $\displaystyle \sum_{k=1}^{\infty}\frac{\sqrt{k}}{e^{k^3}}$.
b) $\displaystyle \sum_{k=2}^{\infty}\frac{(-1)^k}{k(\ln k)(\ln\ln k)}$.
| a) Since $e^x >x$ then $$\frac{\sqrt{k}}{e^{k^3}}<\frac{\sqrt{k}}{k^3}=\frac{1}{k^{5/2}}$$
But $\sum_{k=1}^\infty \frac{1}{k^{5/2}}$ is convergent ($p$-series test). Hence the original series is convergent (hence absolutely convergent since it is a positive series).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
What is the best book to learn probability? Question is quite straight... I'm not very good in this subject but need to understand at a good level.
| I happened to take an introductory course on probability and statistics on two different universities. In one they used a horrible book, and in the other they used a truly amazing one. It's rare that a book really stands out as fantastic, but it did.
Probability and Statistics for Engineers and Scientists
by Ronald E. Walpole, Raymond Myers, Sharon L. Myers and Keying E. Ye.
The version number doesn't matter, just find an old version second hand.
It is really thorough, takes one definition at a time, and builds on top of that. The structuring and writing is top class, and the examples are well chosen.
Don't worry if you are not an engineer. When using examples they have taken them from the domain of engineering, e.g. "A factory produces so and so many items per hour, and only so and so many can be broken, ...", instead of using examples from, say, social science or economics. But they don't involve engineering science such as statics, aerodynamics, electronics, thermodynamics or any such things. This means that everyone can understand the book; it does not even help to have an engineering background. Perhaps the examples are more appealing/interesting to engineers, but that's all.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "159",
"answer_count": 11,
"answer_id": 3
} |
How to solve implicit solution of IVP The given question is:
$$
dy/dt + 2y = 1\ ;\qquad y(0)= 5/2
$$
When I solve this I get $\ln(-4)=c$.
Now the problem is: how do I deal with $\ln(-4)$?
| This is a linear differential equation, and it can be solved as follows.
The integrating factor is
$$\mu = e^{\int 2\,dt} = e^{2t}.$$
Now the solution of the equation is
$$y\,e^{2t} = \int 1\cdot e^{2t}\,dt = \frac{e^{2t}}{2} + C. \qquad (1)$$
Now put $t=0$ and $y = 5/2$:
$$\frac{5}{2} = \frac{1}{2} + C, \qquad C = 2.$$
Putting this in equation (1), we get
$$y\,e^{2t} = \frac{e^{2t}}{2} + 2, \qquad y = \frac{1}{2} + 2e^{-2t}.$$
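The solution can be confirmed with sympy (a sketch; `dsolve` with `ics` needs a reasonably recent sympy):

```python
from sympy import Function, dsolve, Eq, symbols, Rational

t = symbols('t')
y = Function('y')
sol = dsolve(Eq(y(t).diff(t) + 2 * y(t), 1), y(t), ics={y(0): Rational(5, 2)})
print(sol)  # Eq(y(t), 1/2 + 2*exp(-2*t))
```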
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Are there any random variables so that E[X] and E[Y] exist but E[XY] doesn't? Are there any random variables so that E[X] and E[Y] exist but E[XY] doesn't?
| Having recently discussed here the difference between "$X$ has expectation" (in the wide sense) and "$X$ is integrable", let us give an example where $X$ and $Y$ are integrable (that is, have finite expectation) but $XY$ does not admit an expectation (that is, ${\rm E}(XY)^+ = {\rm E}(XY)^- = \infty$).
Let $Z$ be any nonnegative random variable satisfying ${\rm E}(Z)<\infty$ and ${\rm E}(Z^2) = \infty$ (for example, $Z=1/\sqrt{U}$, where $U \sim {\rm uniform}(0,1)$; cf. user8268's comment above). Let $R$ be independent of $Z$ with ${\rm P}(R=1) = {\rm P}(R=-1) = 1/2$. Define the random variables $X$ and $Y$ by $X=ZR$ and $Y=Z$. Then, $X$ and $Y$ have finite expectation. Next note that $XY = Z^2$ if $R=1$ and $XY = -Z^2$ if $R=-1$. Hence,
$$
(XY)^ + := \max \{ XY,0\} = Z^2 {\mathbf 1}(R = 1)
$$
and
$$
(XY)^ - := -\min \{ XY,0\} = Z^2 {\mathbf 1}(R = -1),
$$
where ${\mathbf 1}$ denotes the indicator function. Since $Z$ and $R$ are independent (and, by assumption, ${\rm E}(Z^2) = \infty$), we get ${\rm E}(XY)^+ = {\rm E}(XY)^- = \infty$, as desired.
In the example (mentioned above) where $Z=1/\sqrt{U}$, $U \sim {\rm uniform}(0,1)$, one can find that $XY$ has density function $f_{XY}$ given by $f_{XY} = 1/(2x^2)$, $|x|>1$. Thus, $\int_0^\infty {xf_{XY} (x)\,{\rm d}x} = \int_1^\infty {(2x)^{ - 1} \,{\rm d}x} = \infty $ and $\int_{ - \infty }^0 {|x|f_{XY} (x)\,{\rm d}x} = \int_{ - \infty }^{ - 1} {(2|x|)^{ - 1} \,{\rm d}x} = \infty $, corresponding to ${\rm E}(XY)^+ = \infty$ and ${\rm E}(XY)^- = \infty$, respectively.
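A small simulation of this example (standard library; seed and sample sizes are arbitrary): the running averages of $XY=\pm Z^2$ never settle down, as expected when neither tail has finite expectation.

```python
import random
random.seed(1)

def xy():
    z2 = 1 / random.random()  # Z^2 = 1/U has infinite mean
    return z2 if random.random() < 0.5 else -z2

for n in [10**3, 10**4, 10**5, 10**6]:
    print(n, sum(xy() for _ in range(n)) / n)
```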
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Representation of integers How can I prove that every integer $n>=170$ can be written as a sum of five positive squares? (i.e. none of the squares are allowed to be zero).
I know that $169=13^2=12^2+5^2=12^2+4^2+3^2=10^2+8^2+2^2+1^2$, and $n-169=a^2+b^2+c^2+d^2$ for some integers $a$, $b$, $c$, $d$, but how do I show it?
Thank you.
| Hint: let $n-169 = a^2+b^2+c^2+d^2$; if $a,b,c,d \neq 0$ then ... if $d = 0$ and $a,b,c \neq 0$ then ... if $c = d = 0$ and $a,b \neq 0$ then ... if $b = c = d = 0$ and $a \neq 0$ then ... if $a = b = c = d = 0$ then - wait, that can't happen!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31997",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
What is the best way to show that no positive powers of this matrix will be the identity matrix?
Show that no positive power of the matrix $\left( \begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array} \right)$ equals $I_2$.
I claim that given $A^{n}, a_{11} = 1$ and $a_{12} >0, \forall n \in \mathbb{N}$. This is the case for $n=1$ since $A^{1} = \left( \begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array} \right)$ with $1=1$ and $1>0$.
Now assuming that $a_{11} = 1$ and $a_{12}>0$ for $A^{n}$ show that $A^{n+1} = A^{n}A = \left( \begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array} \right)\left( \begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array} \right) = \left( \begin{array}{cc} a_{11} & a_{11} + a_{12} \\ a_{21} & a_{21} + a_{22} \end{array} \right)$.
According to the assumption $a_{11} = 1$ and $a_{12}>0 \Rightarrow 1+a_{12} = a_{11}+a_{12}>0$. Taken together, this shows that $A^{n} \neq I_{2} \forall n\in \mathbb{N}$ since $a_{12}\neq0 = i_{12}$.
First of all, was my approach legitimate and done correctly? I suspect that I did not solve this problem as intended by the author (if at all!), could anyone explain the expected solution please? Thank you!
| Your solution seems OK to me. You can also find $A^n$ explicitly:
let $E=\left(\begin{array}{cc}0&1\\0&0\end{array}\right)$.
Then $A=I+E$ and $E^2=0$. So $(I+nE)(I+E)=I+(n+1)E$ and so, by induction, $A^n=I+nE\ne I$ for $n\ge1$.
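And numerically (numpy):

```python
import numpy as np

A = np.array([[1, 1], [0, 1]])
for n in [1, 2, 5, 10]:
    print(n, np.linalg.matrix_power(A, n).tolist())  # [[1, n], [0, 1]]
```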
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Probability distribution for the remainder of a fixed integer In the "Notes" section of Modern Computer Algebra by Joachim Von Zur Gathen, there is a quick throwaway remark that says:
Dirichlet also proves the fact, surprising at first sight, that for fixed $a$ in a division the remainder $r = a \operatorname{rem} b$, with $0 \leq r < b$, is more likely to be smaller than $b/2$ than larger: If $p_a$ denotes the probability for the former, where $1 \leq b \leq a$ is chosen uniformly at random, then $p_a$ is asymptotically $2 - \ln{4} \approx 61.37\%$.
The note ends there and nothing is said about it again. This fact does surprise me, and I've tried to look it up, but all my searches for "Dirichlet" and "probability" together end up being dominated by talks of Dirichlet stochastic processes (which, I assume, is unrelated).
Does anybody have a reference or proof for this result?
| sos440's answer is correct, but I think it makes the calculation look unnecessarily complicated. The boundaries where the remainder switches between being greater or less than $b/2$ are $a/b=n/2$, that is $b=2a/n$, for $n>2$. If we choose $b$ as a real number uniformly distributed over $[0,a]$, we can calculate the probability of $a \;\text{mod}\; b$ (defined as the unique number between $0$ and $a$ that differs from $a$ by an integer multiple of $b$) being less than $b/2$ by adding up the lengths of the corresponding intervals,
$$
\begin{eqnarray}
&&\left(\left(\frac{2a}{2}-\frac{2a}{3}\right)+\left(\frac{2a}{4}-\frac{2a}{5}\right)+\left(\frac{2a}{6}-\frac{2a}{7}\right)+\ldots\right)\\
&=&2a\left(1-(1-\frac{1}{2}+\frac{1}{3}-\ldots)\right)\\
&=&2a(1-\ln2)\;,
\end{eqnarray}
$$
which is the integral from $0$ to $a$ of the characteristic function $\chi_S$ with $S=\{b\mid a\;\mathrm{mod}\;b < b/2\}$ and yields the probability $p_a=2a(1-\ln2)/a=2(1-\ln2)$. By scaling from $[0,a]$ to $[0,1]$, we can interpret the probability for integer $b$ as an approximation to this integral using the rectangle rule, which converges to the integral as $a\to\infty$ since the mesh size of the approximation is $1/a\to0$.
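A direct count (standard library; $a=10^6$ is arbitrary) matches $2-\ln 4$:

```python
from math import log

a = 10**6
count = sum(1 for b in range(1, a + 1) if (a % b) * 2 < b)
print(count / a, 2 - log(4))  # both about 0.6137
```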
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 3,
"answer_id": 1
} |
Alternative definition for topological spaces? I have just started reading topology so I am a total beginner but why are topological spaces defined in terms of open sets? I find it hard and unnatural to think about them intuitively. Perhaps the reason is that I can't see them visually. Take groups, for example, are related directly to physical rotations and numbers, thus allowing me to see them at work. Is there a similar analogy or defintion that could allow me to understand topological spaces more intuitively?
| From Wikipedia:
In topology and related branches of mathematics, the Kuratowski closure axioms are a set of axioms which can be used to define a topological structure on a set. They are equivalent to the more commonly used open set definition. They were first introduced by Kazimierz Kuratowski, in a slightly different form that applied only to Hausdorff spaces.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
} |
Basic question about tensor products I almost feel embarrassed to ask this, but I am trying to learn about tensor products (for now over Abelian groups). Here is the definition given:
Let $A$ and $B$ be abelian groups. Their tensor product, denoted by $A \otimes B$, is the abelian group having the following presentation
Generators: $A \times B$ that is, all ordered pairs $(a,b)$
Relations: $(a+a',b)=(a,b)+(a',b)$ and $(a,b+b')=(a,b)+(a,b')$ for all $a,a' \in A$ and $b,b' \in B$
So from this, why is $a \otimes 0 = 0$? Looks to me like if $b$ is zero, then any $a,a' \in A$ will still satisfy the relations. I'm just after a simple explanation, then hopefully once that makes sense, it will all make sense!
| You are missing the relations $(na,b)=(a,nb)=n(a,b)$ for all $n\in\mathbb{Z}$ (recall that abelian groups are $\mathbb{Z}$-modules). Thus
$$a\otimes 0_B=a\otimes (0_{\mathbb{Z}}\cdot 0_B)=(0_{\mathbb{Z}}\cdot a)\otimes 0_B=0_A\otimes 0_B=0.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
a question on notation for function spaces
If $X$ is some topological space, such as the unit interval $[0,1]$, we can consider the space of all continuous functions from $X$ to $R$. This is a vector subspace of $R^X$ since the sum of any two continuous functions is continuous and scalar multiplication is continuous.
Please explain the notation $R^X$ in the above example.
| This means the space of all functions from $X$ to $R$, without regard for any structure; just set-theoretic maps.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Finding x in $a^{x} \bmod b = c$ when values a,b, and c are known? If values $a$, $b$, and $c$ are known, is there an efficient way to find $x$ in the equation: $a^{x} \bmod b = c$?
E.g. finding $x$ in $128^{x}\bmod 209 = 39$.
| A better reference than Wikipedia for the discrete logarithm problem is Andrew Sutherland's 2007 MIT Thesis Order Computations in Generic Groups. Here's an excerpt from p. 14 that provides a concise summary of the current state of knowledge.
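For small instances, baby-step giant-step is the textbook method; here is a Python sketch (it requires Python 3.8+ for `pow(a, -n, m)`, and the helper is mine). Incidentally, it reports that the particular instance $128^x \equiv 39 \pmod{209}$ has no solution, which one can confirm by reducing mod $11$ and mod $19$:

```python
from math import isqrt

def bsgs(a, c, m):
    # solve a^x = c (mod m) for gcd(a, m) = 1, searching 0 <= x < n^2 >= m
    n = isqrt(m) + 1
    baby = {pow(a, j, m): j for j in range(n)}
    step = pow(a, -n, m)
    g = c
    for i in range(n):
        if g in baby:
            return i * n + baby[g]
        g = g * step % m
    return None

print(bsgs(128, 39, 209))                 # None: no solution exists
print(bsgs(128, pow(128, 15, 209), 209))  # 15: a constructed solvable case
```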
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Localization at a prime ideal in $\mathbb{Z}/6\mathbb{Z}$ How can we compute the localization of the ring $\mathbb{Z}/6\mathbb{Z}$ at the prime ideal $2\mathbb{Z}/\mathbb{6Z}$? (or how do we see that this localization is an integral domain)?
| One simple way to compute this is to exploit the universal property of localization. By definition $\rm\ L\ =\ \mathbb Z/6_{\:(2)}\ =\ S^{-1}\ \mathbb Z/6\ $ where $\rm\ S\ =\ \mathbb Z/6 \ \backslash\ 2\ \mathbb Z/6\ =\ \{\bar 1, \bar 3, \bar 5\}\:.\: $ Hence, since the natural map $\rm\ \mathbb Z/6\ \to\ \mathbb Z/2\ $ maps $\rm\:S\:$ to units, by universality it must factor through $\rm\:L\:,\ $ i.e. $\rm\ \mathbb Z/6\ \to\ L\ \to\ \mathbb Z/2\:.\ $ Thus either $\rm\ L = \mathbb Z/6\ $ or $\rm\ L = \mathbb Z/2\:.\: $ But $\rm\ \bar 3\in S,\ \ {\bar3}^{-1}\not\in \mathbb Z/6\:,\ $ so $\rm\:L \ne \mathbb Z/6\:.\ $ Thus we infer $\rm\:L = \mathbb Z/2\:. $
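A hands-on way to see this (a Python sketch; the framing, identifying $a/1 \sim b/1$ iff $s(a-b)=0$ for some $s\in S$, is mine, and one checks separately that the natural map is onto here since $\bar 3$ and $\bar 5$ become units):

```python
S = [1, 3, 5]  # Z/6 minus the prime ideal {0, 2, 4}
classes = set()
for a in range(6):
    cls = frozenset(b for b in range(6)
                    if any(s * (a - b) % 6 == 0 for s in S))
    classes.add(cls)
print(len(classes))  # 2 classes, matching L = Z/2
```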
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 0
} |
Are calculus and real analysis the same thing?
*
*I guess this may seem stupid, but how are calculus and real analysis different from, and related to, each other?
I tend to think they are the same, because all I know is that the objects of both are real-valued functions defined on $\mathbb{R}^n$, and their topics are continuity, differentiation and integration of such functions. Isn't that so?
*But there is also $\lambda$-calculus, about which I honestly don't quite know. Does it belong to calculus? If not, why is it called *-calculus?
*I have heard that at the undergraduate level some people refer to topics in linear algebra as calculus. Is that correct?
Thanks and regards!
| *
*A first approximation is that real analysis is the rigorous version of calculus. You might think about the distinction as follows: engineers use calculus, but pure mathematicians use real analysis. The term "real analysis" also includes topics not of interest to engineers but of interest to pure mathematicians.
*As is mentioned in the comments, this refers to a different meaning of the word "calculus," which simply means "a method of calculation."
*This is imprecise. Linear algebra is essential to the study of multivariable calculus, but I wouldn't call it a calculus topic in and of itself. People who say this probably mean that it is a calculus-level topic.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "102",
"answer_count": 8,
"answer_id": 0
} |
What are good resources to self-teach mathematics? I am teaching myself mathematics using textbooks and I'm currently studying the UK a-level syllabus (I think in the USA this is equivalent to pre-college algebra & calculus). Two resources I have found invaluable for this are this website (http://math.stackexchange.com) and Wolfram Alpha (http://wolframalpha.com). I am very grateful that with those tools, I have managed to understand any questions/doubts I have had so far.
Can anyone recommended other valuable resources for the self-taught student of mathematics at this basic level?
I hope questions of this format are valid here?
Thanks!
| Yes, this site as well as wolfram|alpha are both excellent resources for teaching yourself math!
In addition, I would suggest looking at this site. It provides tons of great math videos, if you are like me and too lazy to read your book sometimes. :) KhanAcademy is also good, but I do prefer the latter. If you can afford it, perhaps you should consider getting into an online class? That way you get more resources and a professor to directly speak to. Not to mention, most math jobs require that you show some accreditation (e.g. a degree). Not exactly sure about your situation, but thought I would mention it. Best of luck!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Using the Partial Summation Formula Partial Summation formula:
Consider $\sum a_n$ and $\sum b_n$. If $A_n= \sum _{k=1}^{n} a_k$, then
$\sum _{k=1}^{n} a_kb_k = A_nb_{n+1}- \sum _{k=1}^{n} A_k(b_{k+1}-b_k)$
So $\sum _{k=1}^{\infty} a_kb_k$ converges if both
$\sum _{k=1}^{\infty} A_k(b_{k+1}-b_k)$
and $\{A_nb_{n+1}\}$ converge
The problem I'm working on is:
Given that $\sum c_n$ converges where each $c_n > 0$ prove that
$\sum (c_nc_{n+1})^{1/2}$ also converges.
I wanted to use the partial summation formula to help me solve this.
I let $\{a_n\}=(c_n)^{1/2}$ and $\{b_n\}=(c_{n+1})^{1/2}$
Since $\sum c_n$ converges, $\lim_{n\to\infty} a_n = 0$ which implies $\lim_{n\to\infty} b_n = 0$
hence we get $\{A_nb_{n+1}\}$ converges
I'm getting stuck at proving $\sum _{k=1}^{\infty} A_k(b_{k+1}-b_k)$ converges.
The second part of the problem says:
Show that the converse is also true if $\{c_n\}$ is monotonic.
I'm not really sure where to start on this one, but if $\{c_n\}$ is monotonic and already bounded below by $0$ it has to be decreasing in order for $\sum (c_nc_{n+1})^{1/2}$ to converge.
| Off the top of my head, partial summation is not what I would use to solve your first problem. Instead note that for all $n$
$\sqrt{c_n c_{n+1}} \leq \sqrt{ \max(c_n, c_{n+1})^2} = \max(c_n, c_{n+1}) \leq c_n + c_{n+1}$.
By a direct comparison, this implies $\sum_n \sqrt{ c_n c_{n+1}}$ converges.
Is there some reason you are trying to use partial summation? Were you instructed to?
As for the second part, note that if $\{c_n\}$ is decreasing, then $c_n c_{n+1} \geq c_{n+1}^2$. This should help to make a comparison going "the other way"...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
What's the opposite of a cross product? For example, $a \times b = c$
If you only know $a$ and $c$, what method can you use to find $b$?
| The name "product" for the cross product is unfortunate. It really should not be thought of as a product in the ordinary sense; for example, it is not even associative. Thus one should not expect it to have properties analogous to the properties of ordinary multiplication.
What the cross product really is is a Lie bracket.
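This is not part of the answer above, but for completeness: $a\times b=c$ is solvable only when $a\cdot c=0$, and then the solutions form the line $b=(c\times a)/|a|^2+t\,a$. A numpy check:

```python
import numpy as np

a = np.array([1.0, 2.0, 0.0])
c = np.array([0.0, 0.0, 5.0])     # note a . c = 0
b0 = np.cross(c, a) / a.dot(a)
print(np.cross(a, b0))            # recovers c
print(np.cross(a, b0 + 3.7 * a))  # still c: b is only determined up to a
```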
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33",
"answer_count": 5,
"answer_id": 0
} |
Faithful representations and character tables Suppose an n-dimensional irreducible complex representation is not faithful. Then a non-identity element gets mapped to the identity matrix in $GL_n(\mathbb{C})$ so that the value of its associated character on the conjugacy class of this element is $n$. Thus, $n$ appears at least twice in the corresponding row of the group's character table.
I suspect the converse is true: if the row corresponding to an irreducible $n$-dimensional complex representation contains the dimension of the representation in more than one column, then the representation is not faithful. I have looked in a few of the standard algebra references and have been unable to find a proof. Can anyone point me in the right direction? We proved this for $n=2$, but it seems that it would be difficult and messy to generalize. I wonder if there is a simpler proof.
| If $\chi$ is the character, and $\chi(g)=\chi(1)=n$ for some group element $g$, then $\rho(g)$ is an $n\times n$ matrix $A$ whose eigenvalues are all complex numbers of modulus 1 and whose trace is $n$ (here $\rho$ is the representation whose character is $\chi$). Also, some power of $A$ is the identity. Can you see how this forces $A$ to be the identity matrix?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
} |
Write $\sum_{1}^{n} F_{2n-1} \cdot F_{2n}$ in a simpler form, where $F_n$ is the n-th element of the Fibonacci sequence? The exercise asks to express the following:
$\sum_{1}^{n} F_{2n-1} \cdot F_{2n}$
in a simpler form, not necessarily a closed one. The previous problem in the set was the same, with a different expression:
$\sum_{0}^{n} F_{n}^{2}$ which equals $F_{n} \cdot F_{n+1}$
Side note:
I just started to work through an analysis book, my first big self-study effort. This problem appears in the introductory chapter with topics such as methods of proof, induction, sets, etc.
| You want to watch those indices. I think you mean $\sum_{k=0}^n F_k^2 = F_n F_{n+1}$ and $\sum_{k=1}^n F_{2k-1} F_{2k}$. Hmm, it looks to me like this one can be expressed as a linear combination of $n$, $1$ and a certain Fibonacci number...
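Following the hint through numerically suggests the closed form $\sum_{k=1}^n F_{2k-1}F_{2k} = \frac{F_{4n+1} + n - 1}{5}$ (this formula is my reading of the hint, so double-check it; the snippet below confirms it for small $n$, and induction then finishes the job):
```python
# Verify 5 * sum_{k=1}^n F_{2k-1} * F_{2k} == F_{4n+1} + n - 1 for small n.
def fib(m, cache={0: 0, 1: 1}):
    if m not in cache:
        cache[m] = fib(m - 1) + fib(m - 2)
    return cache[m]

for n in range(1, 20):
    s = sum(fib(2 * k - 1) * fib(2 * k) for k in range(1, n + 1))
    assert 5 * s == fib(4 * n + 1) + n - 1
print("closed form holds for n = 1..19")
```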
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Finding probability of an unfair coin An unfair coin is tossed giving heads with probability $p$ and tails with probability $1-p$. How many tosses do we have to perform if we want to find $p$ with a desired accuracy?
There is an obvious bound of $N$ tosses for $\lfloor \log_{10}{N} \rfloor$ digits of $p$; is there a better bound?
| This is a binomial distribution. The standard deviation on the number of heads is $\sqrt{Np(1-p)}$, so leaving aside the difference between your measured $p$ and the real $p$ you need $N \gt \frac{p(1-p)}{accuracy^2}$
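In code (illustration only): in the worst case $p = 1/2$, so $p(1-p) \leq 1/4$, and each extra decimal digit of accuracy multiplies the required number of tosses by $100$.
```python
from math import ceil

def tosses_needed(accuracy, p=0.5):
    """Tosses so that one standard deviation of p_hat = heads/N is below accuracy."""
    return ceil(p * (1 - p) / accuracy**2)

print(tosses_needed(0.01))    # 2500
print(tosses_needed(0.001))   # 250000: one more digit costs a factor of 100
```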
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Inverse Image as the left adjoint to pushforward Assume $X$ and $Y$ are topological spaces, $f : X \to Y$ is a continuous map. Let ${\bf Sh}(X)$, ${\bf Sh}(Y)$ be the category of sheaves on $X$ and $Y$ respectively. Modulo existence issues we can define the inverse image functor $f^{-1} : {\bf Sh}(Y) \to {\bf Sh}(X)$ to be the left adjoint to the push forward functor $f_{*} : {\bf Sh}(X) \to {\bf Sh}(Y)$ which is easily described.
My question is this: Using this definition of the inverse image functor, how can I show (without explicitly constructing the functor) that it respects stalks? i.e is there a completely categorical reason why the left adjoint to the push forward functor respects stalks?
| A functor which is a left adjoint preserves colimits (see for instance Mac Lane, "Categories for the working mathematician", chapter V, section 5 "Adjoints on Limits"); particularly, stalks.
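To spell out the stalk claim (my own elaboration): the stalk at $x$ is a filtered colimit, $(f^{-1}\mathcal G)_x = \operatorname{colim}_{U \ni x} (f^{-1}\mathcal G)(U)$, and left adjoints preserve such colimits. Even more categorically, the stalk functor is itself a left adjoint, namely to the skyscraper sheaf functor, and $f_* \circ \mathrm{sky}_x = \mathrm{sky}_{f(x)}$; hence
$$\mathrm{stalk}_x \circ f^{-1} \;\dashv\; f_* \circ \mathrm{sky}_x = \mathrm{sky}_{f(x)} \quad\text{and}\quad \mathrm{stalk}_{f(x)} \;\dashv\; \mathrm{sky}_{f(x)},$$
so uniqueness of left adjoints gives $(f^{-1}\mathcal G)_x \cong \mathcal G_{f(x)}$.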
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32868",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
} |
Why is Harish-Chandra's last name never used? This is only barely a math question but I don't know where else to ask. I've always wondered about Harish-Chandra's name. The Wikipedia article seems to mention "Mehrotra" as a last name but only in passing, and it's not even used in the page's title. Did he simply not use a last name?
| A link to a biography by Roger Howe now shows up on Wikipedia, and it has this to say:
About the name Harish-Chandra: Indian names do not necessarily follow
the Western two-part pattern of given name, family name. A person may
often have only one name, and this was the case with Harish-Chandra,
who in his youth was called Harishchandra. The hyphen was bestowed
on him by the copy editor of his first scientific papers, and he kept
it. Later he adopted “Chandra” as a family name for his daughters.
Given names in India are often those of gods or ancient heroes, and
“Harishchandra” was a king, legendary for his truthfulness already at
the time of the Mahabharata. I once saw an Indian comic book whose cover featured
“Harishchandra—whose name is synonymous with truth.”
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
how to prove this inequality? Given $x>0$, $y>0$ and $x + y =1$, how to prove that $\frac{1}{x}\cdot\log_2\left(\frac{1}{y}\right)+\frac{1}{y}\cdot\log_2\left(\frac{1}{x}\right)\ge 4$ ?
| Hint 1: Rewrite this inequality as:
$$-x\log_2 x - (1-x)\log_2 (1-x) \geq 4 x (1-x)$$
Both sides of the inequality define concave functions on the interval $[0,1]$. Plot them. Can you show that the graph of the second (the parabola) always lies below the graph of the first?
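A numerical check of this hint (not a proof, just a substitute for the plot):
```python
# The binary entropy -x*log2(x) - (1-x)*log2(1-x) dominates 4x(1-x) on (0, 1),
# with equality exactly at x = 1/2 and both sides vanishing at the endpoints.
from math import log2

def gap(x):
    return -x * log2(x) - (1 - x) * log2(1 - x) - 4 * x * (1 - x)

xs = [k / 1000 for k in range(1, 1000)]
print(min(gap(x) for x in xs) >= 0)   # True on this grid
print(gap(0.5))                       # 0.0: the equality case
```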
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Solving differential equation Below is my work for a particular problem that is mixing me up, since no matter how many times, I can't get my answer to match the book solution.
Given ${f}''(x)= x^{-\frac{3}{2}}$ where $f'(4)= 2$ and $f(0)= 0$, solve the differential equation.
$$f'(x)= \int x^{-\frac{3}{2}}\,dx \Rightarrow \frac{x^{-\frac{3}{2}+1}}{-\frac{3}{2}+1} \Rightarrow -2x^{-\frac{1}{2}} + C$$
$$f'(4)= -2(4)^{-\frac{1}{2}}+ C= 2 \Rightarrow -4+C= 2 \Rightarrow C= 6$$
Thus, the first differential equation is $f'(x)= -2x^{-\frac{1}{2}}+6$
$$f(x)= \int \left(-2x^{-\frac{1}{2}}+6\right)dx \Rightarrow -2\left(\frac{x^{-\frac{1}{2}+1}}{-\frac{1}{2}+1}\right)+6x \Rightarrow -4x^{\frac{1}{2}}+6x+C$$
Since $f(0)= 0$, $C=0$, so the final solution should be $f(x)= -4x^{\frac{1}{2}}+6x$, but the book answer has $3x$ in place of my $6x$. Where did I go wrong?
| Nothing to worry about! There is a minor slip, $-2(4)^{-1/2}=-2/2=-1$. You got $-4$ instead.
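Carrying the correction through: with $C = 3$, $f'(x) = -2x^{-1/2} + 3$ and $f(x) = -4x^{1/2} + 3x$, which matches the book's $3x$ term. A quick finite-difference sanity check (my own illustration):
```python
# f''(x) should be x^(-3/2), f'(4) should be 2, and f(0) should be 0.
f = lambda x: -4 * x**0.5 + 3 * x
h = 1e-5
fp4 = (f(4 + h) - f(4 - h)) / (2 * h)              # central difference for f'(4)
h2 = 1e-4
fpp2 = (f(2 + h2) - 2 * f(2) + f(2 - h2)) / h2**2  # second difference for f''(2)
print(round(fp4, 6), f(0.0))                       # 2.0  0.0
print(round(fpp2, 4), round(2 ** -1.5, 4))         # both ~0.3536
```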
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
I haven't studied math in 12 years and need help wrapping my mind back around it I was never fabulous at Algebra and have always studied the arts. However, now I have to take Math 30 pure 12 years after I finished my last required high school math class.
If anyone has thoughts on how to help me re-learn some of what I used to know and help me build upon that knowledge before my class starts please let me know!
Thanks in advance.
J
| Practise, Practise, Practise.
Maths is not a spectator sport and you only get the hang of it by experimenting with it yourself. Even if after reading a question you think "I can do that" don't skip it - you might find it was more complicated than you first thought, and if not, you will gain confidence by doing it.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
An inequality on a convex function An exercise in my textbook asked to prove the following inequality, valid for all $a,b,c,d \in R $
$$\left(\frac{a}{2} + \frac{b}{3} + \frac{c}{12} + \frac{d}{12}\right)^4 \leq \frac{a^4}{2} + \frac{b^4}{3} + \frac{c^4}{12} + \frac{d^4}{12}$$
There is a straightforward proof using Convex Functions:
*
*$f(x) = x^4$ is a convex function satisfying $f(\lambda x + (1-\lambda) y) \leq \lambda f(x) + (1 - \lambda)f(y)$ for all $x,y \in R$ and $\lambda \in [0,1]$
*Since $\frac{1}{2} + \frac{1}{3} + \frac{1}{12} + \frac{1}{12} = 1$, we can use the convexity property to obtain the inequality.
Since this question was on the chapter about Convex Functions, I was able to find the solution quickly. However, had I seen the problem in a "standalone" manner I would probably take longer to solve it, and at least spend a lot of muscle opening up the left hand term :)
My question is: What would be other ways to obtain this same result? What if someone had shown me this problem back when I was in eight grade?
| By Holder
$$\frac{a^4}{2} + \frac{b^4}{3} + \frac{c^4}{12} + \frac{d^4}{12}=\left(\frac{a^4}{2} + \frac{b^4}{3} + \frac{c^4}{12} + \frac{d^4}{12}\right)\left(\frac{1}{2} + \frac{1}{3} + \frac{1}{12} + \frac{1}{12}\right)^3\geq$$
$$\geq\left(\frac{|a|}{2} + \frac{|b|}{3} + \frac{|c|}{12} + \frac{|d|}{12}\right)^4\geq\left(\frac{a}{2} + \frac{b}{3} + \frac{c}{12} + \frac{d}{12}\right)^4$$
and we are done!
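A quick numerical check of this chain on random inputs (illustration only; the list $w$ holds the weights $\frac{1}{2}, \frac{1}{3}, \frac{1}{12}, \frac{1}{12}$):
```python
import random

w = [1 / 2, 1 / 3, 1 / 12, 1 / 12]   # weights summing to 1
for _ in range(5):
    a = [random.uniform(-10, 10) for _ in range(4)]
    lhs = sum(wi * ai**4 for wi, ai in zip(w, a))    # weighted fourth powers
    mid = sum(wi * abs(ai) for wi, ai in zip(w, a))**4
    rhs = sum(wi * ai for wi, ai in zip(w, a))**4
    assert lhs >= mid - 1e-9 and mid >= rhs - 1e-9   # small float tolerance
print("lhs >= mid >= rhs held in every trial")
```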
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
} |
For a covariance matrix, what would be the properties associated with the eigenvectors space of this matrix? I want to know: since the covariance matrix is symmetric and positive semi-definite, if I calculate its eigenvectors, what would be the properties of the space constructed by those eigenvectors (the ones corresponding to eigenvalues not close to zero)? Is it orthogonal, or anything else special?
Suppose this eigenvector matrix is called U, then what would be the properties with
U*transpose(U)?
| The eigenvectors correspond to the principal components and the eigenvalues correspond to the variance explained by the principal components.
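On the part of the question about $U \cdot \mathrm{transpose}(U)$: a covariance matrix is symmetric, so a full set of eigenvectors can be chosen orthonormal, and then $UU^{T} = U^{T}U = I$. A small demonstration (my own addition; numpy's eigh returns exactly such a $U$ for symmetric input):
```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))        # 200 observations of 5 variables
C = np.cov(X, rowvar=False)              # 5x5 covariance matrix: symmetric PSD
eigvals, U = np.linalg.eigh(C)           # columns of U: orthonormal eigenvectors
print(np.all(eigvals >= -1e-12))         # True: eigenvalues are nonnegative
print(np.allclose(U @ U.T, np.eye(5)))   # True: U is orthogonal
```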
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Prove that if $p$ is an odd prime that divides a number of the form $n^4 + 1$ then $p \equiv 1 \pmod{8}$ Problem
Prove that if $p$ is an odd prime that divides a number of the form $n^4 + 1$ then $p \equiv 1 \pmod{8}$
My attempt was,
Since $p$ divides $n^4 + 1 \implies n^4 + 1 \equiv 0 \pmod{p} \Leftrightarrow n^4 \equiv -1 \pmod{p}$.
It follows that $(n^2)^2 \equiv -1 \pmod{p}$, which implies $-1$ is quadratic residue modulo $p$. Hence $p \equiv 1 \pmod{4} \Leftrightarrow p \equiv 1 \pmod{8}$.
Am I on the right track?
Thanks,
| If $$p \mid (n^k+1), $$
$$n^k \equiv -1 \pmod{p}$$
$$n^{2k} \equiv 1 \pmod{p}$$
Let $d=\operatorname{ord}_p n$; then $d \mid 2k$.
If $d \mid k$, then $$n^k\equiv 1 \pmod{p},$$ so $$-1\equiv 1 \pmod{p},$$
i.e. $p\mid 2$, which is impossible as $p$ is an odd prime $\Rightarrow d\nmid k$ (and in particular $d \neq 1$).
If $(k,2)=1$, i.e., $k$ is odd, $d$ can still be small: $d \nmid k$ forces $d=2e$ for some $e \mid k$, and $e=1$ is possible. In that case $d=2$, so $$p \mid (n^2-1), \text{ in fact } p \mid (n+1) \text{ as } d\neq 1.$$
But if $k=2^r$ for an integer $r \geq 1$, then $d \nmid 2$ (since $2 \mid k$ and $d \nmid k$), and the only divisor of $2k=2^{r+1}$ that does not divide $k=2^r$ is $2^{r+1}$ itself, so $d=2k$.
Since $d \mid (p-1)$, we get $p\equiv 1 \pmod{2k}$ whenever $k=2^r$ with $r \geq 1$; taking $k=4$ gives $p \equiv 1 \pmod{8}$.
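An empirical check of the statement (not a proof, just reassurance):
```python
# Every odd prime factor of n^4 + 1 should be congruent to 1 mod 8.
def prime_factors(m):
    p, out = 2, set()
    while p * p <= m:
        while m % p == 0:
            out.add(p)
            m //= p
        p += 1
    if m > 1:
        out.add(m)
    return out

for n in range(1, 200):
    for p in prime_factors(n**4 + 1):
        if p % 2 == 1:
            assert p % 8 == 1, (n, p)
print("every odd prime factor of n^4 + 1 is 1 mod 8 for n < 200")
```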
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 3,
"answer_id": 2
} |
Confusing question: try and prove that $x -\tan(x) = (2k+1)\frac{\pi}{2}$ has no solution in $[\frac{3\pi}{4},\frac{5\pi}{4}]$ I am trying to show that $x - \tan(x) = (2k+1)\frac{\pi}{2}$ has no solution in $[\frac{3\pi}{4},\frac{5\pi}{4}]$. However, I seem to be stuck as I don't know where to begin.
The only sort of idea is that if I were to draw a graph of $\tan x$ and the lines
$x- \frac{(2k+1)\pi}{2}$, I can see that in the interval $[3\pi/4,5\pi/4]$ the lines intersect $\tan(x)$ near the asymptotes. I can also sort of say that as $\tan (x)$ is a strictly increasing function on $(\pi/2,3\pi/2)$, this means the difference between any two roots of the equation $x- \tan(x)$, one root being to the left of the zero of $\tan(x)$ in here, namely $x=\pi$ and the other to the right of the root, is smallest when we consider the lines $x - \pi/2$ and $x-3\pi/2$.
I can sort of think of something as well to do with the fact that the tangent to $\tan(x)$ at $x=\pi$ is parallel to each of these lines, so the solutions to $\tan(x) = x- \pi/2$, $\tan(x) = x-3\pi/2$ must be sufficiently far away from $\pi$ or rather lie outside $[3\pi/4,5\pi/4]$.
Apart from that, I have no idea how to attack this problem. Can anyone help please?
| Let $f(x) = x - \tan(x)$. Taking the first derivative, $f'(x) = 1 - \sec^2(x) = -\tan^2(x) \leq 0$, so the function is decreasing on this interval; the derivative is negative everywhere besides $x = \pi$. Hence the values live in the interval $[f(\frac{5\pi}{4}),f(\frac{3\pi}{4})] = [\frac{5\pi}{4}-1,\ \frac{3\pi}{4}+1]$. Computing those values, you see that for every choice of $k$ the number $(2k+1) \frac{\pi}{2}$ is outside the interval. Basically you need to see that it is true for $k = 0, 1, -1$.
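Numerically (an illustration of the computation at the end):
```python
from math import pi, tan

# f(x) = x - tan(x) maps [3*pi/4, 5*pi/4] onto [f(5*pi/4), f(3*pi/4)].
lo = 5 * pi / 4 - tan(5 * pi / 4)   # 5*pi/4 - 1, about 2.927
hi = 3 * pi / 4 - tan(3 * pi / 4)   # 3*pi/4 + 1, about 3.356
for k in (-1, 0, 1):
    t = (2 * k + 1) * pi / 2        # -pi/2, pi/2, 3*pi/2
    print(k, lo <= t <= hi)         # False each time: no solution in the interval
```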
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
The locus of the intersection point of two perpendicular tangents to a given ellipse
For a given ellipse, find the locus of all points P for which the two tangents are perpendicular.
I have a trigonometric proof that the locus is a circle, but I'd like a pure (synthetic) geometry proof.
| If all you want is a proof that the locus is a circle, we may assume that the ellipse is given by
$$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1.$$
Ignoring vertical tangents for now,
if a line $y=mx+k$ is tangent to the ellipse, then plugging in this value of $y$ into the equation for the ellipse gives
$$\frac{x^2}{a^2} + \frac{(m^2x^2 + 2mkx + k^2)}{b^2} = 1$$
or
$$(b^2 + a^2m^2)x^2 + 2a^2mkx + (a^2k^2 - a^2b^2) = 0.$$
This equation gives the two points of intersection of the line with the ellipse. If the line is tangent, then the two points must coincide, so the quadratic must have zero discriminant. That is, we need
$$(2a^2mk)^2 - 4(a^2k^2 - a^2b^2)(b^2+a^2m^2) = 0$$
or equivalently,
$$\begin{align*}
(a^4m^2)k^2 -a^2(b^2+a^2m^2)k^2 &= -a^2b^2(b^2+a^2m^2)\\
-a^2b^2k^2&= -a^2b^2(b^2+a^2m^2)\\
k^2 &= b^2+a^2m^2\\
k &= \pm\sqrt{a^2m^2 + b^2}.
\end{align*}
$$
So the lines that are tangent to the ellipse are of the form
$$y = mx \pm \sqrt{a^2m^2 + b^2}.$$
Since the problem is symmetric about $x$ and $y$, consider the points on the upper half plane, so that we will take the plus sign above. The tangent perpendicular to this one will therefore have equation
$$y = -\frac{1}{m}x + \sqrt{\frac{a^2}{m^2} + b^2},$$
or equivalently
$$my = -x + \sqrt{a^2 + m^2b^2}.$$
(We are ignoring the vertical and horizontal tangents; I'll deal with them at the end).
If a point $(r,s)$ is on both lines, then we have
$$\begin{align*}
s-mr &= \sqrt{a^2m^2 + b^2}\\
ms + r &= \sqrt{a^2+m^2b^2}.
\end{align*}$$
Squaring both sides of both equations we get
$$\begin{align*}
s^2 - 2mrs + m^2r^2 &= a^2m^2 + b^2\\
m^2s^2 + 2mrs + r^2 &= a^2 + m^2b^2
\end{align*}$$
and adding both equations, we have
$$\begin{align*}
(1+m^2)s^2 + (1+m^2)r^2 &= (1+m^2)a^2 + (1+m^2)b^2,\\
(1+m^2)(s^2+r^2) &= (1+m^2)(a^2+b^2)\\
s^2+r^2 &= a^2+b^2,
\end{align*}$$
showing that $(s,r)$ lies in a circle, namely $x^2+y^2 = a^2+b^2$.
Taking the negative sign for the square root leads to the same equation.
Finally, for the vertical and horizontal tangents, these occur at $x=\pm a$; the horizontal tangents are $y=\pm b$. Their intersections occur at $(\pm a,\pm b)$, which lie on the circle given above. So the locus of such points is contained in the circle $x^2+y^2 = a^2+b^2$.
Conversely, consider a point $(r,s)$ that lies on $x^2+y^2 = a^2+b^2$. If a tangent to the ellipse
$$ y = mx + \sqrt{a^2m^2 + b^2}$$
goes through $(r,s)$, then we have
$$ s-mr = \sqrt{a^2m^2+b^2}.$$
Squaring both sides, we have
$$s^2 - 2msr + m^2r^2 = a^2m^2 + b^2$$
or
$$(a^2-r^2)m^2 +2srm + (b^2-s^2) = 0.$$
Since $r^2+s^2 = a^2+b^2$, we have $a^2 - r^2 = s^2-b^2$, so we get
$$(s^2-b^2)m^2 + 2srm + (b^2-s^2) = 0,$$
and if we do not have $s=\pm b$ (the horizontal/vertical tangent intersection points), then we get
$$m^2 + tm - 1 = 0,\qquad\text{where } t = \frac{2sr}{s^2-b^2}.$$
So the two solutions for $m$, $m_1$ and $m_2$, satisfy $m_1m_2 = -1$, hence the two tangents are perpendicular. That is, at every point on the (upper half of the) circle, the two lines through the point that are tangent to the ellipse are perpendicular to each other.
So all such points are on the circle, and all points on the circle are such points. (The circle is called the director circle of the ellipse).
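A numerical confirmation (my own addition): from any point $(r,s)$ of the director circle, the two tangent slopes are the roots of $(r^2-a^2)m^2 - 2rsm + (s^2-b^2) = 0$, and their product comes out to $-1$.
```python
from math import sqrt, cos, sin

a, b = 3.0, 2.0
R = sqrt(a * a + b * b)                  # radius of the director circle
for theta in (0.3, 1.1, 2.0):            # sample points, avoiding r = +-a
    r, s = R * cos(theta), R * sin(theta)
    A, B, C = r * r - a * a, -2 * r * s, s * s - b * b
    disc = sqrt(B * B - 4 * A * C)
    m1, m2 = (-B + disc) / (2 * A), (-B - disc) / (2 * A)
    print(round(m1 * m2, 12))            # -1.0 each time
```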
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30",
"answer_count": 4,
"answer_id": 1
} |
Finding distribution functions of exponential random variables Find the distribution functions of $(X+Y)/X$ and $(X+Y)/Z$, given that $X$, $Y$, and $Z$ have a common exponential distribution.
I think the main thing is that I wanted to confirm the distribution I got for $X+Y$. I'm doing the integral, and my calculus is a little rusty. I'm getting $-e^{-\alpha x} - \alpha e^{-\alpha s}$ for $x$ from $-\infty$ to $\infty$.
From there presumably I can just treat $X+Y$ like one variable and then divide by $Z$.
Thanks so much!
| Thanks so much for your help. I'm still having some trouble, however. Currently for $(X+Y)/X$ I have denoted $X+Y = t$, and the distribution of $T$ is $\int \alpha e^{-\alpha x}\left(1+\alpha x + \frac{(\alpha x)^2}{2}\right)$. Then doing the integral calculations for the distribution, I get $\int \alpha e^{-\alpha t x}\left(1+\alpha t x + \frac{(\alpha t x)^2}{2}\right)$. This seems right, but for some reason I'm not getting the answer I'm supposed to. Is it because $X+Y$ and $X$ are related? Would this work for $(X+Y)/Z$?
Thanks!
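A note and a simulation that may help untangle this (my own addition): $X+Y$ and $X$ are indeed dependent, so for $(X+Y)/X$ you cannot treat the numerator as independent of the denominator; for $(X+Y)/Z$ the denominator is independent and that plan does work. Since $(X+Y)/X = 1 + Y/X$ and $P(Y/X \le u) = u/(1+u)$ for i.i.d. exponentials, a natural candidate CDF is $(t-1)/t$ for $t \ge 1$; for $(X+Y)/Z$ a direct integral suggests $1 - \frac{1+2t}{(1+t)^2}$ (my computation, worth double-checking). A simulation to test any candidate:
```python
import random

alpha, N, t = 1.5, 200_000, 2.0
r1 = r2 = 0
for _ in range(N):
    x, y, z = (random.expovariate(alpha) for _ in range(3))
    r1 += (x + y) / x <= t
    r2 += (x + y) / z <= t
print(r1 / N, (t - 1) / t)                    # both ~0.5
print(r2 / N, 1 - (1 + 2 * t) / (1 + t)**2)   # both ~0.444
```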
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
CDF of a ratio of exponential variables Let $X$ and $Y$ be independent exponential variables with rates $\alpha$ and $\beta$, respectively. Find the CDF of $X/Y$.
I tried out the problem, and wanted to check whether my answer of $\frac{\alpha}{ \beta/t + \alpha}$ is correct, where $t$ is the argument, which we need in our final answer since a CDF is a function of $t$.
Can someone verify if this is correct?
| Here is a one-line proof.
$$
\mathbb P(X/Y \le t) = \mathbb P(Y \ge X/t) = \mathbb E[\exp(-\beta X/t)] = \text{MGF}_X(-\beta/t) = \left(1 - \frac{-\beta/t}{\alpha}\right)^{-1} = \frac{\alpha}{\alpha + \beta/t} = \frac{\alpha t}{\alpha t + \beta}.
$$
N.B.: For the MGF of an exponential variable, see this table.
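And a Monte Carlo check of the formula (illustration only):
```python
import random

alpha, beta, t, N = 2.0, 3.0, 1.5, 200_000
hits = sum(random.expovariate(alpha) / random.expovariate(beta) <= t
           for _ in range(N))
print(hits / N, alpha * t / (alpha * t + beta))   # both ~0.5
```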
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 3
} |
Calculate relative contribution to percent change Let me use a simple example to illustrate my problem. First, assume we are calculating rate $r$ at time $t$ such that $r_t=\frac{x_t}{y_t}$. Furthermore, each measure has two component parts: $x = x_A +x_B$ and $y = y_A + y_B$. We can thus calculate percent change $c$ for the rate between $t_2>t_1$ as $c=\frac{r_2-r_1}{r_1}$.
Next, I want to allocate $c$ to measure the relative contribution of each component $A$ and $B$. When the changes are in the same direction between $t_1$ and $t_2$ this is easy (e.g. $x_{A_1} > x_{A_2}$ and $x_{B_1} > x_{B_2}$ and $y_{A_1} > y_{A_2}$ and $y_{B_1} > y_{B_2}$). You calculate the change for each component, divide that by the absolute change and apply that "share" to the total percent change. That allows me to make a statement, e.g. when the rate changed from $10\%$ to $15\%$, $75\%$ of the $50\%$ change was due to component $A$ and $25\%$ to component $B$.
Here's my question: how can I calculate the relative contribution of these components when the differences are in opposite directions? For example, component $A$ decreased for $x$ and $y$ (and more for $y$, relatively) and component $B$ increased for $x$ and $y$ (and more for $y$, relatively).
I'm sure this is simple but no amount of searching has made me the wiser. If you could point me in the right direction -- or ask questions to better illuminate my subject matter -- I would greatly appreciate your help. Thanks!
PS: I found a few resources, linked below but I'm still not sure of the exact math required....
http://www.bea.gov/papers/pdf/Moulton0603.pdf
http://www.esri.cao.go.jp/en/sna/sokuhou/kako/2007/qe074/kiyoe.pdf
| This might be a bit old, but I was looking for something similar and I found these two articles which may help someone with a similar question (or so I hope):
Contribution to Growth: growth here could be taken as a change rate
calculating contribution percent change: author proposes a couple of ways to determine how a variable contributes to a change rate when a kpi (like change rate) is additive or multiplicative
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
meaning of $GF(2)[x]/(x^3-1)$ What does $GF(2)[x]/(x^3-1)$ mean? I know $GF(2)$ is the Galois field with 2 elements, but what does the forward slash mean? Also, what's the meaning of the entire expression?
Thanks!
| $\mbox{GF}(2)$ is the finite field with 2 elements (one of the rare instances in mathematics where the common name for an object is kind of larger than the object itself).
$\mbox{GF}(2)[x]$ is the ring of polynomials in the variable $x$ with coefficients in that field. If you're not sure what this means, you should probably learn about this first before you tackle the expression at hand. Let's call this ring $R$.
$x^3-1$ is a specific polynomial in that ring. Since the field is $\mbox{GF}(2)$, it's actually the same polynomial as $x^3+1$.
$(x^3-1)$ is the ideal generated by that polynomial: it's all the polynomials of the form $(x^3-1)f(x)$ where $f(x)$ is an arbitrary polynomial over $\mbox{GF}(2)$. Let's call this ideal $I$.
The forward slash means "quotient" - the quotient of ring $R$ by the ideal $I$, denoted $R/I$. One good way to think about it is as the set of polynomials of degree 2 (or less), endowed with the operations of ordinary polynomial addition and ordinary multiplication except that after multiplying you take the remainder of the result upon division by $x^3-1$.
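To make the quotient concrete, here is a tiny Python model (my own illustration; encoding $a+bx+cx^2$ as the bit mask $a + 2b + 4c$ is just one convenient representation):
```python
# Elements of GF(2)[x]/(x^3 - 1) as 3-bit masks; addition is XOR, multiplication
# is carry-less shift-and-XOR followed by reduction via x^3 = 1 (and x^4 = x).
def add(p, q):
    return p ^ q                  # coefficients mod 2

def mul(p, q):
    r = 0
    for i in range(3):
        if (p >> i) & 1:
            r ^= q << i           # carry-less multiply, degree <= 4 before reduction
    return (r & 0b111) ^ (r >> 3) # fold bits 3..4 back: x^3 -> 1, x^4 -> x

x = 0b010
print(mul(mul(x, x), x))          # x * x * x = 1, prints 1
print(mul(0b011, 0b011))          # (1 + x)^2 = 1 + x^2 over GF(2), prints 5 = 0b101
```
Note that $x^3-1 = (x+1)(x^2+x+1)$ over $\mbox{GF}(2)$, so this quotient ring has zero divisors and is not a field.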
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |